West Texas A&M University

SSR Addendum Standard 5

STANDARD 5: Provider Quality, Continuous Improvement and Capacity

 

  1. Holistic summary of preliminary findings

 

a. Narrative summary of preliminary findings

 

(FFR, p. 19, bottom of the page)

The LARS has been in operation since 2013 and the EPP indicates it is currently creating its own additional proprietary system with expected implementation in the fall 2016.

Response and Update:

In early Spring 2016, the EPP engaged in extensive collaboration and discussion with our department stakeholders and a WTAMU Instructional Technology (IT) program developer. Following university protocol and all required procedures, the EPP submitted an extensive Creative Request to the IT Department of West Texas A&M University and sought university approval for the project.

Working in conjunction with the primary WTAMU IT programmer, the EPP sought to engage the IT Department in the creation of an additional proprietary technology supplement for the EPP. Design elements, expectations, and specific requirements were included in the Creative Request for the new proprietary technology supplement. Our request included a timeline for development, for training of personnel and stakeholders, and for system implementation in August 2016 at the start of the fall semester.

The approval process took most of summer 2016. The university granted full approval for our project in late August but informed the EPP that the programmers’ existing workflow would take precedence and our project would be completed at a later date. We are still awaiting an update from IT but anticipate delivery of the proprietary technology supplement sometime in Fall 2016, with implementation in the following semester, Spring 2017.

The Creative Request, design elements and EPP expectations of what the technology supplement will provide to our stakeholders and school partners, the granted approval notice, and communications from the IT Department are provided as an SSR Addendum Exhibit.

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

(FFR, p. 20, paragraph 1)

Data are provided for three consecutive cycles for most, but not all, assessments and show a range of measures. The state of Texas requires that all candidate data from admission through recommendation for certification be housed in the Office of Teacher Preparation and Advising in individual folders. This ensures data is disaggregated by candidate. No evidence was found of data disaggregated by program.

Response:

The EPP has provided disaggregated program data for Elementary Education, Grades 4-8, Secondary Education, and Special Education in both traditional and alternative certification routes for initial certification in the SSR Addendum and SSR Addendum Exhibits. Thank you.

[See Addendum Exhibit (AE7) Specialty Licensure/Certification Data].

[See Addendum Exhibit (AE9) ASEP Reports].

[See Addendum Exhibit (AE12) PPR Exam Results].

[See Addendum Exhibit (AE13) LBB Certification Reports].

[See Addendum Exhibit (AE26) Revised Program Data].

(FFR, p. 20, paragraph 2)

The EPP indicates it gathers relevant, verifiable, representative, cumulative and actionable measures in an ongoing assessment cycle examining two of the EPP’s Program Education Outcomes (PEO) per year in all programs requiring three years before all six outcomes are assessed.

Response and Clarification:

For clarification, university protocol requirements direct each degree-offering program to examine at least two candidate learning outcomes in each annual assessment cycle. In 2013, WTAMU used the Assessment Reporting System (ARS) as the vehicle for departmental reporting of operational effectiveness for each program across the university, as those protocols required, and each EPP program submitted assessment reports on at least two outcomes. However, ARS was not the only method or vehicle EPP programs used to assess learning outcomes; it was used to satisfy the university’s predetermined protocols.

Additionally, the EPP exceeded the university’s predetermined protocols by assessing the PEOs programmatically in a variety of ways that have been discussed throughout the SSR Addendum.

Several EPP programs worked in tandem to assess the Program Educational Outcomes (PEOs) as candidate learning outcomes through KEI Assignments, reflection writings, and candidate projects and/or presentations during coursework progression in the EPP. Some programs, such as Reading, elected to use the PEO rubric to evaluate candidate progress and development. Because their data were straightforward to interpret and report, the EPP chose to present those data in the SSR. The Reading program piloted the PEO rubric, but it was not the only program that assessed the Program Educational Outcomes.

With a change of leadership in the WTAMU Assessment Office, the newly appointed Assistant Vice President of the Office of Learning Assessment, Dr. Blake Decker, refined and revised the ARS system into the Learning Assessment Reporting System (LARS).

Working with Dr. Decker, the EPP continued to meet the university’s annual assessment protocols for operational effectiveness of all EPP programs in 2014 and 2015.

For continuous improvement in 2016-2017, the EPP has modified its internal assessment protocols and instructed program faculty to continue assessing outcomes and PEOs. To meet university protocols for LARS, EPP programs will assess a minimum of three outcomes in the 2017 annual LARS assessment cycle. If no other assessment vehicle were used, PEOs would now be assessed on a two-year cycle; however, additional programs are currently assessing outcomes with the PEO rubric, and the EPP is monitoring their progress.

Additional PEO and CEI data for Spring 2015 as requested in the FFR have been provided in the SSR Addendum and Addendum Exhibits.

[See Addendum Exhibit (AE42) PEO and CEI Data, Spring 2015].

(FFR, p. 20, paragraph 3)

Though the EPP writes of having completer impact measures it provides limited evidence substantiating that claim as there is, for example, only one cycle of employer satisfaction.

Response:

As the EPP has explained previously in the SSR, the Texas Education Agency had posted only the 2012-2013 Principal Survey results on its website prior to our submission of the SSR. After repeated requests statewide, TEA released Excel spreadsheets of raw data for the 2013-2014 and 2014-2015 Principal Surveys. The EPP disaggregated these data and formatted them in a manner similar to the 2012-2013 release.
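To illustrate the kind of disaggregation described above, the following Python (pandas) sketch shows one way a raw survey spreadsheet could be filtered to completed surveys and summarized by institution and section. It is a minimal illustration only; the column names ("EPP Name", "Survey Status") and the item groupings are hypothetical and do not reflect the actual layout of the TEA files.

import pandas as pd

def summarize_principal_survey(path):
    """Summarize a raw principal-survey spreadsheet by EPP and section.

    Illustrative only: 'Survey Status', 'EPP Name', and the item columns
    are hypothetical stand-ins for the real TEA spreadsheet headers.
    """
    df = pd.read_excel(path)

    # Keep only fully completed surveys, as in the EPP's analysis.
    completed = df[df["Survey Status"] == "Complete"]

    # Hypothetical mapping of survey sections to their item columns.
    sections = {
        "II: Classroom Environment": ["Q5", "Q6", "Q7"],
        "III: Instruction": ["Q8", "Q9", "Q10"],
    }

    rows = []
    for epp, group in completed.groupby("EPP Name"):
        for section, items in sections.items():
            rows.append({
                "EPP": epp,
                "Section": section,
                "Completed N": len(group),
                "Mean rating": round(group[items].mean(axis=1).mean(), 2),
            })
    return pd.DataFrame(rows)

# Example usage (the file name is illustrative):
# print(summarize_principal_survey("principal_survey_2014_2015.xlsx"))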

For the 2013-2014 and 2014-2015 Principal Surveys, the EPP analyzed completed surveys only, reporting results for West Texas A&M University, for the state as a whole, and for a comparison institution, Tarleton State University.

As previously reported for 2013-2014, the total number of Surveys that TEA sent to principals included:

  • Statewide N=34,944
  • West Texas A&M University N=268
  • Tarleton State University N=275

 

TEA Surveys that were sent to principals in 2014-2015 included:

  • Statewide N=17,495
  • West Texas A&M University N=253
  • Tarleton State University N=183

 

Analysis was conducted on completed surveys only for both 2013-2014 and 2014-2015. For 2013-2014, completed surveys were Statewide N=17,795, WTAMU N=219, and Tarleton N=180. Completed surveys for 2014-2015 were Statewide N=17,124, WTAMU N=243, and Tarleton N=176. Sections evaluated on the surveys included:

  • Section II: Classroom Environment;
  • Section III: Instruction;
  • Section IV: Students with Disabilities;
  • Section V: English Language Learners;
  • Section VI: Technology Integration;
  • Section VII: Use of Technology with Data;
  • Section VIII: Overall Evaluation of the Educator Preparation Program;
  • Section IX: Teacher Effectiveness and Student Achievement.

 

In 2013-2014, principal ratings of these eight sections for WTAMU ranged from 3.21 to 3.50; statewide scores ranged from 3.15 to 3.43; and Tarleton scores ranged from 3.17 to 3.46.

In 2014-2015, identical questions and sections on the survey for WTAMU ranged from 3.12 to 3.55; statewide scores ranged from 3.11 to 3.39; and Tarleton scores ranged from 3.26 to 3.55.

In Section IX, Teacher Effectiveness and Student Achievement, the survey poses Question 40 as: How would you rate this teacher’s influence on student achievement? (A 10-point scale was used).

  • The EPP average score for WTAMU was 7.43 with a Standard Deviation of 1.50.
  • The Statewide average score was 7.24 with a Standard Deviation of 1.59.
  • Tarleton had a comparison EPP average score of 7.27 with a comparison Standard Deviation of 1.61.

 

For 2013-2014, the EPP average score of 7.43, given by principals of beginning teachers from WTAMU, indicates a favorable rating overall. For the same Section IX Question 40 in 2014-2015, the EPP’s (WTAMU) average score was 7.56 with a Standard Deviation of 1.36; the statewide average was 7.17 with a Standard Deviation of 1.58; and Tarleton had a comparison average of 7.40 with a Standard Deviation of 1.53.

As previously reported in the SSR, the recommended performance cut score for 2012-2013 was a weighted percentage of 67%. Based upon these survey data, WTAMU’s weighted percentage was 75.4%, which met Standard 2 of ASEP.

Upon completion of the analysis, a comparison of the 2013-2014 findings with the 2012-2013 survey report showed the newer results to be roughly one point higher than before. For example, the WTAMU average for Question 4 was 2.15 in 2012-2013 and 3.25 in 2013-2014, and similar differences appeared throughout the survey. A phone call was placed to Michael Vriesenga at the Texas Education Agency. His understanding was that a 0-3 point scale was used for the 2012-2013 school year, whereas a 1-4 scale was used in subsequent years; that shift alone raises every average by one point, which accounts for the difference.

Principals of beginning teachers, our completers, are the best evaluators of the impact of those teachers on P-12 student learning and development. The TEA Principal Surveys from 2012-2015 demonstrate that our completers have a verifiable impact on student learning.

Please see the EPP’s previous responses in the SSR Addendum on pages 110 to 112 and pages 120 to 122. Thank you.

[See Addendum Exhibit (AE16) Principal Survey Reports (2013-2015)].

(FFR, p. 20, paragraph 4)
The EPP indicates that consequential validity is supported through the validation of assessment items by program faculty as experts in their fields. Predictive validity is ensured, according to the EPP, but there is no description of a process to ensure inter-rater reliability.

Response:

As previously discussed by the EPP in the SSR Addendum, surveys distributed by the Texas Education Agency to school districts statewide are identical instruments used by TEA each year to evaluate EPP programs, the effectiveness of new teachers, and the impact teachers have on P-12 student learning and development.

Through the use of these surveys and PDAS/T-TESS appraisal forms, TEA follows Principles of Assessment and supports only reliable assessment instruments and procedures. TEA uses only assessment procedures and instruments that have been demonstrated to be valid for the specific purpose for which they are being used. In collaboration with statewide partners of teachers, parents, and administrators, TEA has designed assessment survey tools that are appropriate for the target population.

In regard to the validity and reliability of these TEA instruments, the EPP has requested additional information from Dr. Tim Miller, Director of Educator Preparation, Testing, Program Accountability, and Program Management of the Texas Education Agency (TEA). We anticipate his response to be forthcoming.

Additionally, as previously stated by the EPP in the SSR Addendum regarding content validity and inter-rater reliability of EPP-developed instruments, the EPP is currently taking a three-fold approach: the first is to assemble an unbiased Validity and Reliability Committee; the second is to engage professional colleagues from other colleges within our university; and the third is to engage education faculty from another state, unknown to the EPP, to undertake validity and reliability studies in partnership with our university.

For mutual benefit, the out-of-state Department of Education dean and faculty have requested copies of our Syllabi Analyses in exchange. Each group will use the EPP-developed PEO and CEI instruments to assess samples of candidates’ KEI Assignment submissions. If the studies achieve inter-rater agreement of 80% or higher when identical samples are scored by two or more raters across the three groups, the EPP can reasonably conclude that our instruments are valid and reliable.
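As a minimal sketch of the 80% agreement check described above (the rubric levels and sample scores below are invented for illustration), the calculation amounts to the share of identical ratings assigned by two raters to the same set of artifacts:

def percent_agreement(rater_a, rater_b):
    """Share of artifacts on which two raters assign the same rubric level."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must score the same set of artifacts.")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two raters scoring ten sample KEI Assignments on a four-level PEO rubric
# (all scores are hypothetical).
rater_a = [4, 3, 3, 2, 4, 3, 2, 3, 4, 3]
rater_b = [4, 3, 2, 2, 4, 3, 2, 3, 4, 4]

agreement = percent_agreement(rater_a, rater_b)
print(f"Agreement: {agreement:.0%}")               # Agreement: 80%
print("Meets 80% threshold:", agreement >= 0.80)   # True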

Test validity refers to the characteristic a test measures and how well the test measures that characteristic. The particular job of teaching for which a test is selected should be very similar to the job for which the test was originally developed. Through job analysis of teaching, a systematic process for identifying the tasks, duties, responsibilities, and working conditions associated with teaching, as well as the knowledge, skills, abilities, and other characteristics required to perform that job, the EPP will continue to improve and measure EPP-developed assessment tools that predict the success of our teacher candidates in the classroom.

The EPP’s plan for these validation and reliability studies is to examine the technical properties of the EPP-developed instruments, the Program Educational Outcomes (PEO) rubric and the Candidate Evaluation Instrument (CEI) for Ethical and Professional Dispositions, which are used to assess our candidates over time through their coursework, field, and clinical experiences.

The EPP’s methods for conducting validation studies include an examination of the Uniform Assessment Guidelines and consideration of the following three methods of conducting validation studies within the EPP. The Guidelines describe conditions under which each type of validation strategy is appropriate; they do not express a preference for any one strategy to demonstrate the job-relatedness of a test.

  • Criterion-related validation requires demonstration of a correlation or other statistical relationship between test performance and job performance. In other words, individuals who score high on the test tend to perform better on the job than those who score low on the test. If the criterion is obtained at the same time the test is given, it is called concurrent validity; if the criterion is obtained at a later time, it is called predictive validity (a minimal illustration follows this list).
  • Content-related validation requires a demonstration that the content of the test represents important job-related behaviors. In other words, test items should be relevant to and measure directly important requirements and qualifications for the job of teaching.
  • Construct-related validation requires a demonstration that the test measures the construct or characteristic it claims to measure, and that this characteristic is important to successful performance on the job.
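As a hedged illustration of the criterion-related approach referenced above (the scores are invented, and the first-year appraisal rating stands in for whatever job-performance criterion the EPP ultimately adopts), a predictive validity check reduces to correlating candidates’ rubric scores with a criterion collected later:

from statistics import correlation  # requires Python 3.10+

# Hypothetical data: EPP rubric scores at program exit (predictor) and
# first-year appraisal ratings gathered later (criterion).
rubric_scores    = [2.8, 3.1, 3.4, 2.5, 3.8, 3.0, 3.6, 2.9]
appraisal_scores = [2.6, 3.0, 3.5, 2.4, 3.9, 3.2, 3.4, 2.7]

r = correlation(rubric_scores, appraisal_scores)
print(f"Pearson r between rubric scores and later appraisals: {r:.2f}")

# Because the criterion is obtained after the test, this is predictive
# (rather than concurrent) validation; a sizable positive r would support
# the claim that the rubric predicts classroom performance.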

To the degree that our rubrics and tests demonstrate these two technical properties, reliability and validity, the EPP will be able to ensure inter-rater reliability.

Please see the EPP’s previous response in the SSR Addendum on pages 131 to 134. Thank you.

[See Addendum Exhibit (AE39) Validity and Reliability Studies].

(FFR, p. 20, bottom of the page and p. 21, top of page)

c. Evidence that is inconsistent with meeting the standard.

 

Response:

The EPP has previously responded to each of these prompts within the SSR Addendum. In response, Addendum Exhibits have been delineated for each prompt in brackets. Thank you.

  1. The assessment cycle requires three years to analyze all of the six PEOs.

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

 

  2. Empirical evidence that interpretations of data are consistent and valid was not found.

[See Addendum Exhibit (AE39) Validity and Reliability Studies].

 

  3. Data is not disaggregated by program.

[See Addendum Exhibit (AE26) Revised Program Data].

 

  4. Process for establishing inter-rater reliability was not evident.

[See Addendum Exhibit (AE39) Validity and Reliability Studies].

 

  2. List of onsite tasks to be completed. Use the following three prompts for each task.

Standard 5 Task 1

(FFR, p. 21, top and middle of page)

 

Response:

The EPP has previously responded to each of these prompts within the SSR Addendum. In response, Addendum Exhibits have been delineated for each prompt in brackets. Thank you.

a. Evidence in need of verification or corroboration

 

  1. Check on status of new proprietary system the EPP indicates will be operational in fall 2016.

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

 

  2. Sample candidate data housed in Office of Teacher Preparation and Advising.

[See SSR Exhibit 2.3.2. Samples of Candidate Individual Folder Content Evidence].

 

b. Excerpt from SSR to be clarified or confirmed

  1. Exhibit 5.2.1 indicates ‘Through agreements among our multiple raters of the same candidate’s performance over time with stable and consistent ratings, the EPP ensures the reliability and internal consistency of our assessment measures.’ How does the EPP ensure inter-rater reliability on a single instantiation of the instrument over time?

[See Addendum Exhibit (AE39) Validity and Reliability Studies].

 

  2. How is predictive validity of assessments ensured?

[See Addendum Exhibit (AE39) Validity and Reliability Studies].

 

  3. How does the EPP ‘pose[s] questions’ to all programs at all levels as articulated in Exhibit 5.3.1? Is this separate from the assessment cycle that analyzes two PEOs per cycle?

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

 

c. Questions for EPP concerning additional evidence, data, and/or interviews, including follow up on response to 1.c.

Response:

The EPP has previously responded to each of these prompts within the SSR Addendum. In response, Addendum Exhibits have been delineated for each prompt in brackets. Some of the following statements required an additional response. Thank you.

  1. In supplementing the LARS, what specifically does the EPP anticipate the new supplemental technology assessment system will offer?

 

Response:

 

The specific design elements and expectations for the new supplemental technology assessment system are detailed in the Creative Request and in the SSR Addendum Exhibits.

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

 

  2. What evidence is available demonstrating the EPP is monitoring its operational effectiveness?

Response:

The required protocols and procedures of the university, the LARS, and the WTAMU Assessment Cycle are provided in the SSR Addendum and Addendum Exhibits. Thank you.

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

[See Addendum Exhibit (AE81) Faculty Handbook].

 

3. Preliminary recommendations for new areas for improvement and/or stipulations including a rationale for each

Areas for Improvement (AFIs)

(FFR, p. 21, bottom of page)

Area for Improvement: The EPP does not use evidence to evaluate its operational effectiveness.

Rationale: The EPP does not provide evidence/data from a coherent set of multiple measures to inform, modify, and evaluate its operational effectiveness.

 

Response:

As a research-, standards-, and evidence-based educator preparation provider, the EPP extensively uses evidence for the ongoing evaluation of our operational effectiveness. The EPP has provided a coherent set of multiple measures to inform, modify, and evaluate our operational effectiveness in the SSR Addendum and Addendum Exhibits.

Based upon multiple sources of evidence, the EPP makes data-informed decisions concerning continuous improvement: refining course offerings, developing new courses and revising existing ones, adjusting curriculum and coursework in light of assessment results, hiring needed faculty, implementing state-mandated policy changes, evaluating the strengths and weaknesses of each program, and determining how our programs are achieving our mission of preparing educators who are confident, skilled, and reflective professionals.

The EPP began to make programmatic changes and improvements in earnest in 2013 upon the receipt of the TEA Audit Report of our Early Childhood Program. Curricular changes and other documentation are housed in the EPP Program Notebook and will be available onsite.

Since 2013, using data in an evidence-based approach for continuous improvement, the EPP has:

  • Changed leadership with a new Department Head and appointed a new Director of Accreditation;
  • Developed a culture of evidence within the EPP;
  • Analyzed ASEP and LBB certification data annually;
  • Effected program changes in the Early Childhood program resulting in increased scores on the TExES Content Exam;
  • Increased the selectivity standards for admission with a required 2.75 GPA that exceeded the state’s requirement of 2.50 GPA;
  • Required all candidates to maintain a 2.75 GPA throughout the progression of the program;
  • Developed Program Educational Outcomes (PEOs) and Ethical and Professional Dispositions of Candidates;
  • Created a new ADA-compliant syllabi template aligned with international, national, state, and local standards to create consistency and increase efficacy;
  • Moved the 40 hours of field observations to Methods courses;
  • Changed EPP policy to require teacher candidates to pass both the TExES Content and TExES PPR state certification exams prior to clinical teaching;
  • Provided an Intervention Specialist for those candidates who needed additional support in passing the exams;
  • Hired new program faculty;
  • Merged the Traditional and Alternative Certification routes into one Office of Teacher Preparation and Advising;
  • Hired a new Director of the Office of Teacher Preparation and Advising;
  • Developed the August Experience for all teacher candidates;
  • Provided seminars for clinical teachers in Diversity/Poverty, Technology, and Mental Health;
  • Increased student/clinical teaching to thirteen weeks; and
  • Overall, aligned standards and heightened expectations of all candidates throughout the EPP.

 

These multiple measures used to inform, modify, and evaluate our operational effectiveness demonstrate that the EPP’s modus operandi is to analyze evidence in an ongoing evaluation of our operational effectiveness for continuous improvement.

 

Further evidence is provided in the SSR Addendum and Addendum Exhibits. Thank you.

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

Area for Improvement: No analysis of specialty licensure areas data is provided.

Rationale: The EPP does not provide data from all specialty licensure data areas and does not provide indication of analysis of such data.

Response:

As previously discussed, the EPP ensures that candidates in all programs and at all levels demonstrate an understanding of the ten InTASC Standards; disaggregated data and the EPP’s analyses of those data by specialty licensure/certification area are provided in the SSR Addendum and Addendum Exhibits (AEs).

As an example, the EPP interviewed the faculty of all initial certification programs during the Spring, Summer I, and Summer II semesters of 2016. Of the fifteen interview questions, the data from the questions that address the ten InTASC Standards demonstrate that candidates not only receive instruction in the four categories and ten InTASC Standards in all EPP programs, but also achieve InTASC outcomes. The questions included: “How does your program ensure that candidates demonstrate an understanding of the ten InTASC standards?” and “What data or evidence do you have to support this?”

For faculty responses in Elementary Education, Grades 4-8, Secondary Education, Special Education, and the Alternative Certification Programs, the EPP used a qualitative methodology to analyze the collected interview data. The emerging categories showed alignment with the InTASC Standards in all syllabi in all programs, in KEI Assignments and Capstone Projects, and in the 2.75 GPA requirement that candidates must maintain in all courses in all programs. In addition to syllabi alignment in Elementary Education and Grades 4-8, KEI Assignments provided evidence of learning outcomes tied to the InTASC Standards. Examples of these KEI Assignments include:

Elementary Education (Early Childhood EC-6, Reading, and 4-8):

  • EDEC 2383 Dispositions and Philosophy Paper/Presentation;
  • EDEC 3384 Lesson Plans;
  • EDEC 4385 Literacy Backpacks;
  • EDRD 3301 Author Illustrator Presentation;
  • EDRD 3302 Balanced Literacy Project and Paper;
  • EDRD 3304 Structured Literacy Project and Paper; and
  • EDRD 4302 Diagnosis and Remediation Projects.

For the KEI Assignment or capstone project in EDRD 4302, the Reading Evaluation Report, candidates locate a child (grades K-12) and administer three assessments (an IRI, a DRA, and a Running Record) with that child. Candidates are required to reflect on how they conducted each assessment and to collect hands-on activities for their student based on the observation. Upon completion, candidates present their findings with PowerPoint slides and a demonstration of the best activity. At the end of each semester, candidates completed the self-designed survey, and their responses were thoroughly analyzed to improve the project.

Candidates provided constructive feedback, and the projects were revised accordingly each semester. The data demonstrated substantial increases in almost all areas across the Fall 2015, Spring 2016, and Summer 2016 semesters. In Fall 2015, approximately 50-60% of candidates scored “Distinguished” or “Proficient,” while in Spring and Summer 2016, approximately 70-90% of candidates scored “Distinguished” or “Proficient” on the three test administrations. The alignment of EDRD 4302 and its capstone project with the InTASC Standards (as in all courses and programs of the EPP), together with the resulting project data, provides evidence that candidates not only receive quality instruction but also achieve the learning outcomes of the InTASC Standards.

Syllabi alignment in Secondary Education and the MAT/ACP, together with KEI Assignments, demonstrated evidence of InTASC Standards instruction and learning outcomes for secondary candidates. Examples of these KEI Assignments include:

 

Secondary Education and MAT/ACP:

  • EDSE 4320 Secondary Methods I Reflection Writings;
  • EDSE 4330 Secondary Methods II Teacher’s Notebook (requirements that address all ten InTASC Standards);
  • EDSE 6333 Secondary Methods Diversity/Micro Cultures Research Assignments;
  • EDSE 6311 Psychological Foundations of Education for MAT/ACP Diversity/Micro Cultures Research assignments;
  • Research studies, including the work of Marzano, Dean, and Lemov’s Teach Like a Champion, and the thirteen TExES Competencies for Effective Teaching, which include lesson planning, assessment, technology, and working with low-socioeconomic students, ELLs, and students with disabilities; and
  • Professional behaviors and the Texas Code of Ethics.

 

In Special Education, syllabi alignment with InTASC Standards and KEI Assignments demonstrate evidence of instruction and candidate learning outcomes. Some examples include:

Special Education:

  • EDSP 4369 Special Education Methods;
  • EDSP 4358 Classroom Management of Exceptional Learners;
  • EPSY 3350 Characteristics of Exceptional Learners;
  • KEI Assignments in all Special Education courses;
  • the Center for Learning Disabilities Parent/Community Meetings;
  • Special Guest Speakers for the Center for Learning Disabilities; and
  • Fall Conferences.

Based upon EPP data, candidates maintain a 2.75 GPA or higher in all education courses to remain in the program. The GPA results represent data from the following:

GPA Data Includes:

  • End-of-Course Grades
  • KEI Assignments
  • Methods Field Observation Evaluations
  • PDAS Evaluations
  • Clinical Teacher Exit Surveys
  • TExES Content Exam Results
  • TExES PPR Exam Results

 

These data provide evidence that candidates achieve learning outcomes in InTASC Standards 1, 2, 3, and 4. Because the TExES Content and TExES PPR certification exams are based upon the state competencies for Texas educators, which are aligned with the InTASC Standards, candidates who pass these exams have mastered the thirteen state teacher competencies and have achieved the learning outcomes of the InTASC Standards.

 

Please also see the EPP’s previous response in the SSR Addendum on pages 27-29. Thank you.

[See Addendum Exhibit (AE7) Specialty Licensure/Certification Data].

[See Addendum Exhibit (AE9) ASEP Reports].

[See Addendum Exhibit (AE11) Routes for Initial Certification].

[See Addendum Exhibit (AE12) PPR Exam Results].

[See Addendum Exhibit (AE13) LBB Certification Reports].

[See Addendum Exhibit (AE15) Annual Performance Reports].

[See Addendum Exhibit (AE16) Principal Survey Reports (2013-2015)].

[See Addendum Exhibit (AE17) PDAS Evaluation Data].

[See Addendum Exhibit (AE18) Field Observation Evaluations].

[See Addendum Exhibit (AE21) PEO Additional Data].

[See Addendum Exhibit (AE24) Methods Field Experience Assessment].

[See Addendum Exhibit (AE25) Methods Field Experience Assessment Rubric].

[See Addendum Exhibit (AE26) Revised Program Data].

[See Addendum Exhibit (AE30) PPR and TExES Competencies Alignment].

[See Addendum Exhibit (AE34) Faculty Interview Questions Data].

[See Addendum Exhibit (AE36) Reading Evaluation Reports].

[See Addendum Exhibit (AE42) PEO and CEI Data, Spring 2015].

[See Addendum Exhibit (AE45) Tracking Student Performance].

[See Addendum Exhibit (AE46) Pre- and Post-Practice Test Grades].

[See Addendum Exhibit (AE47) Student/Clinical Teachers Evaluations].

[See Addendum Exhibit (AE72) ACT and SAT Data for Admissions].

[See Addendum Exhibit for Standard 1 (AE1.1.3) Completers Apply Content/Pedagogical Knowledge in Outcome Assessments].

(FFR, p. 21, bottom of page and page 22, top of page)

Area for Improvement: The EPP provides limited descriptions of content validity or inter-rater reliability.

Rationale: Content validity was spoken to in limited terms and process for establishing inter-rater reliability was not evident.

Response:

As previously discussed by the EPP in the SSR Addendum, surveys distributed by the Texas Education Agency to school districts statewide are identical instruments used by TEA each year to evaluate EPP programs, the effectiveness of new teachers, and the impact teachers have on P-12 student learning and development.

Through the use of these surveys and PDAS/T-TESS appraisal forms, TEA follows Principles of Assessment and supports only reliable assessment instruments and procedures. TEA uses only assessment procedures and instruments that have been demonstrated to be valid for the specific purpose for which they are being used. In collaboration with statewide partners of teachers, parents, and administrators, TEA has designed assessment survey tools that are appropriate for the target population.

In regard to the validity and reliability of these TEA instruments, the EPP has requested additional information from Dr. Tim Miller, Director of Educator Preparation, Testing, Program Accountability, and Program Management of the Texas Education Agency (TEA). We anticipate his response to be forthcoming.

Additionally, as previously stated by the EPP in the SSR Addendum regarding content validity and inter-rater reliability of EPP-developed instruments, the EPP is currently taking a three-fold approach: the first is to assemble an unbiased Validity and Reliability Committee; the second is to engage professional colleagues from other colleges within our university; and the third is to engage education faculty from another state, unknown to the EPP, to undertake validity and reliability studies in partnership with our university.

For mutual benefit, the out-of-state Department of Education dean and faculty have requested copies of our Syllabi Analyses in exchange. Each group will use the EPP-developed PEO and CEI instruments to assess samples of candidates’ KEI Assignment submissions. If the studies achieve inter-rater agreement of 80% or higher when identical samples are scored by two or more raters across the three groups, the EPP can reasonably conclude that our instruments are valid and reliable.

Test validity refers to the characteristic a test measures and how well the test measures that characteristic. The particular job of teaching for which a test is selected should be very similar to the job for which the test was originally developed. Through job analysis of teaching, a systematic process for identifying the tasks, duties, responsibilities, and working conditions associated with teaching, as well as the knowledge, skills, abilities, and other characteristics required to perform that job, the EPP will continue to improve and measure EPP-developed assessment tools that predict the success of our teacher candidates in the classroom.

The EPP’s plan for these validation and reliability studies is to examine the technical properties of the EPP-developed instruments, the Program Educational Outcomes (PEO) rubric and the Candidate Evaluation Instrument (CEI) for Ethical and Professional Dispositions, which are used to assess our candidates over time through their coursework, field, and clinical experiences.

The EPP’s methods for conducting validation studies include an examination of the Uniform Assessment Guidelines and consideration of the following three methods of conducting validation studies within the EPP. The Guidelines describe conditions under which each type of validation strategy is appropriate; they do not express a preference for any one strategy to demonstrate the job-relatedness of a test.

 

  • Criterion-related validation requires demonstration of a correlation or other statistical relationship between test performance and job performance. In other words, individuals who score high on the test tend to perform better on the job than those who score low on the test. If the criterion is obtained at the same time the test is given, it is called concurrent validity; if the criterion is obtained at a later time, it is called predictive validity.
  • Content-related validation requires a demonstration that the content of the test represents important job-related behaviors. In other words, test items should be relevant to and measure directly important requirements and qualifications for the job of teaching.
  • Construct-related validation requires a demonstration that the test measures the construct or characteristic it claims to measure, and that this characteristic is important to successful performance on the job.

To the degree that our rubrics and tests demonstrate these two technical properties, reliability and validity, the EPP will be able to ensure inter-rater reliability.

As previously stated, if the validity and reliability studies achieve inter-rater agreement of 80% or higher when identical samples are scored by two or more raters across the three groups, the EPP can reasonably conclude that our instruments are valid and reliable.

Please see the EPP’s previous response in the SSR Addendum on pages 131 to 134 and pages 138 to 140. Thank you.

[See Addendum Exhibit (AE39) Validity and Reliability Studies].

Area for Improvement: There is little evidence that the EPP uses assessment results to improve program elements and processes.

Rationale: No process for assessing program elements was provided.

Response:

As the EPP previously clarified in a response in the SSR Addendum on pages 136 to 137, University protocol requirements have directed each degree-offering program to examine learning outcomes for each annual assessment cycle in each department across the University.

In 2013, WTAMU used the Assessment Reporting System (ARS) as the vehicle for departmental reporting of operational effectiveness for each program across the university, as protocols required, and each EPP program submitted assessment reports on at least two outcomes (PEOs). However, ARS was not the only method or vehicle EPP programs used to assess learning outcomes; it was used to satisfy the university’s predetermined protocols.

Each program within the EPP gathers data annually and collaboratively analyzes those data to determine strengths and weaknesses within the program. Program faculty members contribute course data from every course they teach each year, including multiple measures such as KEI Assignments, Reflection Writings, and candidate projects/presentations, as well as the ASEP TExES Content and PPR exam results relevant to each program.

Based upon this evidence, the EPP makes data-informed decisions concerning continuous improvement: refining course offerings, developing new courses and revising existing ones, adjusting curriculum and coursework in light of assessment results, hiring needed faculty, implementing state-mandated policy changes, evaluating the strengths and weaknesses of each program, and determining how our programs are achieving our mission of preparing educators who are confident, skilled, and reflective professionals.

As previously explained in the SSR Addendum, the EPP began to make programmatic changes and improvements in earnest in 2013 upon the receipt of the TEA Audit Report of our Early Childhood Program. Since 2013, using data in an evidence-based approach for continuous improvement, the EPP has:

  • changed leadership with a new Department Head and appointed a new Director of Accreditation;
  • effected program changes in the Early Childhood program resulting in increased scores on the TExES Content Exam;
  • increased the selectivity standards for admission with a required 2.75 GPA that exceeded the state’s requirement of 2.50 GPA;
  • required all candidates to maintain a 2.75 GPA throughout the progression of the program;
  • developed Program Educational Outcomes (PEOs) and Ethical and Professional Dispositions of Candidates;
  • created a new ADA-compliant syllabi template aligned with international, national, state, and local standards to create consistency and increase efficacy;
  • moved the 40 hours of field observations to Methods courses;
  • changed EPP policy to require teacher candidates to pass both the TExES Content and TExES PPR state certification exams prior to clinical teaching;
  • provided an Intervention Specialist for those candidates who needed additional support in passing the exams;
  • hired new program faculty;
  • merged the Traditional and Alternative Certification routes into one Office of Teacher Preparation and Advising;
  • hired a new Director of the Office of Teacher Preparation and Advising;
  • developed the August Experience for all teacher candidates;
  • increased student/clinical teaching to thirteen weeks; and
  • overall, aligned standards and heightened expectations of all candidates throughout the EPP.

Additionally, the EPP exceeded the university’s predetermined protocols by assessing the PEOs programmatically in a variety of ways that have been discussed throughout the SSR Addendum.

As an example, several EPP programs worked in tandem to assess the Program Educational Outcomes (PEOs) as candidate learning outcomes through KEI Assignments, reflection writings, and candidate projects and/or presentations during coursework progression in the EPP. Some programs, such as Reading, elected to use the PEO rubric to evaluate candidate progress and development. Because their data were straightforward to interpret and report, the EPP chose to present those data in the SSR. The Reading program piloted the PEO rubric, but it was not the only program that assessed the Program Educational Outcomes.

With a change of leadership in the WTAMU Assessment Office, the newly appointed Assistant Vice President of the Office of Learning Assessment, Dr. Blake Decker, refined and revised the ARS system into the Learning Assessment Reporting System (LARS). Working with Dr. Decker, the EPP continued to meet the university’s annual assessment protocols for operational effectiveness of all EPP programs in 2014 and 2015.

For continuous improvement in 2016-2017, the EPP has modified its internal assessment protocols and instructed program faculty to continue assessing outcomes and PEOs. To meet university protocols for LARS, EPP programs will assess a minimum of three outcomes in the 2017 annual LARS assessment cycle. If no other assessment vehicle were used, PEOs would now be assessed on a two-year cycle; however, additional programs are currently assessing outcomes with the PEO rubric, and the EPP is monitoring their progress.

In addition to internal assessment protocols, the EPP works closely with Dr. Decker and the Office of Learning Assessment each semester. With Dr. Decker’s help and support, the assessment system for the EPP, and indeed for the entire university, has increased our operational effectiveness through the examination of valid and reliable data and has taught us that closing the loop is mission-critical for continuous improvement.

Additional information from the WTAMU Learning Assessment Office website follows:

Office of Learning Assessment

The mission of the Office of Learning Assessment is to support and assist assessment efforts across the university, particularly those dealing with university-wide learning assessment and accreditation.

Objectives

The Office of Learning Assessment at West Texas A&M University exists to:

  1. Facilitate and support academic program-level assessment of student learning;
  2. Promote institutional effectiveness efforts across the university; and,
  3. Support university accreditation efforts.

Outcomes

Efforts of the Office of Learning Assessment have direct impact on the following:

  1. University assessment representatives being able to employ an acceptable assessment cycle for their particular department or unit;
  2. University Learning Assessment Committee (ULAC) members being able to interpret and explain all components of the university assessment cycle at West Texas A&M University (WTAMU);
  3. ULAC members being able to evaluate a learning assessment cycle; and,
  4. A comprehensive assessment report concerning Core Curriculum (CORE), General Learning Outcomes (GLOs), and Academic Disciplines (DSKs) to be arranged, designed, published, and distributed to WTAMU stakeholders.

Learning Assessment Across the University

In order to support the University's mission, the Office of Learning Assessment coordinates the systematic institutional and program level annual assessments of three major areas of student learning. We refer to these areas as:

  1. Discipline Specific Knowledge (DSK);
  2. University Core Curriculum (Core); and
  3. University General Learning Outcomes (GLOs).

Based upon multiple sources of evidence, the EPP vigorously uses assessment results to improve our program elements and processes on an ongoing basis. The creation and future implementation of the EPP’s technology supplement system will provide additional operational effectiveness, consistency, timeliness, and efficiency in assessing our programs.

The protocols predetermined by the University and established by the EPP provide evidence of how we assess program elements for continuous improvement of the EPP.

Please see additional information provided in the SSR Addendum.

Thank you.

[See Addendum Exhibit (AE73) Learning Assessment Reporting System (LARS)].

[See Addendum Exhibit (AE74) Technology Supplement System Update].

[See Addendum Exhibit (AE81) Faculty Handbook].

[See Addendum Exhibit (AE85) Educator Preparation Program Handbook (2016-2017) Draft Presentation].

Note: The EPP’s Complaint Process is posted in areas throughout the EPP. The Draft Presentation in Addendum Exhibit (AE85) of the Educator Preparation Program Handbook provides additional information about the steps in the complaint process. Addendum Exhibit (AE91) shows an example of the complaint process paperwork for anyone who would like to file a complaint or grievance against the EPP. The Texas Education Agency’s Grievance Process for Educators is housed on the TEA website and includes the state protocols for filing grievances against any EPP in the state. TEA outlines the processes for the resolution of a teacher’s grievance and the steps that must be followed in seeking resolution.

With the EPP’s increased standards, high expectations for both ethical and professional dispositional behaviors, enhanced selectivity requirements, and rigorous field and clinical teaching experiences, one grievance was filed by a candidate against the EPP in 2015 and one was filed in 2016.

Due to the highly confidential nature of these two filings with TEA, the protocols, processes, and final resolutions for both grievances are housed in confidential folders in the Office of Teacher Preparation and Advising. These confidential folders will be available for review onsite.

[Please see Addendum Exhibit (AE91) EPP Complaint Process].

[See also TEA Hearing and Appeals http://tea.texas.gov/index2.aspx?id=2147485744].