West Texas A&M University

SSR Addendum Standard 2

Standard 2: Clinical Partnerships and Practice

(FFR, p. 9, middle of page)

Though the EPP provided evidence of ongoing, mutually beneficial relationships, it was unclear to what extent the P-12 community was involved in developing the criteria for entry into and exit from clinical experiences.

Response:

The Texas Education Agency (TEA) is the accrediting and governing body for all EPPs in the State of Texas. The Texas Constitution [See https://www.sll.texas.gov/law-legislation/texas/constitution/], the rules codified in the Texas Administrative Code (TAC) [See http://www.sos.state.tx.us/tac/], and the statutes passed by the Texas Legislature in the Texas Education Code (TEC) [See http://www.statutes.legis.state.tx.us/?link=ED] determine the protocols that our EPP and our partnering school districts must follow regarding criteria for entry into and exit from the clinical experiences of our teacher candidates.

Our previous Director of the Office of Teacher Preparation and Advising retired in June 2016, and the former Director of the Panhandle Alternative Certification of Educators (PACE) was promoted to Assistant Vice President for Academic Affairs in August 2016. For continuous improvement of the EPP, we have combined the two offices for traditional and alternative certification into one and hired a single Director of Teacher Preparation and Advising to oversee both routes to initial certification.

For Fall 2016, the new Director revised and improved many of the documents and tools that our candidates and stakeholders will be using for reporting, evaluation, and assessment of the growth, progress, and development of our teacher candidates. One of these new tools will document who accesses our online trainings and when. Some of these tools are still in development but will be available during the onsite review.

The extent to which the P-12 community and our partners are involved in developing the criteria for entry into and exit from clinical experiences is shaped by TEA protocols and state statute, and by EPP meetings with principals, the EPP Advisory Council, the Teacher Education Unit (TEU), and mandatory face-to-face meetings with clinical teachers and University Field Supervisors (UFS).

(FFR, p. 10, top of page)

Descriptive feedback was provided through the newly constructed CEI rubric; however, formative and summative feedback provided by the university field supervisors and cooperating teachers to candidates in field and clinical experience appears to be in the form of a checklist organized into five domains. While the criteria on the checklist seem appropriate, the levels of performance are not defined for the candidates (1.1.11).

Response:

For the evaluation of each candidate, the levels of performance (or ratings) on the PDAS include the following categories:

  • Exceeds expectations;
  • Proficient;
  • Below expectations; and
  • Unsatisfactory.

The PDAS Domains and criteria that are scored using these levels of performance during formal observations (or even walkthroughs) include:

Domain I:  Active, successful student participation in the learning process;

Domain II:  Learner-centered instruction;

Domain III: Evaluation and feedback on student progress;

Domain IV: Management of student discipline, instructional strategies, time, and materials;

Domain V:  Indicators 1, 2, and 3 (1-Written communications with students; 2-Verbal and written communications with students; 3-Communications with reluctant students).

The PDAS Framework specifies the Domains and Criteria that are to be used in all decisions regarding the appraisal of a teacher. These research-based teaching behaviors represent quality teaching.

Since the goal of PDAS is to enhance the learning of all students, the “proficient” level is a high standard of performance. Teaching behaviors that have a considerable impact on student learning and that are demonstrated a high percentage of the time and with a high percentage of students (80-89%) are rated “proficient.” Words associated with “proficient” teaching behaviors, or with the rating of “proficient,” are: skillful, experienced, masterful, well advanced, and knowledgeable.

The four performance levels under PDAS (Exceeds Expectations, Proficient, Below Expectations, and Unsatisfactory) are defined in terms of the impact on student learning. In other words, what is the impact on student learning and how often and with how many students does the positive impact on learning occur?

To determine the performance level of candidates, the appraiser may use a variety of tools including the PDAS Framework. When making performance-level decisions, the appraiser first identifies evidence related to the critical attributes of the criteria. Next, the appraiser views the evidence in light of quality and quantity. Quality focuses on the “strength, impact, variety and alignment” of the teaching behavior and how it relates to student success. Quantity relates to the frequency of the teaching behavior and the number of students for whom it resulted in learning. The appraiser has the PDAS Framework, the Scoring Standards and Performance Level Standards sheet, the Scoring Criteria Guide, and the Scoring Continuum available when making performance-level decisions.

For example, in Domain I, the scoring factors of quality are Exceeds Expectations (Great); Proficient (Considerable); Below Expectations (Limited); and Unsatisfactory (Little or None). The criteria evaluated under each scoring factor include:

Strength: thinking at high cognitive levels; depth and complexity; significant content knowledge; making connections within and across disciplines; understanding of unified whole; connecting learning to work and life applications.

Impact: student success; effective formative and summative assessment; multiple forms of assessments; data-driven decision making.

Variety: varied needs and characteristics of learners; differentiated instruction; range of strategies and support services.

Alignment: TEKS and district curriculum alignment; assessment data; targeted instruction.

Additionally, in Domain I, the scoring standards of quantity for criteria evaluated by Frequency/Percentage of Time/Repeated Evidence are:

Exceeds Expectations (All/Almost All, 90-100%), Consistently: uniformly; seen from beginning to end; highly predictable; seamless routines.

Proficient (Most, 80-89%), Generally: common practice; predictable; typical; prevalent; as a rule.

Below Expectations (Some, 50-79%), Occasionally: sporadic; random; moderately; more often than not; irregular.

Unsatisfactory (Less than Half, 49% or less), Rarely: infrequent; nonexistent; not attempted; minimal; hardly ever.

The Scoring Standards and Performance Level Standards sheet, the Scoring Continuum, and the Scoring Criteria Guide are tools that an appraiser may use to support the PDAS Framework when making performance-level decisions. The Scoring Standards and Performance Level Standards sheet outlines the process for making performance-level decisions and provides key words associated with each performance level. The Scoring Continuum provides a visual representation of quantity issues. The Scoring Criteria Guide provides descriptions of quality and quantity for each of the criteria, as well as descriptors for each of the performance levels. Read horizontally, the descriptors differentiate between the four performance levels. Read vertically, the descriptors indicate which teacher and student behaviors are associated with an individual performance level. An appraiser may use all or some of the descriptors in making performance-level decisions. The impact of one descriptor may be so significant as to indicate the performance level, or the appraiser may weigh evidence from several descriptors to determine the performance level.

The EPP does not believe the PDAS evaluation rubric is merely a checklist; it is a research-based instrument extensively field tested by TEA and implemented in all school districts throughout the state since 1997 for the evaluation of in-service teachers. Our teacher candidates receive training in the evaluation instruments that will be used to assess their progress and growth as clinical teachers during the mandatory orientations held prior to their field and clinical teaching experiences.

Please also see the EPP’s previous response in the SSR Addendum on pages 23-24. Thank you.

[See Addendum Exhibit (AE41) PDAS Performance Levels].

Professional Development Appraisal System (PDAS) Links

[See PDAS Teacher Manual].

[See http://www4.esc13.net/uploads/pdas/docs/LearnerCenteredSchools.pdf].

[See the Professional Development Appraisal System Framework for Texas at http://www4.esc13.net/pdas/] for the PDAS Framework, Appraisal Form, and an explanation of the state criteria used for scoring. Hard copies will be available for the onsite visit.

[See Addendum Exhibit (AE22) PDAS Appraisal Instrument] for CAEP Standard 2.

[See Teacher Self Report http://www4.esc13.net/uploads/pdas/docs/tsrf.pdf].

Domain Scoring Guides

Please click the links below to access the Scoring Factors and Performance Level Standards for each domain.

Domain I [See http://www.ueatexas.com/pdf/Domain1.pdf].

Domain II [See http://www.ueatexas.com/pdf/Domain2.pdf].

Domain III [See http://www.ueatexas.com/pdf/Domain3.pdf].

Domain IV [See http://www.ueatexas.com/pdf/Domain4.pdf].

Domain V [See http://www.ueatexas.com/pdf/Domain5.pdf].

Domain VI [See http://www.ueatexas.com/pdf/Domain6.pdf].

Domain VII [See http://www.ueatexas.com/pdf/Domain7.pdf].

Domain VIII [See http://www.ueatexas.com/pdf/Domain8.pdf].

Texas Teacher Evaluation and Support System (T-TESS)

Note: In Fall 2016, the state will roll out the new Texas Teacher Evaluation and Support System (T-TESS) to replace the PDAS Appraisal System for evaluating teacher progress and development. All Texas administrators and teachers are in the process of being trained in this new system. Our University Field Supervisors attended a two-day T-TESS training at WTAMU on August 18 and 19, 2016, led by two state-trained, T-TESS-certified faculty members. New evaluation instruments aligned with the state’s T-TESS criteria have been developed and will be implemented in Fall 2016 in the evaluation of our clinical teachers.

[See Addendum Exhibit (AE23) T-TESS Training (Agenda, Sign-in Sheet, and Materials)].

[For more information, see also http://tea.texas.gov/Texas_Educators/Educator_Evaluation_and_Support_System/Texas_Teacher_Evaluation_and_Support_System/].

(FFR, p. 10, paragraph 1)

While the EPP articulated a system to co-construct criteria for the selection of clinical educators and to co-select clinical educators, a system of co-evaluation was unclear.

Response:

Co-evaluation of clinical educators is accomplished through the Professional Development Appraisal System (PDAS) and, beginning in Fall 2016, the Texas Teacher Evaluation and Support System (T-TESS). These assessment documents are used statewide in the evaluation of in-service teachers. Every teacher in the state has received training, has participated in pre- and post-conferences with administrators, and has been assessed through the PDAS framework in each year taught. All educators, administrators and teachers alike, have been trained in the new T-TESS evaluation system, which the state will roll out in Fall 2016 as districts begin implementation.

The EPP has explained in previous responses in the SSR Addendum its system for the co-evaluation of clinical teachers with our partners and stakeholders, including cooperating teachers, principals, University Field Supervisors (UFS), and the Director of Teacher Preparation and Advising. Please see pages 23-24, pages 60-62, and page 67 of the SSR Addendum. Thank you.

(FFR, p. 10, paragraph 1)

The EPP provided a summary of candidates’ evaluations of cooperating teachers, university field supervisors, and the Director of Teacher Preparation and Advising (2.2.5) as well as cooperating teachers’ evaluations of candidates’ performance (1.1.11); however, it was unclear if school-based clinical educators and EPP-based clinical educators are provided opportunities to evaluate one another.

Response and Clarification:

As background information, perhaps it goes without saying that Texas is a big state. The same holds true for the EPP’s geographic service area, the Texas Panhandle. The Texas Panhandle consists of the northernmost twenty-six counties in the state, is bordered by New Mexico to the west and Oklahoma to the north and east, and extends as far south as Lubbock County [See The Handbook of Texas]. Its land area is nearly ten percent of the state’s total and is larger than the State of West Virginia, yet it holds only 1.7 percent of the state’s population. As of the 2010 census, the population density for the region was 16.6 persons per square mile. Of interest, Palo Duro Canyon, which lies east of Canyon where WTAMU is located, is the second largest canyon in the United States and was carved by the Prairie Dog Town Fork of the Red River [See Palo Duro Canyon; and Prairie Dog Town Fork Red River]. Interstate Highway 40 (I-40) passes through Amarillo and the counties of Deaf Smith, Oldham, Potter, Carson, Gray, Donley, and Wheeler. Randall County lies south of Amarillo, which sits in Potter County. These counties contain the school districts we serve.

For clarification, because of the size of the geographic area our EPP serves, requiring co-evaluation between our school-based clinical educators (cooperating teachers) and our EPP-based clinical educators (University Field Supervisors) is not only impractical but also extremely difficult for the EPP to achieve. For many of our partners, there may be only one school-based clinical educator on a campus assigned to our teacher candidates, so there are few if any opportunities for these teachers to co-evaluate one another.

For our EPP-based clinical educators (University Field Supervisors), the school-based clinical educator (cooperating teacher) is usually out of the classroom during the three 45-minute observations of our student/clinical teachers by our University Field Supervisors, so there are no opportunities for the school-based and EPP-based educators to co-evaluate one another.

However, these professional educators do work in tandem to co-evaluate our teacher candidates. The Weekly Progress Reports of Clinical Teachers, the PDAS/T-TESS evaluations, pre- and post-conferences, communications with principals and cooperating teachers, and the exit evaluations provide ample opportunities for the school-based and EPP-based educators to co-evaluate their work together in mentoring our teacher candidates.

[See Addendum Exhibit (AE17) PDAS Evaluation Data].

[See Addendum Exhibit (AE47) Student/Clinical Teacher Exit Evaluations].

[See Addendum Exhibit (AE49) Weekly Progress Reports of Clinical Teachers].

(FFR, p. 10, paragraph 1)

Additionally, the EPP did not indicate how the results of evaluations were shared and with whom the results were shared.

Response:

The results of evaluations are shared with EPP partners and stakeholders (including principals and cooperating teachers) through a variety of ways that include meetings of the EPP Advisory Council, Teacher Education Unit (TEU), and University Field Supervisors.

Evidence of meeting agendas, sign-in sheets, and minutes are housed in the EPP Program Notebook that will be available onsite. Samples of these types of data are provided as an Addendum Exhibit.

[See Addendum Exhibit (AE50) Samples of Meeting Data].

(FFR, p. 10, paragraph 2)

The online training is designed to clarify the roles and responsibilities of the clinical educator. Though these resources are available to clinical educators, it is unknown if the clinical educators use or participate in these opportunities.

Response:

The online trainings presented in the SSR were designed to supplement the mandatory face-to-face meetings with our school partners and stakeholders and to provide convenience for those partners. Orientation meetings are held on campus each semester to review the handbook and evaluation instruments, answer any questions, and discuss criteria for entry into and exit from clinical teaching.

Each of our school-based cooperating teachers receives a Cooperating Teacher Handbook. The expectations of the EPP and of our partners, and the learning outcomes for teacher candidates, are included in the Handbook. Beginning in Fall 2016, cooperating teachers will sign an Acknowledgement Form within the Handbook to confirm that they have received and read it. These documents and online trainings are designed to ensure successful clinical experiences for both our candidates and our partners.

(FFR, p. 10, paragraph 3)

Candidates may participate in additional experiences; however, this was unclear from the SSR.

Response:

Candidates in the EPP are provided multiple additional experiences to work with diverse P-12 students in diverse settings. In Early Childhood EC-6 and Grades 4-8, candidates participate at the Opportunity School, working with teachers and families of young children from poverty. Candidates in Reading, Early Childhood EC-6, and Grades 4-8 have multiple opportunities to work with students at Eastridge Elementary of Amarillo ISD and Lakeside Elementary of Canyon ISD, schools with extremely diverse student populations and low socioeconomic families. Some Middle School and High School candidates participate in after-school programs in diverse settings at Bowie Middle School, Horace Mann Middle School, Caprock High School, and Palo Duro High School. These schools have large percentages of Hispanic, African American, Asian, and low socioeconomic student groups as well as ELLs and special education students.

As previously stated in the SSR Addendum, the Center for Learning Disabilities provides many additional experiences for our Special Education candidates to work with faculty, teachers, parents, students, the community, and families of diverse students with special needs and disabilities.

Many of our MAT/ACP candidates complete their Internships on campuses with diverse student populations, and others participate in clinical teaching just as our traditional candidates do.

The SSR Addendum and Addendum Exhibits (AEs) provide evidence of the additional experiences our candidates are given to work in diverse settings.

[See Addendum Exhibit (AE52) Additional Experiences of Candidates].

(FFR, p. 10, paragraph 3)

While it is probable candidates have the opportunity to work in diverse settings, the EPP did not describe how they ensure every candidate has this experience (2.2.2).

Response:

Texas has a rich history of diverse, multicultural populations throughout the state. Mexico on our southern border, the influx of a large migrant population who ‘follow the sun’ for crop harvests each year, and a refugee population in Amarillo that, at 10% and growing, is the largest percentage in the nation all demonstrate the uniquely diverse school settings the EPP serves. For example, over 65 languages are spoken in Amarillo ISD schools alone, and area districts such as Hereford, Dumas, and Pampa enjoy similar diversity in their smaller school districts. Diverse school settings, with a wide array of multicultural P-12 student populations and high percentages of low socioeconomic groups, are a way of life in our service area.

In the midst of such diversity and poverty, the EPP ensures that all candidates have opportunities to work in diverse settings in Amarillo schools and other area schools through an annual examination of the Texas Academic Performance Reports (TAPR) that are available on the TEA website for all districts and campuses within the state. The TAPR reports provide demographic information on Title I schools with over 50% low socioeconomic students on the Free/Reduced lunch program, the percentages of ELLs and special education students, the years of experience of their teachers, and other important information.
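
As a purely hypothetical illustration of this screening step, the Python sketch below flags campuses whose demographics meet criteria like those described above. The record layout, campus names, and the ELL cutoff are invented for illustration; only the over-50% low socioeconomic (Free/Reduced lunch) criterion comes from the TAPR description, and real TAPR files use TEA’s own column codes.

```python
# Hypothetical sketch of the kind of TAPR screen described above: flag
# campuses whose demographics meet the EPP's diversity criteria. Field names,
# sample values, and the 15% ELL cutoff are invented for illustration.

campuses = [
    {"name": "Campus A", "pct_low_ses": 72.4, "pct_ell": 21.0, "pct_sped": 9.8},
    {"name": "Campus B", "pct_low_ses": 38.1, "pct_ell": 4.2, "pct_sped": 8.5},
]

def meets_diversity_criteria(campus: dict) -> bool:
    """Apply the screening criteria described above (thresholds assumed)."""
    return (
        campus["pct_low_ses"] > 50.0   # majority low-SES (from the TAPR text)
        or campus["pct_ell"] >= 15.0   # substantial ELL population (assumed cutoff)
    )

placement_pool = [c["name"] for c in campuses if meets_diversity_criteria(c)]
print(placement_pool)  # -> ['Campus A']
```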

The EPP reviews these reports and identifies schools that meet our criteria on diversity. Some campuses in our service area, for example, have an Autism Unit, while others, like Eastridge Elementary, Bowie Middle School, Caprock High School, and Palo Duro High School, have extremely diverse Hispanic, African American, and Asian student populations. Before placement for field observations and clinical teaching, the Methods Chair (university faculty), the Director of Teacher Preparation and Advising, and principals (especially from Amarillo and Canyon ISDs, where the majority of our teacher candidates request to complete their student/clinical teaching) meet and discuss the needs of the school and the specialty licensure/certification areas of candidates in our EPP program that meet those schools’ needs.

Co-decisions concerning placement for both field and clinical experiences are made through these meetings and ongoing discussions. Much of the evidence provided by the EPP in Program Notebooks consists of emails among school personnel and university faculty documenting these ongoing discussions.

As the demographics of the EPP’s candidates change as a reflection of the changes in the demographics of the areas we serve, the EPP will continue to discover ways to provide both field and clinical experiences in diverse settings.

These data will be available for review onsite.

[See Addendum Exhibit (AE51) Texas Academic Performance Reports (TAPR) Samples].

(FFR, p. 10, paragraph 4)

Though the EPP provided some general survey data as evidence of candidate impact on P-12 student learning, the EPP did not provide evidence to demonstrate candidates used both formative and summative assessments in more than one clinical setting and have used two comparison points, used the impact data to guide instructional decision-making, modified instruction based on impact data, and differentiated instruction.

Response:

The CAEP Evaluation Rubric for Visitor Teams Draft March 2016 for Standard 2 Component 2.3 states: “Attributes (depth, breadth, diversity, coherence, and duration) are linked to student outcomes and candidate performance. Standard 1 evidence shows that candidate [sic] have purposefully assessed impact on student learning using both formative and summative assessments in more than one clinical setting and have:

o used two comparison points,

o used the impact data to guide instructional decision-making,

o modified instruction based on impact data, and

o have differentiated instruction” (page 8, bullet 3).

As previously indicated in the SSR Addendum, the CAEP Evaluation Rubric for Visitor Teams Draft March 2016 was not available to the EPP during the preparation of our SSR. The CAEP Accreditation Manual February 2015 for Standard 2 Component 2.3 states: “[C]andidates demonstrate their developing effectiveness and positive impact on all students’ learning and development. Clinical experiences, including technology enhanced learning opportunities, are structured to have multiple performance-based assessments at key points within the program to demonstrate candidates’ development of the knowledge, skills, and professional dispositions, as delineated in Standard 1, that are associated with a positive impact on the learning and development of all P-12 students” (pages 95-96).

The guidance provided by the CAEP Accreditation Manual February 2015 does not contain the same language as the CAEP Accreditation Handbook March 2016. Rather, for a “Measure or type of evidence” for Component 2.3, the CAEP Manual suggests “[t]o examine clinical experiences, Standard 2.3 is asking that the provider consider the relationship between the outcomes and the attributes of the clinical experiences. The question is: what is it about the experiences (that is, depth, breadth, diversity, coherence and duration) that can be associated with the observed outcomes?” (page 96, left column, bullet 4).

The EPP’s responses in the SSR Addendum and Addendum Exhibits demonstrate that the EPP provides field and clinical experiences of sufficient depth, breadth, diversity, coherence, and duration that are associated with program outcomes for our candidates. Additionally, candidates use formative and summative assessments in multiple settings to guide instructional decision-making, to modify their instruction based on data, and to differentiate their instruction. Evidence of candidate use of formative and summative assessments appears in Lesson Plan Samples, Teacher Work Samples, and the Addendum Exhibits.

[See Addendum Exhibit (AE16) Principal Survey Data].

[See Addendum Exhibit (AE17) PDAS Evaluation Data].

[See Addendum Exhibit (AE18) Field Observation Evaluations].

[See Addendum Exhibit (AE34) Faculty Interview Questions Data].

[See Addendum Exhibit (AE47) Student/Clinical Teachers Evaluations].

[See Addendum Exhibit (AE49) Weekly Progress Reports for Clinical Teachers].

[See Addendum Exhibit (AE53) Formative and Summative Assessment Data].

[See Addendum Exhibit (AE54) Letters of Support from University Field Supervisors].

[See Addendum Exhibit (AE59) Teacher Work Samples].

[See Addendum Exhibit (AE60) Interview Data of Superintendents].

[See Addendum Exhibit Standard 1 (AES1.1.2) Research and Evidence].

[See Addendum Exhibit for Standard 1 (AES1.1.5) Model and Apply Technology Standards].

(FFR, p. 11, top of page)

The EPP provided results of the ASEP Principal Survey (1.5.1, Table 1) and the Clinical Teacher Evaluation (1.5.1, Table 2) as evidence of candidates’ effective use of technology; however, the evidence provided did not demonstrate that both candidates and P-12 students have used technology to enhance learning or that candidates have used technology to track student progress and growth.

Response:

In Texas, clinical teachers are not allowed by district and campus policy to access schools’ student management systems to track or share the academic progress, grades, and personal information of P-12 students, due to the federal Family Educational Rights and Privacy Act (FERPA) [See http://familypolicy.ed.gov/ferpa-school-officials]. Our candidates are not permitted to use the districts’ technology resources to track and/or share student data.

Because of these district restrictions, the EPP has striven to create its own technology supplement so that our candidates can practice tracking and sharing academic progress in a way that is both legal and appropriate. The EPP carefully revised the Weekly Progress Reports for Clinical Teachers for Fall 2016 to give candidates additional opportunities to strengthen or gain the abilities to track and share their students’ academic progress and to practice communicating with parents digitally.

In previous years from 2013 to 2015, clinical teachers worked collaboratively with their cooperating teachers to develop plans for weekly improvement during their thirteen weeks of student/clinical teaching. Copies of these Weekly Progress Reports were sent digitally or electronically (using technology) each Friday afternoon to the University Field Supervisor, the clinical teacher, and the Director of Teacher Preparation and Advising. These weekly reports for 2013 to 2015 are housed in Individual Candidate Folders in the Office of Teacher Preparation and Advising.

For continuous improvement in Fall 2016, the EPP is shifting more of the responsibility for developing weekly plans for progress onto candidates rather than the cooperating teachers. Aligned with the T-TESS Evaluation, the new Weekly Progress Reports will be completed by candidates in consultation with their cooperating teachers. University Field Supervisors will receive copies each week from their assigned clinical teachers, enabling greater interaction and giving supervisors a “heads-up” on “look-fors” during their observations and evaluations of the clinical teachers.

In the CAEP Accreditation Manual Version 2 February 2015, Component 1.5 for Standard 1 states: “Providers ensure that completers [at exit] model and apply technology standards as they design, implement, and assess learning experiences to engage students and improve learning, and enrich professional practice” (p. 93). The recommendations for “A. Measure or type of evidence” include evidence of “completers modeling and application of technology standards” through various measures of “Clinical experience observation instrument; lesson or unit plans, portfolios, work sample with exhibition of applications, and use of technology in instruction, technology course signature project/assignment” (pages 93-94).

The Area for Improvement states: “There is little evidence that candidates have the ability to track and share student performance digitally”. This language derives from the CAEP Accreditation Handbook Version 3 March 2016 for CAEP Standard 1 Component 1.5 and from the CAEP Evaluation Rubric for Visiting Teams Draft, March 2016: “Candidates demonstrate the ability to track and share student performance data digitally with performance at or above the acceptable level on rubric indicators” (page 5, bullet 5 of the Rubric).

The CAEP Accreditation Handbook, March 2016 and the CAEP Evaluation Rubric for Visiting Teams Draft, March 2016 were not available to the EPP during the preparation of our Self-Study Report (SSR). Evidence demonstrating candidates’ ‘ability to track and share P-12 student performance data digitally’ was therefore not a requirement or expectation for the EPP; this language is not mentioned in the CAEP Accreditation Manual, February 2015. In discussions with CAEP Staff and our Team Chair, the EPP was instructed to include this information in our SSR Addendum. Given the evolving changes in the CAEP accreditation process, with materials being added intermittently, the EPP understands how this oversight may have happened. We remain committed to the process and greatly appreciate the dedicated work of our CAEP Review Team. We also anticipate a fair review of the EPP’s work and efforts for continuous improvement based upon the CAEP resources that were available to us. Thank you.

Please see the EPP’s previous response in the SSR Addendum on pages 60-61. Thank you.

[See Addendum Exhibit (AE16) Principal Survey Data].

[See Addendum Exhibit (AE17) PDAS Evaluation Data].

[See Addendum Exhibit (AE18) Field Observation Evaluations].

[See Addendum Exhibit (AE34) Faculty Interview Questions Data].

[See Addendum Exhibit (AE45) Tracking Student Performance].

[See Addendum Exhibit (AE47) Student/Clinical Teachers Evaluations].

[See Addendum Exhibit (AE49) Weekly Progress Reports for Clinical Teachers].

[See Addendum Exhibit (AE53) Formative and Summative Assessment Data].

[See Addendum Exhibit (AE54) Letters of Support from University Field Supervisors].

[See Addendum Exhibit Standard 1 (AES1.1.2) Research and Evidence].

[See Addendum Exhibit for Standard 1 (AES1.1.5) Model and Apply Technology Standards].

c. Evidence that is inconsistent with meeting the standard

Response:

The EPP has previously responded to each of these prompts within the SSR Addendum. In response, Addendum Exhibits have been delineated for each prompt in brackets. Thank you.

(FFR, p. 11, middle of page)

  1. Lacking data to demonstrate impact of candidates on P-12 student learning (Component 2.3), specifically linking student outcomes and candidate performance.

[See Addendum Exhibit (AE75) Candidate Performance and Student Outcomes (Component 2.3)].

(FFR, p. 11, middle of page)

  2. Formative and summative feedback provided by university field supervisors and cooperating teachers to candidates in field and clinical experiences appears to be in the form of a checklist organized into five domains. While the criteria on the checklist seem appropriate, the levels of performance are not defined for the candidates (1.1.11).

[See Addendum Exhibits (AE17); (AE22); (AE23); and (AE41)].

(FFR, p. 11, middle of page)

  3. Lacking evidence to demonstrate both candidates and P-12 students use technology to enhance learning and candidates use technology to track student progress and growth.

Please see the EPP’s previous response in the SSR Addendum on pages 59-60 and page 68. Thank you.

[See also Addendum Exhibit (AE76) Technology and Student Progress].

  2. List of onsite tasks to be completed. Use the following three prompts for each task.

Standard 2 Task 1

Response: The EPP has previously responded to each of these prompts within the SSR Addendum. In response, Addendum Exhibits have been delineated for each prompt in brackets. Thank you.

(FFR, p. 11, middle of page)

  a. Evidence in need of verification or corroboration
  1. Verify the mutual benefit of EPP and P-12 partnerships.

[See Addendum Exhibit (AE65) Mutuality of Partnership Benefits].

  2. Confirm co-construction and regularity of review of criteria for mentor teachers.

[See Addendum Exhibits (AE49); (AE50); and (AE66)].

c. Questions for EPP concerning additional evidence, data, and/or interviews, including follow-up on the response to 1.c.

Response: The EPP has previously responded to each of these prompts within the SSR Addendum. In response, Addendum Exhibits have been delineated for each prompt in brackets. Thank you.

(FFR, p. 11, bottom of page)

  1. To what extent is the P-12 community involved in the development of criteria for entry/exit into clinical experiences?

[See the SSR Addendum, pages 58-59; and Addendum Exhibits (AE50) and (AE66)].

(FFR, p. 11, bottom of page)

  2. Candidates are given the opportunity to evaluate their university field supervisors, cooperating teachers, and the program. How is this information shared and with whom?

[See Addendum Exhibit (AE50) Samples of Meeting Data].

(FFR, p. 11, bottom of page)

  3. The EPP provides evidence of P-12 involvement in the creation of the CEI. In what ways has the P-12 community been involved in the creation of other instruments and evaluations (e.g., field experience assessments [1.1.11])?

[See Addendum Exhibit (AE50) Samples of Meeting Data].

(FFR, p. 12, top of page)

  4. How do candidates know the EPP’s expectations for various levels of performance on the field experience assessments (1.1.11, Tables 1 & 3)?

[See Addendum Exhibits (AE18); (AE23); (AE24); (AE25); (AE41); (AE50); and SSR Exhibit (1.1.4)].

(FFR, p. 12, top of page)

  5. Do school-based clinical educators have opportunities to evaluate EPP-based clinical educators? If so, how are the results of the evaluations shared, and with whom are the results shared?

[See Addendum Exhibit (AE50) Samples of Meeting Data].

  6. Do EPP-based clinical educators have opportunities to evaluate school-based clinical educators? If so, how are the results of the evaluations shared, and with whom are the results shared?

[See Addendum Exhibit (AE50) Samples of Meeting Data].

  7. The EPP provides clinical educators with opportunities for professional development regarding their roles and responsibilities (i.e., handbook and online). Are clinical educators required to participate in the professional development? How does the EPP ensure clinical educators are aware of their roles and responsibilities?

Please see the EPP’s previous response in the SSR Addendum. Thank you.

(FFR, p. 12, middle of page)

  8. Are clinical educators involved in the creation of their professional development opportunities?

Please see the EPP’s previous response in the SSR Addendum on pages 64-66. Thank you.

  9. How many distinct clinical experiences do candidates have during their preparation program in each of the programs? How does the EPP ensure depth and breadth of field experiences in all programs?

[See Addendum Exhibits (AE24); and (AE25)].

  10. How does the EPP ensure all candidates have opportunities to work in diverse schools with students representing different diverse populations?

[See Addendum Exhibit (AE51) Texas Academic Performance Reports (TAPR) Samples].

(FFR, p. 12, middle of page)

  11. What evidence is available to document that both candidates and P-12 students have used technology to enhance learning?

[See Addendum Exhibit (AE76) Technology and Student Progress].

(FFR, p. 12, middle of page)

  12. What evidence is available to document that candidates have used technology to track student progress and growth?

Please see the EPP’s previous response in the SSR Addendum on pages 59-60. Thank you.

[See Addendum Exhibits (AE16); (AE17); (AE18); (AE34); (AE47); (AE49); (AE53); (AE54); (AES1.1.2); (AES1.1.5); and (AE76)].

(FFR, p. 12, middle of page)

  13. How does the EPP document the impact of candidates on P-12 student learning, including the use of formative and summative assessments, the use of two comparison points, the use of impact data to guide instructional decision-making, the modification of instruction based on impact data, and differentiation of instruction?

See the EPP’s previous response in the SSR Addendum. Thank you.

3. Preliminary recommendations for new areas for improvement and/or stipulations including a rationale for each.

Areas for Improvement (AFIs)

(FFR, p. 12, bottom of page)

Area for Improvement: The EPP does not document the impact of candidates on P-12 student learning, including the use of formative and summative assessments, the use of two comparison points, the use of impact data to guide instructional decision-making, the modification of instruction based on impact data, and differentiation of instruction.

Rationale: Only general survey data are provided as evidence of candidate impact on P-12 student learning.

Response:

The CAEP Evaluation Rubric for Visitor Teams Draft March 2016 for Standard 2 Component 2.3 states: “Attributes (depth, breadth, diversity, coherence, and duration) are linked to student outcomes and candidate performance. Standard 1 evidence shows that candidate [sic] have purposefully assessed impact on student learning using both formative and summative assessments in more than one clinical setting and have:

o used two comparison points,

o used the impact data to guide instructional decision-making,

o modified instruction based on impact data, and

o have differentiated instruction” (page 8, bullet 3).

As previously indicated in the SSR Addendum, the CAEP Evaluation Rubric for Visitor Teams Draft March 2016 was not available to the EPP during the preparation of our SSR. The CAEP Accreditation Manual February 2015 for Standard 2 Component 2.3 states: “[C]andidates demonstrate their developing effectiveness and positive impact on all students’ learning and development. Clinical experiences, including technology enhanced learning opportunities, are structured to have multiple performance-based assessments at key points within the program to demonstrate candidates’ development of the knowledge, skills, and professional dispositions, as delineated in Standard 1, that are associated with a positive impact on the learning and development of all P-12 students” (pages 95-96).

The guidance provided by the CAEP Accreditation Manual February 2015 does not contain the same language as the CAEP Accreditation Handbook March 2016. Rather, for a “Measure or type of evidence” for Component 2.3, the CAEP Manual suggests “[t]o examine clinical experiences, Standard 2.3 is asking that the provider consider the relationship between the outcomes and the attributes of the clinical experiences. The question is: what is it about the experiences (that is, depth, breadth, diversity, coherence and duration) that can be associated with the observed outcomes?” (page 96, left column, bullet 4).

In response, the EPP has structured field and clinical experiences for all candidates that demonstrate their developing effectiveness and positive impact on all students’ learning and development. Sequential and multiple performance-based assessments such as the PEO Rubric, CEI Rubric, KEI Assignments in each course, the Methods Field Observation Evaluation, Weekly Progress Reports, and PDAS/T-TESS assessments are used at the key points of admission, during development, and upon completion within the program. These assessments demonstrate candidates’ development of the knowledge, skills, and ethical and professional dispositions (as delineated in Standard 1) that are associated with a positive impact on the learning and development of all P-12 students.

As our candidates develop as educators who are confident, skilled, and reflective professionals throughout our program, their impact upon student learning and development is ensured by the EPP. This assurance of impact is reinforced with data from the Principal Surveys of our completers as beginning teachers in the field.

The EPP’s responses in the SSR Addendum and Addendum Exhibits demonstrate that the EPP provides field and clinical experiences of sufficient depth, breadth, diversity, coherence, and duration that are associated with program outcomes. These program educational outcomes (PEOs) prepare candidates who are critical, creative thinkers, effective communicators, advocates for diverse learners, users of technology, life-long learners, and stewards of the profession. Additionally, our candidates use formative and summative assessments in multiple settings, including both field and clinical experiences, to guide instructional decision-making, to modify their instruction based on data, and to differentiate their instruction. Evidence of candidate use of formative and summative assessments appears in Lesson Plan Samples, Teacher Work Samples, and the Addendum Exhibits.

Please see the EPP’s previous response in the SSR Addendum on pages 71-72 and as follows on pages 80-84. Thank you.

[See Addendum Exhibit (AE16) Principal Survey Data].

[See Addendum Exhibit (AE17) PDAS Evaluation Data].

[See Addendum Exhibit (AE18) Field Observation Evaluations].

[See Addendum Exhibit (AE34) Faculty Interview Questions Data].

[See Addendum Exhibit (AE47) Student/Clinical Teachers Evaluations].

[See Addendum Exhibit (AE49) Weekly Progress Reports for Clinical Teachers].

[See Addendum Exhibit (AE53) Formative and Summative Assessment Data].

[See Addendum Exhibit (AE54) Letters of Support from University Field Supervisors].

[See Addendum Exhibit (AE59) Teacher Work Samples].

[See Addendum Exhibit (AE60) Interview Data of Superintendents].

[See Addendum Exhibit Standard 1 (AES1.1.2) Research and Evidence].

[See Addendum Exhibit for Standard 1 (AES1.1.5) Model and Apply Technology Standards].

(FFR, p. 12, bottom of page)

Area for Improvement: The EPP does not provide evidence that both candidates and P-12 students use technology to enhance learning, nor that candidates use technology to track student progress and growth.

Rationale: Component 2.3 is not fully addressed in the SSR. The EPP states that technology use occurs, but evidence is not provided.

Response:

To fully address Component 2.3 Clinical Experiences for our candidates, the EPP provides evidence of collaborative work with our partners based upon state statute and/or protocols to design clinical experiences of sufficient depth, breadth, diversity, coherence, and duration in the SSR, SSR Addendum, and Addendum Exhibits.

Component 2.3 states: “Clinical experiences, including technology-enhanced learning opportunities, are structured to have multiple performance-based assessments at key points within the program to demonstrate candidates’ development of the knowledge, skills, and professional dispositions, as delineated in Standard 1, that are associated with a positive impact on the learning and development of all P-12 students”.

The depth and breadth of clinical experiences follow state protocol, an established scope and sequence of the Texas Essential Knowledge and Skills (TEKS), and EPP requirements, with ongoing collaboration with our school partners. Other stakeholders, including the Director of the Office of Teacher Preparation and Advising, principals, EPP faculty members, the Methods Chair, the EPP Advisory Council, the Teacher Education Unit (TEU) comprised of colleagues from other colleges within the university, and the Dean’s Superintendent Meetings, work together to develop and coordinate the depth and breadth of the clinical experiences provided by the EPP.

The EPP ensures diversity in the placement of our candidates in diverse P-12 school settings after review of TAPR reports from the Texas Education Agency. The school districts of our service area are highly diverse as evidenced by the TAPR sample reports, the SSR Addendum, and Addendum Exhibits. The EPP strives to continue to ensure the diversity of our candidates as demonstrated in the Diversity, Recruitment, and Monitoring Plan of the EPP.

To achieve coherence, the relationship between clinical experiences and coursework is documented by the EPP through candidates’ KEI assignments for each course and through the practice of strategies in teaching three 20-minute lessons in collaboration with faculty and cooperating teachers during their field experiences. During the field experience, course assignments require guided discussions, papers, presentations, and/or class projects that demonstrate candidates’ developing knowledge, skills, and professional dispositions. At the end of the field observation experience in each Methods course, candidates write reflection papers about their experiences.

Duration is determined by TEA and state statute as state protocol for all EPPs in Texas. Candidates complete 40 hours of field observations during their Methods courses. EPP-based faculty and school-based cooperating teachers evaluate the progress and development of candidates using multiple performance-based assessments at key points. Two sequential performance-based assessments developed by the EPP include the PEO Rubric and CEI Rubric. These rubrics are used at key points throughout the progression of the EPP to evaluate candidate outcomes and ethical and professional dispositions as presented in the SSR Addendum.

As one of its data-driven decisions, the EPP conducted a national faculty search in Fall 2015 and Spring 2016 for a newly developed position. In Fall 2016, the EPP hired a new faculty member to improve technology instruction; to redesign, develop, and offer a new course in Educational Technology; and to provide technology seminars for our candidates prior to their clinical experiences. In August 2016, two days of required technology instruction provided technology-enhanced learning opportunities for our candidates and their students.

It is important to note the CAEP Evaluation Rubric for Visiting Teams Draft March 2016 for Component 2.3 states: “Evidence documents that both candidates and students have used technology to enhance learning”; “Evidence documents that candidates have used technology to track student progress and growth” and “Specific criteria for appropriate use of technology are identified” (pages 8-9, bullets in the middle column).

Also, the FFR states in its “Area for Improvement”: “The EPP does not provide evidence that both candidates and P-12 students use technology to enhance learning nor that candidates use technology to track student progress and growth”, and in its “Rationale”: “Component 2.3 is not fully addressed in the SSR. The EPP states that technology use occurs, but evidence is not provided”.

As explained previously in the SSR Addendum, the CAEP Evaluation Rubric for Visiting Teams Draft March 2016 was not available to the EPP during the preparation of our SSR, and the specific language of this Rubric is not contained in the CAEP Accreditation Manual February 2015. The Manual did not require the EPP to provide evidence that ‘both candidates and P-12 students use technology to enhance learning’ or that candidates use technology ‘to track student progress and growth’. Rather, the Manual recommends as “A. Measure or type of evidence” “the application of technology to enhance instruction and P-12 learning for all students” (page 96).

To ensure that candidates demonstrate their developing effectiveness and positive impact on all P-12 students’ learning and development through clinical experiences, including technology-enhanced learning opportunities, the EPP uses sequential and progressive performance-based assessments at key points within the progression of the program to demonstrate candidates’ development of the knowledge, skills, and professional dispositions, as delineated in Standard 1, that are associated with a positive impact on the learning and development of all P-12 students.

As further evidence of positive impact on P-12 student learning and development, the Principal Survey Summary Results for completers for 2013-2014 and 2014-2015 include eight (8) questions that principals answered regarding completers’ use of technology. These questions respond to the Section VI: Technology Integration prompt, “To what extent did the educator preparation provider prepare this beginning teacher to”

  1. use technology available on the campus to integrate curriculum to support student learning?
  2. provide technology-based classroom learning opportunities that allow students to interact with real-time and/or online content?
  3. teach students developmentally appropriate technology skills?
  4. use technology to make learning more active and engaging for students?

and, to the Section VII: Use of Technology with Data prompts:
  5. use technology to collect, manage, and analyze student data using software programs (such as Excel or an electronic gradebook)?
  6. use available technology to collect, manage, and analyze data from multiple sources in order to interpret learning results for students?
  7. use available technology to document student learning to determine when an intervention is necessary and appropriate?
  8. use available technology to collect and manage formative assessment data to guide instruction?

The Principal Survey of 2012-2013 was the only survey available from TEA prior to the EPP’s submission of the SSR. On a 3-point scale, with 3 = well prepared, 2 = prepared, and 1 = not prepared, the scores on Questions 31-38 ranged from 2.26 to 2.34, indicating principals believed completers were prepared to use technology. Statewide average scores and standard deviations were also provided in 2012-2013.

The EPP analyzed the summary results of the recently released Principal Surveys from 2013-2014 and 2014-2015 on Excel spreadsheets. The EPP used two comparison points, the state and a state university of similar size (Tarleton State University), along with the EPP standard deviation and the comparison EPP standard deviation. With N = 219 in 2013-2014 and N = 243 in 2014-2015, on a 4-point scale with 4 = well prepared and 1 = not prepared at all, the range of scores was 3.31 to 3.41 in 2013-2014 and 3.17 to 3.37 in 2014-2015, indicating principals believed our completers were prepared to well prepared in the use of technology.
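
As a placeholder illustration of this comparison-point analysis (performed in practice on Excel spreadsheets), the Python sketch below computes the mean and standard deviation of a set of survey item scores for the EPP and a comparison EPP on the 4-point scale. The item scores shown are invented, not the EPP’s actual survey results.

```python
# Placeholder illustration of the two-comparison-point analysis: compute the
# mean and standard deviation of principal-survey item scores for the EPP and
# a comparison EPP on the 4-point scale (4 = well prepared, 1 = not prepared
# at all). The numbers are made up, not the EPP's actual 2013-2015 results.

from statistics import mean, stdev

epp_items = [3.31, 3.35, 3.38, 3.41, 3.33, 3.36, 3.40, 3.34]         # 8 technology items
comparison_items = [3.20, 3.25, 3.28, 3.30, 3.22, 3.27, 3.29, 3.24]  # comparison EPP

for label, scores in (("EPP", epp_items), ("Comparison EPP", comparison_items)):
    print(f"{label}: mean = {mean(scores):.2f}, SD = {stdev(scores):.2f}")
```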

The EPP has fully addressed Component 2.3 and has provided multiple forms of evidence of the use of technology by candidates throughout the progression of the program. This evidence is provided in the SSR Addendum, the Addendum Exhibits, and the EPP Program Notebooks that will be available onsite. KEI Assignments, assessments, and reflection writings are housed in the Program Notebooks. Thank you.

[See Addendum Exhibit (AE16) Principal Survey Data].

[See Addendum Exhibit (AE51) Texas Academic Performance Reports (TAPR) Samples].

[See Addendum Exhibit (AE63) Diversity, Monitoring, and Recruitment Plan of the EPP].

[See Addendum Exhibit (AE64) Faculty Recruitment Research Study (Dr. Coneway and Dr. Garcia)].

[See Addendum Exhibit (AE75) Candidate Performance and Student Outcomes (Component 2.3)].

(FFR, p. 13, top of page)

Area for Improvement: The EPP does not provide evidence that all candidates have experiences in diverse P-12 settings.

Rationale: Although district demographics are provided, the EPP did not describe how they ensure every candidate has this experience during their preparation.

Response:

Texas has a rich history of diverse, multicultural populations throughout the state. Mexico on our southern border, the influx of a large migrant population who ‘follow the sun’ for crop harvests each year, and a refugee population in Amarillo that, at 10% and growing, is the largest percentage in the nation all contribute to the uniquely diverse school settings the EPP serves. Over 65 languages are spoken in Amarillo ISD schools alone, and area districts such as Hereford, Dumas, and Pampa enjoy similar diversity in their smaller school districts. Diverse school settings, with a wide array of multicultural P-12 student populations and high percentages of low socioeconomic groups, are a way of life in our service area.

In the midst of such diversity and poverty, the EPP ensures that all candidates have opportunities to work in diverse settings in Amarillo schools and other area schools through an annual examination of the Texas Academic Performance Reports (TAPR) that are available on the TEA website for all districts and campuses within the state. The TAPR reports provide demographic information on Title I schools with over 50% low socioeconomic students on the Free/Reduced lunch program, the percentages of ELLs and special education students, the years of experience of their teachers, and other important information.

The EPP reviews these reports and identifies schools that meet our criteria on diversity. Some campuses in our service area, for example, have an Autism Unit, while others, like Eastridge Elementary, Bowie Middle School, Caprock High School, and Palo Duro High School, have extremely diverse Hispanic, African American, and Asian student populations. Before placement for field observations and clinical teaching, the Methods Chair (university faculty), the Director of Teacher Preparation and Advising, and principals (especially from Amarillo and Canyon ISDs, where the majority of our teacher candidates request to complete their student/clinical teaching) meet and discuss the needs of the school and the specialty licensure/certification areas of candidates in our EPP program.

As the demographics of the EPP’s candidates change as a reflection of the changes in the demographics of the areas we serve, the EPP ensures that candidates are provided both field and clinical experiences in diverse settings.

These data will be available for review onsite. Please see also the EPP’s previous response in the SSR Addendum on pages 68-69. Thank you.

[See Addendum Exhibit (AE51) Texas Academic Performance Reports (TAPR) Samples].

(FFR, p. 13, top of page)

Area for Improvement: The EPP provides little evidence that clinical experiences before student teaching are of sufficient depth and breadth to demonstrate impact on all student learning.

Rationale: The EPP states candidates must complete a minimum of 40 hours of field experience prior to student teaching; however, it is unclear how these experiences are distributed across each program (i.e., courses in each program with field experience components) and how the EPP ensures these experiences are of sufficient depth and breadth to demonstrate impact on student learning and to prepare candidates for the student teaching experience.

Response:

By state statute and EPP requirements, each program in Elementary Education Early Childhood EC-6, Grades 4-8, Secondary Education (Middle/High Schools), and Special Education EC-12 provides 40 hours of field observation experience for all candidates. In meeting these requirements, the EPP ensures that the field observation experiences in candidates’ Methods courses, taken prior to their student/clinical teaching experience, are of sufficient depth and breadth. Through the field observations and the assessment of candidates in those experiences, the EPP demonstrates positive impact on student learning and prepares candidates for their student/clinical teaching experience.

After reviewing the Texas Education Agency’s TAPR reports and holding collaborative meetings with our school partners each semester to identify their needs and our specialty licensure/certification areas, the EPP assigns placement of all candidates enrolled in Methods courses for field observations in their certification areas. Methods faculty, school districts, and the Director of the Office of Teacher Preparation and Advising work together to ensure candidates are placed in diverse P-12 settings in their specialty licensure/certification fields of study.

The EPP provides Methods courses for Early Childhood EC-6 and Grades 4-8 in math, science, math/science, English language arts, and social studies, as well as in secondary education and Special Education EC-12. Secondary Methods provides instruction for most secondary specialty licensure/certification areas, including art, theatre arts, and sports and exercise sciences (SES). Two other colleges within the university provide specialized Methods courses: the College of Agriculture and Natural Science provides a Methods course for agriculture candidates, and the School of Music in the Sybil B. Harrington College of Fine Arts and Humanities provides the Methods course for music candidates. The EPP works with our colleagues in these two colleges in the placement of candidates for their field observation and clinical teaching experiences.

Through the 40 hours of candidate field observations, EPP-based faculty and school-based cooperating teachers assess candidate progress and development. Methods instructors provide opportunities for candidate growth through course assignments, discussions, and reflection writings, and provide any needed support. Candidates observe and practice pedagogical knowledge and strategies during field observations, teaching three 20-minute lessons in collaboration with their cooperating teachers. Feedback from their cooperating teachers and their university instructors provides candidates with additional opportunities for growth. During the Methods semester, all candidates must pass both state certification exams (TExES Content and TExES PPR) prior to clinical teaching.

Through the EPP’s ongoing assessment of candidate progress and development during the 40 hours of field observation experience, and through candidates passing the required state certification exams, completing their Methods courses with a minimum grade of “C”, and maintaining a 2.75 GPA, the EPP provides evidence of positive impact on student learning and of the preparation of our candidates for their student/clinical teaching experience. The EPP strongly believes that through these experiences, our candidates are highly qualified to be placed in diverse P-12 settings for the student/clinical teaching experience.

[See Addendum Exhibit (AE24) Methods Field Experience Assessment].

[See Addendum Exhibit (AE25) Methods Field Experience Assessment Rubric].