D-tag Descriptions for EPP report

Old EPP report:

  1. This figure illustrates the first draft of the Principal Survey report template that was generated by the Texas Comprehensive Center (TXCC) for the Texas Education Agency (TEA). The report is titled: Standard 2 – Principal Survey to Evaluate Texas Educator Preparation Programs (EPPs) on Preparation of First Year Teachers. The purpose of the report was to provide survey results for all EPPs in the state on how well school principals agreed that their first-year teachers were prepared to teach in six domains: a) classroom instruction, b) classroom management, c) students with disabilities, d) English language learners, e) technology integration, and f) use of technology with data.
  2. The main takeaways for the audience are that the figure is not visually appealing, that the survey results are not presented transparently enough for a reader to know how to interpret the findings, and that only one comparison is displayed: EPP A compared to all other EPP results in the state.

More specific features of the report template include:

  1. The report with the results is strictly tabular, showing a total of 8 rows and 5 columns.
  2. There is no color in the figure.
  3. Descriptive information along the top rows includes the EPP name, the number of valid surveys for the reporting year that were included in the analysis, and whether the EPP met the standard (indicator 2), a performance measure set at 67 percentage points out of 100 possible. The descriptor reports either that the EPP met or did not meet the performance standard.
  4. The subsequent row is titled Classroom Environment, delineating the portion of the survey results that falls under that domain.
  5. The next row displays the question stem for the subsequent survey items: “To what extent did the educator preparation program prepare the beginning teacher to:”
  6. The subsequent rows contain survey items 4–7, which are as follows:

4. To what extent was this beginning teacher prepared to effectively implement discipline-management procedures?

5. To what extent was this beginning teacher prepared to communicate clear expectations for achievement and behavior that promote and encourage self-discipline and self-directed learning?

6. To what extent was this beginning teacher prepared to provide support to achieve a positive, equitable, and engaging learning environment?

7. To what extent was this beginning teacher prepared to build and maintain positive rapport with students?

  7. The columns next to each survey question have the following headers: EPP average score, EPP standard deviation, statewide average score, and statewide standard deviation. Next to survey questions 4–7, the EPP's outcomes are displayed as numbers under each of the four headers just described. The survey items use a 0–3 Likert scale, where 0 = not at all prepared, 1 = not well prepared, 2 = sufficiently prepared, and 3 = very well prepared. The EPP average results for survey items 4–7 range from 2.21 at the lowest to 2.36 at the highest. The EPP standard deviations range from a low of 0.60 to a high of 0.71. The statewide average scores range from 2.19 to 2.44, and the statewide standard deviations vary from 0.64 to 0.73.
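Because the table's numbers are simple summary statistics, a worked example may help. The following is a minimal sketch in Python; the ratings and variable names are invented for illustration, and since the report does not state whether TEA uses the population or sample standard deviation, the choice below is an assumption.

```python
from statistics import mean, pstdev

# Hypothetical principal ratings for one survey item, on the report's
# 0-3 Likert scale (0 = not at all prepared ... 3 = very well prepared).
ratings = [3, 2, 2, 3, 2, 1, 3, 2, 2, 3]

# Item average, as shown in the "EPP average score" column.
item_average = mean(ratings)   # 2.30 here, comparable to the 2.21-2.36 range above

# Standard deviation, as shown in the "EPP standard deviation" column.
# pstdev() is the population SD; TEA's exact method is not stated.
item_sd = pstdev(ratings)      # about 0.64 here

print(f"average = {item_average:.2f}, SD = {item_sd:.2f}")
```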

Return to page

New EPP report:

  1. This figure illustrates a revamped version of the report that was created by TXCC together with TEA. The new report has a number of features that differ markedly from the first version. The strategies include displaying the data in multiple formats such as bar graphs and text boxes, strategic use of a color palette to emphasize results (blue to highlight the EPP's results, other colors such as grey or black for comparisons), and side-by-side bar graphs for comparisons across EPP types and by survey domains. A quick summary of high-level items of note includes:
    1. As opposed to the simple, tabular view of the previous report, the revised report displays the principal survey results in four different ways. The first display, in the upper left-hand corner, shows a horizontal axis of Standard 2 Principal Survey scores from 0 to 100. The vertical axis shows 0 to 100 percent. Clustered column bars display the spread of the EPP's principal survey scores across all surveys. A textbox shows EPP X's average score across surveys; in this example, the average score was 72. A red vertical line shows that the performance standard for an EPP to have met the standard on the principal survey was an average of 67. Another textbox shows that the state average across all EPPs on the survey was 75. The main point is that this display shows, in a single visual, both the distribution of scores on the standard and how those results compare to the state average and the performance standard (see the first sketch following this list).
    2. A textbox in the upper right corner provides more contextual information for EPP X, including whether it met the standard (met, did not meet, or n/a because the EPP has fewer than 20 surveys), the EPP type (alternative or traditional), the location of the service center (there are 20 across Texas), the number of teachers certified by EPP X in the prior year, and the number of EPP X's graduates who were rated by principals in 2013–14. This textbox has a blue square around it to separate it from other parts of the report.
    3. The third data view is a series of small vertical bar graphs comparing the Standard 2 Principal Survey average scores across the six domains: a) classroom instruction, b) classroom management, c) students with disabilities, d) English language learners, e) technology integration, and f) use of technology with data. Within each domain, a vertical axis shows possible average scores from 0 to 100 in increments of 20. Each column chart has three columns: 1) a blue column displaying EPP X's average score in that particular domain, 2) a grey column displaying the average score for all EPPs of the same type (alternative or traditional), and 3) a black column displaying the state average within the domain. The point of this figure is that the side-by-side bar graphs help readers compare EPPs by type and against state averages within domains (see the second sketch following this list).
    4. The last view, at the bottom, shows individual survey items. The top header shows the name of the EPP: EPP X. The far-left column shows the average state score for the domain, expressed as the percentage of surveys rated sufficiently or well prepared. Next, a column in blue to the right of the state average displays EPP X's average domain score, also as the percentage of surveys rated sufficiently or well prepared. The next column to the right shows a scale from 0 to 100, marked at 0, 50, and 100; beneath it, horizontal bars display, for each survey item, the percentage of teachers rated sufficiently or well prepared on that item. Next to that column is a header showing the question stem: “To what extent was this beginning teacher prepared to provide support to.” Underneath that header is the domain header in blue, Classroom Environment, with each of the five survey items under the domain written out. This graph helps readers understand, survey item by survey item, how many surveys were rated as sufficiently or well prepared. For the first survey question, it shows that on the item “effectively implements discipline management practices,” on average 75 percent of teachers surveyed for EPP X in 2013–14 were rated as sufficiently or well prepared. Displaying multiple indicators lets readers see differentiation either by comparing the printed state and EPP averages or by visually comparing the lengths of the horizontal bars for each survey item (see the third sketch following this list).
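To make the first data view concrete, here is a minimal sketch in Python with matplotlib. Every score in it is randomly generated for illustration; only the 67-point standard and the 72 and 75 callout values come from the description above.

```python
import matplotlib.pyplot as plt
import numpy as np

# Randomly generated survey scores on a 0-100 scale, invented for illustration.
rng = np.random.default_rng(0)
scores = np.clip(rng.normal(75, 10, 500), 0, 100)

fig, ax = plt.subplots(figsize=(8, 4))

# Clustered columns showing what percent of surveys fall in each score bin.
ax.hist(scores, bins=range(0, 101, 5),
        weights=np.full(scores.size, 100 / scores.size),
        color="tab:blue", edgecolor="white")

# Red vertical line marks the 67-point performance standard.
ax.axvline(67, color="red", linewidth=2)
ax.text(68, 90, "Performance standard: 67", color="red")

# Textboxes mirroring the report's callouts (values from the description).
ax.text(2, 90, "EPP X average: 72", bbox=dict(boxstyle="round", fc="white"))
ax.text(2, 78, "State average: 75", bbox=dict(boxstyle="round", fc="white"))

ax.set_xlim(0, 100)
ax.set_ylim(0, 100)
ax.set_xlabel("Standard 2 Principal Survey score")
ax.set_ylabel("Percent of surveys")
plt.tight_layout()
plt.show()
```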
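The second sketch illustrates the third data view's comparison strategy: side-by-side columns per domain, blue for EPP X, grey for EPPs of the same type, and black for the state. All domain scores here are hypothetical placeholders, not values from the actual report.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical average scores on a 0-100 scale; all values invented.
domains = ["Classroom\ninstruction", "Classroom\nmanagement",
           "Students with\ndisabilities", "English language\nlearners",
           "Technology\nintegration", "Technology\nwith data"]
epp_x     = [72, 75, 68, 70, 74, 69]  # blue: EPP X
same_type = [74, 73, 70, 72, 73, 71]  # grey: EPPs of the same type
statewide = [75, 74, 71, 73, 75, 72]  # black: state average

x = np.arange(len(domains))
width = 0.25

fig, ax = plt.subplots(figsize=(10, 4))
ax.bar(x - width, epp_x, width, color="tab:blue", label="EPP X")
ax.bar(x, same_type, width, color="grey", label="Same EPP type")
ax.bar(x + width, statewide, width, color="black", label="Statewide")

ax.set_xticks(x)
ax.set_xticklabels(domains)
ax.set_ylim(0, 100)
ax.set_yticks(range(0, 101, 20))  # increments of 20, as in the report
ax.set_ylabel("Average score")
ax.set_title("Standard 2 Principal Survey average scores by domain")
ax.legend()
plt.tight_layout()
plt.show()
```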
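Finally, the per-item horizontal bars in the fourth view reduce to a simple percentage. A minimal sketch, again with invented ratings:

```python
# Hypothetical ratings for one survey item (0-3 Likert scale).
ratings = [3, 2, 1, 3, 2, 2, 0, 3, 2, 3]

# A survey counts toward the bar if the teacher was rated
# "sufficiently prepared" (2) or "very well prepared" (3).
pct = 100 * sum(r >= 2 for r in ratings) / len(ratings)
print(f"{pct:.0f}% rated sufficiently or well prepared")  # 80% for this data
```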

End of article. Return to the report at the top of the page.