
Assessment Literacy
Do educators know how to make use of the new avalanche of standardized test data?

By Rebecca Zwick

It is no secret that standardized testing in America's schools has undergone unprecedented growth in recent years. A 2001 survey showed that California students in grades two through 11 spent, on average, six to eight hours per year taking standardized tests. To the existing array of state- and district-mandated tests, schools are now adding those required by the No Child Left Behind Act of 2001: Annual assessments in reading and mathematics for students in grades three through eight are to be implemented by the 2005-06 school year, and by 2007-08, schools must test students in science at least once in elementary, middle and high school.

No Child Left Behind (NCLB) has also increased demands on teachers, principals and superintendents to use standardized test data to strengthen instruction and improve the management of schools. Future school leaders "must be adept at using data to improve teaching and learning," according to a 2002 report from the Center on Education Policy, an independent organization based in Washington, D.C. "Too often, test data are used only for accountability, rather than to diagnose the needs of individual students and improve their education. Much more test data will soon be available, because the new federal requirements require states to produce descriptive and diagnostic information, as well as individual test results."

But what if teachers and school administrators don't have the training needed to interpret or use this wealth of information? The educational software industry has been quick to respond to this potential market by generating a raft of new products. For example, according to the website for one software package, the system can create "NCLB-like reports," as well as "collect real-time data and perform sophisticated analysis of multivariate data such as student demographics, test scores, attendance, discipline, staff development, financial information, and more." Users are assured that the product will "empower their decision-making processes" and allow them to "assess the relationships between student learning and the learning environment."

Software availability, however, is not the answer to the assessment training gap. On the contrary, these classroom computer packages often serve only to add to the glut of impenetrable data with which today's school personnel are inundated. One teacher told me that her school had provided her with classroom software that yielded an abundance of statistical information about student performance. The problem, she said, was that she had no idea how to interpret these student results or use them to improve her teaching.

To get an idea of just one type of standardized test data that teachers must digest, I obtained a report of math and language scores for a local fifth grade class on the Stanford Achievement Test Series, a widely used assessment tool published by Harcourt, Inc. (The report was for the ninth edition; the tenth edition is now in use.)

Assuming the teacher can decode the many obscure abbreviations, she still needs to understand the following terms and concepts in order to comprehend the report: mean raw score, mean national normal-curve equivalent, national individual percentile rank, stanine score, and national grade percentile rank (all of which are provided for ten mathematical and verbal "subtests and totals"). The report also gives the percentage of children who are below average, average and above average in 38 "content clusters."
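To make these terms concrete: percentile ranks, normal-curve equivalents and stanines are all different renderings of the same underlying position in the score distribution. The following sketch, in Python, shows the standard textbook definitions; it is only illustrative, since the actual Stanford scales are derived from national norming samples rather than from a single classroom's scores.

    from statistics import NormalDist

    def percentile_rank(score, scores):
        # Percent of the norm group scoring below this score, counting
        # half of any ties (one common convention among several)
        below = sum(s < score for s in scores)
        ties = sum(s == score for s in scores)
        return 100 * (below + 0.5 * ties) / len(scores)

    def nce(pr):
        # Normal-curve equivalent: the percentile rank mapped onto a normal
        # scale with mean 50 and standard deviation 21.06, chosen so that
        # NCEs of 1, 50 and 99 coincide with percentile ranks of 1, 50 and 99
        return 50 + 21.06 * NormalDist().inv_cdf(pr / 100)

    def stanine(pr):
        # Stanine ("standard nine"): the same normal scale cut into nine
        # coarse bands with mean 5 and standard deviation 2
        return min(9, max(1, round(2 * NormalDist().inv_cdf(pr / 100) + 5)))

On these definitions, a student at the 65th percentile has a normal-curve equivalent of about 58 and a stanine of 6: three numbers on the report, one fact about the student.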

When I subsequently met with students pursuing master's degrees in education and teaching credentials to get their reactions to the score report, many acknowledged that they found much of it baffling. In surveys conducted by my research team, both teacher education students and current school personnel revealed substantial gaps in their knowledge of fundamental principles of educational measurement and statistics. Many were unfamiliar with basic concepts, such as the definitions of test reliability and measurement error. And when told that "20 students averaged 90 on an exam, and 30 students averaged 40," only half of a group of experienced K-12 teachers and administrators were able to calculate the combined average correctly.
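The tempting answer is 65, the midpoint of the two averages; the correct combined average weights each group average by its size:

\[
\frac{20 \times 90 + 30 \times 40}{20 + 30} \;=\; \frac{3000}{50} \;=\; 60.
\]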

This finding, of course, is just the tip of the iceberg. Understanding standardized test results well enough to use them productively requires much more than just a grasp of technical terms and a knowledge of basic statistical principles. A teacher or principal might well have many questions about the interpretation of a test score report that go beyond definitions and formulas. How precise are these scores? Are some more important than others? When is the difference in scores between two students large enough to be noteworthy? How about the difference in average scores between two classes, or the change in average scores over time? How do changes in the school population affect the interpretation of achievement trends? What does it mean to say that a difference between average scores is statistically significant? How is statistical significance related to educational importance? What information about a student's particular strengths and weaknesses can be gleaned from the test results?
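Some of these questions have standard, teachable answers. The precision of a single score, for example, is usually summarized by the standard error of measurement, which follows directly from the test's reliability. Here is a minimal sketch, in Python, using illustrative numbers rather than figures from any actual score report:

    import math

    def sem(sd, reliability):
        # Standard error of measurement: the typical size of the random
        # error attached to a single observed score
        return sd * math.sqrt(1 - reliability)

    def score_band(score, sd, reliability, z=1.96):
        # Roughly 95% of the time, the student's true score lies within
        # about two SEMs of the observed score
        e = z * sem(sd, reliability)
        return score - e, score + e

    def reliably_different(a, b, sd, reliability, z=1.96):
        # A difference between two observed scores carries the error of
        # both, so its standard error is sqrt(2) times the SEM
        return abs(a - b) > z * math.sqrt(2) * sem(sd, reliability)

With a scale standard deviation of 40 and a reliability of .90, the SEM is about 12.6 points: an observed score of 600 really means "somewhere between roughly 575 and 625," and a 20-point gap between two students is well within measurement noise.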

Although there is little formal research in this area, it is widely recognized that most educators are not well-equipped to respond to the increasing emphasis on the interpretation of standardized assessment results. According to a local superintendent, "Increasingly, school personnel are expected to use test results to make curricular decisions, to identify students, classes or schools that require additional instruction or resources, and to explain test results to parents, students and the media. Many teachers and principals, however, do not have the background needed to perform these tasks effectively."

Another school administrator said, "In this era of increasing accountability, training is needed now more than ever before. Educators must do a better job of using assessments to inform instruction. Good decisions are often hampered by a limited understanding of what test results really mean."

But why hasn't the recent boom in state and national testing produced a flurry of training efforts in the area of educational measurement and assessment? Ultimately, the answer lies in state licensing requirements. Students in teacher and administrator training programs have schedules that are jam-packed with required courses and internships. Classes in areas that fall outside the requirements are, at best, reluctantly omitted or, at worst, regarded as unnecessary frills.

In a 2002 Education Week article, Rick Stiggins, founder of a company called the Assessment Training Institute, noted that "only a few states explicitly require competence in assessment as a condition for being licensed to teach. No licensing examination now in place at the state or federal level verifies competence in assessment. Since teacher-preparation programs are designed to prepare candidates for certification under these terms, the vast majority of programs fail to provide the assessment literacy required to prepare teachers to face emerging classroom-assessment challenges.

"Furthermore, lest we believe that teachers can turn to their principals for help, almost no states require competence in assessment for licensure as a principal or school administrator at any level. As a result, assessment training is almost nonexistent in administrator-training programs."

What can be done about this state of affairs? Clearly, state licensing requirements for teachers and administrators, including the licensing exams themselves, must be reformed to reflect the importance of assessment literacy in today's testing-conscious environment. Required topics must include the educational measurement and statistics concepts needed to interpret the results of large-scale standardized tests, to use these results to inform instructional decisions, and to explain them to others. Once the content of teacher and administrator licensing exams is modified, curricular changes in teacher education programs are sure to follow.

Unfortunately, because changes in licensing and course requirements need to clear multiple bureaucratic hurdles, they are likely to be slow. What steps can be taken in the meantime to address this critical training gap? This question was considered in a 1990 document, Standards for Teacher Competence in Educational Assessment of Students, developed by the American Federation of Teachers, the National Council on Measurement in Education, and the National Education Association. The statement proposed standards for training teachers in assessment-related skills, including "competencies underlying teacher participation in decisions related to assessment at the school, district, state and national levels."

The document enumerated concepts and terms with which teachers should be familiar, such as percentile ranks, percentile bands, grade-equivalents, and errors of measurement. The statement further expressed concern "about the inadequacy with which teachers are prepared for assessing the educational progress of their students" and recommended that "assessment training be widely available to practicing teachers through staff development programs at the district and building levels." Although this standards document is now 14 years old, it is, regrettably, still current: On-the-job opportunities for teachers and administrators to increase their assessment literacy are still needed.

During the next three years, my research team at UC Santa Barbara will be making one effort in this direction. As part of a project sponsored by the National Science Foundation, we will be developing instructional materials for teachers and administrators at K-12 schools. These 30-minute modules, which will be available in DVD, videotape and web-based formats, will address a variety of questions about the appropriate interpretation of standardized test results.

In order to be successful, on-the-job instruction of this kind needs to have three properties. First, it must be practical, using common-sense language and emphasizing information that is needed in the daily work of school personnel. Second, it must be specific, ideally making use of examples involving the same standardized tests the teacher or principal must administer, score or interpret. Third, it must be convenient: Instructional materials must be available for teachers and administrators to use at times and places that suit them. For some, this may mean taking advantage of a free class period; for others, borrowing materials to take home.

We have entered an era in which teachers and school administrators are expected to use standardized test results to make instructional decisions, pinpoint schools, classes and individuals that require added attention or resources, and explain test results to students, parents, the school board and the press. At least until certification programs are restructured to provide the required instruction, on-the-job training efforts will be needed to help school personnel perform these functions effectively.


Rebecca Zwick is a professor in the Gevirtz Graduate School of Education at the University of California, Santa Barbara. She is the author of "Fair Game? The Use of Standardized Admissions Tests in Higher Education," and the editor of "Rethinking the SAT: The Future of Standardized Testing in University Admissions."


National CrossTalk Fall 2004

© 2004 The National Center for Public Policy and Higher Education
