
College Student Literacy
New report provides compelling evidence that America's students are not measuring up

By Emerson J. Elliott

In a recent assessment of literacy, the performance of America's college students was alarmingly poor. Although they did test better in some categories than other adults in the population with similar levels of education, sizable percentages were unable to carry out relatively simple reading comprehension tasks or make basic calculations.

"The Literacy of America's College Students," a new report from the American Institutes for Research (AIR), describes these results, which were derived from an assessment of a nationally representative sample of graduating students in two-year and four-year colleges and universities.

The college assessment used the same instrument as the National Center for Education Statistics' 2003 "National Assessment of Adult Literacy (NAAL)." That makes it possible to compare graduating college students with both the whole U.S. adult population and the two- and four-year graduates within it. NAAL and AIR measure three dimensions of literacy skills:

  • Prose literacy: the ability to use information from continuous texts such as editorials, news stories, brochures and instructional materials.

  • Document literacy: the ability to use information from discontinuous texts in various formats, such as job applications, payroll forms, transportation schedules and maps.

  • Quantitative literacy: the knowledge and skills to identify and perform computations, alone or sequentially, using numbers in printed materials. Examples include balancing a checkbook, figuring out a tip, completing an order form, and determining the interest on a loan.
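As a rough illustration only (this sketch is mine, not part of the AIR assessment, and the dollar amounts are invented), the everyday calculations cited for quantitative literacy amount to arithmetic of this kind:

```python
# Everyday calculations of the kind the quantitative literacy
# scale covers; the amounts below are invented examples.

def tip(bill, rate=0.15):
    """Figure out a tip on a restaurant bill at a given rate."""
    return round(bill * rate, 2)

def simple_interest(principal, annual_rate, years):
    """Determine the simple interest owed on a loan."""
    return round(principal * annual_rate * years, 2)

print(tip(42.00))                      # tip on a $42 bill at 15%: 6.3
print(simple_interest(1000, 0.06, 2))  # $1,000 at 6% for two years: 120.0
```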

These are skills that Americans require in order to function in everyday life, and the test items are facsimiles of text, documents, forms, or calculations that one encounters frequently.

Many of the report's findings are not surprising. The average prose, document and quantitative literacy of graduating students in two- and four-year colleges was significantly higher than the average literacy of adults in the nation. Graduating students, on average, outperformed the average of two- and four-year graduates in the adult population, through age 64, on prose and document literacy, but were no better in quantitative literacy. The most difficult test items for both two- and four-year students were quantitative.

The literacy of graduating students from both two-year and four-year institutions whose parents completed college or attended graduate school was significantly higher than that of students whose parents stopped their education after completing a GED or graduating from high school. Sadly, and consistent with findings from the National Assessment of Educational Progress and other data, AIR found a significant gap between performances of the majority population and African American, Hispanic and Asian/Pacific Islander populations in America.

The nation could celebrate some of the findings. For example, no gap was found in literacy between men and women completing their programs. This contrasts with the adult population as a whole, in which women score higher than men on prose literacy, and men outscore women on quantitative literacy. The study also found evidence that students in four-year colleges who first attended school with a non-English-speaking background, or with a background in both English and another language, scored higher in average literacy than adults in the nation who spoke only English before starting school.

The study found no significant differences in some comparisons. There were no differences in average literacy of students graduating from private or public four-year colleges; none between full-time and part-time students; and none among students attending one, two, or three or more institutions.

Differences did appear in other comparisons, however. There were higher average prose literacy scores in selective four-year colleges than in nonselective ones (but not in document or quantitative literacy). Students who took remedial courses, and especially those who took both English and math remedial classes, consistently performed less well than those who did not take remedial classes.

While average prose, document and quantitative literacy were similar across most academic majors in four-year colleges, math, science and engineering students scored higher on all three literacy scales. And among students in four-year colleges and universities, document literacy scores were 20 points higher for students who indicated a "high" degree of analytic emphasis in their coursework compared with their peers in classes with a "low" emphasis.

What do these new data tell us?

Earlier in my career I was frequently responsible for releasing statistics from the National Assessment of Educational Progress. Helping reporters, the public and policymakers interpret test results is challenging. The contents of assessments are usually not well known by the public or policymakers, or even by educators in the field. So reported scores often seem to be just random numbers, although presumably a higher number is better than a lower one. But are America's students "measuring up?" Measuring up to what?

I'll explore four complementary perspectives from which to address these questions. Together they offer a composite view of what the AIR results mean.

  • A National Research Council Perspective
    In preparing to release results from the 2003 assessments, NCES asked the National Research Council (NRC) to recommend a set of performance levels that could be used in reporting the 2003 results and that could also be applied to the similar 1992 national literacy assessment results, in order to make comparisons across years. The NRC developed descriptions of performance levels intended to correspond to "policy-relevant" categories of adults. It convened panels of experts to determine where, along the scale, each level would end and the next begin—a "bookmark" process.

    NCES defines literacy as the ability to use printed and written information to function in society, to achieve one's goals, and to develop one's knowledge and potential. The NRC elaborated on this around the prose, document and quantitative tasks in the assessment, and created the following descriptions of performance levels:

    "Below basic" literacy is the lowest level, extending from nonliteracy in English to the ability to locate easily identifiable information, follow written instructions, and perform simple quantitative operations in concrete and familiar situations.

    "Basic" literacy indicates an ability to understand information in short, commonplace prose and documents, locate easily identifiable quantitative information and solve simple problems.

    "Intermediate" literacy indicates an ability to understand moderately dense and less commonplace prose, summarize and make inferences, and solve problems when the arithmetic operation is not specified. Most college students performed at this level.

    "Proficient" literacy indicates an ability to read lengthy, complex abstract prose, make complex inferences, integrate and analyze multiple pieces of document information, and use quantitative information to solve multi-step problems not easily inferred.

    While the description of "proficient" literacy seems to be a closer characterization of a student who is doing college-level work, the proportion of students performing at this level seems inconsistent with that assumption. Only 23 percent of two-year students, and 38 percent of four-year students, were found to be performing at this level in prose literacy. The numbers were even lower in quantitative literacy: 18 percent of two-year students, and 34 percent of four-year students.

    By comparison, fully 65 percent of two-year students, and 56 percent of four-year students, were found to be performing at the "intermediate" level in prose literacy.

  • A Media Perspective
    Through the wonders of Google, dozens of news articles and media broadcasts reporting on the AIR study have come to my attention. Perhaps a third originated from a single AP story. From it, listeners and readers learned that the assessment concerned literacy for real-life, everyday skills, and that large percentages of graduating college students could not interpret a table about exercise and blood pressure, understand the arguments of newspaper editorials, compare credit card offers with different interest rates and annual fees, or summarize the results of a survey about parental involvement in school.

    This became fodder for numerous editorials and commentaries expressing alarm and disgust: The findings are appalling; with all the money spent on college, this shouldn't happen; there is a long way to go; these skills should be mastered by the fifth grade; action must be taken to reverse this trend; the national implications are dire.

    Finding fault was a common thread: Parents and high schools are not adequately preparing students for college; we don't value education the way we once did; academia fails by refusing to set meaningful standards for entry.

    And several commentators tried to wave the results away: It is not a university's role to teach students how to read a map; these skills don't correspond to a particular college course; modern electronics supplant the need to figure tips, map routes, and calculate fuel economy; and the survey only dealt with students' on-the-spot abilities, not their potential learning skills.

    Depending on the location and source, the public certainly heard, saw or read a number of viewpoints about the significance of the AIR study. Compared with many reports on the National Assessment of Educational Progress, all the reporting provided superior information about the purpose of the assessment and the nature of tasks the test takers were asked to perform.

  • A "College-Learning" Assessment Perspective
    The AIR study is an assessment of literacy, and it does not address general intellectual skills that college students are expected to master. Different types of measures would be required to do that. Last fall, the National Center for Public Policy and Higher Education (which also publishes National CrossTalk) released "Measuring Up on College-Level Learning," a report by Margaret Miller and Peter Ewell describing the results of a four-year demonstration project that measured cumulative college-level learning at the state level. Five states agreed to participate in this effort, which was designed to create measures of "educational capital" in a state and evaluate how higher education systems are performing in relation to state goals.

    The demonstration project was built around three key components. First, information was drawn from existing tests for licensure (e.g., nursing, physical therapy or teaching) and graduate admissions (GRE, MCAT) that many college students take on graduation. These served as indicators of readiness for advanced training or practice. Second, the 1992 National Adult Literacy Survey results were used as an indicator of literacy levels for the adult population of the entire state. And third, tests measuring general intellectual skills were administered, including the ACT WorkKeys assessments at two-year institutions, and the RAND Collegiate Learning Assessment (CLA) at four-year institutions.

    WorkKeys assessments examine what students are able to do with what they know. For example, students might be asked to extract information from documents and instructions, or to prepare an original essay for a business context. The CLA is an attempt to create a college-level assessment of problem solving, critical thinking and communication skills of baccalaureate completers, although it does not measure expertise in an academic content major. Test takers might be asked to draw scientific conclusions from a body of evidence in biology, or to examine historical conclusions based on original documents. A written essay is a part of the assessment.

    As noted, a literacy measure was included in the College-Level Learning project, but only as one indicator of the education resources achieved by a state's adult population, not of what is currently being learned by graduating college students. The AIR study does not incorporate anything like the CLA or WorkKeys, nor does it ask for a written essay, so some might think it can easily be dismissed as irrelevant for evaluating college learning.

  • A Test-Item Perspective
    I believe that such a dismissal of the AIR study would be ill-considered. In the last of these four perspectives on the findings, I want to look more closely at some of the items on the test. The first example requires the test taker to read a bulletin of some 450 words, titled, "Too many black adults die from the effects of high blood pressure." The test question posed is, "According to the brochure, why is it difficult for people to know if they have high blood pressure?" The answer must be written, and basic responses such as "symptoms are not usually present" or "high blood pressure is silent"—both mentioned in the article—are acceptable. More than 95 percent of all college graduates answered correctly, compared with just 74 percent of all adults.

    A more demanding reading sample, in the form of a newspaper article printed in three columns, contains about 650 words. In this case only 27 percent of graduating four-year students, and 24 percent of two-year students, were able to locate a specific piece of information in the article, compared with 16 percent of all adults. While this was among the more difficult items on the assessment, the task should not be at all uncommon for any student who is doing college-level work, or even discussing an article with friends.

    A document literacy question displays survey data indicating the percentage of parents and teachers at elementary, junior high and high school levels who agreed with such statements as, "Our school does a good job of encouraging parental involvement in educational areas." The respondent was asked, "Seventy-eight percent of what specific group agree that their school does a good job of encouraging parental involvement in educational areas?" They needed to find "78" along the "teacher" row, in the "junior high" column. Seventy-four percent of four-year college students and 65 percent of two-year college students answered correctly, compared with only 36 percent of all adults.

    To my thinking, these examples are the most compelling evidence that college performance is alarmingly poor. It is true that the examples are not what is taught in college literature or history, biology or mathematics. But the tasks that respondents are asked to perform should be commonplace for college students. An English reading assignment, a history time chart, an accounting data table, observations of events for a physics lab, calculations in any mathematics class: all are opportunities to practice and perfect the skills called for in the AIR assessment.

    Seen in that light, these results represent a shameful and indefensible performance for graduating college students.

    To sum up, although the AIR study of graduating college student literacy is not an assessment of general education skills that students gain from college experiences, it is not easily dismissed. The skills it asks test takers to demonstrate are practical everyday capabilities, and all college students should be well versed in them. The media have accurately portrayed results from the study, raising questions, and—in the nuances of opinion pieces—providing an ample range of views about their meaning. Most view the results of the study with alarm.

    Finally, the specific tasks on the assessment are just not all that difficult. As a nation we would expect that nearly all college students should be able to perform in the upper levels of this assessment. That they did not—wherever the cause lies—is a disgrace.

    Emerson J. Elliott is a retired U.S. Commissioner of Education Statistics. He served on the advisory panels for the AIR report, and for "Measuring Up on College-Level Learning."

    National CrossTalk, Spring 2006

    © 2006 The National Center for Public Policy and Higher Education