
Accountability Measures
States rely on new "data systems" to track institutional success and student outcomes

By Kevin Carey

In August 1986, the National Governors' Association published an education manifesto titled "Time for Results." Led by young, reform-minded leaders like Bill Clinton, Lamar Alexander and Tom Kean, the commission that wrote the report made a series of recommendations for K–12 schooling before moving on to the nation's higher education institutions. "States," they concluded, "should insist that colleges assess what students actually learn while in college."

Many improbable and momentous things have happened in the years since. Communism fell, the Internet rose, multiple Bushes became president, and the Red Sox won the World Series—twice! Skinny neckties fell in and out of fashion at least three times. And much of the governors' 1986 agenda for K–12 education eventually became the law of the land.

Yet even as the world was transformed around them, America's core higher education institutions remained largely the same, with one glaring exception: They became a whole lot more expensive. Although undergraduates' inflation-adjusted out-of-pocket costs have more than doubled since 1986, we still know very little about what they actually learn in exchange. Prodded by the federal government, states have taken up the challenge of holding K–12 schools accountable for teaching students well. Public universities, by contrast, have been largely left alone. The biggest policy question facing higher education today is whether that will change.

States haven't ignored higher education entirely. Over the last two decades, most have created something that they call an "accountability system." While no two are exactly alike, the systems tend to gather certain basic data elements—enrollment, tuition, degree production and (since these numbers have to be reported to the federal government anyway) graduation rates. They tend to compile those numbers into reports that are then converted to a handy computer format and posted on a website of some kind. And the process tends to stop there, allowing colleges and universities to go about their business as before.

What state accountability systems don't tend to do is the one and only thing that accountability systems are ultimately for: improve colleges and universities on behalf of students and the public at large. This we know because affordability is declining, graduation rates are stagnant, and the few indicators of college student learning we can find, such as the National Assessment of Adult Literacy, are so terrifying (most college graduates aren't proficient on a test of prose literacy) that we all but pretend they don't exist.

Fortunately, there are reasons to be optimistic. While most states are not gathering useful information about teaching and learning in higher education, some are. While most states do not use the information they gather to create real incentives for institutional change, some do. If every state did nothing but adopt the best practices that already exist elsewhere, higher education accountability in America would be greatly improved. This possibility is further enhanced by the single biggest difference between 1986 and the present: the explosion in availability of inexpensive, timely and reliable information.

In Texas, for example, the Collegiate Learning Assessment—a test of higher-order thinking and communications skills developed by a subsidiary of the RAND Corporation—is being administered at all University of Texas campuses. What's more, they are publishing the results for all to see. It turns out that when results for freshmen and seniors are compared, the biggest gains aren't being realized at the flagship research university in Austin but at regional campuses like UT-Pan American.

Other states, like Vermont and Kentucky, are using student surveys—the National Survey of Student Engagement for four-year institutions, and the Community College Survey of Student Engagement for two-year institutions are the most popular—to better understand whether colleges are providing an academic environment that helps students learn. The State University of New York system recently published the percent of students attending different classes of institution—doctoral, comprehensive, community colleges, etc.—who are meeting academic standards in various disciplines, such as "application of scientific data, concepts and models in one of the natural sciences." Measuring learning is complicated, but it can be done.

States have also developed powerful new tools to track students into the workforce after college. In 1986, this would have involved expensive phone and mail surveys, as well as laboriously tracking students who move from place to place, a logistically daunting task (unless, apparently, you're in charge of alumni fundraising). Today a growing number of states have linked up centralized education databases that hold millions of individual student records with similarly large employment data systems created by state departments of labor.
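In essence, these systems join two record sets on a shared student identifier. The sketch below illustrates the idea with invented records and field names (`id`, `annual_wage`, and so on are assumptions, not the schema any state actually uses):

```python
# Hypothetical sketch of linking student records to state wage records
# by a shared identifier. All data and field names are invented.

students = [
    {"id": "a1", "institution": "State U", "degree": "Business", "grad_year": 2007},
    {"id": "b2", "institution": "State U", "degree": "Nursing", "grad_year": 2007},
    {"id": "c3", "institution": "Metro CC", "degree": "Welding", "grad_year": 2007},
]

wages = [
    {"id": "a1", "year": 2008, "annual_wage": 41000},
    {"id": "c3", "year": 2008, "annual_wage": 38000},
    # "b2" has no wage record: moved out of state, or not employed.
]

# Index wage records by identifier, then link each graduate to a wage record.
wage_by_id = {w["id"]: w for w in wages}

linked = []
for s in students:
    w = wage_by_id.get(s["id"])
    linked.append({**s,
                   "employed_in_state": w is not None,
                   "annual_wage": w["annual_wage"] if w else None})

employment_rate = sum(r["employed_in_state"] for r in linked) / len(linked)
print(f"In-state employment rate: {employment_rate:.0%}")
```

One real-world caveat the sketch glosses over: graduates missing from the wage file may be unemployed, self-employed, or simply working in another state, which is why linked-database employment rates are usually reported with that limitation noted.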

Florida publishes employment rates and the percentage of recent graduates earning various amounts of money, for every public university in the state, broken down by field of study. The highest earners? Not the best and the brightest emerging from the flagships, but rather undergraduates who majored in business at Florida International University. The Illinois Community College System compares real student earnings before and after completing a degree, calculated as earnings gains per credit hour completed. The 82 students who completed an education degree at an Illinois community college in 2005 gained $36.96 per credit completed. The 1,686 students who completed a "protective services" degree saw their earnings rise by $388.91 per credit. Most students go to college in order to launch a successful career. With new state data systems, we can find out if they succeed.
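The Illinois metric reduces to simple arithmetic: the change in earnings divided by credit hours completed. A minimal sketch, using invented figures (only the formula mirrors the Illinois approach):

```python
# The "earnings gain per credit hour" idea:
#   (earnings after completion - earnings before) / credit hours completed
# The dollar figures below are hypothetical.

def gain_per_credit(earnings_before, earnings_after, credits):
    """Average earnings gain per credit hour completed."""
    return (earnings_after - earnings_before) / credits

# A hypothetical graduate earning $24,000 before and $31,200 after
# completing a 60-credit associate degree:
print(f"${gain_per_credit(24000, 31200, 60):.2f} per credit")  # $120.00 per credit
```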

States are using the same sophisticated data systems to create better measures of student attainment. Federal graduation rate measures are limited to full-time students, and do not give colleges credit for people who transfer or who take more than six years to graduate. Indiana publishes graduation rates for part-time students at the state's regional universities, and extends the time frame to ten years. Minnesota tracks student persistence by income level, another important factor not included in the federal data. Among students who entered a two-year institution and reported family income below $30,000 in 2002, only 54 percent returned the next year. Low graduation rates are a huge problem in higher education. The better we understand the complex patterns of student persistence, the better we can help more students to earn degrees.
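Computing persistence by income band is straightforward once the linked records exist. A sketch in the spirit of the Minnesota figures above, with invented records:

```python
# Persistence by income band: of students who entered in a given year,
# what share returned the next year? Records below are hypothetical.

cohort = [
    {"family_income": 22000, "returned_year_two": True},
    {"family_income": 28000, "returned_year_two": False},
    {"family_income": 55000, "returned_year_two": True},
    {"family_income": 61000, "returned_year_two": True},
]

def persistence_rate(records, lo, hi):
    """Share of students in the income band [lo, hi) who returned."""
    band = [r for r in records if lo <= r["family_income"] < hi]
    return sum(r["returned_year_two"] for r in band) / len(band)

print(f"Under $30,000: {persistence_rate(cohort, 0, 30000):.0%}")
print(f"$30,000 and up: {persistence_rate(cohort, 30000, 10**9):.0%}")
```

The same band-and-rate pattern extends to any subgroup a state cares to track: part-time status, age, transfer history.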

These are just a few of the new measures that states have developed in the last twenty years. Scholarship, service, access, efficiency—all of these important factors and more are being tracked by some state, somewhere. But this wealth of new data is worth little if states do not use it to change the incentives that influence institutional behavior. Making information available isn't enough—states have to make it meaningful. Again, a few states are leading the way.

Some have integrated accountability measures into their systems of governance. The Maryland Higher Education Commission uses a peer-based evaluation system in which each institution selects a set of performance measures appropriate to its academic mission and is compared to a unique set of institutional peers. The Texas Higher Education Coordinating Board also uses peer comparisons for measures that are then rolled up into statewide totals and compared to state strategic goals. The latest reports show that Texas is falling short of its goals for increased Hispanic student enrollment by 2015, but is ahead of pace for doctoral degree production. Putting institutional accountability into a larger state context is key.

Other states have linked performance with money—always a controversial step, particularly in these tight fiscal times. But it is certainly possible. Tennessee's performance funding system has been in place for 30 years, with 5.45 percent of current funding based on measures like employee surveys, student engagement, faculty productivity and student learning. Colorado's "performance contracts" are another example, as are systems in Virginia, Oregon and elsewhere. Accountability systems only work if performance is tied to something decision-makers care about, and they all care about funding.

Accountability information can also be influential without being formally tied to governance and funding, but only when the information is communicated in a targeted, accessible way. Most students and parents—or, for that matter, state legislators—are not going to spend hours poring over tables of numbers printed in small type. The Minnesota State Colleges and Universities system recently launched an "Accountability Dashboard" that displays institutional performance data in the form of multiple automobile speedometer-style dials, with a "red zone" indicating poor performance. Kentucky's accountability reports are similarly straightforward and clean, with up and down arrows that indicate change. Magazines like U.S. News & World Report have become enormously influential by translating higher education data into terms the public can understand. To match (and counter) that influence, states will need to do the same.

The current economic crisis will undoubtedly create a great deal of fiscal pain for higher education. Some might say that the added pressure of accountability is the last thing colleges and universities need. Actually, the opposite is true. The governors' 1986 call for assessment and accountability fell on mostly deaf ears. Since then, we have seen states slowly but steadily pull back from their financial commitment to higher education. And commitment is what accountability ultimately represents—a shared responsibility for fulfilling the vital purposes of higher education, for meeting the needs of students and the public, for building and sustaining institutions that play a crucial role in American lives.

Colleges that actively participate in fair, accurate, multi-dimensional accountability systems will ultimately be in a far stronger position to claim the public support they need and deserve. Accountability isn't easy, but it benefits everyone in the end.

Kevin Carey is the research and policy manager of Education Sector, an independent think tank in Washington, D.C.

National CrossTalk March 2009



© 2009 The National Center for Public Policy and Higher Education
