The Challenges Presented by Measuring Up 2000
Crunching the Numbers: Looking Behind the Grades
Presentation of the "Best" Models for Each Performance Measure
Discussion of Model Results
Policy Conclusions and Suggestions for Future Research
Appendix I: Preliminary Measures Organized into Clusters of Influence
Appendix II: Final Variables Used in Analysis
About the Authors
About the National Center for Public Policy and Higher Education



The Challenges Presented by Measuring Up 2000

What's in the Report Card?

Measuring Up 2000 grades each of the 50 states on an A to F scale (based on a numerical index) in five statewide performance categories: preparation, participation, affordability, completion, and benefits. Measures for several sub-components of performance are embedded within each grade, each weighted by the National Center to reflect its judgment about the importance of the measure in relation to performance. A brief summary of the performance categories, the variables within them, and how they are weighted, is shown in Table 1. The grades, based on benchmarks or standards achieved by the highest performers, were assigned according to states' performance in comparison to one another.

Moving from Critique to Analysis, Diagnosis, and Engagement

Measuring Up 2000 is designed as a diagnostic tool for state policymakers, but it is incomplete. Because the report card focuses on aggregate state performance (rather than on institutions or sectors), it reorients the policy discussion from the institutional to the state level. As a result, state decision makers, using more detailed data and informed by their own understanding of the state's approach to higher education, will need to look behind the grades in the report card.

The State as Unit of Analysis

Measuring Up 2000 is designed to focus on aggregate, systemic performance for the entire K-16 continuum, and does not differentiate between K-12 and postsecondary education, or between sectors of postsecondary education.2 Aggregate statewide data trouble many people because such data gloss over important differences (programs, student admissions criteria, faculty, mission, funding) that are presumed to determine performance at the institutional level. Higher education analysts understand higher education in institutional and sector terms, rather than in state terms. They also prefer more subjective assessments of quality to objective, data-driven measures, because of the widely held view that quality is best understood in the context of the individual institution and its mission. State-level aggregation presents another challenge: the states differ widely in size, location, and access to natural and other resources. Many of the influences on state performance are geographic, economic, and human factors that originate in other states (or countries). This is a particular challenge in New England and the other northeastern and mid-Atlantic states, which are often very small geographically, and in the southern states, which share a strong regional history.

The Focus on State Policy

One of the most important challenges to come from Measuring Up 2000 is that it forces the conversation about higher education performance to a state policy level. States remain the most important decision makers about higher education in the United States. States make decisions about how to design and structure the "system" of higher education, including the relative role of community colleges, comprehensive institutions, public research universities, and private, not-for-profit institutions. Latent in these design formulations are decisions about policy priorities for higher education, as these are manifested in factors such as admissions selectivity and doctoral education. They also reflect decisions about governance. States decide who will sit on public institutional governing boards, and whether the boards should be system- or campus-based. They decide whether private, not-for-profit institutions will play an explicit role in meeting state policy goals.

Yet despite the importance of state decision making, the relative role and effectiveness of state policy in higher education performance have not been the focus of higher education research for some time. Many states have reorganized their statewide governance structures in the past decade, moving away from regulated, centralized bureaus toward more market-based strategies. One consequence of this change in governance has been a weakening of state planning and policy capacity for higher education. As a result, in many states decision makers are not well positioned to return to analytical, data-based conversations about aggregate postsecondary performance. This situation holds only in higher education, however; in K-12, the last decade has generally seen a strengthening of central state policy and planning capacities.


2 The measures in the report card are limited for the most part to traditional, "collegiate" institutions of higher education. Some of the measures in Measuring Up (e.g., participation, affordability, and completion) do not include data from the proprietary sector of higher education, because comparable state-level data are not available. It is hard to know how these exclusions influence aggregate performance; presumably some states would "do better," and others worse.


© 2000 The National Center for Public Policy and Higher Education

