
Policy Conclusions and Suggestions for Future Research

Statistical analysis is just one of many avenues of inquiry that can yield insights into the utility and meaning of Measuring Up 2000. These limited results must be supplemented with other kinds of inquiry to guide state and institutional policymakers toward more fundamental conclusions about the basis for their different grades. Nonetheless, the results presented here suggest several conclusions about the nature of Measuring Up 2000 and the face and content validity of the grades.

1. Demography is not destiny. While demographic and environmental factors are among the strongest predictors of state-level higher education performance, they explain only around half of the variation among the states in the preparation and benefits measures, and even less in the other graded areas. This is a potentially important finding about the extent to which state-level performance is open to change at the hands of policymakers: demographic influences are powerful, but a good half of performance is associated with individual and institutional influences that can be changed through decisions about system design and finance. The "expected-predicted" analysis shows that several of the poorer states (Arkansas, Alabama, South Carolina) do better on several of the measures than their state characteristics alone would predict (a sketch of this kind of residual analysis follows this list). Understanding what policies these states have in place to influence their grades is a good starting place for further research into the factors that shape performance.

2. Aiming policy to improve performance. The analyses suggest that the graded areas in Measuring Up 2000 differ substantially in whether performance is driven primarily by environmental/demographic factors, by funding, or by system design. The performance measures selected for the report card point to economic and demographic characteristics as most strongly related to performance in the graded areas of preparation and benefits. If this result is "correct," it suggests that policies designed to improve performance in these areas are most likely to succeed if they connect education with other aspects of social and economic policy. Affordability similarly requires attention to both structure and funding. The results for participation and completion show that all three mega-drivers (economic, design, and funding) combine to influence performance, which suggests that a combination of policy strategies (for example, attention to academic preparation, diverse institutional funding options, and tuition and financial aid) must be used to improve statewide college participation and completion. Addressing these areas solely through any one lever (changing institutional missions or structures, or funding alone) will miss the connections among the influences.

3. No state is an island. The inherent difficulty of using the state as the unit of analysis is particularly evident in the "benefits" measure, where the accountability relationship between the measure and the performance of higher education institutions within the state is weak. The data suggest that economic and demographic variables are the most important influences on the "benefits" of higher education. Yet a state's geographic location, and its proximity to other states and to regional centers of commerce, can draw educated citizens to the state regardless of what its own institutions do. State higher education policy is only tangentially related to these broad-brush benefits. This is an area in which the National Center should continue to invest, in order to find better measures of the direct relationship between higher education performance and civic, social, and economic benefits.

4. The problem of statewide measures in relation to policy interventions. Statistical techniques can only point to gross performance indicators; they do not reveal much about the deeper influences on performance. With only 50 state-level observations, Measuring Up 2000 also severely limits the number of variables that can be included in any equation (the second sketch below illustrates this degrees-of-freedom constraint). The data presented in this analysis suggest that characteristics not measurable in statewide data account for about one-half of the variation in most of the performance measures. A better understanding of the details behind the data would require delving beneath the state aggregates into institution-level measures of performance. Institutional and sector data, and more detailed regional data, could reveal additional influences on performance, such as staffing ratios within institutions, student admissions policies, or institutional governance structures. However, comparable institutional data that allow this kind of disaggregation do not exist, and although such research would be interesting, it would be very time consuming and would distract attention from a focus on state policy.
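
To make the "expected-predicted" analysis in point 1 concrete, the sketch below fits a state-level regression and ranks states by their residuals (actual performance minus predicted performance). It is a minimal illustration only: the report does not publish its model code, and the file name, column names, and predictors here are hypothetical stand-ins for the variables actually used.

    # Minimal sketch of an "expected-predicted" residual analysis.
    # All file and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm

    # One row per state: a graded measure plus demographic/economic drivers.
    states = pd.read_csv("state_measures.csv")

    # Regress the "preparation" score on demographic/environmental factors.
    X = sm.add_constant(states[["per_capita_income", "pct_adults_with_ba", "poverty_rate"]])
    fit = sm.OLS(states["preparation_score"], X).fit()
    print(f"R-squared: {fit.rsquared:.2f}")  # roughly 0.5, per point 1 above

    # Residual = actual minus predicted; large positive residuals flag states
    # performing better than their demographics alone would predict.
    states["residual"] = fit.resid
    print(states.sort_values("residual", ascending=False)[["state", "residual"]].head())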
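
The 50-observation constraint in point 4 can be illustrated with the adjusted R-squared, which discounts raw fit by the degrees of freedom consumed: adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - k - 1) for n observations and k predictors. The numbers below are illustrative, not taken from the report.

    # Why n = 50 states caps the number of predictors k: the raw R-squared
    # is held fixed at 0.5 (an illustrative value) to isolate the
    # degrees-of-freedom penalty.
    def adjusted_r2(r2: float, n: int, k: int) -> float:
        """Adjusted R-squared for n observations and k predictors."""
        return 1 - (1 - r2) * (n - 1) / (n - k - 1)

    n = 50  # one observation per state
    for k in (3, 10, 20, 40):
        print(f"k={k:2d} predictors: adjusted R-squared = {adjusted_r2(0.5, n, k):.2f}")

    # k = 40 leaves only n - k - 1 = 9 residual degrees of freedom, and the
    # adjusted R-squared turns negative: the model is fitting noise.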

Future Directions

The results also suggest possible directions for future policy research. The statistical models presented in this paper are only one way to begin to understand the basis for state performance in Measuring Up 2000, and they are probably not the best way to present information digestibly to policymakers. Nonetheless, this type of analysis can suggest avenues for further research that are more relevant to state decision-makers. For instance, the statistical technique described in this paper could be applied to the sub-components of the five graded areas, with results that are more likely to have traction for state policy audiences. As an example, measures of public college pricing and need-based financial aid are buried within the "affordability" measure; it would be instructive to see whether performance in these areas is linked to the drivers used in this analysis (sketched briefly below). The statistical analysis could also serve as a point of departure for qualitative research on the reasons for differences between states. This type of inquiry could reveal the individual, organizational, and political reasons that underlie some of the performance differences. The goal of such analyses would be to peel back the layers of the core question behind Measuring Up 2000: how best to measure and improve state performance in higher education.
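
As a sketch of the sub-component idea, the same state-level regression could simply be re-run with each sub-measure of "affordability" as the outcome. Again, the file name, column names, and drivers below are hypothetical placeholders for the report's actual data.

    # Sketch: re-run the state-level regression on sub-components of
    # "affordability". All names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.api as sm

    states = pd.read_csv("state_measures.csv")
    drivers = ["per_capita_income", "pct_adults_with_ba", "appropriations_per_student"]

    for outcome in ("tuition_as_share_of_family_income", "need_based_aid_per_student"):
        X = sm.add_constant(states[drivers])
        fit = sm.OLS(states[outcome], X).fit()
        print(f"{outcome}: R-squared = {fit.rsquared:.2f}")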
