Discussion of Model Results

These findings support the notion that there are different kinds of "drivers" for the different report card performance areas. Economic/demographic variables have the strongest associations with the measures of preparation and benefits, which are also highly correlated with each other. To a lesser extent, economic/demographic variables are also associated with performance in participation and completion. System design and funding variables play a more important role for participation, completion, and affordability. Surprisingly, economic/demographic variables do not account for the state-level variations in the affordability measure.

Unraveling the relative influences among the design and funding measures is difficult because there is so much intercorrelation among them, suggesting that many of these factors tend to occur together. For the purposes of this analysis, we could choose only one or two variables from this set for each equation, so it is important to keep in mind that a variable may be symbolic of a whole "system" of characteristics rather than important in itself. It is interesting that the different funding measures (including measures of subsidy patterns, pricing, and state aid) are all close to one another in a statistical sense. This may indicate that the total availability of resources is more important than the means of delivering subsidies (whether through students or institutions). This preliminary result is worth further exploration.
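
This kind of intercorrelation can be checked directly. The sketch below is illustrative only; the report does not publish its code, and the file and column names are hypothetical stand-ins for the funding and design measures. It computes the pairwise correlations that would force an analyst to pick just one or two variables per equation:

    import pandas as pd

    # One row per state; the columns are hypothetical funding/design measures.
    states = pd.read_csv("state_measures.csv")
    funding_design = states[["appropriations_per_fte", "tuition_share",
                             "state_aid_per_student"]]

    # Pairwise Pearson correlations: values near +1 or -1 mean the measures
    # move together, so only one or two can safely enter each regression.
    print(funding_design.corr())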

The statistical tests may reveal as much through what they do not show as through their "positive" results. For all of the models except completion, close to half of the variation between states appears to be driven by factors other than economic/demographic characteristics, funding, or design. This means that something not captured by these simple measures is responsible for roughly half of the measured performance differences between the states. Intangibles such as leadership, history, and governance seem to play an important role, but are difficult to measure. The percentage of minorities in the population is another "non-result": that measure is weakly negatively correlated with preparation and completion, but is not a major predictor of performance for any measure. In addition, although average tuition and fees (a measure that includes both public and private institutions) and financial aid (a measure that includes institutional aid) are positively correlated with participation, both were "stepped out" of the regression equations; when all other factors are taken into account, these are no longer predictors of performance. This suggests that while tuition and aid levels may be important at individual institutions, they are not as important in influencing college participation at the aggregate statewide level as other economic/demographic, funding, and design variables.
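
The "stepping out" pattern is easy to reproduce on synthetic data. In the sketch below (hypothetical variables, not the report's actual data or code), a tuition-like measure correlates with participation on its own but loses significance once an income-like driver enters the model, because the two predictors share the same underlying signal:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    income = rng.normal(size=50)                       # stand-in economic driver
    tuition = income + rng.normal(scale=0.5, size=50)  # tracks income closely
    participation = income + rng.normal(scale=0.5, size=50)

    # Alone, tuition looks like a significant predictor of participation...
    solo = sm.OLS(participation, sm.add_constant(tuition)).fit()
    print(solo.pvalues)

    # ...but once income is controlled, tuition's coefficient is no longer
    # significant: income carries the shared signal, and tuition "steps out."
    both = sm.OLS(participation,
                  sm.add_constant(np.column_stack([income, tuition]))).fit()
    print(both.pvalues)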

The regression results for the completion variables are a good example of another limitation of statistical data: the interrelationship of variables does not show the direction of causality for any measure. The regression shows that the three factors associated with completion are average tuition and fees (positively associated with completion), percentage of part-time students (negatively associated), and per capita personal income. This probably tells us little more than what is already commonly known in higher education: students who attend higher-priced institutions on a full-time basis are more likely to complete their college degrees. Whether this is due to higher tuition, or because those students are more academically prepared and motivated to finish college, cannot be determined with these data.

Predicted versus Actual Values

One other statistical comparison was made with these data: the values of the performance measures predicted by the "best" models were set against the actual values calculated by the National Center (see Table 10). When reading this table, it is necessary to recall that the indexed scores, not the grades, were used in the models, which is why even a state with an F or an A can still show up as receiving a higher or lower grade than would have been predicted. When the actual values are higher than the predicted values, it suggests that the state may be using policy levers not captured in these models, but which affect its performance. Future research may want to look at these "better-than-expected" states in more detail to see if there is evidence of such policy interventions.
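
Mechanically, this comparison amounts to examining the residuals of each fitted model. The sketch below is a minimal illustration, assuming a hypothetical data file and column names (it is not the National Center's code): it fits a completion model on the three factors named above, predicts each state's indexed score, and ranks states by how far their actual score exceeds the prediction:

    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("state_scores.csv")  # hypothetical: one row per state
    X = sm.add_constant(df[["avg_tuition_fees", "pct_part_time",
                            "per_capita_income"]])
    model = sm.OLS(df["completion_index"], X).fit()

    # Positive residuals mark states doing better than the model predicts,
    # hinting at policy levers the measured variables do not capture.
    df["residual"] = df["completion_index"] - model.predict(X)
    print(df.sort_values("residual", ascending=False)[["state", "residual"]])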

For example, for the performance measures most associated with economic/demographic factors (preparation, participation, and benefits), which may be postulated as largely outside the state's control, at least in the short term, several states (in particular, Nevada, Oregon, and Wyoming) have higher values than were predicted, suggesting that policy decisions may be counteracting or improving upon economic/demographic circumstances. In the cases of affordability and completion, states such as Arkansas that do better than predicted by the funding and/or design variables may be using interventions not captured in these models.



