
Presentation of the "Best" Models for Each Performance Measure

Below, we discuss the "best" predictive models, that is, those using the combination of variables that produced the highest R-squared, which can be interpreted as "explaining" the most variance in the performance measure. The "best" models are shown in Tables 5 through 9 (see note 4). It is important to keep in mind, however, that models using other combinations of variables may have been quite close to the "best" model in terms of their predictive ability; these cases are discussed in the text where they are relevant to a complete understanding of the results.
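As an illustration of this selection procedure, the sketch below (in Python, using the statsmodels library) fits an ordinary least squares regression of a grade on every combination of candidate variables and keeps the combination with the highest R-squared. The data frame and column names are hypothetical placeholders, not the study's actual variables or data.

    # Sketch of the "best model" search: fit an OLS regression of a grade on
    # every combination of candidate variables and keep the combination with
    # the highest R-squared. Column names are hypothetical placeholders.
    from itertools import combinations

    import pandas as pd
    import statsmodels.api as sm

    def best_model(df: pd.DataFrame, grade: str, candidates: list):
        """Return the variable subset and fitted model with the highest R-squared."""
        best_vars, best_fit = None, None
        for k in range(1, len(candidates) + 1):
            for subset in combinations(candidates, k):
                X = sm.add_constant(df[list(subset)])
                fit = sm.OLS(df[grade], X).fit()
                if best_fit is None or fit.rsquared > best_fit.rsquared:
                    best_vars, best_fit = list(subset), fit
        return best_vars, best_fit

    # Example usage (hypothetical column names):
    # best_vars, fit = best_model(states, "preparation_grade",
    #                             ["per_capita_income", "pct_minority", "gini_ratio"])
    # print(best_vars, round(fit.rsquared, 2))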

According to the results for the preparation model (Table 5), 54 percent of the variation in the preparation grades (the dependent variable) is accounted for by the combination of three economic/demographic variables: per capita personal income, the percentage of minority students, and the index of income inequality in the state (the GINI ratio). Both the percentage of minority students and the GINI ratio have negative correlations with performance, meaning that higher levels of either are associated with lower grades. Although the measure of total public elementary and secondary expenditures per student had a high bivariate correlation with preparation, it was "stepped out" of the regression as not significant. Again, these tests do not measure the relative influence of variables such as academic preparation for college, since such measures are already embedded in the grade; the statistical test measures only the association of these external factors with the performance captured in the grades.
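The "stepping out" of a non-significant variable can be illustrated with a simple backward-elimination sketch: refit the model after dropping the least significant predictor until every remaining variable clears a significance threshold. The 0.05 threshold and the column names below are assumptions for illustration, not taken from the study.

    # Sketch of "stepping out" a non-significant predictor via backward
    # elimination: repeatedly drop the variable with the largest p-value
    # until all remaining p-values fall below the chosen threshold.
    import statsmodels.api as sm

    def backward_eliminate(df, grade, predictors, alpha=0.05):
        kept = list(predictors)
        while kept:
            fit = sm.OLS(df[grade], sm.add_constant(df[kept])).fit()
            pvalues = fit.pvalues.drop("const")
            worst = pvalues.idxmax()
            if pvalues[worst] < alpha:
                return kept, fit          # every remaining variable is significant
            kept.remove(worst)            # "step out" the non-significant variable
        return kept, None                 # no variable survived elimination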

The participation results (Table 6, following page) show that the model explains only 37 percent of the variation in the grades, the weakest "predictor" model for any graded area. Unlike the preparation grade, a combination of economic/demographic, design, and funding variables is associated with the dependent variable. It should be noted that, while this participation model had the highest predictive power of all the combinations of variables tried, another model that used only the "per capita personal income" and "less than high school" variables also produced strong positive associations, with slightly lower predictive power. This suggests that the economic/demographic variables predict the most about variations in participation grades.

The results from the affordability model (Table 7) show that both funding and design variables (per capita appropriations and the number of institutions per population) account for about half of the variation in state grades. The other design measure, enrollments in private not-for-profit four-year institutions, was stepped out of the equation despite its high positive bivariate correlation with affordability. Although the tuition and financial aid variables were not significant in the "best" model, relatively high bivariate correlations among the funding variables suggest that per capita appropriations may stand in for a broader pattern of funding characteristics.

The completion model (Table 8) shows that a combination of funding, design, and economic/demographic factors explains close to 64 percent of the variation in grades, the highest value for any of the models. Higher average tuition levels are positively associated with completion and have the strongest relationship in the model, whereas a higher percentage of part-time students is negatively associated with completion. Although the tuition variable produced the "best" predictive model, models using other funding variables, as well as the percentage of students enrolled in private, not-for-profit four-year institutions, were almost as powerful.

The results of the benefits model (Table 9) indicate that slightly over half of the variation among states is attributable to two economic/demographic variables: per capita personal income and the percentage of the population without a high school certificate. None of the design or funding variables associated specifically with postsecondary education was found to be significant in the model for benefits.

 

4 The tables include the following statistics: (1) R-squared, which measures the percentage of the variation in the dependent variable (the grades) accounted for by the independent variables; (2) the regression coefficients, which estimate the average amount the dependent variable changes (increasing or decreasing) with a unit change in each independent variable, after controlling for the other independent variables; (3) the probability for each independent variable, which indicates the likelihood that its relationship with the dependent variable is due to random factors; and (4) the portion of the variation in the dependent variable accounted for by each independent variable, controlling for the other factors (calculated by measuring the extent to which R-squared declines when the variable is deleted from the model).
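For readers who wish to reproduce these statistics, the sketch below computes them for an arbitrary model using the statsmodels library: the R-squared, the coefficients, the probabilities, and each variable's contribution measured as the decline in R-squared when that variable is deleted from the model. The variable names are hypothetical, not the study's actual data.

    # Sketch of the statistics listed in this note: R-squared, regression
    # coefficients, probabilities, and each variable's contribution measured
    # as the decline in R-squared when that variable is deleted.
    import statsmodels.api as sm

    def model_statistics(df, grade, predictors):
        full = sm.OLS(df[grade], sm.add_constant(df[predictors])).fit()
        contribution = {}
        for var in predictors:
            others = [v for v in predictors if v != var]
            if others:
                reduced = sm.OLS(df[grade], sm.add_constant(df[others])).fit()
                reduced_r2 = reduced.rsquared
            else:
                reduced_r2 = 0.0  # an intercept-only model explains no variance
            contribution[var] = full.rsquared - reduced_r2
        return {
            "r_squared": full.rsquared,
            "coefficients": full.params.drop("const"),
            "p_values": full.pvalues.drop("const"),
            "r_squared_contribution": contribution,
        }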
