Data Analyses

In working to create the conditions for improving public policy in higher education, access to good data is necessary but not sufficient. It is also crucial to transform the data into information that illuminates the dimensions of the problem and that provides insights into possible remedies. Analyzing the data in useful ways depends, of course, on the kinds of data collected, which vary considerably from state to state. There are some common approaches, however, and this section offers suggestions along those lines. Illustrations of the results of many of the suggested analyses, presented in ways that have proven effective with policymakers, are contained in the appendix.

Preparation

Scores in the preparation category of Measuring Up 2000 are based on measures of:

  • High school completion rates (18- to 24-year-olds with a high school credential).
  • Course-taking patterns of 8th-12th graders.
  • Student achievement in math, reading and writing, based on scores on the National Assessment of Educational Progress (NAEP).
  • Numbers of students performing well on ACT/SAT exams.
  • Numbers of scores of 3 or better on Advanced Placement exams.
The first of these, high school completion, is a population statistic for which detailed data can be acquired by geographic area (county), race/ethnicity, and gender. In most instances, presenting the proportions of 18- to 24-year-olds who have completed high school by county is most useful, because it indicates the preparation of the students most likely to attend higher education. Statewide information categorized by race/ethnicity and gender is useful, but it becomes much more useful if it, too, is disaggregated by county. This makes it possible to see, for example, that low high school completion among minorities (or males) is not a statewide phenomenon, but is concentrated in urban (or rural) areas, or in areas with particularly low incomes.
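
Where unit-record data or census extracts permit, this rate can be tabulated directly. Below is a minimal sketch in Python, assuming a hypothetical pandas DataFrame of 18- to 24-year-olds with illustrative column names (county, race_ethnicity, gender, has_hs_credential); the layout is an assumption, not drawn from any particular state data system.

import pandas as pd

def completion_rates(persons: pd.DataFrame, by=("county",)) -> pd.DataFrame:
    # Share of 18- to 24-year-olds holding a high school credential,
    # grouped by the requested dimensions (county alone, or county plus
    # race/ethnicity or gender for further disaggregation).
    rates = persons.groupby(list(by))["has_hs_credential"].mean()
    return rates.rename("completion_rate").reset_index()

# by_county = completion_rates(persons)
# by_county_and_race = completion_rates(persons, by=("county", "race_ethnicity"))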

If data about course-taking patterns are available at all, they are likely to be available by district or school. Data disaggregated by school tend to be too voluminous, so district data are preferred, at least initially. If the district data cannot easily be translated into a format that can be presented as a map, a listing (arranged in ascending or descending order) of the proportions of students enrolled in advanced classes is a starting point. Arraying the data within groupings of district size provides a different perspective, as do groupings by wealth (for instance, expenditures per student) or by composition of the student body (for instance, predominantly minority versus predominantly white). Presentation of information in map form, however, is particularly persuasive to policymakers, largely because they intuitively relate to and understand geographic subdivisions of the state.

Data derived from national assessments often are not extensive enough to allow geographic disaggregation, although differentiations by gender and (sometimes) race/ethnicity are possible. For some examinations administered nationwide, disaggregation of data by zip code may be possible. In addition, as more states move to wide-scale K-12 testing of one form or another, it is becoming possible to identify the characteristics of students who are performing particularly well or particularly poorly. It is in this area that state-specific data tend to be the most useful adjunct to national data. Presentation of data in terms of the percentage of students performing well or poorly (for example, in the top or bottom quartile) by county, by district size, and by wealth is particularly useful.

In presenting data about the proportions of students who do well (top 20%) on ACT/SAT exams, it is useful to display the proportions of students who are test-takers as well, not because this variable explains the scores (the ratings are based on numbers of graduates, not test-takers), but because variations reveal much about the expectations of both students and schools. Displaying the data by district (and in some states, by school) helps to make the case concerning the areas of the state (and sub-populations within those areas) that have the greatest opportunity for improvement.

Finally, data about "passing" AP scores per 1,000 high school juniors and seniors should be augmented by information about enrollments in AP courses per 1,000 juniors and seniors. Again, the data should be compiled and displayed geographically: by district, by size of district, and by other important characteristics. If state data allow, additional information about enrollments in dual-credit courses and the extent to which students leave high school with college credit is also helpful.
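
The per-1,000 figures described above reduce to simple rate calculations once district-level counts are assembled. A minimal sketch, assuming a hypothetical DataFrame of district counts with illustrative column names:

import pandas as pd

def ap_rates_per_1000(districts: pd.DataFrame) -> pd.DataFrame:
    # For each district, compute AP course enrollments and AP scores of 3 or
    # better per 1,000 high school juniors and seniors.
    out = districts.copy()
    scale = 1000 / out["juniors_and_seniors"]
    out["ap_enrollments_per_1000"] = out["ap_enrollments"] * scale
    out["passing_ap_scores_per_1000"] = out["ap_scores_3_or_better"] * scale
    return out.sort_values("passing_ap_scores_per_1000")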

However the data are displayed, the objective is to identify the schools and counties (and the students therein) whose performance on key measures of preparation provides the greatest opportunity for improvement in the overall state score. The choice of the measures themselves provides policy guidance: students will not succeed in challenging courses if they do not have access to such courses (or if expectations are low and they are discouraged from enrolling).

Participation

Scores in the participation category of Measuring Up 2000 are based on measures of:

  • Enrollment of recent high school graduates in college-level programs.
  • Enrollment of working-age adults (ages 25-44) in education and training beyond high school.
In many states, the higher education agency compiles data about the county of origin (or high school district of origin) of first-time, full-time college freshmen who are recent high school graduates (for example, see Appendix, Figure 18). These data are typically categorized by the college or university in which the students are enrolled. The basic calculation is:

(number of first-time, full-time freshmen enrolled, by county of origin)
-----------------------------------------------------------------------
(number of high school graduates, by county)
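
A minimal sketch of this ratio, assuming two hypothetical pandas Series indexed by county; the same function serves for the working-age calculation further below, with part-time undergraduates and 25- to 44-year-olds substituted for the numerator and denominator.

import pandas as pd

def participation_rate(enrollees_by_county: pd.Series,
                       population_by_county: pd.Series) -> pd.Series:
    # E.g., first-time, full-time freshmen divided by high school graduates,
    # both indexed by county of origin.
    return (enrollees_by_county / population_by_county).rename("participation_rate")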

Greater detail can be achieved by subdividing these data by type of institution (doctoral, comprehensive, community college, and, in some states, private institutions) and even further by race/ethnicity.

Measuring Up 2000 calculates the participation of young adults through two measures: high school freshmen who enroll in college four years later, and 18- to 24-year-olds who enroll in college. An important explanatory variable is the proportion of these populations who are not eligible for college because they have not completed high school. It may also be necessary to subdivide the drop-out numbers into (1) "true" drop-outs and (2) students who have transferred to another high school or district.

Data on 25- to 44-year-olds enrolled part-time are sometimes more difficult to acquire; not all states compile these data by age category, at least not in a disaggregated fashion. States typically do, however, collect data on part-time students by institution of enrollment, county of origin, and undergraduate or graduate level (for example, see Appendix, Figure 19). A basic calculation of participation for part-time students is:

(number of part-time undergraduates, by county of origin)
---------------------------------------------------------
(number of 25- to 44-year-olds, by county)

Additional information can be acquired by disaggregating these data by type of institution and, in some instances, by race/ethnicity.
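
A sketch of that disaggregation, assuming a hypothetical enrollment file with one row per part-time undergraduate and illustrative column names (county_of_origin, sector):

import pandas as pd

def part_time_rates_by_sector(enrollments: pd.DataFrame,
                              adults_25_to_44: pd.Series) -> pd.DataFrame:
    # Part-time undergraduate enrollment per 25- to 44-year-old resident,
    # broken out by county of origin and institutional sector.
    counts = (enrollments
              .groupby(["county_of_origin", "sector"])
              .size()
              .unstack("sector", fill_value=0))
    return counts.div(adults_25_to_44, axis=0)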

The migration of students into and out of the state offers another perspective on access and participation. Measuring Up 2000 provides a measure of net migration as part of the "Facts and Figures" section for each state. To shed more light on higher education opportunity in the state, it is useful to calculate net migration by type of institution (doctoral/research, comprehensive, baccalaureate, two-year), separately for public and private institutions (for example, see Appendix, Figure 14). In addition, it is useful to ascertain the top 20 (or so) specific institutions attended by the state's out-migrating students; a calculation sketch follows the list below. These data allow a state to gauge whether:

  • Access is being limited because of the absence of particular sectors of institutions in the state or the absence of institutions offering unique programs. For example, it is common for net out-migration to be concentrated in a single institutional sector.
  • Out-migration is concentrated in institutions with which states could not compete (for instance, elite/national or church-related institutions) or in institutions where convenience is the distinguishing characteristic.
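
A minimal sketch of both tabulations, assuming a hypothetical migration file with one row per migrating student; such records can often be assembled from IPEDS residence-and-migration data, though the column names used here are illustrative assumptions.

import pandas as pd

def migration_summary(records: pd.DataFrame, top_n: int = 20):
    # records: one row per migrating student, with columns 'direction'
    # ('in' or 'out'), 'sector', 'control' ('public' or 'private'), and
    # 'institution' (the out-of-state institution attended by out-migrants).
    flows = (records.groupby(["sector", "control", "direction"])
                    .size()
                    .unstack("direction", fill_value=0))
    flows["net_migration"] = flows.get("in", 0) - flows.get("out", 0)
    top_destinations = (records.loc[records["direction"] == "out", "institution"]
                        .value_counts()
                        .head(top_n))
    return flows, top_destinations
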
The results of these analyses serve to pinpoint those parts of the state where participation by either recent high school graduates or working-age adults is particularly low. Information about participation in various kinds of institutions, along with knowledge of the physical locations of institutions and their respective program offerings, can do much to explain participation patterns and indicate the barriers to be overcome if participation is to be enhanced (for example, see Appendix, Figure 19).

Affordability

In the affordability category of Measuring Up 2000, the measures fall into the following clusters:

  • Family ability to pay: the percentage of income needed to pay for college expenses.
  • The availability of state need-based aid.
  • The average loan amount that students borrow each year.
To understand how these variables vary within the state, states need additional information about income distributions within the state as well as about the distribution of student financial aid. Among the data that can be compiled readily on these matters are:
  • Median household incomes by county (for example, see Appendix, Figure 20).
  • Median household incomes by county for families in which the head of household is between ages 40 and 60 (that is, the families most likely to have college-age children).
  • Median household income by income quintile (for example, see Appendix, Figure 21).
  • The amount of unmet need for need-based financial aid. Where available, these data typically come from the state student financial aid agency.
A more sophisticated analysis of ability to pay combines information about personal incomes by county with the attendance patterns of students from that county, yielding a weighted average cost of attendance relative to income. The basic calculation, for each county, is:

(sum, across institutional sectors, of [(number of enrollees from the county in the sector) X
(proportion of income required for the sector)])
------------------------------------------------------------------
(total number of enrollees from the county)
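
A minimal sketch of this weighted average for a single county, assuming hypothetical per-sector enrollment counts and percent-of-income figures (the names and numbers below are illustrative only):

def weighted_pct_of_income(enrollees_by_sector: dict[str, int],
                           pct_of_income_by_sector: dict[str, float]) -> float:
    # Enrollment-weighted average share of family income needed to attend,
    # for students from one county.
    total_enrollees = sum(enrollees_by_sector.values())
    weighted_sum = sum(n * pct_of_income_by_sector[sector]
                       for sector, n in enrollees_by_sector.items())
    return weighted_sum / total_enrollees

# Illustrative call (made-up figures):
# weighted_pct_of_income(
#     {"community college": 600, "comprehensive": 300, "doctoral": 100},
#     {"community college": 0.18, "comprehensive": 0.25, "doctoral": 0.31})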

This calculation identifies those institutional sectors in which price considerations are heavily influencing the state result. Similarly, distributions of income by racial groups can be shown, indicating the extent to which economic need is correlated with racial category. Given the legal uncertainties associated with affirmative action in college admissions, it is usually best to deal with affordability as an economic issue rather than as a racial one.

Completion

The measures in this performance category of Measuring Up 2000 focus on:

  • The proportions of freshmen (at two- and four-year colleges) who return for their sophomore years (measures of persistence).
  • The proportions of first-time, full-time freshmen completing a baccalaureate degree within five years.
  • The numbers of undergraduate certificates, degrees and diplomas awarded per 100 undergraduate students (a measure of degree production by the state's system of higher education).
Unlike the other performance categories in the report card, where disaggregation is primarily by geographic area (county) within the state, "drilling down" in this category focuses on institutions, or types of institutions. If detailed persistence and retention data are available at the state's higher education agency, they will be available for each institution. (Note: comparable data on degree completion will soon be available for all institutions, since these data will be required as part of each institution's IPEDS submission.) Thus, analysis of persistence and degree-completion data can best be accomplished by:
  • Comparing each institution with top performance as identified in Measuring Up 2000.
  • Comparing performance with the more detailed (by institutional type) empirical information developed by organizations such as ACT.
The objective is to identify those institutions whose performance is well below that of the best-performing institutions, and then to begin the process of determining why this is so and what might be done to improve performance.

With regard to degree production, several additional analyses are suggested. These include determining:

  • Comparative degree production by type of award (certificates, associate degrees, and baccalaureate degrees, considered separately).
  • Comparative degree production by field of award (using Classification of Instructional Programs codes), with special attention to those fields of particular importance for economic development and quality of life in the state, such as health professions, engineering and teacher education.
  • Comparative degree production relative to the population to be served (especially for certain professional programs).
In creating these displays, it is useful to compare the state not only with the best-performing states and national averages, but also with selected other states that compete most directly with the state in pursuing economic development opportunities.

As a final variation in this category, it can be useful to change the denominator in the calculation from 100 undergraduate students to 1,000 high school graduates in earlier years (five years earlier for the baccalaureate comparison, three years earlier for the associate comparison, and one or two years earlier for the certificate comparison). These displays provide the state with overall measures of degree production conditioned roughly on the size of the potential pool of eligible degree recipients. They also subsume considerations of participation and other similar rates, and provide the state with a perspective on its competitiveness in workforce development (for example, see Appendix, Figures 22-24).
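
A minimal sketch of the revised rate, assuming hypothetical year-indexed counts of awards and of high school graduates; the lag simply shifts the denominator to the appropriate earlier graduating class.

def awards_per_1000_hs_graduates(awards_by_year: dict[int, int],
                                 hs_graduates_by_year: dict[int, int],
                                 award_year: int, lag_years: int) -> float:
    # Awards in award_year per 1,000 high school graduates lag_years earlier
    # (e.g., lag_years=5 for baccalaureate degrees, 3 for associate degrees,
    # 1 or 2 for certificates).
    cohort_size = hs_graduates_by_year[award_year - lag_years]
    return 1000 * awards_by_year[award_year] / cohort_size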

Benefits

The benefits category in Measuring Up 2000 is somewhat different from the other performance categories in the report card, in that it does not correspond to specific policy interventions. Rather, the benefits category evaluates the benefits that states receive if high performance is achieved in the other graded categories of Measuring Up 2000. As a result, disaggregation of these data is not necessarily required, and none is suggested here.

There is a closely related area, however, that is not included in Measuring Up 2000 but for which more detailed investigation is appropriate. Many of the benefits that accrue to a state are as much a function of the state's economy as of the performance of its educational system. For example, states with a productive educational system may not reap the benefits of this system if the state's economy cannot absorb the number of college graduates produced. Similarly, a state can benefit greatly by importing highly educated people. As a consequence, it is useful to display data that compare the state's economy with the economies of those states with which it competes most directly. The basic measures for these comparisons are:

  • Distribution of employment in the state, by occupation.
  • Distribution of employment in the state, by industry.
  • Proportions of gross state product attributable to different sectors of the economy (for example, see Appendix, Figures 25 and 26).
  • Detailed information from the New Economy Index, prepared by the Progressive Policy Institute (for example, see Appendix, Figure 27).
  • Measures of competitiveness in economic development, as provided by the Corporation for Enterprise Development (www.cfed.org) (for example, see Appendix, Figure 28).
Displaying these economic factors can assist policymakers in linking economic enhancement with higher education performance.



