“I was just as surprised as
anyone else,” said Deborah
Cureton, dean of academic and
student affairs at Lancaster. “I
guess we just looked particularly
good on some of those first 14
indicators.”
Michael Smith, who is
coordinating the new plan
for the higher education
commission, warned that the
first-year scores don’t mean
much because only a limited
number of indicators were used,
the benchmarks were based
on past performance instead
of future goals, and there was
insufficient planning time.
“We’re feeling our way,”
Smith said. “We’re looking for
answers. We won’t know what
works and what doesn’t until we
have all 37 indicators in place
and we’ve been using the whole
system for a couple of years.”
But critics wonder if the
scheme will ever make sense. They believe large amounts of
meaningless information are being compiled that, in the end,
will make little difference in state financial support.
Some indicators, or “critical success factors,” are puzzling.
For instance, one tries to measure “institutional emphasis
on quality teacher education and reform” when the state’s 21
two-year colleges do not train teachers.
Act 359 calls for the Commission on Higher Education
to apply “objective, measurable criteria” in judging a school’s
performance, but for some indicators no such criteria exist.
For instance, one indicator is: “Curricula offered to achieve
mission.” For a given school, any judgment about that would
be, as Fred Sheheen has written, “subjective” and “largely in
the eyes of the beholder.”
Most campus officials who were interviewed agreed
that some of the indicators are valid measures that can be
quantified, at least to some degree: Does the institution have
a strategic plan? How do faculty salaries compare with those
of rival institutions, both inside and outside South Carolina?
How much of the total budget is spent on administration?
Is there a post-tenure review policy for tenured faculty
members? How many degree-granting programs have been
accredited?
But other measurements appear to be trivial or
meaningless—for example, one requiring institutions to
report on the “non-academic achievements” of their students
when they were in high school.
Still others are contradictory. One indicator rewards
an institution whose students enter with high SAT or ACT
scores, while another gives high marks for enrolling in-state
students, many of whom have low test scores.
South Carolina State University, the only historically
black school in the state, made the mistake of taking the test
score indicator seriously, increasing the SAT score required
for entrance last fall. The result was a 25 percent drop in
freshman enrollment, causing a revenue loss of more than
$500,000 that the state will not reimburse.
“We have always done a very good job of retaining the
students who enroll here, including those with low test
scores,” said Leroy Davis, South Carolina State’s president. “I
should think that would be a more significant indicator of
quality than entering test scores.”
Another indicator calls on campuses to gather “employer
feedback on graduates who were employed or not employed,”
a measure that Susan Pauly, director of planning at the
University of South Carolina at Lancaster, called “ludicrous—
employers aren’t going to take the time to do that, and, if they
answer honestly, they might be opening themselves to
lawsuits.”
Class size seemed at first to be an appropriate quality
measure, but problems developed immediately. How can
an introductory course in political science or psychology,
almost always a large lecture, be compared with the one-on-
one instruction required in some programs at the Medical
University of South Carolina?
Commission staffers first decided not to apply this
indicator to the medical campus. Then they limited the
evaluation to freshman and sophomore classes. Campus
officials doubt that the eviscerated indicator has much
meaning.
In order to generate more money for faculty salaries
and other high priorities, Coastal Carolina University had
gradually increased its student-faculty ratio from 19-to-one
to 25-to-one. “This was cost-effective and we were convinced
quality did not suffer,” said Sally Horner, the campus
executive vice president. “But we got a low rating (3.5) on
that indicator.”
Some of the people working to implement Act 359
recognize that certain indicators need to be modified and
others should be tossed out altogether.
Changes have been made already, such as limiting the
class size measure to the lower division. Other modifications
are coming.
Work is being concentrated on indicators that seem
practical, while others are being silently ignored.
“We’ve found, as we work through this, we tend to find
problems that were not apparent until our work began,”
Dalton Floyd, who chairs the higher education commission’s
planning and assessment committee, said diplomatically.
Floyd, a 59-year-old attorney from Surfside Beach who
specializes in golf law, is given credit for bringing calm and
South Carolina Statistics (Fall 1996)
• Number of public campuses: 33 (three research universities, nine four-year comprehensive universities, five two-year regional colleges, 16 technical colleges)
• Enrollment: 157,363 (headcount)
• Students: 73 percent white, 27 percent minority (mostly African American)
• Operating budget: $642,407 (1996–97 academic year; 14.7 percent of total state revenue)
• Tuition and fees: average $3,133 for four-year universities, $1,850 for two-year regional colleges, $975 for technical colleges
(Source: South Carolina Commission on Higher Education)