At the risk of writing something that could be taken as special pleading, I’ll weigh in on the WSJ article “Putting a Price on Professors.” The risk exists because the article is about state policy towards professors in Texas, and specifically discusses the University of Houston. I plead guilty on all counts.
The article describes recent state laws that are intended to improve academic performance, specifically by increasing professorial accountability and efficiency, in part by quantifying faculty performance, with the implication that these quantitative measures will eventually be inputs to hiring, pay, and curricular decisions.
At the outset, I should express my broad sympathy with these objectives. As to the means, let me just say that those pushing these metrics for evaluating faculty performance want to improve education in the worst way, and they’ve found it.
This is in fact a great illustration of the perils of high-powered incentive systems in multi-task environments, something first analyzed rigorously by Holmstrom and Milgrom in JLEO in 1991*. The basic problem with high-powered incentives, in which compensation and perks are strongly tied to measured performance, is that these incentives can cause serious distortions when some important aspects of performance are very hard to measure. Those subject to such an incentive system tend to devote excessive effort to the measured activities that determine compensation, and too little effort to the unmeasured–but potentially valuable–activities that do not affect compensation because they cannot be measured with any precision.
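The logic can be made concrete with a toy multitask model in the spirit of Holmstrom and Milgrom. This is a stylized parameterization of my own, not their specification: an agent splits effort between a measured task paid at piece rate `beta` and an unmeasured task that yields only intrinsic satisfaction, with a convex cost of total effort. As `beta` rises, effort migrates from the unmeasured task to the measured one.

```python
# Toy multitask agency model (illustrative parameterization, not the
# Holmstrom-Milgrom paper's exact setup). An agent splits effort between
# a measured task (e1, paid at piece rate beta) and an unmeasured task
# (e2, yielding only intrinsic satisfaction). Because the cost depends on
# total effort, the two tasks compete for the agent's attention.
import math

def best_response(beta, gamma=2.0, grid=0.01, emax=3.0):
    """Grid-search the agent's optimal (e1, e2) for a given incentive power beta.

    Agent utility: beta*e1  (pay for measured effort)
                 + gamma*ln(1 + e2)  (intrinsic value of unmeasured effort)
                 - 0.5*(e1 + e2)**2  (convex cost of total effort)
    """
    best, best_u = (0.0, 0.0), -math.inf
    steps = int(emax / grid) + 1
    for i in range(steps):
        e1 = i * grid
        for j in range(steps):
            e2 = j * grid
            u = beta * e1 + gamma * math.log(1 + e2) - 0.5 * (e1 + e2) ** 2
            if u > best_u:
                best_u, best = u, (e1, e2)
    return best

# As the piece rate on the measured task rises, effort on the
# unmeasured-but-valuable task is crowded out.
for beta in (0.5, 1.5, 2.5):
    e1, e2 = best_response(beta)
    print(f"beta={beta}: measured effort e1={e1:.2f}, unmeasured effort e2={e2:.2f}")
```

With these (assumed) parameters, weak incentives leave the agent doing the intrinsically valued unmeasured work, while strong incentives drive unmeasured effort to zero: the crowding-out at the heart of the multitask problem.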
A canonical example from education is teaching to the test. When teacher performance is evaluated primarily by student test performance, teachers have an incentive to teach to the test only, and stint on any non-test related instruction, even though said instruction may be quite beneficial and valuable to the students.
In many ways, this multi-tasking problem is captured by Einstein’s aphorism that not everything that counts can be measured, and not everything that can be measured counts. The Holmstrom-Milgrom corollary is that rewarding only those things that can be measured means that people will neglect the things that count but can’t be measured, and are likely to overproduce things that can be measured but don’t really count.
Higher education is rife with measurability issues. It is easy, for instance, to count student enrollments and graduation rates. It is far harder to determine how much knowledge those students have actually acquired, especially in higher level and more abstract courses in non-quantitative areas. It is difficult to monitor and measure faculty engagement with students outside the classroom. Research is also extremely hard to evaluate. Yes, you can count papers. Yes, you can rank journals. But both are noisy measures–potentially very noisy measures–of research “quality.”
So, it is much harder to measure some crucial dimensions of faculty quality and performance than others. If you want research universities, and faculty who engage with students outside the classroom, and the like, high-powered incentive systems are not the way to get them.
If you take a look at the WSJ article’s description of the spreadsheet used at Texas A&M to evaluate faculty contributions, you might be reminded of old Soviet incentive systems. When the Soviets tried to measure the performance of nail makers by the weight of nails produced, the factories churned out small numbers of huge, heavy nails. Not liking this, the planners changed the incentive system to base compensation on the number of nails produced, so the factories produced lots of little nails. Analogously, if you reward faculty for teaching big sections of basic courses, small specialized electives will disappear, or will be staffed by the least influential (usually the most junior) faculty, who are on average less capable of teaching them. If you make grant monies a primary criterion, faculty members will spend more time grant grubbing and less time doing other things that are also important to the educational and research missions. And since students are frequently not in the best position to evaluate the value or utility of what they are being taught, making faculty salary and promotion highly dependent on student teaching evaluations tends to skew teachers’ efforts towards entertainment and popularity, rather than towards delivering knowledge and constructive (and sometimes painful) feedback.
This is not to say that low-powered incentive systems are costless. All of the criticisms of the modern university have a basis in fact. Some faculty respond to low-powered incentives by mailing it in, or by teaching classes that are personally satisfying (or easy) but not useful for students.
But in this, as in everything else, there are trade-offs. If you don’t like things the way they are . . . be careful what you ask for. Changes intended to address one problem–e.g., faculty sloth/moral hazard–can create other problems that are far more costly.
The modern university is characterized by low-powered incentives. Universities are almost exclusively not-for-profit. Internal reward systems have low incentive power. Before rushing to change that system wholesale, it is worth standing back to consider why these arrangements developed and, more importantly, why they have survived. The well-documented problems with for-profit universities provide a valuable cautionary tale, and a useful contrast to the criticisms of the traditional university.
As in most things, rather than trying to engineer superior results from above (in legislatures or governors’ offices), it is better to encourage an environment in which competition, not just in price but in organizational form and in internal management and governance, can flourish. The practices that survive will not be perfect, because perfection is not an option in economics, but their attributes are likely to be, as a whole, superior to those of the practices that do not survive.
* Error caught by “Anonymous.”