This conference is supported by the Spencer Foundation and the Alfred P. Sloan Foundation.
Professors and instructors are a chief input into the higher education production process, yet we know very little about their role in promoting student success. This is in contrast to elementary and secondary schooling, for which ample evidence suggests that teacher quality is an important determinant of student achievement. Whether colleges could improve student and institutional performance by reallocating instructors or altering personnel policies hinges on the role of instructors in student success. In this paper, De Vlieger, Jacob, and Stange measure variation in postsecondary instructor effectiveness and estimate its relationship to overall and course-specific teaching experience. The researchers explore this issue in the context of the University of Phoenix, a large for-profit university that offers both online and in-person courses in a wide array of fields and degree programs. They focus on instructors in the college algebra course required of all BA degree programs. They find substantial variation in student performance across instructors, both in the current course and in subsequent courses. Variation is larger for in-person classes, but is still substantial for online courses. Effectiveness grows modestly with course-specific teaching experience. The results suggest that personnel policies for recruiting, developing, motivating, and retaining effective postsecondary instructors may be a key, yet underdeveloped, tool for improving institutional productivity.
In this paper, Deming, Lovenheim, and Patterson study the impact of online degree programs on the market for U.S. higher education. Online degree programs increase the competitiveness of local education markets by providing additional options in areas that previously had only a small number of brick-and-mortar schools. The researchers show that local postsecondary institutions in less competitive markets experienced relative enrollment declines following a regulatory change in 2006 that increased the market entry and enrollment of online institutions. These enrollment impacts were concentrated among private non-selective institutions, which are likely to be the closest competitors to online degree programs. The results indicate that these schools also increased tuition to offset some of the revenue losses from enrollment declines, but that per-student instructional spending did not decrease. This suggests a relative shifting of resources toward instruction due to online competition in the private, non-selective postsecondary sector. The researchers also find increases in per-student instructional spending among public institutions. Their results suggest that by increasing competitive pressure on local schools, online education can be an important driver of innovation and productivity in U.S. higher education.
This paper was distributed as Working Paper 22749, where an updated version may be available.
Though undergraduate tuition generally varies little or not at all by field of study, instructional expenditures vary widely. This paper uses administrative student and expenditure data from Florida public universities to describe a) how the cost of producing graduates varies by major, b) how the inclusion of field-specific instructional costs alters the estimated net returns to different fields of study, and c) how major-specific instructional expenditures changed between 1999 and 2013. Altonji and Zimmerman find that the cost of producing graduates in the highest-cost major (engineering) is more than double that of producing graduates in the lowest-cost major (library science). Measures of private return net of cost differ significantly from returns measured using labor market outcomes for a number of fields. On a per-graduate basis, low-cost but relatively high-earning fields like business and computer science offer higher net returns than higher-earning but higher-cost majors like engineering. On a per-dollar basis, differences between net returns and earnings returns are even more pronounced. Per-credit expenditures for undergraduate classes dropped by 16% in Florida's State University System (SUS) between 1999 and 2013. The largest drops occurred in engineering and health, growing fields with high individual-level returns, where per-credit spending fell by more than 40%. The observed changes have little relationship with average per-credit costs or earnings effects.
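The per-graduate versus per-dollar distinction above can be made concrete with a small sketch. All of the figures below are hypothetical placeholders chosen only to illustrate the pattern the paper describes (a low-cost major can trail on raw earnings yet lead once costs are netted out); they are not numbers from the paper.

```python
# Illustrative comparison of net returns by major.
# Tuple: (present value of earnings return, instructional cost per graduate).
# All numbers are hypothetical, NOT figures from Altonji and Zimmerman.
majors = {
    "engineering":      (600_000, 65_000),
    "business":         (560_000, 24_000),
    "computer science": (570_000, 26_000),
    "library science":  (250_000, 25_000),
}

for major, (earnings_return, cost) in majors.items():
    per_graduate_net = earnings_return - cost   # return net of cost, per graduate
    per_dollar = earnings_return / cost         # return per dollar of spending
    print(f"{major:16s} net/grad = {per_graduate_net:>8,}  per-$ = {per_dollar:5.1f}")
```

With these placeholder values, business edges out engineering on a per-graduate basis despite lower raw earnings, and the per-dollar gap is far larger, mirroring the qualitative ranking reported in the abstract.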
Policymakers at the state and federal level are increasingly pushing to hold institutions accountable for the labor market outcomes of their students. There is no consensus, however, on how such measures should be constructed, or how the choice of measure may affect the resulting institutional ratings. Using state administrative data that links postsecondary transcripts to in-state quarterly earnings and unemployment records over more than a decade, Minaya and Scott-Clayton construct a variety of possible institution-level labor market outcome metrics. They then explore how sensitive institutional ratings are to the choice of labor market metric, length of follow-up, and inclusion of adjustments for student characteristics. The researchers also examine how labor market metrics compare to the academic-outcome-based metrics that are more commonly incorporated into state accountability systems. They conclude that labor market data, even when imperfect, can provide valuable information distinct from students' academic outcomes. Institutional ratings based on labor market outcomes, however, are quite sensitive to the specific metric. The most obvious labor market metric – average earnings within a year after graduation – proves highly unreliable: it is unduly influenced by incoming student characteristics and fails to capture other aspects of economic wellbeing that may be valued by both policymakers and students themselves. The findings suggest a cautious approach: while a mix of feasible labor market metrics may be better than none, reliance on a single unadjusted earnings metric, especially if measured too early, may undermine policymakers' ongoing efforts to accurately quantify institutional performance.
This paper explores the implications of measuring college productivity in two different dimensions: earning and learning. Riehl, Saavedra, and Urquiola compute system-wide measures using administrative data from the country of Colombia that link social security records to students’ performance on a national college graduation exam. In each case the researchers can control for individuals’ college entrance exam scores in an approach akin to teacher value added models. They present three main findings: 1) colleges’ earning and learning productivities are far from perfectly correlated, with private institutions receiving relatively higher rankings under earning measures than under learning measures; 2) earning measures are significantly more correlated with student socioeconomic status than learning measures; and 3) in terms of rankings, earning measures tend to favor colleges with engineering and business majors, while colleges offering programs in the arts and sciences fare better under learning measures.
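The "approach akin to teacher value added models" mentioned above can be sketched in a few lines: regress a student outcome (an exit-exam score or log earnings) on the entrance-exam control, plus college dummies, and read the dummy coefficients as relative college effects. The sketch below uses simulated data; all variable names and magnitudes are illustrative assumptions, not the authors' specification.

```python
# Minimal value-added-style sketch: college effects on a student outcome,
# controlling for entrance-exam scores. Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, n_colleges = 3000, 5
college = rng.integers(0, n_colleges, size=n)     # college each student attends
entrance = rng.normal(size=n)                     # entrance-exam score (control)
true_va = np.linspace(-0.5, 0.5, n_colleges)      # true college effects (known here)
outcome = 0.8 * entrance + true_va[college] + rng.normal(scale=0.5, size=n)

# Design matrix: intercept, entrance score, dummies for colleges 1..4
# (college 0 is the omitted reference category).
X = np.column_stack(
    [np.ones(n), entrance]
    + [(college == c).astype(float) for c in range(1, n_colleges)]
)
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# beta[2:] estimate each college's effect relative to college 0.
print(beta[2:])
```

Ranking colleges by `beta[2:]` with an exit-exam outcome gives a "learning" productivity ordering; swapping in log earnings as the outcome gives the "earning" ordering, and the paper's point is that these two orderings can differ substantially.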