NATIONAL BUREAU OF ECONOMIC RESEARCH

Advancing the Science of Science Funding Workshop

An NBER Summer Institute workshop on Advancing the Science of Science Funding took place on July 19-20 in Cambridge. Research Associate Paula Stephan of Georgia State University and Reinhilde Veugelers of the Katholieke Universiteit Leuven organized the meeting. These researchers' papers were presented and discussed:

Henry Sauermann, European School of Management and Technology and NBER; Chiara Franzoni, Politecnico di Milano; and Kourosh Shafi, University of Florida

Crowdfunding Scientific Research (NBER Working Paper No. 24402)

Crowdfunding may provide much-needed financial resources, yet there is little systematic evidence on the potential of crowdfunding for scientific research. Sauermann, Franzoni, and Shafi first briefly review prior research on crowdfunding and give an overview of dedicated platforms for crowdfunding research. They then analyze data from over 700 campaigns on the largest dedicated platform, Experiment.com. The researchers' descriptive analysis provides insights regarding the creators seeking funding, the projects they are seeking funding for, and the campaigns themselves. They then examine how these characteristics relate to fundraising success. The findings highlight important differences between crowdfunding and traditional funding mechanisms for research, including high use by students and other junior investigators but also relatively small project size. Junior investigators are more likely to succeed than senior scientists, and women have higher success rates than men. Conventional signals of quality - including scientists' prior publications - have no relationship with funding success, suggesting that the crowd applies different decision criteria than traditional funding agencies. The results highlight significant opportunities for crowdfunding in the context of science while also pointing toward unique challenges. The authors relate the findings to research on the economics of science and on crowdfunding, and discuss connections with other emerging mechanisms for involving the public in scientific research.


Charles Ayoubi and Fabiana Visentin, EPFL, and Michele Pezzoni, Université Nice

The Important Thing is Not to Win, it is to Take Part: What if Scientists Benefit From Participating in Research Grant Competitions?

"The important thing is not to win, it is to take part." This famous saying by Pierre de Coubertin asserts that the value athletes draw from the Olympic Games lies in their participation in the event, not in the gold they collect during it. Ayoubi, Pezzoni, and Visentin find similar evidence for scientists involved in grant competitions. Relying on unique data from a Swiss funding program, they find that scientists taking part in a research-grant competition boost their number of publications and average impact factor while extending their knowledge base and their collaboration network, regardless of the result of the competition. Receiving the funds increases the probability of co-authoring with co-applicants but has no additional impact on individual productivity.


Misha Teplitskiy, Harvard University; Eva C. Guinan, Dana-Farber Cancer Institute; and Karim Lakhani, Harvard University and NBER

Social Influence in Science Funding Evaluation Panels: Field Experimental Evidence from Biomedicine

Many organizations, particularly those in the domain of scientific research, rely on experts to evaluate new ideas. However, when the objects to be evaluated are complex and require the opinions of multiple experts, it is unclear whether experts should provide evaluations independently or collaboratively. Although normative models of decision-making suggest that information exchange among individuals improves judgments, it is unknown whether and under what conditions experts actually utilize information from one another. Here, Teplitskiy, Guinan, and Lakhani report an experiment that measures information utilization among 277 expert reviewers of 47 multidisciplinary applications for awards. Reviewers were faculty at U.S.-based medical schools. In particular, the researchers measure whether reviewers do or do not update how they score applications after observing the scores of artificial "other reviewers." The scores of other reviewers were randomly generated, and their discipline was experimentally assigned to be the same as or different from that of the reviewer. The researchers found that reviewers updated scores in 47% of cases after exposure to the artificial stimuli. Contrary to normative models, reviewers were insensitive to the disciplinary expertise of the stimulus. Much more important was the reviewer's own identity: female reviewers updated their scores 12% more often than male reviewers. Similarly, reviewers with relatively high status (H-index) updated substantially less often than low-status reviewers. Lastly, updating was more common for the medium- and high-scoring applications, leading to high turnover in the top proposals before and after exposure to the stimuli. The experiment extends findings on social influence within non-expert groups to experts, and suggests a new pathway through which bias can enter evaluations: gendered openness to external information.


Alfredo Di Tillio and Marco Ottaviani, Bocconi University, and Peter Norman Sørensen, University of Copenhagen

Strategic Sample Selection

What is the impact of sample selection on the inference payoff of an evaluator facing a monotone decision problem? Di Tillio, Ottaviani, and Sørensen show that anticipated selection increases or decreases the accuracy of a statistical experiment according to whether the reverse hazard rate of the data distribution is log-supermodular -- as in location experiments with normal noise -- or log-submodular. The results are applied to the analysis of strategic sample selection by a biased researcher and extended to the cases of uncertain and unanticipated selection. The researchers' theoretical analysis offers applied research a new angle on the problem of selection in empirical and experimental studies, by characterizing when sample selectivity, selective assignment to treatment, and strategic omission of variables benefit or hurt the evaluator.
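The key condition can be glossed in standard notation (the symbols below are illustrative conventions, not drawn from the paper). For data x with distribution F(x | θ) indexed by a state θ, the reverse hazard rate and the log-supermodularity condition are:

```latex
% Reverse hazard rate of the data distribution F(x \mid \theta) with density f:
r(x \mid \theta) \;\equiv\; \frac{f(x \mid \theta)}{F(x \mid \theta)}

% Log-supermodularity of r in (x, \theta): for all \theta' > \theta,
% the likelihood-ratio-style comparison
\frac{r(x \mid \theta')}{r(x \mid \theta)} \quad \text{is nondecreasing in } x,
% equivalently \; \partial^2 \log r(x \mid \theta) / \partial x \, \partial \theta \ge 0.
```

In a location experiment with normal noise, x = θ + ε with Gaussian ε, this condition holds, which is the benchmark case the abstract cites for selection increasing accuracy; log-submodularity is the reverse inequality.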


Marc J. Lerchenmueller, Yale University

Does More Money Lead to More Innovation? Evidence from the Life Sciences

Large sums are often invested in scientific innovation -- the creation of new knowledge through scientific research. In this paper, Lerchenmueller argues that increasing investments may lead scientists to pursue average (but more certain) rather than higher-risk projects, with negative consequences for scientific innovation. Exploiting an exogenous multi-billion dollar shift in the budget of the world's largest financier of scientific research, the U.S. National Institutes of Health (NIH), Lerchenmueller finds that the influx of more money led to a significant decrease in scientists' innovation productivity (14% fewer papers published), its significance (16% fewer citations generated), and its novelty (9% reduction in unprecedented content). These negative effects become more pronounced when Lerchenmueller, in addition to the macro level (federal budget), also accounts for differences in funding at the micro level (project budget). The decrease in scientific innovation is primarily driven by top scientists changing their research strategy with greater funding availability: the data reveal a 1.7x to 3x larger reduction in scientific innovation for scientists in the top quintile versus scientists in lower quintiles of the capability distribution. Lerchenmueller concludes with implications for public policy, corporate strategy, and the allocation of resources in support of scientific innovation.


Jacques Mairesse, CREST-ENSAE and NBER; Michele Pezzoni, Université Nice; Paula Stephan; and Julia Lane, New York University

Examining the Returns to Investment in Science: A Case Study

Using detailed transaction-level data from a small, highly selective university for the period 2000-10, Mairesse, Pezzoni, Stephan, and Lane first estimate the link between public research funding and research output at the grant level, taking into account the effect of overlapping grants in the principal investigator's (PI) grant portfolio. The researchers measure grant research output in terms of publication quantity and quality, as represented by the number of articles and the average impact factor of the journals in which they are published. They then consider the economic implications of the grant-level results, aggregated across all grants at the PI level. The researchers estimate the effects of both the size of total funding and the composition (bundling) of total funding on the PI's publication quantity and quality. At the grant level, after addressing the two main challenges of attributing publications to grants and the endogeneity of funding, they find that the elasticities of the quantity and quality of publications attributed to a focal grant with respect to its size (as measured by the yearly flow of funds) are 0.43 and 0.27, respectively, holding constant the flow of funds from other grants. The researchers also find negative elasticities of -0.19 and -0.20 for the quantity and quality of publications attributed to the focal grant with respect to the size of other grants. When these estimates are aggregated to the PI level, publication output increases with total funding in terms of quantity, with an elasticity of 0.33, but decreases in terms of quality, with an elasticity of -0.11. The researchers find no evidence of negative marginal returns. They also find that the results are sensitive to the way funding is bundled and to the number of overlapping grants.
Specifically, holding total funding constant, more grants are associated with more articles, but articles of lower average quality, clearly suggesting that the bundling of grants matters and that there is a quantity-quality trade-off. These results, which concern the scientific productivity of highly successful researchers at one of the top U.S. research institutions, should be investigated for other, less selective research institutions. The researchers' prior is that the trade-off observed would be stronger in a more heterogeneous group of researchers than in a highly selective group whose ability and expertise arguably attenuate it.
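The reported grant-level elasticities lend themselves to simple back-of-the-envelope calculations. The sketch below assumes a constant-elasticity (log-linear) form for illustration; it is not the paper's estimating equation, and the function name is invented here.

```python
# Illustrative use of the reported grant-level quantity elasticities:
# 0.43 with respect to the focal grant's funding, -0.19 with respect
# to the funding of the PI's other grants. The log-linear form is an
# assumption made for this sketch, not the authors' model.

def pct_change_output(own_elasticity: float, cross_elasticity: float,
                      own_funding_pct: float, other_funding_pct: float) -> float:
    """Approximate % change in publication quantity for small % changes
    in focal-grant and other-grant funding, under constant elasticities."""
    return (own_elasticity * own_funding_pct
            + cross_elasticity * other_funding_pct)

# A 10% increase in the focal grant, other grants held fixed:
print(pct_change_output(0.43, -0.19, 10.0, 0.0))   # about +4.3% more articles

# A 10% increase in *other* grants, focal grant held fixed:
print(pct_change_output(0.43, -0.19, 0.0, 10.0))   # about -1.9%
```

The negative cross-grant term is what drives the bundling result: adding funding through additional overlapping grants partially offsets the output gains from any one grant.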




National Bureau of Economic Research, 1050 Massachusetts Ave., Cambridge, MA 02138; 617-868-3900; email: info@nber.org
