What is science worth for us?

Thursday, August 17, 2017

Jack Spaapen, senior policy advisor, Royal Netherlands Academy of Arts and Sciences

Since the 1990s, policy makers have become increasingly interested in assessing scientific research not only on its merits for the scientific community, but also on its value for society at large. However, we still do not have a widely accepted, systematic way to assess societal impact. So why is it so difficult to assess the impact of research?

The main reason is that there are so many different kinds of impact, depending on the societal context. Clearly, this holds for researchers working in, say, medical fields compared with those working in agriculture or ICT. But it holds a fortiori for researchers working in the broad array of humanities and social science (HSS) fields. Researchers who work in language departments and want to influence the language curriculum of high schools have to deal with legal and governmental departments, with school boards, with student and teacher organisations, with parent groups, with publishers, etc. And each of these “stakeholders” has specific interests, ideas and wishes. A researcher working in, say, religious studies or art history faces a rather different context: refugees, NGOs and politics in the first case; museum directors, curators, audiences, and local and national politics in the second. Moreover, many of the issues HSS researchers are interested in also attract passionate debate among members of the public.

These circumstances make it difficult to develop impact measurements that resemble the procedures used for evaluating the scientific quality of research, a system that arguably works the same for all fields. The context of the scientific community is, overall, much more monolithic, and the interests of participants rest more on shared values (Merton’s CUDOS norms, for example). Ergo, a one-size-fits-all approach is possible there (but see the Metric Tide report for a convincing critique).

However, the situation is not hopeless. On both sides of the Atlantic, researchers in the science and technology studies community and beyond have been working steadily on approaches to societal impact evaluation. Journals like Research Evaluation and Science and Public Policy regularly report on these developments. In Europe, there is an active network of HSS researchers under the EU-COST aegis covering most if not all of Europe (ENRESSH), and countries are beginning to integrate impact into their national evaluation systems (the REF in the UK, the SEP in the Netherlands). A 2013 RAND report presents a useful overview of methods for impact evaluation. In the USA and Canada, there is a growing research community (with active groups at Arizona State University and the University of North Texas, for example). In Canada, the Federation for the Humanities and Social Sciences has been active in this area, publishing several reports on HSS impact.

And the interesting thing is that many of these efforts have arrived at similar conclusions. One is that societal impact is not a linear thing; rather, it is the result of productive interactions between researchers and stakeholders, and assessment methods should respect this. Another is that quantitative methods may be good for measuring certain kinds of impact (economic impact, for example), but qualitative methods are preferred in many other areas: changes in politics or in attitudes, public influence, a new protocol in hospitals, improvements to rules and regulations, new ways of organizing work, a more humane treatment of refugees. A third is that it makes no sense to ignore differences in context, and that it is much more productive to let those contexts inform the evaluation process. In the case of the UK (REF 2014) and the Netherlands (SEP 2015-2021), this has led to an emphasis on narratives and case studies, which is an advantage for HSS researchers because such accounts are part and parcel of what they do and produce. And after all, Elliot Eisner was right when he slightly rephrased a famous Einstein quote: not everything that can be measured matters, and not everything that matters can be measured.

About this blog series: Following the publication of the Federation’s new report, “Approaches to Assessing Impacts in the Humanities and Social Sciences,” we have reached out to other members of the research community to share their thoughts on the challenges and opportunities associated with assessing scholarly impacts. It is our hope that this series of blogs and our new report will help support a productive conversation in the HSS community about the important topic of scholarly impact assessment.


