Responsible metrics for societal value of scientific research

What does responsible research evaluation look like when it comes to societal value? This blog post provides four practical recommendations.

In the global research evaluation community, there is a growing awareness of the importance of responsible evaluation. The current situation, with its emphasis on quantitative metrics, does not do justice to the diversity among scientific fields, to the different roles of researchers, or to the societal value of research. Moreover, studies have shown that researchers adjust their activities in anticipation of evaluations, yet so far there has been little awareness of such effects at the system level (see the Leiden Manifesto for research metrics).

The debate about responsible evaluation focuses mainly on indicators of citation impact, such as the journal impact factor or the h-index (Hirsch index). This blog post explores the requirements for a responsible approach to evaluating the societal value of science.

Society urgently needs science to guide sustainability transitions, to fight the COVID-19 pandemic and to reduce global economic inequality. These high expectations of publicly funded research create a need to assess the societal benefits of science: does scientific research actually deliver the societal value that it promises or that public funders expect? Over the past few decades, a number of tools and methods have been developed to evaluate this societal value.

Available methods use qualitative or quantitative data to make the use, uptake or impact of scientific knowledge in society visible. A promising avenue is to focus on the process rather than on eventual impact. Societal value comes in many different forms, such as improved public health, economic growth or better education, and this diversity makes it very hard to measure and compare. Moreover, the impact of scientific knowledge often develops over long time periods and is influenced by many factors beyond the control of the researchers involved. It therefore becomes attractive to focus the evaluation on the immediate response in society, or on the interactions between research and societal stakeholders, as these are more concrete and verifiable than broadly defined or large-scale societal impact. Such responses and interactions are then taken as preconditions of impact.

How can the available methods be applied to evaluate societal value in a responsible way? Here we provide four recommendations. 

1. Choose methods that match the purpose of evaluation

A key consideration in planning your research evaluation is what you want to achieve with it. Is it mainly an accountability exercise, meant to show that investments have resulted in societal uptake and use? In that case you may want to focus on empirical evidence of impact, for example using the Monetization method. Or do you want to support a learning process, in order to improve strategies for societal impact? In that case, it will be more helpful to make an inventory of the key audiences in society and to investigate how they respond to the research products of the unit of evaluation, for example using altmetrics.
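
To make such an audience inventory concrete, here is a minimal sketch in Python. The mentions, source types and DOIs are entirely hypothetical; a real analysis would draw on data from an altmetrics provider.

```python
# Minimal sketch: grouping altmetric-style mentions by societal audience.
# All records below are hypothetical placeholders.
from collections import Counter

# Each record: (research output identifier, type of mentioning source)
mentions = [
    ("doi:10.1234/a", "news"),
    ("doi:10.1234/a", "policy document"),
    ("doi:10.1234/b", "news"),
    ("doi:10.1234/b", "practitioner blog"),
    ("doi:10.1234/c", "policy document"),
]

# Tally how often each audience type responds to the unit's outputs.
audience_profile = Counter(source for _, source in mentions)
for audience, count in audience_profile.most_common():
    print(f"{audience}: {count} mention(s)")
```

A profile like this supports the learning purpose of an evaluation: it starts a conversation about which audiences are (not yet) being reached, rather than delivering a verdict.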

[In-text image: There’s nothing like a well-equipped toolbox!]

2. Choose methods that fit the research context

Given the heterogeneity of research practices, it is key to tailor the evaluation approach to the disciplinary and organizational context of the research unit under evaluation. While patents may be a meaningful indicator of impact for a biotechnology department, advisory reports will be more important for an institute for macro-economics. If the evaluation context allows, it can be useful to design a ‘theory-of-change’: a causal model representing the desired impacts, the intermediate ‘outcomes’ and the immediate ‘outputs’ that can lead to those impacts. Building a theory-of-change, preferably in co-creation with stakeholders, helps to identify the relevant audiences inside and outside the research setting that need to be reached in order to generate any impact (see the sketch below). Analyses of collaboration networks or social media interactions can then help to explore to what extent these audiences respond to research products or how they interact with the researchers.
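
To illustrate, a theory-of-change can be represented as a simple causal chain. The following Python sketch uses hypothetical outputs, outcomes, impacts and audiences for a fictitious macro-economics institute; it shows the structure of such a model, not a prescribed format.

```python
# Minimal sketch of a theory-of-change as a causal chain:
# outputs -> outcomes -> impacts, each with the audiences it must reach.
# All names below are illustrative, not real evaluation data.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str                 # an output, outcome, or desired impact
    audiences: list[str]      # who needs to be reached at this step
    leads_to: list["Step"] = field(default_factory=list)

impact = Step("More evidence-based fiscal policy", ["national government"])
outcome = Step("Advisory report cited in budget debate",
               ["ministry staff", "parliamentary committees"], [impact])
output = Step("Advisory report on tax reform", ["ministry staff"], [outcome])

def trace(step: Step, depth: int = 0) -> None:
    """Print the causal chain and the audiences each step must reach."""
    print("  " * depth + f"{step.name} (audiences: {', '.join(step.audiences)})")
    for nxt in step.leads_to:
        trace(nxt, depth + 1)

trace(output)
```

Walking the chain from outputs to impacts makes explicit which audiences must respond at each step, which is exactly what network or social media analyses can then be used to verify.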

3. Combine qualitative and quantitative data

Qualitative and quantitative data can both provide insights into the dynamics of generating value from scientific research. The Leiden Manifesto argues that quantitative indicators should only support qualitative, expert assessment. In line with this, we recommend using quantitative analysis of digital responses or productive interactions to start a conversation rather than to end one, and complementing it with qualitative data from interviews, focus groups or interactive workshops. One of the tools we use in Leiden, Area-Based Connectedness, focuses on the connections of a research area with industry, policy or other societal domains. Instead of “measuring” the direct connections of a research unit with societal actors, this method provides evidence of the connectedness of the research areas in which the unit publishes. In this way, it indicates potential societal relevance rather than the particular impact the unit generates. In our experience, this kind of information helps both research units and evaluation committees to understand the interactions between research activities and society, and it can form a fruitful basis for conversations, whether to improve research strategies or to formulate evaluative conclusions.
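
As a toy illustration of the reasoning behind Area-Based Connectedness (not the actual method, which relies on large-scale bibliometric data), the following Python sketch weights hypothetical area-level connectedness scores by the share of a unit’s publications in each area. All numbers are made up.

```python
# Toy illustration: potential (not realized) policy connectedness of a unit,
# computed as a weighted average over the areas in which it publishes.
# All shares and scores below are invented for the example.

# Hypothetical share of the unit's publications per research area
unit_publication_shares = {
    "health economics": 0.5,
    "labour economics": 0.3,
    "econometrics": 0.2,
}

# Hypothetical area-level connectedness with the policy domain, e.g. the
# fraction of each area's output referenced in policy documents
area_policy_connectedness = {
    "health economics": 0.12,
    "labour economics": 0.08,
    "econometrics": 0.02,
}

potential = sum(share * area_policy_connectedness[area]
                for area, share in unit_publication_shares.items())
print(f"Potential policy connectedness: {potential:.3f}")  # 0.088
```

The resulting number is a conversation starter, not a verdict: it signals where societal uptake is plausible, which an evaluation committee can then probe with qualitative evidence.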

4. Consider the theoretical assumptions of your evaluation method

In a recent review, Jorrit Smit and Laurens Hessels show that the available methods vary not only in their methodological approach but also in their theoretical assumptions about societal value, the actors involved and the interaction mechanisms that produce this value. Some methods, such as Science and Technology Human Capital, hold a linear view of knowledge exchange and perceive knowledge users merely as recipients of knowledge. Other methods, such as ASIRPA, are based on a cyclical model, emphasizing the feedback mechanisms between the production and the application of knowledge. There are also methods, such as Contribution Mapping, that follow a co-production model: a more egalitarian perspective that allocates greater agency to users and intermediaries. Similarly, the recent proposal of “heterogeneous couplings” introduces more interactive science-society perspectives into the altmetrics and social media metrics toolset. Each method has its own merits, as it highlights particular achievements or mechanisms, and for this reason it is often fruitful to combine different methods. One key consideration in choosing methods will be the data they require. However, we recommend that research managers and evaluators consider not only practical constraints, but also the alignment between the theoretical foundations of an evaluation method and their own convictions about the way scientific research generates value in society.

To conclude

The recommendations presented here are grounded in an interactive understanding of the creation of societal value, assuming that the value of science to society is produced in mutual interactions between academia and society. They address a rather broad audience, as research evaluations are typically designed by collectives of researchers, policy makers, research managers and advisory committees. These four recommendations can help them design suitable evaluation approaches. We do not believe in blueprints for how to evaluate the societal value of scientific projects, programs or organisations. Rather, we hope to offer some guidance in tailoring evaluation approaches so that they support learning and accountability effectively.


Photo credits header image: Patrick Perkins; photo credits in-text image: Barn Images
