Transition in science systems and evaluation practices: evolutionary or revolutionary?

Universiteit Leiden / CWTS

From the European Commission’s (EC) perspective, and still maintained in the core of Horizon Europe, the programme to fund Research and Innovation (R&I) builds upon objectives “to strengthen the EU’s scientific and technological bases and the European Research Area (ERA), to boost Europe’s innovation capacity, competitiveness and jobs; and to deliver on citizens’ priorities and sustain our socioeconomic model and values”. EC R&I policy has always been ahead of the curve: a dedicated line of support for connecting science to the values and interests of European citizens has run like a red thread through two decades of European framework programmes (FPs). There has been consistent attention to creating (socio-economic) impact, to addressing grand societal challenges, and to how research can contribute to solving them. The interaction between the science system and society was strengthened over successive programmes: it moved from Science and Society (FP6, 2002-2006) and Science in Society (FP7, 2007-2013) to Science with and for Society, SwafS (Horizon 2020, 2014-2020). In fact, the European SwafS community (actors funded through SwafS) developed a new research field with its own vocabulary, norms, ideas and actions (see: https://doi.org/10.1016/j.techfore.2021.121053).

But whilst creating impact may be a policy goal, in the science system it is often seen as an additional requirement, to be delivered on top of the academic disciplinary traditions, aims and outputs. Concomitantly, the evaluation of the science system has revolved, and still revolves, around traditional ways of valuing science, such as bibliometric analysis and other related, widely used metrics (see: https://doi.org/10.1016/j.joi.2010.08.001). This may, unintentionally, create a science-society divide. In line with this way of valuing science, scientific careers depend on this valuation, as do university rankings. Only recently (see the blog post on metrics) have reward and recognition systems begun to open up, focusing on other aspects and results of scientific work.

And so in this annual event, we brought together the traditional issues of valuing science through metrics with the issues of opening up and responsibility from an RRI perspective (see the blog post on responsible evaluation). As SUPER MoRRI, we think it is important to be explicit about this divide: to open up the debate on alternatives to academic traditionalism, to identify workable solutions, and to mobilize people around the topic.

The policy concept of Responsible Research and Innovation (RRI) was designed to bring together societal actors (researchers, citizens, policymakers, business, third-sector organisations, etc.) to ‘work together during the whole research and innovation process in order to better align both the process and its outcomes with the values, needs and expectations of society’, which is easier said than done. In practice, RRI is implemented as a package that includes multi-actor and public engagement in research and innovation, e.g. enabling easier access to scientific results as well as co-defining the topics of research interest. In a sense, RRI provides the (policy) conditions for creating societal impact by changing the science system at the researcher, institutional and national levels through engagement with stakeholders. ‘Grounding RRI supports transformative change’ is the idea, creating a more responsible, open and transparent science system. Grounding actions include, for instance, the development and implementation of new norms, procedures, guidelines and agreements; the formulation of explicit mission statements; and changes in organisational structure or functions. But how to evaluate such complex, and changing, systems, acknowledging local contexts, diversity and inclusion, while at the same time complying (a pressure felt also in many Global South countries) with the metrics entrenched in academia?

While there is much support for change (paying attention to stakeholders, other outputs, needs and demands, local context) on the one hand, from society in general and policy in particular, there is an equally strong force within academic institutions to maintain the system as it is. Another take on this comes from biology: homeostasis is the state of steady internal conditions maintained by living systems. This is the condition of optimal functioning for the organism, with many variables kept within certain pre-set limits (the homeostatic range). Homeostasis is brought about by a natural resistance to change when already in an optimal state, and equilibrium is maintained by many (feedback) regulatory mechanisms. In fact, the example of homeostasis teaches us that such control mechanisms have at least three interdependent components to regulate variables: a receptor (the science system), a control centre (policy), and an effector. If we knew the most effective ‘effector’, the control centre could deploy it. RRI was intended to be that effector. The big question is hence: is the science system already in optimal condition? (How to define “optimal” opens up another debate that will require another blog post.) Whether or not this is the case, the natural resistance to change does explain why policy pressures are less effective than you would expect from the amount of funding put into them.

You could call it a struggle, but as Laerte stated at the end of the Americas event: the science evaluation system is a mammoth tanker that takes time to steer in another direction. The point is that any process of change is slow. Almost by default (and by homeostasis), systems change will be evolutionary, not revolutionary. Evolutionary, incremental change contrasts with transformational, revolutionary change, which is fundamental, dramatic and often irreversible, but highly dependent on leadership to manage. The latter is wanted by policy, but the system is not ready for it. Therefore, evolutionary change may not be such a bad idea, despite our revolutionary SwafS hearts that would like to see transformative change enforced. In the same Americas event, the representatives from Colombia and Mexico mentioned that they were looking at the Brazilian system with some envy, trying to ‘catch up’ to the Brazilian standard of research evaluation in their respective countries. Their focus is on setting up systematic and fair science evaluation first. Hence, from a global perspective, it is important to be aware that the evaluation of science is not equally developed everywhere.

This has, of course, to do with the idea that metrics, and especially publication metrics, are neutral. Even though it is questionable whether there is such a thing as neutral bibliometrics (see: https://doi.org/10.1038/d41586-019-01643-3), and the Declaration on Research Assessment (DORA) clearly advises staying away from journal impact factors and the h-index, it is exactly this reliance that has driven the Chinese system to the boundaries of integrity, with many retractions and other issues that reflect the pressure on young scientists. However, changing that in one revolutionary go to something else (even something very responsible) would jeopardize a great deal, especially in the context of highly competitive systems and the many researchers’ careers depending on it.

So including new aspects in evaluation systems, such as education-related aspects (see the previous blog post), stakeholder-engagement-related aspects or local contextual aspects, requires careful consideration — and not only at the level of the Research Performing Organisation: it also needs support and incentives from policy and research funders. That was the topic of many discussions in the Asia/Pacific and Africa/Middle East event discussion groups: Japan, Australia, South Africa, India and China are all working towards the same goal of including an increasingly diverse set of indicators and topics to address. Are we talking about responsible development of indicators for a wider range of outputs, or are we discussing responsibility in evaluation processes? This was a question posed in the reflection webinar by Michael Ochsner. Either way, training in this area is necessary.

In that respect, the overlap of RRI policy with the successor policies of open science and citizen science will only reinforce the slow, evolutionary change, along even more axes. Open Access (one of the RRI keys) really aims at making the results of science openly available for everyone to use. And it is precisely that which isn’t always the case in African universities or institutes, as was presented in the discussion. Here, university libraries are stuck with subscriptions that do not cover all their scientific needs, and hence they lack access to the scientific results of others.

Public engagement and citizen science share a common idea: that scientists, by engaging with citizens and those who have a ‘stake’, will address the needs of society. That is, however, not always the case, and in the Americas session discussion it was concluded that research in South America mostly does not address local issues (e.g. region-specific diseases, or working with disadvantaged groups in the favelas of Rio de Janeiro).

In conclusion, most countries participating in the SUPER MoRRI annual event agreed that, when it comes to responsible practices in research, it is not particularly difficult to change policy; the problem is changing the culture. While new legislation and regulations can impose change, they must be aligned with people’s motivations towards responsibility, and the transition needs to be incremental. Hence, the mammoth tanker is slowly turning, but the exact direction in which it is turning has yet to be determined. Perhaps it is good that change takes time.

Read the previous posts in the annual event series

In this blog post, André Brasil discusses the Super MoRRI annual event of 2021 and brings a first impression of the discussions that took place over four webinars.
In this blog post, André Brasil reflects on the discussions of the Super MoRRI annual event of 2021 around metrics and peer-review in evaluation practices.