From MoRRI to SUPER MoRRI

Universiteit Leiden / CWTS

The MoRRI project had the aim of establishing a monitoring system that measures how, where, and to what extent Responsible Research and Innovation (RRI) has become interwoven with European research practices. The SUPER_MoRRI project aims to build on this monitoring system through empirical and theoretical work. CWTS played a focal role throughout the MoRRI project and will continue to do so within SUPER_MoRRI, primarily in the impact and communication work package. Wouter van de Klippe will be participating in the SUPER_MoRRI project as part of his work within CWTS. Currently, SUPER_MoRRI is in its nascent stages, which offers a unique opportunity to reflect on potential improvements over MoRRI. In this blog post, Wouter aims to provide an impetus to re-conceptualize the function of monitoring throughout the SUPER_MoRRI project.

Monitoring Responsible Research and Innovation: the MoRRI project

The goal of aligning research processes and outputs with societal needs is a bold one. Its scale is evidenced by, for example, the 10/90 problem in biomedical research, which refers to the observation that diseases accounting for about 90% of the world’s disease burden receive only 10% of biomedical research attention (Global Forum for Health Research, 1999; Sarewitz & Pielke, 2007). In the Horizon 2020 framework programme, the concept of Responsible Research and Innovation (RRI) was employed to take steps towards this alignment. Within the European Commission, RRI aims to spur this alignment by encouraging institutional change that enables a diversity of societal actors, with a multiplicity of value commitments, to work together throughout all stages of research and innovation (European Commission, 2019). This means actively working towards a European Research Area (ERA) that is more conscious of and responsive to the needs and values at stake within research and innovation.

A diverse (theoretical) literature describes RRI as a concept, and its components sometimes diverge depending on context and approach. For example, von Schomberg (2013) emphasizes that RRI should be anchored to the EU Charter of Fundamental Rights and the ‘grand societal challenges’. In contrast, Stilgoe and colleagues (2013) and Owen and colleagues (2012) emphasize the importance of inclusive participatory exercises in which a diversity of stakeholders can influence research practices according to their value commitments. Furthermore, their frameworks encourage the application of the dimensions of anticipation, reflexivity, inclusion, and responsiveness. Despite RRI’s fluid and sometimes contentious definition, the “Monitoring the evolution and benefits of Responsible Research and Innovation in Europe” (MoRRI) project sought to measure the degree to which the RRI agenda had permeated European research practices and, if it had, to locate where and why.

While MoRRI did employ a range of approaches to measuring RRI, its primary focus was the construction of quantitative indicators to measure the presence of RRI within the European science system. These indicators were constructed through the analysis of data from surveys, bibliometric and altmetric work, and other data sources. The indicators are indeed informative, and their intended use, to encourage institutional learning, is commendable. However, I believe that by thinking primarily in terms of the outputs of these indicators, we have lost sight of the opportunity to be more creative and reflexive throughout the process of their creation.

In this blog post, I hope to make this opportunity evident: the opportunity to think more creatively about the function of measurement and monitoring. Fortunately, this blog post comes in the context of a second life for the MoRRI project, SUPER_MoRRI, where this shift in attention could be applied. SUPER_MoRRI builds on the initial aims and results of MoRRI by continuing the task of monitoring the presence of RRI within the European Research Area. Additionally, SUPER_MoRRI aims to investigate the relationships between RRI policies and applications and their subsequent societal and democratic benefits. Below, I will further explain what I mean by shifting towards reflection on the possibilities that exist within the process of monitoring, rather than only considering the outcomes of measurement and their accuracy. This blog post takes inspiration from work within the Science and Evaluation Studies (SES) research group at CWTS, and in particular from work describing the ‘evaluative inquiry’ approach in the context of research evaluation (Holtrop, 2018; Fochler & de Rijcke, 2017; Wouters, 2017). I will briefly introduce these ideas now.

The evaluative inquiry and SUPER_MoRRI

In the context of a hypothetical research evaluation within the field of science and technology studies (STS), Fochler and de Rijcke (2017) note that evaluative inquiry requires rethinking the purpose of evaluation: from a bureaucratic task of measurement and reduction towards an opportunity to “produce and represent the meaning and purpose of STS work”. This means that evaluation need not entail the production of indicators with the aim of representing an underlying reality. Instead, evaluation can be an opportunity to create a space for articulating what is valued within the evaluated, and why. Describing a markedly less hypothetical example, the evaluation of CWTS itself, Paul Wouters characterizes a “future oriented” evaluative inquiry as an “exercise in collective future making, rather than a game of trying to score as high as possible on a set of indicators that were more or less relevant to our work” (Wouters, 2017). Evaluation in this sense should be thought of as a moment for the evaluated and the evaluating to reflect collectively on what they want the future to look like, and to examine potential ways to create that future.

So what has this to do with SUPER_MoRRI? Although the rhetoric of monitoring may at first glance seem neutral, the task of measurement and indicator creation carries normative and evaluative weight (Davis et al., 2012, p. 9). This is because measurement operates on the assumption, often left implicit, that the implementation of RRI is desirable. Monitoring in this context can therefore be understood as evaluative, and the insight gleaned from the SES literature is applicable. As discussed above, the methodology employed within MoRRI (the original project) was primarily one of data extraction to support subsequent analysis and the creation of visualizations aimed at representation, which in turn is intended to facilitate future institutional learning. This aim is exemplified in the development of the 36 indicators used to measure RRI across its six ‘keys’, the development of ‘country cluster maps’ that represent the RRI profiles of nations, and the aims of refining, expanding, and validating these indicators throughout SUPER_MoRRI. The emphasis within the descriptions of both projects appears to be placed almost entirely on the validity of these indicators and their outputs; the potential opportunity to transform research practices through the act of their construction is ignored. In contrast, insight from the evaluative inquiry approach would allow this space of monitoring to become a forum in which the monitored can express how (or whether) they envision and apply RRI. Monitoring and evaluation can be understood as an opportunity to express what a future of RRI within European science looks like, without losing sight of the context of the evaluated.

Rethinking the opportunities within monitoring and measurement

Currently, SUPER_MoRRI is in its nascent stages, meaning that this is a time to reflect on what should be improved from MoRRI. If we are indeed to rethink what monitoring and evaluation might resemble in SUPER_MoRRI, that means asking an entirely different set of questions in this time of reflection than might intuitively arise. For example, where many of the closing recommendations in the final report for MoRRI called attention to the need to assess the reliability and coverage of indicators, rethinking evaluation would mean thinking less in terms of an indicator’s representational accuracy and focusing instead on its transformative potential. This means asking questions like: How might the results of MoRRI be used as a discussion point with those being evaluated in SUPER_MoRRI? Better yet, how might the results of MoRRI be discussed in a participatory way to instigate further institutional change? Do the evaluated researchers or representatives from research funding organizations agree with the six keys of RRI that were selected within MoRRI? Is there divergence between (national, institutional, disciplinary, etc.) contexts in the relevance or interpretation of these six keys, and what components of RRI have been excluded as a consequence of selecting them? Do those being evaluated identify with the results of MoRRI, and why or why not? The opportunity to glean knowledge from these questions is considerably limited when they are asked in the context of closed-response questionnaires or surveys. Additionally, there is an immense opportunity to use existing tools that have been developed in the context of RRI projects as part of open evaluative exercises. For example, the RRI Tools project has developed an online platform of tools that can be used to foster learning and discussion regarding the implementation and meaning of RRI. Why not use this as a discussion point in an open evaluation exercise and explore the multiplicity of ways in which the lessons of these tools can be implemented in a contextualized way for those being evaluated?

There is no need to worry for those readers who feel a sense of dread at the prospect of abandoning indicators in these evaluative exercises. As Fochler and de Rijcke (2017) note, opening up evaluation need not preclude the use of indicators altogether. Indeed, throughout these more open evaluative exercises, surveys and metrics can still be used and created for subsequent analysis to encourage institutional learning. Furthermore, the aim of SUPER_MoRRI is to measure and monitor the presence of RRI throughout the ERA, so the tasks of monitoring and representation need to remain a core component of the project. The difference I hope to encourage is in the position of these indicators within the exercise: from being the end goal, aimed at representation, towards being one component among many and a tool for opening up discussion (Rafols et al., 2012).

Thinking more openly about what monitoring and evaluation could resemble in the context of SUPER_MoRRI is a return to some core tenets of what RRI was intended to mean. Stilgoe and colleagues (2013) remind us that “the ways in which the concept of responsible innovation is being constituted should themselves be opened up to broad anticipation, reflection and inclusive deliberation”. Monitoring RRI by means of closed-response surveys, questionnaires, and other highly quantified methodologies risks reifying narrow interpretations of its application and constitution. Fortunately, SUPER_MoRRI is an opportunity to focus on the transformative potential of monitoring and assessment; reflecting on this opportunity will better equip us to develop more open RRI-enriched futures, as diverse and inclusive as initially hoped for.

This blog post benefitted from feedback given by and conversations with Sarah de Rijcke and Ingeborg Meijer.

References

Davis, K., Fisher, A., Kingsbury, B., & Merry, S. E. (Eds.). (2012). Governance by indicators: global power through classification and rankings. Oxford University Press.

European Commission. (2019). Responsible Research & Innovation. European Commission. Retrieved from https://ec.europa.eu/programmes/horizon2020/en/h2020-section/responsible-research-innovation

Fochler, M., & de Rijcke, S. (2017). Implicated in the indicator game? An experimental debate. Engaging Science, Technology, and Society, 3, 21-40.

Global Forum for Health Research. (1999). The 10/90 report on Health research. World Health Organization. Available at: https://www.files.ethz.ch/isn/20437/1090.99_FullText.pdf

Holtrop, T. (2018). The Evaluative Inquiry: a new approach to academic evaluation. Centre for Science and Technology Studies blog. Available at: https://www.cwts.nl/blog?article=n-r2u2b4&title=the-evaluative-inquiry-a-new-approach-to-academic-evaluation

Owen, R., Macnaghten, P., & Stilgoe, J. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751-760.

Rafols, I., Ciarli, T., van Zwanenberg, P., & Stirling, A. (2012). Towards indicators for ‘opening up’ science and technology policy. Available at: http://blogs.oii.ox.ac.uk/ipp-conference/sites/ipp/files/documents/Rafols-Ciarli-OpeningUp-FULL.pdf.

Sarewitz, D., & Pielke Jr, R. A. (2007). The neglected heart of science policy: reconciling supply of and demand for science. Environmental Science & Policy, 10(1), 5-16.

Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580.

Von Schomberg, R. (2013). A vision of responsible research and innovation. Responsible innovation: Managing the responsible emergence of science and innovation in society, 51-74.

Wouters, P. (2017). Bridging the evaluation gap. Engaging Science, Technology, and Society, 3, 108-118.
