Move away from metrics, but beware of subjectivity
A recent University World News article highlighted the effort by Dutch academia to revolutionise its recognition and rewards strategies. The piece mentions, for instance, how Utrecht University is dropping the impact factor altogether for hiring and promotion in favour of a broader and more responsible assessment of researchers. Earlier this year, Leiden University moved in the same direction by launching the Academia in Motion programme, designed to contribute to the national initiative for a new approach that values not only research but also matters such as education, societal relevance, and leadership. All this comes as a response to the position paper Room for Everyone’s Talent, a major call for culture change in recognition and rewards, conceived by Dutch knowledge institutions and funders (VSNU, NFU, KNAW, NWO and ZonMw).
The concern around metrics and their role in the (e)valuation of science was also the subject of fruitful discussions in a series of seminars promoted by the SUPER MoRRI project in 2021. As described in a previous post, four meetings were held on “Responsibility in research evaluation practices”: three with a regional orientation (Americas, Asia/Pacific, Africa/Middle East), culminating in a global reflection webinar. I had the opportunity to be one of the speakers in the Americas event, where I introduced an issue that applies to Latin American countries and the Global South more broadly: how indicators are usually unable to capture the complexity of regionally relevant research.
Publications not written in English have a smaller chance of being indexed in international databases such as Scopus or the Web of Science. As a result, many of the leading indicators used in research evaluation disregard a large share of the scientific output from countries publishing in other languages. In the Latin American setting, a large share of papers are in Portuguese or Spanish, and most of these can only be found in regional databases such as Latindex, RedALyC and SciELO. Thus, without a peer-review component in local evaluations, these publications tend to be ignored or undervalued. That perspective was reinforced by Odir Dellagostin, president of the Brazilian Confederation of State Funding Agencies (CONFAP), who discussed the challenges of funding research in the face of such asymmetries. Without the human element, metrics alone reduce the chances of a responsible evaluation leading to a fair distribution of funding; one capable of valuing more than traditional scientific impact.
The importance of peer review was also a central point of discussion in the Africa/Middle East event. For instance, Rocky Skeef presented the evaluation practices of the National Research Foundation of South Africa (NRF), where he is the executive director for reviews and evaluations. The use of experts and peers in the assessment process is the first of the principles adopted by the NRF. According to Rocky, this is one way to guarantee that research does not need to divorce itself from local problems in order to be valued and ranked at the top. Aligned with that idea, Pouya Janghorban (University of Tehran) also defended evaluations that move away from metrics, as he sees that solutions to regional problems are not necessarily the focus of the kind of research that traditional indicators would consider the most successful.
Another well-known problem with metrics is how they can influence research behaviour. That issue is contributing to a significant review of evaluation procedures in China, as discussed in the Asia/Pacific event. As presented by Junpeng Yuan (Chinese Academy of Sciences), the pressure from evaluation practices has led to an increase in research integrity problems in the country, and a series of national policies have been issued to address the situation. These include moving away from paper-oriented assessment and promoting a new, responsible evaluation that reforms China’s approach to metrics. According to Lin Zhang (Wuhan University), that means:
- A farewell to “SCI worship”, as indicators based on Science Citation Index will not be applied directly in evaluation and funding at any level;
- A move from metrics to peer review, with a new focus on research quality and societal relevance replacing indicator-based assessments at all levels of evaluation;
- Added priority to local relevance, as high-quality publications in domestic journals will be encouraged to the point that they represent no less than one third of the total output.
The other side of the coin
A total of 30 countries were represented in the four SUPER MoRRI seminars on responsible evaluation practices. While many of them are forging parallel paths towards better perspectives on metrics and the value of qualitative approaches to assessing research outputs, science systems are at distinct levels of maturity and development. For example, while China has reached top positions in the number of papers published every year (e.g., in the Web of Science) and is reviewing its approach to quality versus quantity, Iranian participants declared that their country still pushes for quantity rather than quality. While China aims to value local language and domestic journals, Brazilian policymakers seek to increase English-language output and presence in international databases by expanding the adoption of metrics such as CiteScore and the Journal Impact Factor, thereby moving away from a peer-review-centred methodology already in place.
Countries have different motivations and goals that derive from their maturity, and they often have different capacities. For instance, many participants in the SUPER MoRRI events mentioned that adequate peer review is a challenge due to the lack of a critical mass of qualified and experienced reviewers and evaluators. A recurring word entered the discussion from that concern: “subjectivity”. In many environments, there is anxiety that the lack of proper training could hinder evaluators’ ability to overcome personal dispositions in their decision-making, leading to subjective assessments influenced by extrinsic factors. Consequently, even while recognising the limitations of metrics in capturing research quality, especially that of regional relevance, many still rely on traditional indicators to justify funding distribution. Therina Theron (Stellenbosch University) pointed out that the solution could come from investing in highly skilled professional research and innovation managers, who can play an essential role in ensuring the appropriate evaluation of responsible research and innovation.
While the challenges and proper execution of peer review in the face of metrics go beyond the scope of this blog post, a consensus around balance seemed to prevail in the SUPER MoRRI discussions. For most participants, the secret to responsible evaluation practices may lie in combining qualitative and quantitative methods, applied systematically. Regardless, the main message we take from the seminars is that countries have similar concerns despite being at distinct stages of their scientific development. Perhaps they can support each other with their particular experiences, so that mistakes are not repeated and successes are easier to replicate.