
DOI: https://doi.org/10.36850/p761-ey93

How 'Recognition and Rewards' in Dutch Academia Turned Metrics into Incentives

By Martijn van der Meer

Setting the stage

Just before the Christmas break in 2022, the Advisory Council for Science, Technology, and Innovation presented an advisory letter on ‘qualities of science’ to the chair of the House of Representatives in the Netherlands. The document concluded that ‘there is no reason to expect that new ways of assessing academics would harm the international position of Dutch science.’ [1] It even celebrated pioneering efforts undertaken in the Netherlands to combine new quantitative and qualitative indicators to assess scientific quality under the banner of ‘recognition and rewards’ (in Dutch: ‘erkennen en waarderen’).

The presentation of the document marks the latest chapter of a heated, confusing, but certainly interesting debate about how science and scholarship should (not) be assessed. Those familiar with the discussion might know it as an international conversation stimulated by science funders, universities, and policymakers. Yet, in the remarkable case of the Netherlands, the discussion left the protective walls of Dutch universities and entered the Dutch political arena, profoundly transforming the nature and scope of the discussion.

An overview of the trajectory of this debate helps unravel the confusing entanglement of the many seemingly related policy themes in discussions about research assessment. The connection between ‘recognition and rewards’ and ‘open science’, to give just one example, is not self-evident, especially to those opposed to both or either of these movements. It also takes some explaining why criticism of the reform policies focused primarily on the proposed abandonment of ‘quantitative’ and ‘objective’ indicators of quality. From my attempt to explain the relationship between these themes, I learned that the Dutch debate about policies under the umbrella of ‘recognition and rewards’ slowly transformed into a technical discussion about the merits of quantitative versus qualitative indicators.

To me, this seems to be a missed opportunity. The past decade of discussing recognition and rewards reveals a fundamentally different view on the purpose of assessing scientific quality. Instead of finding out who is a great scientist and how this can best be measured, the discussion now centers on who meets policy goals and, only in the second instance, how this can be measured. Descriptive metrics have explicitly become ways of incentivizing researchers and steering research towards certain prescribed goals. Yet, what these goals are, what is prescribed, is neither naturally given nor self-evident. When policymakers and administrators discuss which incentives to pick rather than the most effective metrics, the urgent question arises of what ‘we’, citizens of a democratic, expertise-based society, expect from scientists and scientific research. With this blog, I intend to show that this side of ‘recognition and rewards’ appears to be neglected.

Act I – The problems of journal-level metrics within the academic community

On Sunday, 16 December 2012, during the Annual Meeting of the American Society for Cell Biology (ASCB) in San Francisco, a small group of prominent scientists and journal editors met to discuss ‘the scientific community’s obsession with the Journal Impact Factor (JIF).’ [2] This indicator is constructed by dividing the number of citations a journal received in a given year by the total number of citable publications it issued in the two preceding years. The resulting number suggests the popularity of a periodical based on the average number of citations each of its publications received over the past two years. The academics convening at the 2012 ASCB meeting observed that the JIF was no longer used merely to assess the popularity of journals but had, over recent decades, become a tool for evaluating the merit of individual scholars. Using journal-level metrics as a proxy for academic quality in hiring, promotion, and funding decisions would, they worried, direct attention away from the content of individual articles by individual researchers. [3] The conference panel, and the mail conversation that followed, resulted in what came to be known as the ‘San Francisco Declaration on Research Assessment’ (DORA) in 2013.
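In formula form, the JIF of a journal for a given year, say 2012, is calculated as:

$$\mathrm{JIF}_{2012} = \frac{\text{citations received in 2012 to items published in 2010 and 2011}}{\text{number of citable items published in 2010 and 2011}}$$

A journal whose 2010–2011 articles attracted 500 citations in 2012 across 250 citable items would thus report a 2012 JIF of 2.0.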

The authors of the DORA text point out that the Journal Impact Factor was not designed to assess the quality of researchers. Eugene Garfield (1925-2017), one of the pioneers of bibliometrics, is often credited with introducing the measuring principle behind journal-level metrics. [4] His ‘Science Citation Index’ (SCI) was complemented in 1975 by the ‘Journal Citation Reports’ (JCR). This dataset included information retrieved from journal-to-journal citations and offered the possibility of comparing the popularity of journals based on the Journal Impact Factor. The Institute for Scientific Information (whose citation products are currently owned by Clarivate) marketed this information to librarians. [5] Due to the combined rise of subscription fees and the number of new journals, and the relative decrease in science funding in the 1970s and 1980s, decisions on which journals to subscribe to became harder. [6] The JIF provided a seemingly substantive and feasible way to legitimise these decisions. According to Garfield, it helped librarians ‘counteract the inertia that too often prevails regarding journal selection.’ [7]

The measurement of a journal’s reputation based on the average number of citations its articles receive seems objective because of its quantitative form. Yet, the JIF is problematic as the primary parameter for comparing institutions’ and individuals’ scientific output. Interestingly, the editors of reputable journals, generally academics with high standing in the field, were among the first to voice their concerns about the growing use of the metric. In 2005, Nature published an editorial acknowledging that its impact factor was influenced by only a ‘small minority of papers’. [8] This highlighted how the distribution of citations is highly skewed in most journals, meaning that the impact factor is a poor predictor of the number of citations that any individual publication receives. [9]
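To see why such a skewed distribution makes the average misleading, consider a minimal sketch in Python (the per-article citation counts below are hypothetical, chosen only to illustrate the point):

```python
# Hypothetical per-article citation counts for one journal volume:
# most articles are cited a handful of times, one is a runaway hit.
citations = [0, 0, 1, 1, 2, 2, 3, 120]

mean = sum(citations) / len(citations)            # 16.1, the JIF-style average
median = sorted(citations)[len(citations) // 2]   # 2, what a typical article receives

print(f"mean = {mean:.1f}, median = {median}")
# mean = 16.1, median = 2
```

The mean, which is the quantity the impact factor reports, is driven almost entirely by the single outlier, while the median shows what a typical article in the journal actually achieves.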

Another objection was voiced in an editorial by the editors of PLoS Medicine in 2006. They aimed to openly debate how editorial policies can manipulate the JIF. Strategies to increase citations included ‘encouraging authors to cite articles published in the journal’, ‘publishing reviews that will garner large numbers of citations’, and ‘decreasing the number of research articles published’. [10] These instances of ‘gaming’ are often explained as the result of an indicator becoming a target, thereby undermining its ability to function as a good indicator. [11]

A third objection concerns the discipline-specific nature of citation habits and the differing average number of citations that ‘high-impact’ publications receive across fields. This potentially crowds out highly specialized and less conventional fields, which receive fewer citations because they are not connected to big disciplines. Critics problematise using the JIF across disciplines without taking these differences into account. [12]

Within the Declaration on Research Assessment, these objections culminated in eighteen recommendations for funding agencies, institutions, publishers, organisations that supply metrics, and researchers. Its key message addresses the discrepancy between the quality of individual research and journal-level indicators: ‘do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.’ [13] The original statement was signed by 75 academics. At the dawn of 2023, 22,674 individuals and organizations in 169 countries had signed DORA.

Looking back at the five years since the declaration’s launch, initiator Sandra Schmid observed how several funding agencies ‘instituted, strengthened, and/or made more explicit their guidelines to curtail the use of the JIF and to allow researchers to articulate the significance of their own work, through selected and annotated bibliographies.’ She also highlighted how, since the presentation of DORA, researchers and journals had started to develop and implement alternative measures to indicate the quality of articles, disciplines, and individual researchers. Schmid was aware that these general developments may not be the immediate result of the declaration itself, but remained convinced that it at least contributed to an international debate on the problematic use of journal-level metrics. [14]

In more recent years, however, the focus of DORA shifted. The 2018 strategic plan of the emerging community around DORA reveals a subtle yet fundamental change: improving research assessment was now seen as ‘only a first step.’

Act II – Metrics as incentives in the arena of Dutch academic institutions

That next step required ‘changes in academic culture to ensure that hiring, promotion, and funding decisions focus on the qualities of research that are most desirable – insight, impact, reliability, and re-usability – rather than on questionable proxies.’ [15] This shift implies a fundamentally different interpretation of the goals that bibliometric indicators serve in the first place. The metrics are not necessarily aimed at measuring the quality of research output, but are instead interpreted as instruments for nudging scientists towards the most desired results.

Instead of only fixing a malfunctioning procedure, DORA now aimed to open a discussion on the rules of the game that scientists play. It reminds me of Goodhart’s law, which states that ‘when a measure becomes a target, it ceases to be a good measure’, but now operationalized intentionally. Bibliometric measures were no longer mere descriptive indicators: they had explicitly become incentives. This reinterpretation of research assessment as a policy instrument over the last decade is clearly observable in the Netherlands.

As early as 2013, a motley collection of reputable professors under the banner of Science in Transition had started to provoke and question the inward-oriented and metrics-obsessed research culture. [16] Yet, the criticism articulated in DORA became institutionally embraced when the Association of Universities in the Netherlands (VSNU) signed the declaration in 2014. [17] This formal and public endorsement was the prelude to an official collaboration in 2019 with the Dutch Federation of Academic Medical Centers (NFU) and the two main science funders in the Netherlands: the Dutch Research Council (NWO) and the Netherlands Organisation for Health Research and Development (ZonMw). In their 2018 press release, these institutional players announced the development and implementation of a new system of ‘recognition and rewards’ in academia. [18]

The change that the VSNU, NFU, NWO, and ZonMw envisioned was threefold. Firstly, they wanted to innovate the research evaluation system to enable new ways of assessing the ‘quality and (societal) impact of research.’ This ambition was sealed by the announcement that science funders NWO and ZonMw would sign DORA in 2019. Secondly, these key institutions envisioned recognizing and rewarding ‘team science’ instead of ‘individual achievements’ in career paths. Finally, the collaboration aimed at the ‘diversification of career paths’ by recognizing and rewarding individual researchers for investing time in ‘research, teaching, knowledge valorization, and/or leadership.’

The coalition of universities, medical centers, and science funders not only announced conferences, symposia, and round-table discussions on research assessment. They also aimed to change the criteria and guidelines used by review panels for funding, hiring, and promotion decisions. The VSNU, for example, announced changes to its ‘Strategy Evaluation Protocol’ (SEP), used for institutional accreditation, and its ‘Job Classification System’ (UFO) to make explicit the expectations of academics and their direct institutional contexts.

The Dutch Research Council, to give another concrete example, announced that it would ‘incorporate expected quality and impact’ in the evaluation of researchers and funding proposals. [19] In the Netherlands, the discussion on research assessment thus did not solely concern the problems of journal-based metrics that DORA initially put on the agenda. Instead, changing evaluation criteria became entangled with the normative question that formed the title of the kick-off meeting of ZonMw and NWO on ‘Recognition and Rewards’: ‘How do we envision the researcher of the future?’ [20]

An answer was formulated in a position paper called ‘Room for Everyone’s Talent,’ drafted and endorsed by all Dutch universities, most research funders, and the Royal Netherlands Academy of Arts and Sciences (KNAW) in 2019. The document lists the priorities of these institutions very explicitly. Next to providing ‘academic education at the highest possible level,’ it listed ‘carrying out academic research’ and ‘having an impact on society’ as its key objectives. This required ‘leadership’ and ‘sharing our academic research and education with society and making it accessible (open science).’ Moreover, the knowledge institutions and funders stated that ‘Dutch science and academia is grounded in the principle of spanning the wide breadth of the knowledge chain. This ranges from fundamental, curiosity-driven questions to application and implementation—and back.’ [21]

To realize these changes, the document pleads for a ‘modernization of the system of recognition and rewards. This modernization should be designed to improve, in a reciprocal way, the quality of each of these key areas: education, research, impact, leadership, and (for university medical centers) patient care.’ [22] After all, ‘many academics feel there is a one-sided emphasis on research performance, frequently leading to the undervaluation of the other key areas.’

With reference to DORA, the position paper equates this narrow focus on research performance with the prominence of ‘traditional quantifiable output indicators (e.g., number of publications, h-index, and journal impact factor).’ This reliance on quantifiable indicators, the paper argues, disrupts diversity and the societal impact of research and impedes the practice of open science: ‘It is therefore essential to recalibrate and broaden the assessment system for research.’ [23] The position paper thereby marks another subtle but fundamental shift in research assessment: the key institutional players agree that quantitative bibliometric indicators are not only unsuitable for evaluating performance because they might incentivize undesired behavior, but are inherently problematic.

That the authors of the position paper were not merely playing with words became clear one month after publication. In December 2019, the Dutch Research Council announced the introduction of a ‘narrative academic profile’ for its ‘Vici programme’, a funding scheme with grants of 1.5 million euros, after the ‘successful experimentation with the format in the Veni-scheme pre-proposal phase.’ Presented as an implementation of the 2019 position paper and the act of signing DORA, the policy change requires applicants to provide ‘a narrative description of the candidate’s narrative profile.’ According to the Dutch Research Council, ‘this enables candidates to decide what is/is not important to mention in their CV.’ [24] Still, applicants must attach a list of ten items considered ‘key output,’ with a narrative explanation of why these are regarded as the researcher’s most important results. [25] The pre-proposal form of the 2020 Veni call makes clear that applicants are instructed to ‘not mention H-indexes, impact factors, or any type of metric that refers to journal/publisher-impact.’ [26]

In a similar vein, Utrecht University, a member of the VSNU, announced in 2021 that it would formally abandon the impact factor in all hiring and promotion decisions. With respect to the new changes, ‘there are feelings of insecurity among young academics,’ Paul Boselie stated on behalf of the institution. ‘We feel that it’s a risk we are willing to take because we believe [the evaluation system] will change in the end.’ [27]

Act III – Quantity versus quality in the public arena

Boselie was undoubtedly correct in expecting pushback. [28] The announcement that Utrecht University abandoned the Journal Impact Factor and NWO’s introduction of a narrative CV not only quickly popularised Dutch attempts to change how academics are ‘recognized and rewarded’; they also strengthened the perceived connection between policy reforms and general criticism of quantitative indicators. This criticism did not solely come from the ‘young academics’ that Boselie mentioned in his Nature interview. [29] The Nature article inspired a heated debate on social media and in published open letters.

The former president of the Royal Netherlands Academy of Arts and Sciences and renowned cell biologist Hans Clevers felt the need to set up a special Twitter account to voice his strong concerns about the trend to implement alternative ways of evaluating research performance in the Netherlands. [30] ‘My university abandons quantitative output measurements and focuses solely on “collaboration” and “open science”.’ The cell biologist did not necessarily deny the relevance of these themes. Still, he objected that ‘this is unfortunately not quantifiable and therefore not objective.’ [31] As a result, Clevers stated, ‘the primary goal of science, the production of new knowledge, threatens to be no longer measured.’ [32] For the former KNAW president, this felt like a step back in time: ‘at the start of my career, universities did not evaluate research performance. As a research intern, I worked at two places in Utrecht. Both groups had not published anything until then, and nobody seemed to care.’ [33] He remembered how in the 1980s, an evaluation method was introduced based on the number of publications per year. When researchers started gaming these assessment procedures, ‘the journal impact factor and citations became new criteria. That system,’ according to Clevers, ‘worked quite well’. [34]

The high-profile cell biologist already observed the first consequences of the new ‘recognition and rewards’ policies at Dutch universities and funding bodies: ‘this year, the positions of Dutch universities dropped sharply in the Shanghai rankings.’ [35] The Twitter thread received strong support in an op-ed in the newspaper De Telegraaf by Ronald Plasterk, former Dutch Minister of Education, Culture, and Science and professor of molecular biology. According to Plasterk: ‘Clevers is absolutely right that the most objective test of making discoveries would no longer be taken into account in funding decisions.’ He even went one step further: ‘these forms of anti-intellectualism […] can harm the high quality of Dutch science.’ [36]

Memories of the changes made to evaluation procedures in the 1970s and 1980s, when quality came to be determined by quantitative and ‘objective’ bibliometric indicators, celebrated by Plasterk and Clevers for producing the top position of Dutch science, served as a central argument in a few open letters published as op-eds on the Dutch news platform ScienceGuide. The letter titled ‘Threats to Dutch fundamental science endanger our welfare’ was written by cell biologist and associate professor Raymond Poot. ‘Dutch scientists are no longer assessed based on international, scientific, and measurable criteria, as has successfully been done by the Dutch Research Council. Inspired by Open Science and DORA, these measures are partly removed and replaced by politically motivated criteria that can hardly be measured.’ In line with Clevers and Plasterk, Poot stated that ‘the signing of DORA by Dutch universities and funders should be reconsidered’ and that ‘the Netherlands should stay in the top 5 of rankings based on international science indicators – if only for economic and societal reasons.’ After all, the Netherlands faced severe ‘socioeconomic and ecological crises,’ and according to Poot, this requires a strong knowledge economy rooted in top research. Against this backdrop, the preservation of quantitative ‘quality indicators’ and ‘selection mechanisms that assisted us for thirty years’ was essential to the author. [37]

Over a hundred prominent Dutch professors, including a Nobel Prize winner, a member of the Dutch Outbreak Management Team, and the aforementioned former minister Plasterk, signed and endorsed the letter and accompanying report. A couple of days after its publication, members of Young Science in Transition and the Dutch Young Academy published a response in favor of ‘recognition and rewards’ on ScienceGuide. In a piece titled ‘we must get rid of the counting obsession in science,’ the authors stated that ‘bibliometric indicators are easy and deceptively objective measures.’ Policies under the banner of ‘recognition and rewards,’ the authors claimed, ‘challenge us to evaluate each other’s publications on their quality of content, apart from quantity and venue of publications.’ [38] Both open letters show that, in the Netherlands, criticism and support of policy reform under the banner of ‘recognition and rewards’ became strongly intertwined with the use and abuse of quantitative indicators.

Act IV – The international position of Dutch science in the political arena

The criticism of policies in the name of ‘recognition and rewards’ entered a new phase when Dutch national media outlets announced on 2 February 2022 that Hans Clevers was about to leave the Netherlands to become the new director of Roche Pharma Research and Early Development. [39] The same day, he was invited to the Dutch talk show Op1 to explain his motivations. Introduced as a potential Nobel Prize laureate, Clevers presented his unanticipated ‘new challenge’ as resulting from a ‘hostile’ Dutch academic environment. In the talk show, Clevers voiced his irritation about the bureaucratic burdens facing Dutch publicly funded universities involved in the commercialization of biomedical inventions, or turning ‘public into private knowledge.’ [40]

Clevers added another dimension to this ‘academic environment’ in a much longer interview on a radio show hosted by Dutch media personality Jort Kelder about two weeks later. In the interview, later titled ‘Does science go down because of open science?’, he added his criticism of abandoning quantitative metrics in order to stimulate ‘open science’ to his sketch of an academic climate that did not support biomedical inventions: a creative attempt to connect two personal irritations of the stem cell biologist. [41] Clevers stated that he feared a brain drain of talented young academics, since it had become harder to obtain Dutch funding based on a CV stuffed with publications in high-impact-factor journals. Assessing academics on their commitment to open science rather than their publications in such journals would damage the Dutch ‘international top position’. Clevers himself was not young anymore. Yet, his transfer to the pharmaceutical company could be considered an example of the point he tried to make.

The exodus of a ‘potential Nobel laureate’ because of a hostile academic climate, worries about a brain drain, and potential damage to the international top position of the Netherlands because of ‘open science’ presented new political stakes. In response to Clevers’ media performances, Hatte van der Woude, MP for the Dutch conservative-liberal party VVD, filed parliamentary questions for minister Robbert Dijkgraaf, a theoretical physicist and, just like Clevers, a former president of the Royal Netherlands Academy of Arts and Sciences. Dijkgraaf had just exchanged his post as director of the Institute for Advanced Study in Princeton for a post as Minister of Education, Culture, and Science. Van der Woude asked whether Dijkgraaf was familiar with the radio show and the open letters, and how he viewed the trend of abandoning quantitative metrics and instead assessing scientists on their commitment to open science with qualitative criteria and ‘narrative CVs.’ She furthermore asked whether Dijkgraaf agreed that these reforms damaged the international position of Dutch science. [42] Dutch ministers are required to formally answer such questions on behalf of the government, and as such, these inquiries pushed the debate on reforming assessment to a pivotal point. If the Minister agreed with the critics, the reform policies were about to suffer a severe crisis of legitimacy.

Precisely the opposite happened. In straightforward language, Dijkgraaf replied that he supported the reform of assessment procedures under the banner of ‘recognition and rewards.’ He wanted to stimulate ‘open science’ as it matched his ambition to strengthen ‘the interaction between science and society,’ as stated in the coalition agreement signed by Hatte van der Woude’s own VVD. Abandoning the ‘problematic’ Journal Impact Factor and the Hirsch index, he explained with extraordinary precision, was in line with European trends. Such policy reforms did not harm the international reputation of the Netherlands, Dijkgraaf claimed. Instead, he noted: ‘the recent steps taken by Dutch universities enable Dutch science to strategically position itself even better.’ [43] Yet, the conservative-liberal MP was not entirely convinced: in the parliamentary committee on education, culture, and science, she demanded an ‘external and independent evaluation’ of the effects of the new ‘recognition and rewards’ policies on the international position of Dutch science. [44] She even proposed the agency that should carry out such an assessment: the Advisory Council for Science, Technology, and Innovation (AWTI).

The council’s advisory letter, titled ‘evaluating the qualities of science,’ was presented in December 2022 to the chair of the House of Representatives and concluded that there was ‘no evidence that new ways of assessing academics would harm Dutch science.’ The advisory council substantiated minister Dijkgraaf’s conviction that new ways of assessing academics reflect international developments. Most major science funders in Europe and the United States, the AWTI stated, have underwritten the principles of DORA or the ‘Agreement on Reforming Research Assessment’ (RRA) from the European Commission, and some of these funders have also stepped away from quantitative indicators such as the h-index and the Journal Impact Factor. The advisory council described the pioneering position of the Netherlands within these trends and highlighted the opportunity of shaping the international movement towards new ways of determining scientific quality. [45] The advisory letter settled parliamentary worries about ‘recognition and rewards’ and its connection to open science, and substantiated the criticism of assessing academics solely on quantitative indicators. Ironically, political pressure inspired by the public criticism from prominent scientists and the signatories of open letters published on the Dutch news platform ScienceGuide increased the legitimacy of using qualitative indicators as incentives for policy goals. Does this mean that ‘recognition and rewards’ has won?

Reprise: from technicalities to political deliberation

Looking back, it becomes clear how a political community of Dutch reformers, united in their aim to change how research quality is evaluated, has successfully mobilized institutional and political actors over the past decade. The Dutch debate about reforming research assessment is fascinating in that it took shape along the lines of a perceived dichotomy of critics versus reformers. [46] More specifically, Dutch critics worried about the abandonment of quantitative indicators and the introduction of a ‘narrative-based CV.’ As such, the public and political discussion about ‘recognition and rewards’ centered on technicalities: the question of how research quality can be defined. And because critics failed to mobilize political resistance to recent policy changes, it is tempting to frame the developments in this story as a victory of ‘qualitative’ perspectives over ‘quantitative’ ones.

Such a perspective overlooks the most impressive achievement underlying the process of making policies on ‘recognition and rewards’ uncontroversial: in the past decade, indicators of research(er) quality have come to be perceived as incentives to meet policy goals. This means that policymakers operationalized a cliché in the sociology of science: academics shape their behavior according to how their performance is measured because they have an interest in how they are assessed. The political community that aims to reform how academics are ‘recognized and rewarded’ is not, at least not anymore, interested in the ‘best’ way to measure scientific quality or ‘excellence’, but in defining what scientists should do and in finding indicators to assess whether scientists meet these goals.

However, what the goals of science as a collective and of scientists as individuals are is not naturally given: answering this question requires a fundamental and, indeed, political discussion. From that perspective, the public and political discussion about recognition and rewards in 2022 is a missed opportunity. Before debating whether publications, high-impact-factor journals, narrative CVs, citation counts (or whatever) are the best way to define quality, Dutch politicians, worried academics, or ‘the public’ could have discussed what they expect from academia and what values should serve as a point of departure. Is it international competitiveness? And if so, why is that necessary? Is it socioeconomic growth? Is it technology? Is it a well-informed and critical democratic society? The values and ideals underlying ‘recognition and rewards’ have become unclear as well as uncontested. If policymakers introduce assessment criteria on the presumption that these indicators incentivize academics to behave in specific ways, those representing Dutch citizens had better deliberate on what we, as citizens of a democratic, expert-based society, want from our science. In the end, we should realize that ‘recognition and rewards’ is not about quantitative or qualitative indicators: it is the social and political goals of science that are at stake.

References

[1] Adviesraad voor Wetenschap, Technologie en Innovatie, ‘Duiden van de kwaliteiten van wetenschap’ (19 December 2022), https://www.awti.nl/documenten/publicaties/2022/12/19/index.

[2] Sandra L. Schmid, ‘Five years post-DORA: promoting best practices for research assessment’, Molecular Biology of the Cell 28, no. 22 (1 November 2017): 2941, https://doi.org/10.1091/mbc.E17-08-0534.

[3] The PLoS Medicine Editors, ‘The Impact Factor Game’, PLOS Medicine 3, no. 6 (6 June 2006): e291, https://doi.org/10.1371/journal.pmed.0030291.

[4] Paul Wouters, ‘Eugene Garfield (1925–2017)’, Nature 543, no. 7646 (March 2017): 492, https://doi.org/10.1038/543492a.

[5] Vincent Larivière and Cassidy R. Sugimoto, ‘The Journal Impact Factor: A Brief History, Critique, and Discussion of Adverse Effects’, in Springer Handbook of Science and Technology Indicators, edited by Wolfgang Glänzel et al., Springer Handbooks (Cham: Springer International Publishing, 2019), 3–24, https://doi.org/10.1007/978-3-030-02511-3_1.

[6] Aileen Fyfe et al., ‘Untangling Academic Publishing: A history of the relationship between commercial interests, academic prestige and the circulation of research’ (Zenodo, 25 May 2017), 8, https://doi.org/10.5281/zenodo.546100; John B. Thompson, Books in the digital age: the transformation of academic and higher education publishing in Britain and the United States (Polity, 2005); Kimberly Douglas, ‘The Serials Crisis’, The Serials Librarian 18, no. 1–2 (2 August 1990): 111–21, https://doi.org/10.1300/J123v18n01_08.

[7] Eugene Garfield, Journal citation reports (Citeseer, 1991); Larivière and Sugimoto, ‘The Journal Impact Factor’, 4.

[8] ‘Not-so-Deep Impact’, Nature 435, no. 7045 (June 2005): 1003–4, https://doi.org/10.1038/4351003b.

[9] Per O. Seglen, ‘Why the impact factor of journals should not be used for evaluating research’, BMJ 314, no. 7079 (1997): 497.

[10] The PLoS Medicine Editors, ‘The Impact Factor Game’, 0707.

[11] Mario Biagioli and Alexandra Lippman, Gaming the metrics: misconduct and manipulation in academic research (MIT Press, 2020), 1; Marilyn Strathern, ‘“Improving ratings”: audit in the British University system’, European Review 5, no. 3 (1997): 305–21.

[12] Jerome K. Vanclay, ‘Impact Factor: Outdated Artefact or Stepping-Stone to Journal Certification?’, Scientometrics 92, no. 2 (1 August 2012): 214, https://doi.org/10.1007/s11192-011-0561-0.

[13] ‘Read the Declaration’, DORA (blog), accessed 20 December 2021, https://sfdora.org/read/.

[14] Schmid, ‘Five years post-DORA’, 2942.

[15] ‘DORA Roadmap: A Two-Year Strategic Plan for Advancing Global Research Assessment Reform at the Institutional, National, and Funder Level’, DORA (blog), 27 June 2018, https://sfdora.org/2018/06/27/dora-roadmap-a-two-year-strategic-plan-for-advancing-global-research-assessment-reform-at-the-institutional-national-and-funder-level/.

[16] Frank Miedema, ‘Science in Transition: How Science Goes Wrong and What to Do About It’, in Open Science: The Very Idea, edited by Frank Miedema (Dordrecht: Springer Netherlands, 2022), 67–108, https://doi.org/10.1007/978-94-024-2115-6_3; ‘Science in Transition Conference — KNAW’, accessed 21 December 2021, https://www.knaw.nl/nl/actueel/agenda/science-in-transition-conference.

[17] ‘Impact niet langer leidend’, ScienceGuide, 4 December 2014, https://www.scienceguide.nl/2014/12/impact-niet-langer-leidend/.

[18] ‘Instellingen en financiers gaan wetenschappers anders belonen en waarderen’, ScienceGuide, 26 November 2018, https://www.scienceguide.nl/2018/11/anders-belonen-en-waarderen/; VSNU et al., ‘VSNU, NWO, NFU en ZonMw geven impuls aan verandering in het waarderen en belonen van wetenschappers’, accessed 21 December 2021, https://www.scienceguide.nl/wp-content/uploads/2018/11/Statement-Waarderen-en-Belonen-van-Wetenschappers_vDEF-2.pdf.

[19] ‘Instellingen en financiers gaan wetenschappers anders belonen en waarderen’.

[20] ‘Hoe maak je een systeem waarvan niet iedereen opnieuw gek wordt?’, Mare Online, accessed 21 December 2021, https://www.mareonline.nl/achtergrond/hoe-maak-je-een-systeem-waarvan-niet-iedereen-opnieuw-gek-wordt/.

[21] VSNU et al., ‘Room for everyone’s talent: towards a new balance in recognising and rewarding academics’ (The Hague, November 2019), 3, https://recognitionrewards.nl/wp-content/uploads/2020/12/position-paper-room-for-everyones-talent.pdf.

[22] VSNU et al., 3.

[23] VSNU et al., 4.

[24] ‘NWO Talent Programme | Vici - Health Research and Medical Sciences | NWO’, accessed 21 December 2021, https://www.nwo.nl/en/calls/nwo-talent-programme-vici-health-research-and-medical-sciences; ‘NWO | NWO voert narratief CV door in Vici-ronde 2020’, accessed 21 December 2021, https://www.nwo.nl/nieuws/nwo-voert-narratief-cv-door-vici-ronde-2020.

[25] ‘NWO | DORA’, accessed 21 December 2021, https://www.nwo.nl/dora.

[26] NWO and ZonMw, ‘Grant application pre-proposal form 2020: NWO Talent Programme – Veni scheme’, 5, accessed 21 December 2021, https://www.nwo.nl/sites/nwo/files/media-files/EXAMPLE%20Veni%202020%20Pre-proposal%20Form%20with%20expanded%20-%20AES%20SSH%20ZonMw.pdf.

[27] Chris Woolston, ‘Impact Factor Abandoned by Dutch University in Hiring and Promotion Decisions’, Nature 595, no. 7867 (25 June 2021): 462, https://doi.org/10.1038/d41586-021-01759-5.

[28] ‘Banning Journal Impact Factors Is Bad for Dutch Science’, Times Higher Education (THE), 3 August 2021, https://www.timeshighereducation.com/opinion/banning-journal-impact-factors-bad-dutch-science.

[29] ‘Nieuwe Erkennen en Waarderen kan kwetsbare groepen schaden’, ScienceGuide, 27 July 2021, https://www.scienceguide.nl/2021/07/nieuwe-erkennen-en-waarderen-kan-kwetsbare-groepen-schaden/.

[30] ‘Oud-KNAW-president ernstig bezorgd over Erkennen en Waarderen, maar minister enthousiast’, ScienceGuide, 8 November 2021, https://www.scienceguide.nl/2021/11/oud-knaw-president-ernstig-bezorgd-over-erkennen-en-waarderen-maar-minister-enthousiast/.

[31] Hans Clevers NL, ‘Mijn universiteit stopt kwantitatieve output meting en richt zich enkel op de thema’s “samenwerking” en “open science”. Belangrijk, maar dat zijn bijv mentorschap, good citizenship, talent en passie ook. Allemaal relevant, maar helaas niet kwantificeerbaar en dus niet objectief.’, Twitter, @HansCleversNL, 7 November 2021, https://twitter.com/HansCleversNL/status/1457382003242897412.

[32] Hans Clevers NL, ‘Toevoeging van deze criteria aan het bestaande evaluatiesysteem zou uitstekend zijn. Maar E&W lijkt een andere weg te kiezen: het primaire doel van wetenschap, het daadwerkelijk genereren van nieuwe kennis, dreigt niet meer kwantitatief gemeten te worden.’, Twitter, @HansCleversNL, 7 November 2021, https://twitter.com/HansCleversNL/status/1457385667789529094.

[33] Hans Clevers NL, ‘Nederland wil via ‘Erkennen en Waarderen’ het personeelsbeleid (onderwijs, wetenschap EN management) aan de universiteiten moderniseren via een radicale wijziging van de wetenschappelijke evaluatiesystematiek. NWO loopt hierin mee. Mijn grote zorgen hierover:’, Twitter, @HansCleversNL, 6 November 2021, https://twitter.com/HansCleversNL/status/1457017307096625157.

[34] Hans Clevers NL, ‘Er kwam een evaluatiemethodiek: het aantal publicaties/jaar. Dit werkte redelijk, maar wetenschappers pasten hun publicatiestrategie aan: liever drie kleine papers dan een mooie. Journal impact factor en citaties werden toen het nieuwe criterium. Dit systeem werkte aardig goed’, Twitter, @HansCleversNL, 6 November 2021, https://twitter.com/HansCleversNL/status/1457022284191969284.

[35] Hans Clevers NL, ‘Maar anderen doen dat wel: Dit jaar zakten de Nederlandse universiteiten fors in de Shanghai ranking, en enkele vielen zelfs uit de top 100. Deze ranking is niet op reputatie gebaseerd, maar exclusief op kwantitatieve output parameters. We kunnen daar onze schouders over ophalen.’, Twitter, @HansCleversNL, 7 November 2021, https://twitter.com/HansCleversNL/status/1457386237946474499.

[36] ‘Anti-intellectualisme kan de Nederlandse wetenschap in gevaar brengen’, De Telegraaf, 11 November 2021, https://www.telegraaf.nl/watuzegt/2016592458/anti-intellectualisme-kan-de-nederlandse-wetenschap-in-gevaar-brengen.

[37] ‘Bedreigingen voor fundamenteel wetenschappelijk onderzoek in Nederland brengen onze toekomstige welvaart in gevaar’, ScienceGuide, 9 December 2021, https://www.scienceguide.nl/2021/12/bedreigingen-voor-fundamenteel-wetenschappelijk-onderzoek-in-nederland-brengen-onze-toekomstige-welvaart-in-gevaar/.

[38] Guest authors, ‘We moeten af van telzucht in de wetenschap’, ScienceGuide, 21 July 2021, https://www.scienceguide.nl/2021/07/we-moeten-af-van-telzucht-in-de-wetenschap/.

[39] ‘Topwetenschapper Clevers stapt over naar farmaconcern Roche’, FD.nl, accessed 6 January 2023, https://fd.nl/bedrijfsleven/1428836/topwetenschapper-clevers-stapt-over-naar-farmaconcern-roche; ‘Nederlandse wetenschapper Hans Clevers gaat onderzoeksafdeling Roche leiden’, NOS, 2 February 2022, https://nos.nl/artikel/2415388-nederlandse-wetenschapper-hans-clevers-gaat-onderzoeksafdeling-roche-leiden.

[40] Hans Offringa, ‘Hans Clevers: “Wetenschap top in Nederland, aantal bedrijven buitengewoon armoedig”’, Op1 (blog), 3 February 2022, https://op1npo.nl/2022/02/03/hans-clevers-is-benoemd-tot-het-hoofd-van-de-onderzoeksafdeling-van-het-zwitserse-roche/.

[41] ‘Dr Kelder en Co: Gaat de wetenschap ten onder aan open science? & Gezag in plaats van macht, alstublieft’, NPO Radio 1, 19 February 2022, https://www.nporadio1.nl/uitzendingen/dr-kelder-en-co/28bd11da-a771-482b-b2bd-52045f2adaf9/2022-02-19-dr-kelder-en-co-gaat-de-wetenschap-ten-onder-aan-open-science-gezag-in-plaats-van-macht-alstublieft.

[42] Hatte van der Woude, ‘De podcast “Gaat de wetenschap ten onder aan open science?”’ (2022), https://www.tweedekamer.nl/kamerstukken/kamervragen/detail.

[43] Ministerie van Onderwijs, Cultuur en Wetenschap, ‘Antwoorden op Kamervragen over de podcast over wetenschap en open science’, kamerstuk (30 March 2022), https://www.rijksoverheid.nl/documenten/kamerstukken/2022/03/30/antwoorden-op-kamervragen-over-de-podcast-gaat-de-wetenschap-ten-onder-aan-open-science. (My italics.)

[44] ‘Verslag van een notaoverleg, gehouden op 11 april 2022, over hoofdlijnenbrief Hoger onderwijs en wetenschapsbeleid’, kamerstuk, 11 April 2022, https://www.tweedekamer.nl/kamerstukken/commissieverslagen/detail; ‘Motie Van der Woude/Van der Graaf over een eenduidige visie op maatschappelijke impact, waaronder valorisatie - Hoger Onderwijs-, Onderzoek- en Wetenschapsbeleid - Parlementaire monitor’, kamerstuk, 11 March 2022, https://www.parlementairemonitor.nl/9353000/1/j9vvij5epmj1ey0/vls0qdwicbyg.

[45] Adviesraad voor Wetenschap, Technologie en Innovatie, ‘Duiden van de kwaliteiten van wetenschap’.

[46] ‘Verslag van een notaoverleg, gehouden op 11 april 2022, over hoofdlijnenbrief Hoger onderwijs en wetenschapsbeleid’, 4.


Martijn van der Meer
Chair

Martijn is a historian of science and medicine and works as a lecturer and researcher at Erasmus Medical Centre and Erasmus University Rotterdam. He is also a policy advisor on responsible research at Tilburg University. In 2018, he co-founded the Center of Trial and Error and currently chairs the board.
