~ 35 min read
DOI: https://doi.org/10.36850/448e9a73-75b2
Cultures of Trial and Error: The Narrative Side of (Open) Science
JOTE x NanoBubbles present Cultures of Trial and Error: a peer-reviewed blog series on error correction in science.
Heterogeneous communities with explicit commitments to science corrections – or what this blog series summarises under the descriptor ‘cultures of trial and error’ – have long existed in science (Knorr, 1979; Shapin & Schaffer, 1985; Derksen, 2019; Hesselmann & Reinhart, 2024; Antonakaki et al., 2025). Some might even argue that the supervision and monitoring of scientific practices is a key concern of science, equally as important as its primary fact-finding mission (Derksen, 2019). One reason is that, given flawed human nature, mistakes – or, even worse, intentional fraud – as well as other kinds of issues and redundancies are inevitable. And if left unchecked, ‘perverse incentive structures’ will grow rampant, jeopardising public trust in science, undermining its credibility, and thereby directly threatening the scientific endeavour itself (see e.g., Hubbard, 2016; Lilienfeld, 2017; Crane, 2018; Klein et al., 2018; Dirnagl, 2019; Wingen et al., 2019; Penders, 2022; Methner et al., 2023; Brodeur et al., 2024; Balafoutas et al., 2025). Naturally, these issues need to be addressed. This is especially true when circumstantial evidence of the non-replicability of central research findings points not just to individual problems within a single discipline, but to systemic issues spanning multiple research disciplines. This was the case in social psychology and biomedicine, when large-scale replication studies failed to replicate key findings of their respective fields (see Crane, 2018; Zwaan et al., 2018; Malich & Munafò, 2022). The uncovering of these cross-disciplinary issues led many to declare an urgent crisis of confidence, or a replication crisis, in the mid-2010s (see Pashler & Harris, 2012; Earp & Trafimow, 2015; Ioannidis, 2015; Nosek et al., 2022).
Science seemed ‘broken’, and this inspired researchers and other non-academic stakeholders to come together in a grassroots-like manner to find practical solutions to these issues – for example, by creating international replication initiatives (Bartscherer & Reinhart, 2025) and new publishing formats such as pre-prints, registered reports, or the publishing of null results (Nosek et al., 2015; Baxter & Burwell, 2017; Penders, 2022; Balafoutas et al., 2025). Other solutions included the creation of new reporting and teaching guidelines (McLoughlin & Drummond, 2017; Perrault, 2023) and automated labs (Adam, 2024; Ulpts et al., 2025-2), as well as continued campaigning on these issues and the release of manifestos (e.g., Munafò et al., 2017).
However, what this recounting or story of science tends to overlook is that it is simply that: a story. A very convincing and highly successful one, given how many actors in academia and its periphery have since adopted it and joined the Open Science (OS) movement as active reformers (Nelson et al., 2021; Penders, 2022; Peterson & Panofsky, 2023; Bartscherer & Reinhart, 2025; Ulpts et al., 2025-2). Many of them are highly influential stakeholders from academia and beyond (Bartscherer & Reinhart, 2025). All have seemingly embraced the OS mission, all agreeing that science is deeply broken and needs to fundamentally change its ways – primarily by becoming more open, replicable, reproducible, reliable, efficient, transparent, robust, rigorous, inclusive, diverse, and so on (Ulpts et al., 2025-2).
Building on Bruner’s (1991) idea that narratives are a cornerstone of how we, as humans, construct our social realities, and on Polkinghorne’s (1995) extension of it into the method of narrative inquiry, in this blogpost I will attempt to provide a first schematic overview of the narrative mosaic that, pieced together, helps create this highly successful story the OS movement tells about science. I will highlight some of its more prominent narrative elements (subplots) and contextualise the demands that are formulated in them.1 The overall story they help to form is intended to convince others of the need for what it is promoting: fundamental (and lasting) changes to the way that science is being done. The OS story is thus purposeful, influential, and by no means neutral. Hence, we shall now take a closer look at the ideas that this story of science promotes and the perspectives that underpin it (Bruner, 1991; Crespo López et al., 2025).
To do so, I will be drawing on my work in the DFG-funded research project Replication as a Social Movement, in which we mapped a total of 175 international replication initiatives (RIs)2 via virtual snowball sampling of this core community in the OS movement, and conducted research interviews (n = 28) with people active in these RIs to learn about their work and what motivated them to engage with it in the first place (Bartscherer & Reinhart, 2025). This material, an extensive literature review, and participant observations of key conferences and events of the OS movement (2023 – 2025) form the basis for the following analysis.
Six Prominent Subplots in the Open Science Movement’s Story of (a broken) Science
- Open Science as ‘good science’ that is fighting a corrupt system:
As part of their story about science, reformers commonly describe academia as a deeply ‘broken system’ that is dominated by biased researchers (see ‘publication bias’; Flis, 2019), who are rewarded despite their corrupt behaviours and ‘questionable research practices’ (QRPs; also see ‘perverse incentives’, Stephan, 2012; Chambers et al., 2014; Crane, 2018; Penders, 2024; Balafoutas et al., 2025). This subplot usually promotes a rather Manichean characterisation of the social dynamics at play, commonly speaking of a fight between ‘good’ and ‘bad science’. Despite such over-simplifications, the wider community seems to agree that there is in fact a ‘better science’ that they are helping to produce (Nosek et al., 2012; Farnham et al., 2017; Goldensher, 2023; Crespo López et al., 2025). As part of this subplot, reformers tend to depict themselves as (anti)heroes who are trying to counteract the ‘broken system’ by bringing about the urgently needed (positive) changes and reforms – and they do all this despite not being acknowledged or rewarded for their efforts by those in power (e.g., not receiving awards or funding for their OS work; see Buck, 2023). This first subplot provides a core explanation as to why reformers assume that the replication crisis was able to occur in the first place (see corruption and biases) and why their OS work matters so much: because they are fighting (flawed) human nature and the self-serving interests of researchers and corrupt academic institutions.
- Reformers are outsiders & Open Science is driven by grassroots activism:
When discussing their positions in the current science system, reformers often stylise themselves as outsiders (see e.g., Buck, 2023) who belong to an active grassroots movement that is predominantly driven by early career researchers seeking to create change bottom-up (Nicholas et al., 2019, 2020; Toribio-Flórez et al., 2021). Consequently, reformers tend to portray themselves as those who ‘shake things up’, speak inconvenient truths to power (see the publication of null results; Govaart et al., 2022), and push for urgently needed (credibility) revolutions (O’Donohue et al., 2022). This ‘underdog’ or ‘outsider’ subplot of OS as a ‘revolutionary’ (scientific) counter-culture movement (Bartling & Friesike, 2014) is also well illustrated by the names that reformers choose for their projects – for example, the ‘RIOT Science Club’, founded in 2018 by researchers at King’s College London. Their inconvenience and rioting are, however, always seen as serving a greater good (science). This subplot seems most relevant for the reformers’ communal identity-building and is usually brought up to publicly justify their (often rather harsh) critiques of existing protocols and (academic) institutions. However, given the financial ties of central OS institutions (e.g., the Center for Open Science, US) to private funders (e.g., Arnold Ventures) and the direct involvement of highly influential actors from politics and private business (e.g., ex-Alphabet CEOs in the newly founded Metascience Alliance), the claims in this subplot about OS being ‘counter-culture’ or ‘grassroots’ appear to stand in stark contrast to the current organisational practices of the movement (Bartscherer & Reinhart, 2025).
- Science is in crisis & Open Science will save it:
Another important subplot in their story on science promotes the idea that science is not only corrupt (see the first subplot) but that it is (or has been) in an urgent crisis – namely, a ‘replication crisis’ or a ‘crisis of confidence’ (Hesselmann & Reinhart, 2024; Crespo López et al., 2025). A handful of highly publicised cases of academic fraud and QRPs by renowned researchers in the mid-2010s were commonly interpreted not as outliers, but as symptoms of a sick and corrupt system (see the first subplot) (Penders, 2024; Bartscherer & Reinhart, 2025). The failed large-scale replication attempts that followed them (see e.g., Klein et al., 2018) were thus seen as indisputable proof that issues of irreproducibility were indeed systemic and widespread, and that a crisis was indeed unfolding (Morawski, 2022; Ulpts et al., 2025-2). As a consequence, reformers commonly warn about a loss of the public’s trust (in the institution of science) if these issues and ‘low standards’ are left unaddressed (see e.g., Camerer et al., 2016; Penders et al., 2019; Roettger et al., 2019; Wingen et al., 2020; Edlund et al., 2022; Methner et al., 2023; Penders, 2024). The loss of the public’s trust in science thus serves as justification for the urgency with which reformers move to implement their drastic and universal reforms. This urgency can be seen as a narrative motor behind the reformers’ push to act immediately and implement OS reforms as quickly as possible (Penders, 2024). An additional reason is that OS and its reforms are, in this subplot, usually depicted as the (research) programme that will save science as a whole (Mirowski, 2018; Crespo López et al., 2025) – allegedly by making the system more resilient, thereby preventing similar crises from happening in the future (Stengers, 2016).
- Open Science as the correct way of ‘doing science’:
Building on the crisis subplot and the claim that OS is able to save science is the subplot that OS is bringing science back to its roots and its original ideals, and (with its proposed reforms) is going to undo all the ‘bad tendencies’ that have been allowed to grow rampant for so long. As part of this subplot, OS is usually hailed as a panacea that is going to help fulfil science’s ‘original’ promises to society (Mirowski, 2018; Bartscherer, 2025; Crespo López et al., 2025; Ulpts et al., 2025-1). With a somewhat melancholic undertone, OS herein is typically stylised as the actual embodiment of an imagined past, a utopia of ‘good science’ that will finally serve the common good (again) (Bartscherer & Ulpts, 2025-1; Penders, 2025). Slogans such as ‘Open Science: Just science done right!’ embody this idea (Imming & Tennant, 2018). Closely related to the second subplot of OS as a counter-culture movement, OS is herein commonly characterised as actively trying to resist what some have called the ‘neoliberalisation of science’ (Nosek et al., 2012; Mirowski, 2018; Hostler, 2024). Discourses on ‘publication biases’ (Marks-Anglin & Chen, 2020), the ‘gaming of metrics’ (Biagioli & Lippman, 2020), or the ‘publish or perish’ mentality (Elbanna & Child, 2023; Knöchelmann, 2024) often point to this problematised aspect of the commodification of science to explain why things need to fundamentally change and why OS reforms should be implemented (Hostler, 2024).
- Open Science will foster a ‘slow science’ & increase its overall quality, democratisation, equity, inclusion, and sustainability:
Closely related to the fourth subplot are claims in which OS reforms are depicted as inevitably resulting in a positive ‘slowing down of science’. Reformers argue that through the active shift of incentive structures towards OS practices, the pace of science will become ‘humane’ again – primarily through the anticipated rewards of publishing more ‘correct’ claims and replications of central studies (which takes time), instead of the quick and constant publication of loads of papers annually (with low-quality claims), e.g. in the form of ‘salami slicing’ (Nosek et al., 2012; Stengers, 2016; Frith, 2020; Antonakis, 2023; Hostler, 2024; Köstenbach & Oransky, 2024). It is then usually assumed that by following OS ideas, the pressure on researchers to follow academia’s ‘perverse incentives’ will be effectively alleviated, and that research as well as career decisions will be made more consciously (Berkowitz & Delacour, 2020; Balafoutas et al., 2025). This ‘anti-capitalist’ promise of OS (as a better future) also goes hand in hand with the idea that OS is helping science become more sustainable through its practices and new formats – e.g., Open Access publishing or preprinting. The idea is that these are effectively ‘saving human, monetary, and natural resources in all research fields’, ‘stimulating international collaborations’, and making science more efficient overall (Fraser et al., 2021; Govaart et al., 2022).
Furthermore, OS is argued to positively influence scientific progress and the scientific exchange of knowledge. This is supposed to be achieved through, e.g., eliminating the gatekeeping functions of journals or peer review (e.g., through pre-printing), making access to knowledge (primarily online) more democratic, and increasing its inclusiveness (Desmond, 2024). Equity and inclusion are thus often brought up as core values of OS, making their (structural) increase within the science system a primary goal. To realise this goal, in this subplot to their larger story on science, reformers usually advocate for making research (findings) more (financially) accessible to marginalised researchers, particularly those in the global south, e.g., through open data practices and the elimination of paywalls (Serwadda et al., 2018; Govaart et al., 2022; Desmond, 2024). Whether these measures have actually been effective, and how much they have contributed to the de-marginalisation of global south researchers (if at all), has, however, been seriously questioned (Mirowski, 2018; Bartscherer & Ulpts, 2025-2).
- Humans are the problem; standardisation & automation are the (mechanical) solutions:
To offer explanations as to why so many replication attempts have failed (and continue to fail) and why cases of fraud and QRPs were able to occur in the first place, reformers often point to flawed human nature as a primary reason for concern: in true rationalist fashion, researchers are seen as ‘only human’ and thus as deeply flawed beings who – given the opportunity – will always lie, cheat, or steal. Humans thus introduce uncertainties to any scientific endeavour, no matter how noble the cause; some intentionally, but most unintentionally (Ulpts et al., 2025-2). To eradicate these (human) uncertainties from research processes, reformers commonly present automation and standardisation (through strict protocols) as viable solutions to reduce said (inherently human) biases and uncertainties (see Flis, 2019; Penders, 2022), increase (scientific) efficiency (Peterson & Panofsky, 2021), and help produce a ‘better science’ overall: a ‘mechanically objective’ science that will make research decisions and practices more transparent, traceable, and thus reproducible. Research that has been produced in this manner is depicted as serving public as well as private interests more effectively, introducing a cost-calculation logic to the discourse (Daston & Galison, 2010). The use of computerised tools, usually artificial intelligence (AI), is often brought up in this context as well, e.g. to help estimate the replicability of studies in a faster, more reliable, and more efficient way (see e.g., Yang et al., 2020). Another related example would be ‘automated labs’ that, with the help of AI, promise to reduce human interventions to an absolute minimum (see Mirowski, 2018; Ulpts et al., 2025-2). Reflections on concerns regarding the irreproducibility of AI decision-making are seemingly absent from these suggestions.
At its core, this subplot seems to follow the positivist conviction that there is an ‘objective’ truth that science ought to uncover, and that (human) obstacles need to be removed from this process as much as possible (see Derksen, 2019; Graves et al., 2025).
The Story Reformers Tell About (Open) Science
The six subplots that I outlined in this blogpost are to be understood as condensed versions of some of the more prominent narrative elements one encounters when talking to OS reformers about their work, when reading their papers, or when listening to their talks, conference presentations, and public online discussions on everything (open) science. They are not to be understood as having hard and clear boundaries. Rather, as one might have already noticed when reading this blogpost, they tend to blend into each other as they build and expand on one another, typically occurring together in actual discourse. Their distinction into six separate subplots on my end is therefore more of a theoretical aid to help render certain notions and ideas within the OS story on science distinguishable and therefore discussable. Woven together, these and other elements spell out the (sometimes contradictory) ‘communal storytelling’ of the OS movement: about science in general, their own work, their goals, but also their perceived ‘outside’. It is a story I attempted to retell in the first paragraph of this blogpost. This story can be seen as a partial account of reality and, according to Bruner (1991), is by no means a ‘neutral’ form of communicating information. Instead, “There seems indeed to be some sense in which narrative, rather than referring to ‘reality’, may in fact create or constitute it, as when ‘fiction’ creates a ‘world’ of its own – […]” (Bruner, 1991, p. 13). Using the lens of narratives or stories when looking at the discourse around OS allows us, the (scientific) audience, to take a step back and critically reflect on how the OS movement makes sense of the world. Instead of taking their story of science as fact, we can now take it for what it is: a summary of personal accounts and attempts at making sense of the (scientific) world and what reformers believe to have occurred in it.
A prime example is the so-called replication crisis, which reformers vehemently insist has in fact occurred, whereas their critics call it a moral panic and claim that the cases of fraud and problematic research practices were simply overblown. Critics further counter that the failed replication attempts that followed this panic were more an issue of skill, incomplete protocols, or tacit knowledge that those attempting to replicate the works seemed to be lacking (see, e.g., Monin, Oppenheimer, et al., 2014).
As demonstrated with the chosen examples, the story that OS tends to tell about the (more recent) history of science consists of many universal claims and recurring themes, usually with a strong ‘positivist’ undertone (Derksen, 2019; Graves et al., 2025). Most often, this story and its subplots convey reformers’ deep conviction that there are indeed universal and objective truths that they have helped to uncover: about science being in crisis, corrupt, and vulnerable due to human errors and biases. All the while, they promote ideas of a ‘good’ vs ‘bad science’ (or right vs wrong science), and thus rather simplistic Manichean characterisations of the world of science. It is this story that reformers tell about science that has given them the confidence to urgently push their reforms onto the larger science system – reforms that they claim will fix a (currently) broken science system. OS, for them – and for increasingly more actors beyond academia – seems to have become the answer to all questions (Mirowski, 2018; Bartscherer & Reinhart, 2025; Crespo López et al., 2025). Yet their story of science promotes understandings of our (scientific) realities that have apparent (positivistic) biases themselves: ideals of hard facts and undeniable truths that can be uncovered and corrected with the right (=OS) methods, all intended to lead us towards an idealistically proclaimed ‘scientific utopia’ (see, e.g., Nosek et al., 2012; Bartscherer & Ulpts, 2025-1; Penders, 2025). Noticeably, the solutions that they produce show a strong practice orientation, materialising in a multitude of mechanical and standardised OS guidelines, protocols, and recommendations that are to be implemented immediately and with urgency (Penders, 2024; Ulpts et al., 2025-2). The OS story has, however, not only produced practical solutions; it has also introduced a strong sense of uncertainty into the overall discourse about science (in crisis).
Beyond cohorts of anxious students and researchers (‘what/who can we still trust?’), some negative consequences have since manifested in the form of unwanted fraternisations by autocratic figures in the political domain. The Trump administration notoriously picked up elements of the OS story and language in a White House memorandum (‘restoring gold standard science’) to justify its own political agenda and exert control over the scientific domain (Hesselmann & Reinhart, 2024; Penders, 2024; O’Grady, 2025; Ulpts et al., 2025-2).
The OS story is undeniably a demonstrably successful one, given its large uptake by (non)academic actors across many different social domains (Bartscherer & Reinhart, 2025). Its impact can be felt across academia and beyond, despite theirs being only one story amongst many of the (recent) history of science (Daston, 2017). Its cursory adoption as fact, however, jeopardises the collective memory of the scholarly community as well as the imagination of its possible future(s) (Devezer & Penders, 2023; Crespo López et al., 2025). When engaging with the OS story and the solutions reformers propose to the (systemic) issues they claim to have identified across all of science, caution, or at the very least a heavy dose of scepticism, seems more than warranted.
This blog post series has been financially supported by 'NanoBubbles: how, when and why does science fail to correct itself', a project that has received Synergy grant funding from the European Research Council (ERC), within the European Union’s Horizon 2020 programme, grant agreement no. 951393.
Footnotes:
[1]: Also see Nelson et al. (2021) for a more detailed analysis of the discursive dimensions of the replication crisis and, more specifically, of reproducibility in the literature.
[2]: Replication initiatives were defined by us as “(temporary) projects and institutions that have or had the explicitly stated mission to, either, replicate or reproduce findings of existing (central) studies and experiments themselves (i.e., replication studies), or, support the increase of replication numbers and the reproducibility of research findings (e.g., via providing digital or institutional infrastructures, funding, awards, university curricula, publishing guidelines, workshops, teaching material, etc.).” (see Bartscherer & Reinhart, 2025).
References:
Adam, D. (2024). The automated lab of tomorrow. Proceedings of the National Academy of Sciences,121(17), e2406320121. https://doi.org/10.1073/pnas.2406320121
Antonakaki, M., Sanchez, C. F., & Barbeitas, M. (2025). Cultures of Trial and Error. Identifying and Overcoming Barriers in Science Correction. Blogpost. https://doi.org/10.36850/dc83cd30-11ad
Antonakis, J. (2023). In support of slow science: Robust, open, and multidisciplinary. The Leadership Quarterly, 34(1), 101676. https://doi.org/10.1016/j.leaqua.2023.101676
Balafoutas, L., Celse, J., Karakostas, A., & Umashev, N. (2025). Incentives and the replication crisis in social sciences: A critical review of open science practices. Journal of Behavioral and Experimental Economics, 114, 102327. https://doi.org/10.1016/j.socec.2024.102327
Bartling, S., & Friesike, S. (Eds.). (2014). Opening Science. Springer International Publishing. https://doi.org/10.1007/978-3-319-00026-8
Bartscherer, S. F. (2025). 'People tend to be kind by default'. Presentation. Let’s Talk about Science Reform! A Workshop on Theoretical and Methodological Approaches to Investigating the Open Science Movement, Bartscherer, Sheena F. et al., 2025. https://doi.org/10.5446/70580
Bartscherer, S. F., & Reinhart, M. (2025, August 9). The (Non)Academic Community Forming around Replications: Mapping the International Open Science space via its Replication Initiatives. Preprint. https://doi.org/10.31235/osf.io/rbyt6_v2
Bartscherer, S. F., & Ulpts, S. (2025, July 26). Stories from the Imagined Utopia of an Open Science. Contextualizing Reformers' Conceptions of "Good Science". Presentation. Evidence & Uncertainty in Science: Methodological, Philosophical and Meta-Scientific Issues, Tübinger Forum für Wissenschaftskulturen, Universität Tübingen. Zenodo. https://doi.org/10.5281/zenodo.16453233
Bartscherer, S. F., & Ulpts, S. (2025, July 28). The Broken Promises of Open Science. How Open is Science Today Truly and for Whom?. Inclusive Open Science. From Global Asymmetries to Pluriversal Design, Berlin, Germany. Zenodo. https://doi.org/10.5281/zenodo.16531429
Baxter, M. G., & Burwell, R. D. (2017). Promoting transparency and reproducibility in Behavioral Neuroscience: Publishing replications, registered reports, and null results. Behavioral Neuroscience, 131(4), 275–276. https://doi.org/10.1037/bne0000207
Berkowitz, H., & Delacour, H. (2020). Sustainable Academia: Open, Engaged, and Slow Science. M@n@gement, 23(1), 1–3. https://doi.org/10.37725/mgmt.v23.4474
Biagioli, M., & Lippman, A. (Eds.). (2020). Gaming the Metrics: Misconduct and Manipulation in Academic Research. The MIT Press.
Brodeur, A., Esterling, K., (…), Young, L. (2024). Promoting Reproducibility and Replicability in Political Science. Research & Politics, 11(1), 20531680241233439. https://doi.org/10.1177/20531680241233439
Bruner, J. (1991). The Narrative Construction of Reality. Critical Inquiry, 18(1), 1–21.
Buck, S. (2023, August 13). Metascience Since 2012: A Personal History [Substack newsletter]. The Good Science Project. https://goodscience.substack.com/p/metascience-since-2012-a-personal
Camerer, C. F., Dreber, A., (…), Wu, H. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433–1436. https://doi.org/10.1126/science.aaf0918
Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. J. (2014). Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4–17. https://doi.org/10.3934/Neuroscience.2014.1.4
Crane, H. (2018). The Fundamental Principle of Probability: Resolving the Replication Crisis with Skin in the Game. Unpublished Manuscript. https://harrycrane.com/FPP-final.pdf
Crespo López, M., Pallise Perello, C., de Ridder, J., & Labib, K. (2025). Open Science as Confused: Contradictory and Conflicting Discourses in Open Science Guidance to Researchers. Preprint. https://doi.org/10.31222/osf.io/zr35u_v3
Daston, L. (Ed.). (2017). Science in the Archives: Pasts, Presents, Futures. University of Chicago Press. https://doi.org/10.7208/chicago/9780226432533.001.0001
Daston, L., & Galison, P. (2010). Objectivity (First paperback edition). Zone Books.
Derksen, M. (2019). Putting Popper to work. Theory & Psychology, 29(4), 449–465. https://doi.org/10.1177/0959354319838343
Derksen, M. (2021). A Menagerie of Imposters and Truth-Tellers: Diederik Stapel and the Crisis in Psychology. In S. Woolgar, E. Vogel, D. Moats, & C.-F. Helgesson (Eds.), The Imposter as Social Theory: Thinking with Gatecrashers, Cheats and Charlatans (1st ed., pp. 53–76). Bristol University Press. https://doi.org/10.2307/j.ctv1p6hphs.8
Desmond, H. (2024). Gatekeeping should be conserved in the open science era. Synthese, 203(5), 160. https://doi.org/10.1007/s11229-024-04559-2
Devezer, B., & Penders, B. (2023). Scientific Reform, Citation Politics and the Bureaucracy of Oblivion. Quantitative Science Studies, 4(4), 857–859. https://doi.org/10.1162/qss_c_00274
Dirnagl, U. (2019). Rethinking research reproducibility. The EMBO Journal, 38(2), e101117. https://doi.org/10.15252/embj.2018101117
Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00621
Edlund, J. E., Cuccolo, K., Irgens, M. S., Wagge, J. R., & Zlokovich, M. S. (2022). Saving Science Through Replication Studies. Perspectives on Psychological Science, 17(1), 216–225. https://doi.org/10.1177/1745691620984385
Elbanna, S., & Child, J. (2023). From ‘publish or perish’ to ‘publish for purpose’. European Management Review, 20(4), 614–618. https://doi.org/10.1111/emre.12618
Farnham, A., Kurz, C., (…), Hettne, K. (2017). Early career researchers want Open Science. Genome Biology, 18(1), 221. https://doi.org/10.1186/s13059-017-1351-7
Flis, I. (2019). Psychologists psychologizing scientific psychology: An epistemological reading of the replication crisis. Theory & Psychology, 29(2), 158–181. https://doi.org/10.1177/0959354319835322
Fraser, N., Brierley, L., (…), Coates, J. A. (2021). The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape. PLOS Biology, 19(4), e3000959. https://doi.org/10.1371/journal.pbio.3000959
Frith, U. (2020). Fast Lane to Slow Science. Trends in Cognitive Sciences, 24(1), 1–2. https://doi.org/10.1016/j.tics.2019.10.007
Goldensher, L. O. (2023). Problems of knowledge, problems of order: The open science field site. Frontiers in Sociology, 8, 1149073. https://doi.org/10.3389/fsoc.2023.1149073
Govaart, G. H., Hofmann, S. M., & Medawar, E. (2022). The Sustainability Argument for Open Science. Collabra: Psychology, 8(1), 35903. https://doi.org/10.1525/collabra.35903
Graves, T. A., Pownall, M., Prosser, A. M. B. (2025). Getting Creeped Out? Open Science, Qualitative Methods, and the Dangers of Positivism Creep. Preprint. https://doi.org/10.31235/osf.io/nphjc_v1
Hesselmann, F., & Reinhart, M. (2024). From scandal to reform: Approaches to research integrity at a turning point. Journal of Responsible Innovation, 11(1), 2414491. https://doi.org/10.1080/23299460.2024.2414491
Hostler, T. J. (2024). Open Research Reforms and the Capitalist University: Areas of Opposition and Alignment. Collabra: Psychology, 10(1), 121383. https://doi.org/10.1525/collabra.121383
Hubbard, R. (2016). Corrupt research. SAGE Publications, Inc., https://doi.org/10.4135/9781506305332
Imming, M. & Tennant, J. (2018). Sticker open science: just science done right (ENG). Zenodo. https://doi.org/10.5281/zenodo.1285575
Ioannidis, J. P. A. (2015). Failure to Replicate: Sound the Alarm. Cerebrum, Nov. 2015
Klein, R. A., Vianello, M., (…), Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225
Knorr, K. D. (1979). Tinkering toward success: Prelude to a theory of scientific practice. Theory and Society, 8(3), 347–376. https://doi.org/10.1007/BF00167894
Knöchelmann, M. (2024). Formal authorship in the wake of uncertain futures: The narrative of publish or perish in the humanities. Research Evaluation, 33, rvae044. https://doi.org/10.1093/reseval/rvae044
Köstenbach, T., & Oransky, I. (2024). Salami slicing and other kinds of scientific misconduct: A faux pas for the author, a disaster for science: An interview by Tamara Köstenbach with Ivan Oransky in October 2022 for the research project “Summa cum fraude – Wissenschaftliches Fehlverhalten und der Versuch einer Gegenoffensive”. Information – Wissenschaft & Praxis, 75(1), 1–6. https://doi.org/10.1515/iwp-2023-2041
Lilienfeld, S. O. (2017). Psychology’s Replication Crisis and the Grant Culture: Righting the Ship. Perspectives on Psychological Science, 12(4), 660–664. https://doi.org/10.1177/1745691616687745
Malich, L., & Munafò, M. R. (2022). Introduction: Replication of Crises - Interdisciplinary Reflections on the Phenomenon of the Replication Crisis in Psychology. Review of General Psychology, 26(2), 127–130. https://doi.org/10.1177/10892680221077997
Marks‐Anglin, A., & Chen, Y. (2020). A historical review of publication bias. Research Synthesis Methods, 11(6), 725–742. https://doi.org/10.1002/jrsm.1452
McLoughlin, P., & Drummond, G. (2017). Publishing replication studies to support excellence in physiological research. Experimental Physiology, 102(9), 1041–1043. https://doi.org/10.1113/EP086344
Methner, N., Dahme, B., & Menzel, C. (2023). The “replication crisis” and trust in psychological science: How reforms shape public trust in psychology. Social Psychological Bulletin, 18, e9665. https://doi.org/10.32872/spb.9665
Mirowski, P. (2018). The future(s) of open science. Social Studies of Science, 48(2), 171–203. https://doi.org/10.1177/0306312718772086
Monin, B., Oppenheimer, D. M., Ferguson, M. J., Carter, T. J., Hassin, R. R., Crisp, R. J., … Kahneman, D. (2014). Commentaries and rejoinder on Klein et al. (2014). Social Psychology, 45(4), 299–311.
Morawski, J. (2022). How to True Psychology’s Objects. Review of General Psychology, 26(2), 157–171.
Munafò, M. R., Nosek, B. A., (…), Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 0021. https://doi.org/10.1038/s41562-016-0021
Nelson, N. C., Ichikawa, K., Chung, J., & Malik, M. M. (2021). Mapping the discursive dimensions of the reproducibility crisis: A mixed methods analysis. PLOS ONE, 16(7), e0254090. https://doi.org/10.1371/journal.pone.0254090
Nicholas, D., Watkinson, A., (…), Herman, E. (2019). So, are early career researchers the harbingers of change? Learned Publishing, 32(3), 237–247. https://doi.org/10.1002/leap.1232
Nicholas, D., Jamali, (…), Polezhaeva, T. (2020). A global questionnaire survey of the scholarly communication attitudes and behaviours of early career researchers. Learned Publishing, 33(3), 198–211. https://doi.org/10.1002/leap.1286
Nosek, B. A., Alter, G., (…), Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Nosek, B. A., Hardwicke, T. E., (…), Vazire, S. (2022). Replicability, Robustness, and Reproducibility in Psychological Science. Annual Review of Psychology, 73, 719–748. https://doi.org/10.1146/annurev-psych-020821-114157
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
O’Donohue, W., Masuda, A., & Lilienfeld, S. (Eds.). (2022). Avoiding Questionable Research Practices in Applied Psychology. Springer International Publishing. https://doi.org/10.1007/978-3-031-04968-2
O’Grady, C. (2025). Science’s reform movement should have seen Trump’s call for ‘gold standard science’ coming, critics say. Science. https://doi.org/10.1126/science.zlwoaxz
Pashler, H., & Harris, C. R. (2012). Is the Replicability Crisis Overblown? Three Arguments Examined. Perspectives on Psychological Science, 7(6), 531–536. https://doi.org/10.1177/1745691612463401
Penders, B. (2022). Process and Bureaucracy: Scientific Reform as Civilisation. Bulletin of Science, Technology & Society, 42(4), 107–116. https://doi.org/10.1177/02704676221126388
Penders, B. (2024). Scandal in scientific reform: The breaking and remaking of science. Journal of Responsible Innovation, 11(1), 2371172. https://doi.org/10.1080/23299460.2024.2371172
Penders, B. (2025). Cultures of Trial and Error: Wissenschaftsschmerz. Blog post. https://doi.org/10.36850/fd2ec6bc-702c
Penders, B., Holbrook, J. B., & De Rijcke, S. (2019). Rinse and Repeat: Understanding the Value of Replication across Different Ways of Knowing. Publications, 7(3), 52. https://doi.org/10.3390/publications7030052
Perrault, E. K. (2023). Teaching replication through replication to solve the replication “crisis”. Communication Teacher, 37(3), 220–226. https://doi.org/10.1080/17404622.2022.2123110
Peterson, D., & Panofsky, A. (2021). Arguments against efficiency in science. Social Science Information, 60(3), 350–355. https://doi.org/10.1177/05390184211021383
Peterson, D., & Panofsky, A. (2023). Metascience as a Scientific Social Movement. Minerva, 61(2), 147–174. https://doi.org/10.1007/s11024-023-09490-3
Polkinghorne, D. E. (1995). Narrative configuration in qualitative analysis. International Journal of Qualitative Studies in Education, 8(1), 5–23. https://doi.org/10.1080/0951839950080103
Roettger, T. B., Schmalz, X., Chambers, C. D., Fraser, H., Mellor, D. T., Ostojic, L., & Schoof, T. (2019). We still need more trust in science: The need for broader adoption of Registered Reports. Preprint. https://doi.org/10.31222/osf.io/4sb7c
Serwadda, D., Ndebele, P., (…), Wanyenze, R. K. (2018). Open data sharing and the Global South—Who benefits? Science, 359(6376), 642–643. https://doi.org/10.1126/science.aap8395
Shapin, S., & Schaffer, S. (1985). Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life. Princeton University Press.
Stengers, I. (2016). ‘Another Science is Possible!’ A Plea for Slow Science. In H. Letiche, G. Lightfoot, & J.-L. Moriceau, Demo(s). BRILL. https://brill.com/view/title/37881
Stephan, P. (2012). Perverse Incentives. Nature, 484, 29–31. https://doi.org/10.1038/484029a
Toribio-Flórez, D., Anneser, L., (…), & on behalf of Max Planck PhDnet Open Science Group. (2021). Where Do Early Career Researchers Stand on Open Science Practices? A Survey Within the Max Planck Society. Frontiers in Research Metrics and Analytics, 5, 586992. https://doi.org/10.3389/frma.2020.586992
Ulpts, S., Bartscherer, S. F., Field, S. M., & Penders, B. (2025-1). The social replication of replication: Moving replication through epistemic communities. Preprint. https://doi.org/10.31235/osf.io/pqc4v_v1
Ulpts, S., Bartscherer, S. F., Penders, B., & Nelson, N. (2025-2). Epistemic oligarchies: capture and concentration through scientific reform. Preprint. https://doi.org/10.5281/zenodo.17136864
Wingen, T., Berkessel, J. B., & Englich, B. (2020). No Replication, No Trust? How Low Replicability Influences Trust in Psychology. Social Psychological and Personality Science, 11(4), 454–463. https://doi.org/10.1177/1948550619877412
Yang, Y., Youyou, W., & Uzzi, B. (2020). Estimating the deep replicability of scientific findings using human and artificial intelligence. Proceedings of the National Academy of Sciences, 117(20), 10762–10768. https://doi.org/10.1073/pnas.1909046117
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream. Behavioral and Brain Sciences, 41, e120. https://doi.org/10.1017/S0140525X17001972