
JOTE in Conversation: Leo Tiokhin - “Criticism in Science Should be Rewarded”

By Stefan Gaillard & Marcel Hobma

We present to you the fourth instalment of our “JOTE in conversation” series, where we ask researchers how they experience failure and success in research practices and their careers.

This article is based on a conversation between JOTE co-founder Stefan Gaillard and Dr. Leo Tiokhin, in which they delved into Leo’s career path from evolutionary anthropology to metascience and discussed the scientific evaluation platform Red Team Market, which he led during his postdoc at Eindhoven University of Technology. Leo is currently a senior researcher at the Rathenau Instituut.

Let’s start with your evaluation project. What is Red Team Market and how does it relate to the open science movement?
“Essentially, what we were trying to build was a way for people to receive criticism whenever they need it by connecting them to the right experts. Using the platform, scientists and other researchers could critically evaluate each other’s work and receive financial compensation for doing so. I think it’s very well aligned with the open science movement because it aims to improve the efficiency and reliability of science. More generally, we also wanted to change the academic incentive system by rewarding people for providing high-quality criticism. Contrast that with the current situation, where good reviews are treated as a free service. Well-motivated criticism is valuable for improving science and should therefore be rewarded.”

How would you institutionalise this kind of platform?
“I think it is important for scientific research to ensure that there is a way to apply criticism independently of publishers or funders. We envisioned our role as an independent middleman. Reviewers often want to assess things whose nitty-gritty details they might not know, for example whether the code or statistical models were correctly implemented. However, it is important that these things do get reviewed, so we tasked ourselves with connecting these people with critics who have the right expertise.

At the same time, this project was not intended as a “free market” without rules. It was regulated. We did quality control, audited critics and were developing ideas for training programmes for critics in order to improve the quality of our offering. We started with the field of psychology, but ultimately we were aiming for a large marketplace with people from different fields, and eventually a search algorithm would allow people to find the appropriate critics.”

How would it compare with the ‘traditional’ form of peer review in academia?
“Red Team Market had the potential to work as a supplement to peer review. At the time, we were having discussions with journals to see if we could do a supplemental review on a subset of papers that were controversial or where the editor simply desired an extra level of review. You could also potentially conduct Red Team “audits” by checking a random subset of papers at a journal. I suspect that this would substantially increase the quality of the papers: you don’t have to audit very often for it to change people’s behaviour.

Currently, when a journal receives a submission, it relies on a limited database of experts. Often editors need to look for the right reviewer themselves, and that’s difficult: reviewers are not publicly listed, and many will simply ignore a review request. Journals can experience substantial delays because they cannot find the right reviewers. But with a larger marketplace, you could have people set their availability, add their expertise and rates, and so streamline the review process.”

So, will it be the journals that would pay for the service? Or is this also something that individual researchers could use?
“The journal is just one type of customer. You could see individual researchers use a platform like Red Team Market when they have a project that could benefit from extra feedback, but they need to have funding for it, for example via their research grant.

You could also see governments or companies use this when they are planning to base a costly decision or policy implementation on scientific research. In situations like this, you want to be as certain as possible that you’re not making a mistake, which means there is substantial value in spending a small percentage of your money on hiring a red team.”

Weird question halfway through, but how did the idea arise? And is the name a reference to something or …?
“Yes, it is. We were inspired by red team practices used in tech companies and in governments for military strategies. In these domains, a red team is a group that critically challenges a certain plan or policy by adopting an adversarial stance. Our whole team was interested in looking for better ways to provide and receive criticism, and had all experienced frustration in our careers regarding this.

One of our co-founders even organised a bug bounty for his own work, putting up a few thousand euros of his own money to pay out if people found mistakes in his paper or code. The larger the mistake, the larger the reward. Drawing on ideas like these, we wanted to see whether providing financial incentives for finding scientific errors could feasibly be professionalised in science.

We ran a trial with a paper from one of our team members and it turned out great! The critics were happy to get paid for something that they would normally do for free, and they were all very critically minded so they enjoyed the process. He [the team member] ended up not even submitting his paper and instead conducting additional experiments, because the red team had brought up issues that he hadn’t considered.”


So how did you end up working on Red Team Market during your postdoc, and how does that relate to your academic background?
“The Red Team project was not something I had planned when starting my postdoc: it came up organically throughout the process. My background is in evolutionary anthropology, where I studied the evolution of human behaviour and psychology. During graduate school I was conducting anthropological research in Indonesia. I enjoyed my work, but at the same time the ‘replication crisis’ was growing and really getting to me. I had been in situations where I had seen questionable practices and had been involved in research that made me feel uneasy.

Honestly, I have been frustrated with how science is conducted since early in my career. During graduate school I was on the verge of leaving academia because I didn’t like the incentives. After travelling and fieldwork, I began working on a PhD proposal about testing ideas from signalling theory regarding honest and dishonest communication. But when I really dug into it, it felt somewhat trivial relative to the issues surrounding reproducibility and scientific integrity. So, basically in the middle of my PhD work, I went to my advisor and said ‘I can’t do it, I need to focus on metascience’. Luckily, he was very understanding: I think partly because he shared my frustrations about academia, and partly because he knew that he wouldn’t be able to convince me otherwise.

For my postdoc, I studied reward structures in various ways, for example through theoretical modelling. My collaborators and I applied signalling theory from biology to academic publishing to see if we could change publishing incentives to prevent scientists from misleading journals about the quality of their submitted papers.”


Leo’s transition to the Rathenau Instituut and ERROR


In 2022, Leo had to stop Red Team Market due to a career change. “I was leaving academia and decided to go into data science consulting, which demanded a lot of time. Unfortunately, there was nobody else who could take the project forward.”

He didn’t stop because the concept itself was flawed, he says. The team finished some projects and even made a profit, but not enough to hire someone on a full-time or even part-time basis to continue the project. Furthermore, many aspects of the project still needed improvement, such as the quality control and the funding scheme.

In his new job as a senior researcher at the Rathenau Instituut, he is still actively working on improving science. “I study the Dutch scientific system so that policy makers and broader society can make better decisions about how to organise science”, he says. “One subset of research is error detection and correction, as well as creating incentives for fixing errors. Maybe it will turn out that Red Team Market can contribute to that goal.”

Furthermore, he is part of the advisory board of ERROR, a new bug bounty programme where authors can get their work reviewed. “Critics receive money for detecting errors, and if the authors’ papers survive the process, the authors also get a monetary reward.” The project is funded by a grant and has the additional goal of learning about where and what kind of errors occur. “It aims to incentivise people to be more critical and to inspire policy makers to create more structural funding for these kinds of initiatives.”


Stefan Gaillard
Co-founder, Editor, Special Issue Editor-in-Chief

Stefan Gaillard specializes in failure, uncertainty, and erroneous claims – both in science and society. He is one of the co-founders of the Journal of Trial and Error and currently works on the special issue on scientific failure in the health domain.

Marcel Hobma
Blog Editor, Social Media Manager

Marcel Hobma is a student in History and Philosophy of Science and is trained as an (investigative) journalist. His interests range from the philosophy of biology to the incentive structure of science. He is currently working on his Master’s thesis on the cultural evolution of values in nutrition science.
