Trust, but verify

One popular conception of science stresses the need to always question, to always remain skeptical. However, given that scientific work requires coordinating a massive number of people scattered across the world and across disciplines, it is the ability to trust the work of others that allows scientists to build on it with their own. The obvious question, then, is why scientists trust each other at all.

In his book A Social History of Truth, the historian and sociologist of science Steven Shapin offers a surprising answer to the question of how this trust initially came about. Until the mid-19th century, science was primarily the pursuit of gentlemen. Birth, wealth, and behaviour were used to judge who was and wasn’t reliable. If a man was wealthy (and it was almost always a man), it was assumed that he had nothing to gain and plenty to lose by lying about results, since he was financially independent and embedded in a culture of honour. Gentlemen trusted each other not because they naively believed good science was inevitable but because of non-scientific facts about their mutual social status.

As time passed, this gatekeeping of science ended and anyone (in principle at least) could pursue science. In this context, why trust anyone else? Of course, most scientists are committed to truth-finding, and the repercussions of being found out serve as a strong deterrent to anyone tempted. But in our era of publish-or-perish, short-term cheating and sloppiness might still be tempting to many. In fact, there is already a prominent case of this happening.

In December 2014, then UCLA political science graduate student Michael LaCour and Columbia University political science professor Donald Green published a paper in Science titled “When contact changes minds: An experiment on transmission of support for gay equality.” According to the paper, door-to-door canvassers who were gay were better than their straight counterparts at convincing voters to support same-sex marriage in the long term. The study was picked up and touted in several major media outlets, including The New York Times, The Washington Post, and The Wall Street Journal. By chance, two grad students at UC Berkeley, David Broockman and Joshua Kalla, were trying to carry out a similar study, and during their attempt to replicate LaCour and Green’s result they realised that the original paper had fabricated its data. They published their exposé, “Irregularities in LaCour,” and the paper was retracted.

This episode itself is fascinating, but what I would like to draw attention to is how such a failure occurred. Green, although the senior researcher, had never even seen the data that LaCour had fabricated and had instead taken it on faith. When later asked why, Green said, “It’s a very delicate situation when a senior scholar makes a move to look at a junior scholar’s data set. This is his career, and if I reach in and grab it, it may seem like I’m boxing him out.” In response, Ivan Oransky, a co-founder of Retraction Watch, said, “At the end of the day he decided to trust LaCour, which was, in his own words, a mistake.” The New York Times article in which both were quoted summed it up: “The scientific community’s system for vetting new findings, built on trust, is poorly equipped to detect deliberate misrepresentations.”

What this episode reveals is that our procedures are, for the most part, still based on trust, and that this makes them vulnerable. Reflecting on the LaCour retraction, C. K. Gunsalus, Director of the National Center for Professional and Research Ethics, advocated for greater openness in a piece pointedly titled “If you think it’s rude to ask to look at your co-authors’ data, you’re not doing science.” It really is a fantastic piece, but the one place where I disagree is that many of its suggestions place all the responsibility on authors themselves to institute good practices. I think a better idea is to build a culture of responsibility institutionally rather than leaving it to individual choice. If collaborators feel uncomfortable asking each other for data or for their sources of funding, then the only way around this is to mandate that they do so.

Of course, even this won’t stop all fraud. Multiple authors can still fabricate results together, or be too lazy to verify a colleague’s work and lie about having done so. And this would probably feel too top-down for some academics, who might find having to fill in institutionally mandated information at every significant stage of their work tiresome. But if we want a culture of robust checks and balances, we need to start working towards such a framework.
