Things We Don't Know Anymore
|Our psychology editor Malte Elson explores the “replication crisis”, and questions our level of confidence in established psychology. Image credit: Things We Don't Know / Giles Meakin (CC-BY)|
The last few years haven’t been easy on psychological science. Don’t get me wrong – the field itself is flourishing, boasting an ever-increasing number of publications, journals, conferences, faculty positions, and university graduates all over the world. It has gained more and more respect and acceptance, both in academia and in society at large. The case of Harvard evolutionary biologist and primate researcher Marc Hauser’s fraudulent publications was already fading from our minds when, in September 2011, the discovery of scientific misconduct by the Dutch social psychologist Diederik Stapel shook the foundations of psychological science. In at least 50 publications identified by the Levelt Committee, Stapel had doctored, mangled, or completely fabricated datasets in order to publish in the field’s top-ranked outlets, including prestigious journals like Science. Among Stapel’s highly regarded publications were findings on how untidy environments encourage racist discrimination1, or how racist biases in judges' legal decisions on minority defendants can be reduced2. Nullifying the content of these publications constitutes a setback for social psychology and, to a somewhat lesser extent, for society overall.
Although they work in a highly competitive environment, we trust scientists to be committed to finding the truth. And when playing it smart, like Stapel, it is quite easy to abuse this trust for personal gain in the form of a prestigious academic career. Instead of looking for the truth, Stapel was on a “quest for aesthetics, for beauty”, as the New York Times quoted him. One might think that it’s not that much of an issue – Stapel got caught, after all! Reaching for the stars, he committed fraud, but was brought back down to reality when his deeds were unveiled; so the system works. But does it really?
There is good reason to believe that severely fraudulent behavior, such as the fabrication of data in the Stapel affair, is extremely rare among scientists of all disciplines, including psychology. Not only does it require enormous effort to maintain such a fragile house of cards over a long period, but if one does get caught, the scientific community is unlikely to pardon the fraud. However, there is a large gray area of questionable research practices (QRPs) that enjoy a higher acceptance in the community, and that a considerable number of scholars engage in. While fabricating data does indeed seem to have a relatively low prevalence (and is never perceived as justified), according to a survey of over 2,000 researchers3, failing to report all dependent measures or experimental conditions, selectively reporting studies that “worked”, or even stopping data collection once a desired result has been achieved are surprisingly common in psychological science. Does that mean psychology as a whole is looking for beauty over truth?
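The last of those practices – stopping data collection once the result looks “significant” – is worth making concrete. The short simulation below is an illustrative sketch (the batch sizes, maximum sample size, and number of simulated studies are arbitrary choices, not figures from any study cited here): it peeks at a z-test after every batch of participants drawn from a pure null distribution, so every “significant” stop is a false positive.

```python
# A minimal simulation of "optional stopping": peek at the p-value after every
# batch of participants and stop as soon as it dips below .05. There is no real
# effect here (all data come from a null distribution), so every "significant"
# result is a false positive. All parameters are illustrative choices.
import random
from statistics import NormalDist, mean

def p_value(sample):
    """Two-sided z-test of H0: population mean = 0, with known sd = 1."""
    z = mean(sample) * len(sample) ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

def one_study(rng, start_n=20, step=10, max_n=100, alpha=0.05):
    """Collect data in batches; stop (and 'publish') as soon as p < alpha."""
    data = [rng.gauss(0, 1) for _ in range(start_n)]
    while True:
        if p_value(data) < alpha:
            return True                  # false positive: H0 is true by construction
        if len(data) >= max_n:
            return False                 # give up; the study goes in the file drawer
        data.extend(rng.gauss(0, 1) for _ in range(step))

rng = random.Random(42)
runs = 2000
rate = sum(one_study(rng) for _ in range(runs)) / runs
print(f"nominal alpha: 0.05, realized false-positive rate: {rate:.3f}")
```

With nine “peeks” per study, the realized false-positive rate lands well above the nominal 5%, even though each individual test looks perfectly legitimate.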
|Questionable research practices and methodological flexibility allow tweaking data just a little to make them sound “sexy”. Image credit: Marcus Holland-Moritz (CC-BY)|
That is, of course, a rather grim reading of these findings, and probably an unfair comparison to make, given the difference in scale between not reporting some measures of a larger experiment and completely making up data. But there is a kernel of truth to this statement: psychology evidently prefers “positive” over “negative” or null findings. According to Fanelli4, approximately 90% of all empirical papers published in psychology’s top journals support a tested hypothesis, and that number seems to have increased even further over the years5. This led Arina Bones6, the satirical “alter ego” of social psychologist Brian Nosek, to humorously remark that, given the almost clairvoyant abilities of psychologists, the empirical testing of hypotheses seems largely unnecessary. There are at least two substantial consequences of this “aversion to the null”. First, the empirical literature in psychology suffers from a massive publication bias, also called the “file drawer problem”7, as studies yielding null findings very often end up in researchers’ file drawers instead of academic journals. Even more serious is the fact that psychologists, pressured to publish if they don’t want to “perish” (i.e., fail in their careers), strive to achieve statistical significance in their research – and sometimes maybe a little too hard. Commonly employed research practices entail sufficient “methodological flexibility” to tweak statistics just enough that, more often than not, studies “pass” the significance test, which in turn dramatically increases their chance of getting published8. Ioannidis’ warning9 that “most published research findings are false” might be particularly true for psychology.
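The file drawer problem is easy to demonstrate numerically. In the sketch below (the true effect size of 0.1, the per-study sample size, and the number of studies are made-up illustrative values), thousands of small studies probe one tiny real effect, but only the statistically significant ones get “published” – and the published record both discards most of the evidence and overstates the effect.

```python
# A sketch of the "file drawer problem": many small studies of one tiny true
# effect, of which only the statistically significant ones get "published".
# The effect size, sample size, and study count are illustrative assumptions.
import random
from statistics import NormalDist, mean

def run_study(rng, true_effect=0.1, n=30):
    """One small study: observed mean effect and its two-sided z-test p-value."""
    xs = [rng.gauss(true_effect, 1) for _ in range(n)]
    m = mean(xs)
    p = 2 * (1 - NormalDist().cdf(abs(m) * n ** 0.5))
    return m, p

rng = random.Random(7)
studies = [run_study(rng) for _ in range(5000)]
published = [m for m, p in studies if p < 0.05]   # only "positive" results survive

print(f"mean observed effect, all studies: {mean(m for m, _ in studies):.3f}")
print(f"mean observed effect, published:   {mean(published):.3f}")
print(f"fraction of studies published:     {len(published) / len(studies):.3f}")
```

A reader of only the journals would inherit an average effect several times larger than the true 0.1, while the bulk of the (null) evidence sits unseen in the drawer.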
Academics from other disciplines seem genuinely surprised when they hear about this state of affairs in psychology, given that there is an obvious solution to the problem: replication. With a sufficient number of replications, it should be easy to discern reliable findings from “false positives” (results that appear statistically significant even though the effect does not actually exist). However, psychologists have only rarely engaged in systematic replication efforts, at least historically. According to Makel, Plucker, and Hegarty10, the replication rate in psychology’s empirical literature is only about 1%, and just half of those replications come from researchers not working at the original laboratory. One apparent problem is that many psychological journals don’t publish replications, making it quite unattractive to spend any resources on replication efforts. But it’s not that simple.
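Why replication separates reliable findings from false positives can be made concrete with a back-of-the-envelope Bayes calculation. The prior probability (0.5), statistical power (0.8), and false-positive rate (0.05) below are illustrative assumptions, not figures from the article; the point is only that each independent significant replication multiplies the odds that an effect is real.

```python
# Back-of-the-envelope Bayes calculation: each independent significant result
# updates the probability that a studied effect is real. Prior, power, and
# alpha are illustrative assumptions.
def posterior(prior, power=0.8, alpha=0.05, k=1):
    """P(effect is real | k independent significant results)."""
    p = prior
    for _ in range(k):
        p = p * power / (p * power + (1 - p) * alpha)
    return p

for k in range(4):
    print(k, round(posterior(0.5, k=k), 4))
# prints: 0 0.5 / 1 0.9412 / 2 0.9961 / 3 0.9998
```

Under these assumptions, a single significant result leaves real room for doubt, while two or three independent replications make a false positive very unlikely – which is exactly why a literature with a 1% replication rate is so hard to interpret.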
|Replication is an important tenet in science, as it provides confidence - if you drop something, you are very sure it will fall to the ground because you’ve seen it happen innumerable times. But if you’ve only seen something happen once, you can’t be sure it will happen every time. Image credit: Tanya Hart (CC-BY)|
About two months ago, the journal Social Psychology published an entire special issue dedicated to replications of the field’s “textbook classics”11. Again, while this might seem trivial to academics from other disciplines, it is an almost revolutionary effort for psychologists. Some of the original findings were replicated, some were not. Beyond the scientific value of each replication attempt, the lesson to be learnt from the debate (and drama) they sparked seems even more important. There was a lengthy, and quite heated, back-and-forth between a team of replicators (Johnson, Cheung, and Donnellan) and the author of a “textbook classic” (Schnall) about who or what might have been responsible for the failure to replicate the original findings. As previously observed in similar exchanges, psychologists can become defensive (rightfully or not) when their ideas are challenged by failed replications. Other commenters chimed in, too. Harvard psychologist Dan Gilbert, for instance, dubbed Johnson and colleagues the “shameless little bullies” on Twitter. In fact, some scholars have started an astounding controversy over whether failed replications, as opposed to successful ones, have a place in psychology at all. Harvard’s Jason Mitchell, for example, argues that “unsuccessful experiments have no meaningful scientific value” and “do not constitute scientific output”. And while Mitchell has met with severe criticism (e.g., by Neuroskeptic) from a large number of commenters, including other psychologists, the fact that a tenured professor at one of the most prestigious research institutions in the world publicly expresses doubts about the usefulness of one of the hallmarks of empirical science suggests that psychology still has a long way to go to attain the status other disciplines have already earned.
1. Stapel, D. A., & Lindenberg, S. (2011). Coping with chaos: How disordered contexts promote stereotyping and discrimination. Science, 332(6026), 251–253. doi:10.1126/science.1201068 (RETRACTION NOTICE)
2. Lammers, J., & Stapel, D. A. (2011). Racist biases in legal decisions are reduced by a justice focus. European Journal of Social Psychology, 41(3), 375–387. doi:10.1002/ejsp.783 (RETRACTION NOTICE)
3. John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. doi:10.1177/0956797611430953
4. Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4), e10068. doi:10.1371/journal.pone.0010068
5. Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904. doi:10.1007/s11192-011-0494-7
6. Bones, A. K. (2012). We knew the future all along: Scientific hypothesizing is much more accurate than other forms of precognition - a satire in one part. Perspectives on Psychological Science, 7(3), 307–309. doi:10.1177/1745691612441216
7. Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. doi:10.1037/0033-2909.86.3.638
8. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi:10.1177/0956797611417632
9. Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. doi:10.1371/journal.pmed.0020124
10. Makel, M. C., Plucker, J. A., & Hegarty, B. (2012). Replications in psychology research: How often do they really occur? Perspectives on Psychological Science, 7(6), 537–542. doi:10.1177/1745691612460688
11. Nosek, B. A., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137–141. doi:10.1027/1864-9335/a000192
12. Koole, S. L., & Lakens, D. (2012). Rewarding replications: A sure and simple way to improve psychological science. Perspectives on Psychological Science, 7(6), 608–614. doi:10.1177/1745691612462586