Last week the special issue on replication at the journal Social Psychology arrived to an explosion of debate (read the entire issue here, along with original author Simone Schnall’s commentary on her experience with the project and Chris Fraley’s subsequent examination of ceiling effects). The debate has been happening everywhere: on blogs, on Twitter, on Facebook, and (hopefully) in the halls of your psychology department.
Make no mistake, this is great news for the field of social psychology: This is the first time since I joined the field that one of our own journals has devoted attention to examining the footing on which so much of our science rests. (If you haven’t had a chance, please congratulate the curators of this effort, @lakens and @BrianNosek, on Twitter or elsewhere.) The whole procedure behind the replication efforts has been made publicly available. Read it here and be encouraged by the fairness and transparency of this scientific enterprise.
There are some significant short-term challenges ahead: By moving forward with high-quality attempts at direct replication, some of our original research will not replicate (we won’t know how much until we conduct these replications). This will be personally painful for the original researchers whose work does not replicate (though see this blog post for reasons why single replications are starting points, not conclusions). Ultimately, the mere act of conducting direct replications will improve our science, and we need to keep this in mind when facing non-replications of our own research, or when evaluating others’ non-replicated research. We must not allow single non-replications to damage the reputations of our colleagues. Think of our science as cumulative, with the definitive answers somewhere down the road.
There is one disappointing part of this whole event that I’d like to point out. In the past, the research integrity of social psychologists has been called into question by acts of fraud, by science journalists, and by psychologists in other fields. This is the first (or at least the most visible) time that card-carrying social psychologists have held our own research up to scrutiny. In response to this effort by members of our own field, who have undertaken this work at some cost to their own reputations, we have resorted to debate tactics best characterized as childish (one example explained here). I can understand defensiveness when people outside our discipline comment on our methods (the discussion about research integrity is loudest among non-social psychologists at the University of Illinois, so I understand the defensive reaction), but sniping within our own field means missing a great opportunity to put social psychology at the forefront of scientific integrity.
We should all reflect for a moment on how wonderful it is that the examination of our field is FINALLY IN THE HANDS OF SOCIAL PSYCHOLOGISTS. We need to welcome this change, and we need to promote (not obstruct) direct replications supervised by experts from our own guild (Danny Kahneman suggested as much here). If we can’t support direct replication attempts that are rigorously vetted prior to data collection, open and transparent, and conducted by our own guild, then I’m not sure we’ll ever be a replicable science. That would be a tragedy.
I’d love to read your comments here or on Twitter (@mwkraus).