My issues with peer review
The biggest issue with peer review is that I'm not qualified to do it, but they keep asking me anyway.
I regularly review papers for journals and conferences because it's part of what it takes to have good science out in the world. It's an uncompensated service that is expected of most professors, and I'm not in love with it.
Reviewing a paper takes a lot of time to do well
Sometimes I'm intimately familiar with the topic and can immediately identify if the research is well done or not. Other times it's tangentially related to my own work, and it takes time for me to wrap my head around the literature to identify if the research is contributing to our understanding of a topic.
Writing the review itself at the very least involves reading the full paper, spending extra time in the methods, results, and discussion sections to ensure the research has been done well and the implications are reported accurately.
The reward for reviewing a paper is... more reviewing assignments
Reviewing a paper is free labor: the price we pay to have good science, expected as part of your normal job duties. And the more you review, and the better job you do, the more review requests you get. It can be hard to say no to a review request from a journal you want to publish in, but if you're not careful you could easily be reviewing multiple papers every week, taking time away from the other, better parts of your job.
Emotional toll
It's emotionally taxing (at least to me) to communicate to people I don't know that I don't think their paper is good enough. I want to be thorough, I want to be kind, and I want to be right. I've spent many hours going over a paper's methods, reviewing and re-reviewing to make sure that the things I think need improvement are actually wrong. ChatGPT knows me well now, because I'll frequently ask questions about part of a methodology struggling to get to exactly why I think it falls short.
To write a review I have to be confident that I know the best way to do things, but at the same time be open to being shown a new way I hadn't thought of. I have to be willing to tell someone "I know you've spent months of your life on this, but it's not good enough, and here's why."
I also often feel judged by other reviewers or by the editors. If I miss something that another reviewer catches, I'm not good at my job. If I catch something that other reviewers miss, I'm being too nitpicky. If I don't spend enough time talking about the theory development, I'll never be good at my job. You know, real healthy self-talk 🤷.
Outdated processes
The process is outdated. The typical review process looks like this:
1. Authors spend months to years working on a paper.
2. Authors submit the paper to a journal.
3. The editor briefly reviews the paper and assigns reviewers (a couple of weeks).
4. Reviewers review the paper and submit their suggestions and recommendation (a few weeks to 3 months).
5. The editor makes a decision (usually revision or rejection in the first round), then comments on the reviewers' comments and suggestions (a couple of weeks).
6. If the decision is a revision, the authors have ~6 months to revise the paper and submit their revision.
7. Steps 3-6 repeat until acceptance or rejection.
The main issue is that at no point is there communication between the reviewers and the authors, and as a result authors can spend months working on a revision that won't address reviewer concerns. While this might have made sense to maintain anonymity in a pre-Internet era, it's just not necessary anymore.
I have been on both the author and reviewer end of these fruitless exchanges. Reviewers do their best to communicate clearly what they recommend, but a few pages of written notes may not be clear enough, and the authors might fixate on one comment while ignoring others, putting in significant effort that still results in rejection. I was recently involved with a paper where that is exactly what happened. The authors misinterpreted some reviewer comments and ran a time-consuming data collection to address their interpretation, but it did not address the fundamental issue outlined in the reviews, so their efforts were wasted. A brief online exchange through anonymous posts (or even better, an anonymized real-time discussion) could have prevented a lot of wasted time.
Two jobs at once
Often reviewers are doing two jobs at once. We are both verifying the validity of the paper and assessing its value, and I think these two things need to be separated.
The first (and in my opinion the only) important job is to check if the paper is done well. Are the statistical methods appropriate for the type of data? Are the results based on the methodology, and are the implications of those results described appropriately? For example, you shouldn't be making broad societal claims based on a sample of 20 college students. Not to say we can't learn things from 20 college students, but what you've learned from that informs and directs future research to dig deeper.
The second job is frequently to act as gatekeepers for a journal. A study can be done well, but if it doesn't contribute enough value to research, it probably isn't a good fit for a top-tier journal. Replicating well-studied findings is not a major contribution that deserves publication in Nature. But... and this is a big but... it's not usually that cut and dried. Often we have a new finding, or a finding in a new context, that helps us understand a little bit more about a phenomenon.
Deciding if a contribution is "enough" is, in my opinion, the responsibility of the editor. My job as a reviewer is to determine if the research is done well, and accurately describes what it has contributed to our understanding.
My recommendations
- Reviewers should be focused solely on the validity of the research. As the subject matter experts, they should identify if there are glaring issues with value, like if the entire paper is a duplicate of already finished research, but "journal fit" as a function of "is the contribution big enough to belong here" should be left to the editors.
- Figure out a way to allow anonymous communication between authors and reviewers in between rounds. Before the authors go all the way down the path of a major revision, whether that's a major rewrite of the paper, new data collection, or something else, they should be able to check in with reviewers to see if it will address their concerns. This should be optional, and not necessarily expected, but we should have the ability to discuss high-risk changes rather than managing the review process like we're writing missives to be sent by ship across the Atlantic.
- For goodness sake Manuscript Central, get it together and let me have just one account. Having separate accounts for each journal is hard to keep track of.
- Journals should offer free therapy for reviewers and authors. Not really, but... maybe.
Thoughts? Join the site to comment below, or share on LinkedIn (@rschuetzler) or BlueSky (@schuetzler.net).