On the Folly of Rewarding Production While Hoping For Quality: Reconsidering How Peer Review Is Essential For Collective Impact

by Matthew A. Cronin (Guest Author)

Peer review is an essential part of the “collective” in creating collective impact. It is what makes management science science. In a time when desirable findings and folk wisdom can be easily dressed up using pseudoscience, unsound tests, confirmatory hypothesis testing, or a host of other specious methods, getting an expert’s validation that claims are sound is critical. If knowledge is our product, then peer review is quality control.  

Peer review can also be one of the great stresses of our job as researchers. When it is done well, the authors and the review team bring the work to the next level, and it creates a thrilling, if challenging, creative experience. Done poorly, it becomes a sham performance that sucks the life from the paper and the author. We care about our work, and we must publish or perish, so the emotional experience of peer review is significant.

With so much at stake, what do we, as a field, do to ensure that the critical peer review function is well executed? Unfortunately, very little. This was one of the chief complaints when the Academy of Management editors met in December 2022, and the conversation continues more broadly among business journal editors in the Organization and Management Editors (OMEN) network. Training in how to review, or how to serve in an editorial role, is quite rare.

Even worse than the limited training, the field actively disincentivizes this role. Most schools call peer review, even serving as an associate editor, “service,” or, in the parlance of Babcock et al. (2022), a “non-promotable task”: something that, however important, takes time away from the tasks that will get you promoted, like producing research that will, ironically, need peer review!

Luckily, a few small changes could improve the situation drastically.  

  1. Require training in peer review as part of doctoral education. While there are differences in journal expectations, the similarities outnumber the differences. Even better, requiring this broad-based training will help us as a field, collectively, think about how reviewing can increase our impact.
  2. Include reviewing in the tenure packet. If promotion required your reviewing history, supplemented by letters on its quality from the Associate Editors (AEs) for whom you reviewed, the quality would improve overnight. It may also have the benefit of increasing the cachet and visibility of being an AE, which could help with promotion, getting grants, and more.
  3. Stop calling peer review “service.” One cannot be a good reviewer without substantive knowledge and research skill. Further, when it comes to evaluating the research, how different is the task of a reviewer from that of a third or fourth author? To call reviewing “service” trivializes it into a chore.

Better peer review means better science and a better experience making that science. It is work that we, as a field, could do far more to support. Then, our science will truly be a collective effort.


Babcock, L., Peyser, B., Vesterlund, L., & Weingart, L. R. (2022). The No Club: Putting a Stop to Women’s Dead-End Work. New York: Simon & Schuster.

2 comments on “On the Folly of Rewarding Production While Hoping For Quality: Reconsidering How Peer Review Is Essential For Collective Impact”

  1. Matt Cronin’s post draws attention to a problem that couldn’t be more urgent or important. As evidence, a few weeks ago a medical school collaborator shared that one review he received from a journal had all the hallmarks of having been written by ChatGPT! What resources do we have to fix this problem before the system is completely broken?

    Low hanging fruit: Recommend that your doctoral students and junior faculty attend sessions like MOC’s Reviewing in the Rough at AOM to practice reviewing (Friday 8/4 1-3pm Westin Empire room – full disclosure: I’m a speaker, but all the other speakers are really good).

    More effortful: In the doctoral seminars you teach, have students read one of your papers in its anonymized first submission form, require them to write a review and a critique of a classmate’s anonymized review, and then devote a class session to the how-to of reviewing, including discussing students’ reviews and sharing the reviews you actually received from the journal. As follow-up, share the letter to reviewers that you sent back with the revision. It will be the most memorable and universally useful session of your class.

    Our field can also learn from the NSF’s IIS division grant review panels, where the program director provides feedback on draft reviews and asks reviewers to revise them. Imagine how much reviewers could improve if AEs in our field asked reviewers to revise their reviews!

    On NSF review panels, the revised reviews are then shared with the other reviewers, after which one reviewer (generally the most positive one) writes up a consolidated review. That reviewer gets feedback on the write-up from the other reviewers and the program director, and has a last chance to revise. In many (but not all) cases, the process facilitates reconciling divergent feedback before the program director makes the ultimate decision, and all reviews are shared with the scholars submitting the grant. Such iteration might be tough to achieve given the volume of submissions, but my experience is that it does make science feel like a collective effort.

  2. An observation: the format of the peer review process should probably depend on what, exactly, we think its goal is. There are several alternative views on what we want out of peer review.

    Accuracy: papers have a true intrinsic value; the goal of the review process is to identify those whose value is above a particular threshold
    Impact: the value of papers is uncertain ex ante; the goal of the review process is to identify those likely to be highly cited
    Development: the value of papers is altered by the review process itself; the goal of the review process is to identify promising papers and make them good enough to end up in print
    Innovation: papers exist to advance the state of the field through new methods, new findings, new insights, new theory; the goal of the review process is to distinguish the innovative from the mundane and the merely wrong
    Keeping score: papers are markers of achievement in the academic career of their author; the goal of the review process is to provide a reasonable judgment while minimizing the trauma to the author
    Community: papers are convening devices for a community of scholars; the goal of the review process is to inform and refine the taste and judgment of the participants in the scholarly enterprise

    These can overlap, of course, but reforming the review process is tricky if we don’t all agree on what it is intended to accomplish.
