CC BY-NC-ND 4.0 · Synlett 2021; 32(09): 885-891
DOI: 10.1055/s-0040-1705955
Account

How and Why Crowd Reviewing Works

Manuel van Gemmeren
a   Westfälische Wilhelms-Universität Münster, Organisch-Chemisches Institut, Corrensstraße 36, 48149 Münster, Germany   Email: mvangemmeren@uni-muenster.de
Benjamin List
b   Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr, Germany   Email: list@kofo.mpg.de
 


Dedicated to Prof. K. Peter C. Vollhardt on the occasion of his 75th birthday

Abstract

In this Account, we detail the development of Select Crowd Reviewing from the initial idea through a pilot phase to the present, when it is used as the default method for the evaluation of manuscripts at Synlett and SynOpen. We describe the workflow through which a manuscript is evaluated when Select Crowd Reviewing is applied. A series of questions and answers addresses typical concerns and the advantages Select Crowd Reviewing offers compared to traditional peer review.

1 Introduction: The History of Select Crowd Reviewing

2 The Select Crowd Reviewing Workflow

3 Questions We Have Received Regarding Select Crowd Reviewing

4 Conclusion



Biographical Sketches


Manuel van Gemmeren studied chemistry in Freiburg before joining the group of Prof. Benjamin List at the Max-Planck-Institut für Kohlenforschung for his doctoral studies (2010–2014). After postdoctoral studies as a Feodor Lynen Fellow of the Alexander von Humboldt Foundation in the group of Prof. Rubén Martín at the ICIQ in Tarragona (2015–2016), he started an independent research group at the WWU Münster in 2016. His group has been supported by a Liebig Fellowship of the Fonds der Chemischen Industrie, the Otto Hahn Award of the Max Planck Society, and the Emmy Noether Programme of the DFG. Research in the van Gemmeren Lab focuses on the development of novel synthetic methods that enable challenging transformations to proceed with catalyst-controlled reactivity and selectivity. Since 2018, Manuel has served as Crowd Reviewing Editor at Synlett. His early career accomplishments have been recognized with the Thieme Chemistry Journals Award (2017), the ADUC prize of the GDCh (2019), his election into the ‘Junges Kolleg’ of the North Rhine-Westphalian Academy of Sciences, Humanities and the Arts (2019), and most recently an ERC Starting Grant (2020).


Benjamin List graduated from the Freie Universität Berlin (1993) and received his PhD (1997) from the University of Frankfurt. After postdoctoral studies (1997–1998) as a Feodor Lynen Fellow of the Alexander von Humboldt Foundation at The Scripps Research Institute, he became a Tenure Track Assistant Professor at Scripps in January 1999. Subsequently, he developed the first proline-catalyzed asymmetric intermolecular aldol, Mannich, Michael, and α-amination reactions. In 2003 he moved to the Max-Planck-Institut für Kohlenforschung, where he has been a director since 2005. From 2012 until 2014 he served as the managing director of the institute. Since 2004 he has been an honorary professor at the University of Cologne. His research interests are new catalysis concepts and chemical synthesis in general. He has pioneered several concepts including aminocatalysis, enamine catalysis, and asymmetric counteranion-directed catalysis (ACDC). His accomplishments have been recognized with the Otto Bayer Prize (2012), the Mukaiyama Award (2013), the Cope Scholar Award (2014), the Gottfried Wilhelm Leibniz Prize (2016), and election to the German National Academy of Sciences Leopoldina (2018).

1 Introduction: The History of Select Crowd Reviewing

The foundation for Select Crowd Reviewing at Synlett was laid several years ago, when the drawbacks of traditional peer review became increasingly apparent. The two of us were working together in the Synlett Office, Manuel van Gemmeren serving as editorial assistant and at the same time doctoral student of Benjamin List, then Associate Editor and now Editor-in-Chief of Synlett. Striving to ensure a fast, fair, and balanced evaluation of the manuscripts submitted to Synlett, we had very high expectations of what the peer review process should accomplish. Besides providing the feedback required to judge a manuscript suitable or unsuitable for publication, we expected that a good peer review process should help the authors to improve their manuscript as much as possible and whenever possible. However, like many other journals, we found it increasingly difficult to find qualified scientists willing to invest the time necessary to provide such a review (‘reviewer fatigue’).[1] This is almost certainly due to an uneven workload distribution and an ever-increasing volume of submitted manuscripts. The process was further hampered by an increasing frequency of nonresponses. While we understand that we have no right to expect anything, technically not even a decline to review, from the potential reviewers we contact, such silence can noticeably slow down the evaluation process. Even worse are cases in which referees agree to provide a review and then suddenly ‘play dead’, leaving a manuscript stuck in review for additional weeks at the authors’ expense. Finally, and again like many other journals, we frequently encountered cases where two or more referee reports contradicted each other drastically, e.g., one reviewer recommending minor revisions and another recommending rejection based on a lack of novelty. To whom should we give more credit? Wouldn’t it be great if these two referees could talk to one another and then provide us with a consensus opinion?

These thoughts led us to ponder the possibility of having the reviewers cooperate in the reviewing process as a form of scientific discussion rather than independently reviewing the manuscript.

Further technical questions needed to be sorted out, most importantly: in order to have a real conversation, the reviewers would have to evaluate the manuscript on a similar timeline. How would this be possible with two to three reviewers who would each look at the manuscript over the course of weeks? We concluded that this would not work and that instead a larger number of reviewers with access to the manuscript over a shorter timeframe was required, such that actual conversations would evolve, analogous to discussions on scientific panels or on social media, but with a carefully selected group of participants – the reviewing crowd. We quickly realized that such a crowd reviewing system would actually offer substantial inherent advantages that could serve to alleviate the challenges traditional peer reviewing faces today. At the same time, many questions remained: Would the response be good? Would the comments from such a procedure be substantive? Would we be able to ‘recruit’ suitably qualified scientists to participate in the reviewing crowd?

Overall, at this stage we saw many opportunities and equally many challenges and concluded that the only way to evaluate the idea was through experiment (what a surprise that chemical synthesis scientists would choose this approach…).

Together with the Thieme publishing team in Stuttgart, and coordinated by Dr. Denis Höfler, the successor of Manuel van Gemmeren as editorial assistant in Mülheim, a startup company offering an interesting online platform was identified (Filestage). The software that Filestage then provided was originally designed to help advertising and media companies conduct collaborative, though not co-located, work on graphical material. International teams of photographers, writers, and art directors, located all over the planet, could jointly discuss graphical art. We reasoned this to be close enough to our concept of having multiple referees working jointly but remotely on manuscripts. Accordingly, Select Crowd Reviewing was launched as a pilot project in 2016. At this stage, all manuscripts were simultaneously evaluated through Select Crowd Reviewing and traditional peer review, such that a reasonable comparison could be made between the two methods. The early conclusion from these trials was that ‘crowd-based peer review can be good and fast’.[2]

As can be seen from Figure [1], the proportion of manuscripts evaluated through Select Crowd Reviewing remained comparably low in those days. The decision was made to extend Select Crowd Reviewing in the form of a long-term trial during which authors were given the choice to opt in to Select Crowd Reviewing either instead of, or in addition to, traditional peer review. This phase began in mid-2017, which marks the beginning of a continuous rise in the proportion of manuscripts undergoing Select Crowd Review (Figure [1]). This extended trial was highly successful: Select Crowd Reviewing was increasingly well accepted by the community, delivered valuable feedback, and displayed very fast turnaround times.

Figure 1 Development of Select Crowd Reviewing: increasing percentage of manuscripts evaluated by Select Crowd Reviewing relative to the overall number of manuscripts sent for external peer review.

This led to a continuous increase in the percentage of manuscripts at Synlett being evaluated this way. The realization that Select Crowd Review ‘is here to stay’ is also reflected in several decisions made by Thieme as the publisher. Firstly, steps were taken to implement Select Crowd Reviewing at further journals beyond the boundaries of Thieme Chemistry (e.g., the medical journals Röntgenforschung and International Journal of Sports Medicine). Secondly, Crowd Reviewing Editors were appointed at both SynOpen (Dr. Philippa Cranwell) and Synlett (Dr. Manuel van Gemmeren).

Finally, in 2019 the Editorial Board of Synlett decided to make Select Crowd Reviewing the default method for the evaluation of manuscripts. Authors now have to actively opt out of Select Crowd Reviewing. SynOpen has gone even further – manuscripts there are evaluated exclusively using Select Crowd Reviewing. This decision explains the rather sharp increase in the percentage of manuscripts undergoing Select Crowd Review in the first half of 2020 (Figure [1]).


2 The Select Crowd Reviewing Workflow

Many steps of the workflow for a manuscript that undergoes Select Crowd Review are identical to those for a manuscript evaluated by traditional peer review. The process begins with the submission of the manuscript by the authors (Step 1, Figure [2]). If the authors actively opt out of Select Crowd Reviewing during submission, the manuscript is processed further via traditional peer review. Irrespective of this decision, the manuscript first undergoes technical checks at the Synlett Editorial Office in Stuttgart.

Figure 2 Workflow of Select Crowd Reviewing at Synlett/SynOpen

These checks do not consider the scientific novelty or relevance of the manuscript but focus exclusively on technical and formal aspects, e.g., plagiarism screening. Manuscripts that do not comply with the standards of Synlett may be sent back to the authors at this stage. Otherwise, the manuscript progresses and is assigned to an Editor (Step 2). The Editor then performs a first scientific evaluation of the manuscript. Manuscripts judged to be outside the scope of Synlett or unsuitable for publication in Synlett for another reason are rejected without peer review at this stage. Up to this point, no differentiation has been made between manuscripts that undergo Select Crowd Review and those that undergo traditional peer review. For all other manuscripts, the Editor decides whether to send the manuscript for traditional peer review, Select Crowd Review, or both. Note: Authors who do not opt out of Select Crowd Review are not guaranteed that Select Crowd Reviewing will be used to evaluate their manuscript. Editors may decide against it, e.g., when a manuscript is considered too specialized for an appropriate evaluation by the members of the crowd or when potential conflicts of interest are foreseen.

Despite these limitations, a sizable proportion of manuscripts is ultimately evaluated in this way. In these cases, the Editor sends a request for Select Crowd Reviewing to the Crowd Review Editor (Step 3). This is fully analogous to the invitation of an external reviewer. The Crowd Review Editor then uploads the manuscript to a secure online platform to which the members of the crowd have access, informs the members of the crowd that one or more manuscripts have been uploaded, and invites them to contribute to the reviewing of these manuscripts. The members of the crowd receive invitation e-mails containing the titles and authors of the manuscripts in question as well as direct links to the manuscripts (Step 4). Those members of the crowd who are currently available and feel that they can contribute to the evaluation then use these links to access the manuscript in an anonymized form. Each reviewer is assigned a permanent alias, such that the Crowd Review Editor is able to determine the identity of a reviewer, e.g., in a suspected case of scientific misconduct or inappropriate commenting. It should be noted, however, that we have not observed such behavior so far.

The core of Select Crowd Reviewing now takes place (Step 5). The members of the crowd all have access to the same version of the manuscript and see all previous comments on the manuscript. The reviewers therefore react to the manuscript AND to the comments already made. They can thus agree with comments already made (instead of writing down the same points in different phrasing). More importantly, the reviewers can disagree with previous comments and thereby start a discussion. Oftentimes, the reviewer who posted the original comment will return and defend his/her position or concede the criticism and change his/her recommendation. This process allows third and fourth reviewers to contribute as well. At the end of such a discussion, a consensus opinion can very often be identified; alternatively, the arguments for both positions have been exchanged in a way that allows the Editor to make a balanced decision based on these arguments. At the end of a typical crowd reviewing process, a review report in the form of a PDF containing the annotated manuscript is obtained (Step 6). This report and a brief summary of its content are then sent by the Crowd Review Editor to the Editor, who receives the report in the same way a single traditional peer review report would usually be received (Step 7). Based on this report (and in some cases an additional traditional peer review), the Editor now makes the decision regarding the manuscript, providing the authors with a typical decision letter accepting, rejecting, or requesting revisions to the manuscript (Step 8). As part of this decision letter, the authors receive the crowd reviewing report with all original and anonymous comments made by the reviewers in the crowd.

Figure 3 Screenshot of a real-life Select Crowd Reviewing process

To further illustrate this process, Figure [3] depicts a screenshot from a real-life crowd reviewing process.

3 Questions We Have Received Regarding Select Crowd Reviewing

In this section, we paraphrase some of the most important questions regarding Select Crowd Review that we have been asked, or have asked ourselves, over the last few years and provide the answers we have given – or should have given – to them.

The peer review system we have in use at most scientific journals works fine and has been honed over centuries – why are you trying to fix something that is not broken?

First, we must stress that we are not advocating the abolition of peer review. Select Crowd Review is peer review, just organized differently. The traditional peer review system has a number of very important merits that we acknowledge and that should be retained in any system implemented for the evaluation of scientific manuscripts.[1] The most important aspect is the evaluation by peers. Who, if not highly qualified scientists from the same discipline, should judge whether a manuscript is scientifically sound? Only a scientist who is sufficiently educated and specialized in a similar way can spot methodological flaws and suggest meaningful experiments to improve a manuscript. Only the participation of a broad group of peers, i.e., ‘the’ scientific community, can combat the bias that would undoubtedly occur if such decisions were made by persons outside the academic system. Another key aspect of peer review that must be retained is reciprocity: highly educated scientists provide their expertise to evaluate manuscripts because they can trust that their own manuscripts will be reviewed accordingly.

On the other hand, one could argue that the traditional peer review system is ‘broken’ in many small ways. Anyone who has seen the ‘Reviewer 2’ jokes on social media and recognized their own experiences in them will certainly agree that bias does exist in the current peer review system, and that some reviewers, protected by the single-blind review system, are tempted to provide harsh, sometimes even demeaning comments or even to unethically hamper the work of their direct competitors.[3]

The traditional peer review system in place today, in which typically two to three referees provide full reports on a manuscript, originates from a time in which manuscripts and reports were sent back and forth by paper mail and in which cooperation in the reviewing process was simply not a realistic option. With Select Crowd Reviewing, we strive to preserve the indispensable strengths of peer review while at the same time alleviating some of the weaknesses of the traditional procedures by using modern tools enabled by the internet and information technology.

You mentioned the fact that sometimes contradictory recommendations result from traditional peer review. Isn’t this desired? Or, phrased differently, does crowd review result in a situation where you have just one viewpoint?

Of course, Select Crowd Reviewing also reveals when different opinions on a certain manuscript or an aspect thereof exist. However, these opinions do not simply stand side by side with no way to weigh them. Since each reviewer can see the others’ comments, contradictory opinions ‘clash’ during the reviewing process. This can lead to a discussion and ultimately to a consensus opinion, instead of an Editor having to decide which of two opposing recommendations to give more credit to.

But doesn’t this create the possibility for ‘peer pressure’ in the reviewing process, where one reviewer posts an opinion and then the others just agree without proper reflection?

We do see situations where one reviewer posts an opinion and then various others agree. The real question here is whether this is automatically a bad thing. In many cases it saves the reviewers a lot of work, because instead of independently and lengthily writing up the same points, they can agree with an existing comment. It would only be a problem if the agreeing reviewers did so without reflecting on this step. We have no reason to assume that this is the case. Firstly, the reviewers are anonymous amongst each other, such that there is no incentive to agree with someone other than genuinely sharing their opinion. Secondly, we see disagreement often enough to conclude that there is no hesitation to enter a discussion when conflicting opinions exist. Consider this: in traditional peer review it also happens frequently that several reviewers reach the same conclusion or recommendation without having influenced each other. This is actually what Editors hope for, because it renders their job of making an overall decision much simpler than highly conflicting independent reviews do.

You mentioned ‘reviewer fatigue’, but does it not create disproportionately more work if a larger number of reviewers has to review each manuscript?

Three aspects should be considered here. Firstly, a reviewer contributing to the evaluation of a manuscript through Select Crowd Reviewing does not automatically have the same amount of work as in traditional peer review. It is possible for reviewers to log in, give a single comment – ranging from an overall recommendation to highlighting spectra of insufficient quality – and not comment further on the manuscript. From multiple such contributions, full referee reports can be obtained. This is ‘crowd intelligence’ at work. Secondly, not all members of the crowd comment on all manuscripts; they ‘self-assign’ when they are available for commenting. Thirdly, the traditional peer review system has a tendency to focus reviewer requests on the most visible members of a given field, simply because these are the first names that come to mind when the authors are asked to recommend reviewers or when the Editor has to identify suitable reviewers. At the same time, many members of the scientific community who are equally qualified to provide review reports and willing to contribute to the community of which they are part as authors receive few or no requests to review manuscripts. Select Crowd Reviewing thereby has a much larger potential reviewer base than traditional peer review.

You mentioned that you have a larger base of potential reviewers. What do you mean by that? Or, phrased differently, who is part of this mysterious crowd and how does one become a member of the crowd?

The members of the crowd are often highly trained assistant professors and postdoctoral researchers, but also senior professors and chemists who have left academia to lead industrial research laboratories. By asking such scientists to join Select Crowd Reviewing and encouraging them to apply to become members of the crowd, we create a system that distributes the burden of, but also the honor and responsibility associated with, peer review onto a broader base of scientists. There are two main pathways to becoming a member of the crowd. The Editors at Thieme Chemistry are always looking for suitable candidates, whom they then contact either directly or via the Crowd Reviewing Editor and invite to join Select Crowd Reviewing. Additionally, applications to join the crowd, especially from postdoctoral researchers who wish to pursue academic careers, are frequent and are considered based on the merits of the applicant.

Even if the members of the crowd are well-trained chemists, isn’t the evaluation by two or three highly specialized leaders in the field more rigorous?

Extreme specialization may be needed for particular manuscripts that cover topics at the border of the scope of Synlett. In general, however, the fact that Synlett is itself a specialized journal focusing on chemical synthesis leads to a comparable specialization of the crowd members. We simply aim to have a crowd of suitably specialized referees. Additionally, it should be considered that, besides the qualification to perform a detailed and competent review, the quality of the referee reports obtained also depends strongly on the time invested by the reviewer. Here, Select Crowd Reviewing offers an inherent advantage. Traditional peer review, by focusing the reviewing burden on fewer individuals, also reduces the time that can be invested in reviewing each manuscript, which can erode the overall quality of the peer review reports provided by prominent leaders in a given field. In contrast, the members of the crowd are often willing to invest more time and end up providing much more thorough referee reports, also because duplicated work is avoided, as explained above.

How can this ever be scalable?

Over recent years, Select Crowd Review at Synlett has evolved from a handful of manuscripts over several months to the default method with which hundreds of manuscripts are evaluated. Naturally, we had to consider how to cope with an increasing number of manuscripts to be reviewed. As a result, we have expanded our crowd and now use a system in which each manuscript is evaluated by a subset of crowd members. Larger journals, journals with a broader scope, and interdisciplinary journals, where not all crowd members are suitable for all manuscripts, could easily adapt this approach and create sub-crowds based on keywords describing their respective specializations. Overall, the solution is simple: the more manuscripts have to be evaluated, the more crowd members are required. Since, as discussed above, the overall amount of work is not increased and is distributed more fairly throughout the community, we conclude that scaling is primarily a matter of management, e.g., by the Crowd Review Editor, and of technology, in the sense that the databases and artificial-intelligence software needed to manage these increasingly large and diverse crowds must be available. To this end, Thieme works constantly with the companies involved in the management and evaluation of manuscripts (ScholarOne and Filestage). Two more aspects should be reiterated here: (1) with Select Crowd Reviewing we tap into a much larger refereeing community, and (2) the overall amount of reviewing material generated in the process, which we typically quantify in terms of the number of substantive points, is not massively greater than that coming from traditional peer review. Overall, our referees work less rather than more and can even select which manuscripts to put their effort into.

How do you ensure appropriate ethics of the crowd?

Firstly, we have reviewer guidelines, as for traditional peer review, requiring reviewers to disclose potential conflicts of interest and not to contribute to the evaluation of affected manuscripts. Secondly, the Crowd Review Editor has the possibility to access the identity of each reviewer despite the anonymity during the reviewing process itself. This allows us to probe suspected cases of ethical misconduct, although this has never been needed. In contrast, the crowd has already uncovered cases of ethically questionable to unacceptable behavior by authors that might have been missed in traditional peer review, such as self-plagiarism, plagiarism, and even digitally modified analytical data in the Supporting Information of manuscripts. Finally, and most importantly, the crowd is self-controlling: an inappropriate reviewer comment will simply receive opposing comments from other members of the crowd. This renders it much more difficult to provide a biased report than in traditional peer review.

How do you deal with bad manners in the crowd?

Again, we have the same options at hand as in traditional peer review. We have had occasional impolite comments from reviewers. These reviewers were contacted and informed about our expectations of reviewers. In virtually all cases, the reviewers changed their conduct after being contacted. Only in a single case over the last years was a reviewer removed from the crowd for repeatedly providing comments in an inappropriate manner. One point of note, however, is that the referee reports obtained from Select Crowd Reviewing are built from the compiled comments of various people and resemble ‘chat protocols’. This format, as opposed to the flowing text obtained in traditional peer review, may seem harsher to authors encountering it for the first time.

So far, we have discussed potential weaknesses and challenges of Select Crowd Reviewing, but what are its inherent advantages?

The key advantages of Select Crowd Reviewing result from resolving challenges in traditional peer reviewing.[4] As stated above, Select Crowd Reviewing reduces the overload of certain reviewers by calling upon a broader reviewer base. This also renders the format more inclusive towards otherwise underrepresented groups of scientists (with respect to career stage, country of origin, etc.). The interactive format provides several additional advantages. It acts as a safeguard against biased referee reports, and instead of contradictory statements by independent reviewers, the Editor obtains a discussion as the review report, which allows for much more informed decisions. Further advantages result from the composition of the crowd. The participation of scientists at earlier stages of their careers provides a great opportunity for them to contribute to the scientific community. Furthermore, the balanced composition of the crowd leads to very substantive referee reports: different members of the crowd tend to comment on different aspects of a given manuscript. Some focus only on the big picture, others have a keen eye for the quality of analytical data or put particular emphasis on the quality of the graphical material. Finally, this modern format inspired by social media is extremely fast compared to traditional peer review. Typically, a manuscript is online for our crowd for only two to four days. This short period is usually enough to obtain sufficient quality feedback, such that by now the administrative parts of the workflow shown in Figure [2] account for the larger part of the time required to evaluate a manuscript.

Doesn’t this cause much extra work for the authors?

It depends on the point of reference. In our experience, Select Crowd Reviewing can produce very detailed comments. Instead of a comment such as ‘the language must be improved’, dozens of specific comments may be given to help the authors improve the writing. Reviewers may comment on spectra of insufficient quality, make specific suggestions for further experiments, etc. However, all of this is in principle expected from a very good traditional peer review report. Unfortunately, we have grown used to traditional review reports that do not live up to these standards and sometimes contain only a general recommendation to the Editor, with no specific comments that could be used to improve the manuscript. In that sense, the reports from Select Crowd Reviewing will sometimes result in more work for the authors than traditional peer review. However, this work is certainly justified if it results in better manuscripts and better science.


4 Conclusion

In this Account, we have reported how the idea and process of Select Crowd Reviewing originated, detailed the workflow used at Synlett/SynOpen, and attempted to address typical questions we receive regarding this modern implementation of peer review. While we are very pleased with how Select Crowd Reviewing has worked over the past few years, this comparatively young approach is still developing dynamically. We therefore always welcome questions and suggestions and invite readers interested in experiencing Select Crowd Reviewing themselves to contact us and join the crowd.

Testimonials from Crowd Members and Editors

‘Due to the diverse members of the crowd, CR allows indeed to obtain more opinions on manuscripts from a broader basis of reviewers. This is absolutely true and also prevents referees from following an agenda and weakens extreme opinions (in one direction or the other). […] For me it is a lot of fun in principle. As a young researcher, I am not yet overwhelmed by peer-review requests. At the same time, I can give something back to the community, I can participate actively and help to shape the journal, as I work on many of the submitted manuscripts. But especially, I also get insights into the comments of other referees and enjoy reading about what I might have overlooked while reading! It helps me to grow as both scientist and referee. It is a great concept and I feel it already works very, very well! Congratulations!’

Dr. Bernd M. Schmidt, Assistant Professor, Heinrich-Heine-Universität Düsseldorf

‘The crowd review program (CR) is a great improvement for the peer-review process where you could potentially see a fast-paced publication. It certainly has several advantages over conventional peer review and takes significantly much lesser time for both the manuscript evaluation as well as for publication (few weeks). In addition, crowd review completely nullifies the conflicts of interest and brings a precise conclusion to the fate of the manuscript, which you can occasionally miss in conventional peer review. […] I strongly believe that the reviewers also benefit from the scientific discussions during the review process and share a joint responsibility to properly evaluate the manuscript. A reviewer does not enjoy these important returns from conventional peer review.’

Dr. Nagaraju Miriyala, Research Associate, Medicinal Chemistry Facility, J. G. Brown Cancer Center

‘As the Editor for Account and Synpacts articles, I have been gratified by the increasing number of authors opting for crowd review (more than 50%). The manuscripts are invited, hence the aim is to get them published, the reviewing process serving to flag problems with content and to improve the presentation of the subject matter. The latter gets short shrift in traditional peer review, which (rightly so) tends to focus on the intellectual gist of the manuscript. The crowd has been exceptionally good at addressing both aspects, providing valuable feedback on the scientific content of the manuscripts, but at the same time pointing out stylistic and grammatical errors, inconsistencies, format problems, and ways to improve the reading of the narrative. Only occasionally do I have to supplement the comments of the crowd by those of additional experts. Needless to say, manuscript processing times have been reduced dramatically.’

Prof. Dr. K. Peter C. Vollhardt, University of California at Berkeley



Acknowledgment

We thank Dr. Kathrin Ulbrich from the Synlett Office at Thieme Chemistry for providing valuable statistical data and resources for the preparation of this manuscript.

We thank the many scientists who through their highly motivated engagement and valuable feedback as members of the crowd contribute to Select Crowd Reviewing. Without you this success story would not be possible!

  • References


    • For selected resources discussing the merits, but also challenges faced by the peer review system, see:
    • 1a Smith R. Br. Med. J. 1988; 296: 774
    • 1b Shatz D. Peer Review: A Critical Inquiry. Rowman & Littlefield; Lanham, MD: 2004
    • 1c Benos DJ, Bashari E, Chaves JM, Gaggar A, Kapoor N, LaFrance M, Mans R, Mayhew D, McGowan S, Polter A, Qadri Y, Sarfare S, Schultz K, Splittgerber R, Stephenson J, Tower C, Walton RG, Zotov A. Adv. Physiol. Educ. 2007; 31: 145
    • 1d Walker R, Rocha da Silva P. Front. Neurosci. 2015; 9: 169
    • 1e Csiszar A. Nature 2016; 532: 306
    • 1f Faggion CM. Jr. Br. Dent. J. 2016; 220: 167
    • 1g Carroll AE. Peer Review: The Worst Way to Judge Research, Except for All the Others, Nov. 8, 2018. The New York Times; New York: 2018: 3
  • 2 List B. Nature 2017; 546: 9

    • For selected resources highlighting the sensitivity of the peer review system towards unethical behavior and fraud, see:
    • 3a Smith R. J. R. Soc. Med. 2006; 99: 178
    • 3b Ferguson C, Marcus A, Oransky I. Nature 2014; 515: 480

Corresponding Authors

Manuel van Gemmeren
Westfälische Wilhelms-Universität Münster, Organisch-Chemisches Institut
Corrensstraße 36, D-48149 Münster
Germany   
Benjamin List
Max-Planck-Institut für Kohlenforschung
Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr
Germany   

Publication History

Received: 14 September 2020

Accepted after revision: 22 September 2020

Article published online:
30 October 2020

© 2020. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial-License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

