Iratxe Puebla: Interesting suggestion. I think that part of the difficulty in developing such ratings is finding a scoring framework that provides meaningful information about the quality of the review; quality can mean different things in the context of a review, and defining a simple scoring system can be challenging. For preprints, bioRxiv and medRxiv provide a commenting functionality via Disqus where readers can upvote or downvote a particular comment, although that just provides a sense of approval/disapproval and not information on the quality of the comment. Peerage of Science, a platform that provides journal-independent reviews, does assign quality indices to reviews: https://www.peerageofscience.org/how-it-works/quality-indices/.
We have some examples of tools that have already been implemented or where work is underway, such as Similarity Check by iThenticate for text overlap between manuscripts, and efforts around image screening software. I understand some publishers such as Elsevier are collaborating with groups developing software tools to detect figure manipulation, and that pilots are underway, so we may see implementation of this type of tool in the near future.
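To make the text-overlap idea concrete, here is a minimal sketch of n-gram-based similarity scoring. This is my own illustration of the general principle, not how Similarity Check or iThenticate actually work, and the flagging threshold would be an arbitrary editorial choice:

```python
# Illustrative sketch only: real overlap-detection tools such as
# iThenticate use proprietary, far more sophisticated matching.

def ngrams(text: str, n: int = 5) -> set[str]:
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a: str, doc_b: str, n: int = 5) -> float:
    """Jaccard similarity between the n-gram sets of two documents."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    return len(a & b) / len(a | b) if a and b else 0.0

# A manuscript pair scoring above some chosen threshold would be flagged
# for a human editor to inspect, not rejected automatically.
submission = "we measured the effect of temperature on enzyme activity in liver cells"
prior_work = "we measured the effect of temperature on enzyme activity in yeast"
print(overlap_score(submission, prior_work))
```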
We may see future automation for some other checks that do not necessarily require in-depth subject-specific knowledge, for example, screening for clinical trial registration, competing interest statements or data statements. This may also open avenues for new services that provide some form of pre-certification for those checks for authors, as has happened with copyediting services.
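As a rough illustration of what such subject-agnostic checks could look like, here is a hedged sketch; the keywords and section names are my own assumptions rather than any publisher's actual screening rules (ClinicalTrials.gov identifiers do follow the NCT-plus-eight-digits format):

```python
import re

# Sketch of simple presence checks that need no subject expertise.
# ClinicalTrials.gov identifiers are "NCT" followed by eight digits.
NCT_ID = re.compile(r"\bNCT\d{8}\b")

def screen_manuscript(text: str) -> dict[str, bool]:
    """Run basic presence checks and return a pass/fail report."""
    lower = text.lower()
    return {
        "trial_registration": bool(NCT_ID.search(text)),
        "competing_interests": ("competing interest" in lower
                                or "conflict of interest" in lower),
        "data_statement": "data availability" in lower,
    }

report = screen_manuscript(
    "Registered as NCT01234567. Competing interests: none declared. "
    "Data availability: all data are in the supplement."
)
print(report)
# {'trial_registration': True, 'competing_interests': True, 'data_statement': True}
```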
I think tools can help scale the peer review process by decoupling certain checks, which can potentially be automated as new tools are developed, from the review of scientific content, which requires the input of an expert.
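Continuing the sketch above, the decoupling could work as a simple gate: automated checks run first, and only manuscripts that pass are routed on to expert reviewers. The routing logic and messages here are hypothetical:

```python
# Hypothetical routing gate, reusing screen_manuscript() from the
# previous sketch: automated checks first, expert review only after.

def route_submission(text: str) -> str:
    report = screen_manuscript(text)  # defined in the earlier sketch
    failures = [name for name, passed in report.items() if not passed]
    if failures:
        return "returned to authors (failed checks: " + ", ".join(failures) + ")"
    return "forwarded to expert reviewers for scientific assessment"

print(route_submission("No statements included."))
# returned to authors (failed checks: trial_registration, competing_interests, data_statement)
```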
I understand the Frontiers journals operate this peer review model; their process involves an independent review by reviewers first and then a collaborative review step where the editor, reviewers and authors can interact: https://www.frontiersin.org/about/review-system. I have heard anecdotally that this can sometimes make it difficult for reviewers to reject, but I have no first-hand experience with the model. I expect this type of review requires a higher commitment from the reviewer in terms of their input and involvement with the review process, but I can see the benefits of a collaborative approach in, for example, avoiding unnecessary revision rounds if any items can be clarified directly with the reviewers. It would be interesting to conduct a study of articles published under different review models to see whether there is any correlation between a specific review approach and the quality of the reviews, and of the eventual published paper.
2020-09-24 15:15