Interesting suggestion. I think part of the difficulty in developing such ratings is finding a scoring framework that provides meaningful information about the quality of a review: quality can mean different things in the context of a review, and defining a simple scoring system is challenging. For preprints, bioRxiv and medRxiv provide commenting functionality via Disqus, where readers can upvote or downvote a particular comment, although that only conveys approval or disapproval rather than information about the quality of the comment. Peerage of Science, a platform that provides journal-independent reviews, does assign quality indices to reviews: https://www.peerageofscience.org/how-it-works/quality-indices/.
We already have examples of tools that have been implemented or are in development, such as Similarity Check by iThenticate for detecting text overlap between manuscripts, and ongoing efforts around image-screening software. I understand some publishers, such as Elsevier, are collaborating with groups developing software to detect figure manipulation, and pilots are underway, so we may see implementation of these types of tools in the near future.
We may see future automation of other checks that do not necessarily require in-depth subject-specific knowledge, for example, screening of clinical trial registration, competing interest statements, or data statements. This may also open avenues for new services that provide some form of pre-certification of those checks for authors, as has happened with copyediting services.
I think tools can help scale the peer review process by decoupling certain checks, which can potentially be automated as new tools are developed, from the review of scientific content, which requires the input of an expert.
I understand the Frontiers journals operate this peer review model: their process involves an independent review by reviewers first, followed by a collaborative review step in which the editor, reviewers, and authors can interact: https://www.frontiersin.org/about/review-system. I have heard anecdotally that this can sometimes make it difficult for reviewers to reject, but I have no first-hand experience with the model. I expect this type of review requires a higher commitment from reviewers in terms of their input and involvement in the review process, but I can see the benefits of a collaborative approach in, for example, avoiding unnecessary revision rounds when items can be clarified directly with the reviewers. It would be interesting to study articles published under different review models to see whether there is any correlation between a specific review approach and the quality of the reviews, and of the eventual published paper.