
How the Web of Science takes a step back

28/11/2024

The Web of Science, a major commercial indexing service for scientific journals operated by Clarivate, recently decided to remove eLife from its Science Citation Index Expanded (SCIE). eLife will now be indexed only partially in the Web of Science, as part of its Emerging Sources Citation Index (ESCI). The justification offered for this decision is that eLife’s innovative publishing model conflicts with the Web of Science’s standards for assuring quality. This blog post argues that the decision works against the interests of science and will ultimately harm innovation and transparency in scientific research.


If we want to reap the full benefits of 21st-century science, scholarly communication needs reform and innovation[1]. That’s why the Howard Hughes Medical Institute (HHMI) and like-minded research organizations are supporting new approaches to scientific publishing. eLife, founded by HHMI, Wellcome, and the Max Planck Society in 2012, has delivered such innovations since its inception. In its first major contribution, eLife improved peer review by embedding consultation among peer reviewers into the process. Given this record, it is troubling that Web of Science recently decided to de-list eLife from its major index, especially considering that its publicly stated criteria for journal evaluation include the requirement that “published content must reflect adequate and effective peer review and/or editorial oversight”. Rather than helping to move scholarly communication forward, Web of Science is holding it back by punishing a leader in the field.

The decision

In 2023, eLife adopted an ambitious new publishing model, referred to as “Publish, Review, Curate”, in which research findings are first published as preprints, and the subsequent peer review reports and editorial assessments are published alongside the original preprints and revised articles.

Web of Science de-listed eLife from SCIE because the new publishing model can result in the publication of studies with “inadequate” or “incomplete” evidence. According to Clarivate, eLife failed “to put effective measures in place to prevent the publication of compromised content”. As one of the people who supported the implementation of eLife’s new model, I worry that this decision penalizes the adoption of innovative publishing models that experiment with more transparent and useful forms of peer review.

What makes Clarivate’s decision so counterproductive is that other journals, which use the traditional and confidential peer review model, remain indexed in the Web of Science even though some of the articles they publish also have inadequate or incomplete evidence. The difference is that they do not label these articles as such.

The decision therefore rewards journals for continuing the unhelpful practice of keeping peer review information hidden, and thereby unintentionally presenting incomplete and inadequate studies as sound science, while punishing those journals that are more transparent.


Beyond Pass/Fail Peer Review

Traditional journals use the editorial decision to publish as the signal to their readers that the article has passed peer review and meets the journal’s standard of sound science.

As a proxy, the decision to publish can only transmit one piece of information: approval. In contrast, eLife uses editorial assessments, produced collaboratively by eLife reviewers and published with the article, as a proxy for the peer review process. Rather than being a one-dimensional stamp of approval, eLife editorial assessments summarize strengths and weaknesses and use standard terms to rank articles on two dimensions: significance and strength of evidence. For example, the ranked standard terms for strength of evidence range from “inadequate” and “incomplete” at the low end to “compelling” and “exceptional” at the high end.
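To make the structure of these two-dimensional assessments concrete, here is a minimal sketch in Python; it is purely illustrative and not part of eLife’s systems. The strength-of-evidence terms are the ones quoted above (eLife’s full controlled vocabulary includes additional intermediate terms), while the significance levels are hypothetical placeholders, since this post does not enumerate them.

```python
from dataclasses import dataclass
from enum import IntEnum


class StrengthOfEvidence(IntEnum):
    # Illustrative subset of the ranked terms quoted above, lowest to
    # highest; the full vocabulary has additional intermediate terms.
    INADEQUATE = 1
    INCOMPLETE = 2
    COMPELLING = 3
    EXCEPTIONAL = 4


class Significance(IntEnum):
    # Hypothetical placeholder levels; the post does not list eLife's
    # actual significance terms.
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class EditorialAssessment:
    """A prose summary of strengths and weaknesses, plus ranked
    standard terms on two independent dimensions."""
    summary: str
    significance: Significance
    strength_of_evidence: StrengthOfEvidence


# Unlike a one-dimensional stamp of approval, the assessment can record
# that an article was published even though its evidence was judged
# "incomplete" -- the judgment is made visible rather than hidden.
assessment = EditorialAssessment(
    summary="Valuable findings, but key controls are missing.",
    significance=Significance.MEDIUM,
    strength_of_evidence=StrengthOfEvidence.INCOMPLETE,
)
print(assessment.strength_of_evidence < StrengthOfEvidence.COMPELLING)  # True
```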

As a result, eLife does indeed occasionally publish articles that eLife reviewers judge and mark as having “inadequate” or “incomplete” evidence after final revisions, although these represent only a small fraction of the total articles published.

eLife’s model thus makes visible a reality of scientific publishing that is currently invisible.

Publishing articles with incomplete or inadequate evidence

So, how do traditional journals end up publishing studies with incomplete or inadequate evidence?

As a former editor myself, I know that editors will sometimes publish incomplete studies in the belief that these articles include important information that others in their respective fields will improve and build on. The problem isn’t the publication of such studies but the non-transparent stamp of approval, which conveys a strength of evidence that simply isn’t there. Clarivate’s decision signals to journals that they can continue to give readers the false impression that the evidence for all their articles is better than incomplete. By contrast, the eLife model allows for the sharing of these types of studies, but with the context of an evaluation of the strength of the evidence.

The case for articles with inadequate evidence is different.

Journal editors certainly try to reject inadequate articles or compel authors to revise them in light of serious criticism. While these efforts seem appropriate and well-meaning at the level of individual journals, they break down at the level of the publishing system. Authors can simply submit an article rejected by one journal to another journal, without any note of the prior rejection, where it may ultimately be published. Articles with inadequate evidence thus end up being published without any flag or commentary noting the problem, polluting the scientific literature.

Peer review isn’t foolproof to begin with. Indeed, studies[2] on peer review have shown that expert reviewers often miss flaws that were deliberately inserted into manuscripts by the studies’ designers. Re-discovering these flaws through successive rounds of confidential peer review at different journals makes the task of keeping inadequate articles rejected increasingly difficult.

Adding to this problem, the pool of expert reviewers is depleted with every round of review. The inevitable outcome is that inadequate articles are published with the stamp of approval from peer review in journals that remain listed on Web of Science.

Moreover, the confidential nature of peer review at traditional journals makes it hard to grasp the full extent of the problem. It is thus unclear whether these journals publish more or fewer inadequate articles than eLife. But when they do publish those studies, the damage is more serious than at eLife because readers are led to believe that the articles report sound science.

How indexers can drive change

Science needs indexing services that move beyond mere assertions and hold journals accountable for their claims about the quality of articles they review and consider for publication.

At a minimum, journals should insist that the prior peer review history of an article be shared with their editors and peer reviewers.

Authors may deserve fresh eyes on their articles, but not without convincing the new reviewers that prior reviewers got it wrong. eLife authors can already take their eLife-reviewed preprints to other journals for publication, including preprints that eLife reviewers deemed “inadequate”, but the peer reviews remain publicly accessible at eLife.

Journals that conduct peer review in the open or ensure transfer of prior peer review reports clearly advance adequate and effective peer review and should thus be rewarded, not punished, by indexers.

Indexing services should also implement protocols that scrutinize whether journals effectively correct hyped or flawed publications. Science is error-prone and a journal’s stamp of approval – or even a transparent peer review process such as eLife’s – can get it wrong.

Journals would perhaps do a better job of correcting their published record if these indexing services held them more accountable. eLife’s new publishing model makes it easier to evolve in this direction. For example, editorial assessments could in the future be revised in response to expert feedback.

From career threat to badge of honor

When we accept that published assessments can change, reflecting the maturing perception of the published work among experts, we may even overcome the most formidable cultural barrier that transparent peer review faces: the understandable and widespread concern that critical reviews can sink the work and career of the authors. If the authors got it right and the work eventually succeeds in overturning a long-held dogma, initial negative reviews can turn into a badge of honor. They demonstrate, in the written record, what challenges the authors faced on the road to recognition of their work.

Unfortunately, the decision by Clarivate to de-list eLife from SCIE is a barrier to a future where the inevitable errors of authors, peer reviewers and editors can be effectively corrected through expert input that everybody can see and benefit from.


References

[1] Sever R (2023) Biomedical publishing: Past historic, present continuous, future conditional. doi.org/10.1371/journal.pbio.3002234; Stern BM, O’Shea EK (2019) A proposal for the future of scientific publishing in the life sciences. doi.org/10.1371/journal.pbio.3000116

[2] Baxt WG et al. (1998) Who Reviews the Reviewers? Feasibility of Using a Fictitious Manuscript to Evaluate Peer Reviewer Performance. doi.org/10.1016/S0196-0644(98)70006-X; Godlee F, Gale CR, Martyn CN (1998) Effect on the Quality of Peer Review of Blinding Reviewers and Asking Them to Sign Their Reports: A Randomized Controlled Trial. doi.org/10.1001/jama.280.3.237; Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R (2008) What errors do peer reviewers detect, and does training improve their ability to detect them? doi.org/10.1258/jrsm.2008.080062


Disclosure

This blog is authored by Bodo Stern, Chief of Strategic Initiatives at the Howard Hughes Medical Institute (HHMI). HHMI is a founding member and major funder of eLife.


Update (28 November)

This blog post has been updated to note that the Web of Science recently decided to remove eLife from its Science Citation Index Expanded (SCIE). eLife will only be partially indexed in Web of Science as part of its Emerging Sources Citation Index (ESCI).


Bodo Stern

Bodo Stern is the Chief of Strategic Initiatives at the Howard Hughes Medical Institute (HHMI). He works directly with HHMI’s president and senior executive team to formulate and execute the organization’s strategic initiatives and direction, with emphasis on enhancing HHMI’s investment in research and science education. Bodo is also responsible for the Institute’s philanthropic collaborations to advance science. He previously served as HHMI’s chief development and strategy officer. Before joining HHMI, Bodo served for eight years as director of research affairs at the Harvard Center for Systems Biology, where he helped to manage the Bauer Fellows Program, a unique initiative that gives young scientists the opportunity to run independent research groups. He also worked as a senior scientific editor at Cell. Bodo earned a PhD in biochemistry from University College London and an MA in biochemistry from the University of Tübingen in Germany. His primary research explored how cells correct chromosome errors during cell division.