The following is an expanded version of a piece that recently appeared in Times Higher Education. It was written with Prof. Hilal A. Lashuel, director of the Laboratory of Molecular and Chemical Biology of Neurodegeneration at the Swiss Federal Institute of Technology Lausanne (École Polytechnique Fédérale de Lausanne, EPFL).
“That’s it, that’s the cure.”
I stood squinting at the graph in front of me. Woah! The disease group really did fall back in line with the healthy controls. Maybe this target actually could swing the tide in our fight against Parkinson’s disease.
I watched the eager young researcher as he flipped excitedly through the rest of his slides. “Amazing,” I said, not entirely certain I understood everything but wooed anyway by his enthusiasm and self-belief. “So, what’s next?”
“Well, the journal wants me to run a couple more experiments in animal models and connect this to the field’s belief in protein aggregation. Once it’s published I’ll be able to form a company through my university to develop bioassays to identify a lead compound. It shouldn’t take more than five years to get it out of preclinical development.”
At that moment, I knew I was screwed.
Humanity is defined by our ability to communicate. Our capacity to exchange detailed messages with other members of our species enabled us to become the dominant form of life on earth.
The invention of the written word propelled us even further, giving each of us the power to influence events around the globe and shape the minds of generations to come.
Printing made writing our dominant form of expression. It turned literacy into a requirement of civil life and launched the scientific and cultural revolutions that gave birth to the modern world.
Now the internet has thrust all of that into hyperdrive, facilitating the spread of ideas to all corners of the world at nearly the speed of light.
In all its iterations, writing has been the storehouse of our knowledge and the backbone on which we have built our civilization. Every great innovation, every giant leap forward, emanated from the written expression of an idea. To this day it still carries our best hopes for building a better future. Within it lies the potential to curb climate change, stave off drought and famine, even cure diseases.
However, the flow of information is stifled by the very means we use to disseminate it – publishing. Scientific publishers wield enormous influence over society because of the effect they have on the direction of science. Research institutions and funding bodies rely on publication history to assess scientists; entire careers are predicated upon getting published in the right journals. As a result, the primary goal driving research today is not the progress of human knowledge or the advancement of society. It is publishing.
There is some good that comes of this. Getting published does typically require the discovery of something novel, which spurs progress, but only at a snail’s pace, because progress has been rendered a side effect rather than the goal. And while the process does ensure that claims get sufficiently scrutinized, it is rife with inefficiencies and pressures researchers into monopolizing data to secure a competitive edge for funding opportunities or for the limited pages of “high-impact journals”.
All of which is further exacerbated by the need to tell a compelling story. Publishers sell a commodity: the more appealing they deem a story, the more likely they are to publish it. This drives researchers towards experiments that have the best chance of producing a captivating headline, leaving few if any incentives in place to do the ‘unsexy’ but equally important replication and validation studies. What’s more, if new information comes in after a manuscript is formally submitted, researchers cannot update their work and must wait until they can collect enough data to tell a new story. This often results in scientists presenting half-truths and discarding critical data that adds too much complexity, because journals want clean stories with unambiguous results.
At the core of this problem is the increasing reliance of universities on publication metrics when making recruitment and promotion decisions. Science has turned into a numbers game. As a result, researchers now spend an average of 38 days a year writing new proposals and around ten days a year reviewing other academics’ funding proposals, and nearly 15 million hours are spent each year reviewing papers that end up being rejected.
For those who look to scientific progress hoping for solutions to the problems we face, the future can look pretty grim. The issues above are well known in the scientific community, yet there is a critical lack of urgency to do anything about them because the system keeps churning. Discoveries still happen, papers are still written, scientists still get funded, and incremental progress still seems to be made.
One example of the bottleneck this creates is in Parkinson’s disease. Patients today are still treated with the same drug that was given to their grandparents in the 1960s. The lack of progress is blamed on the complexity of the problem, but a closer look at how research is conducted today tells us that the way we conduct, evaluate and reward research is largely responsible for this slow pace. We have a plethora of exciting new discoveries being published, yet next to nothing that has translated into meaningful clinical advances.
All of which leads to a hard truth that anyone diagnosed with a degenerative brain condition must inevitably face – the pace at which science is progressing is slower than the rate at which the cells in our brain are dying.
If we are ever going to flip that equation, we will need new methods of assessment that allow us to judge scientists by their merit, not by a metric. We need a system where researchers are encouraged to collaborate and to make their work accessible to everyone as soon as possible, and where the reward systems that drive their careers depend not on where they get published but on the extent to which their findings advance knowledge and address the problems of their field. So long as researchers are assessed primarily by their individual achievements, change will lag.
A New Vision
This new vision for scientific publishing requires reversing the relationship between authors and publishers. In this new system, authors would be able to publish and make their research accessible to everyone immediately and free of charge, editors of journals would compete to publish work, researchers could continue to update their papers even years after publication, and the incentives, recognition and reward systems would depend not on where a paper is published, but on its contents and the extent to which it advances knowledge and contributes to addressing the problems of its field.
In this system, scientists would publish their work and the relevant data immediately upon completion in preprint servers such as bioRxiv or ChemRxiv and in data repositories. There they would receive immediate feedback and recognition from their peers and the scientific community. This would democratize the process and open the doors to reviews and feedback from a broader swath of the community, as students, citizen scientists, and researchers from all over the world and at all stages of their careers would be able to review and comment on the work. These papers would be indexed and retrievable in a single database of preprint articles, becoming immediately visible and accessible to the scientific community and the public free of charge.
In this open market for papers, journal editors would have the opportunity to review the uploaded manuscripts and determine which ones fit the scope and requirements of their journals. They can then contact the authors to express their interest in publishing their work. The authors then get to decide where to send their work.
Next would begin the all-important peer-review process. Journals would eventually come to be judged by authors based on how well they facilitate these interactions. This would force journals to take a more active role at this stage, enabling authors to interact directly with the reviewers, whose primary role is to assess the claims made, ensure the robustness of the science, and provide guidance and constructive feedback to improve the paper.
Given the increasing complexity and interdisciplinary nature of science, the current review process based on two to three reviewers no longer works. In many cases papers contain highly specialized data from different disciplines, much of it beyond the expertise of the reviewers, which means that most complex interdisciplinary papers are not thoroughly reviewed at the experimental level. Opening up the process would not only bring more expertise to bear on assessing the work, but would also give young scientists and postdocs who are conducting similar experiments, and are much closer to the bench, a chance to provide technical and scientific feedback that would improve reproducibility and research integrity.
After passing peer review, the papers would be accepted, and all the reviews and author responses would be published alongside them. The authors would pay the publication and open-access fees and deposit all the relevant data and the tools needed to allow others to validate and expand on their work.
The process does not end here. In the new system, the authors would be given the opportunity to respond to post-publication reviews and to update or even correct their papers with comments, links to improved methods or protocols, or new data, all of which would be linked and traceable to the original submission.
Finally, provisions would be taken to ensure that the public is as informed as possible. Papers would have to include a one-page summary that explains, in the simplest terms possible, the motivation for the work, the main results, the limitations, and the implications of the main findings for the advancement of knowledge and/or for addressing societal needs and global challenges.
For taxpayers and funders of research, this system means getting maximum value and return on investments in research. Giving scientists the ability to correct and update their publications would improve reproducibility, enhance research integrity and ensure that all the research they paid for is published. A lot of good and useful science never gets published because the body of work is not substantial enough to warrant a full paper. Scientists are also reluctant to invest time in publishing incomplete work, correcting the work of others or improving previously published methods and protocols, because such efforts are not recognized as scholarly work and are unlikely to bring recognition, promotion or reward.
For editors and publishers, this system represents a major role reversal and would require that they assume more of the editorial roles and responsibilities. Journals and publishers would also need to hire more editors, including preprint editors. Given their current profit margins (~35%), this could be accomplished without increasing the cost of publication. Today, the editors of “top-ranked” and “high impact” journals reject 80-90% of the papers they receive with impunity. Under the new system, they would instead review the preprint repository and select exactly the papers they are interested in publishing.
This would also give scientists more choices and greater control over their work. The process eliminates the time lost to multiple submissions and rejections, or to delays caused by editors demanding more data to meet a journal’s novelty standards. This is especially important for junior scientists, who would be able to quickly find the right journal for their papers and receive immediate feedback to improve the quality of their work. It enables them to get their work noticed, cited and evaluated, helping them build a reputation and scholarly track record before they apply for jobs or come up for tenure evaluation or promotion.
Previously, scientists had to wait at least one to two years for citations of their work to accumulate. Today, thanks to preprint servers and websites such as https://rxivist.org, preprint papers receive almost immediate citations and recognition of their impact through metrics that include the number of downloads. While papers posted on preprint servers are not yet listed in some databases such as PubMed, new platforms have been developed to archive preprint papers, and search engines such as … allow retrieval of both published and preprint papers.
Finally, it would also lift the veil that currently separates science from the rest of society. There is a tremendous gap between what gets published in academic journals and what eventually trickles down to the mainstream public. This is partly because very few people have been granted access to the source of our knowledge. We seem to have reverted to a time when only a select priestly class of citizens had access to our most important information. If we are to continue to advance as a species and as a civilization, we must do everything we can to remove the barriers separating those who know and those who don’t. This new system of publication (which we have dubbed ADP, author-driven publishing) would be an important step towards ensuring a more educated and civil society and would enable us as a civilization to make more informed decisions about the future we want to live in.
The vision outlined above may seem like a far-off utopia, but we are already trending in this direction.
Scientists, in particular junior and mid-career scientists, continue to embrace open science practices in large numbers, and the publishing industry is recognizing this trend and trying to adapt. The number of papers posted on preprint servers is increasing daily, and most journals allow deposition in preprint servers. Some even offer services that allow direct submission of papers to preprint servers from their own submission systems. Several journals (e.g., Biophysical Journal and the PLOS journals) are already introducing direct links to the preprint versions of published articles.
The journals and editors are also responding to this rapidly changing landscape. A few journals (e.g. PLOS Genetics, PLOS ONE and Proceedings of the Royal Society) have appointed Preprint Editors to screen preprint publications and solicit submissions.
To ease people’s fear of being scooped, several journals (e.g., EMBO Journal, eLife and PLOS Biology) have introduced scoop-protection policies under which competing papers published after the submission or preprint posting date do not influence their editorial decisions. Several national and regional funding agencies, including the European Research Council, Wellcome Trust, MRC, NIH and Cancer Research UK, … now encourage scientists to use and cite preprints in grant applications, thus recognizing preprints as evidence of research productivity.
PLOS Biology and PLOS Computational Biology recently announced that 43% and 46% of their authors, respectively, opted to make preprint versions of their papers available. To ensure quality, the papers are now prescreened by editorial staff prior to submission to bioRxiv. Interestingly, many of the authors who opt out of preprint submission do so simply because they are unfamiliar with the process.
The process of review has also become increasingly transparent. Several journals encourage the publication of reviews and author responses once manuscripts are accepted. Others, such as the Frontiers journals and eLife, already encourage more interactive exchanges between the authors and the editors or reviewers during the review process.
To address the financial burden of open access publications, several research and academic institutions have also established central funds to cover open access fees and funding agencies are allowing researchers to use grant funds to pay for open access publications.
The Rest of the Way
While we have certainly made a lot of progress in this direction, we aren’t there yet. For this to become the norm, it will require a collective and concerted effort by all the stakeholders (scientists, universities, funding agencies, private foundations and regulators). There is already consensus that the current model of publishing should be changed, and that universities and research institutions need to modify their research evaluation and reward systems. The reason change has been slow is that there has been no sense of urgency, and making these changes means at least two things: 1) changing long-standing institutional cultures, values and systems; and 2) experimenting with new and untested incentive structures and evaluation systems. One way to create a sense of urgency is to increase public pressure and to provide resources and incentives for universities to experiment with new and innovative open science practices and evaluation systems.
More investment is also urgently needed to develop new and innovative technologies and tools to support dynamic publishing and to ensure the long-term sustainability of data- and tool-sharing infrastructure and repositories. This is where the support of private funding agencies, such as the recent funding boost given to bioRxiv by the Chan Zuckerberg Initiative (CZI), could play a major role in driving innovations that support open science practices and accelerate the pace of discovery.
It is time that academic institutions, research centers, and funders of science wake up to their social responsibility and enact the changes necessary to enable everyone involved to work towards more ambitious goals than papers and start prioritizing people over publications.