India’s first Open Access policy was drafted by a committee affiliated with the Department of Biotechnology and the Department of Science & Technology (DBT/DST) in early 2014, and has yet to be implemented. The first draft accepted comments on its form and function on the DBT website until July 25, 2014; the second draft was released last week and is open for comments until November 17, 2014. If it comes into effect, it could greatly expand healthy research practices in the Indian scientific community at a time when the rest of the world is too encumbered by scale and complexity to mandate them.
The policy aspires to set up a national Open Access repository, akin to PubMed Central for the biomedical sciences and arXiv for the physical sciences in the West, that will maintain copies of all research funded in part or in full by DBT/DST grants. And in the spirit of Open Access publishing, its contents will be fully accessible free of charge.
According to the policy, a scientist applying for a grant must provide proof that papers resulting from previous DBT/DST-funded research have been uploaded to the repository, with the respective grant IDs mentioned in the uploads. Moreover, the policy requires institutions to set up their own institutional repositories, and asks that the contents of all such repositories be interoperable.
The benefits of such a system are many. It would address a host of problems that are becoming ever more intricately interconnected, giving rise to a veritable Gordian knot of stakeholder dynamics. India’s relatively smaller research community can still avoid this knot by implementing a few measures, the Open Access policy among them.
For one, calls to restructure the Indian academic hierarchy have already been made. Here, even university faculty appointments are not transparent. The promotion of scientists with mediocre research output to top administrative positions sidelines better leaders who have gone unnoticed, and their protracted tenure at the helm often stifles new initiatives. As a result, much of scientific research has become the handmaiden of defence research, if not of profitability. In the biomedical sector, for example, stakeholders desire reproducible results to identify profitable drug targets but are loath to share data from subsequent stages of the product development cycle because of the investments at stake.
There is also a bottleneck between laboratory prototyping and mass production in the physical sciences, because private-sector participation has been held at bay by concordats between Indian ministries. Indeed, a DST report from 2013 concedes that the government hopes to achieve a 50-50 split between private and public investment only by 2017, while the global norm is already 66-34 in favour of the private sector.
Many of these concerns have been repeatedly raised by John Ioannidis, the epidemiologist whose landmark 2005 paper on the unreliability of most published medical findings set off a wave of concern about the efficiency of scientific research worldwide. The paper criticized scientists for favouring positive, impactful results – even where none could exist – in order to secure funding. In doing so, they skewed the medical literature to paint a more revolutionary picture than prevailed in real life, wasting an estimated 85% of research resources in the process.
Ioannidis’s paper was provocative not because it proclaimed the uselessness of a lot of medical results but because it exposed the various mechanisms through which researchers could persuade the scientific method to yield more favourable ones.
A ‘sequel’ paper of his was published on October 19, on the 10th anniversary of the Open Access journal PLOS Medicine. In it, he goes beyond specific problems – such as small sample sizes, reliance on outdated statistical measures and flexibility in research design – to showcase what disorganized research can do to undermine itself. The narrative will help scientists and administrators alike design more efficient research methods, and so help catalyse the broad-scale adoption of practices that have until now been viewed as desirable only for this or that research area. For India, implementing its Open Access policy could be the first step in this direction.
Making published results – those funded in part or in full by DBT/DST grants – freely accessible is known to engender practices like post-publication peer-review and data-sharing. Peer-review is the process of getting a paper vetted by a group of experts before publication in a journal. Doing it post-publication invites constructive criticism from a wider group of researchers and exposes the experimental procedures and statistical analyses to broader scrutiny. This in turn inculcates a culture of replication – where researchers repeat others’ experiments to see if they reach the same conclusions – that reduces the prevalence of bias and makes scientific research as a whole more efficient.
Furthermore, requiring multiple institutional repositories to be interoperable will spur the development of standardised definitions and data-sharing protocols. It will also lend itself to effective data-mining for purposes of scientometrics and science communication. In fact, the text and metadata harvester described in the policy is already operational.
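Interoperability of this sort is conventionally achieved with OAI-PMH, the Open Archives Initiative’s standard protocol for metadata harvesting, under which each repository exposes its records’ Dublin Core metadata at a well-known web endpoint for any harvester to collect. The policy does not spell out its harvester’s implementation, so the sketch below is only illustrative – the repository URL is hypothetical, and the toy response stands in for what a real repository would return:

```python
# Illustrative sketch of an OAI-PMH metadata harvest. The endpoint
# below is hypothetical; real repositories expose a similar "oai" URL.
import urllib.parse
import xml.etree.ElementTree as ET

# Dublin Core namespace used by OAI-PMH's mandatory oai_dc format
DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build the standard OAI-PMH ListRecords query for a repository."""
    query = urllib.parse.urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    )
    return f"{base_url}?{query}"

def extract_titles(oai_xml):
    """Pull Dublin Core titles out of a ListRecords response."""
    root = ET.fromstring(oai_xml)
    return [el.text for el in root.iter(f"{DC_NS}title")]

# A toy response in the shape the protocol mandates:
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <dc xmlns="http://purl.org/dc/elements/1.1/">
          <title>A DBT-funded study</title>
        </dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>
"""

print(list_records_url("https://repository.example.ac.in/oai"))
print(extract_titles(SAMPLE))
```

Because every compliant repository answers the same queries in the same format, a single national harvester can aggregate metadata from all institutional repositories without bespoke integrations – which is the point of mandating interoperability.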
Registration of experiments – the practice of formally notifying an authority that you’re going to perform an experiment – is also a happy side-effect of having a national Open Access repository, because it makes the use of public funds more tractable, a point Ioannidis emphasizes. By declaring their sources of funding, scientists automatically register their experiments. This could bring as-yet invisible null and negative results to the surface.
A Stanford University research team reported in August 2014 that almost 67% of experiments (funded by the National Science Foundation, USA) that yielded null results were never even written up, and only 21% were published. By contrast, 96% of studies with strong, positive results were written up and 62% were published. Without prior registration of experiments, then, the picture of how public funds are used for research can be distorted – a detriment to a country that actually requires more oversight.
It would be foolish to assume one policy can be a panacea. Ioannidis’s proposed interventions cover a range of problems in research practices, and they are difficult to implement all at once – even though they ought to be. But to have in hand a part of the solution – one capable of reforming the evaluation system in ways beneficial to the credibility of scientific research – and delay its implementation would be more foolish still. Even if the Open Access policy cannot acknowledge institutional nepotism or the hypocrisy of data-sharing in biomedical research, it provides an integrated mechanism to deal with the rest: it helps adopt common definitions and standards, promotes data-sharing and creates incentives for it, and emphasizes the delivery of reproducible results.