Jeffrey Aronson, reader in clinical pharmacology at Oxford, said it was for the audience, not him, to answer questions. There are, he said, only two types of committee: those that need expert chairs and those that don't. The British Pharmacological Society runs two academic journals: the British Journal of Pharmacology, published by Nature Publishing Group, and the British Journal of Clinical Pharmacology, published by Blackwell Science [that would be Wiley now-TR]. Apart from their intrinsic value to the discipline, the income from these journals supports the society's educational activities. Since the advent of the web we are all working much harder, but there are concomitant problems of overload and trash. He gave, in his own words, a diatribe against the term "open access", which he found unhelpful. The issue, he said, is who pays: author or reader. At the moment there is an 85/15 split between the two models. Open access can cover a multitude of models.
He quoted a study he and colleagues had performed while searching the literature for the Side Effects of Drugs Annual: of a set of 81 papers, 15 were found in neither Medline nor Embase, but only by a hand search.
Loke YK, Price D, Derry S, Aronson JK. Case reports of suspected adverse drug reactions--systematic literature survey of follow-up. BMJ. 2006 Feb 11;332(7537):335-9. Epub 2006 Jan 18.
He too drew attention to the gaps in our knowledge. We don't know who buys research articles; we don't know the size of the market, either in absolute terms or broken down by sector or discipline; and we don't know where the trends are going. We do not even know the market's total value: publishers' receipts have been claimed to total $5 billion, but is that true?
There are also some imponderables in the supply-side economics of scientific journals: what are the costs of researchers’ contributions as author, editor or referee? What are the costs of launching a new journal? Are new titles cheaper or more expensive than old? What is the variation from discipline to discipline?
How are costs segmented? How much should authors be charged to publish?
We also know little about usage: the extent of use, the difference between use of electronic and hard-copy versions or the differences between remote use and use in libraries. As for citations and impact factors, while these have been better understood for some years now, they remain difficult to interpret, and it is hard to see what the effects of different models might be, whether their use by funders is evidence-based and how they cope with self-citation and collaborative research.
There will be disciplinary differences in both citation and usage patterns. And what effect will alternative models of dissemination have on quality?
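An aside from this reporter, not from the talk: the point about impact factors being difficult to interpret is easy to illustrate. The standard two-year journal impact factor is just the citations received in one year to items published in the previous two years, divided by the number of those citable items, and whether journal self-citations are counted can shift it noticeably. A minimal sketch in Python, with every number invented:

```python
# Two-year impact factor for a hypothetical journal "J" in 2006:
# citations received in 2006 to items J published in 2004-05,
# divided by the number of those citable items. All data invented.

def impact_factor(citing_journals, citable_items, exclude_self_from=None):
    """citing_journals: names of the journals making each citation;
    optionally drop self-citations (citations from the journal itself)."""
    counted = [j for j in citing_journals if j != exclude_self_from]
    return len(counted) / citable_items

# Five citations in 2006 to J's two citable items from 2004-05,
# two of those citations coming from J itself.
citations_2006 = ["J", "A", "B", "J", "A"]

print(impact_factor(citations_2006, citable_items=2))                         # 2.5
print(impact_factor(citations_2006, citable_items=2, exclude_self_from="J"))  # 1.5
```

The gap between 2.5 and 1.5 in this toy case is exactly the kind of thing that makes comparisons across journals, and across dissemination models, hard to read.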
He observed that expert opinion, on which the report was based, ranks a lowly fourth in the pyramid of the hierarchy of evidence; we need more robust evidence, but gathering it takes time. The last systematic review he was involved in took two years.
He cited Eysenbach's study on the impact of open access (Eysenbach G. Citation advantage of open access articles. PLoS Biology 2006;4(5):e157. doi:10.1371/journal.pbio.0040157), which found that, for PNAS, author-pays articles were more likely to be cited over time. But Eysenbach's study was non-randomised, and there is an element of self-selection, in that authors appear more likely to publish their better work in open access journals. We need a proper randomised study, in which articles in delayed-free-access journals are randomised between immediate and embargoed release and the citation rates measured. He reminded us of Chalmers' injunction: randomise the first patient. We need good randomised studies, interpreted within the bounds of the data, to know whether open access will benefit science by speeding up dissemination.
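To make the proposed design concrete, here is a purely illustrative sketch (mine, not Aronson's; the articles, arms and citation model are all invented) of randomising articles between immediate and embargoed free access and comparing mean citation counts between the two arms:

```python
import random

random.seed(1)  # reproducible toy example

# Hypothetical pool of 200 accepted articles, randomised 1:1 at publication
# to immediate free access or to an embargoed (delayed free access) arm.
articles = list(range(200))
random.shuffle(articles)
immediate, embargoed = articles[:100], articles[100:]

def simulate_citations(arm):
    # Toy citation model: we *assume* a small boost for immediate access
    # purely to show how the comparison would be read; a real trial would
    # measure this effect rather than assume it.
    mean = 5.5 if arm == "immediate" else 4.0
    return max(0.0, random.gauss(mean, 2.0))

imm = [simulate_citations("immediate") for _ in immediate]
emb = [simulate_citations("embargoed") for _ in embargoed]

print("mean citations, immediate arm:", sum(imm) / len(imm))
print("mean citations, embargoed arm:", sum(emb) / len(emb))
```

The randomisation is the point: because the arms are assigned by chance rather than by the authors, any difference in citation rates cannot be explained by authors self-selecting their better work into open access.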
In questions, Richard Charkin asked whether, in such a randomised trial, the authors should be told to which arm their papers had been allocated. Jeffrey Aronson had no definite answer, but thought that ethics committees would probably say they should be told. Some authors might object.
Someone from CUP thought that there was an element of distortion: because of the OA debate, open access journals generate more publicity, and the trial itself would distort the results (a sort of Hawthorne effect, I think).
Jan Velterop of Springer questioned whether we really do not know about usage. We do; what we know less about is how information is used at the article level. What does the act of citation mean? Does it mean a paper has been read in full? The wide electronic availability of abstracts may have increased the citation of unread material.
Fred Friend of UCL asked whether there could be a randomised approach to traditional usage statistics; the title of a journal may itself distort usage.
Anthony Watkins asked how this work should be followed up: should there be an annual survey to add to the store of evidence? Yes, said Michael Jubb; there's a lack of longitudinal data.
David Prosser of SPARC Europe felt that the problem Richard Charkin had raised, of publishers having to deal with large numbers of authors, also arises in the traditional model.
Hazel Woodward asked whether there really were gaps in publicly available information. She felt that plenty of information was available but that it was not being shared: for example, no subscription agents were present, yet they sit on very valuable usage data. Someone else pointed out that in the US anti-trust legislation might prevent this sort of information sharing.