Public Domain works by Charles Darwin are being legally sold online. Is this ethical?

August 21, 2016

[Image: Darwin manuscript]

[Image: paywall]

Early on Sunday morning (21st August 2016), I spotted the following (anonymized) #icanhazpdf request tweet:-

[Image: Darwin tweet]

After spotting this, I found that the publication of this Charles Darwin paper from 1858 is indeed sitting behind a paywall:-

[Image: darwin paywall]

(Thankfully, the person who left the original #icanhazpdf request for the paper found a link to the free version).

[Image: darwin paywall1]

Here are some of the responses to my tweet:-

[Embedded tweet responses]

Thankfully, a quick online search threw up a number of open access copies of this work such as here on the Darwin Online website.

So, should works dating back to that era be out of copyright and sitting in the public domain? YMMV, it would seem.

Some tweets from Copyright Librarian Nancy Sims:-

[Embedded tweets from Nancy Sims]

So, although I was initially surprised that this public domain work is being sold by a publisher (in this case Wiley), they are within their legal rights to do this.

Earlier this year, there was an interesting thread relating to such matters after I posted this tweet:-

This is also linked to this one:-

One person however is of the view that this is NOT legal:-

Others disagree with that view:-

Let’s see if Copyright expert Charles Oppenheim will comment:-

RESPONSE

[Embedded tweet response]

Views of someone associated with a publisher:-

the_invisible_a @McDawg @HistGeekGirl it seems wrong in principle, even if it may technically be legal. Just shows how messed up laws are.

— Andrea Wiggins (@AndreaWiggins) August 21, 2016

(Jan Velterop implies that he is of the view that it’s legal to download this from Sci-Hub as the work itself is Public Domain)

 

Updated thoughts on live-streaming an event

July 19, 2016
This post is a re-assessment of a comment I left back in 2010 on the following blog post by Martin Fenner.
I shall leave the below intact from the original (other than some formatting). So, as someone who regularly continues to follow conferences/events virtually, have my thoughts changed much since 2010? Essentially, not that much really.
Firstly, I’ve live-streamed many events since then, having previously just briefly dabbled. One common misconception about live-streaming is that if you do it, no one will come to the event IRL. This came up in the discussion below with Mike & David. They have over two decades of first-hand experience in the field. Essentially, they said “the opposite is true”.
In terms of the quality of live-streaming, this can vary massively. Some of the free applications that I used years ago either no longer exist or are now cluttered with adverts (unless you pay for a premium account). Now that there are many platforms that offer broadcasting/recording in HD, the quality of live-streaming has certainly improved, generally.
Over the last two years or so, a number of mobile apps (e.g. Periscope, Meerkat & UStream) have been released, meaning that after a couple of taps, you can be live on the web. Again, the quality of these apps can vary a lot. Archiving these recordings could be made easier (although I have limited experience in this particular area).
I still firmly believe that if you’re streaming from an event, getting a secure web connection remains important.
I also still firmly believe that post event, it’s really important to archive the recordings online ASAP before interest disappears.
Another entity to complement live-streaming is live blogging.

 This is something I’ve dabbled with a couple of times. Firstly, briefly at Repository Fringe (Edinburgh) 2015 and also earlier for UKSG in 2014 e.g., here and here.  Being part of a team certainly helped given the size of this event.

One person I know who has much more experience than me (and most) in this is Nicola Osborne, Jisc MediaHub Manager / Digital Education Manager at EDINA. You can view her work here.

 All in all, live streaming/blogging is certainly here to stay and technology/software continues to revolutionize the possibilities for making events open to wide audiences online.
One caveat remains though. By not attending events IRL, you do miss out on face to face discussions/socializing/networking etc.

Interesting post Martin.

I’ll try to keep this as brief as possible. That wasn’t possible, so here goes.
From a general, subjective Conference perspective, namely focusing on the virtual attendee angle, I think one has to consider many variables such as:-

A) What subjects are you going to cover?
B) Who is your “target audience”?
C) How will you make them aware of the event?
D) Is it free or fee (#scio10 was $175 – #solo09 – £10 in person or £10 for Second Life)
E) How interactive do you wish to make it for virtual attendees?
F) How does one tackle sessions that involve unpublished data?
etc.etc.

Now, since we only appear to be able to use a max. of four URLs in the comments feed on NN at the moment, I’ll choose ’em sagaciously.
On Jan 10th, I posted this on my blog (sorry, link rot) before virtually attending last weekend’s events in North Carolina.

Based upon past experience(s), the thing that I was really looking forward to was the live-streaming/chat-room aspect of the Conference. Despite the much applauded wi-fi connection they had set up (extremely important these days and secured internet access to all present) c/o company SignalShare, it became clear fairly early on that there was -phlegm- a problem with the live-stream.

++ACTION POINT++ Must find out what went wrong so that we can learn from this for the future.

Each chunk (hourly sessions) of the event was split into five parallel sessions (Rooms A to E), and the aim was to live-stream content from all discussions in D & E. This meant that ahead of the event, virtual attendees could choose which sessions they wanted to attend. In the end, alas, over the whole weekend, only about 1.5 hours’ worth was streamed, and with very little notice.

I found this rather disappointing, I have to say, as discussed with Martin over the weekend (can a DM Twitter discussion be classed as a “personal communication”?), so it was time to follow events in other ways. I was pretty much glued to the #scio10 Twitter feed all weekend and I very much agree with AJCann’s comments above.
==
I like Martin’s Twitter suggestions!!

Richard Grant & I covered various aspects of Conference event coverage during a podcast we did with Mike Sefang & David Wallace (link rot, try here) in July 2008 (the relevant section starts at 19’30”). As a result of that discussion, and with the permission of NPG’s Timo Hannay, Richard recorded audio of the Wrap-Up Panel: Embracing change: Taking online science into the future at Science Blogging 2008: London, which was uploaded to the web within a few hours. Cool….

As discussed with a few NN staffers, even though all sessions were video recorded, due to technical issues, none of the NN files ever appeared on the web. That said, Cameron Neylon recorded and, from memory, live-streamed (and also self-archived) some of the sessions via his laptop. Also cool.

An observation from Science Blogging 2008, North Carolina (http://scienceblogs.com/clock/2008/01/science_blogging_conference_vi.php) is as follows. Similar to what Cameron did in London, that year a couple of individuals, Wayne Sutton and to a lesser extent, Deepak Singh, live-streamed events from their laptops. Within the space of a week (after that, little interest), their uploaded files had been viewed over 15,000 times, which I think was pretty impressive. Observation from Science Online London: 2009. Video footage of 7 sessions was uploaded c/o NN’s Joanna Scott 2 weeks after the event. Total views ~ 500.

My take from this data is that if you are going to attract the attention of virtual attendees using video format, it needs to be instant ideally, or delayed by a day or two at the most, before interest fades. The same, I guess, applies to audio. As to Second Life, I personally have limited experience of this platform so am unable to comment. One for Lou & Jo to discuss, as Lou has indicated earlier.

As Cameron Neylon has mentioned elsewhere on teh interwebs, as matters stand, livestreaming using a wi-fi connection is still very 50/50 in success terms. Whilst livestreaming from this year’s Science Online UK event shouldn’t be completely ruled out (we should at least try a secure [not wi-fi] web connection, IMO), all things considered, I have to say that I’m pretty much with Martin as per the last para of his post.

One final point. I really like the idea of Science Online 2010: London being a two-day event, yay!! The meatspace socialising aspect of such events is a real draw and something that you miss by non-physical presence. I can’t really add to Stephen’s comments above in this regard.

Oh wait, I still have a final link up my sleeve. Whilst I was unable to attend the pre-Science Online 2009: London party in person, I did manage to fling together the following montage. Apols for re-posting here, but I thought it was rather cool, and in doing something like this, it gives virtual attendees a flavour of the social aspect of events, which I think is not “essential” but of general interest.

Oh bums, I appear to have run out of links, so let’s see if I can post this without the “missing link”.

Oh sh*te, we can’t embed stuff from Vimeo here, so el missing linky is here: http://vimeo.com/6306956

Essentials of Glycobiology

May 25, 2016
Originally posted on my now deceased blog on Friday, 17 October 2008

Essentials of Glycobiology – New Edition is freely available from the NLM/NCBI Bookshelf

As reported today on Open Access News (OAN):-

… Essentials of Glycobiology, the largest and most authoritative text in its field, will be freely available online beginning October 15, through collaboration between the Consortium of Glycobiology Editors, Cold Spring Harbor Laboratory Press, and the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) at the National Institutes of Health (NIH). Fittingly, the release of the book follows soon after the October 14th celebration of International Open Access Day, which will highlight prior successes in providing such open access to research journals. …

Here is the book. The Foreword is a great place to start of course.

OAN Comments:

* Add this to the growing list of widely-used textbooks that are available OA.
* Who’s backing this is also noteworthy. It’s not an author who negotiated the right to self-archive a copy, or an OA startup publisher; it’s a scientific press and the National Library of Medicine.
* The first edition of the book has been OA since 2003. Apparently, the impact on sales of the print copy hasn’t been overwhelmingly negative, or it seems unlikely the publisher would support OA to the new edition.

My Comments:

Glycobiology is a most fascinating area that continues to gather interest over time.
I applaud all involved for contributing to and publishing this work to the widest audience on the planet.

Press Release: Novel publishing approach puts textbook in more hands


Glycobiology – update

May 25, 2016

Originally posted on my now deceased blog on Saturday, 12 January 2008

Now that I’m blogging more often, I wish to expand further on one aspect of Glycobiology – 21st Century Style which I posted on this blog mid December.

Pentosan Polysulphate (or PPS for short)

PPS was something that I became interested in late 2002. As a result, this led me to become interested in the field of Glycobiology and making contact with at least two dozen leading Glycobiologists from around the world.

Since social networking is all the rage these days, I am currently co-admin of two Glycobiology related online groups, one on Facebook, the other, on Nature Network.

PPS may be merely a drop in the Glycomics ocean, but it is the substance that I have the greatest knowledge and first-hand experience of.

Since my email records only go back to Feb 2006 (I had to change computers and lost a lot of older emails), I cannot recall specifically when contact was made with a Linda Curreri based in Dunedin, New Zealand but it would have been during 2003. We still keep in touch.

Linda maintained a very well-referenced and detailed website dedicated to Pentose Sugar. Linda, like myself, is a layperson, but her general research knowledge was and remains sublime. A decision was taken last year by Linda to no longer maintain the site, but I offered to come to the rescue if required.

A few months before then however, I had discovered the brilliant WayBackMachine c/o Archive.org

We then knew that even though the site was no longer being maintained, having been archived it would be (and is) preserved perfectly as it was on the web.

I am therefore, for the first time, providing a link to the archived website PENTOSE SUGAR.
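
Below is a minimal sketch (my own addition, not part of the original post) of how a snapshot like this can be located programmatically via the Internet Archive's Wayback Machine "availability" API. The pentosesugar.com URL and the December 2004 timestamp are taken from the archived link given in the comments below; the JSON field names reflect my understanding of the public endpoint and should be treated as assumptions.

# Hedged sketch: find the Wayback Machine snapshot of pentosesugar.com
# closest to December 2004 using the public availability API.
import requests

def closest_snapshot(url, timestamp):
    """Return the archived URL closest to `timestamp` (YYYYMMDD), or None."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"archived_snapshots": {"closest": {...}}}
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

if __name__ == "__main__":
    print(closest_snapshot("pentosesugar.com", "20041205"))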

Further preservation was done in the form of the book “Pentosan polysulphate: a medicine made from beech bark” by Linda Curreri in early 2007. ISBN 9780473119720 (pbk.): $12.00

—-

At the time of writing, there are 731 Papers about Pentosan archived in PubMed. Importantly, however, only 98 are readable (open access) at full article level. That’s not a lot really.

When one considers the clinical usages and research areas of PPS, I find it quite staggering that so little research has been published thus far.

Let’s take arthritis for example. PubMed throws up over 175,000 Papers. When we add Pentosan to the search mix, the number falls to 32, and down further to only three at full article level.
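
As a side note (my addition, not in the original post), counts of this kind can be reproduced against today's PubMed using NCBI's public E-utilities esearch endpoint. The sketch below is hedged: the "free full text[sb]" subset filter is only an approximation of "readable at full article level", and the numbers returned now will differ from the figures quoted above.

# Hedged sketch: reproduce the PubMed counts discussed above via E-utilities.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term):
    """Return the number of PubMed records matching the query `term`."""
    resp = requests.get(
        EUTILS,
        params={"db": "pubmed", "term": term, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

if __name__ == "__main__":
    # Pentosan overall, pentosan restricted to free full text,
    # then the same pair combined with arthritis.
    for term in (
        "pentosan",
        "pentosan AND free full text[sb]",
        "arthritis AND pentosan",
        "arthritis AND pentosan AND free full text[sb]",
    ):
        print(term, "->", pubmed_count(term))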

For a condition that afflicts millions of patients, to date pretty much only the non-human variety have been allowed to receive PPS treatment.

But wait!! Check out the ARTHOPHARM website in Australia and this page.

Go back, check, and FULLY read through the archived website PENTOSE SUGAR.

Being a layperson and patient advocate with no loyalty to anyone other than patients, no patents pending (or ever likely) and no conflicts of interest to declare, I simply wrote this blog to place information in the public domain.

I will end with a Legal Precedent from 2002 and its wide ramifications. The following text is from the Pentose Sugar website.

“Law. Pentosan Polysulphate is not and cannot be patented. This places it in the public domain. However PPS is only available under patented trade names which appear on its packaging. This is a complete nonsense, because the brand name is not pentosan polusulphate and it is the trade name which is being marketed. The brand patenting merely protects the commercial property of the manufacturer. If a pharmaceutical company does not ‘develop’ PPS for specific indications and patent their brand name(s) this unique medicine does not become available as a treatment option. In effect legislation made up by law maker’s is blocking pentosan polysulphate’s availability for sick human’s. PPS is however readily available for scientific research.

In the London High Court in December 2002, Dame Elizabeth Butler-Sloss ruled in favour of two teenagers with advanced vCJD to have pentosan polysulphate to treat this prion disease. The case had absolutely nothing to do with any patented brand name; it was about the right of two dying people to have a medicine that could save their lives. The medicine was the unpatented pentosan polysulphate. Dame Elizabeth’s decision set a precedent and it would seem that the legal constraints which surround this pentose medicine, though firmly in place, are flimsy and could in fact be nul and void.

It could also be argued that due to the unique status of pentose sugar in human physiology that it is an absolute right of humans to source and assimilate pentose sugar whether it is a food sweetener or a medicine that has been compounded by a chemist, and neither medical specialists, government officials, man made laws or scientists and pharmaceutical companies ( both of which appear to claim the unpatented generic medicine PPS) have any right to prevent them. Whether by collusion or not, this is indeed happening….globally.

Scientists will always continue research even though principles have been proven, it is what they do, but this should not be used as an excuse to prevent pentosan polysulphate’s use in humans, because science has already proven the safety and worth of this medicine to the human organism.”


5 comments:

Francis said…

Is the archived version of the pentosesugar website no longer available?
Also, I tried looking out for the book itself, but the only website reference for it doesn’t have any copies. Any working links would be appreciated. I dont mind paying directly to the author herself

Thank you,
Francis

McDawg said…

@ Francis. This link should work for the archived pentosesugar website:- http://web.archive.org/web/20041205094941/www.pentosesugar.com/toc.html

Shall email you the authors contact details.

Francis said…

Hey McDawg,

The webarchive is perfect. I haven’t received the author’s details though.
PS. I would like to know more about your expertise on Pentosan.

Thanks,
Francis

McDawg said…

@Francis.

If you would kindly drop me an email (steelgraham AT hotmail DOT com), I’ll see if I can assist you.

Kind regards,

Graham

McDawg said…

@ Francis. I tried to email you the information you were looking for but there is no email address in your profile. Overnight, I received the following message from Linda Curreri which she asked me to post on my blog.

HELLO FRANCIS,

TO PURCHASE A COPY OF THE BOOK PENTOSE SUGAR, A MEDICINE MADE FROM BEECH BARK, PLEASE EMAIL ME AT

lcurreri@xtra.co.nz

KIND REGARDS,

LINDA CURRERI [AUTHOR]

Glycobiology – 21st Century Style

May 25, 2016

Originally posted on my now deceased blog on Monday, 10 December 2007

[Image: Glycobiology 21st Century Style]

I’ve been thinking about an ‘appropriate’ image for this for quite some time, so here we go. All that I added was ’21st Century Style’.

SOURCE (C)

What follows are some thoughts I had last year.

If accepted for publication, my co-authors and I will deliver something of substance in the New Year.

Complex sugar chains and glycosaminoglycan (GAG) side chains make up an integral part of our mind and body.

It was Albrecht Kossel who was awarded a Nobel back in 1910, as the first to recognize that nucleic acids contained a carbohydrate. Due to its five-carbon molecular structure, he called it pentose, “the stuff of genes”.

During the first half of the last century, the chemical and biological structures of carbohydrates were very much a point of focus. Whilst this work was to become an integral part of modern-day molecular biology, at the time carbohydrates were not at the forefront, unlike other major classes of molecules. Largely, this was due to their (very) complex structures, difficulty in understanding their sequence(s), and the fact that their biosynthesis could not be directly predicted from the DNA template.

53 years ago, Nature published a scientific Paper by Maurice Wilkins and his two colleagues at King’s College, London, called “Molecular Structure of Deoxypentose Nucleic Acids”: Wilkins, M.H.F., Stokes, A.R. & Wilson, H.R. Nature 171, 738–740 (1953).

Something called heparin was “discovered” by a second-year student at Johns Hopkins University in 1916. By the 1930s, heparin came into use, mainly as an anti-coagulant. Essentially, this was made using animal ‘by-products’ such as pig, dog and, later, bovine gut material. By the early 1940s, “purified” heparin was available for clinical and experimental use.

Post WW2, Germany was unable to import heparin and there was also a shortage of many basic resources such as sugar. A novel method of deriving synthesized heparin-type substances led to the development of sulphated pentose sugar, made essentially from the bark of beechwood trees. The most commonly used term these days for this particular (polyanion) substance is Pentosan Polysulphate, or PPS. Its most common broad (oral) usage commenced in the 1960s and continues in many countries (namely the USA and mainland EU) in relation to the management of the common bladder complaint, interstitial cystitis (IC). SP54 (the purest form of PPS known) continues to be manufactured by a small family-run German company, Bene. Here is a page from the Bene website that lists its currently known broad usages.

Around the same time, Germany, Japan and the (former) Soviet Union also focused on Xylitol, which is a five-carbon sugar alcohol, a natural carbohydrate that occurs freely in certain plant parts (for example, in fruits, and also in products made of them) and in the metabolism of humans. Xylitol has been known to organic chemistry at least from the 1890s.

Where there is a deficiency or excess of proteoglycans (PGs), this can play a lead role in the pathogenesis of a substantial range of common and rare conditions, ranging from arthritis, diabetes, cancer and HIV/AIDS through to protein-folding neurodegenerative conditions such as Alzheimer’s disease (AD).

Semi-synthesized HSPGs (Heparan Sulphate Proteoglycans) have, over the last few years in particular, come to be referred to as Glycans. There are 14 Glycans in the family, including Glypicans.

In 2005, Nature Reviews Cancer published a seminal Paper by Professors Fuster and Esko (5) entitled “The Sweet and Sour of Cancer: Glycans as Novel Therapeutic Targets”, which reported on several significant developments.

In this Paper, Fuster and Esko demonstrated the potential use of Glycans in the treatment of many types of cancer/tumor and concluded the Paper with a “(this) might represent the ‘tip of an iceberg’ of therapeutic potential that awaits future discovery” type ending.

Have matters progressed since then?

Glycans have now been brought into real time Clinical usage, namely as surrogate markers in the treatment of a number of cancers.

How far are we from human trials of Glycan use for the likes of HIV, Cancers and Alzheimer’s disease?

The book ‘Essentials of Glycobiology’ is available online and can be accessed (open access) via this link; the book is currently being revised, with an updated version released this year.

The journal Glycobiology published its first issue in September 1990.

Whilst there has been an increasingly wide focus of attention on the Glycobiology field over the last few years in particular, some of the core principles at a cellular level stretch back over a century.

In Australia and, to a limited extent, New Zealand in particular, Glycans continue to be used safely and successfully in the treatment of arthritic joints (osteoarthritis) in both animals and man (1).

The latest reported commentary (2) from the most recent Global Conference on AD in Madrid is highly suggestive that diabetes, whilst certainly not the cause, has a degree of interlinkage with a number of neurodegenerative diseases.

With regard to Alzheimer’s (and amyloidosis generally), despite a substantial number of peer-reviewed published Papers showing Glycan promise in vitro and in vivo, there has been little or no interest from the pharma industry.

To sense the ‘sweet flavour’ of the future of Glycobiology in the 21st Century, the word Glycomics comes to mind (3,4).

Are such generic-based approaches deemed potentially large threats to large pharma?

(1) PMID: 12014849. http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=pubmed&cmd=Retrieve&dopt=AbstractPlus&list_uids=12014849&itool=iconabstr&query_hl=5&itool=pubmed_docsum
(2) http://www.wilmingtonstar.com/apps/pbcs.dll/article?AID=/20060722/NEWS/607220330/-1/State
(3) http://www.functionalglycomics.org/static/consortium/main.shtml
(4) http://glycomics.scripps.edu/pub/NatMethodsEditorial2005.pdf
(5) PMID: 16069816 (PubMed – indexed for MEDLINE). http://www.nature.com/nrc/journal/v5/n7/abs/nrc1649.html


Open Science Enthusiast

April 22, 2016

@KLA2010 Just noted “Open science enthusiast” on your profile. Have used that term for myself a few times.

— ⓪ Grⓐhⓐm Steel (@McDawg) April 22, 2016


140 is too short, so …… I was going to use ‘TweetLonger’ but decided not to do so in the end.

As to who first came up with “Open Science Enthusiast”, we’ll never know and frankly, who cares…. To me, in short, it means “Citizen Scientist”.

I was present at Scotland’s 1st (possibly 2nd) Open Knowledge event, in Edinburgh, back in 2012. [2]


At one point, those present were asked to describe themselves in just three words. Off the top of my head, I went for ‘Open Science Enthusiast’. I was the only one to do it in three words, so that was my starting point. Since then, I’ve used it elsewhere, even including peer-reviewed papers such as:-

Buckland, A. et al., (2013). On the Mark? Responses to a Sting. Journal of Librarianship and Scholarly Communication. 2(1), p.eP1116. DOI: http://doi.org/10.7710/2162-3309.1116

The term has been used elsewhere, e.g. here by Dr Marcus Hanwell @mhanwell


I predict that high on the list of many open science enthusiast’s new year’s resolutions will be the education of both established and future researchers on the importance of openness, licensing, sharing, and reproducibility.

Best of Opensource.com: Science  December 25th 2015

Marcus D. Hanwell | Marcus leads the Open Chemistry project, developing open source tools for chemistry, bioinformatics, and materials science research. He completed an experimental PhD in Physics at the University of Sheffield, a Google Summer of Code developing Avogadro and Kalzium, and a postdoctoral fellowship combining experimental and computational chemistry at the University of Pittsburgh before moving to Kitware.

Also relevant is this lovely, poignant quote from Dr Jennifer Molloy [1]

[1] [Image: quote from Dr Jennifer Molloy]

SOURCE

[2]

FULL REPORT of my experience in Edinburgh that day

Interview: “Open Views” featuring McDawg aka Steck

March 16, 2016

Sunday, 13 July 2008 (Originally posted on my now deceased blog)

[Image: sundar steck]

BACKSTORY

On 12th October 2007, Prof Peter Suber and I were interviewed on the same afternoon, back to back, by Sundar Raman as part of the ongoing series called “Open Views”.

Peter’s can be found here.

Having previously released a snippet and patiently waited for a landing space over at KRUU.fm, I decided to edit and self-archive a copy of my own, now here.

Steel, Graham (2014): My first interview about open access from ~2007. figshare.

https://dx.doi.org/10.6084/m9.figshare.1053210.v2

Retrieved 23:01, Mar 16, 2016 (GMT)

Intended Intro music

White Lies by *Catch* by Steck
Genre: Pop (mainstream)

Intended Outro music

Wake Up Now – remastered by Tobin Mueller
Genre: Pop (mainstream)

The above image is a mash-up of this image from KRUU.fm.

[Image: CC BY licence]

The interview was a joint creation of McDawg and host from KRUU.fm.
Labels: graham steel, KRUU.fm, open access, open views, peter suber, sundar raman

“Wallets with a Serious Case of Stockholm Syndrome”: Sci-Hub and the Future of Scholarly Communication

February 29, 2016
[Image: Sci-Hub logo]

Originally posted here by Marcus Banks. Re-blogged verbatim with permission.

Following Aaron Swartz’s tragic suicide in 2013, there was a brief flurry of attempts to honor his legacy by increasing public access to research articles. Swartz had successfully accessed millions of articles from MIT’s licensed JSTOR database, in a way that drew the ire of JSTOR (which eventually dropped charges), MIT (which arrested Swartz), and the federal government (which alleged numerous violations of the Computer Fraud and Abuse Act).

People argued that the way to remember Swartz was to provide immediate, complete, non-embargoed access to research articles. Not reports to grant funders about progress along the way, not mere summaries of the results — but the actual papers from actual journals, complete with their DOIs and page numbers.

Indeed, in 2013 — well after the Internet had transitioned from a novel technology into an essential part of everyday life — we were still debating about how to maximize access to the fruits of a publication process that dates from the 1600’s. Activists claim that all of the scholarly literature should be free, publishers claim they add significant value to this literature that is worthy of compensation.

We are still having this debate in 2016, and if trends continue we will keep doing so for decades more. The great unleashing of the literature called for after Swartz’s death has not come to pass. There is too much money to be made in the current scholarly publication system — in which the only way to have immediate access to papers is to be affiliated with an institution rich enough to afford this, or to live in a poor enough nation that it is not an attractive market for publishers anyway.

Legally, the current system rests on a transfer of copyright from the authors of papers to publishers — with that transfer complete, the publishers then bundle articles into journals and license them back to libraries. These licensing terms carry costs that greatly exceed the rate of inflation, which is by now a very well-documented phenomenon. This is because journals are “inelastic” and “non-substitutable”; there is less ability to shop around on the basis of content, as each journal fills a unique niche. Meanwhile librarians feel duty bound to subscribe to all the leading titles in a field, leading inexorably to monopolistic pricing.

That pricing does not affect researchers, who are the consumers of scholarly work, because they do not pay it. The upshot is that the only balance sheet negatively impacted is that for the library. Hence we find that librarians, in the immortal words of John Dupuis, feel like “wallets with a serious case of Stockholm Syndrome.”

Open access journals, which are available without subscription or licensing barriers, most certainly improve access compared to subscription journals. But they are not necessarily any cheaper for libraries, especially those that foot the bill for the author processing charges (APCs) that sustain open access journals. As T. Scott Plutchak has often observed, access and affordability are two separate issues.

Everything I’ve written so far should be very familiar to observers of the scholarly communication scene, perhaps mind-numbingly so. The uneven balance of power between librarians and researchers, and ergo between librarians and publishers, are long-established sources of resentment in libraryland.

Enter Sci-Hub, a radical disruption with perhaps enough power to compel solutions to this intractable impasse.

What is Sci-Hub? A repository of academic papers that are supposed to be behind pay walls. To date Sci-Hub has collected more than 47 million academic research papers. It does so through bypassing the many access control mechanisms meant to restrict this content to authorized users. (Whether this comes via “donations” of institutional log-in credentials or phishing scams is unclear.) This effort necessarily involves infringing on copyright, but Sci-Hub founder Alexandra Elbakyan argues that she observes a higher law by making these papers available to all interested readers.

In a sense Sci-Hub’s approach is a refinement and improvement of the process Aaron Swartz utilized with JSTOR. As Graham Steel notes, Sci-Hub’s approach is much more effective at file sharing than the once upon a time cutting edge #ICanHazPDF.

Publishers are outraged. Elsevier successfully sued Sci-Hub in US court last year, seeking the site’s demise. After a brief pause last year (prior to the lawsuit’s conclusion), as of today Sci-Hub continues unabated. Elbakyan is from Kazakhstan, and the site’s servers are not in the United States. It also relies on sophisticated programming that bounces between servers around the globe. For all these reasons it would be very difficult to halt Sci-Hub on a permanent basis. Even if Sci-Hub itself did cease operations, another similar site could easily emerge in its place.

The genie is out of the bottle. The writing is on the wall. [Insert similar metaphor here]. If nothing else, Sci-Hub proves that the days of making money from regulating access to PDFs of journal articles are over.

Or does it? As observers of this controversy have noted, academic libraries are not going to cancel their journal licenses thanks to the newfound availability of articles on Sci-Hub. Those licensed packages are the lifeblood of Sci-Hub — which penetrates ostensibly secure university networks in order to fetch and cache articles — in any case. And of course an institutional actor such as a library would not make decisions based on a third party’s practices that infringe on copyright.

For these reasons Angela Cochran, Director of Journals at the American Society of Civil Engineers, is seeking common cause with librarians. In a much-discussed post on the Scholarly Kitchen, Cochran lays out the case against Sci-Hub and expresses her dismay that librarians and open access advocates have not spoken out against Sci-Hub’s “piracy.” Cochran is right that the methods used by Sci-Hub could put many other institutional computer systems at risk, which is why librarians and others should be concerned.

But Cochran is not familiar with that feeling of librarian Stockholm Syndrome that John DuPuis so aptly described. I’ve long raged against having to think about and deploy access control mechanisms within the libraries where I have worked. I became a librarian in order to maximize access to information, not to meter it out stingily. But dem’s the breaks baby cakes. Part of being an academic librarian today involves providing uncompensated copyright enforcement for publishing interests, in order to reinforce values you do not even believe in.

Hence Cochran’s disillusionment. I suspect many academic librarians and open access advocates support Sci-Hub’s ends if not its means. (Perhaps I am wrong on the library front; this ultimately depends on whether a librarian perceives themselves as a “soldier or revolutionary” in Rick Anderson’s formulation). If Cochran wishes to find common ground with the greatest number of librarians in the wake of Sci-Hub, I suggest seeking this in discussions of building a future for scholarly communication that serves the interests of publishers and librarians alike. Pointing a finger at Sci-Hub in outrage will not do the trick.

There is pathos in all this. Sci-Hub’s posting of PDFs would be a trivial event if PDFs were not where the action still is for scholarly communication. In a Web-centric world PDFs should be yesterday’s news as a means of sharing knowledge.

This is why it’s high past time for publishers and librarians to work together to move beyond the PDF, a topic I will explore more fully in a future post. Sci-Hub’s ultimate service, I hope, will be to speed this conversation along.

Marcus Banks is a health sciences library director.

 

 

Misleading open access myths

February 21, 2016

This information was originally posted here (under CC-BY) on the BioMed Central website but is no longer live. As such, I am re-posting it to the web c/o the Wayback Machine.

See also Peter Suber’s “A field guide to misunderstandings about open access”

Also this from Suber in the Guardian, 2013.

There are many misconceptions and arguments against open access. Below is BioMed Central’s response to the most common myths highlighted in the UK’s Select Committee on Science & Technology 2003-2004 inquiry into scientific publishing and open access.

Below, BioMed Central responds to some of the most prevalent and most misleading anti-open access arguments.

The cost of providing open access will reduce the availability of funding for research

Access to research is not a problem – virtually all UK researchers have the access they require

The public can get any article they want from the public library via interlibrary loan

Patients would be confused if they were to have free access to the peer-reviewed medical literature on the web

It is not fair that industry will benefit from open access

Open access threatens scientific integrity due to a conflict of interest resulting from charging authors

Poor countries already have free access to the biomedical literature

Traditionally published content is more accessible than open access content as it is available in printed form

A high quality journal such as Nature would need to charge authors £10,000-£30,000 in order to move to an open access model

Publishers need to make huge profits in order to fund innovation

Publishers need to take copyright to protect the integrity of scientific articles

Myth 1 The cost of providing open access will reduce the availability of funding for research

There is also the question of the impact on the funding of research by charities, particularly those without the considerable resources of the Wellcome Trust. The Royal Society for example, runs number of funding schemes for scientists. Perhaps the best known is the University Research Fellowships, most of which are funded by our Parliamentary Grant in Aid (PGA). Our 300 University Research Fellows publish on average about four papers per year. Based on an estimate of $3,000 fee per article (which we believe is realistic if the current high standards in publishing are to be maintained) an extra $3.6M or £1.96M per year would need to be found to fund our URFs alone. In the absence of an increase to our PGA we would be forced with the choice of reducing the amount of research money funding allocated to our URFs, reducing in the total number of URFs that we could support or diverting funds from our other activities to compensate.

Written submission to inquiry, February 2004, Royal Society

Response

At a macro-economic level, there is evidence that a switch to open access publishing would not negatively impact research funding.

The cost of the present system of biomedical research publishing, with all its inefficiencies and overly generous profit margins, still only amounts to about 1-2% of the overall funding for biomedical research (estimate from the Wellcome Trust, cited by the Public Library of Science in their submission to the House of Commons inquiry). There is no reason why the cost of open access publishing should exceed the cost of the current system, since the fundamental process is the same. In fact, open access publishers are leading the way in using web technology to reduce costs further, so that the cost of open access publishing to the scientific community becomes significantly less than that of the system currently in place.

Additionally, the increased availability of research that is delivered by open access has been shown to greatly increase the effectiveness of the research money that is spent, allowing further research to be built on what has been done previously. This ensures funders can see the results of their grants.

At the micro-economic level, there will certainly be transitions that need to be carefully managed as the open access publishing model grows in economic significance. For example, since the total cost of publishing scientific articles is roughly proportional to the amount of research to be published, it may well make sense for the costs of publishing to be incorporated into research funding grants, rather than being covered by library budgets. These are important issues, which deserve attention. But these transitional challenges should not be allowed to obscure the overall picture, which is that with the open access publishing model the scientific community will pay significantly less, yet receive vastly more (in terms of access and usability).

Update

On 29th April 2004 the Wellcome Trust published a report on the economic implications of open access publishing. The report (Costs and Business Models in Scientific Research Publishing) indicates that open access publishing could offer savings of up to 30%, compared to traditional publishing models, whilst also vastly increasing the accessibility of research.

Myth 2 Access to research is not a problem – virtually all UK researchers have the access they require

All of us are committed to increasing accessibility of scientific content. I would argue that in the last ten years we have made a huge contribution to that, and I think 90 per cent worldwide of scientists and 97 per cent in the UK are exceptionally good numbers.

Oral evidence to inquiry, March 1st 2004, Crispin Davis (CEO, Reed Elsevier)

Response

Elsevier’s figure of 97% of researchers in the UK having access to Elsevier content is misleading. As explained in the small print of their written submission, this refers to researchers at UK Higher Education institutions only, many of which have indeed taken out ScienceDirect subscriptions as a part of JISC’s “big deal” agreement.

However, these researchers do not have access to all ScienceDirect content by any means – the subset of journals that is accessible varies widely from institution to institution, meaning that access barriers are frequently a problem, even for researchers.

The access situation at institutions which focus primarily on teaching rather than research is particularly bad, but Elsevier disguises this by weighting each institution according to the number of ‘researchers’ employed, to come up with the 97% figure.

More fundamentally, the Higher Education sector is only one of several sectors carrying out biomedical research in the UK. Much medical research in the UK goes on within the NHS. Lack of online access to subscription-only research content within the NHS is a major problem. Similarly, Elsevier’s figures conveniently omit researchers employed at institutes funded by charities such as the Wellcome Trust and Cancer Research UK, and in industry.

Myth 3 The public can get any article they want from the public library via interlibrary loan

I think the mechanisms are in place for anybody in this room to go into their public library, and for nothing, through inter-library loan, get access to any article they want.

Oral evidence to inquiry, March 1st 2004, John Jarvis (Managing Director, Wiley Europe)

Incidentally, any member of the public can access any of our content by going into a public library and asking for it. There will be a time gap but they can do that.

Oral evidence to Inquiry, March 1st 2004, Crispin Davis (CEO, Reed Elsevier)

Response

To say that being able to go to the library and request an interlibrary loan is a substitute for having open access to research articles online is rather like saying that carrier pigeon is a substitute for the Internet. Yes – both can convey information, but attempting to watch a live video stream with data delivered by carrier pigeon would be a frustrating business.

Practically, the obstacles to obtaining an article via the interlibrary loan route are so huge that all but the most determined members of the public are put off. For those who persist, after a time lag that will typically be several weeks, their article may (if they are lucky) finally arrive in the form of a photocopy. What the user can do with that photocopy is extremely restricted compared to what they can do with an open access article.

  • With an online open access online article, you can cut and paste information from the article into an email. With a photocopy you cannot.
  • With an open access online article, the license agreement explicitly allows you to print out as many copies as you like and distribute them as you see fit. But if you copy and distribute the article you received by Interlibrary Loan without seeking appropriate permission from the publisher, you may well be in violation of copyright law.

It is also worth noting that an increasing fraction of public libraries now offer free or low-cost Internet access, making it even more convenient for the public to view open access research.

Myth 4 Patients would be confused if they were to have free access to the peer-reviewed medical literature on the web

Without being pejorative or elitist, I think that is an issue that we should think about very, very carefully, because there are very few members of the public, and very few people in this room, who would want to read some of this scientific information, and in fact draw wrong conclusions from it […] Speak to people in the medical profession, and they will say the last thing they want are people who may have illnesses reading this information, marching into surgeries and asking things. We need to be careful with this very, very high-level information.

Oral evidence to inquiry, March 1st 2004, John Jarvis (Managing Director, Wiley Europe)

Response

This position is extremely elitist. It also defies logic. There is already a vast amount of material on medical topics available on the Internet, much of which is junk. Can it really be beneficial for society as a whole that patients should have access to all the dubious medical information on the web, but should be denied access to the scientifically sound, peer-reviewed research articles?

In some cases, to be sure, comprehending a medical research study can be a demanding task, requiring additional background reading. But patients suffering from diseases are understandably motivated to put in the effort to learn more about their conditions, as the success of patient advocacy groups in the USA has shown. Patients absolutely should have the right to see the results of the medical research that their taxes have paid for.

Myth 5 It is not fair that industry will benefit from open access

The major industry readers of information, like the pharmaceutical industry, would be in a much better position (with the open access model) since they do not produce very much in terms of new research articles. Of course, they purchase a lot for their industry. So companies that do not produce very much material but read a lot – I will not mention (companies), but this would be wonderful news for them. It would be wonderful news for the chemical industry and for the pharmaceutical industry, and bad news for major research institutes like Oxford and Cambridge, Harvard and Yale, and for countries like Britain.

Oral evidence to inquiry, March 1st 2004, John Jarvis (Managing Director, Wiley Europe)

Response

It is peculiar to hear large commercial publishers saying that open access would be a very good thing for the pharmaceutical and other industries, and then claiming that this is a problem with the open access model. The chemical, biotech and pharmaceutical industries play a major role in the UK economy, and so this argues strongly for open access.

To say that they do not contribute significantly in terms of publishing research is inaccurate. Industry publishes a significant amount of research itself, and also funds much research within the academic community that then goes on to be published.

It is certainly possible that under an open access model, institutions (and countries) that publish a lot of research would pay a somewhat higher proportion of the cost of publishing than they do currently. Since it is the process of publishing the research that incurs the lion’s share of the costs (with Internet distribution being very cheap in comparison), this is the most logical, sustainable way to fund the publication process. In contrast, the current situation, in which small universities effectively subsidize the cost of publishing the research carried out at relatively wealthy research centers, is far more inequitable and unsustainable.

But in any case, the absolute amount of money expended by the research institutions will fall, due to the far greater efficiency of open access publishing. Furthermore, research institutions that support open access will benefit greatly in terms of kudos and influence, due to the greater accessibility and visibility of their research. These institutions would therefore be cutting off their nose to spite their face to oppose open access on the grounds given above.

Myth 6 Open access threatens scientific integrity due to a conflict of interest resulting from charging authors

The second question that increasingly is being asked is the inherent or potential conflict of interest if a publisher is receiving money from the author to publish that article. There is an inherent conflict there in terms of quality, objectivity, refereeing and so on. One of the real strengths of today’s model is that there is no conflict there. We reject well over 50 per cent of all articles submitted. Other journals do that or even higher. If you are receiving potential payment for every article submitted there is an inherent conflict of interest that could threaten the quality of the peer review system and so on.

Oral evidence to Inquiry, March 1st 2004, Crispin Davis (CEO, Reed Elsevier)

Response

This canard has been thoroughly debunked elsewhere. The assertion being made is, essentially, that open access publishers have an incentive to publish dubious material in order to increase their revenue from Article Processing Charges. This is a very peculiar accusation for a traditional publisher to make, given that in the same evidence session, Elsevier’s hefty annual subscription price increases were justified as follows:

On pricing, we have put our prices up over the last five years by between 6.2 per cent and 7.5 per cent a year, so between six and seven and a half per cent has been the average price increase. During that period the number of new research articles we have published each year has increased by an average of three to five per cent a year. […] Against those kinds of increases we think that the price rises of six to seven and a half per cent are justified.”

Oral evidence to Inquiry, March 1st 2004, Crispin Davis (CEO, Reed Elsevier)

i.e. Elsevier’s primary justification for increasing their subscription charges (and profits) is that each year they are publishing more articles. In which case, if their own argument is to be believed, they face exactly the same conflict of interest as open access publishers.

Fortunately, however, no such conflict of interest exists, for either Open Access or traditional publishers. Any scientific journal’s success depends on authors choosing to submit their research to it for publication. Authors publish research in order for the value of their findings to be recognized. The kudos granted by a solid publication record is crucial for scientific career progression. Authors submit their research to journals with a reputation for publishing good science. If a journal had a reputation for publishing poor science, it would not receive submissions. Thus the system is inherently self-correcting.

It should also be noted that many leading journals (both commercial and not-for-profit) already have page charges and colour figure charges for authors, in order to defray expenses and to keep subscription costs down. Just two examples (of many hundreds) are the Proceedings of the National Academy of Sciences (USA), and Genes & Development. So author charges are hardly an unprecedented experiment.

It is true that commercial publishers have tended in some cases to remove author charges, and to commensurately increase subscription fees, since this suits their commercial interests in maximizing profits. But it is clear that author charges pose no fundamental problem to effective peer review.

Myth 7 Poor countries already have free access to the biomedical literature

…what has happened is that the publishing industry has effectively, with the support of the societies it publishes for, given free access to poorer countries. There are various schemes, which you will see in the submissions – HINARI, AGORA for example, which deliver journals without charge to poorer countries; and that scheme is being enhanced and is lifting up to another level of slightly better-off countries.

Oral evidence to Inquiry, March 1st 2004, Bob Campbell (President, Blackwell Publishing)

Response


HINARI and its sister initiative, AGORA, are commendable initiatives and are undoubtedly warmly welcomed by researchers working in the eligible countries.

Via these schemes, publishers give some of the poorest countries free access to some of their journals. In HINARI, twenty-eight publishers participate, making a total of more than 2000 journals available for free to some of the poorest countries (defined as having a per capita annual income of less than $1000); and at a deep discount for some slightly less disadvantaged countries (per capita annual income between $1000 and $3000).

Unfortunately, these schemes offer only a partial solution to the access problems of the developing world. The list of eligible countries has many notable omissions, even though the omitted countries have per capita annual incomes of $735 or less and are therefore “low-income” countries according to World Bank criteria. Countries such as Brazil and China (which are “lower-middle income” according to the World Bank) are also excluded from the eligibility list, even for discounts.

There is an obvious explanation for these omissions. These larger countries have significant research programs, so publishers can generate substantial income by selling subscriptions to them. It appears that traditional publishers will only offer open access to the developing world when they can be sure it won’t affect their profits.

It is therefore clear that researchers in developing countries have a huge amount to gain from greatly expanded access to the global scientific literature that open access publishing will offer.

Certainly, there are challenges that need to be faced to ensure that authors in developing countries can publish in open access journals, but these challenges are by no means insurmountable. Indeed, many low-income countries have already started their own open access journals. Meanwhile, BioMed Central currently offers a full waiver of the article processing charge to authors in low and low-middle income countries. Long term, the scientific community will certainly find ways to ensure that scientists in developing countries get the full benefit of open access, both as readers and as authors.

Myth 8 Traditionally published content is more accessible than open access content as it is available in printed form

We make our articles available both in print and on line. In fact, open access would today have the result of reducing accessibility to scientific research because it is only available on the Internet. In this country that would exclude some 20-25 per cent of scientists; globally it would exclude over 50 per cent of scientists. In actual fact, the business model we have today gives the widest possible access.

Oral evidence to Inquiry, March 1st 2004, Bob Campbell (President, Blackwell Publishing)

Print is used by many scientists around the world and by global citizens who are the beneficiaries of scientific and medical research. To rely on the Internet alone for distribution, as most open access journals do, risks reducing levels of access among these beneficiaries. 11% of the world’s population uses the Internet and only 64% of UK citizens have ever been online.

Written submission to Inquiry, February 2004, Elsevier

Response


This claim should perhaps win a prize for audacity. To be clear: it is not just slightly wrong; it is preposterously wrong.

Firstly, sending out printed copies of journals to subscribers who pay for them is in no way in conflict with the goals of open access. Many open access journals (such as PLoS Biology, Journal of Biology and Genome Biology) have print editions. Wherever there is demand for print (from libraries or from individuals), print editions are available to those who wish to pay to receive them, just as with a traditional journal.

But, far more importantly, by Elsevier’s own estimate some 30 million people in the UK (and more than half a billion people worldwide) use the Internet. The wonderful thing about open access is that any one of those hundreds of millions of people can print out copies of any open access article and distribute them to whomever they want. If you want to get hold of an open access article, there are literally hundreds of millions of potential sources. We already see the power of this mechanism in action. In the poorest countries in Africa, those scientists who are lucky enough to have access to the Internet are downloading open access articles from BioMed Central’s journals (e.g. Malaria Journal), printing them out in large numbers, and distributing them to their colleagues in areas the Internet does not yet reach. They confirm to us that this makes the research vastly more accessible than research published in traditional print-only journals.

In contrast, many traditional journals are received in print by only a few hundred libraries worldwide. Not only that, the libraries that hold these print copies are bound by strict rules governing what is and is not permissible in terms of copying and redistribution. To argue that these few hundred printed copies provide greater access to research than making articles openly accessible online is, frankly, ludicrous.

Myth 9 A high quality journal such as Nature would need to charge authors £10,000-£30,000 in order to move to an open access model

Under an author pays model, we estimate the actual cost per paper published would be in the region of £10-£30,000 depending on the impact of lost advertising.

Letter to Inquiry, January 13th 2004, Richard Charkin (CEO, Macmillan)

There are many answers because there are many journals for many disciplines, and the impact will be different depending upon which discipline or which journal you are talking about. In our letter to you, speaking on behalf of Nature Publishing Group, in the case of Nature itself, the British international journal, in order to replace our revenues you would have to charge the author somewhere between £10,000 and £30,000 because the costs of editorial design and support are so high.

Oral evidence to Inquiry, March 1st 2004, Richard Charkin (CEO, Macmillan)

Response

Although subsequent media reports failed to mention it, the quotes above make clear that this figure is only claimed to apply to Nature – an extremely special case among the tens of thousands of life science journals. Elsevier’s evidence confirmed that, even with the inefficiencies of publishers’ current systems, the cost per article for a typical journal is far lower:

The cost to publish an article […] ranges from between $3,000 to $10,000 per article […] I would agree with those numbers.

Oral evidence to Inquiry, March 1st 2004, Crispin Davis (CEO, Reed Elsevier)

For Blackwell? […] it worked out at £1,250 per article. That was the cost of the total system.

Oral evidence to Inquiry, March 1st 2004, Robert Campbell (President, Blackwell Publishing)

But even for Nature, the figure of £10,000-£30,000 is wildly off the mark. The calculation used by Macmillan was as follows:

Very crudely, £30 million of sales: we get income of £30 million and we publish 1,000 papers a year. That is your [£30,000].

Oral evidence to Inquiry, March 1st 2004, Richard Charkin (CEO, Macmillan)

£30,000 is indeed a lot of money. But Nature clearly spends nothing like that on each research article that it publishes.

There are several major problems with the calculation that was used:

    1. A significant fraction of Nature’s £30m revenue is spent to commission and produce the non-research-article content of the journal (e.g. News & Views articles, book reviews, commentaries, editorials etc.). This non-research content would continue to drive healthy print and online subscription revenue, even if the research articles were made freely accessible online. Since the non-research content (the front-matter) is far more widely read than the research articles themselves, it is far from clear whether making the research articles open access would have any negative impact on subscription revenue. In fact, the opposite can be argued.
    2. Similarly, there is no reason to believe that Nature’s impressive advertising revenue would suffer dramatically as a result of open access, yet it is assumed to fall to zero in Nature’s calculation.
    3. Part of the argument used to justify the high cost per published article is that Nature rejects more than 90% of papers submitted, and so has to review more than 10 papers for every one it publishes, and has to bear the entire cost of this.

[Nature] publishes fewer than 10% of the research articles submitted. Economics dictates that high quality journals like Nature have a high unit cost per paper published, because for every article published more than ten have been reviewed and de-selected.

Letter to Inquiry, January 13th 2004, Richard Charkin (CEO, Macmillan)

This would indeed be expensive, and it is true that the repeated peer-reviewing of rejected papers as they trickle down the journal pyramid is one of the worst inefficiencies of the present system. In fact, however, Nature is not that profligate and has already taken steps to address this issue:

If a paper cannot be accepted by Nature, the authors are welcome to resubmit to Nature Cell Biology. Nature will then release referees’ comments to the editors of Nature Cell Biology with the permission of the authors, allowing a rapid editorial decision. In cases where the work was felt to be of high quality, papers can sometimes be accepted without further review

From the HoC website

Thus, if a paper is scientifically sound, but is not exceptional or fashionable enough to appear in Nature, it may well be submitted to, and accepted by, one of the next-tier journals in the Nature stable (Nature Cell Biology, Nature Medicine, Nature Genetics etc.) without requiring significant additional editorial work or costs. This is a very sensible system, and one that is already in use at BioMed Central. If an article is rejected for publication in BioMed Central’s top-tier journal, Journal of Biology, but is judged by the reviewers and editors to be scientifically sound, the authors may be offered publication in one of our more specialist journals. Public Library of Science plans to operate a similar mechanism as it launches new journals.

This trickle-down approach benefits authors by avoiding the delays caused by repeated rounds of peer-review, and benefits science as a whole by reducing the cost of the publication process while maintaining quality.

Taken together, the above factors make it clear that the actual figure that would be necessary as an author charge for Nature would most likely be vastly lower than the suggested figure of £10,000-£30,000. It is even possible that Nature could operate at a profit while offering open access to research content and making no author charge whatsoever.
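To see how sensitive the £30,000 figure is to the assumptions behind it, here is a minimal sketch of the arithmetic in Python. Only the £30m revenue and the 1,000 research papers per year come from the quoted evidence; the retention fractions are hypothetical placeholders chosen purely for illustration, not estimates of Nature’s actual finances.

```python
# Hedged illustration: why "total revenue / papers published" overstates the
# author charge Nature would need under open access. Only the revenue and
# article count come from the Inquiry evidence; everything else is a
# hypothetical placeholder.

revenue = 30_000_000        # quoted annual revenue (GBP)
research_articles = 1_000   # quoted research articles per year

naive_charge = revenue / research_articles
print(f"Naive figure: £{naive_charge:,.0f} per article")        # £30,000

# Hypothetical adjustments corresponding to the problems listed above:
# much of the revenue funds front-matter that would still sell subscriptions,
# and advertising revenue need not fall to zero under open access.
retained_subscription_share = 0.6   # assumed share of revenue unaffected
retained_advertising_share = 0.1    # assumed share of revenue unaffected

cost_to_recover = revenue * (1 - retained_subscription_share
                               - retained_advertising_share)
adjusted_charge = cost_to_recover / research_articles
print(f"Adjusted figure: £{adjusted_charge:,.0f} per article")   # £9,000 here
```

Even this toy calculation, with deliberately cautious placeholder values, lands well below the quoted range; sharing review effort with sister journals (point 3 above) would push the figure lower still.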

Myth 10 Publishers need to make huge profits in order to fund innovation

In the last seven years we have led the industry and the scientific publishing world to on-line. I think most people would agree we have pioneered it through ScienceDirect and through the electronic platform. That would not have happened if we did not have the scale to invest what turned out to be in excess of £200 million to develop the Science Direct on-line platform.

Oral evidence to Inquiry, March 1st 2004, Crispin Davis (CEO, Reed Elsevier)

Response

Elsevier cannot realistically claim to have led the transition of scientific publishers from print to online – that was done by smaller, more nimble operators such as HighWire Press (which put the Journal of Biological Chemistry online in 1995) and BioMedNet (which made the Current Opinion series of journals available online in full text form back in 1994). Of the large commercial publishers, Academic Press started IDEAL in 1995, years before ScienceDirect. Similarly, Elsevier’s figure of £200 million for the development costs of ScienceDirect is more an indication of corporate inefficiency than of innovation.

Huge investment by a large corporation is not the best driver of innovation, especially in the modern connected world. The explosion of the Internet has shown that open platforms are the real spur for innovation. The open standards of the Internet mean that anyone can create a website and offer any imaginable online service, and it will be instantly accessible by all Internet users world-wide. The result has been an unparalleled wealth of innovation, which goes far beyond what proprietary online services had previously achieved.

Open access to the scientific literature holds the promise of the same benefits for science. Once the majority of the scientific literature is open access, in the full sense of being openly re-distributable and re-usable, the entire scientific community will be free to develop and improve techniques to mine and explore that literature. They will not be constrained by any one corporate budget or policy, nor by the barriers inherent in the current fragmentation of the literature. At this point in time we can only imagine what is possible, but it is certain that it will dwarf what any one company might achieve.
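As a small illustration of the kind of re-use that this openness permits, here is a minimal sketch that counts term frequencies across a local folder of openly licensed full-text articles. The folder name (oa_corpus/) and the plain-text format are assumptions made for the example; the point is simply that nothing beyond the technical work stands between a researcher and this sort of analysis when the literature is openly redistributable.

```python
# Minimal sketch: simple term-frequency mining over a local corpus of
# openly licensed full-text articles. The directory "oa_corpus/" and the
# plain-text format are assumptions made for this example.
import re
from collections import Counter
from pathlib import Path

corpus_dir = Path("oa_corpus")   # hypothetical folder of .txt full texts
term_counts = Counter()

for article in corpus_dir.glob("*.txt"):
    text = article.read_text(encoding="utf-8", errors="ignore").lower()
    term_counts.update(re.findall(r"[a-z]{4,}", text))  # crude tokenisation

# Print the most frequent terms across the whole corpus.
for term, count in term_counts.most_common(20):
    print(f"{term}\t{count}")
```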

Myth 11 Publishers need to take copyright to protect the integrity of scientific articles

If your author’s work is then stolen or changed, what publishers can do because of their scale and their reach is to do something about that. Individual authors would find it very difficult if their article was used and changed.

Oral evidence to Inquiry, March 1st 2004, John Jarvis (Managing Director, Wiley Europe)

Response

Scientific integrity is protected not by copyright law, but by the norms, standards and processes of the scientific community. An article is only “stolen” from an author if it is mis-attributed. This is fraud, and laws other than copyright deal with fraud.

It is exceptionally rare for a scientific publisher to use copyright law to defend the integrity of a scientific paper on behalf of an author. In fact BioMed Central knows of no situation where this has happened.

The “scientific integrity” argument simply provides a convenient excuse, which is used by traditional publishers to attempt to justify their requirement for transfer of copyright.

Meanwhile, the real reason for copyright transfer is clear. Publishers regularly use copyright law to protect the profits they derive by controlling access to the literature. For example, in ongoing litigation, Elsevier and Wiley are suing various US photocopying firms for, amongst other things, including copies of research articles in student course-packs without paying royalties to the publisher.


Some thoughts about Sci-Hub

February 18, 2016

scihub website

Late last week, I was contacted by an online contact asking if I would be interested in participating in an interview:-

Do you want to be possibly interviewed by the Chronicle of Higher Ed about #icanhazpdf and Sci-Hub?

Being well aware of #icanhazpdf and Sci-Hub, I agreed. Sci-Hub is certainly a hot topic at the moment.

It wasn’t practical to speak with the reporter at that time so I emailed them back suggesting that they email me a few questions and I would respond.

I heard back a few days later and got to work at formulating my responses. This took quite a bit longer than I had anticipated.

The report at The Chronicle of Higher Education is due to be published week ending 19th February and I will link to it here as soon as I have found it. The Chronicle is a subscription publication. However, a fair percentage of articles can be read without a subscription and I hope that will be the case here.

++UPDATE++ The Chronicle article has now come out and you can read it in full here.

From my experience of doing interviews, I am fully aware that only a portion of what you write or say will be used. As such, I thought I would blog our Q&A discussion verbatim.

 

QUESTION: In your article, you write that open access has become the new norm and social media is the tool driving it. I’m wondering, what is Sci-Hub’s role in open access?

Sci-Hub is not open access. Maybe it’s a bit of grit in the oyster, helping to rock the boat. I completely agree with Dr Martin Eve who recently tweeted “I can’t condone and I don’t think it’s the answer, but it is a symptom of the problem. Pure open access business models would be immune to it”.

QUESTION:  Now that Elsevier is suing Sci-Hub there is much more attention drawn to academic piracy. In your opinion is Sci-Hub challenging the traditional pay to publish/pay to access model?

Subscription journal workarounds have been around for many years. Sci-Hub launched quietly in 2011 (I didn’t know about it until 2013) and has received much attention over the last 12 months or so via social media, blog posts and broad media coverage. As I currently understand it, I’m not sure it’s “challenging” these models per se (because it relies on .edu proxies, i.e. journal subscription accounts), but it has become an extremely effective way to access literature that is beyond the reach of most. Other than the legal aspects of the dispute with Elsevier, I sense there are also technology-based ones.

With regards to Sci-Hub generally, Richard Smith-Unna summarized matters succinctly in this tweet:-

QUESTION:  Many librarians I’ve spoken to say that academic publishing is working off a broken system. Do you agree? If so, who is it up to to fix it? What will it take?

There are several reasons why academic publishing is working off a broken system: the ongoing serials crisis, addiction to the Journal Impact Factor and, most recently, expensive Article Processing Charges (e.g. here). However, the publishing landscape continues to evolve. I would like to see academics, librarians and research funders taking more of a leading role in matters, rather than the publishers. Some, such as Björn Brembs, even question the need for publishers at all!

QUESTION:  Are you familiar with how Sci-Hub’s model works? Does the fact that it uses university credentials to scrape papers from Elsevier and other journals put librarians who work at those university in an awkward position?

Yes, I am aware of how the model works. This is not mentioned on the Sci-Hub platform itself, but is explained elsewhere, such as on the Wikipedia page about it. Some (but not as many as I thought) in the librarian community are aware of Sci-Hub and other methods of bypassing the modern interlibrary loan (ILL). A detailed paper titled Bypassing Interlibrary Loan Via Twitter: An Exploration of #icanhazpdf Requests by Gardner & Gardner from 2015 is noteworthy. Equally worth a read is Is Biblioleaks Inevitable? by Dunn et al from 2014.

QUESTION:  Is there a tension between academics and publishers? Is that how open access emerged?

In terms of tension between academics and publishers, where does one start! That is an extremely broad question. Many academics feel that publishers haven’t been serving their interests effectively. For example, see The Cost of Knowledge, signed so far by over 15,000 researchers. This was inspired by Sir William Timothy Gowers.

The open access movement traces its history back at least to the 1950s. Widespread access to the internet in the late 1990s and early 2000s fueled the movement. Once the internet took hold, open access was initially seen by traditional subscription-based publishers as a threat and, more recently, as an opportunity.

QUESTION:  You’ve studied open access thoroughly. To you, what does the future look like for Sci-Hub? If it disappears, do you expect something else will take its place?

The future of Sci-Hub is uncertain. It does have shades of the Napster era; see Napster, Udacity, and the Academy by Clay Shirky. That said, as The Library Loon states in her recent blog post Next moves in the Sci-Hub game: “Sci-Hub has come as close as anything to Napsterizing paywalled journals yet actually surviving the experience”. Pressure on the system will continue until we have full open access in place.

++UPDATE++

In a subsequent piece about Sci-Hub, Peter Suber is quoted three times and is on record as saying the following:-

“I don’t endorse illegal tactics,” says Peter Suber, director of the Office for Scholarly Communications at Harvard University and one of the leading experts on open-access publishing.

Suber quote

