## The open access hoax and other failures of peer review

Curt Rice in the Guardian, "Open access publishing hoax: what Science magazine got wrong", 10/4/2013:

Science magazine has published a blistering critique of the most sacred cow of scientific research, namely the peer review quality system. Unfortunately, Science doesn't seem to have understood its own findings. It proclaims to have run a sting operation, written by 'gonzo scientist' John Bohannon, revealing the weaknesses of the relatively new publishing model we call open access. In fact, the Science article shows exactly the opposite of what it intended, namely that we need an even wider use of open access than the one we currently have.

The version published on Curt's web log ("What Science — and the Gonzo Scientist — got wrong: open access will make research better") closes with a list of links to other commentary on the Science article:

Science magazine rejects data, publishes anecdote, by Björn Brembs
John Bohannon’s peer review sting against Science, by Mike Taylor
New “sting” of weak open access journals, by Peter Suber
I confess, I wrote the arsenic DNA paper to expose flaws in peer-review at subscription based journals, by Michael Eisen
Science reporter spoofs hundreds of open access journals with fake papers, at the wonderful Retraction Watch, by Ivan Oransky
OASPA’s response to the recent article in Science entitled “Who’s Afraid of Peer Review?”, press release
What Science‘s “sting operation” reveals: open access fiasco or peer review hellhole? by Kausik Datta
Who’s afraid of open access? by Ernesto Priego
Science Mag sting of OA journals: is it about open access or about peer review, by Jeroen Bosman

There are very important issues at stake here, and Curt has very worthwhile things to add to the discussion, so you should definitely read both the Science article and Curt's response.

To start with, Science is definitely not an open-access journal — to read most articles, you need to be a member of the AAAS (at relatively modest prices ranging from $75/year for students to $151/year for "Professional Member" status), or have access to a library that subscribes.  Like other society-centered journals, Science is suffering from attrition in its membership rolls, due to the simple fact that most potential members can get access through their university or company library, and would just as soon not pile up paper copies that clog their recycling bins. And like other non-open-access journals, Science is suffering from the moral and political assault of the open access movement, which variously argues that publicly-funded research reports should be accessible to the public; that authors (who are not paid for their contributions) benefit from broader access to their writings; and (sometimes) that access to digital information should be priced at its marginal cost of reproduction, which in the case of scholarly and scientific publication is essentially zero.

Journals like Science have done several things in response. They've tried to keep down subscription prices — especially in comparison to the sometimes-exorbitant prices charged by commercial publishers like Reed Elsevier; they've tried to offer additional value to members; they've allowed various forms of limited or delayed open access; and they've made anti-open-access counterarguments, of which the Bohannon article is an extreme example.

There are some non-trivial anti-open-access arguments. For example, there are non-zero costs associated with editing and managing a journal, which are on the order of a thousand dollars per published paper. The commonest "open access" method to raise this money is the "Author Pays" model, in which the journal charges would-be authors a fee, usually if the paper is accepted for publication, but sometimes at the time of submission.  (The range of such fees has been about $400 to $3500 in cases that I've encountered.)

And there are two potential problems with the "Author Pays" model.

First, for authors who don't have grants or slush funds to pay such fees, the cost can be a problem.  There are fields where productive researchers expect to publish five to ten articles per year, so cumulative publication fees might easily exceed $10,000 a year. This is essentially a problem in politico-economic restructuring — the billions of dollars now spent by libraries on journal subscriptions are the obvious place to look for the needed funds. But of course, this kind of restructuring is extremely hard to arrange.

Second (and in my opinion more important), the "Author Pays" model is an invitation to chicanery and fraud. Starting more than a decade ago, we saw the proliferation of "spamferences" — ad hoc international conferences, organized by ad hoc international organizing committees, whose goal seems to be to persuade gullible researchers to pay substantial registration fees to present their papers, usually in resort locations (or places that sound like resorts, at least). The structure and the cash flow of these spamferences are not at all distinct from the annual meetings of reputable organizations — but there is nevertheless a difference. See "(Mis)Informing Science", 4/20/2005, and "Dear [Epithet] spamference organizer [Name]", 10/6/2010, for some discussion.

As a result of the blossoming of the Open Access movement, there has been a similar proliferation of journals on a continuum from those motivated by the best interests of humanity to out-and-out frauds. It appears that a number of large and well-funded operations have started to mine this vein of ore, often with publications that are well out towards the "take the money and run" end of that spectrum.

John Bohannon demonstrated this by building an engine to create a large number of nonsense versions of a pretend scientific paper:

The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable.
Submitting identical papers to hundreds of journals would be asking for trouble. But the papers had to be similar enough that the outcomes between journals could be comparable. So I created a scientific version of Mad Libs. The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.

The fictitious authors are affiliated with fictitious African institutions. I generated the authors, such as Ocorrafoo M. L. Cobange, by randomly permuting African first and last names harvested from online databases, and then randomly adding middle initials. For the affiliations, such as the Wassee Institute of Medicine, I randomly combined Swahili words and African names with generic institutional words and African capital cities. My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.

He then submitted these papers to a large number of (vaguely biomedical) journals:

Between January and August of 2013, I submitted papers at a rate of about 10 per week: one paper to a single journal for each publisher. I chose journals that most closely matched the paper's subject. First choice would be a journal of pharmaceutical science or cancer biology, followed by general medicine, biology, or chemistry. In the beginning, I used several Yahoo e-mail addresses for the submission process, before eventually creating my own e-mail service domain, afra-mail.com, to automate submission.

The results?

By the time Science went to press, 157 of the journals had accepted the paper and 98 had rejected it. Of the remaining 49 journals, 29 seem to be derelict: websites abandoned by their creators.
Editors from the other 20 had e-mailed the fictitious corresponding authors stating that the paper was still under review; those, too, are excluded from this analysis. Acceptance took 40 days on average, compared to 24 days to elicit a rejection.

It has to be noted that traditional paid-access journals are always motivated to some degree by the desire for money: a modest (but intense) desire on the part of the staff of scientific and technical societies, and a more expansive and rapacious desire on the part of companies like Reed Elsevier. And regular readers of Language Log will have noticed that some pretty bad papers get published by non-open-access journals. This includes Science, where, alas, it's hard to think of any decent-quality linguistics papers that have appeared in recent years, and easy to think of several deeply embarrassing ones… But the bad papers published in journals like Science typically present exciting-sounding results with fundamental conceptual or experimental flaws — they're not meaningless sham papers created by random substitution of names of languages, phonemes, lexical categories, etc., into typed slots in a "Mad Libs" framework.

So there's definitely a problem here. And the problem is compounded by the fact that the large-scale publishers of dubious open-access journals are being bought up by nominally reputable for-profit publishers. Bohannon notes that

Journals published by Elsevier, Wolters Kluwer, and Sage all accepted my bogus paper. Wolters Kluwer Health, the division responsible for the Medknow journals, "is committed to rigorous adherence to the peer-review processes and policies that comply with the latest recommendations of the International Committee of Medical Journal Editors and the World Association of Medical Editors," a Wolters Kluwer representative states in an e-mail. "We have taken immediate action and closed down the Journal of Natural Pharmaceuticals."
In 2012, Sage was named the Independent Publishers Guild Academic and Professional Publisher of the Year. The Sage publication that accepted my bogus paper is the Journal of International Medical Research. Without asking for any changes to the paper's scientific content, the journal sent an acceptance letter and an invoice for $3100. "I take full responsibility for the fact that this spoof paper slipped through the editing process," writes Editor-in-Chief Malcolm Lader, a professor of psychopharmacology at King's College London and a fellow of the Royal Society of Psychiatrists, in an e-mail. He notes, however, that acceptance would not have guaranteed publication: "The publishers requested payment because the second phase, the technical editing, is detailed and expensive. … Papers can still be rejected at this stage if inconsistencies are not clarified to the satisfaction of the journal."

Lader argues that this sting has a broader, detrimental effect as well. "An element of trust must necessarily exist in research including that carried out in disadvantaged countries," he writes. "Your activities here detract from that trust."

The Elsevier journal that accepted the paper, Drug Invention Today, is not actually owned by Elsevier, says Tom Reller, vice president for Elsevier global corporate relations: "We publish it for someone else." In an e-mail to Science, the person listed on the journal's website as editor-in-chief, Raghavendra Kulkarni, a professor of pharmacy at the BLDEA College of Pharmacy in Bijapur, India, stated that he has "not had access to [the] editorial process by Elsevier" since April, when the journal's owner "started working on [the] editorial process." "We apply a set of criteria to all journals before they are hosted on the Elsevier platform," Reller says. As a result of the sting, he says, "we will conduct another review."
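The "scientific Mad Libs" scheme Bohannon describes — a fixed template with molecule, lichen, and cell line slots, plus randomly assembled authors and affiliations — is easy to picture in code. Here is a minimal sketch in Python; the word lists are invented placeholders, not Bohannon's actual databases of molecules, species, and names:

```python
import itertools
import random

# Invented placeholder lists standing in for Bohannon's databases of
# molecules, lichen species, and cancer cell lines.
molecules = ["anthraquinone", "usnic acid", "parietin"]
lichens = ["Usnea dasypoga", "Xanthoria parietina", "Cladonia rangiferina"]
cell_lines = ["HeLa", "MCF-7", "A549"]

# Invented name fragments for generating fictitious authors and affiliations.
first_names = ["Ocorrafoo", "Amara", "Kwame"]
last_names = ["Cobange", "Okafor", "Mwangi"]
cities = ["Nairobi", "Accra", "Kampala"]

# The fixed template: "Molecule X from lichen species Y inhibits
# the growth of cancer cell Z."
TEMPLATE = ("{molecule} from the lichen {lichen} "
            "inhibits the growth of {cell_line} cancer cells")

def make_author(rng):
    # Random first/last name plus a randomly added middle initial,
    # as in "Ocorrafoo M. L. Cobange".
    initial = rng.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
    return f"{rng.choice(first_names)} {initial}. {rng.choice(last_names)}"

def generate_papers():
    # One paper per (molecule, lichen, cell line) combination; the
    # scientific content of every paper is otherwise identical.
    rng = random.Random(0)
    papers = []
    for molecule, lichen, cell_line in itertools.product(
            molecules, lichens, cell_lines):
        papers.append({
            "title": TEMPLATE.format(molecule=molecule, lichen=lichen,
                                     cell_line=cell_line),
            "author": make_author(rng),
            "affiliation": f"Institute of Medicine, {rng.choice(cities)}",
        })
    return papers

papers = generate_papers()
print(len(papers))          # 3 * 3 * 3 = 27 distinct papers
print(papers[0]["title"])
```

With three entries per list this yields 27 distinct papers; with realistic database sizes, the same Cartesian product yields the "hundreds of unique papers" Bohannon needed for one submission per publisher.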

I was happy to see that a large open-access publisher for which I have a lot of respect, BioMed Central,  was apparently not one of those taken in by the sting. And BioMed Central is owned by Springer Science + Business Media, showing that mere ownership by a large European commercial publisher is not a guarantee of rapacity and intellectual fraud.

Returning to Curt Rice's piece in the Guardian, his main point is that the real problem is not the business models of publishers; it's the antiquated and dysfunctional system of peer review:

Bad work gets published. This is a crisis for science and it's the crisis that Science shines a sharp light on this week. But Science misread the cause, which was not about making the results of research freely available via open access, but the meltdown of the peer review system. We need change. It's the digital age that allows that change, and the very best open access journals that are leading the development of new approaches to peer review.

Here are the basic facts about journal peer review as I perceive them to be:

(1) It doesn't effectively maintain quality. A large amount of really, really bad work gets published, including (and even especially) at high-impact, top-rated journals.

(2) It slows down the pace of innovation, except in fields where journal publication has largely been abandoned as a communications method.  There are many journals (including for example Language) where the time from submission to publication may routinely be more than 18 months. As a result, in many fields of computer science, for example, journal publication has become a sort of quaint academic ritual, rather like wearing academic regalia at commencement — something that people do out of a sense of tradition and not because it has any remaining function. Instead, people learn about new ideas from conference presentations and conference proceedings (which are peer-reviewed, but in a very different way from journals), or from un-refereed web publication, as for instance on arXiv.org.

(3) Yes/No evaluative decisions are better made after an idea or result is published, and it's obvious that in the future some kind of social-media evaluation procedure will take care of this function.  Back-and-forth to modify articles before publication is sometimes worthwhile and sometimes just pointless dithering, but even when it's worthwhile, its value is decreased by the fact that it's hidden from everyone except the reviewer, editor, and author(s).  Open conversation after publication — perhaps resulting in new and improved versions — is a much more useful kind of communication.

(4) There is a complex set of problems about criticism, anonymity, retaliation, etc., and these are serious issues for a more open evaluation process — but in fact the current situation is full of Bad Stuff caused by the same motivations and dynamics; it's just hidden from view.

As Curt puts it:

The real problem for science today is quality control. Peer review has been at the heart of this, but there are too many failures – both in open access and traditional journals – simply to plod ahead with the same system. We need new approaches and numerous individuals and organisations are working on these, such as the open evaluation project.

The creative potential offered by digital communication of scientific results, an area in which open access journals are leading the way, is exactly where we need to focus. And if we do so, we will solve the problem of the broken peer review system that Science and the gonzo scientist have uncovered.

Curt implies that the problems with the peer-review system are recent ones. But in my opinion, they've always been there; what's new is the opportunity to correct them.  In this context, it's interesting to re-read Stevan Harnad's 1990 "Scholarly skywriting and the prepublication continuum of scientific inquiry", or Andrew Odlyzko's 1994 "Tragic loss or good riddance? The impending demise of traditional scholarly journals" (I especially recommend his section 8, on "The interactive potential of the Net").

Good ideas (and working models) of a better system have been around for a long time. But academics are a very conservative bunch, at least when it comes to their own social arrangements, and so it is likely to be some time before real change occurs.

1. ### Jon Weinberg said,

October 5, 2013 @ 3:24 pm

According to Peter Suber, Open Access (MIT Press 2012), only 30% of open-access journals use an author-pays model (with that fee being paid nearly all the time by an institution rather than the author personally). That's consistent with my own experience in law academia; no U.S. open-access journal in law imposes an author fee. Suber contrasts that with non-open-access journals, which he says are much more likely to require an upfront fee.
[(myl) I've very rarely experienced non-open-access journals asking for "page charges", which is what such fees used to be called. In the fields that I'm familiar with, it's true that many open-access journals (e.g. Computational Linguistics) are paid for by the society that publishes them, out of membership fees. But in the biomedical area, which is where Bohannon was prospecting, I believe that most of the open-access journals do charge author fees — in particular, the kinds of journals that his pseudo-paper was accepted by.]

(What's your basis, Mark, for the figure of $1000 in actual costs incurred per published paper, assuming online-only publication? Scholars doing peer review, after all, work for free; I'm having a hard time getting the costs to add up that high.)

[(myl) I've gotten versions of this order-of-magnitude number from several different types of sources, ranging from Matt Cockerill at BioMedCentral to Steven Bird at ACL. There remains a fair amount of labor beyond basic editorial and refereeing activities: copy editing, format hacking, permissions clearance, web site administration, bookkeeping, general secretarial and administrative functions. If you can get all of that done by volunteers — or if you don't do it at all — then the costs obviously go down. But note that we're not talking about a lot of money — for a small journal, it's far below the cost of hiring even one professional employee.

Here's an example where I know some of the details. In 2012, Computational Linguistics published about 24 articles — at $1000 each, that would be $24,000. In fact, through 2010 the ACL paid MIT Press $45-50k per year for copyediting, proofreading, typesetting, web hosting, marketing, and handling of rights & permissions. In 2011, MIT Press introduced a LaTeX-aware copy editor, and reduced their charges to about $28k/year. In addition to these costs, there used to be a part-time editorial assistant, typically a grad student, who was paid $15k/year. I believe that in 2011 that position was eliminated in favor of the OJS web-based manuscript management system; but not all journals can count on their editor being able or willing to install and maintain such a software package on a volunteer basis. So the out-of-pocket costs in 2012 were either $28k or $43k, which in either case is greater than 24*$1k. (In fact, CL does not charge author fees, but rather funds the enterprise from membership dues.) What I've learned about costs for other journals is consistent with this.
The only alternatives to O($1k) per article in funds, from whatever source, are
(1) Lots of volunteer labor for things like copy editing and formatting and site management and IPR negotiations; or
(2) Very minimal intervention, a la arXiv.org or ssrn.com — basically a repository for .pdfs contributed by authors.

Costs are trending downwards, due to better automation in open-source programs for managing the process. But I think that $1k/article, for a new journal without economies of scale from being folded in to a larger enterprise like BioMedCentral, is not out of line for what the costs are today for a publication in the U.S. or Europe. Maybe you could bring it down to $600/article or so (which is what I believe BioMedCentral quotes for hosting and managing a journal), but I think it would be difficult to go much lower than that, unless a couple of dedicated and talented people are willing to devote most of their lives to providing volunteer labor for the project.]
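A quick back-of-the-envelope check of the Computational Linguistics figures above (24 articles/year, $28k in MIT Press charges, $15k for the editorial assistant), in Python:

```python
# Figures as stated above for Computational Linguistics in 2012.
articles_per_year = 24
copyediting_charges = 28_000    # MIT Press, after the LaTeX-aware copy editor
editorial_assistant = 15_000    # part-time editorial assistant (pre-2011)

low_total = copyediting_charges                        # the $28k scenario
high_total = copyediting_charges + editorial_assistant # the $43k scenario

low_per_article = low_total / articles_per_year
high_per_article = high_total / articles_per_year

print(f"${low_per_article:,.0f} to ${high_per_article:,.0f} per article")
# Both figures exceed the $1000/article order-of-magnitude estimate.
```

Either scenario works out to well over $1000 per published article, consistent with the order-of-magnitude estimate in the post.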

10. ### Richard Sproat said,

October 7, 2013 @ 7:12 am

"But the bad papers published in journals like Science typically present exciting-sounding results with fundamental conceptual or experimental flaws — they're not meaningless sham papers created by random substitution of names of languages, phonemes, lexical categories, etc., into typed slots in a "Mad Libs" framework."

So are you saying you don't think Science would publish an article that was based on intentionally faked or nonsensical data, if they thought there would be a big press release coming out of it? Not if they knew it was fraudulent or nonsensical, surely. But might they not be willing to look the other way a bit if they started drooling over the possibility of a press bonanza?

It's not as if we haven't seen instances of this already.

[(myl) I'm assuming, perhaps falsely, that Bohannon's pseudo-articles would look to experts in the area like obvious fakes, as the 2005 SCIgen pseudo-article was.

But all that he says about his pseudo-articles is

The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers.

He might have done this in a way that created coherent and plausible but fraudulent papers, in which case I think his experiment/anecdote is presented in a very misleading way. There's a big difference between accepting an incoherent paper and accepting a paper that makes fraudulent claims. If this is what happened, then the problem is not exactly a failure of peer review, but rather another example of why replicability and replication are crucial.

There are many journals, including some high-impact ones like Science, that have accepted and published fraudulent or semi-fraudulent papers.

So yes.]

11. ### Richard Sproat said,

October 7, 2013 @ 8:19 am

@MYL

Agreed, but aren't some of the papers that Science did publish in our field obviously incoherent? How long did it take you to show up major flaws in Atkinson 2011? Or for you, Shalizi and me to demonstrate major problems with Rao et al 2009? Both of these involved serious issues with replicability when subjected to critique by experts in the field.

Bohannon at least picked a field where it would not be immediately obvious whether the reported results were bogus. One can't run a simple computer simulation, or compute stats from a public online database, to determine whether a particular chemical from a particular lichen inhibits growth in a particular cancer cell line. Perhaps a little fact checking: does molecule X even occur in lichen species Y? Is lichen species Y a species that one would expect would be readily available in the country the authors claim to be from? But that would be circumstantial evidence at best, and certainly nothing like as clear as your demonstration that Atkinson's methodology overrewarded tone distinctions, thus inflating the apparent phonological richness of African languages.

Does the Science article (which I haven't seen, not having access) report anything other than numbers? In the cases where the papers were rejected, do we know why they were rejected? Was there evidence that the reviewers did the kind of basic fact checking suggested above, or was it for other reasons that they were rejected?

Anyway there is plenty of evidence that Science has accepted papers that do not pass the sniff test or even come close.

12. ### Richard Sproat said,

October 7, 2013 @ 8:27 am

Correction to myself: I see now that this is actually open.

13. ### Richard Sproat said,

October 7, 2013 @ 8:39 am

Ok, so I have now had a look at some of the articles. As Bohannon says, these are all the same except for the chemical, lichen and cell line.

To be frank, I don't think this is an honest test of anything. I of course know nothing about this field, but I assume that he was at least careful to make the methodology (e.g. "Cells were irradiated with a single dose of external radiation from a Cesium-137 source") plausible. That said, the reviewer would have to pay particular attention to just three basic elements, and without necessarily being an expert on these three elements, might easily miss the fact that the data are nonsensical. I'll have to go through the rejection letters sometime and see if anyone actually caught him out on these things.

That's presumably a large part of why papers like Rao's or Atkinson's looked plausible: lots of apparently reasonable "methodology" hiding the fact that the basic assumptions are flawed.

But it would be easy enough to concoct some computational linguistics problem where the basic methodology seems sound, the results are what people would like to here (e.g. DNNs outperform generative n-gram models) but the data are bogus (e.g. I picked some proprietary dataset from some obscure language).

14. ### Richard Sproat said,

October 7, 2013 @ 8:43 am

here -> hear

15. ### sbfren said,

October 7, 2013 @ 9:40 am

A parallel discussion on a terrific research chemistry blog, on the original article, not Curt Rice's response:
http://pipeline.corante.com/archives/2013/10/04/an_open_access_trash_heap.php

16. ### Richard Sproat said,

October 7, 2013 @ 12:49 pm

Ok one more comment and then enough on this. From the paper, which I didn't have time to read earlier (should remember to do that):

"There are numerous red flags in the papers, with the most obvious in the first data plot. The graph's caption claims that it shows a "dose-dependent" effect on cell growth—the paper's linchpin result—but the data clearly show the opposite. The molecule is tested across a staggering five orders of magnitude of concentrations, all the way down to picomolar levels. And yet, the effect on the cells is modest and identical at every concentration.

One glance at the paper's Materials & Methods section reveals the obvious explanation for this outlandish result. The molecule was dissolved in a buffer containing an unusually large amount of ethanol. The control group of cells should have been treated with the same buffer, but they were not. Thus, the molecule's observed "effect" on cell growth is nothing more than the well-known cytotoxic effect of alcohol."

Fine, but people may remember that it was suspicious looking plots — ones that had got past peer reviewers in "prestige" journals — that led to the downfall of Hendrik Schön.

And as someone pointed out, for this to be a controlled experiment, shouldn't Bohannon also have sent this to journals like, e.g. Science?

(Actually we already know the answer with Science: they must get thousands of papers on cancer, and I am sure that this one would not have made it past the editorial office anyway. Surely not sufficient "impact".)

17. ### Stevan Harnad said,

October 7, 2013 @ 1:00 pm

WHERE THE FAULT LIES

To show that the bogus-standards effect is specific to Open Access (OA) journals would of course require submitting also to subscription journals (perhaps equated for age and impact factor) to see what happens.

But it is likely that the outcome would still be a higher proportion of acceptances by the OA journals. The reason is simple: Fee-based OA publishing (fee-based "Gold OA") is premature, as are plans by universities and research funders to pay its costs:

Funds are short and 80% of journals (including virtually all the top, "must-have" journals) are still subscription-based, thereby tying up the potential funds to pay for fee-based Gold OA. The asking price for Gold OA is still arbitrary and high. And there is very, very legitimate concern that paying to publish may inflate acceptance rates and lower quality standards (as the Science sting shows).

What is needed now is for universities and funders to mandate OA self-archiving (of authors' final peer-reviewed drafts, immediately upon acceptance for publication) in their institutional OA repositories, free for all online ("Green OA").

That will provide immediate OA. And if and when universal Green OA should go on to make subscriptions unsustainable (because users are satisfied with just the Green OA versions), that will in turn induce journals to cut costs (print edition, online edition), offload access-provision and archiving onto the global network of Green OA repositories, downsize to just providing the service of peer review alone, and convert to the Gold OA cost-recovery model. Meanwhile, the subscription cancellations will have released the funds to pay these residual service costs.

The natural way to charge for the service of peer review then will be on a "no-fault basis," with the author's institution or funder paying for each round of refereeing, regardless of outcome (acceptance, revision/re-refereeing, or rejection). This will minimize cost while protecting against inflated acceptance rates and decline in quality standards.

That post-Green, no-fault Gold will be Fair Gold. Today's pre-Green (fee-based) Gold is Fool's Gold.

None of this applies to no-fee Gold.

Obviously, as Peter Suber and others have correctly pointed out, none of this applies to the many Gold OA journals that are not fee-based (i.e., do not charge the author for publication, but continue to rely instead on subscriptions, subsidies, or voluntarism). Hence it is not fair to tar all Gold OA with that brush. Nor is it fair to assume — without testing it — that non-OA journals would have come out unscathed, if they had been included in the sting.

But the basic outcome is probably still solid: Fee-based Gold OA has provided an irresistible opportunity to create junk journals and dupe authors into feeding their publish-or-perish needs via pay-to-publish under the guise of fulfilling the growing clamour for OA:

Publishing in a reputable, established journal and self-archiving the refereed draft would have accomplished the very same purpose, while continuing to meet the peer-review quality standards for which the journal has a track record — and without paying an extra penny.

But the most important message is that OA is not identical with Gold OA (fee-based or not), and hence conclusions about peer-review standards of fee-based Gold OA journals are not conclusions about the peer-review standards of OA — which, with Green OA, are identical to those of non-OA.

For some peer-review stings of non-OA journals, see below:

Peters, D. P., & Ceci, S. J. (1982). Peer-review practices of psychological journals: The fate of published articles, submitted again. Behavioral and Brain Sciences, 5(2), 187-195.

Harnad, S. R. (Ed.). (1982). Peer commentary on peer review: A case study in scientific quality control (Vol. 5, No. 2). Cambridge University Press

Harnad, S. (1998/2000/2004) The invisible hand of peer review. Nature [online] (5 Nov. 1998); Exploit Interactive 5 (2000); and in Shatz, D. (2004) (ed.) Peer Review: A Critical Inquiry. Rowman & Littlefield. Pp. 235-242.

Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).

18. ### David Marjanović said,

October 16, 2013 @ 1:59 pm

Here are the basic facts about journal peer review as I perceive them to be:

[…]

(2) It slows down the pace of innovation […] There are many journals (including for example Language) where the time from submission to publication may routinely be more than 18 months.

Uh, I have no idea about Language, but in my field peer review isn't the limiting factor. It takes maybe a month, or two to three if the reviewers want you to redo analyses. What can take a year or even two, or on the other hand just a few weeks, is the time between acceptance and publication.

19. ### David Marjanović said,

October 16, 2013 @ 2:01 pm

A discussion about the topic of this post.

20. ### Controversial Article in The Journal “Science” exposes the weaknesses of Peer-Review in a set of Open Access Journals | SciELO in Perspective said,

November 5, 2013 @ 12:04 pm

21. ### Controversial article in Science exposes weaknesses in peer review in a series of open access journals | SciELO en Perspectiva said,

November 5, 2013 @ 12:14 pm

22. ### Controversial article in Science exposes weaknesses of peer review in a set of open access journals | SciELO em Perspectiva said,

November 5, 2013 @ 12:26 pm