From “Thinking about prestige, quality, and open access,” SPARC Open Access Newsletter, September 2, 2008.
I've been thinking a lot lately about how journal quality and prestige overlap, how they diverge, and how their complex interaction affects the prospects for open access (OA). Here are a dozen thoughts or theses about prestige and OA. Some are commonplace, but I include them because they help me build up to others which are not. I start with the rough notion that if journal quality is real excellence, then journal prestige is reputed excellence.
(1) Universities reward faculty who publish in high-prestige journals, and faculty are strongly motivated to do so. If universities wanted to create this incentive, they have succeeded.
Researchers have always been motivated by their research topics. If all journals were equal in prestige, or if all journals were equal in the eyes of promotion and tenure committees, most researchers would happily focus on their research and give very little thought to where it was published. Universities have succeeded at putting journal prestige on the radar of faculty who might not have cared. This is important for at least two reasons. First, the OA movement has to work, or start working, within the existing system of incentives. Second, it means that researchers are not so preoccupied by their research that they can't be induced to pay attention to relevant differences among journals, or at least the differences which universities make relevant. This gives hope to a strategy to get faculty to pay attention to access issues.
Funding agencies can have the same effect as university P&T committees. If they give grants to applicants who have a record of publishing in high-prestige journals, they help create, and then entrench, the incentive to publish in high-prestige journals.
If journal prestige and journal quality can diverge, then universities and funders may be giving authors an incentive to aim only for prestige. If they wanted to create an incentive to put quality ahead of prestige, they haven't yet succeeded. Much more on this below.
Universities and funders can mandate green OA, of course, and a growing number of them do (as of today [9/2/08], 22 universities, 4 departments, and 27 funders).
It may seem sufficient to motivate authors to provide OA to their own work, and unnecessary to motivate journals to provide OA. That may be true for providing OA. But if more journals don't permit OA archiving (green OA), then even successful attempts to motivate authors to self-archive are needlessly limited in their scope. However, if universities wanted to create incentives for journals to support OA (green or gold), they haven't yet succeeded. I know that many universities and funders are thinking about adopting OA policies. But as I argued in an article from January 2007,
While they're deliberating, it would help if universities [and funders] would recognize their complicity in the problem they are trying to solve. By rewarding faculty who win a journal's imprimatur, mindful of the journal's prestige but heedless of its access policies, universities [and funders] shift bargaining power from authors to publishers of high-prestige journals. They give publishers less incentive to modify their standard contracts and authors greater incentive to sign whatever publishers put in front of them.
(2) Most high-prestige journals today are toll access (TA).
This isn't surprising. Most OA journals are new and it takes time for any journal, even one born excellent, to earn prestige in proportion to its quality. But it means that the motive to publish in high-prestige journals leads most faculty most of the time to try TA journals first.
If a journal can be excellent from birth, but not prestigious from birth, or if new journals typically achieve quality before they achieve a reputation for quality, then we have a non-cynical reason to think that quality and prestige can diverge. Quality and prestige clearly overlap, perhaps most of the time. But a significant number of high-quality journals, most notably the new ones, will not be correspondingly high in prestige.
If most OA journals are lower in prestige than most TA journals, it's not because they are OA. A large part of the explanation is that they are newer and younger. And conversely: if most TA journals are higher in prestige than most OA journals, it's not because they are TA. A large part of the explanation is that they are older or have a headstart.
Could the average quality of TA and OA journals be another part of the explanation? For journals of roughly the same age, differences in quality probably correlate closely with differences in prestige. But we don't have a good quality measurement—a problem that will come back again and again—and we can't forget the age variable. No one has done the studies. But if we could compare TA and OA journals of the same age and quality, I suspect we'd find that they had roughly the same levels of prestige.
(3) Most authors will choose prestige over OA if they have to choose. Fortunately, they rarely have to choose. Unfortunately, few of them know that they rarely have to choose.
There are two reasons why authors rarely have to choose between prestige and OA. First, there is already a growing number of high-prestige OA journals. They function not only as high-prestige OA outlets for new work, but as proofs of concept, showing that nothing intrinsic to OA prevents the growth of prestige. Second, authors can self-archive. They can publish in a prestigious TA journal and then deposit their postprint in an OA repository. About two-thirds of TA publishers already give blanket permission for this and many of the others will give permission on request.
When the OA archiving is mandated by the author's funding agency, the percentage of TA publishers allowing it rises to nearly 100%.
Beyond this, there's growing evidence that some scholars actually prefer OA to prestige, when they have to choose.
One of the best-kept secrets of scholarly communication today is that deposit in an OA repository is compatible with publication in a TA journal. Of all the damage caused by ignorance and misunderstanding of OA, more may be caused by ignorance of this fact than ignorance of any other. Today, many prestige-driven authors will dismiss the idea of OA because they haven't heard of any OA journals—proof, to them, that OA doesn't carry sufficient prestige. Even if they want to support OA and look up the OA journals in their field, they may not find any with the prestige they could get from certain TA journals and therefore, perhaps reluctantly, dismiss the idea of OA itself as one of those ideas which is good in theory but not yet in practice. But authors who do either of these things are unaware of OA archiving or unaware that OA archiving is compatible with publishing in TA journals.
In my experience, people who don't know about this compatibility assume incompatibility. They assume that there's usually a trade-off between prestige and OA when in fact there usually isn't. If we could enlighten researchers and their institutions on this one point, we'd remove one of the largest single barriers to the spread of OA. But we must be precise: the barrier isn't prestige or the pursuit of prestige. It's ignorance and misunderstanding.
(Note that even in the minority of cases when journals don't allow OA archiving, they don't prohibit dark or non-OA deposits in an OA repository. Authors can always self-archive in that sense, and switch the article from closed to open when the journal's embargo period runs.)
John Unsworth once made the good point that we needn't make OA prestigious if we could only make it cool.
Both prestige and coolness would attract author submissions, and OA does seem to be growing on both scales. We needn't see them as the same thing. Authors want to be associated with a journal's prestige, or reputed excellence, and they want to be associated with a journal's coolness (and even a repository's coolness), or contribution to a good cause. I'd love to explore this in more detail. But for my present purposes, it's enough to say that no one is likely to see an incompatibility between OA and coolness.
(4) Apart from the fact that most OA journals are new, there is no intrinsic reason why OA journals can't be as high in quality and prestige as the best TA journals.
The key variables in journal quality are excellent authors, editors, and referees. OA journals can use the same procedures and standards, and the same people—the same authors, editors, and referees—as TA journals. If this weren't already clear, we're reminded of it every time a TA journal converts to OA.
There are even some respects in which the average OA journal may exceed the quality of the average TA journal, such as the freedom to publish a short issue (without shortchanging subscribers) rather than lower standards to fill it out. I discussed some of these at length in SOAN for October 2006.
It's harder to pinpoint the key variables in journal prestige, but any list would include quality, age, impact, circulation, and recognition by promotion and tenure committees. Except for age, good OA journals can match good TA journals on all these parameters. By age, I mean how old a journal is today, not its prospects for longevity. Even when OA journals have sustainable business models and are as likely as any other journals to survive for the long term, they are still on average much newer and haven't yet been around long enough to acquire the brand recognition and reputations of venerable TA journals. There is no doubt that their newness works against their prestige, but little doubt that they could possess all the other marks of prestige. In the case of citation impact, OA journals are likely to surpass TA journals of comparable quality, and in the case of circulation, they already surpass all TA journals whatsoever.
If most high-prestige journals today are TA, that's much more a fact about today than about any intrinsic advantages or disadvantages of TA and OA journals. It's a snapshot of a dynamic, rapidly changing situation.
(5) Quality feeds prestige and prestige feeds quality.
Quality ought to feed prestige and generally does. Or prestige ought to rest on quality and generally does. Excellent journals deserve reputations for excellence and, with conducive circumstances, tend to acquire them.
But prestige also feeds quality. Journal prestige is a powerful incentive for authors to submit work, perhaps the most powerful. By attracting submissions, prestige allows a journal to be more selective and actually improve its quality. Journal prestige also attracts good editors and referees, who directly help a journal improve its quality.
Prestige even feeds prestige. Journal prestige attracts readers, and helps justify library decisions to spend part of their limited budget on a subscription. The growth in readers and subscribers directly boosts prestige.
The quality-prestige feedback loop operates as a benign circle for high-prestige journals. The prestige itself helps them maintain both their quality and their prestige. The same feedback loop operates as a vicious circle for low-prestige journals (journals with little or no reputation, regardless of their quality). The lack of prestige itself becomes a barrier to gaining prestige in proportion to their quality. This vicious circle often takes a stark form: a journal needs excellent submissions to generate prestige, but needs prestige to attract excellent submissions.
Prestige enhances quality roughly the way interest enhances wealth, enabling the rich to get richer.
(6) Prestige is a zero-sum game, but quality is not.
On the mountain of quality, there's always room at the top for another journal (OA or TA). But on the mountain of prestige, there isn't. As long as researchers are producing excellent work, publishers can produce excellent journals. But as excellent journals multiply, not all of them can be on a library's must-have list, because budgets are finite. Not all can have a reputation for excellence, because brand awareness is also finite.
Even if all excellent journals could have reputations for excellence—say, when there are very few of them—not all could have a reputation for superiority to other journals in the same field. Insofar as prestige is reputed quality, it might attach to all high-quality journals, at least when there aren't too many of them. But insofar as prestige is reputed superiority, it cannot.
This is another non-cynical reason to think that quality and prestige can diverge, and that a significant number of high-quality journals will not be proportionally high in prestige.
I owe this insight to Doug Bennett, President of Earlham College, who makes the point with regard to colleges rather than journals. But it clearly applies to journals, just as it applies to books and movies.
Prestige is only approximately, not strictly, a zero-sum game. There is obviously no fixed limit to the number of prestigious journals or to the total quantum of prestige a system of scholarly communication could sustain. But there are pressures from finite budgets and finite scholarly attention which constrain the number and percentage of prestigious journals, independently of their quality. These pressures don't cap the number of journals that can earn prestige in proportion to their quality, or force new ones to displace old ones, but they burden the new ones that try and jack up the burden roughly in proportion to the number of existing high-prestige titles already regnant in a given field. With this understanding, I'll refer to prestige as a zero-sum game without always adding the wordy qualification.
(7) Because prestige is a zero-sum game, and quality is not, prestige can actually interfere with quality.
When the journals in a field are few, it might be possible for all the good ones to have recognizable brands and prestige in proportion to their quality. But when they are many, as today, then it's difficult or impossible for all the good ones to have recognizable brands and prestige in proportion to their quality.
This isn't the kind of interference which directly prevents a journal from becoming excellent. But it does prevent many excellent journals from earning prestige in proportion to their excellence. And because prestige feeds quality, lack of prestige prevents a journal from taking advantage of the feedback effects which could help it sustain and improve its quality, for example, through increased submissions and subscriptions. If two journals are equal in quality, and one has more prestige than the other (say, because of a headstart), then the one higher in prestige will generally become higher in quality at a faster rate than the one lower in prestige.
In short, prestige generates quality, but the zero-sum problem means that quality only generates prestige up to a point or with increasing resistance. This matters for several reasons. Prestige can't keep pace with quality, at least when there are many high-quality journals. If prestige is our measure of valuation, then it will inevitably undervalue some high-quality journals. And this kind of undervaluation will function as an obstacle to the kinds of quality improvements that prestige helps to make possible. It prevents some high-quality journals from earning interest on their quality, the way high-prestige journals do.
When prestige and quality diverge, therefore, it makes sense for journals to choose prestige over quality. Prestige will help them gain quality, but quality won't always help them gain prestige. Note that authors have different reasons to make the same choice. When prestige and quality diverge, authors have a stronger incentive to publish in a high-prestige journal than in a high-quality journal. For journals, this preference reflects the zero-sum problem, and for authors it reflects incentives created by promotion and tenure committees, which themselves favor prestige over quality when the two diverge. But if authors could be made to invert their preference and put quality ahead of prestige (say, because universities did the same), then journals would have a strong reason to follow suit.
For a different kind of evidence that prestige interferes with quality, see the evidence that journal prices are either unrelated to quality or inversely related to it. Publishers can charge what the market will bear, and prestige and monopoly potently affect what the market will bear.
(8) Universities tend to use journal prestige and impact as surrogates for quality. The excuses for doing so are getting thin.
If you've ever had to consider a candidate for hiring, promotion, or tenure, you know that it's much easier to tell whether she has published in high-impact or high-prestige journals than to tell whether her articles are actually good. Hiring committees can be experts in the field in which they are hiring, but promotion and tenure committees evaluate candidates in many different fields and can't be expert in every one. Moreover, even bringing in disciplinary experts doesn't fully solve the problem. We know that work can be good even when some experts in the field have never heard of it or can't abide it. On top of that, quantitative judgments are easier than qualitative judgments, and the endless queue of candidates needing evaluation forces us to retreat from time- and labor-intensive methods, which might be more accurate, to shortcuts that are good enough. And perhaps above all, it's easier to assume that quality and prestige never diverge than to notice when they do diverge and act accordingly.
Impact factors (IFs) rose to prominence in part because they fulfilled the need for easy quantitative judgments and allowed non-experts to evaluate experts. As they rose to prominence, IFs became more tightly associated with journal prestige than journal quality, in part because their rise itself helped to define journal prestige.
IFs measure journal citation impact, not article impact, not author impact, not journal quality, not article quality, and not author quality, but they seemed to provide a reasonable surrogate for a quality measurement in a world desperate for a reasonable surrogate. Or they did until we realized that they can be distorted by self-citation and reciprocal citation, that some editors pressure authors to cite the journal, that review articles can boost IF without boosting research impact, that articles can be cited for their weaknesses as well as their strengths, that a given article is as likely to bring a journal's IF down as up, that IFs are only computed for a minority of journals, favoring those from North America and Europe, and that they are only computed for journals at least two years old, discriminating against new journals.
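To make the limits of the IF concrete, here is a minimal sketch of the standard two-year impact factor calculation—citations received in one year to items published in the previous two years, divided by the number of citable items published in those two years. The journal and all figures below are invented for illustration.

```python
# Sketch of the standard two-year journal impact factor.
# All numbers are hypothetical, for illustration only.

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year IF: citations received this year to items published in the
    previous two years, divided by citable items published in those years.
    Note what this does NOT measure: article quality, author quality, or
    the impact of any individual article."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A hypothetical journal published 120 citable articles in 2006-2007
# and received 300 citations to them during 2008.
print(impact_factor(300, 120))  # 2.5
```

Even in this toy case the journal-level average hides the article-level distribution: a handful of heavily cited review articles could produce the same IF as uniformly cited research articles.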
By making IFs central in the evaluation of faculty, universities create incentives to publish in journals with high IFs, and disincentives to publish anywhere else. This discriminates against journals which are high in quality but low in IF, and journals which are high in quality but for whatever reason (for example, because they are new) excluded from the subset of journals for which Thomson Scientific computes IFs. By favoring journals with high IFs, universities may succeed at excluding all second-rate journals, but they also exclude many first-rate journals and many first-rate articles. At the same time, they create perverse incentives for authors and journals to game the IF system.
When we want to assess the quality of articles or people, and not the citation impact of journals, then we need measurements that are more nuanced, more focused on the salient variables, more fair to the variety of scholarly resources, more comprehensive, more timely, and with luck more automated and fully OA.
There are already a number of new measurements available or under development: Age-weighted citation rate (from Bihui Jin), Batting Average (from Jon Kleinberg et al.), the Distributed Open Access Reference Citation project (from the University of Oldenburg), Eigenfactor (from Carl Bergstrom), g-index (from Leo Egghe), h-index (from J.E. Hirsch) and variations on the theme like the Contemporary h-index (from Antonis Sidiropoulos et al.) and Individual h-index (from Pablo D. Batista et al.), the Journal Influence Index and the Paper Influence Index (both from the Center for Journal Ranking), MeSUR (MEtrics from Scholarly Usage of Resources, from LANL), SCImago Journal Rank and SJR Indicator (both from the University of Granada), Strike Rate Index (from William Barendse), Usage Factor (from UKSG), Web Impact Factor (from Peter Ingwersen), and y-factor (from Herbert van de Sompel et al.).
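To illustrate how simple some of these alternatives are, here is a sketch of Hirsch's h-index: the largest h such that an author (or journal) has h papers with at least h citations each. The citation counts below are invented for illustration.

```python
# Sketch of J.E. Hirsch's h-index, one of the alternative metrics above.
# Citation counts here are hypothetical.

def h_index(citations):
    """Return the largest h such that h of the given papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four papers have
# at least 4 citations each, but not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Unlike the IF, this is an author-level (or journal-level) measure of sustained impact rather than a short-window average, though it too measures citation counts, not quality.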
None of the new metrics tries to remedy all the limitations of the IF, when misused as a quality measurement, but they generally have more nuance and lower costs than the IF, and some have wider scope. Because none of them is widely adopted, none yet rivals the IF as a constituent of journal prestige.
We could solve many problems at once if we had more direct and accurate measurements of quality and could stop using citation impact and prestige as surrogates. But I'm very conscious that this is easier said than done. The new metrics are not direct quality measurements either, and quality may always be too difficult to measure directly—too time-consuming, labor-intensive, and subjective. But if we have to settle for surrogates, then at least we can improve the surrogates. If new metrics could reduce the inevitable oversimplification, then we could make more intelligent hiring, promotion, and tenure decisions. We could recognize more first-rate work, not just the subset delivered in certain venerable packages. We could remove some artificial disincentives for faculty to publish in OA journals. We could help more quality journals (OA and TA) use feedback effects to maintain and improve their quality. We could undo some of the ways in which prestige interferes with quality.
Here's a thought experiment. Imagine that we could agree on our judgments of journal quality. Construct two sets of peer-reviewed journals such that the average quality of the journals in the two sets were equal and that the journals in the one set were 1–3 years old and those in the second set 10 years old or older. Then we could check to see whether promotion and tenure committees reward faculty for publishing in the second set more than for publishing in the first set. (I'd bet big that they do.) When there's a quality difference, P&T committees ought to do their best to detect it and let it guide their judgments. But when there isn't a quality difference, steering faculty toward journals with little more than the advantage of age, headstart, or incumbency, and indirectly steering them away from journals of equal quality, only makes sense for the publishers of the incumbent journals. It makes no sense for universities trying to recognize and reward good work.
This point has been misunderstood in the past. I'm not saying that universities should lower their standards, assume quality from OA, give equal recognition to journals of lower or unknown quality, or treat any impact metric as a quality metric. I'm saying that universities should do more to evaluate quality, despite the difficulties, and rely less on simplistic quality surrogates. I'm saying that work of equal quality should have equal weight, regardless of the journals in which it is published. I'm saying that universities should focus as much as possible on the properties of articles and candidates, not the properties of journals. I'm saying that in their pursuit of criteria which exclude second-rate work, they should not adopt criteria which exclude identifiable kinds of first-rate work.
I'm never surprised when OA journals report high IFs, often higher than older and better-known journals in their fields. This reflects the well-documented OA impact advantage. I'm glad of the evidence that OA journals can play at this game and win. I'm not saying that journals shouldn't care about their citation impact, or that IFs measure nothing. I'm only saying that IFs don't measure quality and that universities should care more about quality, especially article quality and candidate quality, than journal citation impact. I want OA journals to have high impact and prove it with metrics, and I want them to earn prestige in proportion to their quality. But I want universities to take them seriously because of their quality, not because of their impact metrics or prestige.
I do want to increase submissions to OA journals, but the present argument has the importantly different goal of removing disincentives to submit to OA journals. I want OA journals to earn their submissions with their quality, and if possible with prestige matching their quality. Universities don't have to help at this, provided they don't hurt. As long as universities encourage or require green OA, they can let the process of rooting gold OA take its own time. But they must stop slowing it down, partly in fairness to new journals which are actually good, but mostly in fairness to themselves, who deserve to recognize and benefit from good work.
(9) Quality is made by authors, often in conjunction with editors and referees. Prestige is made by communities.
Peer review and brand—or if you like, quality and prestige—are the two most valuable attributes of published articles. Publishers contribute essentially to each, of course, but they contribute less to these than to other kinds of journal value such as copy editing, mark up, and marketing. But even if we consider peer review and brand to be “added value” (if only to let us use conventional idioms), peer review and brand are the most valuable forms of added value. They do more than the other forms to attract author submissions and trigger the prestige-quality feedback loop.
Peer review and brand are quite different from each other, however. Peer review is the kind of thing that can be duplicated at the same level of quality somewhere else, for example, at a new startup with no reputation. It's not easy, but it's possible. Brand or prestige is not that kind of thing at all.
Publishers create the conditions for prestige the way farmers create the conditions for a harvest. But weather, not to mention time and chance, happeneth to all. Publishers can't directly add prestige. If they could add it to a new journal, they would. If they could add more of it to an existing journal, they would. If they could create prestige as straightforwardly as they organize peer review, then there would be as many high-prestige journals as high-quality journals—or in fact, more. But prestige is added by the community of users. After publishers do their part, the rest is added by the recognition of authors, readers, libraries, and promotion and tenure committees. Their recognition responds to a journal's antecedent value, of course, but in turn it creates subsequent value, for example, by boosting the incentive for authors to submit their work. The journal and its external stakeholders are partners in adding this kind of value. Without the contribution of the community, good journals, like good people, would be admirable but not admired.
The respect shown by authors, readers, librarians, and promotion and tenure committees can be a rational response to a journal's quality. But it can also be uninformed, reflect the dearth of high-quality alternatives in the same field, reflect past quality rather than present quality, or be based on quality surrogates like impact factors rather than quality itself. It may take some kinds of quality into account (e.g., local usage, name recognition, circulation) and not others (e.g., originality, importance, reliability), and may disregard the ways in which a journal subtracts value (e.g., password protection, locked PDFs, truncating good articles solely for length, freezing processable data into unprocessable images, and turning gifts into commodities which may not be further shared). But as a journal grows in prestige, for whatever reason, it attracts more submissions, which gives it the ability to pick off the best pieces and improve its quality, creating the feedback loop which enhances both its quality and its prestige.
Prestige is not an illusion, even if it is more shadow than substance. Prestige is not always deceptive, even if it is sometimes deceptive; we know this because prestige feeds quality. Even if some degree of it is unearned, it will work to earn its keep. Nor is prestige irrational, even if it isn't always based on evidence of quality. We needn't draw any of those disparaging conclusions in order to point out the variables that belong to the community rather than to the journal.
In short, quality has a couple of parents and prestige takes a village. This matters for several reasons. Prestige is a more elusive goal than quality and cannot directly be engineered even by a determined publisher. If it weren't for prestige, or if the only forms of “added value” available to a journal were peer review, copy editing, mark-up, marketing, and so on, then the greatest value of existing publications could be duplicated overnight by new startups (OA or TA). But prestige changes the picture, explains why this kind of value duplication is so rare, and helps explain why OA journals, as a class of newcomers, have so much trouble gaining traction against TA journals, as a class of incumbents. (The rest of the explanation is that the money to support OA journals is still tied up in TA journal subscriptions.)
At best, because prestige takes a village, it will take time for OA journals to earn prestige in proportion to their quality. At worst, because prestige is a zero-sum game, many OA journals will never earn prestige in proportion to their quality. In both cases they face this barrier because they are new, not because they are OA. An important consequence is that we must complement slow-moving gold OA strategies with fast-moving green OA strategies.
The corresponding good news for TA publishers is that the existing high-prestige journals, which are mostly TA, are likely to be entrenched for a long time. The only bad news for TA publishers on this front is that they don't deserve all the credit for the prestige of their prestigious titles. They must share that credit with the research community.
It's tempting to conclude that the community which creates prestige for the existing prestigious journals could redirect it toward all high-quality journals, OA or TA. But that presupposes that we could agree on quality, that generating prestige is not slow and difficult, and that quality is not a zero-sum game.
(10) Despite its value, prestige may only give TA journals limited protection against the rise of green OA.
If prestige is an important value beyond peer review, could it help high-prestige journals survive the threat of postprint archiving (green OA to peer-reviewed manuscripts)? It could help. I've argued before that high-prestige journals will last the longest on library “must-have” lists and therefore will be the last to lose subscriptions attributable to green OA.
Nevertheless, there are two reasons why this isn't the whole story. First, postprint archiving in physics doesn't cause any detectable cancellations, at high-prestige journals or low. This seems to be true even though only about half of deposits in arXiv are postprints, and the rest preprints. Second, when a self-archived postprint includes a citation to the journal where it was published, it benefits from some or all of the journal's prestige, not just from its peer review.
Publishers are usually the first to call for self-archived postprints to include citations to the published editions, and I've supported their calls.
I wonder whether publishers will reconsider their desires here. On the one hand, calling for archived postprints to cite the published editions is another way to spread the brand, and conventional wisdom says to lose no opportunity to spread the brand. But on the other, this method of spreading the brand extends a journal's prestige to OA editions, cutting into the protection that prestige might otherwise have provided against cancellations. (BTW, I didn't call for archived postprints to cite published editions as a snarky way to hurt publishers; I did it to help authors, readers, and publishers, and am now wondering myself about its net effect on publishers.)
But the news for TA publishers isn't all bad. The fear that postprint archiving will undermine subscriptions is itself an oversimplification. For now, at least, the effect of green OA on subscriptions, and the effectiveness of prestige as a shield, are both unknown.
On a related front, it's tempting to conclude that any significant added value beyond peer review could support an alternative TA business model: give away the peer-reviewed literature and sell the value-added enhancements to it. That business model may eventually work for some kinds of added value, and I hope it does. But it won't work for prestige. First, as we've seen, OA postprints can incorporate a journal's prestige, not just its peer review. Second, if journals can't significantly add or increase prestige through their own efforts, then the only publishers who can build a business model on it are those lucky enough to have high-prestige titles already, probably the group least in need of an alternative business model. Third, any new business model along these lines presupposes that libraries would willingly pay for the peculiar value of prestige, when all or most of the journal's quality resides in the peer-reviewed manuscripts. Finally, even if this model worked for some, it would carry a perverse consequence, giving publishers one more incentive to favor prestige over quality when the two diverged.
(11) Prestige may or may not protect TA journal subscriptions from the rise of postprint archiving, but it's already protecting TA publishers from disintermediation.
Let me approach this one indirectly. The press often depicts the debate between OA advocates and TA journal publishers as a standoff or uncompromising conflict. But I see it as a prolonged negotiation. Both sides are currently making concessions that they need not make. For example, publishers needn't permit postprint archiving and scholars needn't work as authors, editors, or referees for publishers. Publishers could stop experimenting with OA, even where it benefits them, and scholars could stop collaborating with publishers, even where it benefits them. Publishers could slam the door on OA, even if that harmed them, and scholars could disintermediate publishers, even if that harmed them.
We're not at the two extreme positions because each group does benefit from working with the other. Because of the self-interest on each side, we needn't call the current positions compromises. But compromises or not, they needn't be as conciliatory as they are now. Or to pick up the stick from the other end, there's still a lot of room to escalate polarization and antagonism. That's why I call the current situation a negotiation.
Publishers need scholars (as authors, editors, and referees) and scholars need publishers (to organize peer review and worldwide distribution). But these needs are not equal and therefore the situations are not symmetrical. Publishers need scholars unconditionally; without scholars to serve as authors, referees, and editors, they couldn't publish scholarly journals at all. But scholars only need publishers as long as publishers are the best current providers of a package of valuable services; it's a marriage of convenience and needn't last. Scholars need peer review and worldwide distribution, but they've always provided peer review themselves, they could find new ways to organize it without publishers, and the internet gives them the tools for low-cost worldwide distribution. If push came to shove, it would be much easier for scholars to do without publishers than for publishers to do without scholars. Or as Richard Smith put it, “I think that you will quickly find that journals (even the arrogant ones) need authors more than authors need them.”
In this prolonged negotiation, scholars benefit from the asymmetry: that publishers need scholars more than scholars need publishers. Publishers benefit from the status quo: that most prestigious journals today are still TA. The prestige of existing prestigious journals, then, is the largest single factor which keeps scholars working with publishers and, therefore, which keeps this a negotiation. If it weren't for the entrenchment of prestigious journals, researchers and their institutions would be cutting TA publishers out of the loop much faster than they are today.
(12) When OA journals approach TA journals in prestige, TA journals will lose their only remaining advantage. But this is not just a matter of time.
OA journals reach a larger audience than even the most popular TA journals. OA articles are cited 40–250% more often than TA articles, at least after the first year. Peer review at good OA journals can be as rigorous as peer review at good TA journals, using the same standards, procedures, and even the same people. The only advantage of TA journals over OA journals is prestige, a side-effect of incumbency. Prestige may be more shadow than substance, but it matters a great deal here and explains the undeniable TA journal advantage in author submissions.
However, if OA journals approached TA journals in prestige, TA journals would lose their only advantage in attracting author submissions.
In an article from March 2005, I made this argument:
[S]ome OA journals are already prestigious and others are growing in prestige. An OA journal has no intrinsic prestige handicap just because it is OA—or if it does (or did), this is a prejudice that is rapidly vanishing. However, most OA journals are new. And while new journals can be excellent from birth, it takes time for a journal's prestige to catch up with its quality. Now here's the key: it's only a matter of time before the prestige of excellent OA journals does catch up with their quality. At the same time, as OA spreads, it will be easier to recruit eminent scholars to serve on OA journal editorial boards. In addition, we'll see more and more already-prestigious TA journals convert to OA, taking their reputations with them. These are three reasons to think that OA journals will continue to rise in prestige as time passes.
For authors, the only reason to submit work to a TA journal is its prestige. In every other way, TA journals are inferior to OA journals because they limit an author's audience and impact. OA journals will start to draw submissions away from top TA journals as soon as they approach them in prestige. And by the time they equal them in prestige, the best TA journals will have lost their one remaining competitive advantage. As authors lose their incentive to submit work, subscribers will lose their incentive to subscribe. This suggests that coexistence [of OA and TA] will be temporary.
I still accept the main thread of this argument. But I have to update it with two retractions. First, I was wrong to say that it's just a matter of time. Or at least OA journals are not marching steadily toward greater prestige at the same pace at which they are marching toward greater quality. The reason for this shows up in my second retraction: I was wrong to say that there is no “intrinsic prestige handicap” for OA journals. Or at least there is such a handicap for new journals. The handicap emerges from a cluster of facts: that prestige is a zero-sum game, that most prestigious journals today are TA, and that most OA journals are new. Quality feeds prestige, which gives hope to all high-quality OA journals. But prestige feeds quality, which gives an inherent advantage to those with a head start.
Because journals publish different articles, they don't compete for readers or subscribers. If you need to read the articles in a given journal, then you have reason to consult it, even if it's expensive, and even if there are free journals in the same field. But journals in the same field do compete for authors. That is why the superior prestige of TA journals today gives them an edge in the competition for authors. It's also why, when OA journals have comparable prestige, even the best TA journals will lose their competitive edge and start to suffer from all the competitive disadvantages of being TA.
If someone argued that financial stability is another advantage of TA journals over OA journals, I wouldn't disagree. The problem is not that the claim is untrue, but that it doesn't bear on the balance sheet of intrinsic strengths and weaknesses that I'm trying to sketch here. It's true that most TA journals are on more solid financial footing than most OA journals today. But that's a fact about the present dominance of TA journals, or the present allocation of funds, not a fact about the business models. If the money now spent on TA journals were redirected toward OA journals, the financial footing of OA journals would be at least as strong as that of TA journals today. In 1980, the typewriter industry was on a more solid financial footing than the personal computer industry, but that said nothing about the superiority of typewriters or their business models.
If OA journals did approach TA journals in prestige, and start to take their submissions, they would also start to take their funding or to accelerate the redirection of funds from TA to OA journals. The institutions that pay for journal subscriptions aren't trying to support the TA business model; they're trying to support research. They won't follow the business model; they'll follow the authors.
Prestige is the flywheel preserving the present system long into the era when it might have been superseded by a superior alternative. Or viewed from the other side, it's the flywheel delaying progress.
Conclusions and recommendations
Quality and prestige overlap significantly. Because quality feeds prestige and prestige feeds quality, this is no accident. But sometimes they diverge, for at least three reasons: because some journals are new and prestige takes time to cultivate, because prestige is a zero-sum game and quality is not, and because prestige can be based on inaccurate or outdated judgments of quality. It's always convenient, and usually irresistible, to use prestige as a surrogate for quality. When quality and prestige overlap, that's entirely legitimate. But when they diverge, favoring prestige harms university hiring practices, research funding practices, and the growth of every kind of science and scholarship represented by new journals (which always lack prestige). Universities have a responsibility to notice when prestige and quality diverge, resist the almost irresistible temptation to favor prestige in those cases, do their best to recognize and reward quality, and give faculty an incentive to put quality first as well.
When we stop discriminating against new journals, we can recognize more excellent work, not just a subset, and stop ruling out first-rate work in our attempt to rule out second-rate work. Even opponents of OA should see that some new journals are high in quality, and that some new journals explore important new topics (genomics, climate change) and methods (stem cells, nanotechnology), not just new business models and licensing terms. Policies that deter faculty from submitting to new journals as such, regardless of their quality, put an artificial brake on science and scholarship themselves. Don't make this change for OA; make it for quality and research.
But make other changes for OA. Once we remove the disincentives to submit to high-quality OA journals (by removing disincentives to submit to high-quality new journals), we can add incentives to submit to journals that are at least green (permit no-delay no-fee postprint archiving). We can supplement slow-moving gold OA strategies with fast-moving green OA strategies. We can do this as individual researchers: by self-archiving whenever we publish in TA journals. We can do this as universities: by requiring OA archiving for the research output of the institution, and (in P&T committees) by requiring the articles eligible for review to be on deposit in the institutional repository. Likewise, governments and funding agencies can put a green OA condition on research grants. Finally, we can all help publishing scholars understand that publishing in a TA journal is compatible with depositing in an OA repository. Even when authors choose TA journals for their prestige, there's rarely a trade-off between prestige and OA.
University promotion and tenure committees should focus less on journal prestige and journal impact than on article quality and candidate quality. I know that's easier said than done. We'll never have quality metrics that are as easy to apply as our current prestige and impact metrics. But we can stop putting easy judgments of prestige or impact ahead of difficult judgments of quality, and we can find help in metrics which oversimplify less than the ones we tend to use now.
When prestige and quality diverge, journals, universities, and authors all tend to favor prestige. It's not hard to see why. When prestige and quality diverge, prestige continues to offer undiminished rewards and create undiminished incentives. Quality is a weaker incentive when it is not accompanied by prestige. Journals have their own reasons for favoring prestige over quality: because of the zero-sum problem, prestige boosts quality more than quality boosts prestige. But authors favor prestige mostly because their universities lead them to, and universities tend to favor prestige because it's easier than favoring quality. If universities could take on the difficult job of assessing quality, they'd change incentives for authors, which would have at least some effect on journals.
Prestige is a real incentive, for journals, universities, and authors. We shouldn't expect that any of these players will nobly rise above prestige. But neither should we underestimate the attraction of prestige or its superior attraction when prestige and quality diverge. Nor should we underestimate either its non-accidental relationship with quality or the non-cynical reasons for thinking it can diverge from quality. Nor, finally, should we underestimate either side in a delicate balance of opposites: our own role in creating prestige and the difficulty of creating prestige where it doesn't already exist.
Prestige is no obstacle to green OA. But green OA suffers when authors make the mistaken assumption that publication in a prestigious TA journal is incompatible with OA. Prestige is a greater obstacle to gold OA, but only because gold OA journals are new, not because they are OA.
Two developments would change everything: (1) roughly equal prestige for OA and TA journals of roughly equal quality, regardless of age, and (2) high-volume green OA across the disciplines. (Funder and university OA mandates are terribly important, but they are merely means to the second of these.) The two developments are compatible, and we can work for both at once. We can make rapid progress on the second as soon as we have the will. But we can't make rapid progress on the first, even with the will, and my main purpose in this article has been to show why. We can describe the impediment from many angles: the benign circle entrenching the high-prestige TA journals, the vicious circle excluding newer OA journals, the zero-sum game of prestige, the slow-changing community attitudes that create prestige, the slow-changing allocation of funds paying for peer-reviewed research articles, and the stubborn fact of age. This impediment doesn't prevent OA journals from becoming first-rate, or even from growing in prestige, but it slows progress, like the slope of a hill, and can deprive OA journals of the feedback effects which boost submissions, quality, and prestige.
The second development is attainable, as advertised. But the first is equivalent to the state in which quality and prestige never diverge, which shows that it's an asymptote. We can increase the prestige of some OA journals, and sometimes even bring their prestige into alignment with their quality, and the same is true of publisher efforts on behalf of new TA journals. But we'll never prevent quality and prestige from diverging.
In my mind, these are reasons to work for gold OA and green OA simultaneously: gold OA, so that we don't further delay the benefits of hard-won, slow-growing incremental progress, and green OA, so that we don't miss precious, present opportunities for accelerating progress.