Last week brought word of the San Francisco Declaration on Research Assessment, or DORA.
DORA's signatories comprise a wide range of publishers, transcending the usual lines between subscription and open access publishing. Everyone who supports DORA believes that the impact factor has become a pernicious force in research evaluation, with the power to influence tenure decisions and to affect the trajectory of an entire research career.
To quote the DORA declaration on this point: "A number of themes run through these recommendations:
- the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
- the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
- the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact)."
All in all, good news. Our means of evaluating research are finally catching up with our means of producing it. Technology always outpaces policy change, sometimes by many years, but never forever.
I especially liked this recommendation to institutions: "For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, **such as influence on policy and practice**." The bold is mine, as that clause alone effectively douses the belief that universities should be ivory towers.
Of course, there is room for improvement. To wit: "Outputs other than research articles will grow in importance in assessing research effectiveness in the future, but the peer reviewed research paper will remain a central research output that informs research assessment. Our recommendations therefore focus primarily on practices relating to research articles published in peer reviewed journals but can and should be extended by recognizing additional products, such as datasets, as important research outputs."
I've got nothing against peer review, but the output of that review does not have to be a journal article. Why not a peer reviewed podcast or website? We can't forget that leading journal editors are a key constituency of DORA, and that bias is plain here. I'd like to see the philosophy of DORA extended by analogy to many and sundry forms of scholarship, not just datasets.
Then again, we can't let the perfect be the enemy of the good. That's when paralysis sets in and nothing changes at all. It is exciting to live in a world in which DORA now exists.