UC Denver librarian Jeffrey Beall recently published an opinion article, “Spurious alternative impact factors: The scale of the problem from an academic perspective,” (paywall) that sadly contains the same old misinformation about research impact metrics that he’s been corrected on before. This is a short post to gently remind Beall and his coauthors of what altmetrics and citation-based metrics do and do not stand for/measure/etc, correct some inaccuracies found in the article, and highlight two areas where we agree.
Let’s start with what they got wrong.
Despite being a matter of different criticisms [sic], [the h-index] is one of the best available ways to estimate the quality of a group of research publications.
False. The h-index–like other citation-based measures of impact–can only measure the scholarly attention that an article has received. Citations do not measure quality–never have and never will (in their current incarnation, anyway). As I’ve written before on the Impactstory blog:
“It’s true that some studies have shown that citations correlate with other measures of scientific quality like awards, grant funding, and peer evaluation. We’re not saying they’re not useful. But citations do not directly measure quality, which is something that some scientists seem to forget.”
Altmetrics, which is aimed at emphasizing article-based measures instead of the traditional journal-based metrics.
This is only partially correct. Altmetrics can, by their very nature, give good insight into article-level measures of impact. But that’s not their primary aim. Altmetrics crucially can tell us a great deal about the uses of non-article research outputs (software, data, etc), and can also give us insight into impact at the author, journal, research group, and university level.
I point this out not to be a pedant, but to correct a common misunderstanding–altmetrics do not (only) equal article-level metrics. They can potentially measure so much more than that.
ImpactStory offers, under a subscription model, access to consolidated profiles of Altmetric data for individual researchers (https://impactstory.org/)
This is also wrong. Impactstory profiles do include some Altmetric data, but only for a small portion of their total data sources (which can easily be found here). Impactstory profile data goes beyond Altmetric data by bringing in metrics from sources like PubMed Central, Dryad, Publons, and more. The sentence above does a disservice to the top-notch comprehensive professional impact profiles that Jason & Heather have spent a lot of time creating. I don’t want that falsehood to spread.
Predatory journals are indexed automatically – together with reputable OA titles – by services such as Google Scholar. Hence their articles could lead to inflated h-indexes of researchers, based on abnormal patterns of self-citations in these journals (h-indexes in other databases, such as Scopus, could be much lower).
I’m not disputing that this happens, or that it is a problem. But I do want to point out that this isn’t unique to indices like Google Scholar. Predatory journals are also sometimes indexed in respected databases like Scopus (though perhaps not as often as on Google Scholar). And predatory journals aren’t the only ones using “abnormal patterns of self-citations” to inflate h-indices and journal impact factors–it’s a well-documented phenomenon in many traditional journals, too.
It is possible that a future consolidation of article level metric will facilitate a better understanding of the impact of scientific publications.
No single number is ever going to accurately summarize “impact”, whether it uses citations or altmetric indicators as its basis. Period.
Single-number indicators are fundamentally flawed because they boil down a complex, multifaceted concept like “What effect is my research having on other researchers in my discipline and also on policy makers, practitioners, members of the public, etc.?” into a number. Much of the real value in altmetrics data lies in what it can tell us about who is saying what about your research, in many different contexts.
Here’s where we agree.
I do not argue with the central premise of the article–that there are a number of companies peddling dubious “impact factor” lookalikes that are meant to trick authors into publishing with predatory journals. This is a problem, and I applaud Beall and his coauthors for bringing light to it (as I’ve commended Beall in the past for his work exposing predatory publishers overall).
I also mostly agree with this statement:
Instead, we think that these new measures should be transparent and professional, and should respect practices and standards of the scholarly publishing industry.
Should altmetrics data be transparent? Without a doubt.
But what does “professional” mean in this context? That we only source altmetrics data from platforms that are perceived to be “scholarly” (like Mendeley, Publons, etc)? That’d mean the conversations about and sharing of scholarship that take place on platforms like Twitter (one of the most popular places for researchers to discuss & share scholarship) might not be included.
And should new measures respect practices and standards of the scholarly publishing industry, or should they respect the practices and standards of researchers themselves?
Jeffrey Beall is a smart guy who’s done a lot for researchers with Beall’s List, so it’s disappointing to see him continue to miss the mark on altmetrics (and overstate the usefulness of citation-based metrics). I hope this post helps keep it from happening again.