Altmetrics and the reform of the promotion & tenure system

For the past few weeks, I’ve been working with a colleague at Altmetric to develop a guide for using altmetrics in one’s promotion and tenure dossier. (Keep an eye out for the resulting blog post and handout on Altmetric.com–I think they’re going to be good!)

Altmetrics and P&T is a topic that’s come up a lot recently, and predictably the responses usually fall into one of two camps:

  1. Do you seriously want to give people tenure based on their number of Twitter followers?!!?! ::rageface::
  2. Hmm, that’s a pretty interesting idea! If applied correctly (i.e. in tandem with expert peer review and traditional metrics like citation counts), I could see how altmetrics could improve the evaluation process for P&T.

You can probably guess how I lean.

With that in mind, I wanted to think aloud about an editorial I recently read in Inside Higher Ed (a bit late to the game–the essay was written in 2014). It’s a solid summary of many of the issues that plague P&T here in the States, and the bits about “legitimacy markers” in particular make a strong argument in favor of recognizing altmetrics in P&T evaluation and preparation guidelines.

Below, I’ve excerpted the parts to which I want to respond [in brackets] (along with the bits I want to emphasize), but please visit Inside Higher Ed and read the piece in its entirety; it’s worth your time.

The assumption that we know a scholar’s work is excellent if it has been recognized by a very narrow set of legitimacy markers adds bias to the process and works against recognition of newer forms of scholarship.

[…]

Typically candidates for tenure and promotion submit a personal narrative describing their research, a description of the circulation, acceptance rate and impact factors of the journals or press where they published, a count and list of their citations, and material on external grants. This model of demonstration of impact favors certain disciplines over others, disciplinary as opposed to interdisciplinary work, and scholarship whose main purpose is to add to academic knowledge. [Emphasis mine.]

In my view, the problem is not that using citation counts and journal impact factors is “a” way to document the quantity and quality of one’s scholarship. The problem is that it has been normalized as the only way. All other efforts to document scholarship and contributions — whether they be for interdisciplinary work, work using critical race theory or feminist theory, qualitative analysis, digital media or policy analysis — are then suspect, marginalized, and less than.

Using the prestige of academic book presses, citation counts and federal research awards to judge the quality of scholarship whose purpose is to directly engage with communities and public problems misses the point. Interdisciplinary and engaged work on health equity should be measured by its ability to affect how doctors act and think. [One might argue that altmetrics like citations in public policy documents and clinical care guidelines are a good proxy for this.] Research on affirmative action in college admissions should begin to shape admissions policies. [Perhaps such evidence could be sourced from press releases and mainstream media coverage of said changes in admissions policies.] One may find key theoretical and research pieces in these areas published in top tier journals and cited in the Web of Science, but they should also find them in policy reports cited at NIH [again, citations in policy docs useful here], or used by a local hospital board to reform doctor training [mining training handbooks and relevant websites could help locate such evidence]. We should not be afraid to look for impact of scholarship there, or give that evidence credibility.

Work that is addressing contemporary social problems deserves to be evaluated by criteria better suited to its purposes and not relegated to the back seat behind basic or traditional scholarship.

Altmetrics technologies aren’t yet advanced enough to do most of the things I’ve suggested above (in particular, to mine news coverage or the larger Web for mentions of the effects of research, rather than links to research articles themselves). But the field is very young, and I expect we’ll get there soon enough. And in the meantime, we’ve got some pretty decent proxies for true impact already in the main altmetrics services (e.g. policy citations in Altmetric Explorer, clinical citations in PlumX, dependency PageRank for useful software projects in Depsy/Impactstory).
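
For the curious: pulling this kind of data yourself is already pretty easy. Here’s a minimal Python sketch that queries the public Altmetric API for a single DOI (the /v1/doi/ endpoint and the sample DOI come from the API docs at api.altmetric.com). The response field names I print, like cited_by_policies_count, are my assumptions about that API’s output, so check them against the current documentation before leaning on these numbers in an actual dossier.

```python
# A minimal sketch: query the public Altmetric API for one paper's
# attention data. The /v1/doi/ endpoint is documented at api.altmetric.com;
# the response field names below are assumptions -- verify against the
# current API docs before citing these numbers anywhere that matters.
import requests

def fetch_altmetrics(doi):
    """Return Altmetric's summary record for a DOI, or None if untracked."""
    resp = requests.get("https://api.altmetric.com/v1/doi/" + doi, timeout=10)
    if resp.status_code == 404:
        return None  # Altmetric isn't tracking this DOI
    resp.raise_for_status()
    return resp.json()

record = fetch_altmetrics("10.1038/nature12373")  # sample DOI from the API docs
if record:
    # .get() keeps the script from crashing if a field name is wrong/missing
    print("Policy citations:", record.get("cited_by_policies_count", 0))
    print("News mentions:   ", record.get("cited_by_msm_count", 0))
    print("Altmetric score: ", record.get("score"))
```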

In the shorter term, we need academics to advocate for the inclusion of altmetrics in promotion & tenure evaluation and preparation guidelines.

Most researchers don’t know that this data is available, so they tend not to use it in preparing their dossiers. Fair enough.

What concerns me are the researchers who are aware of altmetrics but hesitant to include them in their dossiers, for fear that their colleagues a) won’t know what to do with the data, or b) won’t take them seriously for including it. After all, there’s a lot of misinformation out there about what altmetrics are meant to do, and if you’ve got a reviewer who’s misinformed or who has a bone to pick re: altmetrics, that could affect your career.

Then there are the tenure committees, often made up of reviewers from all disciplines and at all (post-tenure) stages of their careers. If they’re presented with altmetrics as evidence in a P&T dossier but a) they’re biased against altmetrics, and/or b) their university’s review guidelines don’t confirm that altmetrics–in the service of providing evidence for specific claims to impact–are a respectable form of evidence, then the tenure applicant is met with confusion or skepticism (at best) or outright hostility (at worst).

(Before you think I’m being melodramatic re: “outright hostility”–you should see some of the anti-altmetrics diatribes out there. As in many other aspects of life, some people aren’t content with the “you do it your way, I’ll do it my way” thing–they’re pissed that you dare to challenge the status quo, and they’ll attack anyone who suggests otherwise.)

Anyone reading this post who’s got a modicum of influence at their university (e.g. you’ve got tenure and/or voting rights on your university’s faculty council) should go petition their vice provost of faculty affairs to update their university-wide P&T review and preparation guidelines to include altmetrics. Or, at the very least, focus on changing departmental/college P&T guidelines.

Once you’ve done so, we’re that much closer to reforming the P&T process to respect the good work that’s being done by all academics, not just those who meet a very traditional set of criteria.

Stacy Konkiel
Professional Data Wrangler 🤠