
Altmetrics and the reform of the promotion & tenure system

For the past few weeks, I’ve been working with a colleague at Altmetric to develop a guide for using altmetrics in one’s promotion and tenure dossier. (Keep an eye out for the resulting blog post and handout on Altmetric.com–I think they’re going to be good!)

Altmetrics and P&T is a topic that’s come up a lot recently, and predictably, the responses usually fall into one of two camps:

  1. Do you seriously want to give people tenure based on their number of Twitter followers?!!?! ::rageface::
  2. Hmm, that’s a pretty interesting idea! If applied correctly (i.e. in tandem with expert peer review and traditional metrics like citation counts, etc), I could see how altmetrics could improve the evaluation process for P&T.

You can probably guess how I lean.

With that in mind, I wanted to think aloud about an editorial I recently read in Inside Higher Ed (a bit late to the game–the essay was written in 2014). It’s a great summary of many of the issues that plague P&T here in the States, and in particular the bits about “legitimacy markers” make a great argument in favor of recognizing altmetrics in P&T evaluation and preparation guidelines.

Below, I’ve excerpted the parts I want to respond to (and the bits I want to emphasize), but please visit Inside Higher Ed and read the piece in its entirety; it’s worth your time.

The assumption that we know a scholar’s work is excellent if it has been recognized by a very narrow set of legitimacy markers adds bias to the process and works against recognition of newer forms of scholarship.

[…]

Typically candidates for tenure and promotion submit a personal narrative describing their research, a description of the circulation, acceptance rate and impact factors of the journals or press where they published, a count and list of their citations, and material on external grants.  This model of demonstration of impact favors certain disciplines over others, disciplinary as opposed to interdisciplinary work, and scholarship whose main purpose is to add to academic knowledge. [Emphasis mine.]

In my view, the problem is not that using citation counts and journal impact factors is “a” way to document the quantity and quality of one’s scholarship. The problem is that it has been normalized as the only way. All other efforts to document scholarship and contributions — whether they be for interdisciplinary work, work using critical race theory or feminist theory, qualitative analysis, digital media or policy analysis — are then suspect, marginalized, and less than.

Using the prestige of academic book presses, citation counts and federal research awards to judge the quality of scholarship whose purpose is to directly engage with communities and public problems misses the point. Interdisciplinary and engaged work on health equity should be measured by its ability to affect how doctors act and think. [One might argue that altmetrics like citations in public policy documents and clinical care guidelines are a good proxy for this.] Research on affirmative action in college admissions should begin to shape admissions policies. [Perhaps such evidence could be sourced from press releases and mainstream media coverage of said changes in admissions policies.] One may find key theoretical and research pieces in these areas published in top tier journals and cited in the Web of Science, but they should also find them in policy reports cited at NIH [again, citations in policy docs useful here], or used by a local hospital board to reform doctor training [mining training handbooks and relevant websites could help locate such evidence]. We should not be afraid to look for impact of scholarship there, or give that evidence credibility.

Work that is addressing contemporary social problems deserves to be evaluated by criteria better suited to its purposes and not relegated to the back seat behind basic or traditional scholarship.

Altmetrics technologies aren’t yet advanced enough to do most of the things I’ve suggested above (in particular, to mine news coverage or the larger Web for mentions of the effects of research, rather than links to research articles themselves). But the field is very young, and I expect we’ll get there soon enough. And in the meantime, we’ve got some pretty decent proxies for true impact already in the main altmetrics services (i.e. policy citations in Altmetric Explorer, clinical citations in PlumX, dependency PageRank for useful software projects in Depsy/Impactstory).
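To make this concrete: some of that proxy data is already one HTTP call away. Below is a minimal Python sketch that pulls headline mention counts for a single paper from Altmetric’s free v1 API. The /v1/doi/{doi} endpoint is documented; the exact cited_by_* field names are my assumptions and may be absent for any given paper, so the code reads them defensively.

```python
# Minimal sketch: fetch headline mention counts for one DOI from the free
# Altmetric API (https://api.altmetric.com/). The /v1/doi/{doi} endpoint is
# documented; the exact "cited_by_*" field names below are assumptions and
# may be missing for a given paper, so we read them defensively with .get().
import requests

def mention_counts(doi: str) -> dict:
    """Return a few headline mention counts for a DOI, or {} if untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        return {}  # Altmetric has no record of this DOI
    resp.raise_for_status()
    data = resp.json()
    return {
        "news_outlets": data.get("cited_by_msm_count", 0),         # mainstream media
        "policy_sources": data.get("cited_by_policies_count", 0),  # assumed field name
        "tweeters": data.get("cited_by_tweeters_count", 0),
        "altmetric_score": data.get("score"),
    }

print(mention_counts("10.1038/nature.2012.9872"))  # example DOI from Altmetric's docs
```

Counts like these still only tell you that attention happened, not what it changed–closing the gap between “mentioned in a policy document” and “reshaped the policy” is exactly the work the field has left to do.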

In the shorter term, we need academics to advocate for the inclusion of altmetrics in promotion & tenure evaluation and preparation guidelines.

Most researchers don’t know that this data is available, so they tend not to use it in preparing their dossiers. Fair enough.

What concerns me are the researchers who are aware of altmetrics but hesitant to include them in their dossiers, for fear that their colleagues a) won’t know what to do with the data, or b) won’t take them seriously. After all, there’s a lot of misinformation out there about what altmetrics are meant to do, and a reviewer who’s misinformed or who has a bone to pick re: altmetrics could potentially affect your career.

Then there are the tenure committees, often made up of reviewers from all disciplines and at all (post-tenure) stages of their careers. If they’re presented with altmetrics as evidence in a P&T dossier but a) they’re biased against altmetrics, and/or b) their university’s review guidelines don’t confirm that altmetrics–in the service of providing evidence for specific claims to impact–are a respectable form of evidence, then the applicant is met with confusion or skepticism at best, or outright hostility at worst.

(Before you think I’m being melodramatic re: “outright hostility”–you should see some of the anti-altmetrics diatribes out there. As in many other aspects of life, some people aren’t content with the “you do it your way, I’ll do it my way” thing–they’re pissed that you dare to challenge the status quo and will attack anyone who suggests otherwise.)

Anyone reading this post who has a modicum of influence at their university (i.e. tenure and/or voting rights on the faculty council) should go petition their vice provost of faculty affairs to update the university-wide P&T review and preparation guidelines to include altmetrics. Or, at the very least, focus on changing departmental or college P&T guidelines.

Once you’ve done so, we’re that much closer to reforming the P&T process to respect the good work that’s being done by all academics, not just those who meet a very traditional set of criteria.

“The Use of Altmetrics in Promotion and Tenure” published in Educause Review

An article I co-authored with Cassidy Sugimoto (Indiana University) and Sierra Williams (LSE Impact Blog) was recently published in the Educause Review.

From the intro: “Promotion and tenure decisions in the United States often rely on various scientometric indicators (e.g., citation counts and journal impact factors) as a proxy for research quality and impact. Now a new class of metrics — altmetrics — can help faculty provide impact evidence that citation-based metrics might miss: for example, the influence of research on public policy or culture, the introduction of lifesaving health interventions, and contributions to innovation and commercialization. But to do that, college and university faculty and administrators alike must take more nuanced, responsible, and informed approaches to using metrics for promotion and tenure decisions.”

Read the full article on the Educause Review website.

Reddit AMA – May 10th!

Cross-posted from the Digital Science blog on 25th April 2016

 


Join us for a Reddit Ask Me Anything with Stacy Konkiel (@skonkiel), Outreach & Engagement Manager at Altmetric, at 6pm UK time / 1pm EDT on 10th May.

The Reddit Ask Me Anything forum is a great way to engage and interact with subject experts in a direct and honest Q&A, asking those burning questions you’ve always wanted their perspective on! Mark Hahnel (founder of Figshare), Euan Adie (founder of Altmetric), and John Hammersley (co-founder of Overleaf) have all participated in this popular discussion forum.

Following their lead, on Tuesday 10th May at 6pm UK time / 1pm EDT, Stacy Konkiel, Altmetric’s Outreach & Engagement Manager, will be taking part in an AMA on the AskScience subreddit.


Stacy plans to talk about whether the metrics and indicators we like to rely upon in science (impact factor, altmetrics, citation counts, etc) to understand “broader impact” and “intellectual merit” actually measure what we purport they measure.

She’s not sure they do! Instead, she thinks that right now we’re just using rough proxies to understand influence and attention, and that we’re in danger of abusing altmetrics–the metrics that are supposed to save us all–just as science has done with the journal impact factor.

Stacy will talk about improving measures of research impact, but is also open to taking other relevant questions.

If you wish to participate in the Ask Me Anything, you will need to register with Reddit. There will also be some live tweeting from @altmetric and @digitalsci, and questions on the #AskStacyAltmetric hashtag, so keep your eyes peeled!

Results of the #Force2016 Innovation Challenge: we won!

I’m pleased to report that, along with the team behind Radian (a knowledge portal for data management librarians), the Metrics Toolkit (pitched by me, Heather Coates, and Robin Champieux) has won the Force 2016 PitchIt Innovation Challenge!

I’m hugely proud and very excited about bringing this idea to life. In my conversations with researchers and librarians worldwide over the past two years, the single biggest request I’ve gotten is for an easy way to understand what metrics really mean (or, more importantly, what they don’t mean). This toolkit will be that resource.

We’ve already started to get some promising feedback about our plans, including nice tweets from Erin McKiernan (one of my favorite open scientists :)) and Sara Mannheimer.

To learn more about our vision, visit the Jisc Elevator site, where we’ve submitted our pitch and an accompanying video.

Many thanks to Heather and Robin, who were the driving force behind developing such a compelling pitch deck! And thank you also to the Force11 community–we look forward to sharing our results with you soon!

What does a culturally-relevant #scholcomm practice look like?

I am currently at the Force 2016 conference in Portland, OR, where I presented today at a workshop for the Force Fellows on scholarly communication and crafting one’s online identity. As expected, the “teachers” at this workshop learned as much from the attendees as the attendees did from us, particularly with respect to culturally-relevant, informal scholarly communication (tweeting, blogging, etc).

Paul Groth gave an excellent talk, following mine, on best practices for writing online (full text forthcoming–watch this space), during which an important conversation started.

One participant described how, at her institution–and among Middle Eastern librarians more generally–her colleagues are too shy even to comment on a blog post she had written. Putting one’s self “out there” in the ways Paul and I recommended (blogging, tweeting, commenting on others’ blogs, and so on) would simply not work in her context. Though she believed in our message, she was afraid it would be a very hard sell to her colleagues.

Likewise, an attendee from Africa described how sharing one’s personal opinion on research online–even with the oft-seen Twitter/blog disclaimer, “The views expressed here do not reflect those of my employer”–was a non-starter. Among African researchers and university administrators, there is no such thing as a personal-professional divide; whatever you do and say online related to research will always reflect upon the employer.

Moreover, lots of the recommendations I was making with regard to Twitter come from my perspective as an American. I believe it is the single most valuable informal networking tool for scholars, and so I recommended it highly in this workshop. But what about more culturally-relevant, local social networks like Sina Weibo or VK? Do the techniques I describe for engaging on Twitter translate (pardon the pun) into other social networks? I honestly have no idea.

Today’s workshop seeded some important conversations about diversity in informal scholarly communication. Many of us tend to take for granted that it is good to blog, tweet, etc, but for some, that’s simply not possible.

I can’t be the first person to bring up this topic–if you know of research or commentary in this area, please do leave a comment with some links. (Most “culture”-oriented scholcomm readings I’ve found have to do only with disciplinary culture, not global cultural differences.)

I’d also like to hear from the experts at Force 2016. If you’re working with researchers outside of North America and Europe, how are expectations around informal online scholarly communication different from popular “best practices”? What are some culturally-relevant ways that you use to share and discuss research online?