Altmetrics and the reform of the promotion & tenure system

For the past few weeks, I’ve been working with a colleague at Altmetric to develop a guide for using altmetrics in one’s promotion and tenure dossier. (Keep an eye out for the resulting blog post and handout on Altmetric.com–I think they’re going to be good!)

Altmetrics and P&T is a topic that’s come up a lot recently, and predictably the responses are usually one of the following:

  1. Do you seriously want to give people tenure based on their number of Twitter followers?!!?! ::rageface::
  2. Hmm, that’s a pretty interesting idea! If applied correctly (i.e. in tandem with expert peer review and traditional metrics like citation counts, etc), I could see how altmetrics could improve the evaluation process for P&T.

You can probably guess how I lean.

With that in mind, I wanted to think aloud about an editorial I recently read in Inside Higher Ed (a bit late to the game–the essay was written in 2014). It’s a great summary of many of the issues that plague P&T here in the States, and in particular the bits about “legitimacy markers” make a strong argument in favor of recognizing altmetrics in P&T evaluation and preparation guidelines.

Below, I’ve excerpted the parts I want to respond to (and the bits I want to emphasize), but please visit Inside Higher Ed and read the piece in its entirety; it’s worth your time.

The assumption that we know a scholar’s work is excellent if it has been recognized by a very narrow set of legitimacy markers adds bias to the process and works against recognition of newer forms of scholarship.

[…]

Typically candidates for tenure and promotion submit a personal narrative describing their research, a description of the circulation, acceptance rate and impact factors of the journals or press where they published, a count and list of their citations, and material on external grants. This model of demonstration of impact favors certain disciplines over others, disciplinary as opposed to interdisciplinary work, and scholarship whose main purpose is to add to academic knowledge. [Emphasis mine.]

In my view, the problem is not that using citation counts and journal impact factors is “a” way to document the quantity and quality of one’s scholarship. The problem is that it has been normalized as the only way. All other efforts to document scholarship and contributions — whether they be for interdisciplinary work, work using critical race theory or feminist theory, qualitative analysis, digital media or policy analysis — are then suspect, marginalized, and less than.

Using the prestige of academic book presses, citation counts and federal research awards to judge the quality of scholarship whose purpose is to directly engage with communities and public problems misses the point. Interdisciplinary and engaged work on health equity should be measured by its ability to affect how doctors act and think. [One might argue that altmetrics like citations in public policy documents and clinical care guidelines are a good proxy for this.] Research on affirmative action in college admissions should begin to shape admissions policies. [Perhaps such evidence could be sourced from press releases and mainstream media coverage of said changes in admissions policies.] One may find key theoretical and research pieces in these areas published in top tier journals and cited in the Web of Science, but they should also find them in policy reports cited at NIH [again, citations in policy docs useful here], or used by a local hospital board to reform doctor training [mining training handbooks and relevant websites could help locate such evidence]. We should not be afraid to look for impact of scholarship there, or give that evidence credibility.

Work that is addressing contemporary social problems deserves to be evaluated by criteria better suited to its purposes and not relegated to the back seat behind basic or traditional scholarship.

Altmetrics technologies aren’t yet advanced enough to do most of the things I’ve suggested above (in particular, to mine news coverage or the larger Web for mentions of the effects of research, rather than links to research articles themselves). But the field is very young, and I expect we’ll get there soon enough. And in the meantime, we’ve got some pretty decent proxies for true impact already in the main altmetrics services (e.g. policy citations in Altmetric Explorer, clinical citations in PlumX, dependency PageRank for useful software projects in Depsy/Impactstory).
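For the technically inclined: if you want to pull this kind of evidence yourself, Altmetric’s free, rate-limited public API is a good place to start. Here’s a minimal Python sketch that looks up one article by DOI; note that the response field names I use (like cited_by_policies_count) are my best guesses from memory, so double-check them against the current API docs before building anything on top of this.

```python
# Minimal sketch: look up the online attention for one article via
# Altmetric's free, rate-limited public API (no key needed for basic
# DOI lookups). NOTE: the response field names used below (e.g.
# cited_by_policies_count) are assumptions -- verify them against the
# current Altmetric API documentation.
import requests

def get_attention(doi):
    """Return Altmetric's data for a DOI, or None if it's untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        return None  # Altmetric has tracked no attention for this DOI
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = get_attention("10.1038/nature12373")  # an example DOI
    if data:
        # Hypothetical field names for policy and mainstream-media mentions:
        print("Policy mentions:", data.get("cited_by_policies_count", 0))
        print("News mentions:", data.get("cited_by_msm_count", 0))
    else:
        print("No attention tracked for this DOI.")
```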

In the shorter term, we need academics to advocate for the inclusion of altmetrics in promotion & tenure evaluation and preparation guidelines.

Most researchers don’t know that this data is available, so they tend not to use it in preparing their dossiers. Fair enough.

What concerns me are the researchers who are aware of altmetrics, but who are hesitant to include it in their dossiers for fear that their colleagues a) won’t know what to do with the data, or b) won’t take them seriously if they include it. After all, there’s a lot of misinformation out there about what altmetrics are meant to do, and if you’ve got a reviewer who’s misinformed or who has a bone to pick re: altmetrics, that could potentially affect your career.

Then there are the tenure committees, often made up of reviewers from all disciplines and at all (post-tenure) stages of their careers. If they’re presented with altmetrics as evidence in a P&T dossier but a) they’re biased against altmetrics, and/or b) their university’s review guidelines don’t confirm that altmetrics (in the service of providing evidence for specific claims to impact) are a respectable form of evidence for one’s dossier, then the applicant is met with confusion or skepticism at best, or outright hostility at worst.

(Before you think I’m being melodramatic re: “outright hostility”, you should see some of the anti-altmetrics diatribes out there. As in many other aspects of life, some people aren’t content with the “you do it your way, I’ll do it my way” approach; they’re pissed that you dare to challenge the status quo and will attack anyone who suggests otherwise.)

Anyone reading this post who has a modicum of influence at their university (e.g. tenure status and/or voting rights on your university’s faculty council) should go and petition their vice provost for faculty affairs to update the university-wide P&T review and preparation guidelines to include altmetrics. Or, at the very least, focus on changing departmental/college P&T guidelines.

Once you’ve done so, we’re that much closer to reforming the P&T process to respect the good work that’s being done by all academics, not just those who meet a very traditional set of criteria.

“The Use of Altmetrics in Promotion and Tenure” published in Educause Review

An article I co-authored with Cassidy Sugimoto (Indiana University) and Sierra Williams (LSE Impact Blog) was recently published in the Educause Review.

From the intro: “Promotion and tenure decisions in the United States often rely on various scientometric indicators (e.g., citation counts and journal impact factors) as a proxy for research quality and impact. Now a new class of metrics — altmetrics — can help faculty provide impact evidence that citation-based metrics might miss: for example, the influence of research on public policy or culture, the introduction of lifesaving health interventions, and contributions to innovation and commercialization. But to do that, college and university faculty and administrators alike must take more nuanced, responsible, and informed approaches to using metrics for promotion and tenure decisions.”

Read the full article on the Educause Review website.

Reddit AMA – May 10th!

Cross-posted from the Digital Science blog on 25th April 2016

Join us for a Reddit Ask Me Anything with Stacy Konkiel (@skonkiel), Outreach & Engagement Manager at Altmetric, at 6pm BST / 1pm EDT on the 10th May.

The Reddit Ask Me Anything forum is a great way to engage and interact with subject experts in a direct and honest Q&A, asking those burning questions you’ve always wanted their perspective on! Mark Hahnel (founder of Figshare), Euan Adie (founder of Altmetric), and John Hammersley (co-founder of Overleaf) have all participated in this popular discussion forum.

Following their lead, on Tuesday 10th May at 6pm UK time / 1pm EDT, Stacy Konkiel, Altmetric’s Outreach & Engagement Manager, will be taking part in an AMA on the AskScience subreddit.

Stacy plans to talk about whether the metrics and indicators we like to rely upon in science (impact factor, altmetrics, citation counts, etc) to understand “broader impact” and “intellectual merit” actually measure what we purport they measure.

She is not sure they do! Instead, she thinks that right now we’re just using rough proxies to understand influence and attention, and that we’re in danger of abusing altmetrics, the metrics that are supposed to save us all, just as science has done with the journal impact factor.

Stacy will talk about improving measures of research impact, but is also open to taking other relevant questions.

If you wish to participate in the Ask Me Anything, you will need to register with Reddit. There will also be some live tweeting from @altmetric and @digitalsci, and questions on the #AskStacyAltmetric hashtag, so keep your eyes peeled!

Results of the #Force2016 Innovation Challenge: we won!

I’m pleased to report that the Metrics Toolkit (pitched by me, Heather Coates, and Robin Champieux) has won the Force 2016 PitchIt Innovation Challenge, alongside the team behind Radian (a knowledge portal for data management librarians)!

I’m hugely proud and very excited about bringing this idea to life. In talking with researchers and librarians worldwide over the past two years, the single biggest request I tend to get is for an easy way to understand what metrics really mean (or, more importantly, what they don’t mean). This toolkit will be that resource.

We’ve already started to get some promising feedback about our plans, including some nice tweets from Erin McKiernan (one of my favorite open scientists) and Sara Mannheimer.

To learn more about our vision, visit the Jisc Elevator site, where we’ve submitted our pitch and an accompanying video.

Many thanks to Heather and Robin, who were the driving force behind developing such a compelling pitch deck! And thank you also to the Force11 community–we look forward to sharing our results with you soon!

What does a culturally-relevant #scholcomm practice look like?

I am currently at the Force 2016 conference in Portland, OR, where I presented today at a workshop for the Force Fellows on scholarly communication and crafting one’s online identity. As expected, the “teachers” at this workshop learned as much from the attendees as the attendees did from us, particularly with respect to culturally-relevant, informal scholarly communication (tweeting, blogging, etc).

Paul Groth followed my talk with an excellent one on best practices for writing online (the full text of which is forthcoming; watch this space), during which an important conversation started.

One participant described how, at her institution (and among Middle Eastern librarians more generally), her colleagues are too shy even to comment upon a blog post she had written. Putting oneself “out there” in the ways Paul and I recommended (blogging, tweeting, commenting upon others’ blogs, and so on) would simply not work in her context. Though she believed in our message, she was afraid it would be a very hard sell to her colleagues.

Likewise, an attendee from Africa described how sharing one’s personal opinion on research online–even with the oft-seen Twitter/blog disclaimer, “The views expressed here do not reflect those of my employer”–was a non-starter. Among African researchers and university administrators, there is no such thing as a personal-professional divide; whatever you do and say online related to research will always reflect upon the employer.

Moreover, lots of the recommendations I was making with regard to Twitter come from my perspective as an American. I believe it is the single most valuable informal networking tool for scholars, and so I recommended it highly in this workshop. But what about more culturally-relevant, local social networks like Sina Weibo or VK? Do the techniques I describe for engaging on Twitter translate (pardon the pun) into other social networks? I honestly have no idea.

Today’s workshop seeded some important conversations about diversity in informal scholarly communication. Many of us tend to take for granted that it is good to blog, tweet, etc, but for some, that’s simply not possible.

I can’t be the first person to bring up this topic–if you know of research or commentary in this area, please do leave a comment with some links. (Most “culture”-oriented scholcomm readings I’ve found have to do only with disciplinary culture, not global cultural differences.)

I’d also like to hear from the experts at Force 2016. If you’re working with researchers outside of North America and Europe, how are expectations around informal online scholarly communication different from popular “best practices”? What are some culturally-relevant ways that you use to share and discuss research online?

Hej Sverige! I’m coming for you!

From September 21st – 28th, I’ll be visiting a number of universities across Sweden (even making a short stopover in Denmark) to discuss altmetrics with researchers and librarians, including a presentation at the ChALS 2015 conference in Gothenburg on September 23rd.

I could not be more excited for this visit–I’ve had a fondness for Sweden ever since seeing the film Show Me Love (or “Fucking Åmål” in Swedish) as a young teen (a fact that I’m sure will make any Swedes reading this post giggle). In learning more about the country over time, I’ve developed an admiration for its progressive politics, including its commitment to gender and LGBT equality and its singular role in caring for the world’s refugees. And in recent years, I’ve been very impressed by the important role Swedish academic librarians are playing in open access and altmetrics advocacy, as well.

Unless otherwise noted below, during my visits I’ll be presenting a new hands-on workshop titled, “Is your research making a difference?” to both researchers and librarians at each university. This workshop includes an overview of altmetrics, practical examples of librarians and researchers’ use of altmetrics, and a chance to find altmetrics data using Altmetric for Institutions, the Altmetric bookmarklet, and Impactstory.

Here’s my itinerary:

  • Monday, 9/21 – Technical University of Denmark (Lyngby/Copenhagen)
  • Tuesday, 9/22 – Göteborgs Universitet (Gothenburg)
  • Wednesday, 9/23 – ChALS 2015 Conference (this year’s theme is “Make your mark!”) – “What we talk about when we talk about impact” (Gothenburg)
  • Thursday, 9/24 – Chalmers University of Technology (Gothenburg)
  • Friday, 9/25 – Södertörn University – “Altmetrics: What all librarians need to know” and Swedish University of Agricultural Sciences (Stockholm)
  • Monday, 9/28 – Stockholms Universitet (Stockholm)

If you’ve got any recommendations for the best konditoris (konditorin? konditoror?) to visit while in Gothenburg and Stockholm, I’d love to hear them in the comments below or on Twitter! I’ve recently started taking an afternoon fikapaus (such a wonderful custom!) and want to get it right while I’m in the motherland. And I’m of course always open to beer recommendations–which Swedish craft breweries do I just have to try?

On a more serious note, I’m deeply indebted to Urban Andersson, Marie Hogander, and the ChALS 2015 organizing committee for their invitation and support in getting me to Gothenburg to present on the 23rd. They’ve made this entire trip possible.

The case against (only) using metrics for collection management

I’m currently working with Sarah Sutton at Emporia State University on launching a survey to get a sense of how librarians use altmetrics, including for collection development purposes. (Is it useful to know that a rarely circulated book has been cited in a policy document, so you don’t deaccession it? Can monitoring online activity for all scholarship–even articles that a library doesn’t have subscription access to–help librarians make much quicker purchasing decisions for articles and journals that patrons might request? And so on.)
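To make that first idea concrete, here’s a hedged sketch of what such a check might look like in Python. It assumes that Altmetric’s public v1 API supports ISBN lookups and that policy mentions appear in a cited_by_policies_count field; both are assumptions to verify against the API documentation before building a real weeding workflow on them.

```python
# Sketch: screen low-circulation books for policy attention before weeding.
# ASSUMPTIONS: that Altmetric's public v1 API supports ISBN lookups, and
# that policy-document mentions appear as cited_by_policies_count -- check
# both against the current API docs before relying on this.
import requests

ALTMETRIC_ISBN_URL = "https://api.altmetric.com/v1/isbn/{isbn}"

def keep_despite_low_circulation(isbn):
    """Return True if a book shows policy attention that circulation stats miss."""
    resp = requests.get(ALTMETRIC_ISBN_URL.format(isbn=isbn), timeout=10)
    if resp.status_code == 404:
        return False  # no tracked attention; weed per your usual criteria
    resp.raise_for_status()
    # Hypothetical field name for policy-document citations:
    return resp.json().get("cited_by_policies_count", 0) > 0

# Usage: run the check over a list of deaccessioning candidates
candidates = ["9780262517638"]  # example ISBN
flagged = [isbn for isbn in candidates if keep_despite_low_circulation(isbn)]
print("Review before deaccessioning:", flagged)
```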

So it was with a lot of interest that I read Chris Bourg’s “Infrastructure and Culture: A job talk” yesterday. In the talk (which, at least in part, rightly landed her the position of Director of MIT Libraries), Bourg describes how institutional cultures are so important to (among other things) the failure or success of campus scholarly communication initiatives.

Crucially, she talks about the unintended consequences that a culture of quantification can have upon decisions that are made for library collections, especially collections that might not be popular but that might inform important research projects, like this study that uses old, uncirculated volumes to study the evolution of Brazilian Portuguese over time.

I encourage you to read Bourg’s post in its entirety, but wanted to pull out one section in particular that I think is a valuable way to think about assessment and metrics w/r/t library services:

Developing new ways of demonstrating the impact of our services and collections is a way of promoting a culture that values assessment, but also recognizes that the true impact of libraries and librarians is often delayed and too idiosyncratic to show up in most of the standard ROI style assessment tools currently in use.

So while I am a fan of assessment and data-driven decision-making, I think it is critically important that we make sure the data we are using captures the full story of our impact. As a social scientist with experience teaching and consulting on statistics and research methods, I’m committed to making sure that the assessment tools we use in libraries are the right ones, that the data we collect measures what really matters, and that we use methods appropriate to the decisions we want to make.

In this spirit, I ask: what methods are you using to drive collection development at your library? And how might you use metrics (including altmetrics) in a more nuanced way to achieve goals that are in line with your larger library (and institutional) culture?

PS Keep your eyes on your inbox–if you’re a librarian at an R1 institution in the US, I’ll likely be emailing you soon to ask you to participate in our survey on altmetrics and libraries.

Altmetrics and analytics in IRs and digital libraries: where we’re at and where we’re going (upcoming LITA webinar)

On Tuesday at 2 pm, I’ll have the pleasure of speaking with members of the newly-formed LITA Altmetrics & Digital Analytics interest group about the white paper that I recently wrote with Michelle Dalmau (Indiana University) and Dave Scherer (Purdue University). Join us!

Here’s the webinar description:

Altmetrics and analytics in IRs and digital libraries: where we’re at and where we’re going
When: Tuesday, June 23, 2015 @ 2pm Central (3pm Eastern | 1pm Mountain | Noon Pacific)
Where: Online – https://bluejeans.com/164382960 [no registration necessary; just tune in on Tuesday!]

Description:
How are users engaging with your digital collections? What are people saying about your university’s research on the social web? Just a few years ago, the answers to these questions were virtually unknowable, but nowadays a wealth of analytics services can help you learn the answers quickly and easily. And soon, standards will enable cross-collection and cross-institution comparisons that will allow for better benchmarking, reporting, and more.

In this webinar, Stacy Konkiel (Research Metrics Consultant, Altmetric) will report on the results of a recent white paper outlining best practices and important issues in using metrics to understand the use of digital collections and institutional repository content. Robin Chin Roemer (Instructional Design & Outreach Services Librarian, University of Washington Libraries) will share how the National Information Standards Organization (NISO) is shaping the future of analytics services through the development of altmetrics standards.

Join us on Tuesday at 2 pm Central! https://bluejeans.com/164382960

Stacy Konkiel is a Research Metrics Consultant at Altmetric, a data science company that tracks the attention that research receives online. She has researched, published, and presented widely on scholarly communication, research impact, and other issues in academic librarianship.

Robin Chin Roemer is the Instructional Design & Outreach Services Librarian at the University of Washington Libraries in Seattle, Washington. She is the coauthor of the new book Meaningful Metrics: A 21st Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact (ACRL Press).

We’re overdue on altmetrics for the digital humanities

Humanities researchers I’ve talked to usually fall into one of two camps when it comes to altmetrics:

  • “Altmetrics are amazing! I’ve been looking for a better way to understand the use and impacts of my work.” or
  • “This is just another tool that favors the sciences and tries to reduce the complexities of my research into a single formula. It will never work for me.”

As an altmetrics advocate and humanist by training, I unsurprisingly tend to fall into the first camp.

I’ve been doing a lot of reading and thinking lately on how research is evaluated in the humanities, and sadly it seems to be as opaque a process as it is in the sciences. By and large, you’re judged by:

  • the university press you’ve published your book(s) with,
  • reviews that other scholars write about your books, and
  • occasionally, by the citations your journal articles and books receive

Unsurprisingly, this framework tends to favor books and, to a lesser extent, journal articles. Digital scholarship projects (websites, interactive exhibitions, etc), software, and research data aren’t taken into account under this model. Thank goodness that’s starting to change.

In the past few years, scholarly societies and some universities have created guidelines to help traditionalists understand the impacts of digital scholarship projects. Faculty are encouraged to think beyond the monograph when considering what types of work might have a lasting influence on the profession.

But these guidelines tend not to address newer types of quantitative and qualitative data, sourced from the web, that can help reviewers understand the full scope of the impacts your work may have. This data can include newer impact metrics like numbers of website visitors, what other scholars are saying about your work on their research blogs and social media, how many members of the public have reviewed your books on GoodReads and Amazon, and so on.

That’s where my current work comes in.

I’m now in the process of talking with humanities researchers, from department chairs to graduate students, to better understand what types of data might be useful in supplementing their understanding of impacts for digital humanities research.

And I’ve given two talks in the past week: one at the ASIS&T Virtual Symposium on Information Technology in the Arts & Humanities, and one at the Advancing Research Communication & Scholarship meeting.

Both talks were intended to get conversations started about altmetrics and the humanities–what might work, what would never work, what data sources could potentially be tracked that aren’t yet included in services like Altmetric and PlumX.

I’ll be doing more research, writing, thinking, and speaking on the topic in the coming months; stay tuned for more information.

In the meantime, I’d love to get your feedback in the comments below.