We’re overdue on altmetrics for the digital humanities

Humanities researchers I’ve talked to usually fall into one of two camps when it comes to altmetrics:

  • “Altmetrics are amazing! I’ve been looking for a better way to understand the use and impacts of my work.” or
  • “This is just another tool that favors the sciences and tries to reduce the complexities of my research into a single formula. It will never work for me.”

As an altmetrics advocate and humanist by training, I unsurprisingly tend to fall into the first camp.

I’ve been doing a lot of reading and thinking lately on how research is evaluated in the humanities, and sadly it seems to be as opaque a process as in the sciences. By and large, you’re judged by:

  • the university press you’ve published your book(s) with,
  • reviews that other scholars write about your books, and
  • occasionally, the citations your journal articles and books receive.

Unsurprisingly, this framework tends to favor books and, to a lesser extent, journal articles. The value of digital scholarship projects (websites, interactive exhibitions, etc.), software, and research data isn’t taken into account under this model. Thank goodness that’s starting to change.

In the past few years, scholarly societies and some universities have created guidelines to help traditionalists understand the impacts of digital scholarship projects. Faculty are encouraged to think beyond the monograph when considering what types of work might have a lasting influence on the profession.

But these guidelines tend not to address newer types of quantitative and qualitative data, sourced from the web, that can help reviewers understand the full scope of the impacts your work may have. This data can include newer impact metrics like numbers of website visitors, what other scholars are saying about your work on their research blogs and social media, how many members of the public have reviewed your books on GoodReads and Amazon, and so on.

That’s where my current work comes in.

I’m now in the process of talking with humanities researchers, from department chairs to graduate students, to better understand what types of data might be useful in supplementing their understanding of impacts for digital humanities research.

And I’ve given two talks in the past week–one at the ASIS&T Virtual Symposium on Information Technology in the Arts & Humanities, and one at the Advancing Research Communication & Scholarship meeting.

Both talks were intended to get conversations started about altmetrics and the humanities–what might work, what would never work, what data sources could potentially be tracked that aren’t yet included in services like Altmetric and PlumX.

I’ll be doing more researching, writing, thinking and speaking on the topic in the coming months–stay tuned for more information.

In the meantime, I’d love to get your feedback in the comments below.

4 thoughts on “We’re overdue on altmetrics for the digital humanities”

  1. It seems to me that two of the challenges in measuring impact of humanities scholarship are: 1) that engagement and impact can happen for a long time after publication so any altmetrics would need to reflect that longitudinal engagement; and 2) that it is difficult to measure the *type* of impact which seems so important in the humanities. A deep engagement or conversation with one scholar’s analysis – that is reflected throughout an entire volume, say – I would guess would be difficult to get at – but very important. Considering the difference for the humanities in this arena is so important – and I’m very glad to see you doing it!

    1. Agreed! I think that most altmetrics services could measure longitudinal engagement now, depending on whether they serve a recent snapshot of the data or data collected over time. Luckily, most are doing the latter and (AFAIK) plan to continue doing so for years to come.

      But getting at deep engagement with scholarship is harder. It’s easy to count things–there are plenty of services that do that–but as you rightly point out, an entire volume’s worth of engagement is arguably worth more than a blog post on a topic, and should be treated as such (perhaps via weighting, etc.). Right now, no tool can automatically distinguish between a simple citation and a book’s worth of engagement; instead, the better services (like Altmetric 🙂 and Impactstory) link out to any and all mentions, so readers can understand for themselves the context of a citation.

      A few months back, all anyone could talk about was sentiment analysis for altmetrics. I won’t be surprised if that’s where the “arms race” in altmetrics tool development is headed next.
