In 2018, I wrote a post for The Bibliomagician blog on identifying authors’ genders from name analysis, prompted by a lively discussion on the LIS-Bibliometrics listserv. I’m reposting the blog post here under a CC-BY license.
Recently on the LIS-Bibliometrics listserv, Ruth Harrison (Imperial College London) posed a question on behalf of a patron who was interested in identifying authors’ genders based upon names listed on ~2,000 journal articles–too large a corpus for manual analysis.
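Automated approaches to this problem typically map first names to gender probabilities using a reference dataset. As a minimal sketch, assuming a hypothetical lookup table (real analyses use large name-gender datasets such as census or social-security records, or services built on them), batch classification might look like this:

```python
# Minimal sketch of name-based gender inference for a batch of author names.
# NAME_GENDER is an illustrative placeholder, not a real dataset: it maps a
# lowercased first name to the proportion of bearers recorded as female.

NAME_GENDER = {
    "ruth": 0.99,
    "stacy": 0.95,
    "robin": 0.55,   # ambiguous name
    "james": 0.01,
}

def infer_gender(full_name, threshold=0.9):
    """Classify a name as 'female', 'male', or 'unknown' by first name."""
    first = full_name.split()[0].lower()
    p_female = NAME_GENDER.get(first)
    if p_female is None:
        return "unknown"          # name not in the lookup table
    if p_female >= threshold:
        return "female"
    if p_female <= 1 - threshold:
        return "male"
    return "unknown"              # ambiguous names stay unclassified

authors = ["Ruth Harrison", "James Smith", "Robin Champieux", "Q. Zhao"]
results = {name: infer_gender(name) for name in authors}
```

Note the built-in limitations the listserv discussion raised: initials-only bylines and names absent from (or ambiguous in) the reference data simply come back "unknown", and binary name-based inference cannot capture how authors actually identify.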
The Journal of Librarianship and Scholarly Communication just published “Scholarly Communication Librarians’ Relationship with Research Impact Indicators: An Analysis of a National Survey of Academic Librarians in the United States”.
This is the final publication related to a topic I’ve been working on since 2013 (!), when I first realized that although academic librarians were interested in research metrics, no one had yet studied the reality of how they were using these kinds of indicators in their day-to-day jobs and in support of their own careers.
I’m excited to announce that the HuMetricsHSS research team–which I was a part of at the 2016 TriangleSCI conference–has received the support of the Andrew W. Mellon Foundation to continue our work of encouraging the discovery and use of “humane” research evaluation metrics for the humanities and social sciences.
HSS scholars are increasingly frustrated by the prevalent use of evaluation metrics (borrowed from the sciences) that do not accurately capture the impacts of their work.
I’m super excited to announce that the Innovation in Libraries grant is now accepting applications: http://www.awesomefoundation.org/en/chapters/libraries
A core group of Library Pipeliners has been working hard for months to recruit rank-and-file librarians worldwide, many of whom are funding this grant out of their own pockets (!). Each month through August 2017, our Awesome Foundation chapter will award a $1000 USD grant to prototype library-based innovations (both technical and non-technical in nature) that are inclusive, daring, and diverse.
For the past few weeks, I’ve been working with a colleague at Altmetric to develop a guide for using altmetrics in one’s promotion and tenure dossier. (Keep an eye out for the resulting blog post and handout on Altmetric.com–I think they’re going to be good!)
Altmetrics and P&T is a topic that’s come up a lot recently, and predictably the responses are usually one of the following:
Do you seriously want to give people tenure based on their number of Twitter followers?
An article I co-authored along with Cassidy Sugimoto (Indiana University) and Sierra Williams (LSE Impact Blog) was recently published in the Educause Review.
From the intro: “Promotion and tenure decisions in the United States often rely on various scientometric indicators (e.g., citation counts and journal impact factors) as a proxy for research quality and impact. Now a new class of metrics — altmetrics — can help faculty provide impact evidence that citation-based metrics might miss: for example, the influence of research on public policy or culture, the introduction of lifesaving health interventions, and contributions to innovation and commercialization.”
_[Cross-posted from the Digital Science blog on 25th April 2016]_ Join us for a Reddit Ask Me Anything with Stacy Konkiel (@skonkiel), Outreach & Engagement Manager at Altmetric, at 6pm GMT/1pm EDT on the 10th May. The Reddit Ask Me Anything forum is a great way to engage and interact with subject experts in a direct and honest Q&A, asking those burning questions you’ve always wanted to get their perspective on!
I’m pleased to report that along with the team behind Radian (a knowledge portal for data management librarians), the Metrics Toolkit (pitched by me, Heather Coates, and Robin Champieux) has won the Force 2016 PitchIt Innovation Challenge!
I’m hugely proud and very excited about bringing this idea to life. In talking with researchers and librarians worldwide over the past two years, the single biggest request I tend to get is an easy way to understand what metrics really mean (or more importantly, what they don’t mean).
I’m currently working with Sarah Sutton at Emporia State University on launching a survey to get a sense of how librarians use altmetrics, including for collection development purposes. (Is it useful to know if a book has been cited in a policy document, even if it’s rarely circulated, so you don’t deaccession it? Can monitoring online activity for all scholarship–even articles that a library doesn’t have subscription access to–help librarians make much quicker purchasing decisions for articles and journals that patrons might request?)
Humanities researchers I’ve talked to usually fall into one of two camps when it comes to altmetrics:
“Altmetrics are amazing! I’ve been looking for a better way to understand the use and impacts of my work.”

or

“This is just another tool that favors the sciences and tries to reduce the complexities of my research into a single formula. It will never work for me.”

As an altmetrics advocate and humanist by training, I unsurprisingly tend to fall into the first camp.