I’m Stacy. I help scientometrics researchers find and understand the Big Data they need to study how research is created, communicated, funded, and commercialized in society.
Cross-posted from TheIdealis.org
In August, I stepped down as a Founding Editor of The Idealis to focus on other projects. Nicky Agate is now The Idealis’s Editor in Chief.
The Idealis started out of community conversations around LIS scholarship and open access, and I’m proud of what we’ve accomplished so far: over 290 recommendations for freely available scholcomm research; more than 44,000 views and 400 subscribers; and most importantly a stellar team of 38 editors who have dedicated their time and expertise to finding the very best scholcomm research and sharing it with the community.
I’m very grateful to The Idealis’s volunteers, especially Nicky, for taking The Idealis forward. I look forward to seeing what The Idealis has in store, and will remain a faithful reader of the site for years to come. Thank you!
The Journal of Librarianship and Scholarly Communication just published “Scholarly Communication Librarians’ Relationship with Research Impact Indicators: An Analysis of a National Survey of Academic Librarians in the United States.”
This is the final publication related to a topic I’ve been working on since 2013 (!), when I first realized that although academic librarians were interested in research metrics, no one had yet studied the reality of how they were using these kinds of indicators in their day-to-day jobs and in support of their own careers.
Along the way, I’ve been privileged to work with Sarah Sutton and Rachel Miles (and for a short period, Michael Levine-Clark) on a series of publications and presentations that include:
- “Is What’s ‘Trending’ What’s Worth Purchasing? Insights from a National Study of Collection Development Librarians” in The Serials Librarian (which we also presented at NASIG 2016 in Albuquerque)
- “Awareness of Altmetrics among LIS Scholars and Faculty” in Journal of Education for Library and Information Science (findings we compared with those of librarians at ER&L 2016 in Austin, TX)
- “What’s used to gauge when engaging?: Determining academic librarian roles in research assessment reporting services“, presented at the 2016 Bibliometrics and Research Assessment Symposium in Bethesda, MD.
- “Scholarly Communication Librarians’ Relationship with Research Impact Metrics,” a panel presentation at ‘Finding Meaning in Metrics’ at ALA Annual 2016 in Orlando, FL
- “Use of Altmetrics in US-based academic libraries,” a presentation at the Second Altmetrics Conference in Amsterdam (summarized on the Altmetrics Conference blog by Ian Mulvaney)
- “Myth vs. reality: Altmetrics and librarians,” a presentation at the Altmetrics15 workshop in Amsterdam
We ultimately learned that:
- Seniority and years of experience have no effect on how familiar you are likely to be with various research metrics
- Librarians and LIS educators alike are more familiar with traditional research impact metrics like the JIF than they are with altmetrics
- Altmetrics are least likely to be used for collection development, though this is a use case I’ve been promoting for a long time
- The more scholcomm-related duties you have in your job, the more you’ll use metrics of all kinds
- Altmetric is the most popular altmetrics database used by librarians 😎
Sarah and Rachel plan to carry this path of research forward, expanding the scope of the study to include librarians worldwide, and also possibly looking at library promotion and tenure documents’ discussion of metrics. I wish them the very best and want to once again express my gratitude towards them as collaborators: Ladies, I hope to work with you both again in the future!
Last month, an article I co-authored with Josh Finnell on the challenges of organizing librarians at the grassroots was published in International Information & Library Review.
We librarians love to bemoan the state of our professional organizations. (Who doesn’t?) But as board chair of Library Pipeline–a fledgling professional association for librarians–and volunteer for both Pipeline’s Green Open Access Working Group and the Innovation in Libraries Awesome Foundation chapter, I have to say, running a professional organization is often tough and thankless work.
Luckily, it’s also rewarding work. Through Pipeline, I’ve gotten to know our profession’s best and brightest (including my co-author Josh), contributed personally to ‘opening up’ the LIS literature to all readers, and helped others vet and fund some amazing library-based projects from around the world.
The article that Josh and I wrote explains the brief history of Library Pipeline to date and where we’re headed next–while also pointing out some challenges that exist for others who might want to launch a grassroots library professional organization of their own. You can read it on the IILR website or check out the preprint on Figshare.
In case you’re wondering, Pipeline has been mostly quiet for the latter half of 2017 as the board worked to create our bylaws and revise our mission statement, so we’re better positioned to expand our work in 2018. To learn more about Library Pipeline and to become a volunteer, visit our website.
Finnell, J., and S. Konkiel. “Building and Sustaining a Grassroots Library Organization: A Three Year Retrospective of Library Pipeline.” figshare, 2 Jan. 2018, doi:10.6084/m9.figshare.5727084.v2.
In 2017, I:
- Earned my yellow belt in krav maga
- Drove 1,321 miles cross-country* to make a new home in Minneapolis, MN
- Learned how to drive a manual car
- Started taking computer programming a bit more seriously
- Visited Poland
- “Came out” as a socialist and joined DSA (I’m now a monthly sustaining member and believe you should be, too, if you’re a progressive of any flavor–DSA does amazing work nationwide)
- Told anyone who would listen about Sarah Schulman’s Conflict Is Not Abuse: Overstating Harm, Community Responsibility, and the Duty of Repair. Go read it, now.
I also accomplished a lot professionally, but that’s a post for another time.
I began using Qbserve to track my computer-based time around June of this year. In looking back at the past six months’ worth of data, I’m a bit disturbed to learn that I spent:
- 2 days chatting in Slack
- 18 hours watching Netflix
- 8.5 hours watching Amazon Prime
- 26 hours answering emails (both personal and work-related)
- 21.25 hours in OmniFocus (task management software for complete nerds)
- …and at least 20 hours faffing about on various social media sites
In 2018, I hope to maintain or lower most of these metrics (which would mean I’d be cutting that time spent roughly in half), in favor of getting to know my neighbors and deepening my personal relationships, both at work and at home.
To that end, I’m aiming to leave Twitter for the year (though I may pop back in occasionally for work-related postings). (Here’s a bit of background on why I’m making that decision.) It’s a bit nerve-wracking–Twitter is pretty important to me professionally–but I’m guessing that it will pay off to spend my time and energy elsewhere.
I’m also looking to simplify my life in other ways, which will mean fewer new projects (and ending some existing projects–more on that in the months to come). Saying “no” has historically been difficult for me when I get excited about an idea. In 2018, I want to do less, but better.
I hope to update this blog more regularly, in lieu of offering updates via social media. If you’re reading this, chances are I want to hear updates from you, so you should stop what you’re doing and email me right now to say hello (firstname.lastname@example.org for work colleagues, email@example.com for friends and family).
Here’s to a grounded, intentional 2018!
* If you’re curious, this road trip consisted of overnight stops in Clayton, NM; Dodge City, KS; Kansas City, KS; and Bloomington, IN. This was my fourth cross-country road trip. America is a great big beautiful place–especially the middle part.
I’ve recently launched a fun new project, @SociologyBot. It’s a Twitter account that recommends recently discussed research in the field of (you guessed it) sociology.
I’ve been wanting to explore the “altmetrics as a filter” idea for a long time. Being able to find not only disciplinary research but also the conversations surrounding research appeals to me, and I bet that other researchers would like access to that kind of information, too.
So, now I’m experimenting with a prototype “bot”, @SociologyBot. What sets @SociologyBot apart from other research recommendation bots on Twitter are a few things:
- It’s a social sciences bot (which are surprisingly rare!)
- It tweets out new and old research alike (not just the “recently published” stuff)
- It surfaces both research and the conversations surrounding research
- It’s not actually a bot (yet)!
I’m prototyping @SociologyBot right now, meaning it’s powered using a mix of manual and automated means. (Hence the scare quotes I keep putting around “bot”.) That’s because I want to understand if people actually care about this kind of a bot before I put a lot of time and energy into coding it! I guess you could call @SociologyBot a “minimum viable product”.
Here’s how @SociologyBot currently runs:
- I set up a search in Altmetric Explorer to find articles from the top ten sociology journals (as identified by SCImago Journal Rank) that have been mentioned in the last day. I use the journal shortlist not because I particularly care for finding only research published in the “top” journals, but because it makes the list of articles much more manageable.
- Explorer sends me a daily email summary of said articles.
- Based on the shortlist provided in the summary email from Explorer, I schedule new daily tweets using TweetDeck that include both the article with the highest Altmetric Attention Score (AAS) and a link to the Altmetric details page, where discussions of the articles can be found.
- Via TweetDeck’s scheduler, @SociologyBot then tweets out one scheduled article daily, at 8 am Mountain time.
Here’s how I plan to build @SociologyBot so that it’s fully automated:
- I write a script to query the Altmetric API every 24 hours to find sociology articles that have been mentioned online in the past day.
- The script takes the article with the most mentions and checks whether it’s already been tweeted about in the past month, as a safeguard against the same popular articles being constantly recommended.
- If it hasn’t, the script then composes a tweet that links to the article and its Altmetric detail page. If it has, the script will then check for the article with the next highest AAS that has not been recently tweeted, and will compose a tweet for that one instead.
- The script then posts the article and its Altmetric details page immediately to the @SociologyBot Twitter account.
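The selection-and-deduplication step in the plan above can be sketched in Python. This is a minimal sketch, not the actual bot: the data shapes, function names, and the `tweeted_log` structure are all my own assumptions, and the real Altmetric API query and Twitter posting calls are omitted.

```python
from datetime import datetime, timedelta

def pick_article(articles, tweeted_log, now=None, window_days=30):
    """Pick the highest-scoring article not tweeted in the last window.

    `articles` is a list of dicts with 'title', 'url', 'details_url', and
    'score' (the Altmetric Attention Score); `tweeted_log` maps article
    URLs to the datetime each was last tweeted.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    # Consider the most-discussed articles first, skipping any that were
    # recommended within the dedup window.
    for article in sorted(articles, key=lambda a: a["score"], reverse=True):
        last_tweeted = tweeted_log.get(article["url"])
        if last_tweeted is None or last_tweeted < cutoff:
            return article
    return None  # everything popular was already recommended recently

def compose_tweet(article):
    # Link to both the article and its Altmetric details page, where the
    # surrounding conversations can be found.
    return (f'{article["title"]} {article["url"]} | '
            f'Who\'s talking about it: {article["details_url"]}')
```

The one-month lookback is what keeps the same perennially popular articles from being recommended over and over.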
Whether @SociologyBot gets a lot of followers, and whether those followers actually click on the Altmetric details page links, will determine whether @SociologyBot is a success (and thus whether I should bother coding it into a proper bot!).
So: if you’re interested in sociology research and want to see this little guy come to life, please give @SociologyBot a follow!
CC-BY Nicky Agate / Medium
I’m excited to announce that the HuMetricsHSS research team–which I was a part of at the 2016 TriangleSCI conference–has received the support of the Andrew W. Mellon Foundation to continue our work of encouraging the discovery and use of “humane” research evaluation metrics for the humanities and social sciences.
HSS scholars are increasingly frustrated by the prevalence of the use of evaluation metrics (borrowed from the sciences) that do not accurately capture the impacts of their work. Our grand vision is to develop better metrics, ones that are rooted in the values that are important to scholars. This grant-funded research is a start.
From the press release:
“We are reverse-engineering the way metrics have operated in higher education,” said Christopher P. Long, Dean of the College of Arts & Letters at Michigan State University and one of the Principal Investigators (PIs) of the Mellon-funded project. “We begin not with what can be measured technologically, but by listening to scholars themselves as they identify the practices of scholarship that enrich their work and connect it to a broader public.”
Much gratitude to the Mellon Foundation for supporting HuMetricsHSS.
I just used a service called Cardigan to delete the 10k+ tweets I’ve published since 2007, when I first joined Twitter.
I don’t know about you, but I’ve changed a lot since I was 24 years old.
It didn’t make sense to me to keep ten years’ worth of miscellany–silly jokes, uninformed hot takes, occasional sharp insights, and so on–up on the Internet, gathering dust, making advertising money for Twitter. I don’t want to support a company that, even with a $10.8 billion valuation, somehow can’t get it right and stop banning innocent users rather than the Nazis who are harassing them.
I don’t enjoy Twitter anymore. Over the years, Twitter has gone from a great place (to stay in touch with friends and former colleagues worldwide, to find interesting research and industry news, to meet new people) to one that seriously bums me out every time I log on (every day brings a new outrage, smart people sniping at each other, Mean Librarian Twitter, and unintelligible memes). It’s become superficial on a lot of levels. It’s often used as a tool to demean and call out rather than enrich and uplift.
All that said, I’m not going to delete my account outright. Twitter is still somewhat important professionally, so I’ll continue using it to share the occasional piece of research or to livetweet interesting conferences.
But I’d rather let my writing and research speak for itself, in longform. And for my personal and professional relationships to deepen, offline.
I’ll be slowly unfollowing accounts who aren’t directly relevant to my interests or my work at Altmetric (sorry!) and hopefully logging on a lot less. I’ll also aim to delete my tweets and favorites every so often, to keep things fresh.
If you need me, email me at firstname.lastname@example.org (personal) or email@example.com (work).
With love and gratitude to my friends and followers for ten years of shitposting and networking…
I’m super excited to announce that the Innovation in Libraries grant is now accepting applications: http://www.awesomefoundation.org/en/chapters/libraries
A core group of Library Pipeliners has been working hard for months to recruit rank-and-file librarians worldwide, many of whom are funding this grant out of their own pockets (!). Each month through August 2017, our Awesome Foundation chapter will award a $1000 USD grant to prototype library-based innovations (both technical and non-technical in nature) that are inclusive, daring, and diverse.
I am so proud of this grassroots effort to support risk-taking in librarianship. This is a great step towards building community through organizing, and I’m really excited to be a part of it.
Special recognition goes to Josh Finnell (Los Alamos National Lab), Robin Champieux (OHSU), and Bonnie Tijerina (Data & Society/ER&L), all of whom were crucial to getting this project off the ground.
For the past few weeks, I’ve been working with a colleague at Altmetric to develop a guide for using altmetrics in one’s promotion and tenure dossier. (Keep an eye out for the resulting blog post and handout on Altmetric.com–I think they’re going to be good!)
Altmetrics and P&T is a topic that’s come up a lot recently, and predictably the responses are usually one of the following:
- Do you seriously want to give people tenure based on their number of Twitter followers?!!?! ::rageface::
- Hmm, that’s a pretty interesting idea! If applied correctly (i.e. in tandem with expert peer review and traditional metrics like citation counts, etc), I could see how altmetrics could improve the evaluation process for P&T.
You can probably guess how I lean.
With that in mind, I wanted to think aloud about an editorial I recently read in Inside Higher Ed (a bit late to the game–the essay was written in 2014). It’s a great summary of many of the issues that plague P&T here in the States, and in particular the bits about “legitimacy markers” make a great argument in favor of recognizing altmetrics in P&T evaluation and preparation guidelines.
Below, I’ve excerpted the parts [to which I want to respond] (and the bits I want to emphasize), but please visit Inside Higher Ed and read the piece in its entirety; it’s worth your time.
The assumption that we know a scholar’s work is excellent if it has been recognized by a very narrow set of legitimacy markers adds bias to the process and works against recognition of newer form of scholarship.
Typically candidates for tenure and promotion submit a personal narrative describing their research, a description of the circulation, acceptance rate and impact factors of the journals or press where they published, a count and list of their citations, and material on external grants. This model of demonstration of impact favors certain disciplines over others, disciplinary as opposed to interdisciplinary work, and scholarship whose main purpose is to add to academic knowledge. [Emphasis mine.]
In my view, the problem is not that using citation counts and journal impact factors is “a” way to document the quantity and quality of one’s scholarship. The problem is that it has been normalized as the only way. All other efforts to document scholarship and contributions — whether they be for interdisciplinary work, work using critical race theory or feminist theory, qualitative analysis, digital media or policy analysis are then suspect, marginalized, and less than.
Using the prestige of academic book presses, citation counts and federal research awards to judge the quality of scholarship whose purpose is to directly engage with communities and public problems misses the point. Interdisciplinary and engaged work on health equity should be measured by its ability to affect how doctors act and think. [One might argue that altmetrics like citations in public policy documents and clinical care guidelines are a good proxy for this.] Research on affirmative action in college admissions should begin to shape admissions policies. [Perhaps such evidence could be sourced from press releases and mainstream media coverage of said changes in admissions policies.] One may find key theoretical and research pieces in these areas published in top tier journals and cited in the Web of Science, but they should also find them in policy reports cited at NIH [again, citations in policy docs useful here], or used by a local hospital board to reform doctor training [mining training handbooks and relevant websites could help locate such evidence]. We should not be afraid to look for impact of scholarship there, or give that evidence credibility.
Work that is addressing contemporary social problems deserves to be evaluated by criteria better suited to its purposes and not relegated to the back seat behind basic or traditional scholarship.
Altmetrics technologies aren’t yet advanced enough to do most of the things I’ve suggested above (in particular, to mine news coverage or the larger Web for mentions of the effects of research, rather than links to research articles themselves). But the field is very young, and I expect we’ll get there soon enough. And in the meantime, we’ve got some pretty decent proxies for true impact already in the main altmetrics services (i.e. policy citations in Altmetric Explorer, clinical citations in PlumX, dependency PageRank for useful software projects in Depsy/Impactstory).
In the shorter term, we need for academics to advocate for the inclusion of altmetrics in promotion & tenure evaluation and preparation guidelines.
Most researchers don’t know that this data is available, so they tend not to use it in preparing their dossiers. Fair enough.
What concerns me are the researchers who are aware of altmetrics, but who are hesitant to include them in their dossiers for fear that their colleagues a) won’t know what to do with the data, or b) won’t take them seriously for including it. After all, there’s a lot of misinformation out there about what altmetrics are meant to do, and if you’ve got a reviewer who’s misinformed or who has a bone to pick re: altmetrics, that could potentially affect your career.
Then there are the tenure committees, often made up of reviewers from all disciplines and at all (post-tenure) stages of their careers. If they’re presented with altmetrics as evidence in a P&T dossier but a) they’re biased against altmetrics, and/or b) their university’s review guidelines don’t confirm that altmetrics–in the service of providing evidence for specific claims to impact–are a respectable form of evidence for one’s dossier, then the tenure applicant is met with confusion or skepticism (at best) or outright hostility (at worst).
(Before you think I’m being melodramatic re: “outright hostility”–you should see some of the anti-altmetrics diatribes out there. As in many other aspects of life, some people aren’t content with the “you do it your way, I’ll do it my way” thing–they are pissed that you dare to challenge the status quo and will attack those who suggest differently.)
Anyone reading this post that’s got a modicum of influence at their university (i.e. you’ve got tenure status and/or voting rights on your university’s faculty council) should go and petition their vice provost of faculty affairs to update their university-wide P&T review and preparation guidelines to include altmetrics. Or, at the very least, focus on changing departmental/college P&T guidelines.
Once you’ve done so, we’re that much closer to reforming the P&T process to respect the good work that’s being done by all academics, not just those who meet a very traditional set of criteria.