Library Data II:
Too Good, Too Bad, and Too Ugly
Friday, February 2, 2024 (10:00am – 4:00pm EST)
SQL: It’s not just how we get our data. It’s this year’s conference. A sequel — Library Data Part II: Too Good, Too Bad, and Too Ugly!
Based on the great interest in last year’s theme and the number of amazing proposals that we received, it’s clear that data is an endless area of opportunity and challenge for libraries. We know that working with data can be a terrifying tale of suspense and it can also be an adventure. This year, we wanted to find more ways to help you share fun and exciting stories with your library data.
Do you have a passion for library data? Are you interested in learning about useful tools and methods that help you comb through data? Are you trying to get a better handle on analyzing and sharing COUNTER 5 and SUSHI data? Would you like to learn new ways to communicate about your data to different audiences? This conference is for you! Come join your library colleagues at the SUNYLA Midwinter Virtual Conference on Friday, February 2, 2024 (10:00am – 4:00pm EST).
Technology requirements for attendance: Computer, internet connection, microphone/speakers (headset recommended) or telephone. Zoom Webinar will be used for this conference and is free for use by attendees.
You do not need to be a SUNYLA member in order to attend this free conference. Recordings of the sessions will be available shortly after the conference on this webpage. Tell your friends!
SUNYLA’s Midwinter Virtual Conference Committee:
Jennifer DeVito, Stony Brook University
Jennifer Jeffery, SUNY Potsdam
Bill Jones, SUNY Geneseo
Jill Locascio, SUNY Optometry (chair)
Carrie Marten, SUNY Purchase
Jessica McGivney, SUNY Farmingdale
10:00am – 10:05am
Morning Introductory Remarks
10:05am – 10:35am
Session 1: Telling Your Library Stories With Data: Using Statistics to Create Statistical Narratives, by Amy Parsons (Columbus State University Libraries)
I am a cataloger, and back in 2015 I needed to find a way to share our numbers with my library director in a relevant way. Since then, I have learned to use data to create statistical narratives: telling stories with data, using data for assessment, visualizing data meaningfully to support important data-driven decisions, and ultimately showcasing the value of library work to stakeholders. I started with cataloging data; in my current position as Head of Technical Services at Columbus State University, I oversee all library-wide statistics, so I have had the opportunity to expand my knowledge and share even more data-driven stories from all areas of the library. My examples will include:
- Data for assessment: using data on the hours the library is actually used to adjust library hours.
- Data for smart budget decisions: database usage data allowed us to make intelligent, data-driven decisions about our library budget, cutting or adding databases.
- Telling a story with data: The Great Weed of 2019 at CSU Libraries. I created a cataloging report illustrating all the hard work we did that year.
Much of this behind-the-scenes work with numbers is invisible, seen only by library directors and a few staff members. These stories can be created by any library department and ultimately allow outsiders and constituents to see our value, backed up by real data.
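The "data for assessment" example above — using hours-of-use data to adjust library hours — can be sketched in a few lines. This is a hypothetical illustration (the timestamps and the idea of a per-entry gate-count log are assumptions, not the presenter's actual data):

```python
import pandas as pd

# Hypothetical gate-count log: one timestamp per patron entry.
entries = pd.to_datetime([
    "2024-01-10 09:05", "2024-01-10 13:40", "2024-01-10 13:55",
    "2024-01-10 21:30", "2024-01-11 09:15", "2024-01-11 14:05",
])

# Count entries by hour of day; sparsely used hours are candidates
# for shortened service hours.
by_hour = pd.Series(entries.hour).value_counts().sort_index()
print(by_hour)
```

A table like this, grouped by hour, is the kind of evidence that can back a proposal to change opening hours.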
10:35am – 11:05am
Session 2: How Many Spreadsheets Does It Take to Communicate Textbook Affordability?, by Shannon Pritting and Sarah Morehouse (Empire State University)
Empire State University Library has worked with OER for many years to reduce the cost of higher education for its students. However, OER only goes so far, and over the past year Empire's Library has developed strategies to comprehensively affect textbook and course resource affordability at scale. The library partners with the registrar, the bookstore, institutional research, administration, and faculty to look holistically at how many course sections can be moved to no-cost options such as OER or e-books licensed by the library. In its first full semester, Empire was able to demonstrate that over 30% of all course sections were covered by no-cost options, and our initial estimates for the Spring 2024 semester indicate that around 40% of course sections will be reduced-cost or no-cost. For each of the past three semesters, the library has secured access to 400-450 e-books that have been assigned as textbooks in courses. Empire has done this by merging and manipulating data from multiple sources: registrar data, enrollment data, bookstore adoptions, holdings data in various library systems, faculty input, and review of courses in the digital learning environment. It's been ugly working with many sources of data that typically don't match, but the outcome allows the library to demonstrate its value as a partner in affordability and aligns it with the access mission of Empire State University. We will cover how we're working with the data, share our analysis of e-book licensing strategies, and share findings about how we've built a comprehensive course resource affordability program.
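The core data step described above — matching bookstore adoptions against library e-book holdings to find no-cost course sections — could be sketched as a simple merge. The column names, ISBNs, and sample values here are hypothetical, not Empire's actual data:

```python
import pandas as pd

# Hypothetical inputs: bookstore adoptions (one row per course section)
# and library e-book holdings. Real data would come from exports.
adoptions = pd.DataFrame({
    "course_section": ["HIST-101-01", "BIOL-110-02", "ENGL-205-01"],
    "isbn": ["9780131103627", "9780262033848", "9781491957660"],
})
holdings = pd.DataFrame({
    "isbn": ["9780262033848", "9781491957660"],
    "platform": ["EBSCO", "ProQuest"],
})

# A left merge keeps every course section; the indicator column shows
# which adoptions matched a library-licensed e-book.
merged = adoptions.merge(holdings, on="isbn", how="left", indicator=True)
merged["no_cost"] = merged["_merge"] == "both"

pct_no_cost = merged["no_cost"].mean() * 100
print(f"{pct_no_cost:.0f}% of sections covered by licensed e-books")
```

In practice the ugliness the presenters mention comes from sources that don't share clean match keys (ISBN variants, missing identifiers), which this toy example sidesteps.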
11:05am – 11:15am
Break
11:15am – 11:45am
Session 3: Enriching GOBI Expenditure Reports with Usage Statistics, by Mackenzie Kathmann (University of Ottawa)
Each year GOBI supplies Expenditure Reports so that we know how, where, and hopefully why we spent with GOBI. While this data is extremely rich and holds some interesting insights, a discussion with my colleagues revealed that it lacked outcome information.
So, we bought 10,000 books, but how did they actually perform? Are we buying in the right areas for our researchers? How are our students responding to different formats? Are there certain platforms that our users prefer or are using more? Since COVID, our plan has been virtual; how has that affected usage? Are our manual selections yielding different results? The questions seemed endless, but to answer them we were missing key data: usage statistics.
In this presentation I will speak about how I enriched the provided GOBI expenditure data with COUNTER 4, COUNTER 5 and Alma print usage metrics, in order to facilitate collection analysis. I will briefly cover the technical details of how I performed this task, and then go on to discuss applications.
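The enrichment step described above — folding COUNTER 4, COUNTER 5, and print usage into one per-title total, then attaching it to expenditure rows — might look like the following. This is a sketch with hypothetical ISBNs, costs, and counts, not the presenter's actual implementation:

```python
import pandas as pd

# Hypothetical usage extracts from two COUNTER releases.
c4 = pd.DataFrame({"isbn": ["9780262033848"], "usage": [120]})
c5 = pd.DataFrame({"isbn": ["9780262033848", "9781491957660"],
                   "usage": [45, 300]})

# Stack both sources and total usage per title.
usage = (pd.concat([c4, c5])
           .groupby("isbn", as_index=False)["usage"].sum())

# Hypothetical GOBI expenditure rows keyed by ISBN.
gobi = pd.DataFrame({"isbn": ["9780262033848", "9781491957660"],
                     "cost": [95.00, 60.00]})

# Attach usage to spend; titles with no recorded usage get 0.
report = gobi.merge(usage, on="isbn", how="left").fillna({"usage": 0})
report["cost_per_use"] = report["cost"] / report["usage"].clip(lower=1)
print(report)
```

A cost-per-use column like this is a common way to turn expenditure data into the "outcome information" the abstract says was missing.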
11:45am – 12:15pm
Session 4: Press Button, Receive Data: Building a One-Click Tool to Track Patron Questions, by Rebecca Hyams (Borough of Manhattan Community College, CUNY)
While Alma records when items are checked in and out at the circulation desk, those transactions are only one component of the work that our circulation staff do. Tangible data about when our staff were answering patron questions would be very helpful for things like determining staffing needs, but it was difficult to collect that data reliably. While our reference librarians use RefAnalytics at the reference desk, we knew it would be difficult to get our circulation staff to do the same, for various reasons. Instead of returning to old paper tally sheets, we built a small desktop widget in Python that sends the data directly into a LibInsight dataset. Now we have a much clearer (and down-to-the-minute) picture of the additional work happening at our circulation desk, captured with a tool that's easy for all of our staff to use. This presentation will discuss why and how the widget was built, how it works, and how we've been able to make use of it over the Fall 2023 semester.
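The core of such a one-click widget is a function that posts one timestamped record per button press. The sketch below is hypothetical: the endpoint URL and field names are placeholders (a real LibInsight dataset publishes its own submission URL and fields), and the Tkinter wiring is shown only in a comment:

```python
from datetime import datetime
import urllib.parse
import urllib.request

# Placeholder endpoint; the real URL comes from the dataset's settings.
ENDPOINT = "https://example.libinsight.com/add.php?wid=1"

def build_payload(question_type="Circulation question", when=None):
    """Encode one timestamped record for the dataset."""
    when = when or datetime.now()
    return urllib.parse.urlencode({
        "field_1": question_type,                      # hypothetical field name
        "ts": when.isoformat(timespec="minutes"),
    }).encode()

def record_question():
    """POST one record; bound to the widget's single button."""
    req = urllib.request.Request(ENDPOINT, data=build_payload())
    urllib.request.urlopen(req, timeout=5)

# In the desktop widget this function is bound to one Tkinter button:
#   tk.Button(root, text="Patron Question", command=record_question)
```

Keeping the interface to a single button is what makes the "down-to-the-minute" data collection realistic for busy desk staff.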
12:15pm – 1:15pm
Lunch Break
1:15pm – 1:20pm
Midday Introductory Remarks
1:20pm – 1:50pm
Session 5: At Your Service!: Getting Set Up to Quickly Deliver Actionable Information with Your Usage Data, by David Macaulay (University of Nebraska-Lincoln)
The ability to quickly access and analyze data about your community’s usage of online resources is an invaluable asset to an academic library… but if your organization takes a partial, haphazard approach to acquiring and managing the usage reports supplied by publishers, it can be difficult to find the information you need to answer questions in a timely way.
Drawing on the experience of a newly hired e-resources librarian who inherited an idiosyncratic system for obtaining, storing, and reporting on usage data, this session will present tips on developing an organized, pragmatic approach to managing usage reports (both COUNTER-compliant and non-standard) for your library's electronic resources. The goal is a comprehensive, reliable source of data for responding to annual statistical surveys, investigating the value of subscriptions, pinpointing potential access outages, identifying unfilled demand, and more.
1:50pm – 2:20pm
Session 6: Using Electronic Resource Tickets to Assess E-Resource Management and Service Effectiveness, by Amy Fry, Uyen Nguyen, Sam Lechowicz (University of Illinois Urbana-Champaign)
One of the main responsibilities of the Electronic Resource Unit at UIUC is to respond to reports of electronic access problems through the ticketing system. Using a combination of qualitative and quantitative methods, our team examined approximately 1,600 ticket reports submitted through TeamDynamix over three years. We extracted the ticket reports and analyzed them with Python to better understand our users and how they interact with our library's digital collection. We will share the results of our analysis and how we will use them to assess our electronic resources management planning and to build a more comprehensive understanding of our library's users and resources.
2:20pm – 2:30pm
Break
2:30pm – 3:00pm
Session 7: Novelty Visualizations of Collections Data: Real Impact or Comic Interlude?, by Nat Gustafson-Sundell and Evan Rusch (Minnesota State University Mankato)
At Minnesota State University Mankato (MNSU), we have iteratively developed collection analysis and data visualization solutions for about a decade. Although we currently prefer Microsoft Power BI as an interactive and engaging interface for most data visualizations, we have continued to experiment with other tools to improve how we communicate. This year, we are redesigning our "one-sheeters": brief reports that support librarian visits to faculty meetings, inform conversations with administration, or help us communicate specific collection development decisions. For these one-sheeters, we have experimented with a new approach to data visualization using Dall-E 3, an image generation tool, with the goal of creating illustrative comparisons of quantitative variables. We think it's very important to consider context, audience, and purpose while developing these illustrations if they are to be useful. We'll discuss some of our considerations and share some of our designs. Do these visualizations offer advantages over traditional data visualization? What risks do they pose?
3:00pm – 3:30pm
Session 8: Wrangling COUNTER Usage Reports for ACRL/IPEDS, by Ilda Cardenas (Cal State Fullerton)
Libraries regularly harvest COUNTER usage reports for a variety of reasons: they are an essential metric for evaluating library investments in eResources during subscription renewals and for the ACRL Academic Library Trends and Statistics survey. Collecting and compiling COUNTER reports is tedious and time-consuming; library professionals with a SUSHI client can automate much of the process. This presentation will help those who do not have a SUSHI client and want to automate the processing of COUNTER reports. The presentation will cover Python code for processing COUNTER reports for subscription renewals and ACRL statistics. For novice coders, it will also discuss using GitHub Copilot, the AI coding assistant available in Visual Studio Code, to develop the code.
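For context, the COUNTER 5 SUSHI interface is a REST API: a GET request to a vendor's `/reports/tr` endpoint with your credentials returns the Title Report as JSON, which can then be summed by metric type. The base URL and credentials below are placeholders; each vendor publishes its own:

```python
import json
import urllib.parse
import urllib.request

# Placeholder SUSHI base URL; each vendor publishes its own.
BASE = "https://example.com/sushi"

def fetch_tr(customer_id, requestor_id, begin, end):
    """Fetch a COUNTER 5 Title Report via the COUNTER_SUSHI API."""
    params = urllib.parse.urlencode({
        "customer_id": customer_id,
        "requestor_id": requestor_id,
        "begin_date": begin,   # "YYYY-MM"
        "end_date": end,
    })
    with urllib.request.urlopen(f"{BASE}/reports/tr?{params}") as resp:
        return json.load(resp)

def total_requests(report, metric="Unique_Title_Requests"):
    """Sum one metric type across all titles in a TR JSON body."""
    total = 0
    for item in report.get("Report_Items", []):
        for perf in item.get("Performance", []):
            for inst in perf.get("Instance", []):
                if inst.get("Metric_Type") == metric:
                    total += inst.get("Count", 0)
    return total
```

Totals like Unique_Title_Requests per platform are the figures typically rolled up for the ACRL survey.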
3:30pm – 4:00pm
Session 9: SUSHI Monitoring: Creating an At-A-Glance, Interactive Dashboard in Alma Analytics, by Carin Yavorcik (Oregon Health & Science University)
Setting up SUSHI is an incredible time saver when it comes to gathering usage statistics, but making sure all the right reports get harvested without errors from dozens of different vendors offers its own set of challenges. Oregon Health & Science University Library designed a custom dashboard in Alma Analytics to quickly identify any gaps in the data, as well as easily navigate to more detailed COUNTER report information for further investigation. This presentation will give an overview of how we created the dashboard and how to adapt it for use at other institutions.