My position was a brand new one when I began it a little more than two years ago. One item in my new-to-me and new-to-my-institution job description was, “Provides regular database use statistics reports.” It sounds perfectly reasonable and routine, but those six words have, at times, occupied rather non-routine amounts of time.
The first thing I discovered was that all of our different vendors provided different reports. Oh. Some counted sessions and searches; some counted downloads of full-text or images or films; some counted browse searches versus keyword searches. Some supplied these things called COUNTER reports, while others told me how I could use something called SUSHI to automate the gathering of my stats. I was, to say the least, rather lost.
With just a little research, I discovered what COUNTER (Counting Online Usage of Networked Electronic Resources) and SUSHI (Standardized Usage Statistics Harvesting Initiative) were. Great, I thought. There’s a standard. Now, I just need to pull those reports from each vendor and ignore the rest. Ha. Then, I learned that not all vendors supplied COUNTER reports. Compliance isn’t mandatory. Even with the standard in place, not all vendors supplied the same COUNTER reports, as different reports may or may not apply to a particular online database (no use counting full-text downloads in a citation-only index).
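For the curious, here's roughly what a SUSHI report request looks like under the hood. This is a sketch based on the NISO Z39.93 schema, not a working client: the requestor and customer IDs are made-up placeholders, and a real client would wrap this payload in a SOAP envelope and POST it to the vendor's SUSHI endpoint, then parse the COUNTER report out of the response.

```python
# Sketch of a SUSHI ReportRequest body (NISO Z39.93). All IDs, names,
# and dates below are placeholders, not real credentials or endpoints.
import xml.etree.ElementTree as ET

SUSHI_NS = "http://www.niso.org/schemas/sushi"

envelope = f"""<ReportRequest xmlns="{SUSHI_NS}">
  <Requestor>
    <ID>our-library-id</ID>
    <Name>Example Library</Name>
    <Email>stats@example.edu</Email>
  </Requestor>
  <CustomerReference>
    <ID>our-customer-id</ID>
  </CustomerReference>
  <ReportDefinition Name="JR1" Release="4">
    <Filters>
      <UsageDateRange>
        <Begin>2012-09-01</Begin>
        <End>2013-05-31</End>
      </UsageDateRange>
    </Filters>
  </ReportDefinition>
</ReportRequest>"""

# Sanity-check that the request is at least well-formed XML.
root = ET.fromstring(envelope)
print(root.tag)
```

The nice part is that the same request shape works against any compliant vendor's SUSHI service; only the endpoint URL and the IDs change.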
Before we go much further, a review of COUNTER is probably in order. The COUNTER code of practice specifies 23 different types of usage reports; some are standard, while others are optional. It defines things such as “search” and “successful request” and specifies how the vendors count such activity (usually from server logfiles or page tagging). With all of my vendors using the COUNTER standard, I can be reasonably sure that when I compare searches in one database to searches in another, I’m comparing apples to apples rather than apples to oranges. You can see a full list of the reports and read much, much more about COUNTER in the COUNTER Code of Practice, Release 4, available at http://www.projectcounter.org/code_practice.html. Release 4 was, well, released in April 2012, and vendors have until December 2013 to update their practices. Most of the vendors we use still seem to be on Release 3.
My favorite COUNTER reports are
- Journal Report 1 (Number of successful full-text article requests by month and journal);
- Database Report 1 (In Release 3, this is total searches and sessions by month and database. It changes a bit in Release 4.);
- Book Report 2 (Successful sections requests by month and title).
These are all standard reports. In Release 4, I’m looking forward to the Multimedia Reports, which allow for counting the use of “multimedia content units.”
So, I downloaded a bunch of COUNTER reports and immediately got bogged down in a morass of spreadsheets. Our next step, then, was to review some of the commercial products on the market and select a system to help us manage and generate meaningful reports from our COUNTER stats.
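To give a flavor of what that morass looks like, here's a minimal Python sketch of the kind of tallying involved. It assumes a simplified JR1-like layout (journal title followed by monthly full-text request counts); real COUNTER files have header rows, totals columns, and per-vendor quirks that make this much messier in practice.

```python
# Minimal sketch: total up monthly full-text requests per journal from
# a simplified, JR1-like CSV. The data below is invented for illustration.
import csv
import io

# Stand-in for one vendor's downloaded report.
jr1 = io.StringIO(
    "Journal,Sep-2012,Oct-2012,Nov-2012\n"
    "Journal of Examples,12,30,25\n"
    "Annals of Placeholders,4,0,9\n"
)

totals = {}
for row in csv.DictReader(jr1):
    title = row.pop("Journal")
    totals[title] = sum(int(count) for count in row.values())

for title, total in sorted(totals.items()):
    print(f"{title}: {total}")
```

Now multiply that by dozens of vendors, each with slightly different file layouts, and the appeal of a commercial system becomes obvious.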
With that done, I thought I was finally ready to start producing those regular reports. Not so fast.
I produced a round of reports and emailed them to my coworkers. Immediately, my colleagues started asking (really good) questions: What do these stats mean? How do they compare to the past? Are we spending our money wisely? I went back to the software system we’d subscribed to and uploaded four years of historical data (as much as I could get, for most vendors). That allowed me to put new stats in the context of past usage.
The question of what the stats really mean, though, kept nagging at me. What do the use stats really tell us? In most of our databases, use means that students find a full-text article they can use right away, or something they can request through ILL. So, in one way, it’s not searches and sessions that matter so much, but figuring out how many searches and sessions result in a student or faculty member connecting with needed information. I’m not sure if there’s really a way to figure that out just by looking at numbers, but I did think it would be interesting to compare the numbers from Database Report 1 (searches and sessions) with numbers from Journal Report 1 (full-text downloads) for the same databases, where both reports were available. The result was something of a surprise.
Our most-used platform (including 29 databases) tallied up 675,480 searches in 167,496 sessions for the 2012-13 academic year. But, it saw only 104,625 full-text downloads and 15,749 instances of acting as a referring source for our OpenURL link resolver (meaning, presumably, that it linked the user to the full-text in another database or led them to our ILL request form). That means only 17.82% of searches (or 71.87% of sessions) turned into usable items. That raises the question: what’s going on with the other 82.18% of the searches?
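For anyone who wants to check my math (or repeat it with their own platform's numbers), the arithmetic behind those percentages is simple; "usable items" here means full-text downloads plus link-resolver referrals.

```python
# Recompute the percentages from the figures above.
searches = 675_480
sessions = 167_496
fulltext_downloads = 104_625
link_resolver_referrals = 15_749

usable = fulltext_downloads + link_resolver_referrals  # 120,374

pct_of_searches = 100 * usable / searches
pct_of_sessions = 100 * usable / sessions

print(f"{pct_of_searches:.2f}% of searches")  # 17.82% of searches
print(f"{pct_of_sessions:.2f}% of sessions")  # 71.87% of sessions
```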
Some things I could think of right away are
- we often teach that specific database in our for-credit IL classes and in one-shot IL sessions (so lots of example searches with little downloading, multiplied by classes of 25-30);
- the platform allows users to search multiple databases at once, which probably inflates the number of searches per session; and
- two of the databases in the platform are citation-only indexes, with no full-text available.
Still, I do wonder (and worry) about the users who are searching for but possibly not finding anything they can use.
I’d love to do some research to delve into these stats and better understand what’s really going on and if there really is a need for concern. Maybe one of these days…