Tuesday, April 30, 2013

TLA 2013, Day Two

<word of warning: this is a shameless plug for my library>
I had neglected to mention in my first post about TLA that the UNT Libraries' Portal to Texas History was awarded the TLA Wayne Williams Library Project of the Year.  This is the "crown jewel" of the Digital Collections, with a significant amount of grant funding and effort provided for its development.  This collection of primary resources has become integrated into the public schools' curriculum, particularly for the Texas history requirement.  It is definitely worthy of the award.

In the second general session, the UNT Libraries was again honored with not only the Best in Show, but also the PR Plan Winner and Collateral Materials Winner honors of the TLA Branding Iron Awards.  This is the result of a recent focus on marketing, advertising, and external relations.  The UNT Libraries takes advantage of the artistic and creative talents of UNT's students, as well as innovative librarians and hard-working staff, to develop marketing plans and advertising materials in a wide variety of formats.  One popular product is the subject liaison posters, complete with photos, placed strategically throughout the stacks.
</end shameless plug>
Now, as they were awarding the Upstart Innovative Programming Award to Eileen Lee of the Montgomery County Memorial Library System for her Sensory Storytime for developmentally disabled children, I had the idea of collaborating with our MLS faculty and the UNT Autism Center to develop collections and services for children on the autism spectrum. Not entirely original, but I wanted to get my idea down on paper before I forgot it.

I had not paid enough attention to the program listing for the second general session's speaker - in fact, I had considered skipping out after the awards and getting some coffee (my one complaint about the TLA facilities - not enough coffee!).  I'm glad I stayed.  I've been enjoying Dan Ariely's segments on NPR, and when I realized who the speaker was, I knew I wouldn't need the coffee.  His focus was on his latest research on cheating - essentially, most of us cheat a little.  Interestingly, providing simple reminders before a test about the "honor code" (even if none exists) decreased the number of people who cheated.  Well, on to the sessions of the day, only two of which I will write about.

Library Assessment Today: This was an overview of the experiences of Jim Self of the University of Virginia.  He has been involved in library assessment since the early 1980s, long before the current "fad" of assessment.  But he starts with a 1906 quote from J.T. Gerould of the Princeton libraries, stating that the basic questions of assessment were essentially, "Is this method the best? Are we up to the standard of a similar institution?"  So clearly, assessment is not a new fad.  It is a traditional aspect of librarianship - we've just started looking beyond library collections and outputs to the broader interests of the institutions we serve.  Furthermore, as Jim states, the user has recently become the center of the discussions.

Interestingly, Jim mentioned that they have conducted their own patron satisfaction surveys every three to four years since 1993.  They did participate in the LibQUAL survey in 2006, but were disappointed by its much lower response rate.  They returned to their own customized surveys, with a steady 50% response rate, and have seen patron satisfaction increase over time.  He attributes a notable "U" turn in satisfaction with the online catalog to an attempt to "re-invent" the system.  They have evaluated the use of the collection and redefined collection development, with a focus on the user.  Now, they are focusing on "budget communications" with university administration.  Like most academic libraries (well, all libraries), they have seen a dramatic decrease in the number of reference questions asked.  But, we both asked, is that necessarily a bad thing?  Jim suggests that this may be due, at least in part, to librarians doing a better job with our systems.  He summarizes his first section with a list of ways libraries could provide better evidence of our worth, including usability testing, "wayfinding" (evaluation of our facilities), ethnographic studies (a la Rochester), quantitative performance measures (aka the "balanced scorecard"), MINES, COUNTER and e-resource use data, and staff surveys.

Jim then switches to observations from the consultation work he has provided to other libraries over the last five years.  He has seen more libraries accept assessment as a core activity, but it is still hard to sustain.  He has also seen more collaboration with individuals and groups both on- and off-campus.  But he emphasizes that what is still needed are common measures of holdings, usage, costs, and learning outcomes, as well as the sharing of this information.  He would also like to see more standard survey templates and metrics, and perhaps an "Index of Performance".

Mr. Self ends his session with some observations on value and assessment in general.  The current need is to determine the library's impact on student and faculty success, but this may be difficult to measure. There are natural limits to assessment, in that we are attempting to predict the future based on our past and present, and goodness knows, that often turns out to be wrong.  He pointedly noted that "innovation does not come from assessment, but it can indicate what works".  Local barriers to assessment include complacency, fear, arrogance, inertia, an operations mindset, and ethical concerns.

Finally, he asks about the point of view of assessment -- is it neutral and unbiased?  Or is it advocacy?  Would assessment ever result in recommendations for a lesser library?  Perhaps, but the purpose of our work is to improve the service - to give users what they need when they need it.

Right Size, Right Stuff: This was part two of the Fort Worth Public Library's "market segmentation" presentations, and it got into the nitty-gritty of the method - applying the data to actually modify a library's collections.  Actually, market segmentation was only part of the process; the bulk of the work was adjusting collection size based on usage.  Like the Dallas Public Library, the FWPL has shifted to a "floating collection", with each branch's collection set by what patrons return.  So while they will deliver materials to a branch for an individual, once the patron returns the material to the branch, it stays there - until requested elsewhere.  This saves quite a bit of money by not having to return materials.  But it also leads to a "pooling effect", in which the collection size of certain heavily-used branches grows dramatically.  So the collection development librarians for the FWPL System charted current holdings against usage to determine the "right size" for each collection at each branch.  The collections were broken down by age level (adult, juvenile), type (fiction, picture book, etc.), and subject.  Here is a sample table that was developed for a single branch:
<I'll add this later>
Essentially, the "right size" is calculated as: right size = (% of total system use) x (total titles in system).  This was then used to calculate shelving space (using different formulas for different collections).  They also used market segmentation to make additional adjustments to the collection based on interests and hobbies.
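To make sure I had the arithmetic straight, here is a quick back-of-the-envelope sketch in Python.  All the numbers, and the titles-per-shelf-foot figure, are made up for illustration (the presenters used different shelving formulas for different collections), so treat this as a sketch of the idea rather than FWPL's actual method.

```python
# A back-of-the-envelope version of the "right size" calculation described
# above. All figures below are hypothetical illustrations, not FWPL's data.

# Hypothetical system-wide numbers for one collection category
# (say, adult fiction).
total_titles_in_system = 40_000    # adult fiction titles system-wide
total_system_checkouts = 200_000   # adult fiction checkouts system-wide

# One branch's share of that usage.
branch_checkouts = 18_000

# right size = (% of total system use) x (total titles in system)
usage_share = branch_checkouts / total_system_checkouts
right_size = round(usage_share * total_titles_in_system)

# Translating titles into shelving space; the presenters used different
# formulas for different collections, so this density is a placeholder.
titles_per_shelf_foot = 8
shelf_feet = right_size / titles_per_shelf_foot

print(f"Usage share: {usage_share:.1%}")                  # 9.0%
print(f"Right-size target: {right_size} titles")          # 3600 titles
print(f"Shelving needed: {shelf_feet:.0f} linear feet")   # 450 linear feet
```

One nice property of the formula: since each branch's usage shares sum to 100%, the branch targets add up (give or take rounding) to the system-wide title count - so it is really a reallocation of the existing collection, not a growth plan.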

Actually, I was quite intrigued by this use of market data.  I'm not sure how, if at all, such info could be used in an academic setting - the data is organized around households.  But I was inspired to think outside my little box.
