Thursday, September 27, 2012

Library 2.012 Virtual Conference - FREE!

Given all the cuts to libraries and particularly to librarian professional development (including travel), I actively seek out and like to promote opportunities that are cost-effective.  And how much more cost-effective can you get than free?  (OK, your time is money, but eliminating conference fees and travel expenses boosts the cost-effectiveness equation tremendously.)

The Library 2.012 Virtual Conference is next week and there are 148 presentations (and counting) scheduled over 2 days.  They cover six different "strands":

  • STRAND 1: Libraries – Physical and Virtual Learning Spaces

  • STRAND 2: Librarians & Information Professionals – Evolving Professional Roles in Today’s World

  • STRAND 3: Content & Creation – Organizing and Creating Information

  • STRAND 4: Changing Delivery Methods

  • STRAND 5: User Centered Access

  • STRAND 6: Mobile and Geo-Social Information Environments
Here is a sampling of the presentations:
OK, I confess...the list is a bit biased...the last two are mine.  I hope these will be a stimulating look at what our patrons have selected and how their selections compare with our print and EBSCO eBooks collections.  These will be my first presentations using WeCollaborate, which provides a slightly more interactive environment than simple webcasting.

But even if my topic of demand-driven acquisitions is not on your radar, consider taking a moment to look at the options available and take advantage of this opportunity to see what others have done.

Friday, September 21, 2012

Library 2.012: Worldwide Virtual Conference

The Library 2.012 Worldwide Virtual Conference will take place October 3-5, 2012.  This is a free Web-based conference that will include presentations essentially around the clock (since it’s around the world).  The topics will cover these areas:
STRAND 1: Libraries – Physical and Virtual Learning Spaces

STRAND 2: Librarians & Information Professionals – Evolving Professional Roles in Today’s World

STRAND 3: Content & Creation – Organizing and Creating Information

STRAND 4: Changing Delivery Methods

STRAND 5: User Centered Access

STRAND 6: Mobile and Geo-Social Information Environments

You can review the accepted presentations list to decide which ones to attend. 

Disclaimer: I will be making two presentations about our Demand-Driven Acquisitions program.

Wednesday, September 19, 2012

Library included in Princeton Review survey

Library Journal posted a piece about the 2013 Princeton Review college rankings, which now include an item about the library.  This is a survey of about 122,000 students asking their opinions of various aspects of the colleges they are attending.  The particular question was, “How do you rate your school’s library facilities?” and the responses are on a 5-point Likert scale ranging from "Excellent" to "Awful".  The Princeton Review then ranks the institutions by each item and provides two lists: "Best College Library" and "This is a Library?".  Like other surveys of students, these kinds of reports provide only limited information from a very limited viewpoint.  Students are not asked to make comparisons, and no other information is taken into account for the two lists.

Taking all this into consideration, it is interesting to see how some libraries fared.  Of the 20 in the "Best College Library" list, 6 (30%) were Public and 14 (70%) were Private.  Of the 20 in the "This is a Library?" list, only 3 (15%) were Public.  For comparison, only 5% of the "Students Study the Most" group were Public, but 85% of the "Students Study the Least" group were Public (hmmm...).   Interestingly, 20% of the "This is a Library?" group were also in the "This is a Dorm?" group, while 15% of the "Best College Library" group were also in the "Best College Dorms" group.  OK, so there isn't that much overlap there...

What about happy students?  Are the schools with the happiest students similar to the schools with the best libraries (survey says....)?  Well, not really...only 10% overlap.  However, 20% of the "This is a Library?" group were also in the "Least Happy Students" group -- the same schools that appeared in the "This is a Dorm?" group.  OK, so there seems to be a set of schools with very dissatisfied students.

Finally, what about financial aid and administration?  Well, 20% of the "Best College Library" group were in the "Best Run Colleges" group, while 25% of the "This is a Library?" group were in the "Administrators Get Low Marks" group...again, all but one of the same schools that overlapped in the other categories.  And 35% of the "Best College Library" group were also in the "Best Financial Aid" group (all private schools, of course) [NOTE: one of the Best College Library group was the US Military Academy - no financial aid required].  Conversely, only 10% of those in the "This is a Library?" group were in the "Financial Aid Not So Great" group.
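The overlaps above are simple set intersections between Top-20 lists.  A minimal sketch of the arithmetic, using invented placeholder school names rather than the actual Princeton Review entries:

```python
# Sketch of the list-overlap arithmetic: each Princeton Review list is a
# set of 20 schools, and "overlap" is just the intersection.  The school
# names here are invented placeholders, not the actual survey entries.
best_library = {"School A", "School B", "School C"}   # ...20 entries in practice
least_happy = {"School B", "School C", "School D"}    # ...20 entries in practice

overlap = best_library & least_happy                  # schools on both lists
pct_overlap = 100 * len(overlap) / 20                 # percent of a 20-school list

print(sorted(overlap), pct_overlap)
```

With full 20-school lists, each 5% of overlap corresponds to exactly one school appearing on both.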

OK, that's all well and good, but it is just a survey of students' opinions...which makes me wonder if there is some correlation with LibQUAL+.  Unfortunately, my university is not represented in the list.  Was that purposeful?  Did our administration opt out?  Oh well, the Princeton Review survey does not provide nearly the detail or information that LibQUAL+ does - we have no idea why Drexel was in the "This is a Library?" group when it has a prestigious School of Information (or did I just answer my own question?).

Tuesday, September 11, 2012


When I start a new project, the first step is usually a literature review.  I want to find out what others have learned about the problem or issue I am hoping to resolve with my project.  Given that I am starting three new projects, I have been doing a lot of reading lately, and this, of course, leads me on other tangents of ideas.  To provide some context, the three projects are:

  • Evaluating the impact of the libraries' resources on the success of grant applications.
  • Developing and implementing a Collection Assessment Plan.
  • Investigating the feasibility of essentially "classifying" the courses offered in order to more effectively assess the coverage of our collections.
So my literature reviews have covered citation analysis & bibliometrics, assessment of research and grants, collection development and assessment, and classification.  The tangents that I have wandered down include:
  • Development of a Research Impact Measurement Service (RIMS), as provided by the University of New South Wales, Australia.
    • Provide citation analysis and bibliometric analysis of publications for individual faculty, as well as departments and administrative units.  
    • See this presentation and this article for details.
  • Assessing the impact of the library on the local community by measuring links to the university on the Web sites of local organizations and public services.
    • My MLS professional paper (well, really, it was a thesis) analyzed the distribution of links on academic medical library Web sites.  This paper from JASIST evaluates the use of different methods to count links, which is what sparked this idea.
  • Assessing the impact of the libraries' digital collections on the research and education community by citation analysis and Web link analysis.  
    • This paper, also from JASIST, lit that spark in me.  It is too easy to stay focused on books, journals and databases when assessing the collections.  While our team regularly evaluates the usability and the content of our digital collections, there hasn't been as much research into their true impact in the communities they serve.
Of course, the common theme is assessment of our collections, but the directions from there are different, from extending into the local community to developing a new service for our faculty and administration.  This can be a very interesting job.
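At its core, the link-analysis idea is just counting anchors that point at a target domain across harvested pages.  A toy sketch using Python's standard-library HTML parser; the page snippet and the `example.edu` domain are invented for illustration:

```python
# Toy sketch of Web link analysis: count <a> tags whose href points at a
# target university domain.  In practice the pages would be harvested
# from local organizations' sites; here the HTML is a hard-coded sample.
from html.parser import HTMLParser

TARGET = "example.edu"   # hypothetical university domain


class LinkCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "a" and TARGET in dict(attrs).get("href", ""):
            self.count += 1


page = ('<p><a href="https://example.edu/library">Library</a> '
        '<a href="https://other.org">Other</a></p>')
counter = LinkCounter()
counter.feed(page)
print(counter.count)   # number of inlinks to the target domain
```

Real studies would also have to normalize hosts and deduplicate pages, which is exactly the kind of methodological variation the JASIST paper compares.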

Wednesday, September 5, 2012

Measures of reach

On the ACRL Value of Academic Libraries blog, Joseph Matthews discusses measures of reach and impact that were recommended waaaaay back in the mid-1990s (about the time I started my professional training).  Now, nearly 20 years later, we still haven't made much progress.  Part of that is will (only recently have we had the need to do this), and part of it is that it's just not that easy to get these measures.  Here are some examples, with my comments about the technical feasibility of obtaining these values:
  • The percent of courses with materials in the reserve reading room.
    • Possible, but difficult. This would require access to the courses database and manually matching up courses with data from the ILS.  
  • The percent of students enrolled in these courses who actually checked out/downloaded reserve materials.
    • Not terribly difficult, but not very accurate. A gross value could be generated simply by dividing the number of users with a status of "student" and at least one checkout by the total number of enrolled students.  But this is not really accurate - individuals in the numerator may or may not actually be counted in the denominator.
  • The percent of courses requiring term papers based on materials in the library’s collections.
    • Ugh - how would we do this?  I can't imagine the work involved in a census of all courses, but surveying a random sample of courses would provide fairly reliable estimates.
  • The percent of courses requiring students to use the library for research projects
    • See above...Possible to get estimates, but not easily.
  • The number of students who checked out library materials.
  • The number of undergraduate (and graduate) students who borrowed materials from the library.
    • These two are similar to the reserve-checkout item above - fairly easy to get rough estimates.
  • The number of library computer searches initiated by undergraduates.
    • This is virtually impossible to measure here.  Our users are not required to log into the databases unless they are off-campus.  While users of in-library computers must log in, far more users work on their own laptops and PCs.
  • Percent of library study spaces occupied by students.
    • We are finally starting to put card-swipes on some of our classrooms.  Eventually, we hope to put them on all study rooms.  But there are still many open areas where usage would be impossible to measure.
  • Number of pages photocopied by students.
    • Easy to get...practically useless now.  However, we can get usage of ebooks, including page and chapter downloads.  Unfortunately, we cannot yet distinguish user types because users usually do not need to log in.
  • Percent of freshmen students not checking out a library book.
    • Good idea - need to get a handle on non-usage.  This is the antithesis of items discussed above.
  • The percent of faculty who checked out library materials.
    • Fairly easy to get and fairly valid.  The faculty population is more stable than the student population, so a ratio of gross measures would likely be more valid.
  • The number of articles and books published by faculty members.
  • The number of references cited in faculty publications that may be found in the library’s collections.
    • Difficult but highly useful.  I'm working towards a systematic way of collecting this information.
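The gross-ratio estimates mentioned in several items above boil down to one division; a minimal sketch, with both numbers invented (in practice they would come from the registrar and an ILS patron export):

```python
# Sketch of the gross-ratio borrower estimate: ILS patrons with status
# "student" and at least one checkout, divided by total enrollment.
# Both figures below are invented for illustration.
enrolled_students = 25000      # total enrollment (from the registrar)
student_borrowers = 9500       # ILS "student" patrons with >= 1 checkout

pct_borrowed = 100 * student_borrowers / enrolled_students
print(f"{pct_borrowed:.1f}% of students borrowed at least one item")
```

The caveat from above applies: the ILS patron file and the enrollment count describe overlapping but not identical populations (e.g., recently graduated students may linger in the ILS), so this is a rough estimate, not a true percentage.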
In general, the recommendation was to provide:
  • The percent (undergraduates, graduate students, faculty, and researchers) who borrowed materials
    • Easy to get rough estimates
  • The percent who downloaded online materials
  • The percent who used the physical and virtual collections.
    • Virtually impossible in our environment.  
So, you can see, it takes more than simple recommendations.  The technical environment would need to be changed to support this.