Of course, the User Experience is the newest model to hit library assessment, so it was logical that Stephen then applied these criteria to evaluating online courses, asking questions pertinent to this library service.
I'm intrigued because I've been trying to develop my own model for evaluating collections that I could use on a regular basis. I've been fixated on a "tripod" model, with 3 sets of 3 criteria - Usage, Scope and Depth - but I haven't made much progress down this path. That is why this diagram caught my attention. While applied to online courses in this posting, these are the same kinds of criteria that have long been applied to collections of one kind or another.
- Useful: Stephen asks,
"What do your students do with the activities, materials, links and other content you include in your course? Does each of these items add to the understanding of a complex concept in some way? Or perhaps they directly support the achievement of an established learning objective in the course?"
- We could also ask, "What do your students (and faculty) do with the books, journals, databases, tutorials, and other content you offer in your library?"
- Usable: Stephen focuses on basic Web site usability issues, such as working links and applications. While we could likewise examine the usability of the Web interfaces to our collection (e.g. catalog, article databases, etc.), we could also evaluate the usability of our content itself (print books, ebooks, scores, etc.).
- Findable: While Stephen mentions "navigation and layout", findability of our resources is a very important aspect of collection assessment. Are the items cataloged accurately and to the appropriate level? Are journal articles findable using multiple access points (ejournals list, article databases, catalog, etc.)? What about the special collections?
- Credible: Stephen writes, "...it's important for the students to know the content is credible in terms of its source, purpose, currency and relevance" (original emphasis). These are among the most established criteria of collection assessment. The problem is how to measure them with some semblance of objectivity.
- Desirable: "How well does your course capture the interest of your students?" Stephen asks. Similarly, we could ask, "How well does our collection capture the interest of our students?" While some of our faculty may snort at such a concern, if our students are not interested in the works, even for required projects and courses, they will eschew them or use them ineffectively.
- Accessible: While Stephen's accessibility issues are focused on Web sites, we do need to consider other aspects, including physical accessibility. Are book stacks wheelchair accessible? Even for those without overt physical limitations - are the book shelves too high? too low? Are the books packed in too tightly? Do the special collections require too many people to "go through" to access? While remote shelving is becoming more common, is the delivery too cumbersome to use?
- Valuable: Stephen asks, "Does your content support your students in their pursuit of course learning objectives?" He quotes the author of the original article, "'the user experience must advance the mission'". So, do our collections advance the mission of the library?
This may prove to be a good model on which to base our regular collection assessment; or it may simply be a re-working of old ideas. I need to ponder this more....