Sunday, March 17, 2013

Research misconduct in LIS?

The editorial "Bogus Evidence" by R. Laval Hunsucker, in the latest issue of Evidence-Based Library & Information Practice (EBLIP), discusses at length the potential for research misconduct in the LIS field.  After setting the stage with the recent apparent rise in "questionable research practices" (QRPs) and outright fraud in the basic, medical, and social sciences, Laval turns our attention to our own profession, or rather, to the lack of attention our profession has given this issue.  He asks whether we, as members of the LIS profession, should consider research misconduct to be more, less, or equally prevalent compared with other research fields.  He admits that there is enough evidence to support any of these positions, yet not enough to reach any conclusion.  And that, Laval asserts, is the crux of the problem.  Why should we assume that we are any better (or worse) than any other field?  And if we are neither better nor worse, shouldn't we be concerned that we are equally bad?

Laval raises some very valid points, particularly regarding the difficulty of detecting research fraud.  Indeed, having been involved in a few research studies, I can imagine where fraud could occur, if so desired, particularly with data.  Auditing the collected data is rarely done, and yet it is, I believe, the weakest (or easiest) point.  I have heard of large surveys in which a single survey-taker fraudulently completed forms. Proper follow-ups detected the problem, but not before so many forms had been submitted that the integrity of the entire study was threatened.  But that is an easier kind of problem to detect, because the researchers themselves conducted the study with integrity.  The more difficult cases occur when the researchers manufacture the data.  Only a full data audit could detect this, but as mentioned above, such audits are rarely done because they are difficult and time-consuming (and thus cost money).

So, like so many researchers, those conducting studies in LIS are trusted to collect, analyze, and report their data in an unbiased and appropriate manner with little oversight.  Is this trust justified?  It appears that in this day and age of competition for jobs, promotions, and respect, people are growing more deceitful.  But could we not say, too, that in this day and age of growing transparency with the Internet, people are growing more skeptical and distrustful?

Professionally, I'm less concerned with outright fraud in the LIS literature than with QRPs stemming from poor training and limited knowledge.  This is particularly true of studies conducted by practicing librarians like myself, rather than by LIS research faculty who have completed more formal training and apprenticeship in research, in the form of the dissertation.  Most practicing physicians do not initiate and formally conduct clinical studies.  Many participate in research, but very few actually develop proposals, gather the data, analyze the results, and write the papers for their own studies.  Yet academic librarians are very often expected to do all of this themselves, often with less research training than physicians receive.  Why, then, should we not expect QRPs to occur?  Laval himself notes that the "good news, and the other important difference, is that genuinely fraudulent research in LIS is almost certainly far less prevalent than sloppy research in LIS."

Certainly, the apparent increase in research misconduct needs more study in order to establish an environment in which our research can be trusted.  But judging all studies with a jaundiced eye will make it harder to move ideas into practice.  Laval discusses the many proposed solutions, ranging from changes to rewards and incentives to formalized research integrity training.  Like any complex problem with multiple roots, no solution addressing just one of them will work by itself.  But, as the saying goes, recognizing that the problem exists is the first step.

Saturday, March 2, 2013

New results from impact studies

The ACRL Value of Academic Libraries site recently highlighted two articles from the University of Minnesota, led by Shane Nackerud.  Both articles appear in the April 2013 issue of portal: Libraries and the Academy, although I'll be linking to their institutional repository copies.  The first article describes how the data were collected and analyzed and provides basic demographics of the users of the library's services.  The research questions in this paper were:
  • Do sufficient measures exist to determine what services individual library patrons use?
  • Do the Libraries reach the majority of students in some way?
  • Do students in different colleges use library materials and services in different ways?
  • How does undergraduate library use compare to that of graduate students?
With increasing use of methods that capture, essentially, who uses each service, the authors were able to link usage with demographic and academic data about each user.  The "access points," or services, for which data were captured include:

  • material circulation
  • ILL requests
  • library workstation logins 
  • usage of electronic resources for those who were off-campus or those who logged into the library workstations
  • attendance at workshops
  • reference consultations
  • course-integrated instruction (through Blackboard)
While not a complete set of data from all library services, this set does represent much of what the library provides.  Missing are usage of electronic resources by those on campus and not using library workstations, in-house use of materials, brief reference transactions, and visits to the library.  The authors admit that such data, especially the last, would be essential for measuring "library as place," but they express concern that reaching so far could affect usage of the very services they would be measuring.

But it's still a big data set: over 1.5 million transactions from over 61,000 unique users.  The authors were able to work with their Office of Institutional Research (OIR) to get the demographic and academic data.  This step has been a common obstacle to such impact research for many libraries, whose own OIRs were reluctant to share the information.  The solution seems to be to split the data collection between the two campus units: the library gathers the user identifiers, and the OIR attaches the demographic/academic data, returning an anonymized data set to the library.  This way, neither party has access to the entire identifiable data set, securing privacy that much more.
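That split-custody workflow can be sketched in a few lines. Everything here is hypothetical (the record shapes, field names, and sample values are mine, not the paper's); the point is only that the join happens on the OIR side and the identifier is dropped before the data come back to the library.

```python
# Hypothetical sketch of the split-custody workflow: the library holds only
# service transactions keyed by student ID; the OIR holds the demographics.
library_transactions = [
    {"student_id": "S001", "service": "circulation"},
    {"student_id": "S002", "service": "ill"},
    {"student_id": "S001", "service": "workshop"},
]
oir_records = {
    "S001": {"college": "CLA", "level": "undergrad"},
    "S002": {"college": "CSE", "level": "grad"},
}

def oir_merge(transactions, demographics):
    """OIR-side join: attach demographics to each transaction, then drop
    the identifier, so the library gets back an anonymized data set."""
    merged = []
    for t in transactions:
        demo = demographics.get(t["student_id"])
        if demo is not None:
            merged.append({"service": t["service"], **demo})  # no student_id
    return merged

anonymized = oir_merge(library_transactions, oir_records)
```

The library can then analyze usage against demographics without ever holding a row that names a student.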

With the complete, anonymized data set, the librarians were able to run a correlation analysis to determine whether academic achievement was in any way associated with library usage of any kind.  Pretty basic...and, to no one's surprise, there was a significant correlation.  This is very much in line with other similar studies, such as the Library Impact Project.  I found this part interesting, though (emphasis added):
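For readers who haven't run one, a correlation analysis of this kind reduces to computing Pearson's r between a per-student usage measure and an achievement measure. A minimal sketch with invented toy numbers (the function and data are mine, not the paper's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical toy data: per-student count of library interactions vs. GPA.
usage = [0, 1, 2, 4, 5, 8, 10, 12]
gpa = [2.6, 2.8, 3.0, 2.9, 3.1, 3.3, 3.2, 3.5]

r = pearson_r(usage, gpa)  # positive r: heavier users tend to have higher GPA
```

A positive r says only that the two measures move together; on its own it says nothing about which way any causal arrow points, which is exactly why the second paper's regression models matter.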
Already library staff have been able to share this data with University deans and administration and the feedback has been both positive and somewhat unexpected. For example, while University administrators have been enthused by the results, they are also not surprised. It seems intuitive that libraries should be able to demonstrate appropriate levels of usage, and that usage should result in increased academic success.
How frustrating!  University administration puts libraries (and others) under pressure to justify our value and our impact on student achievement, then says, "So?"!   Maybe this demonstrates a need to find out exactly what measures administrators expect.

In the second paper, the authors look at the subset of the population that could provide the clearest association between library usage and academic outcomes: first-year, non-transfer students.  This group has the fewest confounders, such as previous college experience, to muddy the results.  The authors looked at the effects of library usage on both student achievement (grades) and retention.  Their statistical analyses were more sophisticated than those you typically see in LIS research: not only t-tests and chi-squared tests to determine the significance of differences between groups, but also effect sizes (medium) and multiple linear and logistic regression.  They attribute the (relative) richness of this analysis to their own outreach to campus statisticians, and they recommend that libraries not try to do it all themselves.  Hear, hear.  The value of doing this analysis is that the authors demonstrated not only the size of the relationships they found (significantly positive) but also the limitations of those relationships (most correlations were small).
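As a rough illustration of the group-comparison half of that toolkit, here are Welch's t statistic and Cohen's d (a standard effect-size measure) in plain Python, applied to invented GPA samples; none of the paper's actual data or values are reproduced here.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    """Sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    return (mean(a) - mean(b)) / math.sqrt(var(a) / len(a) + var(b) / len(b))

def cohens_d(a, b):
    """Effect size: difference of means in pooled-standard-deviation units."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * var(a) + (nb - 1) * var(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled)

# Hypothetical GPA samples for library users vs. non-users.
users = [3.1, 3.4, 2.9, 3.5, 3.2, 3.6, 3.0, 3.3]
non_users = [2.8, 3.0, 2.7, 3.1, 2.9, 3.2, 2.6, 3.0]

t = welch_t(users, non_users)
d = cohens_d(users, non_users)
```

The t statistic answers "is the difference distinguishable from noise?", while d answers "how big is it?" — which is why reporting both, as the authors did, is stronger than a p-value alone.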

With their models, however, they were able to demonstrate a significant effect of any library usage on GPA while controlling for demographic factors.  Essentially, users of library services had a GPA 0.23 points higher than non-users, and usage of the library accounted for 12.4% of the difference between the groups.  Given how many factors went into the model (12), that's a bigger chunk than expected (1/12th, or 8.3%).  The second model broke the services out.  Not unexpectedly, these individual effects were much, much smaller.  The only services that showed statistically significant effects were database use, book loans, and workstation logins; while the effects were small, these are services that would be used repeatedly over the course of a semester. The totality of service usage represented a larger share of the effect: 13.7%.
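"Controlling for demographic factors" means the usage indicator enters a multiple regression alongside the covariates, so its coefficient reflects the usage effect net of those covariates. A self-contained sketch via the normal equations, with made-up data (a usage dummy plus one invented covariate standing in for the model's twelve factors):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least-squares coefficients via the normal equations X'X b = X'y."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    Xty = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    return solve(XtX, Xty)

# Hypothetical rows: [intercept, used_library (0/1), entering_test_score].
X = [[1, 1, 28], [1, 0, 25], [1, 1, 30], [1, 0, 27],
     [1, 1, 26], [1, 0, 29], [1, 1, 31], [1, 0, 24]]
y = [3.3, 2.9, 3.5, 3.1, 3.1, 3.2, 3.6, 2.8]

coef = ols(X, y)  # coef[1]: GPA difference for users, net of the covariate
```

The key point is that coef[1] here is smaller than the raw user/non-user GPA gap, because the covariate absorbs part of the difference; that is exactly the adjustment behind the paper's 0.23-point figure.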

Finally, their logistic regression models were built to predict student retention from either usage of any service or usage of specific services.  This kind of model demonstrates the strength of the relationships by showing how library services can predict, or explain, outcomes.  This is a key aspect of research: can a variable predict a specific outcome?  If it can, then it can be used to change the outcome.  These models were both significant, even when adjusting for demographic factors.  Another important feature of logistic regression analysis is the calculation of the odds ratio (OR), which measures the size of the effect of a variable on the outcome.  In this case, students who used any of the library's services had 1.54 times the odds of continuing to the next semester compared with those who did not.  However, few of the individual services showed significant effects on retention, likely due in part to small sample sizes (few attendees).
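The odds ratio itself is simple to compute from a 2×2 table of usage against retention. The counts below are invented, chosen only so the result lands near the paper's reported 1.54:

```python
def odds_ratio(table):
    """Odds ratio from a 2x2 table [[a, b], [c, d]], where rows are
    users / non-users and columns are retained / not retained."""
    (a, b), (c, d) = table
    return (a / b) / (c / d)

# Invented counts: rows = users, non-users; columns = retained, not retained.
table = [[5200, 800],
         [3800, 900]]

or_users = odds_ratio(table)  # close to 1.54 by construction
```

One caution in reading such numbers: an OR of 1.54 means the *odds* of retention among users are about one and a half times the odds among non-users; when retention rates are high, as they usually are, the corresponding ratio of retention *probabilities* is smaller.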

So, what does this all mean?  Using these moderately sophisticated statistical analyses is very much like triangulation: analyzing the data from different angles to see the true picture.  That picture shows a modest relationship between usage of any of the library's services and both student achievement and retention.  However, picking out which services had the biggest effects is more difficult.  The linear model showed database logins, workstation logins, and materials circulation as having a small effect; this doesn't show, though, in the logistic model.  More evidence, then, is needed.

It is somewhat disappointing that the more interpersonal services, such as instruction and consultation, showed much lower effects.  This, I imagine, is due in no small part to the size of the data set.  Usage of these services was much lower compared to the more self-service, well, services. This could hide any association for the less-used services because of higher standard errors.  The other problem with studying such low-usage services is selection bias.  If this can be controlled, for example by randomly selecting classes to receive instruction, then the effects should be clearer.

These articles, I think, are invaluable to the effort of demonstrating value.  Like all applied research, they are but pieces in the overall puzzle.  They are not sufficient for the argument on their own, but with more such studies filling in the gaps of knowledge, the picture becomes clearer and clearer.  It would be nice, however, if those who are the intended audience of such studies (presumably the campus decision-makers) would show their interest.