
Working on a framework: benchmarks are a process.


Along with TechSoup, the ALA Office for Information Technology Policy (OITP) is one of the thirteen organizations working to develop a beta set of national public access technology benchmarks for public libraries. We’d like to thank Sarah for the opportunity to introduce ourselves and share some of our thoughts and experiences from working on this project.

Over the last decade, OITP has played a leadership role in advocating for policies supporting e-rate, high-speed broadband for libraries, and copyright through its Program on Networks and Program on Public Access to Information. More recently, the new Program on America’s Libraries for the 21st Century has recognized cutting-edge technology practices in all types of libraries. And in the last year, we’ve been the convener of ALA member activities related to e-content and digital literacy. As you all know, technology is moving faster and faster, and OITP works with and for library staff to ensure libraries are at the table in national policymaking, as well as developing tools and resources to support libraries in leveraging technology.

In fact, my colleague Rick Weingarten (who Sarah referenced in an earlier post) and others in OITP facilitated the development of the Principles for a Networked World in 2001 (adopted 2003). The principles addressed equitable access, intellectual freedom, infrastructure and information literacy, among other key areas.

These and similar experiences brought us to this benchmarks initiative – which has at its heart the important and daunting goals of providing guideposts for continuous improvement in library public access technology and advocating for continued re-investment in these technology resources.

One of the first questions I am asked when talking about this project is: How will it work? How can one set of benchmarks fit our diverse library community? In fact, this is one of the most challenging elements of this complex project – and of any similar national effort – because we know the benchmarks must be relevant to libraries and communities of all types and sizes.

As part of this project, ALA OITP conducted a literature review to learn more about benchmarking efforts in libraries of all kinds, as well as in other fields. The review identified many important considerations, including:

  • Affordability: How can we make sure that the cost of using the framework makes sense in terms of its benefits?
  • Clarity: How can we ensure that the framework is as simple and as clear as possible?
  • Relevancy: How can we design a framework that is truly useful to its users and stakeholders?
  • Solvability: At what levels should we set benchmarks so that they are motivational, yet possible to meet?
  • Portability: How can the framework be designed so that it is meaningful, yet general enough to apply to multiple contexts?
  • Scalability: What would allow the framework to scale to work in contexts with different starting points or levels of preparedness?
  • Impact: Will the framework have the desired impact on, and usefulness for, the target stakeholder groups?
  • Understandability: Will the target stakeholder groups have enough knowledge to successfully implement the framework?
  • Rewards and Consequences: Are the rewards for participating clearly understood and motivating, and are the consequences for not participating equally clear?
  • Comparability: To what degree will the framework allow participants to accurately compare their “findings” to those of other participants?

One way to address different contexts is to avoid being too prescriptive. For instance, the 5-Step Animal Welfare Rating that Sarah referenced earlier, which evaluates the lives of farm animals used for meat, is designed to apply to a wide range of farms. One way the rating allows for this is by using ranges rather than fixed numbers. For example, instead of requiring that all highest-rated broiler chickens travel from farm to store in precisely one hour in a particular type of truck, the standard simply caps transport time at two hours – recognizing that many factors (such as traffic) can affect delivery time for a particular farm on any given day, and that there are many ways of getting to the store.

For libraries, a framework might stipulate, for example, that providing public Internet access is critical, but not specify how it is done. A specific number of desktop workstations, for instance, may not be the ideal measurement, since access could be provided through Internet-enabled laptops or mobile devices owned by the library, through WiFi for patrons using their own devices, or some combination of these, depending on the needs of a particular community.

I have to admit, some days this work can feel overwhelming in trying to find the balance between simplicity and complexity or specificity and generalizability. But most of the time, I feel really engaged and challenged to apply everything I’ve learned as a lifelong library user (dating back to the bookmobile that visited my town of 800), a recent library school graduate (2006) and 11 years working with and for libraries and library staff while at the ALA. I really look forward to sharing a first draft that others in this very diverse community can review, argue over, revise and improve so we can meet our shared goals of continuous improvement and sustainable re-investment.

Larra Clark

Associate Director, Program on America’s Libraries for the 21st Century

ALA Office for Information Technology Policy

 


Would you like to receive updates about the benchmarks initiative or provide feedback on draft benchmarks? Please sign up for updates or share your thoughts.

