Competency N

Evaluate programs and services using measurable criteria.

Statement of Competency

Librarians and information professionals must have the ability to assess the effectiveness of programs and services and use the findings for intervention or improvement.

Librarianship is a service profession, so customer satisfaction is a top priority. To ensure that our library programs and services are attaining the objectives set at their conception, we need evaluation plans in place, with instruments that measure success against established criteria.

Evaluation Measures

Traditionally, libraries track usage and operational measures. Operational statistics are used for determining resource allocation, efficiencies, and budget (Hosseini-Ara & Jones, 2013, Paragraph 26). Quantitative data from these evaluations can include circulation and user statistics, shelf counts, and the number of library orientation classes. Using these data, we can look at the peaks, dips, outliers, and patterns and interpret what they mean. Peggy D. Rudd (2000), Director and Librarian of the Texas State Library and Archives Commission, states that “libraries collect and report a variety of data to respond to surveys, to prepare annual reports, to measure progress, to assess standards, or support long-range planning and budgeting” (p. 18).
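
As a simple illustration of this kind of analysis (my own sketch, not taken from any of the cited sources), a year of monthly circulation counts could be scanned for peaks, dips, and outliers with a few lines of Python; the figures and the two-standard-deviation threshold below are made up:

    # Sketch: flag peaks, dips, and outliers in monthly circulation counts.
    # The figures and the two-standard-deviation threshold are illustrative only.
    from statistics import mean, stdev

    monthly_circulation = [1200, 1150, 1300, 980, 2600, 1250,
                           1100, 900, 1180, 1220, 1190, 1270]

    avg = mean(monthly_circulation)
    sd = stdev(monthly_circulation)

    for month, count in enumerate(monthly_circulation, start=1):
        if abs(count - avg) > 2 * sd:
            label = "outlier - investigate"
        elif count == max(monthly_circulation):
            label = "peak"
        elif count == min(monthly_circulation):
            label = "dip"
        else:
            label = "typical"
        print(f"Month {month:2d}: {count:5d}  {label}")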

Customer satisfaction is also measured, usually through surveys (Dysart, 2014, Slides 6-7). Satisfaction measures tell us whether our patrons are happy with the resources we provide or the way we serve them, and the results tell us where we can make improvements. But even together, these measures (usage and satisfaction) do not create a complete picture of a program’s value to its users. Furthermore, the way we deliver library programs and services has evolved to the point that these primary measurement systems have become inadequate.

Jane Dysart (2014) of Dysart and Jones Associates adapted a popular quote to illustrate the point: “If our presence can’t add value to their lives, our absence will make no difference” (Slide 10). This is why outcome-based evaluation (OBE) is now preferred. This type of measure goes deeper than the previous two by asking questions such as “Are we adding value?” or “What difference are we making?” (Dysart, 2014, Slide 8). In order to answer these questions, we must know which impacts are important to community members; funders want to see that community members experience these impacts and benefit from them directly.

Hosseini-Ara & Jones (2013) posited that libraries struggle to define and capture measures that convey their value to decision makers because libraries do not set targets for their measures. Libraries need to ask what positive impact they have on a community segment. As the authors said, “Libraries exist to have a positive impact on people’s lives” (Paragraph 6). Therefore, we need to measure the impact of our services and determine whether they are contributing to the success of the community members we serve.

Outcome-Based Evaluation

Before explaining what outcome-based evaluation is, it is necessary to define what an “outcome” is. IMLS (2015) defines outcomes as “benefits to people, specifically, achievements, or changes in skill, knowledge, attitude, behavior, condition, or life status for program participants.” If a project is designed to achieve these benefits, then it has outcome goals, and outcome-based evaluation (OBE) is the measurement of those results (IMLS, 2015, Paragraph 3).

According to IMLS (2015), libraries and museums need to conduct outcome-based evaluations because many funders have turned to these evaluations to determine the responsible distribution of their resources. Libraries and museums can use the information gleaned from these evaluations to justify the creation of new programs and services, to support expansion, or to request support for professional development.

How is outcome-based evaluation different? Libraries and museums usually measure usage and satisfaction; outcome-based evaluation measures value or importance. Julia Blixrud (2012) of the Association of Research Libraries explained another reason why we should use outcome-based evaluation: there is increased pressure from funders and accreditation agencies for organizations to show how well they are serving their communities. Organizations must demonstrate their efficiency and effectiveness in order to be considered for funding.

The OBE process helps libraries work toward their goals by focusing on their programs and providing tools for monitoring progress during the life of the project. Evaluation is most effective when it is included in the planning of the project. The Knight Foundation (2011), in its publication IMPACT: A planning guide to evaluating community information projects, explains the four essential steps for designing an evaluation of an information project. The first is to describe the project and identify the target audience. The second is to identify the evaluation’s purpose and key questions. The third is to design the evaluation using effective methods. The fourth is to communicate and report the evaluation findings so they can inform decisions and actions (p. 4). The guide suggests website analytics, social media analysis, surveys, and interviews as effective methods for evaluation.

Rudd (2000) assures librarians: “While outcome measurement may at first seem very different from the traditional program or service model, in fact it incorporates all the traditional library measurement (inputs, activities, outputs) while adding only the element of outcomes” (p. 20).

Dysart (2014), in her “Library and Research Services for Parliaments Management Workshop,” said that successful measurers (assessors, evaluators) are clear on their purpose, the culture in which they operate, what is important and of value to their stakeholders, what they need to do, and how the library can expedite or enhance people’s ability to do what they want or need to do (Slide 4). When we pull together our statistics and measures, we can convey a clearer picture to decision makers. The data will tell our story and become evidence of our needs and our value.

Justification of Evidence

  1. Virtual Reference Transcript Analysis (LIBR 210 Reference and User Services)

This was an interesting project that I shared with a classmate from my reference class. We had two transcripts of virtual reference interactions to analyze. We analyzed the transcripts using Cassell and Hiremath’s (2013, p. 17) six-step reference interview process, the Guidelines for Behavioral Performance of Reference and Information Service Providers from the Reference and User Services Association (RUSA, 2004), and the Digital Reference Guidelines from the International Federation of Library Associations (IFLA, n.d.).

In analyzing the quality of a virtual reference interview, we had to judge it on two levels: first, whether the librarian provided the requested information; and second, whether the interaction between the librarian and the patron was positive or negative. A successful reference interview, whether in person or online, should provide the patron with the best possible information and have a positive impact on the patron so he/she will see the value of consulting a librarian. As Durrance (1989) said, “Satisfaction is the willingness to return to or work with a librarian in the future” (p. 34). We felt that establishing rapport is the first step toward a satisfactory reference interview. Librarians have to be both friendly and professional, which is not an easy thing to do, especially in a virtual chat. The librarian and patron can’t see each other and can’t rely on visual cues, so it’s important to be careful with how the “conversation” is perceived. Virtual reference is certainly a challenge.

Librarians also have to ask open-ended questions, which give the patron an opportunity to expound on the information need. Once the information need is determined, the librarian has to develop a search strategy. The RUSA (2004) guidelines describe an effective searcher as one who works with the user to find out what searches the user has already done; constructs a competent and complete search strategy, explains it to the patron, and provides pointers and search paths to follow to answer the information question. It’s not enough to just find the information; the librarian must also evaluate it to see if it is indeed what the user is looking for. And finally, the librarian must confirm that the patron’s question was completely answered and invite the patron to use the library’s services again before closing the interview.

The transcripts were quite interesting and revealing. They were certainly a clear illustration of the do’s and don’ts of virtual reference interviews. As with face-to-face interviews, librarians should provide service that results in satisfied and smiling customers.

  2. Risk Assessment (LIBR 282 Project Management)

This artifact is one of the assignments for my project management class. The purpose of the assignment was to create and evaluate a risk management plan, which consists of three parts: risk identification, risk assessment, and risk mitigation. In project management, a risk is defined as a possible future event that could have a positive or negative effect, though most of the time a risk event is associated with a loss. Libraries today create programs and projects to remain relevant and demonstrate value, so it’s important to be able to assess the components of a project to determine whether or not it is viable.

For the assignment, I had to write a short description of the organization I was working for and what I perceived as issues with the organization. Since I was working for a for-profit university library, the bottom line was always money, and because the library does not generate revenue, the administration did not think it was worth investing in.

I had to determine the risks the library faced, assess how much those risks could impede library operations, and present them in the form of a chart based on Michalko et al.’s (2010) article on risk and systemic change (p. 9). I categorized the risks into the following: value proposition, human resources, and durable goods. Then I briefly explained how I decided on and assigned the ratings.

As with any risk assessment, it is important not only to identify the risks but also to determine plans for mitigation. To manage the risks that require immediate attention, I created a chart that is also based on Michalko et al.’s (2010) article. The chart is divided into three sections: low, medium, and high. Each risk level has combinations of three actions: consider, ignore, and take action. Each risk is assigned a number and placed in the appropriate section. This creates an excellent visual both for identifying risks and for deciding what action to take.
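
To give a sense of how such a chart works, here is a minimal sketch in Python. The risks, ratings, and thresholds are hypothetical rather than the ones from my assignment, and it simplifies the chart by mapping each risk level to a single action:

    # Sketch: score hypothetical risks and map each one to a level and an action.
    # Likelihood and impact are rated 1-3; the thresholds below are arbitrary.
    risks = {
        "Perceived lack of value to the institution": {"likelihood": 3, "impact": 3},
        "Loss of experienced staff": {"likelihood": 2, "impact": 2},
        "Aging furniture and equipment": {"likelihood": 1, "impact": 2},
    }

    def risk_level(likelihood, impact):
        """Combine the two ratings into a low/medium/high level."""
        score = likelihood * impact
        if score >= 6:
            return "high"
        if score >= 3:
            return "medium"
        return "low"

    actions = {"high": "take action", "medium": "consider", "low": "ignore"}

    for number, (name, rating) in enumerate(risks.items(), start=1):
        level = risk_level(rating["likelihood"], rating["impact"])
        print(f"Risk {number}: {name} -> {level} ({actions[level]})")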

  3. Electronic Records Management Systems (ERMS) Evaluation (LIBR 284 Tools, Services, and Methodologies for Digital Curation)

An ERMS is a software application that manages digital information such as text documents, spreadsheets, images, and even email. An ERMS is used to capture, describe, and secure these materials while also allowing the organization to easily retrieve, access, and reuse the data.

For the fourth week of our digital curation class, our instructor gave us three ERM systems to evaluate: Utah ERM (Utah State Archives), DoD 5015.2 (U.S. military), and MoReq 2010 (a non-profit foundation). It was a bit difficult to evaluate them against a uniform set of criteria because they did not name their requirements in the same way, or requirements with the same name carried different meanings. The Utah ERM (2008), because it is used for archives, is most concerned with preservation and accessibility. DoD (2007) and MoReq (2011) both declare standards and express the need for metadata. Nevertheless, each system’s main goal is to best manage electronic records based on the needs of its user community.

To evaluate these systems comparatively, I created a chart with the three systems in columns and listed the criteria required by each system. Authenticity is a major requirement; it ensures the integrity of the record. Accessibility is another requirement; it makes the record available for retrieval and reuse. Security is, of course, another major concern: records should only be accessible to authorized users and must meet security standards. Standards apply not only to security but also to the metadata that is included with the records. Metadata is what makes records truly accessible, and the systems call for ISO standards for metadata and for all storage media. Other requirements include backward compatibility, archival custody, preservation, and records tracking and destruction.
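
A chart like this can be represented very simply; the short Python sketch below prints a criteria-by-system comparison using an abbreviated, illustrative set of criteria rather than my full list:

    # Sketch: a criteria-by-system comparison chart, printed as a simple table.
    # The X marks are illustrative, not a definitive reading of the standards.
    systems = ["Utah ERM", "DoD 5015.2", "MoReq 2010"]
    criteria = {
        "Authenticity":  [True, True, True],
        "Accessibility": [True, True, True],
        "Security":      [True, True, True],
        "Metadata":      [False, True, True],
        "Preservation":  [True, False, False],
    }

    header = f"{'Criterion':<15}" + "".join(f"{s:>12}" for s in systems)
    print(header)
    for criterion, required in criteria.items():
        row = f"{criterion:<15}" + "".join(f"{('X' if r else '-'):>12}" for r in required)
        print(row)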

Of the three systems, the Utah ERM is the one concerned with the whole digital curation life cycle, which is not uncommon for institutions such as archives, repositories, and libraries. In academic settings, the DCC Curation Lifecycle Model (2015) is a good plan to follow because it covers all the essential phases of the curation lifecycle. There is no universal records management system, which can lead to accessibility issues between organizations. However, one must remember that an ERMS is designed for specific users, so its standards depend on the needs of the community using it.

Evidence

Virtual Reference Transcript Analysis

Risk Assessment 

Electronic Records Management Systems

References

Blixrud, J.C. (2012). Evaluating library service quality: Use of LibQUAL+TM. Retrieved from http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1556&context=iatul

Casey, M.E. & Savastinuk, L.C. (2006). Service for the next generation. Library Journal, 131(1). Retrieved from http://cil733.pbworks.com/f/Library+2.0+Journal.pdf

Cassell, K.A. & Hiremath, U. (2013). Reference and information services in the 21st century: An introduction. Chicago, IL: Neal-Schuman.

DCC. (2015). DCC curation lifecycle model. Retrieved from http://www.dcc.ac.uk/resources/curation-lifecycle-model

Department of Defense. (2007). Electronic records management software applications design standard (DoD 5015.2). Retrieved from http://www.dtic.mil/whs/directives/corres/pdf/501502std.pdf

DLM Foundation. (2011). Modular requirements for records systems. Retrieved from http://www.moreq.info/files/moreq2010_vol1_v1_1_en.pdf

Durrance, J.C. (1989). Reference success: Does the 55 percent rule tell the whole story? Library Journal, 114(7), 31-36.

Dysart, J. (2014). Library and research services for parliaments management workshop [Presentation slides]. Paris, August 2014. Retrieved from http://www.ifla.org/files/assets/services-for-parliaments/preconference/2014/evaluating_services.pdf

Hosseini-Ara, M. & Jones, R. (2013). Overcoming our habits and learning to measure impact. Computers in Libraries, 33(5). Retrieved from http://www.infotoday.com/cilmag/jun13/Hosseini-Ara_Jones–Overcoming-Our-Habits-and-Learning-to-Measure-Impact.shtml

International Federation of Library Associations and Institutions. (n.d.). IFLA Digital Reference Guidelines.

Institute of Museum and Library Services. (2015). Evaluation resources. Institute of Museum and Library Services. Retrieved from https://www.imls.gov/research-evaluation/evaluation-resources

Knight Foundation. (2011). IMPACT: A planning guide to evaluating community information projects. Retrieved from http://www.knightfoundation.org/media/uploads/publication_pdfs/Impact-a-guide-to-Evaluating_Community_Info_Projects.pdf

Reference and User Services Association. (2004). Guidelines for behavioral performance of reference and information service providers. Chicago, IL: American Library Association.

Rudd, P.D. (2000). Demonstrating the values of libraries through outcome measurement. In Perspectives on outcome based evaluation for libraries and museums. Washington, DC: IMLS. Retrieved from https://www.imls.gov/sites/default/files/publications/documents/perspectivesobe_0.pdf

University of Wisconsin. (2014). Logic model – Templates for creating a logic model. Retrieved from http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

Utah Department of Administrative Services. (2008). Electronic records management business case. Division of State Archives. Retrieved from http://archives.utah.gov/recordsmanagement/erm/ERMBusinessCase.pdf
