Information Literacy Assessment Methods for Academic Libraries

Author: Cristina Horak

This tutorial aims to inform our academic library administration team about the most suitable methods of evaluating Information Literacy Training (ILT), based on thorough bibliographic research. It is intended to support our approach to ILT assessment and to ensure compliance with the ACRL’s Information Literacy Competency Standards for Higher Education (2000), since it is important to adopt an up-to-date evaluation methodology that both demonstrates students’ IL proficiency and analyzes the effectiveness of course pedagogy.


Annotated Bibliography

  • Standards
  • Book
  • Case Studies Articles
  • Further Readings




This compiled annotated bibliography, including published guidelines, books, case study articles, and suggestions for further reading, aims to support the evaluation methods of information literacy training (ILT) assessment to be practiced by academic libraries, in accordance with the ACRL’s Information Literacy Competency Standards for Higher Education (2000).


Standards and Guidelines

American Library Association (2000). Information Literacy Competency Standards for Higher Education. http://www.ala.org/acrl/standards/informationliteracycompetency

      These ACRL standards establish the framework for the scope, delivery, and assessment of information literacy services, as well as performance indicators and desired outcomes. Compliance with them is authoritative when evaluating ILT assessment methods.

      The ACRL’s Information Literacy Competency Standards for Higher Education (2000) provide reputable performance indicators for assessing higher education students as information literate. These learning outcome indicators are framed on an intellectual development ladder based on Bloom’s cognitive taxonomy, in which the learner grows from lower-order questions (knowledge, comprehension, and application) to higher-order questions (analysis, synthesis, and evaluation).

     Accordingly, the ACRL standards promote the following learning achievements: the student will master the ethical, accurate, and creative use of information by recognizing an information need and selecting relevant sources; by applying search strategies across different sources; by distinguishing relevant sources from irrelevant or inaccurate ones and determining the adequacy of information, regardless of format and medium; by determining whether bibliographic resources substantially support academic research while applying publication-style rules to academic work; and by exercising an inquisitive mind with a sense of mutual understanding in constructing and applying new knowledge. In addition, students will develop the dexterity to become critical lifelong learners and, consequently, to manage their own learning paths.


Radcliff, C. J. (2007). A practical guide to information literacy assessment for academic librarians. Westport, Conn: Libraries Unlimited Inc.

This book provides background knowledge for the practice of IL assessment. It covers levels of assessment, how to choose the proper assessment tool, types of tools and methodologies, and how to analyze data and apply results.

In this book, the following types of tools and methodologies are detailed:

  1. Informal assessment methods include informal observation, questioning, and self-reflection monitoring, drawing immediate and spontaneous feedback.
  2. Classroom assessment techniques (CATs) embedded in ILT detect student learning immediately, allowing on-the-spot review and improvement.
  3. Survey methods help generalize participants’ perceptions, attitudes, and opinions.
  4. Interviewing methods allow in-depth exploration of issues identified by other assessment methods, gathering qualitative data from the user’s perspective.
  5. Focus group techniques promote discussion of users’ thoughts and feelings regarding a selected topic while yielding richer assessment data.
  6. Knowledge tests measure students’ IL proficiency by quizzing content and case scenarios covered during instruction.
  7. Concept maps represent and organize students’ acquisition, comprehension, and integration (or not) of new knowledge, revealing misconceptions and therefore areas for improvement.
  8. Performance assessment, based on well-designed assignment prompts and rubrics, can evaluate students’ application of higher-order cognitive skills through the processes and products they generate.
  9. Portfolios demonstrate students’ development and application of information literacy competencies while incorporating student self-reflection throughout the work’s development.

Alongside the purpose of each tool and methodology, implementation criteria such as cost feasibility, process, librarian involvement, resources, products, and outcomes should be considered. The authors list the following parameters to consider when evaluating assessment methodologies:

  1. time to prepare, administer, and analyze;
  2. level of financial commitment, as well as level of assessment (classroom, programmatic, or institutional);
  3. type of information gathered about the student (affective: feelings, perceptions, or opinions; behavioral: what students do with acquired knowledge; and cognitive: what students know);
  4. access to participants and student engagement;
  5. stakeholders’ consent and collaboration; and
  6. assessment know-how required and developed.

Case Study Articles

The following case study articles present a combination of assessment methods from “forced-choice to constructed-response or performance” (Mueller, 2013b). They can be used to support and enrich the decision-making process of selecting the most suitable ILT assessment method. Although the scenarios are distinct, these case studies are a useful tool for understanding the implementation and outcomes of an ILT assessment method.

Hill, J., Macheak, C., & Siegel, J. (2013). Assessing undergraduate information literacy skills using project SAILS. Codex (2150-086X), 2(3), 23-37.

      This article presents a case study of ILT at the University of Arkansas at Little Rock, applying the Project SAILS assessment to two populations: freshmen and seniors. Freshmen’s ILT evaluation is done through tests during library classes, while seniors’ ILT evaluation is collected via the Blackboard online learning management system. Test scores were compared externally with peer institutions while providing demographic and academic data.

Hoffmann, D., & LaBonte, K. (2012). Meeting information literacy outcomes: Partnering with faculty to create effective information literacy assessment. Journal of Information Literacy, 6(2), 70-85.

      This paper outlines an authentic assessment methodology built around a rubric and a specialized assignment (student writing assignments) used to gather data over a three-year period at California State University Channel Islands. The article examines diverse types of authentic IL assessment, such as performance assessment (rubrics), portfolio assessment, and self-assessment. The evaluation method emphasizes assessing students as they reapply acquired IL knowledge, the “importance of learning the process” (Hoffmann & LaBonte, 2012) of recognizing, retrieving, and applying pertinent information.

Knecht, M., & Reid, K. (2009). Modularizing information literacy training via the Blackboard eCommunity. Journal of Library Administration, 49(1/2), 1-9. doi:10.1080/01930820802310502

      This article presents comparative and immediate learning-outcome data from a cyclical, online pre-/post-test ILT methodology using the Blackboard eCommunity feature. The pre- and post-tests provide useful correlation studies and percentage rates employed in progress, annual, and accreditation reports, as well as self-studies.

Larsen, P., Izenstark, A., & Burkhardt, J. (2010). Aiming for assessment. Communications in Information Literacy, 4(1), 61-70.

      This article reports a large-scale assessment project of a 3-credit, full-semester information literacy course at the University of Rhode Island. The library instruction faculty adapted the open-source Bay Area Community College Information Competency Proficiency Exam, mapping its questions to determine students’ proficiency, teaching effectiveness, and compliance with the ACRL’s Information Literacy Competency Standards for Higher Education.

Mery, Y., Newby, J., & Peng, K. (2011). Assessing the reliability and validity of locally developed information literacy test items. Reference Services Review, 39(1), 98-122.

      This report describes an evaluation model adherent to the ACRL’s Information Literacy Competency Standards for Higher Education, based on the SAILS Project’s eight skill sets and adapted both to fulfill local assessment needs and to increase librarians’ expertise in demonstrating accountability. This ILT case study at the University of Arizona allows quantitative analysis of test results to identify students’ performance, as well as qualitative analysis to evaluate and improve the IL course.

Oakleaf, M. (2008). Dangers and opportunities: A conceptual map of information literacy assessment approaches. Libraries and the Academy 8(3), 233-253. DOI: 10.1353/pla.0.0011

      This article identifies three major assessment approaches: (1) fixed-choice tests or standardized tests such as SAILS Project, (2) performance assessments, and (3) rubrics. For each approach, it considers the benefits and limitations regarding learning assessment, data management, and development and implementation.

Tancheva, K., Andrews, C., & Steinhart, G. (2007). Library instruction assessment in academic libraries. Public Services Quarterly, 3(1-2), 29-56.

      This article reports the combination of three complementary assessment methods for a library instruction program: attitudinal (to improve marketing and user satisfaction), outcomes-based (to measure the effectiveness of library instruction), and gap-measure (to identify and compare patrons’ and instructors’ relevancy perceptions before and after training sessions).

Van Cleave, K. (2008). The self-study as an information literacy program assessment tool. College & Undergraduate Libraries, 15(4), 414-431. doi:10.1080/10691310802554887

      This article presents a self-study ILT assessment method: survey questions based on the Association of College and Research Libraries’ (ACRL) Characteristics of Programs of Information Literacy That Illustrate Best Practices: A Guideline, covering programming, teaching preparation, teaching methods, materials/support, assessment, and the training process, submitted to and answered by 20 teaching librarians. Quantitative data were gathered, comments were summarized, and results were shared among librarians and stakeholders.

Further Reading

Besides the selected case studies and background information above, the following additional articles were collected to supplement, corroborate, and illustrate related assessment methodology development.

American Library Association. (2012). Characteristics of programs of information literacy that illustrate best practices: A guideline. College & Research Libraries News, 73(6), 355-359.

This article presents recommendations of best practice for stakeholders in developing, implementing and assessing IL initiatives by academic libraries. It covers mission, goals and objectives, planning, administrative and institutional support, articulation, collaboration and pedagogy.

Blummer, B., Kenton, J. M., & Liyan, S. (2010). The design and assessment of a proposed library training unit for education graduate students. Internet Reference Services Quarterly, 15(4), 227-242. doi:10.1080/10875301.2010.526491

This article presents behaviorist and constructivist approaches to teaching IL, applying a blend of assessment methodologies: website tests, student grades, librarian and course-instructor self-assessment, focus groups, pre- and post-tests, individual assessment, and program assessment.

Carter, T. M. (2013). Use what you have: Authentic assessment of in-class activities. Reference Services Review, 41(1), 49-61.

This article discusses the use of authentic assessment within individual sessions of course-integrated information literacy instruction.

Catalano, A. (2010). Using ACRL standards to assess the information literacy of graduate students in an education program. Evidence Based Library & Information Practice, 5(4), 21-38.

This article presents the outcome results of a survey based on the ACRL’s IL standards, applied to all graduate students to investigate their IL skills perception and knowledge.

Emde, J., & Emmett, A. (2007). Assessing information literacy skills using the ACRL standards as a guide. Reference Services Review, 35(2), 210-229. doi:10.1108/00907320710749146

The article presents a study in which pre- and post-tests were applied over a three-year period to assess graduate students’ information literacy skills against the desired learning outcomes of the ACRL’s standards.

Haras, C., Moniz, J., & Norman, A. (2010). Listening to the customer: Using assessment results to make a difference. Library Leadership & Management, 24(2), 91-99.

This article compiles the panel discussion about assessment practices, focusing on the proper alignment of inputs (assessment requirements), outputs (assessment resources) and outcomes (assessment goals) for IL programs.

Johnson, W. (2009). Developing an information literacy action plan. Community & Junior College Libraries, 15(4), 212-216.

This article exemplifies the need for an information literacy action plan, elevating it to institutional-level planning.

Lewis, J. (2011). Using LibQUAL+® survey results to document the adequacy of services to distance learning students for an accreditation review. Journal of Library & Information Services in Distance Learning, 5(3), 83-104.

This article presents the LibQUAL+® survey, developed by the Association of Research Libraries, as one tool academic libraries can use to demonstrate compliance with accreditation standards in the distance learning environment.

McMillen, P., & Deitering, A. (2007). Complex questions, evolving answers: Creating a multidimensional assessment strategy to build support for the ‘Teaching Library’. Public Services Quarterly, 3(1-2), 57-82.

This article presents a blended assessment model for ILT, using Project SAILS to collect quantitative data and focus groups to collect qualitative data.

Mueller, J. (2013a). Assessments of information literacy: Available online [directory]. http://jfmueller.faculty.noctrl.edu/infolitassessments.htm

This online directory provides links to forced-choice, constructed-response, and performance ILT assessment examples, as well as links to the Authentic Assessment Toolbox created by Mueller.

Mueller, J. (2013b). Authentic assessments toolbox. http://jfmueller.faculty.noctrl.edu/toolbox/index.htm

This website is a comprehensive guide to authentic assessment methodology, providing an explanation of the approach and a comparison with traditional (test-based) assessment, along with standards, applications and examples, a glossary, workshops, and the author’s biography.

Oakleaf, M. (2009). Guiding questions for assessing information literacy in higher education. Libraries and the Academy, 9(2), 273-286.

This article presents a guided methodology to assist librarians in evaluating and determining the adequacy of an assessment methodology.  

Oakleaf, M. (2009). Using rubrics to assess information literacy: An examination of methodology and interrater reliability. Journal of the American Society for Information Science & Technology, 60(5), 969-983.

This article defines rubrics and discusses their benefits and limitations as an ILT assessment tool. It presents an interrater-reliability statistical analysis of a rubric-based ILT assessment methodology, applied to open-ended survey questions embedded in online library orientation tutorials.


Seeking to fulfill stakeholders’ needs, satisfaction, and approval while increasing student learning, information literacy initiatives are accountable and challenging library endeavors, made more so by the development of ever more complex assessment methodologies. The specialized literature presented here aims to support our library team in evaluating the most practicable and suitable ILT assessment methodology for academic libraries, grounded in the ACRL’s Information Literacy Competency Standards for Higher Education and drawn from an examination of the breadth of valuable peer libraries’ best practices and experiences.