Assessment Glossary

Accreditation – as practiced by WASC and other regional accrediting associations, a voluntary, non-governmental, peer-based form of quality assurance at the institutional level. To receive or reaffirm accredited status, institutions demonstrate that they comply with state and federal law and meet the accrediting association’s standards. Accrediting associations must be recognized by the U.S. Department of Education, which is advised by the National Advisory Committee on Institutional Quality and Integrity (NACIQI), in order for their accredited institutions to qualify for federal grants and loans to students.

Analytic Rubric – an analytic rubric breaks down the characteristics of a student artifact to be evaluated and therefore provides a separate assessment of each component. Analytic rubrics have three parts: criteria, which establish the standards to be assessed; performance indicators, which describe the level of achievement for each criterion; and a rating scale for the performance indicators. A sketch of this structure appears below.
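
To make the three-part structure concrete, here is a minimal Python sketch of an analytic rubric represented as a data structure. The criteria, performance indicators, and four-point scale are hypothetical examples, not prescribed by this glossary.

    # Rating scale for the performance indicators (hypothetical 4-point scale).
    RATING_SCALE = {4: "Exemplary", 3: "Proficient", 2: "Developing", 1: "Beginning"}

    # Criteria, each with performance indicators at the top and bottom of the scale.
    RUBRIC = {
        "Thesis": {4: "Clear, arguable, and insightful", 1: "Absent or unclear"},
        "Evidence": {4: "Well chosen and fully integrated", 1: "Missing or irrelevant"},
        "Organization": {4: "Logical throughout", 1: "No discernible structure"},
    }

    def score_artifact(ratings):
        # Report each criterion separately (the defining feature of an analytic rubric).
        return {criterion: (r, RATING_SCALE[r]) for criterion, r in ratings.items()}

    print(score_artifact({"Thesis": 4, "Evidence": 3, "Organization": 2}))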

Assessment – the process by which programs and institutions articulate what students should learn, determine how students demonstrate that learning, think critically about how effectively their methods support student learning, and make action plans based on the results (from WSCUC Assessment 101, 2016).

Close the Loop (or closing the loop) – a crucial step in the assessment cycle: analyzing the evidence gathered from an assessment activity and using the results to improve student learning.

Direct Assessment – the evaluation of products that students produce in order to learn and to demonstrate their learning (e.g., papers, projects, presentations, performances, posters, tests, theses, and dissertations).

Formative Assessment – information gathered during the learning process that enables the instructor to provide feedback to the learner and thereby enhance student learning. An example of formative assessment is feedback provided on drafts of student work.

Holistic Rubric – a rubric that yields a single overall score for an entire product, rather than separate scores for individual criteria (contrast with Analytic Rubric).

Indirect Assessment – focuses on perceptions of student learning. These perceptions can be gathered from any stakeholder (e.g., students, faculty, alumni), often through surveys, focus groups, or self-reflections.

Inter-Rater Reliability – the degree of agreement among raters/evaluators/assessors. For assessment purposes, inter-rater reliability ensures that those engaged in the assessment process are evaluating material using a shared understanding of the criteria. The higher the inter-rater reliability, the more confidence one can place in the results.
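
One widely used statistic for quantifying agreement between two raters is Cohen’s kappa, which corrects raw percent agreement for the agreement expected by chance. This glossary does not prescribe a particular statistic, so the Python sketch below, with hypothetical rater scores, is purely illustrative.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # kappa = (p_o - p_e) / (1 - p_e)
        n = len(rater_a)
        # Observed agreement: share of artifacts on which the raters match.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement, from each rater's marginal distribution of scores.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(counts_a[s] * counts_b[s] for s in counts_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Two raters scoring six artifacts on a 1-4 rubric scale (hypothetical data).
    print(cohens_kappa([4, 3, 3, 2, 4, 1], [4, 3, 2, 2, 4, 1]))  # ~0.78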

Program Review – a systematic process of examining the capacity, processes, and outcomes of a degree program or department in order to judge its quality and effectiveness and to support improvement.

Rubric – quite simply, a scoring guide. A rubric can be used to evaluate an individual student’s performance on an assignment or to assess the learning of a cohort of students.

Sample Size – the number of artifacts assessed. The greater the sample size, the more reliably the results represent the abilities of the entire population. Qualtrics provides a good sample size calculator here: https://www.qualtrics.com/blog/calculating-sample-size/. The standard calculation behind such tools is sketched below.
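
Calculators like the one linked above typically use the standard sample-size formula for a proportion, n = z²·p(1−p)/e², with a finite-population correction. A minimal Python sketch follows; the 95% confidence level, 5% margin of error, and cohort of 200 are illustrative assumptions, not recommendations from this glossary.

    import math

    def sample_size(population, z=1.96, margin=0.05, p=0.5):
        # z = 1.96 corresponds to 95% confidence; p = 0.5 is the most
        # conservative (largest-sample) assumption about the proportion.
        n = (z ** 2) * p * (1 - p) / (margin ** 2)
        # Finite-population correction: fewer artifacts are needed when
        # the whole population is small.
        return math.ceil(n / (1 + (n - 1) / population))

    # Artifacts to sample from a cohort of 200 students (hypothetical figures).
    print(sample_size(200))  # -> 132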

Summative Assessment – information gathered at the conclusion of a learning experience to provide a picture of student abilities at that point in time. An example of a summative assessment is a final or exit examination.

Triangulation – the use of multiple, complementary sources of evidence and data to answer questions about student learning.