Terms and Definitions

Assessment

Over the years, assessment has been used to describe either a process toward improvement or a process toward accountability, and sometimes both. At Macalester, our goal for assessment is the continual improvement of the quality of the curricular and co-curricular programming the College offers its students. Following are a few descriptions of assessment that we have found especially useful:

“Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning.” (Huba and Freed, p. 8)

“…A systemic and systematic process of examining student work against our standards of judgment, it enables us to determine the fit between what we expect our students to be able to demonstrate or represent and what they actually do demonstrate or represent at points along their educational careers. Beyond its role of ascertaining what students learn in individual courses, assessment, as a collective institutional process of inquiry, examines students’ learning over time. It explores multiple sources of evidence that enable us to draw inferences about how students make meaning based on our educational practices.” (Maki, p. 2)

“Assessment is more than the collection of data. To make assessment work, educators must be purposeful about the information they collect. As a basis for data gathering, they must clarify their goals and objectives for learning and be aware of where these goals and objectives are addressed in the curriculum…Hutchings and Marchese suggest that the meaning of assessment is captured best by its key questions. What should college graduates know, be able to do, and value? Have the graduates of our institutions acquired this learning? What, in fact, are the contributions of the institution and its programs to student growth? How can student learning be improved?” (Palomba and Banta, p. 4)

Direct v. Indirect Assessment

A variety of tools may be used for different types of assessment. It is important that the tools selected are a good “fit” with the goals and objectives of the intended learning outcomes. Ideally, emphasis should be placed on direct assessment methods; however, to capture the full range of learning that takes place, a well-balanced assessment plan should include a variety of assessment methods. This section introduces some basic terms and types of assessment.

Direct Assessments

Direct assessments provide students the opportunity to show what they know. They “…prompt students to represent or demonstrate their learning or produce work so that observers can assess how well students’ texts or responses fit institutional or program-level expectations.” (Maki, p. 88)

The use of direct assessment requires clear objectives, and a set of criteria by which the work will be evaluated. (Walvoord, p. 13) The use of rubrics is recommended to help define the expectations for a given task, and to aid in the process of comparing work over time. Examples of rubrics are available in Assessment Examples and Resources.

Examples of direct assessment:

  • Capstone experience. As noted in the Macalester College catalog, students are required to complete a capstone experience: “The purpose of the capstone requirement is to give students experience with reading original research literature, doing original work, or presenting a performance. The requirement may be met in many ways, e.g. senior seminar, independent project, or honors project. The means of completing this experience are designated by the departments...” The capstone experience typically culminates in a significant effort such as a major research paper/project with oral presentation, often including peer and faculty review, presenting a paper at a conference, etc.
  • MAI (Macalester Assessment Instrument)
  • CLA (Collegiate Learning Assessment)
  • Portfolios, or e-portfolios, are collections of student work over a period of time. A portfolio may include a student’s work from the beginning of their college career to graduation, or any other portion of that time, such as within a particular class or department.
  • Internal and external juried reviews (e.g. speeches, recitals, performances in the arts, exhibitions or colloquia)
  • Oral exams
  • Individual or group projects (peer evaluations and/or faculty or staff rating)
  • Public presentations
  • One-minute paper
  • Embedded assignments, such as test questions or essay questions embedded in the course
  • National testing within a discipline or licensure exams (e.g. Major Field Achievement Test)
  • Evaluations completed by an internship supervisor

Please see Tools for Implementation for descriptions and specific examples of direct assessment tools.

Indirect Assessments

In addition to what students are able to “produce” as a result of their learning experiences, value is also placed on students’ perceptions of those experiences.

Indirect assessments “…capture students’ perceptions of their learning and the educational environment that supports that learning, such as access to and the quality of services, programs, or educational offerings that support their learning…By themselves, results of indirect methods cannot substitute for the evidence of learning that direct methods provide. They can, however, contribute to interpreting the results of direct methods…” (Maki, pp. 88-89)

Examples of indirect assessment:

  • Self-reported student experiences, such as those included in NSSE
  • Satisfaction surveys
  • Alumni surveys
  • Exit interviews with graduates
  • Group discussions
  • Employer surveys

Please see Tools for Implementation for descriptions and specific examples of indirect assessment tools.

Formative v. Summative Assessment

Most methods of indirect or direct assessment may be either formative or summative, depending on the particular design and when the assessment is introduced. Many grant proposals require a combination of both formative and summative assessment.

Formative Assessment

Formative assessment seeks “evidence of learning along the progression of students’ studies.” (Maki, p. 89) It is used to understand a student’s progress or a program’s effectiveness in moving toward a goal. It may be thought of as a diagnostic assessment tool, whereby faculty or program managers verify whether progress is being made as expected. Because formative assessment is implemented throughout the learning process, faculty or program managers may make any needed changes relatively quickly, while the student is still in the class, department, or program. One example of formative assessment is the “one-minute paper,” which is intended to identify whether students understood the “key takeaway” for a given class period.

Formative assessment may be used to track progress toward successful attainment of learning outcomes, and help to identify whether changes are necessary in order to meet the goals.

Summative Assessment

Summative assessment is used to determine whether a goal has been met; it documents achievement of institution-level and program-level learning goals.

Qualitative v. Quantitative

Both qualitative and quantitative methods bring valuable information to light, but they yield different types of results. Review the type of information needed when selecting a method, and also consider whether qualitative and quantitative research could be used together. Please contact the Assessment Office if you would like to brainstorm options.

Qualitative Methods

Qualitative research uses open-ended questions to gain an in-depth understanding of the questions being explored. Common qualitative techniques include focus group discussions, mini-groups, and in-depth interviews, conducted either face-to-face or by telephone.

Quantitative Methods

“Quantitative methods are distinguished by their emphasis on numbers, measurement, experimental design, and statistical analysis. Researchers typically work with a small number of predetermined response categories to capture various experiences and perspectives of individuals. Often emphasis is on analyzing a large number of cases using carefully constructed instruments that have been evaluated for their reliability and validity (Patton, 1990). Techniques include questionnaires, structured interviews, and tests.” (Palomba and Banta, p. 337)

Sources

Allen, Mary J. “From Assessment to Academic Excellence: Intentionally Mapping Student Success.” 2008 AAC&U General Education and Assessment Conference. February 21, 2008.

Huba, Mary E. and Freed, Jann E. Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning. Needham Heights, MA: Allyn and Bacon, 2001.

Maki, Peggy. Assessing for Learning. Sterling, VA: American Association for Higher Education, 2004.

Palomba, Catherine A. and Banta, Trudy W. Assessment Essentials. San Francisco: Jossey-Bass Publishers, 1999.

Schuh, John H. and Upcraft, M. Lee. Assessment Practice in Student Affairs: An Applications Manual. San Francisco: Jossey-Bass, 2001.

Walvoord, Barbara E. Assessment Clear and Simple. San Francisco: Jossey-Bass, 2004.

Additional Resources

Dictionary of Student Outcome Assessment created by James Madison University