Validating Assessment for Learning: Consequential and Systems Approaches


James Patrick Van Haneghan

Abstract

To evaluate assessment for learning adequately, expanded approaches to validity need to be considered. The purpose of this manuscript is to explore what is necessary to evaluate claims that assessment facilitates learning. Messick’s (1994) concept of consequential validity provides one lens for determining the learning consequences of assessment. His approach treats learning consequences as a special case of the general concept of consequential validity and suggests they be evaluated from that perspective. The systems approach developed by Frederiksen and Collins (1989) provides another perspective on how assessments can be designed with learning consequences in mind. Their model links assessments to learning transparently by making teaching to the test a valid activity. The adequacy of both models for evaluating claims of learning from assessment is explored. Based on this analysis, a model for evaluating the validity of evidence for assessment for learning is outlined.


How to Cite

Van Haneghan, J. P. (2009). Validating assessment for learning: Consequential and systems approaches. Journal of MultiDisciplinary Evaluation, 6(12), 23–31. https://doi.org/10.56645/jmde.v6i12.237

Section: Assessment for Learning

References

Black, P., & Wiliam, D. (1998). Inside the black box. Phi Delta Kappan, 80, 139.

Bransford, J. D., & Schwartz, D. L. (1999). Rethinking transfer: A simple proposal with multiple implications. Review of Research in Education, 24, 61-100. https://doi.org/10.2307/1167267

Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and uses. Educational Measurement: Issues and Practice, 22(4), 5-12. https://doi.org/10.1111/j.1745-3992.2003.tb00139.x

Chi, M. (2006). Laboratory methods for assessing experts' and novices' knowledge. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 167-184). New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511816796.010

Ericsson, K., & Ward, P. (2007). Capturing the naturally occurring superior performance of experts in the laboratory: Toward a science of expert and exceptional performance. Current Directions in Psychological Science, 16, 346-350. https://doi.org/10.1111/j.1467-8721.2007.00533.x

Frederiksen, J., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18(9), 27-32. https://doi.org/10.3102/0013189X018009027

Frederiksen, J. R., & Collins, A. (1996). Designing an assessment system for the workplace of the future. In L. B. Resnick, J. Wirt, & D. Jenkins (Eds.), Linking school and work: Roles for standards and assessment (pp. 193-221). San Francisco: Jossey-Bass.

Mansell, W. (2008, June 20). Every school to get a champion of assessment for learning. Times Educational Supplement, p. 7.

Marshall, H. (1988). In pursuit of learning-oriented classrooms. Teaching and Teacher Education, 4, 85-98. https://doi.org/10.1016/0742-051X(88)90010-8

Messick, S. (1994). The interplay of evidence and consequences in the validation of performance assessment. Educational Researcher, 23(2), 13-23. https://doi.org/10.3102/0013189X023002013

Messick, S. (1996). Validity and washback in language testing. Research Report, Educational Testing Service. (ERIC Document Reproduction Service No. ED403277). https://doi.org/10.1002/j.2333-8504.1996.tb01695.x

Mislevy, R. J. (2006). Cognitive psychology and educational assessment. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 257-305). Westport, CT: American Council on Education/Praeger.

Mislevy, R. J. (2007). Validity by design. Educational Researcher, 36, 463-469. https://doi.org/10.3102/0013189X07311660

Moss, P. A. (2003). Reconceptualizing validity for classroom assessment. Educational Measurement: Issues and Practice, 22(4), 13-25. https://doi.org/10.1111/j.1745-3992.2003.tb00140.x

Pellegrino, J., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academies Press.

Schultz, K. S., & Whitney, D. J. (2004). Measurement theory in action. Thousand Oaks, CA: Sage.

Shepard, L. (2006). Classroom assessment. In R. L. Brennan (Ed.), Educational measurement (4th ed., pp. 623-646). Westport, CT: American Council on Education/Praeger.

Shute, V. J. (2008). Focus on formative assessment. Review of Educational Research, 78, 153-189. https://doi.org/10.3102/0034654307313795

Smith, J. K. (2003). Reconsidering reliability in classroom assessment and grading. Educational Measurement: Issues and Practice, 22(4), 26-33. https://doi.org/10.1111/j.1745-3992.2003.tb00141.x

Sweller, J. (2006). Discussion of "Emerging topics in cognitive load research: Using learner and information characteristics in the design of powerful learning environments." Applied Cognitive Psychology, 20, 353-357. https://doi.org/10.1002/acp.1251

Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: Harvard University Press.

Ward, H. (2008, October 17). Assessment for learning has fallen prey to gimmicks, says critic. Times Educational Supplement, p. 18.