Rethinking Evaluation Methodology


Michael Scriven

Abstract

Medicine, engineering, and evaluation have a highly significant common feature: they completely ignored the ban on evaluation that controlled the social sciences through most of the last century. What is it that doctors do in their core practice? They diagnose disease and malfunction, they recommend treatment, they encourage good health. What do engineers do? Amongst other tasks, they work out why the bridge failed, why the plane crashed, and how to correct the underlying errors and build better structures thereafter. And evaluators do the same with programs or policies or products or personnel—find the best, improve the flawed, report on the worst. It is the core nature of these essentially practical enterprises to be evaluative: they are not just describing or explaining or predicting how the world is, but trying to improve it. They simply did not take seriously the claim that science, by its essential nature, had to be ‘value-free.’ Are there any lessons to be learnt from the methods used by our fellow-practitioners in these highly evaluative disciplines?



How to Cite
Scriven, M. (2010). Rethinking Evaluation Methodology. Journal of MultiDisciplinary Evaluation, 6(13), i-ii. https://doi.org/10.56645/jmde.v6i13.264
Section
Editorial
