Metaevaluation in Practice: Selection and Application of Criteria
Abstract
This paper examines the practice of metaevaluation, understood in the sense of the Metaevaluation standard of the Program Evaluation Standards: the evaluation of a specific evaluation to inform stakeholders about that evaluation's strengths and weaknesses. The findings from an analysis of eighteen metaevaluations are reported, including a description of the data sources and methods used to reach conclusions about each evaluation and the criteria of quality employed. A diverse set of practices was identified, ranging from the use of emergent criteria in a narrative review of information about an evaluation to the structured application of the Program Evaluation Standards using a checklist. The paper concludes that the evaluation field does not have a common understanding of metaevaluation practice.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users may copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org.
References
Benson, A. P., Hinn, D. M., & Lloyd, C. (Eds.). (2001). Visions of quality: How evaluators define, understand and represent program quality. Advances in Program Evaluation (Vol. 7). New York: Elsevier.
https://doi.org/10.1016/S1474-7863(2001)7
Bickman, L. (1997). Evaluating evaluation: Where do we go from here? Evaluation Practice, 18(1), 1-16.
https://doi.org/10.1016/S0886-1633(97)90003-9
Brandon, P. (1998). A meta-evaluation of schools' methods for selecting site-managed projects. Studies in Educational Evaluation, 24(3), 213-28.
https://doi.org/10.1016/S0191-491X(98)00014-5
*Burbules, N. C. (2000). Meta-evaluation of the Milwaukee Teacher Education Center evaluation. In M. Chandler, R. Stake, M. Montavon, G. Hoke, R. Davis, J-H. Lee, & S. Rierson, Final report: Evaluation of the MTEC Alternative Teacher Education program (Appendix A). Chicago: University of Illinois, Center for Instructional Research & Curriculum Evaluation.
Bustelo, M. (2002, October). Metaevaluation as a tool for the improvement and development of the evaluation function in public administrations. Paper presented at the meeting of the European Evaluation Society Conference, Seville. Retrieved October 24, 2007, from http://evaluationcanada.ca/distribution/20021010_bustelo_maria.pdf
Cleave-Hogg, D., & Byrne, P. N. (1988). Evaluation of an innovation in a traditional medical school: A metaevaluation. Evaluation & the Health Professions, 11, 249-271.
https://doi.org/10.1177/016327878801100207
Cook, T. D., Cooper, H., Cordray, D. S., Hartmann, H., Hedges, L. V., Light, R. J., Louis, T. A., & Mosteller, F. (1992). Meta-analysis for explanation: A casebook. New York: Russell Sage Foundation.
Cook, T. D., & Gruder, C. L. (1978). Metaevaluation research. Evaluation Quarterly, 2(1), 5-51.
https://doi.org/10.1177/0193841X7800200101
Cooksy, L. J., & Caracelli, V. J. (2005). Quality, context, and use: Issues in achieving the goals of metaevaluation. American Journal of Evaluation, 26(1), 31-42.
https://doi.org/10.1177/1098214004273252
Cooksy, L. J., & Caracelli, V. J. (2007, November). The practice of metaevaluation: Does evaluation practice measure up? Panel presentation at the meeting of the American Evaluation Association, Baltimore, MD.
Curran, V. R. (2000). An eclectic model for evaluating Web-based continuing medical education courseware systems. Evaluation & the Health Professions, 23, 318-347.
https://doi.org/10.1177/01632780022034633
Datta, L. (1997). Multimethod evaluations: Using case studies together with other methods. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century: A handbook (pp. 344-359). Thousand Oaks, CA: Sage.
https://doi.org/10.4135/9781483348896.n24
Datta, L. (1999). CIRCE's demonstration of a close-to-ideal evaluation in a less-than-ideal world. American Journal of Evaluation, 20, 345-354. https://doi.org/10.1177/109821409902000215
Dickersin, K., & Berlin, J. A. (1992). Meta-analysis: State-of-the-science. Epidemiologic Reviews, 14, 154-176. https://doi.org/10.1093/oxfordjournals.epirev.a036084
Dillman, D. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). San Francisco: Wiley.
*Farrar, E., & House, E. R. (1983). The evaluation of Push/Excel: A case study. In A. S. Bryk (Ed.), Stakeholder-based evaluation (pp. 31-57). New Directions for Program Evaluation, 17.
https://doi.org/10.1002/ev.1324
*Finn Jr., C. E., Stevens, F. I., Stufflebeam, D. L., & Walberg, H. J. (1997). A meta-evaluation. In H. L. Miller, Jr. (Guest Ed.), The New York City Public Schools Integrated Learning Systems Project: Evaluation and meta-evaluation. International Journal of Educational Research, 27(2), 159-174.
https://doi.org/10.1016/S0883-0355(97)90031-8
Grasso, P. G. (1999). Meta-evaluation of an evaluation of reader focused writing for the Veterans Benefits Administration. American Journal of Evaluation, 20, 355-371. https://doi.org/10.1177/109821409902000216
Greene, J. C. (1992). A case study of evaluation auditing as metaevaluation. Evaluation and Program Planning, 15(1), 71-74.
https://doi.org/10.1016/0149-7189(92)90063-Z
*Greene, J. C. (1999). Meta-evaluation: Evaluation of the VBA Appeals Training Module. Unpublished paper, University of Illinois at Urbana-Champaign.
*Greene, J. C., Doughty, J., Marquart, J. M., Ray, M. L., & Roberts, L. (1988). Qualitative evaluation audits in practice. Evaluation Review, 12, 352-375.
https://doi.org/10.1177/0193841X8801200402
*Greene, J. C., Dumont, J., & Doughty, J. (1992). A formative audit of the ECAETC year 1 evaluation: Audit procedures, findings, and issues. Evaluation & Program Planning, 15, 81-90.
https://doi.org/10.1016/0149-7189(92)90065-3
Guba, E. G., & Lincoln, Y. S. (1981). Effective evaluation. San Francisco: Jossey-Bass.
Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park, CA: Sage.
Hanssen, C. E., Lawrenz, F., & Dunet, D. O. (2008). Concurrent metaevaluation: A critique. American Journal of Evaluation, 29, 572-582.
https://doi.org/10.1177/1098214008320462
*Hartmann, D., & Loizides, G. (2001). Metaevaluation of the Web-based ATE survey evaluation system. Western Michigan University: Kercher Center for Social Research. Retrieved July 24, 2008, from the Western Michigan University Evaluation Center Web site: http://www.wmich.edu/evalctr/ate/webbasedmetafinal.pdf
Henry, G. T. (2001). How modern democracies are shaping evaluation and the emerging challenges for evaluation. American Journal of Evaluation, 22(3), 419-429. https://doi.org/10.1177/109821400102200320
*House, E. R. (1987). The evaluation audit. Evaluation Practice, 8(2), 52-56. https://doi.org/10.1177/109821408700800208
House, E. R. (1988). Jesse Jackson and the politics of charisma: The rise and fall of the PUSH/Excel Program. Boulder, CO: Westview.
*House, E. R., Glass, G. V., McLean, L. D., & Walker, D. F. (1978). No simple answer: A critique of the Follow Through evaluation. Harvard Educational Review, 48(2), 128-160.
https://doi.org/10.17763/haer.48.2.j2167r4594027x87
Joint Committee on Standards for Educational Evaluation. (1981). Standards for the evaluation of educational programs, projects, and materials. New York: McGraw-Hill.
Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards: How to assess evaluations of educational programs. Thousand Oaks, CA: Sage.
*Kemmis, S. (1997). Metaevaluation executive summary. In R. Stake, R. Davis, & S. Guynn, Evaluation of Reader-Focused Writing for the Veterans Benefits Administration (pp. 143- 148). Retrieved June 25, 2008, from http://www.ed.uiuc.edu/CIRCE/RFW/15metaeval.pdf
Kirkhart, K. E. (1995). Seeking multicultural validity: A postcard from the road. American Journal of Evaluation, 16, 1-12.
https://doi.org/10.1016/0886-1633(95)90002-0
Leeuw, F. L. & Cooksy, L. J. (2005). Evaluating the performance of development agencies: The role of metaevaluations. In G. K. Pitman, O. N. Feinstein, & G. K. Ingram (Eds.), Evaluating development effectiveness (pp. 95-108). World Bank series on evaluation and development (Vol. 7). New Brunswick: Transaction.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.
Lipsey, M. W., Crosse, S., Dunkle, J., Pollard, J., & Stobart, G. (1985). Evaluation: The state of the art and the sorry state of the science. In D. S. Cordray (Ed.), Utilizing prior research in evaluation planning (pp.7-28). New Directions for Program Evaluation, 27.
https://doi.org/10.1002/ev.1398
*Lynch, D. C., Greer, A. G., Larson, L. C., Cummings, D. M., Harriett, B. S., Dreyfus, K. S., & Clay, M. C. (2003). Descriptive metaevaluation: Case study of an interdisciplinary curriculum. Evaluation & the Health Professions, 26, 447-461.
https://doi.org/10.1177/0163278703258099
*McKinley, K. H. (1999). Metaevaluation report of the Evaluation of the Michigan Public School Academy Initiative. Retrieved July 24, 2008, from the Western Michigan University Evaluation Center Web site: http://www.wmich.edu/evalctr/charter/reports/metaeval.html
*Migotsky, C., & Stake, R. (2001). An evaluation of an evaluation: CIRCE's metaevaluation of the site visits and issue papers of the ATE program evaluation. Retrieved July 24, 2008, from the Western Michigan University Evaluation Center Web site: http://www.wmich.edu/evalctr/ate/sitevisitmetafinal.pdf
Mixed-method Collaboration. (1994). Mixed-method evaluation: Developing quality criteria through concept mapping. Evaluation Practice, 15, 139-152.
https://doi.org/10.1177/109821409401500204
Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.
Peshkin, A. (1988). In search of subjectivity- one's own. Educational Researcher, 17, 17-21.
https://doi.org/10.3102/0013189X017007017
*Ray, M. L. (1988). Evaluation audit. In L. J. Cooksy, The nature of participation in a mandatory employment assistance program (Appendix). Unpublished doctoral dissertation, Cornell University, New York.
Rebolloso, E., Fernández-Ramírez, B., Cantón, P., & Pozo, C. (2002). Metaevaluation of a total quality management evaluation system. Psychology in Spain, 6(1), 12-25.
Sanders, J. R. (1999). Metaevaluation of "The Effectiveness of Comprehensive, Case Management Interventions: Evidence from the National Evaluation of the Comprehensive Child Development Program." American Journal of Evaluation, 20(3), 577-582. https://doi.org/10.1177/109821409902000316
Schwandt, T. A. (1989). The politics of verifying trustworthiness in evaluation auditing. Evaluation Practice, 10(4), 33-40.
https://doi.org/10.1177/109821408901000405
Schwandt, T. A. (1997). Qualitative inquiry: A dictionary of terms. Thousand Oaks, CA: Sage.
Schwandt, T. A., & Halpern, E. S. (1988). Linking auditing and metaevaluation: Enhancing quality in applied research. Applied Social Research Methods Series, 11. Thousand Oaks, CA: Sage.
Scriven, M. (1969). Introduction to metaevaluation. Educational Product Report, 2, 36-38.
Scriven, M. (1975). Evaluation bias and its control. Occasional Paper #4. Retrieved August 23, 2008, from the Western Michigan University Evaluation Center Web site: http://www.wmich.edu/evalctr/pubs/ops/ops04.pdf
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Scriven, M. (2001). Evaluation: Future tense. American Journal of Evaluation, 22(3), 301-307. https://doi.org/10.1177/109821400102200303
Shadish, W. R., Newman, D. L., Scheirer, M. A., & Wye, C. (Eds.). (1995). Guiding principles for evaluators. New Directions for Program Evaluation, 66.
https://doi.org/10.1002/ev.1705
*Smith, L. M. (1999). Meta-evaluation of the Veterans Appeals Training program evaluation. Unpublished manuscript, Washington University, St. Louis, MO.
Smith, N. L. (1987). Book Review: Quieting reform: Social science and social action in an urban youth program by Robert E. Stake. Educational Evaluation & Policy Analysis, 9(4).
https://doi.org/10.2307/1163775
*Stake, R. E. (1986). Quieting reform: Social science and social action in an urban youth program. Chicago: University of Illinois Press.
Stake, R., & Davis, R. (1999). Summary evaluation of reader focused writing for the Veterans Benefits Administration. American Journal of Evaluation, 20, 323-344.
https://doi.org/10.1177/109821409902000214
Stufflebeam, D. (1974). Metaevaluation. Occasional Paper #3. Retrieved January 22, 2008, from the Western Michigan University Evaluation Center Web site: http://www.wmich.edu/evalctr/pubs/ops/ops03.pdf
Stufflebeam, D. L. (1978). Metaevaluation: An overview. Evaluation & the Health Professions, 2(1), 17-43.
https://doi.org/10.1177/016327877800100102
Stufflebeam, D. L. (1999a). Program evaluation metaevaluation checklist (based on the Program Evaluation Standards) [short version]. Retrieved November 1, 2008, from the Western Michigan University Web site: http://www.wmich.edu/evalctr/checklists/program_metaeval.pdf
Stufflebeam, D. L. (1999b). Program evaluation metaevaluation checklist (based on the Program Evaluation Standards) [long version]. Retrieved November 1, 2008, from the Western Michigan University Web site: http://www.wmich.edu/evalctr/checklists/program_metaeval_10point.pdf
Stufflebeam, D. L. (2001a). The meta-evaluation imperative. American Journal of Evaluation, 22(2), 183-209.
https://doi.org/10.1016/S1098-2140(01)00127-8
Stufflebeam, D. L. (2001b). Evaluation models. New Directions for Evaluation, 89.
https://doi.org/10.1002/ev.1
*Stufflebeam, D. L., & Wingate, L. (2002). Metaevaluation: Attestation of the evaluation's adherence to professional standards for program evaluation. In D. L. Stufflebeam, A. Gullickson, & L. Wingate, The Spirit of Consuelo: An evaluation of Ke Aka Ho'ona (Appendix D). Retrieved November 11, 2007, from the Western Michigan University Web site: http://www.wmich.edu/evalctr/pubs/consuelo/
U.S. General Accounting Office. (1992). The evaluation synthesis (GAO/PEMD-10-1.2). Washington, DC: Author.
U.S. Office of Management and Budget. (2002). Guidelines for ensuring and maximizing the quality, objectivity, utility, and integrity of information disseminated by federal agencies. Federal Register, February 22, 2002. Retrieved March 20, 2005, from http://www.whitehouse.gov/omb/fedreg/reproducible2.pdf
Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.
*Whitmore, E., & Ray, M. L. (1989). Qualitative evaluation audits. Evaluation Review, 13(1), 78-90.
https://doi.org/10.1177/0193841X8901300106
Worthen, B. R. (2001). Whither evaluation? That all depends. American Journal of Evaluation, 22(3), 409-418. https://doi.org/10.1177/109821400102200319
Wortman, P. M. (1994). Judging research quality. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 97-110). New York: Russell Sage Foundation.