Quality as praxis: A tool for formative meta-evaluation
Abstract
Summative meta-evaluation is more commonly practiced than formative meta-evaluation. While evaluation theorists speak to the importance of formative meta-evaluation, examples of how to conduct it are rarely specified in the evaluation literature. This paper aims to (1) further explore formative meta-evaluation as a means of quality assurance, with implications both for developing the capacity of evaluators and for advancing evaluation as a field of practice, and (2) present a model intended to move toward a more deliberate formative quality evaluation practice. The discussion focuses on the relationship between evaluator and commissioner, and on how developing and using a deliberate approach to formative meta-evaluation, through examination of the proposed model, can lead to a more egalitarian and inclusive approach to defining and promoting evaluation quality. Finally, formative meta-evaluation is discussed as an important tool for evaluators in exercising professional judgment and in taking an active role in advancing the evaluation field.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org