Developmental Evaluation in Theory versus Practice: Lessons from Three Developmental Evaluation Pilots
Abstract
Background. Developmental Evaluation (DE) practitioners turn to DE theory to make design and implementation decisions. However, practitioners can find it difficult to translate DE theory into implementation because DE is method-agnostic (Patton, 2016); it is instead a principles-based approach.
Purpose. This article presents an empirical examination of how DE theory was (or was not) applied during three DE pilots. Our analysis aims to better understand how DE theory is used in practice to expand the evidence base and strengthen future DE implementation.
Setting. A consortium of three organizations implemented three DE pilots through the United States Agency for International Development (USAID) from November 2016 to September 2019. The authors—who participated in the consortium—did not implement the DEs but instead conducted a study or meta-evaluation across the DE pilots.
Data Collection and Analysis. This article focuses on the results of an ex post facto analysis of three DE pilots based on the entire DE implementation experience. For each DE studied, we used mixed methods to collect data on the effectiveness of the DE approach, to identify adaptations to strengthen DE implementation in the USAID context, and to measure the value of each DE to stakeholders. Data included more than 100 hours of interviews, 465 pages of qualitative data, and 30 surveys completed by DE participants.
Findings. We find that the ability to apply the DE principles in practice is influenced, in no particular order, by DE participant buy-in to the DE, the Developmental Evaluator's aptitude, the support and resources available to the Developmental Evaluator, and the number of DE participants. We also find that buy-in can change over time and should be closely monitored throughout a DE to inform decisions about whether to pause or prematurely end it.
Article Details

Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY - NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org
References
Baldwin, C. K., & Lander, R. (2018). Developmental evaluator functional role activities and programmatic developments: A case study analysis. American Journal of Evaluation, 40(1), 35-54. https://doi.org/10.1177/1098214017743586
Baylor, R., Fatehi, Y. K., & Esper, H. (2019). Key lessons from an attempted developmental evaluation pilot. DEPA-MERL Consortium. USAID. https://www.usaid.gov/GlobalDevLab/MERLIN/DEPA-MERL/bureau-food-security/bfs-learning-report
Baylor, R., Fatehi, Y. K., & Esper, H. (2020). Advancing the use of developmental evaluation: A summary of key questions answered during a multiyear study of developmental evaluations implemented at USAID. DEPA-MERL Consortium. USAID. https://www.usaid.gov/GlobalDevLab/MERLIN/DEPA-MERL/analysis-across-pilots
Beer, T. (2019). Are you really ready for developmental evaluation? You may have to get out of your own way. Center for Evaluation Innovation. https://www.evaluationinnovation.org/insight/are-you-really-ready-for-developmental-evaluation-you-may-have-to-get-out-of-your-own-way/
Beer, T., Lee, K., & Ward, K. (2019). Developmental evaluation: Rewards, perils, and anxieties [Webinar]. Gender and Evaluation. https://gendereval.ning.com/events/webinar-developmental-evaluation-rewards-perils-and-anxieties
Blanchet-Cohen, N., & Langlois, M. (2010). DE 201: A practitioner's guide to developmental evaluation. International Institute for Child Rights and Development, University of Victoria.
DEPA-MERL Consortium. (2019a). Implementing developmental evaluation: A practical guide for evaluators and administrators. U.S. Agency for International Development. https://www.usaid.gov/sites/default/files/documents/15396/ImplementingDE_Admin_20.pdf
DEPA-MERL Consortium. (2019b). Implementing developmental evaluation: A practical guide for funders. U.S. Agency for International Development. https://www.usaid.gov/sites/default/files/documents/15396/ImplementingDE_Funders_20.pdf
Fatehi, Y. K., Baylor, R., & Esper, H. (2018). A study of the family care first in Cambodia developmental evaluation. DEPA-MERL Consortium. USAID. https://www.usaid.gov/GlobalDevLab/MERLIN/DEPA-MERL/family-care-first/final-version
Gamble, J.A.A. (2008). A developmental evaluation primer. J.W. McConnell Family Foundation. https://www.betterevaluation.org/sites/default/files/A%20Developmental%20Evaluation%20Primer%20-%20EN.pdf
Gold, J., Wilson-Grau, R., Fisher, S., & Otoo, S. (2014). Cases in outcome harvesting: Ten pilot experiences identify new learning from multi-stakeholder projects to improve results. The World Bank.
Hayes, H., Witkowski, S., & Smith, L. (2016). Failing forward quickly as a developmental evaluator: Lessons from year one of the LiveWell Kershaw journey. Journal of MultiDisciplinary Evaluation, 12(27), 112-118. https://doi.org/10.56645/jmde.v12i27.435
Lam, C. Y. (2016). A case study on a design-informed DE. QSpace. http://hdl.handle.net/1974/14929
Lam, C. Y., & Shulha, L. M. (2015). Insights on using developmental evaluation for innovating: A case study on the co-creation of an innovative program. American Journal of Evaluation, 36(3), 358-374. https://doi.org/10.1177/1098214014542100
Langlois, M., Blanchet-Cohen, N., & Beer, T. (2012). The art of the nudge: Five practices for developmental evaluators. The Canadian Journal of Program Evaluation, 27(2), 39. https://doi.org/10.3138/cjpe.27.003
Miller, R. L. (2010). Developing standards for empirical examinations of evaluation theory. American Journal of Evaluation, 31(3), 390-399. https://doi.org/10.1177/1098214010371819
Mintzberg, H. (2007). Tracking strategies: Toward a general theory. Oxford University Press.
Patton, M. Q. (1994). Developmental evaluation. American Journal of Evaluation, 15, 311-319. https://doi.org/10.1177/109821409401500312
Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. Guilford Press.
Patton, M. Q. (2016). What is essential in developmental evaluation? On integrity, fidelity, adultery, abstinence, impotence, long-term commitment, integrity, and sensitivity in implementing evaluation models. American Journal of Evaluation, 37(2), 250-265. https://doi.org/10.1177/1098214015626295
Patton, M. Q., McKegg, K., & Wehipeihana, N. (Eds.). (2015). Developmental evaluation exemplars: Principles in practice. Guilford Press.
Patton, M. Q., & DEPA-MERL Clinic #9. (2019). How developmental evaluation informs learning and adaptation [Webinar]. Not publicly available.
Stufflebeam, D. L. (2001). The metaevaluation imperative. American Journal of Evaluation, 22(2), 183-209. https://doi.org/10.1016/S1098-2140(01)00127-8
Wilson-Grau, R. (2015). Outcome harvesting. Better Evaluation. https://www.betterevaluation.org/en/plan/approach/outcome_harvesting