You Call This Exemplary? Lessons from an Unsung International Evaluation


Douglas Horton

Abstract

This paper reflects on the role of academic discipline and epistemic community in judging what makes an evaluation exemplary. It examines the case of an evaluation that a panel of program evaluators considered “exemplary” but that evaluators from a different evaluation tradition judged methodologically flawed. The evaluation in question was carried out within the Consultative Group on International Agricultural Research (CGIAR), a network with a rich tradition of economic impact assessment. A team of experienced program evaluators conducted the evaluation, attempting to apply accepted good practices of the program evaluation community; it employed mixed methods and multiple data sources, relying heavily on triangulated perceptual data. A meta-evaluation led by an experienced program evaluator judged the evaluation exemplary. Within the CGIAR, however, both the evaluation and the meta-evaluation study were rejected as methodologically flawed. The paper closes with four propositions related to what is considered an “exemplary evaluation.”


Background: Program evaluators have reached broad agreement on principles for planning and conducting evaluations and on standards for judging their quality. However, many evaluation stakeholders, including key intended users, may judge evaluations by criteria that differ sharply from the professional standards and criteria we commonly employ in meta-evaluations.


Purpose: This paper highlights the role of academic discipline and epistemic community in judging what is an “exemplary” evaluation by examining the case of an evaluation that professional program evaluators considered exemplary but that professionals from different disciplinary traditions judged methodologically flawed.


Setting: The evaluation in question was carried out within the Consultative Group on International Agricultural Research (CGIAR), a research network with a rich tradition of economic impact assessment.


Intervention: NA


Research Design: This is a case study that combines participatory action research and historical analysis.


Data Collection and Analysis: The study is based on the author’s personal involvement in the evaluation and on a review of publications and unpublished documents related to the case.


Findings: A team of experienced evaluators applied what are generally considered good practices in the program evaluation community. A meta-evaluation led by an experienced program evaluator judged the evaluation exemplary. In contrast, within the CGIAR, both the evaluation and the meta-evaluation study were considered methodologically flawed and biased. Three lessons related to exemplary evaluation are formulated and elaborated upon:


Lesson 1. Being exemplary is in the eyes of the beholder.


Lesson 2. Epistemic communities are hard nuts to crack.


Lesson 3. You can’t win them all.


While the early program evaluation efforts analyzed here were experienced as failures, a number of subsequent developments have led to greater understanding of diverse evaluation approaches and some movement toward agreement on what constitutes exemplary evaluation in the CGIAR. Nevertheless, there is still a considerable way to go.


How to Cite

Horton, D. (2017). You Call This Exemplary? Lessons from an Unsung International Evaluation. Journal of MultiDisciplinary Evaluation, 13(28), 41–52. https://doi.org/10.56645/jmde.v13i28.469
Author Biography

Douglas Horton, Independent evaluator, University Park, FL

Doug is an independent applied researcher and evaluator specializing in international development, innovation, and capacity development.
