An In-Depth International Comparison of Major Donor Agencies: How Do They Systematically Conduct Country Program Evaluation?


Ryo Sasaki

Abstract

Background: This paper presents an in-depth international comparison of the systems and procedures of aid evaluation, focusing on Country Program Evaluation among major donor agencies. The original client of this study was the Ministry of Foreign Affairs of Japan (MOFAJ).


Purpose: The purposes of this paper are: (1) to understand how aid agencies conduct Country Program Evaluation; and (2) to make recommendations for improving the current practice of Country Program Evaluation in the aid evaluation community.


Setting: The examined donors include the World Bank (WB), the Asian Development Bank (ADB), the Inter-American Development Bank (IADB), the United Nations Development Programme (UNDP), the U.S. (USAID), Canada (CIDA), the U.K. (DFID), the Netherlands (IOB), Germany (BMZ), France (Foreign Ministry), and Japan (Ministry of Foreign Affairs (MOFAJ)). In addition, the aid agencies that conduct the respective project evaluations are also examined: JICA (Japan), GTZ and KfW (Germany), and AFD (France).


Intervention: This study presents the results of a comparative analysis of these donor agencies from the following viewpoints: (1) evaluation criteria employed; (2) approaches to evaluating “effectiveness” and “impact”; (3) the attribution issue; (4) the use of a rating system; and (5) overall evaluative conclusions and integrating methods. All viewpoints focus on Country Program Evaluation. One conclusion is that most agencies have struggled with how to judge the degree and value of their country programs.


Data Collection and Analysis: Mixed methodologies were employed to collect data from these donor agencies. The analysis followed a systematic procedure consisting of: (i) summarizing the information in a comparative table; (ii) forming groups/categories based on common characteristics where possible; and (iii) examining and identifying the basic ideas/philosophies that account for the differences among agencies.


Findings: This study produced new knowledge about how aid agencies conduct Country Program Evaluation and identified several remaining issues. A wide variety of practices was observed, far from a set of unified, agreed-upon methods. Some remarkable points identified in this study are: (1) most aid agencies invoke the DAC five evaluation criteria for Country Program Evaluation (the major exception is USAID); (2) “strategic relevance” and “coherence/complementarity” are emerging new criteria; (3) attribution remains an issue with which aid agencies have struggled; and (4) aid agencies are clearly divided in their attitudes toward introducing a rating system.



How to Cite
Sasaki, R. (2012). An in-depth international comparison of major donor agencies: How do they systematically conduct Country Program Evaluation? Journal of MultiDisciplinary Evaluation, 8(18), 29–46. https://doi.org/10.56645/jmde.v8i18.349
Section
Research on Evaluation Articles
