The Contribution of Metaevaluation to Program Evaluation: Proposition of a Model


Helga Hedler
Namara G. Ribeiro

Abstract

Background: This theoretical article addresses the fundamental difference between meta-analysis and metaevaluation. A model of metaevaluation for social programs is presented, based on prior practical research.


Purpose: The purpose is to present a model of metaevaluation as a tool that can be used in other studies. Theory points to the need for a qualitative framework that goes beyond meta-analysis in program evaluation.


Setting: This theoretical article is based on empirical research conducted at a Brazilian government audit agency.


Subjects: The government agency where the practical research was conducted oversees the effectiveness and accountability of social programs through audits; the audits examined took place from 2003 to 2006.


Intervention: Meetings and interviews were held with auditors who participated in the evaluation process, from planning to final reports, as the model proposes.


Research Design: The model of metaevaluation takes a qualitative approach to evaluating prior evaluations of social programs.


Data Collection and Analysis: Data collection included a structured interview with the chief manager of the agency in charge of evaluating governmental programs. Documents and reports were analyzed using qualitative content analysis. A synthesis of categories was applied to compare the different analyses and summarize the findings.


Findings: Metaevaluation and meta-analysis are distinct research methods with different approaches. Metaevaluation is a qualitative method suited to evaluating prior evaluations, whereas the quantitative approach of meta-analysis is better suited to primary evaluations. Metaevaluation may incorporate other methods to strengthen evaluation results.


Conclusions: Metaevaluation aligns theory and practice in program evaluation. The proposed model of metaevaluation may hold value for future theoretical and empirical work.


Article Details

How to Cite
Hedler, H., & Ribeiro, N. G. (2009). The Contribution of Metaevaluation to Program Evaluation: Proposition of a Model. Journal of MultiDisciplinary Evaluation, 6(12), 210–223. https://doi.org/10.56645/jmde.v6i12.206
Section
The Theory, Method, and Practice of Metaevaluation
