Evaluability Assessment: Clarifying Organizational Support and Data Availability


Joseph Hare
Timothy Guetterman

Abstract

Background: Evaluability assessment (EA) emerged in the 1970s as a way to determine whether a program was ready for summative evaluation. Its primary purpose was to assess the presence of measurable program objectives (Trevisan, 2007), yet evaluators conducting EA encountered difficulty with unclear, ambiguous methods (Smith, 2005).


Purpose: The purpose of this qualitative study was to clarify two aspects of evaluability assessment: organizational support and data availability. In practice, organizational stakeholders must support the evaluation project to ensure it is pursued to completion. In addition, the availability of operational data facilitates analysis of the effect of the evaluand.


Setting: Interview participants were drawn from both human services and corporate organizations. Participants had worked on evaluation projects in one of three roles: organizational stakeholder, program evaluator, or information technology personnel.


Intervention: Not applicable.


Research Design: A qualitative research design was selected to understand the experiences of individuals who have engaged in evaluation studies, specifically with regard to organizational support and data sufficiency, and to understand how these two domains affected their ability to conduct an evaluation.


Data Collection and Analysis: The study used purposive sampling of 13 participants serving in various roles to add breadth to the data. The researchers conducted semi-structured interviews and analyzed the data using thematic analysis.


Findings: The findings indicate the importance of specific organizational and data-related considerations that affect evaluability. The researchers recommend considerations that elaborate upon the existing EA framework. These evaluability considerations assist evaluators in identifying ill-advised evaluations and in enhancing the likelihood of success in ongoing studies.


Article Details

How to Cite
Hare, J., & Guetterman, T. (2014). Evaluability Assessment: Clarifying Organizational Support and Data Availability. Journal of MultiDisciplinary Evaluation, 10(23), 9–25. https://doi.org/10.56645/jmde.v10i23.395
Section
Research on Evaluation Articles
Author Biographies

Joseph Hare, Center for Learning Innovation, Bellevue University

Joseph Hare, Ed.D., is the Assistant Dean of the Center for Learning Innovation. His background includes leadership positions in curriculum development.

Timothy Guetterman, Department of Educational Psychology, University of Nebraska-Lincoln

Timothy Guetterman, MA, is a Ph.D. student in Educational Psychology at the University of Nebraska-Lincoln. His professional interests and research writings are in research methodology, namely mixed methods and general research design, particularly as applied to assessment and evaluation.

References

Alkin, M. C. (2011). Evaluation essentials: From A to Z. New York, NY: Guilford.

Batini, C., Cappiello, C., Francalanci, C., & Maurino, A. (2009). Methodologies for data quality assessment and improvement. ACM Computing Surveys, 41(3), 16.1-16.52. https://doi.org/10.1145/1541880.1541883

Bryson, J. M., & Patton, M. Q. (2011). Analyzing and engaging stakeholders. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 81-99). San Francisco, CA: Jossey-Bass.

Creswell, J. W. (2013). Qualitative inquiry and research design: Choosing among five approaches (3rd ed.). Thousand Oaks, CA: Sage.

Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, CA: Sage.

D'Ostie-Racine, L., Dagenais, C., & Ridde, V. (2013). An evaluability assessment of a West Africa based Non-Governmental Organization's (NGO) progressive evaluation strategy. Evaluation and Program Planning, 36, 71-79. https://doi.org/10.1016/j.evalprogplan.2012.07.002

Eden, C., & Ackermann, F. (1998). Making strategy: The journey of strategic management. London: Sage. https://doi.org/10.4135/9781446217153

Greene, J. C. (1988). Stakeholder participation and utilization in program evaluation. Evaluation Review, 12, 91-116.

Hatry, H. P. (2011). Using agency records. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 81-99). San Francisco, CA: Jossey-Bass.

Horst, P., Nay, J. N., Scanlon, J. W., & Wholey, J. S. (1974). Program management and the federal evaluator. Public Administration Review, 34(4), 300-308. https://doi.org/10.2307/975239

House, E. R., & Howe, K. R. (1999). Values in evaluation and social research. Thousand Oaks, CA: Sage. https://doi.org/10.4135/9781452243252

Leviton, L. C., & Gutman, M. A. (2010). Overview and rationale for the Systematic Screening and Assessment Method. In L. C. Leviton, L. Kettel Khan, & N. Dawkins (Eds.), The Systematic Screening and Assessment Method: Finding innovations worth evaluating. New Directions for Evaluation, 125, 7-31. https://doi.org/10.1002/ev.318

Leviton, L. C., Kettel Khan, L., Rog, D., Dawkins, N., & Cotton, D. (2010). Evaluability assessment to improve public health policies, programs, and practices. Annual Review of Public Health, 31, 213-233. https://doi.org/10.1146/annurev.publhealth.012809.103625

Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.). Thousand Oaks, CA: Sage.

Merriam, S. B. (2009). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.

Patton, M. Q. (2011). Essentials of utilization-focused evaluation. Thousand Oaks, CA: Sage.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.

Smith, M. F. (1989). Evaluability assessment: A practical approach. Boston, MA: Kluwer Academic.

Smith, M. F. (1999). Participatory evaluation: Not working or not tested? American Journal of Evaluation, 20, 295-308. https://doi.org/10.1016/S1098-2140(99)00016-8

Smith, M. F. (2005). Evaluability assessment. In S. Mathison (Ed.), Encyclopedia of evaluation(pp. 136-139). Thousand Oaks, CA: Sage.

Strong, D. M., Lee, Y. W., & Wang, R. Y. (1997). Data quality in context. Communications of the ACM, 40(5), 103-110. https://doi.org/10.1145/253769.253804

Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco, CA: Jossey-Bass.

Thurston, W. E., & Potvin, L. (2003). Evaluability assessment: A tool for incorporating evaluation in social change programmes. Evaluation, 9, 453-469. https://doi.org/10.1177/135638900300900406

Torres, R. T., & Preskill, H. (2001). Evaluation and organizational learning: Past, present, and future. American Journal of Evaluation, 22, 387-395. https://doi.org/10.1177/109821400102200316

Trevisan, M. S. (2007). Evaluability assessment from 1986 to 2006. American Journal of Evaluation, 28, 290-303.

Trevisan, M. S., & Huang, Y. M. (2003). Evaluability assessment: A primer. Practical Assessment, Research & Evaluation, 8(20), 2-9. Retrieved from http://PAREonline.net/getvn.asp?v=8&n=20

VERBI GmbH. (2011). MAXQDA [Computer software]. Retrieved from http://www.maxqda.com/

Wang, R. Y., & Strong, D. M. (1996). Beyond accuracy: What data quality means to data consumers. Journal of Management Information Systems, 12(4), 5-33. https://doi.org/10.1080/07421222.1996.11518099

Wholey, J. S. (1979). Evaluation: Promise and performance. Washington, DC: The Urban Institute.

Wholey, J. S. (1987). Evaluability assessment: Developing program theory. New Directions for Program Evaluation, 33, 77-92. https://doi.org/10.1002/ev.1447

Wholey, J. S. (2004). Evaluability assessment. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 33-62). San Francisco, CA: Jossey-Bass.

Wholey, J. S. (2011). Exploratory evaluation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 81-99). San Francisco, CA: Jossey-Bass.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2010). The program evaluation standards: A guide for evaluators and evaluation users. Thousand Oaks, CA: Sage.