The Context of Evaluation: Balancing Rigor and Relevance
Abstract
Background: The context in which an evaluation is undertaken shapes not only the core evaluation activities but also the ethical standards that guide our work. In certain evaluation settings, theoretical constructs and ethical decision-making frameworks may not support us through ethical dilemmas.
Purpose: The purpose of this article is to present an example of how a well-intentioned, responsive, yet rigorous evaluation provided opportunities to expand our experiences. We conclude with a set of recommendations for organizations/institutions, evaluators, project developers/implementers, and grantors.
Setting: Not applicable.
Intervention: Not applicable.
Research Design: Not applicable.
Data Collection and Analysis: Desk review.
Findings: The complexities of conducting an external evaluation of a study that relied on a public-private partnership, coupled with a complex intervention that was not fully developed, resulted in unintended study limitations and necessitated adapting evaluation strategies and making ad hoc adjustments midstream. Through our recommendations we share our lessons learned and show how challenges can be creatively and ethically addressed at the outset of an evaluation.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org.
References
American Evaluation Association. (2004). Guiding principles for evaluators. Retrieved May 2, 2010, from http://www.eval.org/publications/GuidingPrinciplesPrintable.asp
Chaskin, R. J. (2003). The challenge of two-tiered evaluation in community initiatives. Journal of Community Practice, 11(1), 61-83. https://doi.org/10.1300/J125v11n01_04
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. New York: Pearson Education.
Gaskill, D., Morrison, P., Sanders, F., Forster, E., Edwards, H., Fleming, R., & McClure, S. (2003). University and industry partnerships: Lessons learned from collaborative research. International Journal of Nursing Practice, 9, 347-355. https://doi.org/10.1046/j.1440-172X.2003.00448.x
Gorman, D. M., & Conde, E. (2007). Conflict of interest in the evaluation and dissemination of 'model' school-based drug and violence prevention programs. Evaluation and Program Planning, 30, 422-429. https://doi.org/10.1016/j.evalprogplan.2007.06.004
Medical Research Council. (2008). Developing and evaluating complex interventions: New guidance. BMJ, 337, a1655. https://doi.org/10.1136/bmj.a1655
Powell, W. W., Koput, K. W., & Smith-Doerr, L. (1996). Interorganizational collaboration and the locus of innovation: Networks of learning in biotechnology. Administrative Science Quarterly, 41, 116-145. https://doi.org/10.2307/2393988
Rodi, M. S., & Paget, K. D. (2007). Where local and national evaluators meet: Unintended threats to ethical evaluation practice. Evaluation and Program Planning, 30(4), 416-421. https://doi.org/10.1016/j.evalprogplan.2007.06.005
Shadish, W. R. (2006). The common threads in program evaluation. Preventing Chronic Disease, 3(1), 1-5.
Walt, G., Brugha, R., & Haines, A. (2002). Working with the private sector: The need for institutional guidelines. BMJ, 325(7361), 432-435. https://doi.org/10.1136/bmj.325.7361.432