Implementation Fidelity: The Disconnect Between Theory and Practice
Abstract
This paper combines a conceptual analysis of major reviews of implementation fidelity studies (e.g., Dane & Schneider, 1998; Mowbray et al., 2003; O’Donnell, 2008) with reflections on fidelity measurement practice in field evaluations. It argues that practice is impoverished by a failure to recognize competing conceptualizations of fidelity, rooted in different theoretical perspectives; different evaluation contexts may be better matched to one or another of these perspectives. Confusion about how fidelity should be defined in a given funding program or evaluation prevents evaluators from instituting a maximally useful fidelity measurement program, and the difficulties inherent in creating high-quality fidelity measures compound the problem. The causes and consequences of this disconnect between fidelity theory and fidelity practice are discussed, and preliminary suggestions for solutions are advanced.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users may copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org.
References
Abry, T., Hulleman, C. S., & Rimm-Kaufman, S. E. (2015). Using indices of fidelity to intervention core components to identify program active ingredients. American Journal of Evaluation, 36(3), 320-338. https://doi.org/10.1177/1098214014557009
Akiba, M., Ramp, L., & Wilkinson, B. (2014). Lesson study policy and practice in Florida: Findings from a statewide district survey. Florida State University.
Alcohol, Drug Abuse and Mental Health Services Administration Reorganization Act of 1992, Public Law 102-321 (p. 141).
Anglin, K. L., Wong, V. C., & Boguslav, A. (2021). A natural language processing approach to measuring treatment adherence and consistency using semantic similarity. AERA Open, 7. https://doi.org/10.1177/23328584211028615
Bamberger, M., Tarsilla, M., & Hesse-Biber, S. (2016). Why so many "rigorous" evaluations fail to identify unintended consequences of development programs: How mixed methods can contribute. Evaluation and Program Planning, 55, 155-162. https://doi.org/10.1016/j.evalprogplan.2016.01.001
Berman, P., & McLaughlin, M. W. (1976). Implementation of educational innovation. The Educational Forum, 40(3), 345-370. https://doi.org/10.1080/00131727609336469
Bickman, L. (1987). The functions of program theory. New Directions for Program Evaluation, 33, 5-18. https://doi.org/10.1002/ev.1443
Bronfenbrenner, U. (1979). Contexts of child rearing: Problems and prospects. American Psychologist, 34(10), 844. https://doi.org/10.1037/0003-066X.34.10.844
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., & Askell, A. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
Can, D., Marín, R. A., Georgiou, P. G., Imel, Z. E., Atkins, D. C., & Narayanan, S. S. (2016). "It sounds like...": A natural language processing approach to detecting counselor reflections in motivational interviewing. Journal of Counseling Psychology, 63(3), 343. https://doi.org/10.1037/cou0000111
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2(40). https://doi.org/10.1186/1748-5908-2-40
Century, J., Rudnick, M., & Freeman, C. (2010). A framework for measuring fidelity of implementation: A foundation for shared language and accumulation of knowledge. American Journal of Evaluation, 31(2), 199-218. https://doi.org/10.1177/1098214010366173
Charters, W. W., & Jones, J. E. (1974, February). On neglect of the independent variable in program evaluation. University of Oregon, Project MITT.
Chen, H.-T., & Rossi, P. H. (1980). The multi-goal, theory-driven approach to evaluation: A model linking basic and applied social science. Social Forces, 59, 106-122. https://doi.org/10.2307/2577835
Cordray, D. S. (1989). Optimizing validity in program research: An elaboration of Chen and Rossi's theory-driven approach. Evaluation and Program Planning, 12(4), 379-385. https://doi.org/10.1016/0149-7189(89)90055-4
Cordray, D. S., & Morphy, P. (2009). Research synthesis and public policy. In The handbook of research synthesis and meta-analysis (2nd ed., pp. 473-494).
Cordray, D. S., & Pion, G. M. (2006). Treatment strength and integrity: Models and methods. In R. Bootzin & P. McKnight (Eds.), Contributions of Lee Sechrest to methodology and evaluation. APA. https://doi.org/10.1037/11384-006
Cordray, D. S., Pion, G. M., Brandt, C., & Molefe, A. (2013). The impact of the measures of academic progress (MAP) program on student reading achievement. Society for Research on Educational Effectiveness.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review, 18, 23-45. https://doi.org/10.1016/S0272-7358(97)00043-3
Donaldson, S. I. (2007). Program theory-driven evaluation science: Strategies and applications. Erlbaum. https://doi.org/10.4324/9780203809730
Education Sciences Reform Act of 2002 (Pub. L. No. 107-279). Retrieved June 29, 2022, from http://www.ed.gov/legislation/EdSciencesRef/
Ensminger, D. C., Frazier, E. W., Montrosse‐Moorhead, B., & Linfield, K. J. (2021). How do we deepen our story reservoir by designing, developing, and writing instructional cases for teaching evaluation? New Directions for Evaluation, 172, 85-102. https://doi.org/10.1002/ev.20484
Fuchs, D., Fuchs, L. S., Bahr, M. W., Fernstrom, P., & Stecker, P. M. (1990). Prereferral intervention: A prescriptive approach. Exceptional Children, 56(6), 493-513. https://doi.org/10.1177/001440299005600602
Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory: Effective use of theories of change and logic models. John Wiley & Sons.
Gawande, A. (2014). The checklist manifesto. Penguin Books.
Gomaa, W. H. (2013). A survey of text similarity approaches. International Journal of Computer Applications, 68, 6. https://doi.org/10.5120/11638-7118
Hall, G. E., & Loucks, S. F. (1977). A developmental model for determining whether the treatment is actually implemented. American Educational Research Journal, 14(3), 263-276. https://doi.org/10.3102/00028312014003263
Havelock, R. G. (1969). A comparative study of the literature on the dissemination and utilization of scientific knowledge. Center for Research on Utilization of Scientific Knowledge at the University of Michigan. https://files.eric.ed.gov/fulltext/ED029171.pdf
Hill, H. C. (2011). The nature and effects of middle school mathematics teacher learning experiences. Teachers College Record, 113(1), 205-234. https://doi.org/10.1177/016146811111300106
Hill, H. C., & Erickson, A. (2019). Using implementation fidelity to aid in interpreting program impacts: A brief review. Educational Researcher, 48(9), 590-598. https://doi.org/10.3102/0013189X19891436
House, E. R., Kerins, T., & Steele, J. M. (1972). A test of the research and development model of change. Educational Administration Quarterly, 8(1), 1-14. https://doi.org/10.1177/0013131X7200800102
Hulleman, C., & Cordray, D. (2009). Moving from the lab to the field: The role of fidelity and achieved relative strength. Journal of Research on Educational Effectiveness, 2, 88-110. https://doi.org/10.1080/19345740802539325
Institute of Education Sciences. (2013). FY 2013 Education Research Grants RFA.
Institute of Education Sciences. (2021). Standards for Excellence in Educational Research (SEER) Principles. Retrieved June 23, 2022, from https://ies.ed.gov/seer/index.asp
Institute of Education Sciences. (2023). FY 2024 Education Research Grants RFA.
Jabeen, S. (2016). Do we really care about unintended outcomes? An analysis of evaluation theory and practice. Evaluation and Program Planning, 55, 144-154. https://doi.org/10.1016/j.evalprogplan.2015.12.010
Jensen, E., Dale, M., Donnelly, P. J., Stone, C., Kelly, S., Godley, A., & D'Mello, S. K. (2020). Toward automated feedback on teacher discourse to enhance teacher learning. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376418
LaVelle, J. M. (2018). 2018 Directory of evaluator education programs in the United States. University of Minnesota Libraries Publishing. https://conservancy.umn.edu/items/c14d5684-9030-4b86-9009-c11e0b095c6f
Lewis, C., & Hurd, J. (2011). Lesson study step by step: How teacher learning communities improve instruction. Heinemann.
Lewis, C., & Tsuchida, I. (1997). Planned educational change in Japan: The case of elementary science instruction. Journal of Education Policy, 12(5), 313-331. https://doi.org/10.1080/0268093970120502
Lipsey, M. W., Crosse, S., Dunkle, J., Pollard, J., & Stobart, G. (1985). Evaluation: The state of the art and the sorry state of the science. New Directions for Program Evaluation, 27, 7-28. https://doi.org/10.1002/ev.1398
Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American Psychologist, 50, 741-749. https://doi.org/10.1037/0003-066X.50.9.741
Meyers, C. V., & Brandt, W. C. (Eds.). (2014). Implementation fidelity in education research: Designer and evaluator considerations. Routledge. https://doi.org/10.4324/9781315795089
Montrosse-Moorhead, B., Juskiewicz, K., & Li, E. Y. (2016). Have we reached consensus on implementation fidelity in evaluation practice? [Conference presentation]. Annual meeting of the American Educational Research Association, Washington, DC, United States.
Mowbray, C. T., Holter, M. C., Teague, G. B., & Bybee, D. (2003). Fidelity criteria: Development, measurement, and validation. The American Journal of Evaluation, 24, 315-340. https://doi.org/10.1177/109821400302400303
Munter, C., Wilhelm, A. G., Cobb, P., & Cordray, D. S. (2014). Assessing fidelity of implementation of an unprescribed, diagnostic mathematics intervention. Journal of Research on Educational Effectiveness, 7(1), 83-113. https://doi.org/10.1080/19345747.2013.809177
NIER (National Institute for Educational Policy Research) [Kokuritsu Kyouiku Seisaku Kenkyuujo]. (2011). Kyouin no Shitsu no koujou ni kansuru chosa kenkyuu [Report of Survey Research on Improvement of Teacher Quality]. Kokuritsu Kyouiku Seisaku Kenkyuujou.
Nelson, M. C., Cordray, D. S., Hulleman, C. S., Darrow, C. L., & Sommer, E. C. (2012). A procedure for assessing intervention fidelity in experiments testing educational and behavioral interventions. The Journal of Behavioral Health Services and Research, 39(4), 374-396. https://doi.org/10.1007/s11414-012-9295-x
O'Donnell, C. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K-12 curriculum intervention research. Review of Educational Research, 78, 33-84. https://doi.org/10.3102/0034654307313793
Orwin, R. G., Sonnefeld, L. J., Cordray, D. S., Pion, G. M., & Perl, H. I. (1998). Constructing quantitative implementation scales from categorical services data: Examples from a multisite evaluation. Evaluation Review, 22(2), 245-288. https://doi.org/10.1177/0193841X9802200204
Perry, R. R., & Lewis, C. C. (2009). What is successful adaptation of lesson study in the US? Journal of Educational Change, 10, 365-391. https://doi.org/10.1007/s10833-008-9069-7
Rogers, E. M. (1995). Diffusion of innovations. Free Press.
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Sage.
Scriven, M. (1973). Goal-free evaluation. In E. R. House (Ed.), School evaluation: The politics and the process. McCutchan Publishing.
Sechrest, L., & Redner, R. (1979). Strength and integrity of treatments in evaluation studies: How well does it work? National Criminal Justice Reference Service.
Shepard, L. A. (1997). The centrality of test use and consequences for test validity. Educational Measurement: Issues and Practice, 16(2), 5-24. https://doi.org/10.1111/j.1745-3992.1997.tb00585.x
Stigler, J. W., & Hiebert, J. (1999). The teaching gap: Best ideas from the world's teachers for improving education in the classroom. Summit Books.
WALS. (2012). World Association of Lesson Studies International Conference 2012: Programme and abstracts. World Association of Lesson Studies. http://www.walsnet.org/2012/programme.html
Wang-Iverson, P., & Yoshida, M. (2005). Building our understanding of lesson study. Research for Better Schools.