Program Logic Foundations: Putting the Logic Back into Program Logic

Andrew J. Hawkins
https://orcid.org/0000-0003-4365-1025

Abstract

Background: Program logic is one of the most widely used tools of the public policy evaluator. There is, however, little explanation in the evaluation literature of the logical foundations of program logic, or discussion of how it may be determined whether a program is logical. This paper was born of a long journey that started with program logic and ended with the logic of evaluation. Consistent throughout was the idea that the discipline of program evaluation is a pragmatic one, concerned with applied social science and effective action in complex, adaptive systems. It gradually became the central claim of this paper that evidence-based policy requires sound reasoning more urgently than further development and testing of scientific theory. This was difficult to reconcile with the observation that much evaluation is conducted within a scientific paradigm, concerned with the development and testing of various types of theory.


Purpose: This paper demonstrates the benefits of considering the core essence of a program to be a proposition about the value of a course of action. This contrasts with a research-based paradigm in which programs are considered a type of theory, and in which experimental and theory-driven evaluations are conducted. Experimental approaches focus on the internal validity of knowledge claims about programs and on discovering stable cause-and-effect relationships, or, colloquially, 'what works?'. Theory-driven approaches tend to focus on external validity and, in the case of the realist approach, on the search for transfactual causal mechanisms, extending the 'what works' mantra to include 'for whom and in what circumstances'. In both approaches, evaluation aspires to be a scientific pursuit for obtaining knowledge of general laws of phenomena or, in the case of realists, of replicable context-mechanism-outcome configurations. This paper presents and seeks to justify an approach rooted in logic, one that supports anyone to engage in reasonable and democratic deliberation about the value of a course of action.


The approach is consistent with systems thinking, complexity, and the associated limits to certainty in determining the value of a proposed, or actual, course of action in the social world. It suggests that evaluation should learn from the past and keep an eye toward the future, but that it would be most beneficial if concerned with evaluating in the present, addressing the question 'is this a good idea, here and now?'


Setting: Not applicable.


Intervention: Not applicable.


Research design: Not applicable.


Findings: In seeking the foundations of program logic, this paper exposes roots that extend far deeper than the post-Enlightenment, positivist and post-positivist social science search for stable cause-and-effect relationships. These roots lie in the 4th century BCE with Aristotle's 'enthymeme'. The exploration leads to conclusions about the need for a greater focus on logic and reasoning in the design and evaluation of programs and interventions for the public good. Science and research are shown to play a crucial role in providing reasons, or warrants, to support a claim about the value of a course of action; this role, however, is subordinate to the alpha-discipline of logical evaluation and decision making, which must consider what is feasible given the context, capability and capacity available, not to mention values and ethics. Program Design Logic (PDL) is presented as an accessible and incremental innovation that may be used to determine whether a program makes sense 'on paper' at the design stage as well as 'in reality' during delivery. It is based on a configurationalist theory of causality and the concepts of 'necessary' and 'sufficient' conditions. It is intended to guide deliberation and decision making across the life cycle of any intervention intended for the public good.
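
The concepts of necessary and sufficient conditions that underpin Program Design Logic can be expressed in standard propositional notation. The following is an illustrative sketch only, drawing on Mackie's (1974) INUS analysis cited in the references rather than on any notation used in the paper itself; the symbols C, O, A and D are placeholders.

% A minimal, compilable sketch (amsmath) of the standard notation for
% 'necessary' and 'sufficient' conditions; symbols are illustrative
% placeholders, not terms taken from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  \text{$C$ is \emph{sufficient} for outcome $O$:}\quad & C \Rightarrow O\\
  \text{$C$ is \emph{necessary} for outcome $O$:}\quad & \lnot C \Rightarrow \lnot O
    \quad\text{(equivalently, } O \Rightarrow C\text{)}\\
  \text{An INUS condition (Mackie, 1974):}\quad & (A \land C) \lor D \Rightarrow O
\end{align*}
% Here $C$ is an Insufficient but Necessary part of the conjunct $(A \land C)$,
% which is itself an Unnecessary but Sufficient condition for $O$, because the
% alternative package of conditions $D$ can also bring about the outcome.
\end{document}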


Article Details

How to Cite
Hawkins, A. J. (2020). Program Logic Foundations: Putting the Logic Back into Program Logic. Journal of MultiDisciplinary Evaluation, 16(37), 38–57. https://doi.org/10.56645/jmde.v16i37.657
Section
Research on Evaluation Articles
Author Biography

Andrew J. Hawkins, ARTD Consultants, Charles Darwin University, Australia

Senior Consultant, project manager and data analyst for simple, complicated and complex research and evaluation projects.

References

Alkin, M. C., & Patton, M. Q. (2020). The Birth and Adaptation of Evaluation Theories. Journal of MultiDisciplinary Evaluation, 16(35), 1-13. https://doi.org/10.56645/jmde.v16i35.637

Althaus, C., Bridgman, P., & Davis, G. (2017). The Australian Policy Handbook (6th ed.). Allen & Unwin.

Aristotle. (1992). The Art of Rhetoric. Penguin Classics.

Bhaskar, R. A. (2008). A realist theory of science. London: Verso.

Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199841608.001.0001

Comte, A. & Lenzer, G. (ed). (1975). Auguste Comte and Positivism: The Essential Writings. Transaction Publishers.

Cook, T. D., Scriven, M., Coryn, C. L. S., & Evergreen, S. D. H. (2010). Contemporary thinking about causation in evaluation: A dialogue with Tom Cook and Michael Scriven. American Journal of Evaluation, 31(1), 105-117. https://doi.org/10.1177/1098214009354918

Dagli, M. M. (1998). Modus ponens, modus tollens, and likeness. The Paideia Archive: Twentieth World Congress of Philosophy, 8, 45-52. https://doi.org/10.5840/wcp20-paideia19988179

Datta, L. E. (1990). Prospective evaluation methods: The prospective evaluation synthesis (GAO/PEMD-10.1.10). United States General Accounting Office.

Donaldson, S., & Lipsey, M. (2006). Roles for theory in contemporary evaluation practice: Developing practical knowledge. In Shaw, I. F., Greene, J. C., & Mark, M. M. (Eds.), The SAGE Handbook of Evaluation (pp. 57-75). SAGE. https://doi.org/10.4135/9781848608078.n2

Fournier, D. (1995). Establishing evaluative conclusions: A distinction between general and working logic. New Directions for Evaluation, 68. https://doi.org/10.1002/ev.1017

Frechtling, J. (2007). Logic modelling methods in program evaluation. San Francisco, CA: Jossey-Bass.

Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Jossey-Bass.

Hawkins, A. J. (2016). Realist evaluation and randomised controlled trials for testing program theory in complex social systems. Evaluation, 22(3), 270-285. https://doi.org/10.1177/1356389016652744

Hernandez, M. (2000). Using logic models and program theory to build outcome accountability. Education & Treatment of Children, 23(1), 24-41.

Jones, N. D., Azzam, T., Wanzer, D. L., Skousen, D., Knight, C., & Sabarre, N. (2019). Enhancing the effectiveness of logic models. American Journal of Evaluation. Advance online publication. https://doi.org/10.1177/1098214018824417

Kurtz, C. F., & Snowden, D. J. (2003). The new dynamics of strategy: Sense-making in a complex and complicated world. IBM Systems Journal, 42(3), 462-483. https://doi.org/10.1147/sj.423.0462

Mackie, J. L. (1974). The cement of the universe: A study of causation. Oxford: Clarendon Press.

McLaughlin, J. A., & Jordan, G. B. (2015). Using logic models. In Handbook of Practical Program Evaluation (4th ed., pp. 62-87). https://doi.org/10.1002/9781119171386.ch3

OECD/DAC Network on Development Evaluation. (2019). Better criteria for better evaluation: Revised evaluation criteria definitions and principles for use.

Pawson, R., & Tilley, N. (1997). Realistic evaluation. London, UK: SAGE Publications.

Pawson, R. (2013). The science of evaluation: A realist manifesto. SAGE Publications. https://doi.org/10.4135/9781473913820

Pawson, R. (2008). Causality for beginners. NCRM Research Methods Festival 2008.

Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. Basic Books.

Ragin, C. (2009). Qualitative comparative analysis using fuzzy sets (fsQCA). In Rihoux, B., & Ragin, C. C. (Eds.), Configurational comparative methods: Qualitative comparative analysis (QCA) and related techniques (Applied Social Research Methods, Vol. 51, pp. 87-122). Thousand Oaks, CA: SAGE Publications. https://doi.org/10.4135/9781452226569.n5

Renger, R., & Titcomb, A. (2002). A three-step approach to teaching logic models. American Journal of Evaluation, 23(4), 493-503. https://doi.org/10.1177/109821400202300409

Renger, R. (2015). System evaluation theory (SET): A practical framework for evaluators to meet the challenges of system evaluation. Evaluation Journal of Australasia, 15(4), 16-28. https://doi.org/10.1177/1035719X1501500403

Rogers, P. (2000). Causal models in program theory evaluation. In Rogers, P., Hacsi, T., Petrosino, A., & Huebner, T. (Eds.), Program theory in evaluation: Challenges and opportunities (New Directions for Evaluation, 87, pp. 47-55). San Francisco, CA: Jossey-Bass. https://doi.org/10.1002/ev.1181

Rossi, P. (1985). The iron law of evaluation and other metallic rules. Paper presented at State University of New York, Albany, Rockefeller College.

Schwandt, T. (2015). Evaluation foundations revisited: Cultivating a life of the mind for practice. Stanford University Press.

Scriven, M. (2008). The concept of a transdiscipline: And of evaluation as a transdiscipline. Journal of MultiDisciplinary Evaluation, 5(10), 65-66. https://doi.org/10.56645/jmde.v5i10.161

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin. https://doi.org/10.1016/B0-08-043076-7/00419-8

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. SAGE Publications.

Taylor-Powell, E., & Henert, E. (2008). Developing a logic model: Teaching and training guide. Madison, WI: University of Wisconsin Extension, Cooperative Extension, Program Development and Evaluation. http://www.uwex.edu/ces/pdande

Toulmin, S. (2003a). The uses of argument (2nd ed.). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511840005

Toulmin, S. (2003b). Return to reason. Harvard University Press. https://doi.org/10.4159/9780674044425

W.K. Kellogg Foundation. (2004). Using Logic models to bring together planning, evaluation, and action: Logic model development guide. Battle Creek, MI.

Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In New approaches to evaluating community initiatives. Aspen Institute.

Wong, G., Westhorp, G., Manzano, A., Greenhalgh, J., Jagosh, J., & Greenhalgh, T. (2016). RAMESES II reporting standards for realist evaluations. BMC Medicine, 14(1), 96. https://doi.org/10.1186/s12916-016-0643-1