Communicating About Evaluation: A Conceptual Model and Case Example
Abstract
Background: Despite consensus within the evaluation community about what makes evaluation distinctive, confusion abounds among stakeholders and other professions. The evaluation literature describes how those in the social sciences continue to view evaluation as applied social science, part of what they already know how to do, with the implication that no training beyond the traditional social sciences is needed. Given this lack of broader understanding of evaluation's specialized role, the field struggles with how best to communicate about evaluation to stakeholders and other professions.
Purpose: This paper addresses the need to communicate clearly what is distinctive about evaluation to stakeholders and other professions by offering a conceptual tool that can be used in dialogue with others. Specifically, we adapt a personnel evaluation framework to map what is distinctive about what evaluators know and can do. We then compare this map with the knowledge and skills needed in a related profession (i.e., assessment) to reveal how the two professions differ.
Setting: Not applicable.
Intervention: Not applicable.
Research Design: Not applicable.
Data Collection and Analysis: Not applicable.
Findings: We argue that using a conceptual tool such as the one presented in this paper, together with comparative case examples, would clarify for outsiders the distinctive work of evaluators. We also explain how this conceptual tool is flexible and could be extended by evaluation practitioners in myriad ways.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users may copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org.