Refining and Measuring the Construct of Evaluative Thinking: An Exploratory Factor Analysis of the Evaluative Thinking Inventory
Abstract
Background: Evaluative thinking has emerged as a key construct in evaluation, especially for evaluation practitioners and researchers interested in evaluation capacity building (ECB). Yet despite increasing calls for more research on evaluation, and on ECB in particular, little empirical inquiry into the dimensions of evaluative thinking had been conducted until recently.
Purpose: To address this gap, the study presented in this paper refines the construct of evaluative thinking by exploring its underlying dimensions and ascertains the internal consistency of an instrument developed to measure it, the Evaluative Thinking Inventory (ETI).
Setting: The ETI was developed as part of an ECB initiative focused on non-formal science, technology, engineering, and math (STEM) education in the United States, and was tested as part of a study focused on evaluating gifted education programs, also in the United States.
Intervention: Not applicable.
Research design: Survey research and exploratory factor analysis (EFA).
Data collection & analysis: The ETI was administered to participants in a study measuring the effectiveness of a tool used to conduct internal evaluations of gifted education programs. SPSS was used to conduct an EFA on 96 completed ETIs. Cronbach’s alpha was used to estimate the internal consistency of the instrument.
Findings: The analysis of the ETI revealed a two-factor model of evaluative thinking: (a) believe in and practice evaluation, and (b) pose thoughtful questions and seek alternatives. The study also provides internal consistency evidence for the ETI, with alpha reliabilities for the two factors ranging from 0.80 to 0.82. The ETI has potentially wide applicability in research and practice in ECB and in the field of evaluation more generally.
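The internal-consistency statistic reported above, Cronbach's alpha, can be reproduced from item-level responses. As a minimal sketch (using a small hypothetical respondents-by-items matrix, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents on a 4-item, 5-point Likert scale
scores = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
])
alpha = cronbach_alpha(scores)
```

In practice, alpha would be computed separately on the items loading on each of the two factors, mirroring the per-factor reliabilities reported in the abstract.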
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org
Funding data
National Science Foundation, Grant No. 0814364
References
Baker, A., Bruner, B., Sabo, K., & Cook, A. (2006). Evaluation capacity and evaluative thinking in organizations. Retrieved from: http://www.evaluativethinking.org/docs/EvalCap_EvalThink.pdf
Bourgeois, I., & Cousins, J. B. (2013). Understanding dimensions of organizational evaluation capacity. American Journal of Evaluation, 34(3), 299-319. DOI: https://doi.org/10.1177/1098214013477235
Bourgeois, I., Toews, E., Whynot, J., & Lamarche, M. K. (2013). Measuring organizational evaluation capacity in the Canadian federal government. Canadian Journal of Program Evaluation, 28(2), 1-19. DOI: https://doi.org/10.3138/cjpe.28.001
Brown, T. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press.
Buckley, J. & Archibald, T. (2011). The Evaluative Thinking Inventory. Cornell University. Retrieved from: https://cornell.qualtrics.com/jfe/form/SV_eLKEC3flFDNuU8A
Buckley, J., Archibald, T., Hargraves, M., & Trochim, W. M. (2015). Defining and teaching evaluative thinking: Insights from research on critical thinking. American Journal of Evaluation, 36(3), 375-388. DOI: https://doi.org/10.1177/1098214015581706
Callahan, C. (1986). Asking the right questions: The central issue in evaluating programs for the gifted and talented. Gifted Child Quarterly, 30, 38-42. DOI: https://doi.org/10.1177/001698628603000108
Coryn, C. L., Wilson, L. N., Westine, C. D., Hobson, K. A., Ozeki, S., Fiekowsky, E. L., ... & Schröter, D. C. (2017). A decade of research on evaluation: A systematic review of research on evaluation published between 2005 and 2014. American Journal of Evaluation, 38(3), 329-347. DOI: https://doi.org/10.1177/1098214016688556
Cousins, J. B., Goh, S. C., Elliott, C. J., & Bourgeois, I. (2014). Framing the capacity to do and use evaluation. In J. B. Cousins & I. Bourgeois (Eds.), Organizational capacity to do and use evaluation. New Directions for Evaluation, 141, 7-23. DOI: https://doi.org/10.1002/ev.20076
Davidson, E., Howe, M., & Scriven, M. (2004). Evaluative thinking for grantees. In M. Braverman, N. Constantine, & J.K. Slater (Eds.), Foundations and evaluation: Contexts and practices for effective philanthropy (pp. 259-280). San Francisco, CA: Jossey-Bass.
de Winter, J. D., Dodou, D., & Wieringa, P. A. (2009). Exploratory factor analysis with small sample sizes. Multivariate Behavioral Research, 44(2), 147-181. DOI: https://doi.org/10.1080/00273170902794206
Fetterman, D. & Wandersman, A. (2005). Empowerment evaluation principles in practice. New York: Guilford Press.
Fierro, L. A., Codd, H., Gill, S., Pham, P. K., Grandjean Targos, P. T., & Wilce, M. (2018). Evaluative thinking in practice: The National Asthma Control Program. In A. T. Vo & T. Archibald (Eds.), Evaluative Thinking. New Directions for Evaluation, 158, 49-72. DOI: https://doi.org/10.1002/ev.20322
Gagnon, F., Aubry, T., Cousins, J. B., Goh, S. C., & Elliott, C. (2018). Validation of the evaluation capacity in organizations questionnaire. Evaluation and Program Planning, 68, 166-175. DOI: https://doi.org/10.1016/j.evalprogplan.2018.01.002
Hurley, A., Scandura, T., Schriesheim, C., Brannick, M., Seers, A., Vandenberg, R., & Williams, L. (1997). Exploratory and confirmatory factor analysis: Guidelines, issues, and alternatives. Journal of Organizational Behavior, 18, 667-682. DOI: https://doi.org/10.1002/(SICI)1099-1379(199711)18:6<667::AID-JOB874>3.0.CO;2-T
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263-291. DOI: https://doi.org/10.2307/1914185
King, J. A. (2007). Developing evaluation capacity through process use. New Directions for Evaluation, 116, 45-59. DOI: https://doi.org/10.1002/ev.242
Labin, S. N. (2014). Developing common measures in evaluation capacity building: An iterative science and practice process. American Journal of Evaluation, 35(1), 107-115. DOI: https://doi.org/10.1177/1098214013499965
Labin, S. N., Duffy, J. L., Meyers, D. C., Wandersman, A., & Lesesne, C. A. (2012). A research synthesis of the evaluation capacity building literature. American Journal of Evaluation, 33(3), 307-338. DOI: https://doi.org/10.1177/1098214011434608
Lord, C., Ross, L., & Lepper, M. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37, 2098-2109. DOI: https://doi.org/10.1037/0022-3514.37.11.2098
McCoach, B., Gable, R., & Madura, J. (2013). Instrument development in the affective domain: School and corporate applications. New York: Springer. DOI: https://doi.org/10.1007/978-1-4614-7135-6
McIntosh, J. (2015). The depth and complexity program evaluation tool: A new method for conducting internal evaluations of gifted education programs (Unpublished doctoral dissertation). Purdue University, West Lafayette, IN.
NAGC. (2010). Pre-K-Grade 12 gifted programming standards. Retrieved from: http://www.nagc.org/uploadedFiles/Information_and_Resources/Gifted_Program_Standards/K-12%20programming%20standards.pdf
Preskill, H. (2014). Now for the hard stuff: Next steps in ECB research and practice. American Journal of Evaluation, 35(1), 116-119. DOI: https://doi.org/10.1177/1098214013499439
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443-459. DOI: https://doi.org/10.1177/1098214008324182
Preskill, H., & Torres, R. T. (2000). Readiness for organizational learning and evaluation instrument. Available from H. Preskill, hallie.preskill@fsg.org. DOI: https://doi.org/10.1002/ev.1189
Sanders, M., Gugiu, P. C., & Enciso, P. (2015). How good are our measures? Investigating the appropriate use of factor analysis for survey instruments. Journal of Multidisciplinary Evaluation, 11(25), 22-33. DOI: https://doi.org/10.56645/jmde.v11i25.432
Schwandt, T. A. (2018). Evaluative thinking as a collaborative social practice: The case of boundary judgment making. In A. T. Vo & T. Archibald (Eds.), Evaluative Thinking. New Directions for Evaluation, 158, 125-137. DOI: https://doi.org/10.1002/ev.20318
Stockdill, S. H., Baizerman, M., & Compton, D. W. (2002). Toward a definition of the ECB process: A conversation with the ECB literature. In D. W. Compton, M. Baizerman, & S. H. Stockdill (Eds.), The Art, Craft and Science of Evaluation Capacity Building: New Directions for Evaluation, 93, 7-25. DOI: https://doi.org/10.1002/ev.39
Suarez-Balcazar, Y., & Taylor-Ritzler, T. (2014). Moving from science to practice in evaluation capacity building. American Journal of Evaluation, 35(1), 95-99. DOI: https://doi.org/10.1177/1098214013499440
Taylor-Powell, E., & Boyd, H. H. (2008). Evaluation capacity building in complex organizations. In M. T. Braverman, M. Engle, M. E. Arnold, & R. A. Rennekamp (Eds.), Program evaluation in a complex organizational system: Lessons from Cooperative Extension. New Directions for Evaluation, 120, 55-69. DOI: https://doi.org/10.1002/ev.276
Taylor-Ritzler, T., Suarez-Balcazar, Y., Garcia-Iriarte, E., Henry, D., & Balcazar, F. (2013). Understanding and measuring evaluation capacity: A model and instrument validation study. American Journal of Evaluation, 34(2), 190-206. DOI: https://doi.org/10.1177/1098214012471421
Vo, A. T., & Archibald, T. (2018). New directions for evaluative thinking. In A. T. Vo & T. Archibald (Eds.), Evaluative Thinking. New Directions for Evaluation, 158, 139-147. DOI: https://doi.org/10.1002/ev.20317
Vo, A. T., Schreiber, J. S., & Martin, A. (2018). Toward a conceptual understanding of evaluative thinking. In A. T. Vo & T. Archibald (Eds.), Evaluative Thinking. New Directions for Evaluation, 158, 29-47. DOI: https://doi.org/10.1002/ev.20324
Wandersman, A. (2014). Moving forward with the science and practice of evaluation capacity building (ECB): The why, how, what, and outcomes of ECB. American Journal of Evaluation, 35(1), 87-89. DOI: https://doi.org/10.1177/1098214013503895
Yarbrough, D., Shulha, L., Hopson, R., & Caruthers, F. (2011). The program evaluation standards: A guide for evaluators and evaluation users. Thousand Oaks, CA: Sage.