The Use of Technology in Evaluation Practice
Abstract
Background: Evaluation practice is no longer limited to pencil-and-paper questionnaires; today, technological advances allow evaluators to collect data with handheld devices, visualize information in interactive ways, and communicate instantaneously with stakeholders across the globe. These advances have changed how we conduct our practice, and they will continue to redefine how we design our evaluations, interact with stakeholders, and communicate our findings. Few published articles examine the interface between evaluation and technology, and this study represents an initial attempt to examine the technological tools evaluators use in their practice, the reasons those tools are adopted, and the future technological interests of practicing evaluators.
Purpose: This research on evaluation study attempts to (1) identify the types of technology tools evaluators use in their practice, (2) describe the factors that predict technology adoption, and (3) understand the tools that evaluators are interested in learning more about. This inquiry offers the evaluation community a broader perspective on the technologies evaluators are implementing in their practice, provides insight into future technological trends within the field, and introduces tools that can potentially enhance practice.
Setting: A virtual online community of evaluation practitioners.
Intervention: This was an exploratory research on evaluation study with no intervention.
Research Design: A panel of experts on technology and evaluation was recruited to brainstorm a comprehensive list of technologies that could be adopted by evaluators as part of their practice. This comprehensive list of technology tools was then embedded within a larger survey instrument that was distributed to a sample of evaluation practitioners from the American Evaluation Association. The survey asked evaluators to identify the technology tools they had used or currently use in their practice, how often each tool was used, their satisfaction with each tool, and why they used each tool in their practice.
Data Collection and Analysis: Data were collected through a web-based survey of members of the American Evaluation Association. The analysis used descriptive statistics to represent trends in technology use and adoption. Multiple statistical comparisons using ANOVA were also conducted to examine the relationship between technology adoption and use and evaluators' background characteristics. Open-ended survey responses were also presented as part of the analysis.
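For readers unfamiliar with this type of comparison, the sketch below illustrates the general form of such an analysis: descriptive statistics by group followed by a one-way ANOVA. It is not the authors' code; the column names, grouping variable, and data are hypothetical and serve only to show the structure of the comparison.

```python
# Illustrative sketch only (not the authors' analysis code).
# Hypothetical data: each row is one survey respondent, with an experience
# group and a count of technology tools adopted in practice.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "years_experience_group": ["0-5", "0-5", "6-15", "6-15", "16+", "16+"],
    "tools_adopted": [4, 6, 9, 11, 7, 8],
})

# Descriptive statistics: trends in technology adoption by background group.
print(df.groupby("years_experience_group")["tools_adopted"].describe())

# One-way ANOVA: does mean adoption differ across experience groups?
groups = [g["tools_adopted"].values
          for _, g in df.groupby("years_experience_group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```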
Findings: Analyses revealed that technological tools were adopted by evaluators because they helped produce quality products, increased timeliness, reduced errors, and increased cost efficiency. The most widely adopted tools tended to aid in quantitative data analysis, project management, and productivity. Many evaluators expressed interest in learning more about qualitative analysis tools, web-based data collection tools, and relational database creation and management.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org.