We Can’t Hear You – You’re on Mute: Findings From a Review of Evaluation Capacity Building (ECB) Practice Online
DOI: https://doi.org/10.56645/jmde.v19i45.739
Abstract
Background: In her presidential address to the American Evaluation Association (AEA) in 2007, Hallie Preskill (2008) highlighted the potential role of technology in promoting learning from evaluation, noting the increased use of computers, the internet, and social media as untapped ways to facilitate evaluation. More than ten years later, in the context of the COVID-19 pandemic, evaluators and evaluation capacity building (ECB) practitioners found themselves needing to shift to online modalities to conduct evaluation and build capacity. The COVID-19 pandemic, technological advancements, and the rapid shift to remote work have changed the way we work (Gratton, 2021; Kane et al., 2021). Building evaluation capacity is no exception to this trend.
Purpose: This study aimed to examine ways that practitioners have built evaluation capacity online or have used technology to do so, to capture lessons learned that can be applied in a COVID and post-normal context.
Setting: Findings from this study can be applied in online contexts for developing evaluation capacity.
Intervention: Not applicable.
Research Design: The study design consisted of a rapid review of the ECB literature published from 2000 to 2019 in eight academic journals focused on evaluation research and practice.
Data Collection and Analysis: Twenty-nine case applications of ECB practice were reviewed for this study; these cases either (1) mentioned the use of technology as a strategy for building evaluation capacity or (2) noted that at least one component of the ECB intervention was carried out online or virtually. Quantitative data were analyzed using descriptive statistics. Qualitative data were coded in MAXQDA using conventional content analysis (Hsieh & Shannon, 2005).
Findings: Online ECB interventions have become more diverse and more frequent over time. Fewer than half (45%) of the ECB interventions used both asynchronous and synchronous strategies for building capacity, while more than one-third (38%) used asynchronous strategies only. Key barriers to implementing ECB strategies online included a lack of social connection with other participants during capacity-building activities, technical malfunctions, lack of access to or familiarity with the technology in use, and limited resources for carrying out evaluation activities. Key facilitators included fostering participant interaction and relationship-building both online and offline, tailoring ECB activities to participants’ work contexts, and providing tutorials for accessing and using the technology.
References
Anderson, C., Chase, M., Johnson, J., Mekiana, D., McIntyre, D., Ruerup, A., & Kerr, S. (2012). It is only new because it has been missing for so long: Indigenous evaluation capacity building. American Journal of Evaluation, 33(4), 566–582. https://doi.org/10.1177/1098214012449686
Beere, D. (2005). Evaluation capacity-building: A tale of value-adding. Evaluation Journal of Australasia, 5(2), 41–47. https://doi.org/10.1177/1035719X0500500207
Bennington, T. L., Gay, G., & Jones, M. L. W. (1999). Using multimedia records to support mixed-method evaluation. New Directions for Evaluation, 84, 59–72. https://doi.org/10.1002/ev.1153
Bourgeois, I., Lemire, S. T., Fierro, L. A., Castleman, A. M., & Cho, M. (2023). Laying a solid foundation for the next generation of evaluation capacity building: Findings from an integrative review. American Journal of Evaluation, 44(1), 29–49. https://doi.org/10.1177/10982140221106991
Brandon, P. R., & Higa, T. A. F. (2004). An empirical study of building the evaluation capacity of K-12 site-managed project personnel. Canadian Journal of Program Evaluation, 19(1), 125–142. https://doi.org/10.3138/cjpe.019.005
Campbell, R., Townsend, S. M., Shaw, J., Karim, N., & Markowitz, J. (2015). Can a workbook work? Examining whether a practitioner evaluation toolkit can promote instrumental use. Evaluation and Program Planning, 52, 107–117. https://doi.org/10.1016/j.evalprogplan.2015.04.005
Cohen, C. (2006). Evaluation learning circles: A sole proprietor’s evaluation capacity-building strategy. New Directions for Evaluation, 111, 85–93. https://doi.org/10.1002/ev.200
Compton, D. W., MacDonald, G., Schooley, M., Zhang, L., & Baizerman, M. (2008). Using evaluation capacity building (ECB) to interpret evaluation strategy and practice in the United States National Tobacco Control Program (NTCP): A preliminary study. Canadian Journal of Program Evaluation, 23(3 Spec. Issue), 199–224. https://doi.org/10.3138/cjpe.0023.010
Dette, R., Steets, J., & Sagmeister, E. (2016). Technologies for monitoring in insecure environments. Secure Access in Volatile Environments (SAVE). https://www.gppi.net/media/SAVE__2016__Toolkit_on_Technologies_for_Monitoring_in_Insecure_Environments.pdf
Duncan, D. F., White, J. B., & Nicholson, T. (2003). Using internet-based surveys to reach hidden populations: Case of nonabusive illicit drug users. American Journal of Health Behavior, 27(3), 208–218. https://doi.org/10.5993/AJHB.27.3.2
Fayard, A.-L., Weeks, J., & Khan, M. (2021). Designing the hybrid office. Harvard Business Review, 99(2), 114.
Fetterman, D. M. (2002). Web surveys to digital movies: Technological tools of the trade. Educational Researcher, 31(6), 29–37. https://doi.org/10.3102/0013189X031006029
Fleming, M. L., & Easton, J. (2010). Building environmental educators’ evaluation capacity through distance education. Evaluation and Program Planning, 33(2), 172–177. https://doi.org/10.1016/j.evalprogplan.2009.07.007
Galen, M., & Grodzicki, D. (2011). Utilizing emerging technology in program evaluation. New Directions for Evaluation, 131, 123–128. https://doi.org/10.1002/ev.389
Gay, G., & Bennington, T. L. (Eds.). (1999). Information technologies in evaluation: Social, moral, epistemological, and practical implications [Special issue]. New Directions for Evaluation, 84. https://doi.org/10.1002/ev.1149
Gibson, R., & Robichaud, S. (2020). Evaluating Dancing with Parkinson’s: Reflections from the perspective of a community organization. Evaluation and Program Planning, 80. https://doi.org/10.1016/j.evalprogplan.2017.05.010
Goodyear, L. K. (2011). Building a community of evaluation practice within a multisite program. New Directions for Evaluation, 129, 97–105. https://doi.org/10.1002/ev.358
Gratton, L. (2021). How to do hybrid right. Harvard Business Review, 99(3), 66.
Hilton, L., & Libretto, S. (2017). Evaluation capacity building in the context of military psychological health: Utilizing Preskill and Boyle’s multidisciplinary model. American Journal of Evaluation, 38(3), 393–404. https://doi.org/10.1177/1098214016664584
Hong, L. (2008). Blending online components into traditional instruction in pre-service teacher education: The good, the bad, and the ugly. International Journal for the Scholarship of Teaching and Learning, 2(1). https://doi.org/10.20429/ijsotl.2008.020114
Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
Jamieson, V., & Azzam, T. (2012). The use of technology in evaluation practice. Journal of MultiDisciplinary Evaluation, 8(18), 1–15. https://doi.org/10.56645/jmde.v8i18.340
Kane, G. C., Nanda, R., Phillips, A., & Copulsky, J. (2021). Redesigning the post-pandemic workplace. MIT Sloan Management Review, 62(3), 12–13.
Kentnor, H. E. (2015). Distance education and the evolution of online learning in the United States. Curriculum and Teaching Dialogue, 17(1/2), 21–34. https://digitalcommons.du.edu/law_facpub/24/
Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: The evolution of a rapid review approach. Systematic Reviews, 1(1), 1–9. https://doi.org/10.1186/2046-4053-1-10
Labin, S. N., Duffy, J. L., Meyers, D. C., Wandersman, A., & Lesesne, C. A. (2012). A research synthesis of the evaluation capacity building literature. American Journal of Evaluation, 33(3), 307–338. https://doi.org/10.1177/1098214011434608
Lachance, L., Watson, C., Blais, D., Ungar, M., Healey, G., Salaffie, M., Sundar, P., Kelly, L., & Lagace, M. C. (2019). Strengthening child and youth programs: A look at inter-organizational mentoring strategies. Evaluation and Program Planning, 76. https://doi.org/10.1016/j.evalprogplan.2019.101679
Ma, L., & Lee, C. S. (2021). Evaluating the effectiveness of blended learning using the ARCS model. Journal of Computer Assisted Learning, 37(5), 1397–1408. https://doi.org/10.1111/jcal.12579
Mackay, K. (2002). The World Bank’s ECB experience. New Directions for Evaluation, 93, 81–100. https://doi.org/10.1002/ev.43
Mayberry, R. M., Daniels, P., Yancey, E. M., Akintobi, T. H., Berry, J., Clark, N., & Dawaghreh, A. (2009). Enhancing community-based organizations’ capacity for HIV/AIDS education and prevention. Evaluation and Program Planning, 32(3), 213–220. https://doi.org/10.1016/j.evalprogplan.2009.01.002
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2010). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. https://eric.ed.gov/?id=ED505824
Mulvey, K. P., Atkinson, D. D., Avula, D., & Luckey, J. W. (2005). Using the internet to measure program performance. American Journal of Evaluation, 26(4), 587–597. https://doi.org/10.1177/1098214005281320
Naccarella, L., Pirkis, J., Kohn, F., Morley, B., Burgess, P., & Blashki, G. (2007). Building evaluation capacity: Definitional and practical implications from an Australian case study. Evaluation and Program Planning, 30(3), 231–236. https://doi.org/10.1016/j.evalprogplan.2007.05.001
Nelson, M., & Eddy, R. (2008). Evaluative thinking and action in the classroom. New Directions for Evaluation, 117, 37–46. https://doi.org/10.1002/ev.250
Palvia, S., Aeron, P., Gupta, P., Mahapatra, D., Parida, R., Rosner, R., & Sindhi, S. (2018). Online education: Worldwide status, challenges, trends and implications. Journal of Global Information Technology Management, 21(4), 233–241. https://doi.org/10.1080/1097198X.2018.1542262
Preskill, H. (2008). Evaluation’s second act: A spotlight on learning. American Journal of Evaluation, 29(2), 127–138. https://doi.org/10.1177/1098214008316896
Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29(4), 443–459. https://doi.org/10.1177/1098214008324182
Rolfe, A., & Cheek, B. (2012). Learning styles. InnovAiT, 5(3), 176–181. https://doi.org/10.1093/innovait/inr239
Rorrer, A. S. (2016). An evaluation capacity building toolkit for principal investigators of undergraduate research experiences: A demonstration of transforming theory into practice. Evaluation and Program Planning, 55, 103–111. https://doi.org/10.1016/j.evalprogplan.2015.12.006
Rosenstein, B., & Englert, P. E. (2008). The road to evaluation capacity building: A case study from Israel. Canadian Journal of Program Evaluation, 23(3), 83–102. https://doi.org/10.3138/cjpe.0023.005
Satterlund, T. D., Treiber, J., Kipke, R., Kwon, N., & Cassady, D. (2013). Accommodating diverse clients’ needs in evaluation capacity building: A case study of the Tobacco Control Evaluation Center. Evaluation and Program Planning, 36(1), 49–55. https://doi.org/10.1016/j.evalprogplan.2012.05.004
Scharbatke-Church, C. & Patel, A. G. (2016). Technology for evaluation in fragile and conflict affected states: An introduction for the digital immigrant evaluator. The Fletcher School, Tufts University and Besa. https://sites.tufts.edu/ihs/files/2018/02/Technology-and-Evaluation-Hitachi-Paper.pdf
Stockdill, S. H., Baizerman, M., & Compton, D. W. (2002). Toward a definition of the ECB process: A conversation with the ECB literature. New Directions for Evaluation, 93, 7–26. https://doi.org/10.1002/ev.39
Sundar, P., Kasprzak, S., Halsall, T., & Woltman, H. (2010). Using web-based technologies to increase evaluation capacity in organizations providing child and youth mental health services. Canadian Journal of Program Evaluation, 25(1), 91–112. https://doi.org/10.3138/cjpe.025.005
Tang, H., Cowling, D. W., Koumjian, K., Roesler, A., Lloyd, J., & Rogers, T. (2002). Building local program evaluation capacity toward a comprehensive evaluation. New Directions for Evaluation, 95, 39–56. https://doi.org/10.1002/ev.57
Taut, S. (2007). Studying self-evaluation capacity building in a large international development organization. American Journal of Evaluation, 28(1), 45–59. https://doi.org/10.1177/1098214006296430
Tricco, A. C., Antony, J., Zarin, W., Strifler, L., Ghassemi, M., Ivory, J., Perrier, L., Hutton, B., Moher, D., & Straus, S. E. (2015). A scoping review of rapid review methods. BMC Medicine, 13, 224. https://doi.org/10.1186/s12916-015-0465-6
Watt, J. H. (1999). Internet systems for evaluation research. New Directions for Evaluation, 84, 23–43. https://doi.org/10.1002/ev.1151
Zhao, K., Sridharan, S., Ingabire, M.-G., Yu, M., Nakaima, A., Li, X., Xiao, Y., & Chen, E. (2017). An experiment on building evaluation capacity to address health inequities in China. New Directions for Evaluation, 154, 17–28. https://doi.org/10.1002/ev.20239
License
Copyright (c) 2023 Ann Marie Castleman, Minji Cho, Isabelle Bourgeois, Leslie Fierro, Sebastian Lemire
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org