Formative Evaluation of an Educational Technology Innovation: Developer's Insights into Assessment Tools for Teaching and Learning

John A. Hattie
https://orcid.org/0000-0003-3873-4854
Gavin T. L. Brown
https://orcid.org/0000-0002-8352-2351
Lorrae Ward
S. Earl Irving
Peter J. Keegan

Abstract

This article reports formative evaluations conducted by the development team during the creation of asTTle, a new ICT-mediated educational assessment resource, focusing on the consequences of choices made by the developers. The evaluations addressed the validity, accuracy, added-value, training, and utility standards identified as important in the deployment of educational technology. On the basis of the in-house use of these evaluations, it is argued that users can have confidence that asTTle has demonstrated validity through alignment with the curriculum and classroom practice, that its reporting systems add value to teachers’ work, and that accuracy of understanding is enhanced by professional development. Insights into effective professional development have been obtained, though further improvements in the training component of the system are required. The utility of the software has been steadily increased, although unresolved issues remain and are currently being addressed. A gradual implementation mechanism has been used successfully to obtain user buy-in and to identify and respond to user needs. Users and funding agencies can therefore have confidence that asTTle has been designed, developed, and implemented in a fashion consistent with maximizing benefits to the end-user, and that the validity, accuracy, added-value, training, and utility standards for educational technology have been met, at least in part. The data reported here demonstrate that formative evaluations oriented toward end-user benefits can add significant value to an educational technology innovation.

Article Details

How to Cite
Hattie, J. A., Brown, G. T. L., Ward, L., Irving, S. E., & Keegan, P. J. (2006). Formative Evaluation of an Educational Technology Innovation: Developer’s Insights into Assessment Tools for Teaching and Learning. Journal of MultiDisciplinary Evaluation, 3(5), 1–54. https://doi.org/10.56645/jmde.v3i5.50
Section
Research on Evaluation Articles

References

Ajzen, I., & Madden, T. J. (1986). Prediction of goal-directed behavior: Attitudes, intentions, and perceived behavioral control. Journal of Experimental Social Psychology, 22, 453-474. DOI: https://doi.org/10.1016/0022-1031(86)90045-4

Baker, E. L. (2005, July). Improving accountability models by using technology-enabled knowledge systems (TEKS) (CSE Report No. 656). Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing (CRESST), University of California, Los Angeles.

Baker, E. L., & Herman, J. L. (2003). A distributed evaluation model. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology: Effective research designs for improving learning (pp. 95-119). New York, NY: Teachers College Press.

Bottino, R. M., Forcheri, P., & Molfino, M. T. (1998). Technology transfer in schools: From research to innovation. British Journal of Educational Technology, 29(2), 163-172. DOI: https://doi.org/10.1111/1467-8535.00057

Bourges-Waldegg, P., Moreno, L., & Rojano, T. (2000). The role of usability on the implementation and evaluation of educational technology. Proceedings of the 33rd Hawaii International Conference on System Sciences (pp. 1-7). DOI: https://doi.org/10.1109/HICSS.2000.926722

Brown, G. T. L. (2002). Teachers' conceptions of assessment. Unpublished doctoral dissertation, The University of Auckland, Auckland, New Zealand.

Brown, G. T. L. (2004a). Measuring attitude with positively packed self-report ratings: Comparison of agreement and frequency scales. Psychological Reports, 94, 1015-1024. DOI: https://doi.org/10.2466/pr0.94.3.1015-1024

Brown, G. T. L. (2004b). Teachers' conceptions of assessment: Implications for policy and professional development. Assessment in Education: Policy, Principles and Practice, 11(3), 305-322. DOI: https://doi.org/10.1080/0969594042000304609

Brown, G. T. L., & Hattie, J. A. C. (2005, September). School-based assessment and assessment for learning: How can it be implemented in developed, developing and underdeveloped countries? Paper presented at the APEC East Meets West: An International Colloquium on Educational Assessment, Kuala Lumpur, Malaysia.

Carroll, T. G. (2001). Do today's evaluations meet the needs of tomorrow's networked learning communities? In W. F. Heinecke & L. Blasi (Eds.), Methods of evaluating educational technology (pp. 3-15). Greenwich, CT: Information Age Publishing. DOI: https://doi.org/10.1108/978-1-60752-504-220251003

Compton, V., & Jones, A. (1998). Reflecting on teacher development in technology education: Implications for future programs. International Journal of Technology and Design Education, 8, 151-166. DOI: https://doi.org/10.1023/A:1008808327436

Conole, G., & Warburton, B. (2005). A review of computer-assisted assessment. ALT-J, Research in Learning Technology, 13(1), 17-31. DOI: https://doi.org/10.1080/0968776042000339772

Cuban, L. (2001). Oversold and underused: Reforming schools through technology, 1980-2000. Cambridge, MA: Harvard University Press. DOI: https://doi.org/10.4159/9780674030107

Fichman, R. G. (1992). Information technology diffusion: A review of empirical research. Cambridge, MA: Massachusetts Institute of Technology, Sloan School of Management.

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.

Hambleton, R. K., & Slater, S. C. (1997). Are NAEP executive summary reports understandable to policy makers and educators? (CSE Technical Report No. 430). Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing, Graduate School of Education & Information Studies, University of California, Los Angeles.

Hattie, J. A. (1999, August). Influences on student learning. Inaugural Lecture: Professor of Education, The University of Auckland, Auckland, New Zealand.

Hattie, J. A. (2002). Schools like mine: Cluster analysis of New Zealand schools. (asTTle Tech. Rep. No. 14). Auckland, New Zealand: The University of Auckland, Project asTTle.

Hattie, J. A., Brown, G. T. L., & Keegan, P. J. (2003). A national teacher-managed, curriculum-based assessment system: Assessment Tools for Teaching & Learning (asTTle). International Journal of Learning, 10, 771-778.

Hedges, L. V., Konstantopoulos, S., & Thoreson, A. (2003). Studies of technology implementation and effects. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology: Effective research designs for improving learning (pp. 187-204). New York, NY: Teachers College Press.

Irving, S. E., & Higginson, R. M. (2003). Improving asTTle for secondary school use: Teacher and student feedback (asTTle Tech. Rep. No. 42). Auckland, New Zealand: The University of Auckland/Ministry of Education.

Lesgold, A. (2003). Detecting technology's effects in complex school environments. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology: Effective research designs for improving learning (pp. 38-74). New York, NY: Teachers College Press.

Lewis, A. (1999, June). Comprehensive Systems for Educational Accounting and Improvement: R&D Results (CSE Technical Report No. 504). Los Angeles, CA: National Center for Research on Evaluation, Standards, and Student Testing (CRESST), Center for the Study of Evaluation (CSE), Graduate School of Education & Information Studies, University of California, Los Angeles.

Linn, R. L., & Dunbar, S. B. (1992). Issues in the design and reporting of the National Assessment of Educational Progress. Journal of Educational Measurement, 29(2), 177-194. DOI: https://doi.org/10.1111/j.1745-3984.1992.tb00369.x

Meagher-Lundberg, P. (2000). Report on comparison groups/variable for use in analyzing assessment results (Technical Report No. 1). Auckland, New Zealand: The University of Auckland, Project asTTle.

Meagher-Lundberg, P. (2001a). Report on output reporting design: Focus group 1 (Technical Report No. 9). Auckland, New Zealand: The University of Auckland, Project asTTle.

Meagher-Lundberg, P. (2001b). Report on output reporting design: Focus group 2 (Technical Report No. 10). Auckland, New Zealand: The University of Auckland, Project asTTle.

Means, B., Haertel, G. D., & Moses, L. (2003). Evaluating the effects of learning technologies. In G. D. Haertel & B. Means (Eds.), Evaluating educational technology: Effective research designs for improving learning (pp. 1-13). New York, NY: Teachers College Press.

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 2-103). New York, NY: Macmillan.

North, R. F. J., Strain, D. M., & Abbott, L. (2000). Training teachers in computer-based management information systems. Journal of Computer Assisted Learning, 16, 27-40. DOI: https://doi.org/10.1046/j.1365-2729.2000.00113.x

Parks, S., Huot, D., Hamers, J., & H.-Lemonnier, F. (2003). Crossing boundaries: Multimedia technology and pedagogical innovation in a high school class. Language Learning & Technology, 7(1), 28-45. DOI: https://doi.org/10.64152/10125/25186

Patton, M. Q. (1978). Utilization-focused evaluation. Beverley Hills, CA: Sage.

Posavac, E. J., & Carey, R. G. (1997). Program evaluation: Methods and case studies (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Provus, M. M. (1973). Evaluation of ongoing programs in the public school system. In B. R. Worthen & J. R. Sanders (Eds.), Educational evaluation: Theory and practice. Belmont, CA: Wadsworth.

Raikes, N., & Harding, R. (2003). The horseless carriage stage: Replacing conventional measures. Assessment in Education: Policy, Principles and Practice, 10(3), 267-277. DOI: https://doi.org/10.1080/0969594032000148136

Riel, M. (2001). Evaluating educational technology: A call for collaborative learning, teaching, research and development. In W. F. Heinecke & L. Blasi (Eds.), Methods of evaluating educational technology (pp. 17-40). Greenwich, CT: Information Age Publishing. DOI: https://doi.org/10.1108/978-1-60752-504-220251004

Sen, A. (2000). Consequential evaluation and practical reason. The Journal of Philosophy, 97(9), 477-502. DOI: https://doi.org/10.2307/2678488

Scriven, M. (1991). Beyond formative and summative evaluation. In M. W. McLaughlin & D. C. Phillips (Eds.), Evaluation & education: At quarter century (pp. 19-64). Chicago, IL: National Society for the Study of Education. DOI: https://doi.org/10.1177/016146819109200603

Spolsky, J. (2001). User interface design for programmers. Berkeley, CA: APress LP. DOI: https://doi.org/10.1007/978-1-4302-0857-0

Stufflebeam, D. L. (1983). The CIPP model for program evaluation. In G. F. Madaus, M. Scriven, & D. L. Stufflebeam (Eds.), Evaluation models: Viewpoints on educational and human services evaluation. Boston, MA: Kluwer-Nijhoff. DOI: https://doi.org/10.1007/978-94-009-6675-8_7

Ward, L., Hattie, J. A., & Brown, G. T. L. (2003). The evaluation of asTTle in schools: The power of professional development (asTTle Tech. Rep. No. 35). Auckland, New Zealand: The University of Auckland/Ministry of Education.

Watson, D. M. (2001). Pedagogy before technology: Re-thinking the relationship between ICT and teaching. Education and Information Technologies, 6(4), 251-266. DOI: https://doi.org/10.1023/A:1012976702296
