The Pragmatist in Context of a National Science Foundation Supported Grant Program Evaluation: Guidelines and Paradigms

Margaret E. Ross
N. Hari Narayanan
Theron Dean Hendrix
Lakshman Sundeep Myneni

Abstract


Background: The philosophical underpinnings of evaluation guidelines set forth by a funding agency can sometimes seem inconsistent with those of the intervention.


Purpose: Our purpose is to raise questions about the contrast between the philosophical beliefs and assumptions underlying the instructional program and those underlying our evaluation approach. Drawing heavily on Scriven, we discuss these questions from a pragmatist evaluation stance in light of issues defined by Lincoln and Guba (2000). The discussion is couched in the evaluation of an innovative approach to teaching computer science.


Setting: Auburn University, Auburn, AL


Intervention: The evaluation is designed to investigate the effects of a studio-based teaching approach in computer science education. The evaluation framework employs a rigorous design that seeks to provide evidence to support or refute some assumed truth about the object (or construct) investigated. The program evaluated, by contrast, is steeped in a constructivist framework, which assumes that no universal truth or reality exists; rather, reality is constructed by the individual.


Research Design: Our evaluation design largely reflects a post-positivist, quasi-experimental position. We also include a qualitative component based on student interviews.


Data Collection and Analysis: Evidence of the effectiveness of the instructional approach is assessed quantitatively through group comparisons of pre- and post-test and pre- and post-survey data (mixed-design ANOVA). Interviews provide the basis for a qualitative theme analysis.
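
To make the quantitative analysis concrete, the sketch below shows how a mixed-design (split-plot) ANOVA of this kind could be run on pre/post scores from two instructional groups, using the Python pingouin library on simulated data. The column names, group labels, sample size, and effect sizes are illustrative assumptions only, not the study's actual data or code.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
n = 30  # hypothetical number of students per section

records = []
for group in ("studio", "traditional"):
    for i in range(n):
        pre = rng.normal(60, 10)  # simulated pre-test score
        # Assume a modestly larger average gain in the studio section.
        gain = rng.normal(8 if group == "studio" else 5, 5)
        records.append({"id": f"{group}_{i}", "group": group,
                        "time": "pre", "score": pre})
        records.append({"id": f"{group}_{i}", "group": group,
                        "time": "post", "score": pre + gain})

df = pd.DataFrame(records)

# Mixed-design ANOVA: 'group' is the between-subjects factor and
# 'time' the within-subjects factor; the group x time interaction
# tests whether pre-to-post change differs between approaches.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="id", between="group")
print(aov.round(4))

The group-by-time interaction row of the output is the term of interest: a significant interaction would indicate that gains from pre- to post-test differ between the studio-based and comparison sections.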


Findings: Quantitative results were somewhat weak but consistently in support of the studio-based teaching approach. Interview data suggest that most students found working in groups an enjoyable and valuable experience.

How to Cite
Ross, M. E., Narayanan, N. H., Hendrix, T. D., & Myneni, L. S. (2011). The Pragmatist in Context of a National Science Foundation Supported Grant Program Evaluation: Guidelines and Paradigms. Journal of MultiDisciplinary Evaluation, 7(16), 111–130. https://doi.org/10.56645/jmde.v7i16.295
Section: Research on Evaluation Articles

References

Boyer, E. L., & Mitgang, L. D. (1996). Building community: A new future for architecture education and practice. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.

Johnson, R. B., & Onwuegbuzie, A. J. (2004, October). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14-26. https://doi.org/10.3102/0013189X033007014

Carbone, A., & Sheard, J. (2002). A studio-based teaching and learning model in IT: What do first year students think? Proceedings of the ITiCSE '02 Conference, Aarhus, Denmark, 213-217. https://doi.org/10.1145/544414.544485

Chen, H. T. (2003). Theory-driven approach for facilitation of planning health promotion or other programs. The Canadian Journal of Program Evaluation, 18(2), 91-113. https://doi.org/10.3138/cjpe.18.005

Chen, H. T. (2006, Spring). A theory-driven evaluation perspective on mixed methods research. Research in the Schools, 13(1), 75-83.

Christie, C. A., & Alkin, M. C. (2003). The user-oriented evaluator's role in formulating a program theory: Using a theory-driven approach. American Journal of Evaluation, 24(3), 373-385. https://doi.org/10.1177/109821400302400306

Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications.

Denzin, N. K., & Lincoln, Y. S. (2000). Part II: Paradigms and perspectives in transition. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.

Docherty, M., Sutton, P., Brereton, M., & Kaplan, S. (2001). An innovative design and studio-based CS degree. ACM SIGCSE Bulletin, 33(1), 144-148. https://doi.org/10.1145/366413.364591

Durkheim, E. (1938). The rules of sociological method (8th ed.). (S. A. Solovay & J. H. Mueller, Trans.; G. E. G. Catlin, Ed.). Glencoe, IL: The Free Press.

Eggen, P., & Kauchak, D. (2007). Educational psychology: Windows on classrooms. Upper Saddle River, NJ: Pearson Education, Inc.

Fetterman, D. M. (2002). Empowerment evaluation: Building communities of practice and a culture of learning. American Journal of Community Psychology, 30(1), 89-102. https://doi.org/10.1023/A:1014324218388

Fetterman, D. M., & Wandersman, A. (2007). Empowerment evaluation: Yesterday, today, and tomorrow. American Journal of Evaluation, 28(2), 179-198. https://doi.org/10.1177/1098214007301350

Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines. Boston: Pearson Education, Inc.

Gredler, M. E. (2008). Vygotsky's legacy: A foundation for research and practice. New York: Guilford Press.

Greene, J. C., & McClintock, C. (1991). The evolution of evaluation methodology. Theory into Practice, 30(1), 13-21. https://doi.org/10.1080/00405849109543471

Guba, E. G. (1987). What have we learned about naturalistic evaluation? Evaluation Practice, 8(1), 23-43. https://doi.org/10.1177/109821408700800102

Hendrix, D., Myneni, L., Narayanan, N. H., & Ross, M. (2008). Adapting a studio-based learning model for CS2. Technical Report CSSE08-03. Auburn, AL: Computer Science & Software Engineering Dept., Auburn University.

House, E. R. (1983). How we think about evaluation. In E. R. House (Ed.), Philosophy of evaluation (New Directions for Program Evaluation, No. 19, pp. 75-82). San Francisco: Jossey-Bass. https://doi.org/10.1002/ev.1342

Howe, K. R. (1988). Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher, 17(8), 10-16. https://doi.org/10.3102/0013189X017008010

Hubscher-Younger, T. (2002). Understanding algorithms through shared representations. Unpublished doctoral dissertation, Auburn University, Auburn, AL.

Hubscher-Younger, T., & Narayanan, N. H. (2003a). Authority and convergence in collaborative learning. Computers & Education, 41(4), 313-334. https://doi.org/10.1016/j.compedu.2003.06.003

Hubscher-Younger, T., & Narayanan, N. H. (2003b). Constructive and collaborative learning of algorithms. Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education, Reno, NV, 6-10. https://doi.org/10.1145/611892.611919

Hubscher-Younger, T., & Narayanan, N. H. (2003c). Dancing hamsters and marble statues: Characterizing student visualizations of algorithms. Proceedings of the 2003 Symposium on Software Visualization, San Diego, CA, 95-104. https://doi.org/10.1145/774833.774847

Hundhausen, C. D. (2002). Integrating algorithm visualization technology into an undergraduate algorithms course: Ethnographic studies of a social constructivist approach. Computers & Education, 39(3), 237-260. https://doi.org/10.1016/S0360-1315(02)00044-1

Hundhausen, C. D., & Brown, J. L. (2007). What you see is what you code: A 'live' algorithm development and visualization environment for novice learners. Journal of Visual Languages and Computing, 18(1), 22-47. https://doi.org/10.1016/j.jvlc.2006.03.002

Hundhausen, C. D., & Brown, J. L. (2008). Designing, visualizing, and discussing algorithms within a CS1 studio experience: An empirical study. Computers & Education, 50(1), 301-326. https://doi.org/10.1016/j.compedu.2006.06.002

Hundhausen, C. D., Narayanan, N. H., & Crosby, M. E. (2008). Exploring studio-based instructional models for computing education. Proceedings of the Technical Symposium on Computer Science Education (SIGCSE 2008), Portland, OR, 392-396. https://doi.org/10.1145/1352135.1352271

Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. New York: Cambridge University Press. https://doi.org/10.1017/CBO9780511815355

Lincoln, Y. S., & Guba, E. G. (2000). Paradigmatic controversies, contradictions, and emerging confluences. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.

Madaus, G. F., & Stufflebeam, D. (1989). Educational evaluation: Classic works of Ralph W. Tyler. Boston: Kluwer Academic Publishers. https://doi.org/10.1007/978-94-009-2679-0

McKinney, J. P., McKinney, K., Franiuk, R., & Schweitzer, J. (2006). The college classroom as a community: Impact on student attitudes and learning. College Teaching, 54(3), 281-284. https://doi.org/10.3200/CTCH.54.3.281-284

McLaughlin, M. W., & Phillips, D. C. (1991). Evaluation and education: At quarter century. Chicago: The National Society for the Study of Education.

Mark, M. M., Henry, G. T., & Julnes, G. (1999). Toward an integrative framework for evaluation practice. American Journal of Evaluation, 20(2), 177-199. https://doi.org/10.1177/109821409902000202

Mill, J. S. (1965). On the logic of the moral sciences: A system of logic, Book VI. H. M. Magid (Ed.). Indianapolis: The Bobbs-Merrill Company, Inc. (Original work published 1843)

Myneni, L., Ross, M., Hendrix, D., & Narayanan, N. H. (2008). Studio-based learning in CS2: An experience report. Proceedings of the 46th ACM Southeast Conference (ACM-SE 2008), March 28-29, 2008, Auburn, AL, 253-255. https://doi.org/10.1145/1593105.1593171

National Science Foundation. (n.d.). NSF project evaluation guide. Retrieved March 19, 2008, from https://www.nsf-proj-eval-guide.org/html/Module1a.htm

National Science Foundation. (2003). A review of the evaluation program in the Division of Research, Evaluation and Communication for the Directorate for Education and Human Resources. Retrieved March 31, 2009, from http://www.scribd.com/doc/999990/National-Science-Foundation-REC-EvalCOV

Onwuegbuzie, A. J. (2000). Positivists, Post-Positivists, Post-Structuralists, and Post-Modernists: Why can't we all get along? Towards a framework for unifying research paradigms. Paper presented at the Annual Meeting of the Association for the Advancement of Educational Research, Ponte Vedra, FL.

Perla, R. J., & Carifio, J. (2009). Toward a general and unified view of educational research and educational evaluation. Journal of MultiDisciplinary Evaluation, 6(11), 38-55. https://doi.org/10.56645/jmde.v6i11.200

Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). National Center for Research to Improve Postsecondary Teaching and Learning.

Preskill, H. S., & Catsambas, T. T. (2006). Reframing evaluation through appreciative inquiry. Thousand Oaks, CA: Sage Publications.

Rickert, H. (1962). Science and history: A critique of positivist epistemology (G. Reisman, Trans.; A. Goddard, Ed.). Princeton, NJ: D. Van Nostrand Company, Inc. (Original work published 1894)

Schweitzer, J. H., Kim, J. W., & Mackin, J. R. (1999). The impact of the built environment on crime and fear in urban neighborhoods. Journal of Urban Technology, 6(3), 59-73. https://doi.org/10.1080/10630739983588

Scriven, M. (1967). The Methodology of Evaluation. In R. Tyler, R. Gagne, M. Scriven (Eds.), Perspectives on curriculum evaluation (pp. 39-83). AERA Monograph Series on Curriculum Evaluation, No. 1. Skokie, IL: Rand McNally.

Scriven, M. (1983). The evaluation taboo. In E. R. House (Ed.), Philosophy of evaluation (New Directions for Program Evaluation, No. 19, pp. 75-82). San Francisco: Jossey-Bass. https://doi.org/10.1002/ev.1345

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Thousand Oaks, CA: Sage.

Scriven, M. (1993). Hard-won lessons in program evaluation. New Directions for Program Evaluation, 58, 1-103. https://doi.org/10.1002/ev.1647

Scriven, M. (1994). Evaluation as a discipline. Studies in Educational Evaluation, 20, 147-156. https://doi.org/10.1016/S0191-491X(00)80010-3

Scriven, M. (1995). The logic of evaluation and evaluation practice. New Directions for Evaluation, 68, 49-70. https://doi.org/10.1002/ev.1019

Scriven, M. (1998a). Minimalist theory: The least theory that practice requires. American Journal of Evaluation, 19(1). https://doi.org/10.1016/S1098-2140(99)80180-5

Scriven, M. (1998b). The new science of evaluation. Scandinavian Journal of Social Welfare, 7, 79-89. https://doi.org/10.1111/j.1468-2397.1998.tb00206.x

Scriven, M. (1999). The fine line between evaluation and explanation. Research on Social Work Practice, 9(4), 521-524. https://doi.org/10.1177/104973159900900407

Scriven, M. (2000). Evaluation ideologies. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models. Boston: Kluwer Academic Publishers.

Scriven, M. (2005). Can we infer causation from cross-sectional data? National Academy of Sciences.

Scriven, M. (2007). Key evaluation checklist [Online]. Retrieved July 10, 2008, from http://www.wmich.edu/evalctr/checklists

Scriven, M., & Coryn, C. L. S. (2008). The logic of research evaluation. New Directions for Evaluation, 118, 89-105. https://doi.org/10.1002/ev.263

Smith, J. K. (1983). Quantitative versus interpretive: The problem of conducting social inquiry. In E. R. House (Ed.), Philosophy of evaluation (New Directions for Program Evaluation, No. 19, pp. 75-82). San Francisco: Jossey-Bass. https://doi.org/10.1002/ev.1343

Stufflebeam, D. L. (2001). Evaluation checklists: Practical tools for guiding and judging evaluations. American Journal of Evaluation, 22(1), 71-79. https://doi.org/10.1177/109821400102200107

Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. San Francisco: John Wiley & Sons, Inc.

Tyler, R. W. (1942). General statement on evaluation. Journal of Educational Research, 35(7), 492-501. https://doi.org/10.1080/00220671.1942.10881106

Tyler, R. W. (1966). The objectives and plans for a national assessment of educational progress. Journal of Educational Measurement, 3(1), 1-10. https://doi.org/10.1111/j.1745-3984.1966.tb00857.x

Tyler, R. W. (1991). General statement of program evaluation. In M. W. McLaughlin & D. C. Phillips (Eds.), Evaluation and education: At quarter century. Chicago: The National Society for the Study of Education.

U.S. Department of Education. (2007). Report of the Academic Competitiveness Council.

Wallace, B. A. (1997). An understanding of the role of socio-cultural learning: Using the past to maximize potential for future academic success. Paper presented at the Annual Meeting of the Mid-South Educational Research Association, Memphis, TN, November 13.

Woodley, M., & Kamin, S. N. (2007). Programming studio: A course for improving programming skills in undergraduates. Proceedings of the Technical Symposium on Computer Science Education (SIGCSE 2007), Covington, KY, 531-535. https://doi.org/10.1145/1227310.1227490