Evaluation in the Context of the Government Market Place: Implications for the Evaluation of Research

Connie K. Della-Piana
Gabriel M. Della-Piana

Abstract

The evaluation community has concentrated on examining and explicating the implications of the choice of methods for evaluating federal programs, as described in the New Directions for Evaluation volume edited by Julnes and Rog (2007), which places the policy debate in historical and contemporary contexts. In that volume and elsewhere, several mechanisms are described for supporting and/or conducting program evaluation at the federal level. In the Julnes and Rog volume, Chelimsky (2007) describes evaluation activities conducted within the federal government by the Government Accountability Office (GAO). Both grants and contracts supported the work of Yin (Yin & Davis, 2007) in the evaluation of large comprehensive reforms in K-12 science and mathematics education. Other evaluation activities come under the authority of the Office of the Inspector General, which conducts performance audits of government programs that draw on program evaluation and its methods (see the Yellow Book, http://www.gao.gov/govaud/ybk01.htm).

Article Details

How to Cite
Della-Piana, C. K., & Della-Piana, G. M. (2007). Evaluation in the Context of the Government Market Place: Implications for the Evaluation of Research. Journal of MultiDisciplinary Evaluation, 4(8), 79–91. https://doi.org/10.56645/jmde.v4i8.33
Section
Reforming the Evaluation of Research

References

Arnold, E., & Balazs, K. (1998). Methods in the evaluation of publicly funded basic research: A review for OECD. Brighton, UK: Technopolis Ltd.

Biderman, A. D., & Sharp, L. M. (1972). Evaluation research: Procurement and method. Social Science Information, 11(3/4), 141-170. DOI: https://doi.org/10.1177/053901847201100305

Boix-Mansilla, V. (2006). Assessing expert interdisciplinary work at the frontier: An empirical exploration. Research Evaluation, 15(1), 17-29. DOI: https://doi.org/10.3152/147154406781776075

Bozeman, B., Dietz, J. S., & Gaughan, M. (2001). Scientific and technical human capital: An alternative model for research evaluation. International Journal of Technology Management, 22(7/8), 716-740. DOI: https://doi.org/10.1504/IJTM.2001.002988

Chelimsky, E. (2007). Factors influencing the choice of methods in federal evaluation practice. In G. Julnes & D. J. Rog (Eds.), Informing federal policies on evaluation methodology: Building the evidence base for method choice in government sponsored evaluation (pp. 13-34). New Directions for Evaluation (No. 113). San Francisco: Jossey-Bass. DOI: https://doi.org/10.1002/ev.213

Congressional Budget Office. (2005, June). R & D productivity and growth (Background paper). Washington, DC: Author.

Cozzens, S. E. (2002). Research assessment: What's next? Final report on a workshop. Research Evaluation, 11(2), 65-79. DOI: https://doi.org/10.3152/147154402781776925

Cronbach, L. J. (1982). Prudent aspirations for social inquiry. In W. H. Kruskal (Ed.), The social sciences: Their nature and uses (pp. 61-81). Chicago: University of Chicago Press.

Della-Piana, G., & Della-Piana, C. K. (2005, April). Approaches to discovery of how and why evaluations deviate from design and differ from reports. In B. Olds (Chair), Some National Science Foundation responses to difficulties in linking evaluation and program improvement. Symposium conducted at the meeting of the American Educational Research Association, Montreal, Canada.

Dunbar, K. (1999). How scientists build models: In Vivo science as a window on the scientific mind. In L. Magnani, N. Nersessian, & P. Thagard (Eds.), Model-based reasoning in scientific discovery (pp. 89-98). New York: Plenum Press. DOI: https://doi.org/10.1007/978-1-4615-4813-3_6

Feist, G. J., & Gorman, M. E. (1998). The psychology of science: Review and integration of a nascent discipline. Review of General Psychology, 2(1), 3-47. DOI: https://doi.org/10.1037/1089-2680.2.1.3

Gordon, E. W. (1999). Education and justice: A view from the back of the bus. New York: Teachers College Press.

House, E. R. (1997). Evaluation in the government marketplace. Evaluation Practice, 18(1), 37-48. DOI: https://doi.org/10.1016/S0886-1633(97)90006-4

House, E. R., & Howe, K. R. (1999). Values in evaluation and social research. Thousand Oaks, CA: Sage. DOI: https://doi.org/10.4135/9781452243252

Howe, K. R., & Ashcroft, C. (2005). Deliberative democratic evaluation: Successes and limitations of an evaluation of school choice. Teachers College Record, 107(10), 2275-2296. DOI: https://doi.org/10.1177/016146810510701004

Julnes, G., & Rog, D. J. (Eds.). (2007). Informing federal policies on evaluation methodology: Building the evidence base for method choice in government sponsored evaluation. New Directions for Evaluation (No. 113). San Francisco: Jossey-Bass.

Kelly, G. J. (2006). Epistemology and educational research. In J. L. Green, G. Camilli, & P. B. Elmore (Eds.), Handbook of complementary methods in education research. Mahwah, NJ: Lawrence Erlbaum.

Kettl, D. F. (2005). The next government of the United States: Challenges for performance in the 21st century (Transformation of Organization Series). Washington, DC: IBM Center for the Business of Government.

Kettl, D. F. (1993). Sharing power: Public governance and private markets. Washington, DC: The Brookings Institution.

Klahr, D., & Simon, H. A. (1999). Studies of scientific discovery: Complementary approaches and convergent findings. Psychological Bulletin, 125(5), 524-543. DOI: https://doi.org/10.1037/0033-2909.125.5.524

Langfeldt, L. (2006). The policy challenges of peer review: Managing bias, conflict of interests and interdisciplinary assessments. Research Evaluation, 15(1), 31-41. DOI: https://doi.org/10.3152/147154406781776039

Lengwiler, M. (2006). Between charisma and heuristics: Four styles of interdisciplinarity. Science and Public Policy, 33(6), 423-434. DOI: https://doi.org/10.3152/147154306781778821

Maasen, S., & Lieven, O. (2006). Transdisciplinarity: A new mode of governing science? Science and Public Policy, 33(6), 399-410. DOI: https://doi.org/10.3152/147154306781778803

Maasen, S., Lengwiler, M., & Guggenheim, M. (2006). Practices of transdisciplinary research: Close(r) encounters of science and society. Science and Public Policy, 33(6), 394-398. DOI: https://doi.org/10.3152/147154306781778830

Mark, M. M. (2003). Toward an integrative view of the theory and practice of program and policy evaluation. In S. I. Donaldson & M. Scriven, (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 183-204). Mahwah, NJ: Lawrence Erlbaum Associates.

Mark, M. M. (2001). Evaluation's future: Furor, futile, or fertile? American Journal of Evaluation, 22(3), 457-479. DOI: https://doi.org/10.1016/S1098-2140(01)00160-6

National Research Council (2004). Strengthening peer review in federal agencies that support education research. Washington, DC: The National Academies Press.

Perrin, B. (2006). Moving from outputs to outcomes: Practical advice from governments around the world (Managing for Performance and Results Series). Washington, DC: IBM Center for the Business of Government.

Phillips, D. C. (2006). Muddying the waters: The many purposes of educational inquiry. In C. F. Conrad & R. O. Serlin (Eds.), The Sage handbook of research in education. Thousand Oaks, CA: Sage.

Reddy, S. (2005). The role of apparent constraints in normative reasoning: A methodological statement and applications to global justice. The Journal of Ethics, 9, 119-125. DOI: https://doi.org/10.1007/s10892-004-3322-y

Ruegg, R., & Jordan, G. (2007). Overview of evaluation methods for R & D programs: A directory of evaluation methods relevant to technology development programs. Washington, DC: U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy. DOI: https://doi.org/10.2172/1219257

Scriven, M. (2004). Reflections. In M. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 183-195). Thousand Oaks, CA: Sage. DOI: https://doi.org/10.4135/9781412984157.n11

Scriven, M. (2003). Evaluation in the new millennium: The transdisciplinary vision. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social problems: Visions for the new millennium (pp. 19-41). Mahwah, NJ: Lawrence Erlbaum.

Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

Shadish, W. R., & Fuller, S. (1994). The social psychology of science. New York: The Guilford Press.

Taylor, P. (1961). Normative discourse. Englewood Cliffs, NJ: Prentice-Hall.

Weick, K. E., & Sutcliffe, K. M. (2001). Managing the unexpected: Assuring high performance in an age of complexity. San Francisco, CA: Jossey-Bass.