Question-Driven Methods or Method-Driven Questions? How We Limit What We Learn by Limiting What We Ask


E. Jane Davidson
https://orcid.org/0009-0006-3360-8241

Abstract

The “methodologically manic-obsessive” evaluation profession and the metrics- and measures-obsessed lay users of evidence have managed to seriously limit the value of what we learn from evaluation. Evaluation questions asked at the front end are limited by the askers’ narrow understanding of what is possible methodologically at the back end. This, alongside the political and psychological forces working against real evaluation, is a major driver of the single-narrative thinking that pervades the formulation and evaluation of national and local government policies and initiatives. This paper provides practical suggestions for asking the big-picture questions that really need to be asked, and shows how real evaluation can step up to the plate, methodologically and otherwise.


Article Details

How to Cite
Davidson, E. J. (2015). Question-Driven Methods or Method-Driven Questions? How We Limit What We Learn by Limiting What We Ask. Journal of MultiDisciplinary Evaluation, 11(24), i-x. https://doi.org/10.56645/jmde.v11i24.414
Section: Editorial
Author Biography

E. Jane Davidson, Real Evaluation Ltd

Dr. E. Jane Davidson is an internationally recognized evaluation specialist and thought leader, best known for developing evaluation rubrics as a methodology for drawing conclusions about quality and value. She has also made significant contributions in the areas of causal inference for qualitative and mixed methods, and in synthesis methodologies for evaluation.

