The Logic of Summative Confidence
Abstract
The constraints of conducting evaluations in real-world settings often necessitate the implementation of less-than-ideal designs. Unfortunately, the standard method for estimating the precision of a result, the confidence interval (CI), cannot be used for evaluative conclusions that are derived from multiple indicators, measures, and data sources. Moreover, CIs ignore the impact of sampling and measurement error. Considering that the vast majority of evaluative conclusions are based on numerous criteria of merit that often are poorly measured, a significant gap exists with respect to how one can estimate the CI of an evaluative conclusion. The purpose of this paper is (1) to heighten reader awareness of the consequences of utilizing a weak evaluation design and (2) to introduce the need for the development of a methodology that can be used to characterize the precision of an evaluative conclusion.
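For readers unfamiliar with the "standard method" the abstract critiques, the following is a minimal sketch (not from the paper) of a conventional 95% CI for a sample mean under the normal approximation. The sample values and the helper `mean_ci` are illustrative assumptions; note that this interval reflects only sampling variability in a single measure and does not account for measurement error or the aggregation of multiple indicators, which is precisely the gap the paper identifies.

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """Conventional 95% confidence interval for a sample mean
    (normal approximation): mean +/- z * standard error."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

# Hypothetical scores on a single criterion of merit
scores = [72, 68, 75, 71, 69, 74, 70, 73]
lo, hi = mean_ci(scores)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```

This interval would understate the true uncertainty of an evaluative conclusion built from several such poorly measured criteria, since each criterion contributes its own measurement error.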
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org.