The Logic of Summative Confidence

P. Cristian Gugiu

Abstract

The constraints of conducting evaluations in real-world settings often necessitate the implementation of less-than-ideal designs. Unfortunately, the standard method for estimating the precision of a result (i.e., the confidence interval [CI]) cannot be applied to evaluative conclusions that are derived from multiple indicators, measures, and data sources. Moreover, conventional CIs ignore the impact of measurement error on top of sampling error. Considering that the vast majority of evaluative conclusions are based on numerous criteria of merit that are often poorly measured, a significant gap exists with respect to how one can estimate the CI of an evaluative conclusion. The purpose of this paper is (1) to heighten reader awareness of the consequences of utilizing a weak evaluation design and (2) to introduce the need for the development of a methodology that can be used to characterize the precision of an evaluative conclusion.
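The abstract's point about measurement error can be illustrated with a small sketch. This is not the author's proposed methodology; it is the standard classical-test-theory adjustment (per textbooks such as Crocker & Algina, 1986), which widens an interval around an observed score using the standard error of measurement, SEM = SD × √(1 − r), where r is an assumed reliability coefficient. The scores and reliabilities below are hypothetical.

```python
import math

def sem_ci(observed_score, sd, reliability, z=1.96):
    """Approximate 95% CI for a true score using the standard error
    of measurement from classical test theory: SEM = SD * sqrt(1 - r)."""
    sem = sd * math.sqrt(1.0 - reliability)
    return (observed_score - z * sem, observed_score + z * sem)

# Hypothetical example: observed score 100, SD 15 (an IQ-like scale).
# As reliability drops, the interval widens, precision the ordinary
# sampling-error CI never reflects.
for r in (0.95, 0.80, 0.60):
    lo, hi = sem_ci(100, 15, r)
    print(f"reliability={r:.2f}: CI = ({lo:.1f}, {hi:.1f})")
```

Even a modest drop in reliability widens the interval considerably, which is one way to see why conclusions built on poorly measured criteria of merit carry more uncertainty than a conventional CI suggests.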

Article Details

How to Cite
Gugiu, P. C. (2007). The Logic of Summative Confidence. Journal of MultiDisciplinary Evaluation, 4(8), 1–15. https://doi.org/10.56645/jmde.v4i8.64
Section
Research Articles
