Are All Biases Bad? Collaborative Grounded Theory in Developmental Evaluation of Education Policy
Abstract
Background: Using two researchers as independent instruments for interpretation in education policy evaluation, this study applies a collaborative grounded theory approach to qualitative data analysis and theory generation.
Purpose: This study argues that varied perspectives should be a critical component in the methodological and analytical choices of education research, especially when the sought-after outcome is a deeper understanding of the impact, both positive and negative, of an education program or policy. Rather than using one researcher to confirm the reliability of the other, the study explores the outcome of drawing on the positional reflexivity of two researchers, each with a distinct perspective, as a potential strength in co-generating themes and theory for the evaluation of complex policies or programs.
Setting: The data for this analysis originated from interviews of education leaders (n = 13) from two states with contrasting approaches to teacher evaluation: Kentucky and California.
Intervention: NA
Research Design: Qualitative developmental evaluation
Data Collection and Analysis: Semi-structured interviews and grounded theory coding
Findings: Results suggest that more robust theory and analysis may result from independent thematic development followed by converged theory generation when working in a research team, as opposed to early application of inter-rater reliability. Each researcher's interpretation was, in part, reflexive of the self: their own biased perspective and prior experience. When merged, their joint interpretation may have unearthed greater dimensionality. Findings from this study can inform future strategies for evaluating qualitative research data, especially within a developmental evaluation approach aimed at understanding system complexity in education policy and practice.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Copyright and Permissions
Authors retain full copyright for articles published in JMDE. JMDE publishes under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY - NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes, provided that the original authors and source are credited accurately and appropriately. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org