Being Blind in a World of Multiple Perspectives: The Evaluator’s Dilemma Between the Hope of Becoming a Team Player and the Fear of Becoming a Critical Friend with No Friends


Michele Tarsilla
https://orcid.org/0000-0002-5062-1935

Abstract

Background: A large number of evaluation theorists have debated the issue of objectivity and bias in evaluation over the last five decades. In particular, the degree to which distance from the evaluand enhances the validity and reliability of evaluation findings has been a prominent topic of discussion.


Purpose: This article has two primary objectives. First, it presents some of the positivist and post-positivist theories on distance that have dominated the evaluation discourse since the late 1960s, while also showing the limitations of their respective assumptions. Second, it describes a more recent evaluation theory on distance that is helping evaluators build rapport with their evaluand more effectively, especially in the case of complex programs involving a large variety of stakeholders.


Setting: Not applicable.


Subjects: Not applicable.


Research Design: Not applicable.


Data Collection and Analysis: The paper is the result of both a desk review and a series of interviews with major evaluation theorists, aimed at comparing and contrasting some of the most relevant ideas on distance in evaluation expressed over the last five decades.


Findings: The author shows that evaluators today still face the dilemma of whether to seek proximity to or distance from the evaluand. However, the author identifies an increasingly popular evaluation approach (hereafter referred to as the pluralist approach, as opposed to the niche approach) that promises to overcome the issue of distance in evaluation more successfully than any earlier theory. The author dismisses the idea of absolute distance advocated by both Scriven and Campbell. In doing so, he also shows that evaluators who are closer to the evaluand, and to the context contiguous to it, tend to have a deeper understanding of the issues at stake and therefore enhance the overall quality of their evaluations. In addition, the author acknowledges that evaluators today have an important new role to play vis-à-vis their evaluand: mediating stakeholders' competing values and agendas for the sake of equity and social justice.


Conclusions: The author concludes that proximity to the evaluand and the integration of multiple perspectives in an evaluation represent two of evaluation's most enriching—rather than detrimental—factors. The author also asserts that a truly participatory approach can effectively coexist with advocacy, so long as evaluators are able to clarify their stances vis-à-vis the social, economic, and political issues associated with their evaluand.



How to Cite
Tarsilla, M. (2010). Being Blind in a World of Multiple Perspectives: The Evaluator’s Dilemma Between the Hope of Becoming a Team Player and the Fear of Becoming a Critical Friend with No Friends. Journal of MultiDisciplinary Evaluation, 6(13), 200–205. https://doi.org/10.56645/jmde.v6i13.257
Section
Ideas to Consider in Evaluation

References

Campbell, D. T., & Stanley, J. C. (1969). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.

Caracelli, V. J. (2006). The evaluator-observer comes to the table: A moment of consequential decision making. American Journal of Evaluation, 27(1), 104-107. https://doi.org/10.1177/1098214005284980

Fetterman, D. M. (1997). Empowerment evaluation: A response to Patton and Scriven. American Journal of Evaluation, 18(3), 253-266. https://doi.org/10.1016/S0886-1633(97)90033-7

Greene, J. C. (1997). Evaluation as advocacy. American Journal of Evaluation, 18(1), 25-35. https://doi.org/10.1177/109821409701800103

Greene, J. C. (2004). The educative evaluator: An interpretation of Lee J. Cronbach's vision of evaluation. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists' views and influences (pp. 169-181). Thousand Oaks, CA: Sage. https://doi.org/10.4135/9781412984157.n10

Hawkins, R. (1991). Is social validity what we are interested in? Argument for a functional approach. Journal of Applied Behavior Analysis, 24(2), 205-213. https://doi.org/10.1901/jaba.1991.24-205

Mertens, D. M. (2007a). Transformative considerations: Inclusion and social justice. American Journal of Evaluation, 28(1), 86-90. https://doi.org/10.1177/1098214006298058

Mertens, D. M. (2007b). Transformative paradigm: Mixed methods and social justice. Journal of Mixed Methods Research, 1(3), 212-225. https://doi.org/10.1177/1558689807302811

Mertens, D. M. (2008). Stakeholder representation in culturally complex communities: Insights from the transformative paradigm. In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation (pp. 41-60). New York, NY: Guilford.

Mertens, D. M. (2009). Transformative research and evaluation. New York, NY: Guilford.

Patton, M. Q. (1997). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.

Rosas, S. R. (2006). Nonparticipant to participant: A methodological perspective on evaluator ethics. American Journal of Evaluation, 27(1), 98-103. https://doi.org/10.1177/1098214005284979

Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne, & M. Scriven (Eds.), Perspectives of curriculum evaluation (pp. 39-83). Chicago, IL: Rand McNally.

Scriven, M. (1976). Evaluation bias and its control. In G. V. Glass (Ed.), Evaluation studies review annual (Vol. 1, pp. 101-118). Beverly Hills, CA: Sage.

Scriven, M. (1993). Hard-won lessons in program evaluation. New Directions for Program Evaluation, No. 58. San Francisco, CA: Jossey-Bass. https://doi.org/10.1002/ev.1647

Scriven, M. (2000). Evaluation ideologies. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models: Viewpoints on educational and human services evaluation (pp. 249-278). Boston, MA: Kluwer. https://doi.org/10.1007/0-306-47559-6_15

Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.

Stake, R. E. (2004). Standards-based and responsive evaluation. Thousand Oaks, CA: Sage. https://doi.org/10.4135/9781412985932

Stake, R. E. (2006). Multiple case study analysis. New York, NY: Guilford.

Weiss, C. H. (Ed.). (1977). Using social research in public policy making. Lexington, MA: Lexington Books.

Weiss, C. H. (1996). Excerpts from evaluation research: Methods of assessing program effectiveness. American Journal of Evaluation, 17(2), 173-175. https://doi.org/10.1016/S0886-1633(96)90023-9

Wholey, J. S. (1983). Evaluation and effective public management. Boston, MA: Little, Brown.

Wholey, J. S., & Newcomer, K. E. (1989). Improving government performance: Evaluation strategies for strengthening public agencies and programs. San Francisco, CA: Jossey-Bass.