Wednesday, May 15, 2013

The Political Nature of Program Evaluation

Program evaluation is a political process. An evaluator who ignores, avoids, or mismanages the political realities of evaluation limits the effectiveness and usefulness of the process (Fitzpatrick, Sanders, & Worthen, 2010). Ethical complexities wind in and among the more overt political features of evaluation such as financial support, stakeholder allegiance, and social impact. Morris and Cohn (1993) detail several ways in which stakeholders may seek to influence evaluation outcomes, and Fitzpatrick et al. (2010) caution that evaluators also need to be aware of their own potential to taint the evaluative process.

If we accept that evaluation is political (and, therefore, ripe for ethical complication), then we must ask how best to balance the objectivity required in a program evaluation with the political interests of stakeholders. We must ask, “What ethical standards and values need to be emphasized in program evaluation?”

The Program Evaluation Standards (Yarbrough, Shulha, Hopson, & Caruthers, 2011) and the American Evaluation Association’s (AEA) Guiding Principles for Evaluators (American Evaluation Association, 2004) provide a broad, if somewhat obvious, framework for ethical conduct.

Fitzpatrick et al. (2010) are more specific, encouraging evaluators to be both self-reflective about their role in the evaluation process and circumspect about client requests, so as to minimize the potential for bias and ethical compromise: “…the client may be asking for what the client perceives as editing changes, but the evaluator sees as watering down the clarity or strength of the judgments made” (p. 81). And Schweigert (2007) roots evaluator responsibility in the notion of justice – public, procedural, and distributive.

From this we can extract answers to the question “What ethical standards and values need to be emphasized in program evaluation?” 

Ethical standards:
  • Those detailed in the AEA’s and other professionally recognized codes of conduct.

Values:
  • Commitment to truth – what Schweigert (2007) calls the priority of justice
  • Cultural sensitivity
  • Respect (for stakeholders, ourselves, and the evaluation process)

It seems that neither a professional code nor a personal charter can do the whole job. No matter how pointed the professional standards, situational circumstances require evaluators to make interpretations and best guesses (Schweigert, 2007), and those judgments remain subject to bias and ethical compromise. Weiss lays bare any illusion that we are above or beyond the snare of bias and ethical confusion: “You never start from scratch. We pick up the ideas that are congenial to our own perspective. Therefore, people pick up this thought or that interpretation of a research report that fits with what they know or what they want to do” (Weiss & Mark, 2006, p. 480).

I have thought about this a lot over the past few days, returning again and again to Sieber’s (1980) conclusion that “being ethical in program evaluation is a process of growth in understanding, perception, and creative problem-solving ability that respects the interests of individuals and of society” (p. 53). 


References
 
American Evaluation Association. (2004). Guiding principles for evaluators. Retrieved from www.eval.org/Publications/Guiding Principles.asp

Fitzpatrick, J., Sanders, J., & Worthen, B. (2010). Program evaluation: Alternative approaches and practical guidelines (4th ed.). Boston, MA: Pearson.

Morris, M., & Cohn, R. (1993). Program evaluators and ethical challenges: A national survey. Evaluation Review, 17, 621-642.

Schweigert, F. J. (2007). The priority of justice: A framework approach to ethics in program evaluation. Evaluation and Program Planning, 30(4), 394–399.

Sieber, J. E. (1980). Being ethical: Professional and personal decisions in program evaluation. In R. E. Perloff & E. Perloff (Eds.), Values, ethics, and standards in evaluation (New Directions for Program Evaluation, No. 7, pp. 51-61). San Francisco, CA: Jossey-Bass.

Weiss, C. H., & Mark, M. M. (2006). The oral history of evaluation, Part IV: The professional evolution of Carol Weiss. American Journal of Evaluation, 27(4), 474-483.

Thursday, May 2, 2013

Fostering Behavior Change

In Fostering Behavior Change (Tulgan, 2013), Bruce Tulgan offers seven best practices for creating training that increases knowledge uptake and meaningful behavior change. Two-thirds of the way through a Master’s degree in Instructional Design and Technology, I first thought Tulgan’s tips were obvious. Simplistic. After thinking about it quite a bit, I’m sure they are. Why would Tulgan, an established training expert, tell us what we already know? Because it’s true. Because he’s right.

There is no magic to training, and all the cool whiz-bang technology in the world doesn’t change the fact that effective training is a product of sound design and delivery. Tulgan’s tips should seem obvious, because he is reminding instructional designers and trainers of what we already know, yet sometimes fail to execute. We need to leverage needs assessments to align instructional objectives with identifiable skill and knowledge gaps, link instructional content to real life, and deliver content to multiple memory centers. Sticky training offers actionable solutions and learning extensions. Finally, we need to follow up and cultivate support for ongoing learning.

We know this. We need to do it. Every time.

Make a great day,


Reference
 
Tulgan, B. (2013, January/February). Fostering behavior change. Training, 50(1), 9.