Sunday, December 11, 2011

Fiddling with Fidelity?

I'm a huge fan of bread pudding.

First, I like the way it tastes. Second, I love the idea that this particular dessert gives me something useful to do with stale bread, other than throwing it out or tossing it to the birds. A couple of months ago, I gathered from the recesses of the fridge all of the last bits of lonely bread items (heels, an errant hot dog bun, a forgotten hamburger bun, a pair of deflated-looking biscuits) and whipped up a batch of bread pudding. As it baked, my mouth watered. I couldn't wait to sit down to a nice big slice with my coffee after dinner that evening.


Imagine my disappointment when I took the first bite, and instead of savoring its cinnamon-y sweet flavor, I spit it out and said "Eeeeewwwww!"


Gallantly, my husband nibbled, then looked at me. "What the heck is wrong with this?"


After racking my brain--what had I done differently?--I realized I'd forgotten to add the required dose of sugar. I had not been faithful to the recipe. I had not been accurate in the details of concocting what I thought would be a pleasant treat to enjoy at the end of the day.


I had fiddled with fidelity.


Right now you might be wondering: what is fidelity and what does bread pudding have to do with program evaluation?

Fidelity is an integral part of Process Evaluation: the who, what, where, when, why and how of a program. Webster's defines fidelity as (a) the quality of being faithful and (b) accuracy in details (www.merriam-webster.com/dictionary/fidelity). In theory, if you deliver a research-based program as designed, your results will mirror those of the developer. Mess with fidelity, fiddle with a component here and there: your end results--outcomes--may fall as flat and yucky as my bad batch of bread pudding.

Key Process Questions to Ask Before and During Program Implementation

Before and during program implementation, you and your evaluator should routinely ask these key process evaluation questions related to fidelity:

Are we following the grant management plan? The grant management plan should identify:


  • Who you serve (population)

  • What you deliver (curriculum, interventions, training, etc.)

  • Where you deliver (schools, communities, neighborhoods, or regions)

  • When you deliver it, and at what dosage (frequency and duration of services)

  • Why you deliver the service (to change behavior, attitudes, skills, or knowledge), and

  • How you intend to effect the proposed change (ideally, using a research-proven strategy, intervention, or activity designed to promote desired change).

If you discover you are serving a different population than planned, you may have to go back to the drawing board. Stuff happens; all is not lost. However, you cannot blindly substitute populations and expect to get the same results.


Years ago we evaluated an after-school program designed specifically for middle school females. Post-award and over time, the grantee learned that seventh and eighth grade females had better things to do with their time (in the girls' opinion) than to receive information about female health issues, academic aspirations, and self-esteem. However, fifth grade girls (and their parents) were chomping at the bit to gain entry to the program. To boost attendance, the grantee eagerly accepted them. This set the program timeline back as adaptations were made; however, by making the necessary developmental changes to the program activities to accommodate a different population, the grantee was able to produce results in keeping with the original application and project goals.


Are we meeting the developer's model? When you select a particular research-based program, you do so presumably to achieve outcomes of positive change related to the participant behaviors you wish to impact. The key factors you and your evaluator will take into consideration include (but are not limited to):



  • Number of sessions, and frequency and duration of sessions (dosage)

  • Ordering of sessions (must the program be delivered sequentially, or can it be delivered out of sequence with the same results?)

  • Use of handouts, worksheets, and team projects

  • Facilitator training and access to materials

  • Facilitator commitment to the program

  • Delivery environment and setting

  • Appropriateness of materials to the target population (developmental, gender, or cultural)

Most research-based programs are just that: field-tested with a variety of populations in varying doses (frequency and duration of services) to determine what works, and what doesn't. When a research-based program is delivered with less frequency and duration than the developer prescribes, or if important topics are left out or irrelevant information added, chances are the program will not produce the desired results.


Several years ago, one of forty schools participating in a very large grant routinely failed to produce results. Using Key Informant interviews, we determined that the site facilitators didn't like the information contained in the developer's drug prevention component. So, they created their own materials, which proved to be superficial and opinion- rather than fact-based. As a result, students could not correctly answer knowledge questions. And that lack of knowledge negatively affected student responses to attitudinal survey items. Essentially, these facilitators changed an important ingredient of a well-researched and proven program, and in doing so, rendered it ineffective. Kind of like forgetting to add the sugar to the bread pudding--or worse, substituting powdered or brown sugar for white granulated.


There are times when adjusting or adapting a management plan, and even a research-based program, is warranted. We'll address those situations in a future blog. The important thing to remember for now is to do everything in your power to stay faithful to your plan and to the developer's model. Anything you can do to avoid fiddling with fidelity increases the likelihood you will achieve program outcomes and meet initiative goals!