Sunday, December 11, 2011

Fiddling with Fidelity?

I'm a huge fan of bread pudding.

First, I like the way it tastes. Second, I love the idea that this particular dessert gives me something useful to do with stale bread, other than throwing it out or tossing it to the birds. A couple of months ago, I gathered from the recesses of the fridge all of the last bits of lonely bread items (heels, an errant hot dog bun, a forgotten hamburger bun, a pair of deflated-looking biscuits) and whipped up a batch of bread pudding. As it baked, my mouth watered. I couldn't wait to sit down to a nice big slice with my coffee after dinner that evening.


Imagine my disappointment when I took the first bite, and instead of savoring its cinnamon-y sweet flavor, I spit it out and said "Eeeeewwwww!"


Gallantly, my husband nibbled, then looked at me. "What the heck is wrong with this?"


After racking my brain--what had I done differently?--I realized I'd forgotten to add the required dose of sugar. I had not been faithful to the recipe. I had not been accurate in the details of concocting what I thought would be a pleasant treat to enjoy at the end of the day.


I had fiddled with fidelity.


Right now you might be wondering: what is fidelity and what does bread pudding have to do with program evaluation?

Fidelity is an integral part of Process Evaluation: the who, what, where, when, why and how of a program. Webster's defines fidelity as (a) the quality of being faithful and (b) accuracy in details (www.merriam-webster.com/dictionary/fidelity). In theory, if you deliver a research-based program as designed, your results will mirror those of the developer. Mess with fidelity, fiddle with a component here and there, and your end results--outcomes--may fall as flat and yucky as my bad batch of bread pudding.

Key Process Questions to Ask Before and During Program Implementation

Before and during program implementation, you and your evaluator should routinely ask these key process evaluation questions related to fidelity:

Are we following the grant management plan? The grant management plan should identify (a simple way to track these elements is sketched after the list):


  • Who you serve (population)

  • What you deliver (curriculum, interventions, training, etc.)

  • Where you deliver (schools, communities, neighborhoods, or regions)

  • When you deliver services, and at what dosage (frequency and duration)

  • Why you deliver the service (to change behavior, attitudes, skills, knowledge) and

  • How you intend to effect the proposed change (ideally, using a research-proven strategy, intervention, or activity designed to promote desired change).
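
One simple way to keep these elements in view is to record the plan as structured data and compare it against actual delivery at each review. Below is a minimal sketch in Python; all field names and values are hypothetical, invented purely for illustration.

    # A minimal sketch, with hypothetical values, of capturing the grant
    # management plan's who/what/where/when/why/how as structured data so
    # planned vs. actual delivery can be compared routinely.
    plan = {
        "who":   "7th-8th grade females",
        "what":  "research-based health curriculum",
        "where": "after-school program, middle school campus",
        "when":  "weekly 60-minute sessions for 12 weeks",
        "why":   "improve health knowledge, aspirations, and self-esteem",
        "how":   "research-proven small-group intervention",
    }

    actual = dict(plan, who="5th grade females")  # the population drifted post-award

    # Flag every element of the plan that drifted during implementation.
    for key, intended in plan.items():
        if actual[key] != intended:
            print(f"Fidelity drift in '{key}': planned '{intended}', actual '{actual[key]}'")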

If you discover you are serving a different population than planned, you may have to go back to the drawing board. Stuff happens; all is not lost. However, you cannot blindly substitute populations and expect to get the same results.


Years ago we evaluated an after-school program designed specifically for middle school females. Post-award and over time, the grantee learned seventh and eighth grade females had better things to do with their time (in the girls' opinion) than to receive information about female health issues, academic aspirations, and self-esteem. However, 5th grade girls (and their parents) were champing at the bit to gain entry to the program. To boost attendance, the grantee eagerly accepted them. This set the program timeline back as adaptations were made; however, by making the necessary developmental changes to the program activities to accommodate a different population, the grantee was able to produce results in keeping with the original application and project goals.


Are we meeting the developer's model? When you select a particular research-based program, you presumably do so to achieve positive change in the participant behaviors you wish to impact. The key factors you and your evaluator will take into consideration include (but are not limited to):



  • Number of sessions, and frequency and duration of sessions (dosage)

  • Ordering of sessions (must the program be delivered sequentially, or can it be delivered out of sequence with the same results?)

  • Use of handouts, worksheets, and team projects

  • Facilitator training and access to materials

  • Facilitator commitment to the program

  • Delivery environment and setting

  • Appropriateness of materials to the target population (developmental, gender, or cultural fit)

Most research-based programs are just that: field-tested with a variety of populations in varying doses (frequency and duration of services) to determine what works, and what doesn't. When a research-based program is delivered with less frequency and duration than the developer prescribes, or if important topics are left out or irrelevant information added, chances are the program will not produce the desired results.
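
To make the dosage point concrete, here is one simple way to quantify delivered dosage against the prescribed model (our illustration with invented numbers, not a formula from any particular developer):

    # Hypothetical example: the developer prescribes 12 weekly 60-minute
    # sessions, but the site delivers only 8 sessions of 45 minutes each.
    prescribed_minutes = 12 * 60   # 720 minutes of intended exposure
    delivered_minutes = 8 * 45     # 360 minutes actually delivered

    dosage_fidelity = delivered_minutes / prescribed_minutes
    print(f"Dosage fidelity: {dosage_fidelity:.0%}")  # 50% -- half the intended dose

A site delivering half the intended dose should not expect the developer's published outcomes.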


Several years ago, one of forty schools participating in a very large grant routinely failed to produce results. Using key informant interviews, we determined that the site facilitators didn't like the information contained in the developer's drug prevention component. So, they created their own materials, which proved to be superficial and opinion- rather than fact-based. As a result, students could not correctly answer knowledge questions. And lack of knowledge negatively affected student responses to attitudinal survey items. Essentially, these facilitators changed an important ingredient of a well-researched and proven program, and in doing so, rendered it ineffective. Kind of like forgetting to add the sugar to the bread pudding--or worse, substituting powdered or brown sugar for white granulated.


There are times when adjusting or adapting a management plan, and even a research-based program, is warranted. We'll address those situations in a future blog. The important thing to remember for now is to do everything in your power to stay faithful to your plan and to the developer's model. Anything you can do to avoid fiddling with fidelity increases the likelihood you will achieve program outcomes and meet initiative goals!





Monday, October 3, 2011

7 Qualities of an Effective Universal Program Design

Whether you’re designing a series of program interventions as part of grant funding or developing a single program, it is important to consider the qualities of effective universal program design before you implement the program with a broad-based audience.

A universal program is one delivered to general members of a population. These programs address ‘universal’ problems commonly occurring among certain populations (such as drinking and driving among high school students or bullying among middle school students). We know not all high school students drink and drive, and not all middle school students bully. However, an estimated proportion of these populations will do just that if they do not receive prevention programs that help them develop the knowledge, attitudes, skills and abilities necessary to avoid these or other harmful risk behaviors.

Below, we’ve listed what we believe to be the best qualities of an effective universal program design.

Quality 1: You’ve defined the program goals.

While the research community is getting better at predicting who might develop risk behaviors (thus permitting the development of ‘targeted interventions’), it is far simpler and less expensive to direct programs at students ‘universally’.

Goals of universal programs aim primarily to prevent a problem behavior from developing in the first place. However, given the ages at which some youth become involved in risk behaviors, universal program designers cannot ignore the fact that some kids may be experimenting with certain harmful activities while others may be actively involved in them. Therefore, the goals of an effective universal program should include preventing, reducing, and/or eliminating the identified problem behavior.

Quality 2: You’ve identified the problem and the pervasiveness of the problem.

What is the problem your program addresses? And is it a real problem? Often the problems we ‘perceive’ turn out to be real problems—but you have to review the statistics to assess whether the problem is pervasive enough to address through a universal program.

Statistical sources include the Centers for Disease Control and Prevention, your local health department, state departments of Education or Children and Family Services, and local schools and law enforcement. These entities generate publicly accessible databases identifying, among other things, rates of substance abuse, violence, crime, family disruption, and morbidity and mortality associated with the problem behavior.
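
For example, a quick pervasiveness check might compare a local rate against a published baseline. The counts and baseline below are invented purely for illustration:

    # Hypothetical local survey: 142 of 850 high school students report
    # riding with a driver who had been drinking in the past 30 days.
    local_cases, local_n = 142, 850
    local_rate = local_cases / local_n   # roughly 16.7%

    state_baseline = 0.20  # invented comparison rate from a public database
    print(f"Local prevalence: {local_rate:.1%} vs. state baseline: {state_baseline:.0%}")

If the local rate rivals or exceeds the baseline, the problem is likely pervasive enough to justify a universal program.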

Once you’ve determined the problem is real and worth the time, cost and effort of designing and implementing a program, you need to assess the underlying causes and the evidence-based solutions to the problem.

Quality 3: You’ve identified underlying or contributing causes and evidence-based solutions to the problem behavior.

Underlying or contributing causes to a problem behavior include a variety of factors. Ideally, your program will address more than one contributing factor. Using childhood obesity as the problem example, consider some of the factors influencing the condition:


  • Family history of obesity

  • Inactivity

  • Poor coping skills

  • Poverty and poor eating habits

  • Family traditions and culture

Before you can identify an evidence-based solution to the problem behavior, you must identify which factors are most likely associated with the behavior in your population. To find out, conduct a needs assessment, talk to professionals working with the population, and ask others to share data and information. Some of these factors can be verified statistically by the same sources (e.g. the Centers for Disease Control and Prevention) that helped you identify the pervasiveness of the problem.


Next, you need to know what works and what doesn’t. That means reading the literature and consuming everything you can about the problem and its solutions, so that you can structure your program to replicate or improve upon what has worked. For example, in the case of childhood obesity, solutions might include:



  • Access to health assessments to determine medical causes, treatments and other related health issues affecting the child

  • Family education programs: cooking, meal preparation, budgeting, family attitudes

  • Increased opportunities for the child to engage in physical activity and exercise

When you review these solutions, you begin to understand why targeting one factor might not prove effective. If a child engages in increased physical activity and exercise, yet does not first see a medical specialist, you won’t know if the child has other health problems, such as undiagnosed high blood pressure or diabetes, that place the child further at risk. Sending a kid to the gym for aerobic exercise when he has other health problems could cause him, and you, significant difficulty. Or, if a child is otherwise healthy and does increase physical activity, but mom continues to serve foods at home high in fat or calories, chances are the child will not achieve her weight goal.


Quality 4: You’ve chosen a change model to underpin the conceptual framework of your program.


Change is dynamic. It takes time, energy and effort. It is reciprocal. There are setbacks. Above all, it relies on the individual to first form the intention to change and second to decide that the ‘benefits’ of change outweigh the ‘costs’.


Therefore, design your program to incorporate a theoretical change model that takes these and other dynamics into account. You can find basic information on change models in Theory at a Glance: A Guide for Health Promotion Practice (U.S. Department of Health and Human Services), at www.cancer.gov/cancertopics/cancerlibrary/theory.pdf.


Quality 5: You’ve addressed the keys to behavioral change: Knowledge, Attitude, Skills and Abilities.


A knowledge program alone will not generally change behavior. However, it might help an individual shape the intention to change—for example: Now that I know I have a problem, I plan to do something about it. An attitudinal shift may help the individual think differently about their own problem, problems in their families, or problems among their peers—for example: I think my friends and I could have more fun if we didn’t drink at parties. Altered attitudes go a long way toward promoting change—but as a stand-alone, they may not effect an actual behavioral change.



The most effective programs provide individuals with components that increase their knowledge, shape positive attitudes, and imbue participants with skills and abilities to change behavior (or avoid the behavior). Bottom line: if you know you have a problem, if you want to make a change, you won’t get very far without knowing how to make it happen. Effective universal programs commonly provide the tools that permit an individual to put a plan into action.


Quality 6: Your program is age and developmentally appropriate, and therefore meaningful to the population.


Make sure your program is appropriate to the population you serve. If you are working with students, readability is an issue. Work with a school educator to ensure the language you use and the concepts you present can be meaningfully understood by your population. Make sure your topics are relevant to the population: a session on sexually transmitted diseases might be appropriate for middle and high school students, but not for elementary grades. While the consequences of losing a driver’s license might have great impact on high school students, middle school students can’t envision that far into the future (yes, two to three years down the road seems like a lifetime to kids!).


Quality 7: You’ve built in monitoring measures, piloted the program, and made revisions.


Pilot your program before ‘broadcasting’ or disseminating it to a wide population. Use pre-post measures to see if the pilot population achieves the goals and objectives you envisioned. Monitor the implementation through observation. Some things that appear great on paper fall absolutely flat in practice. Be willing to go back to the drawing board to revise your program as necessary.
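
As a sketch of what a simple pre-post check on pilot data might look like, consider the snippet below (assuming the scipy library is available). The scores are hypothetical, and a paired t-test is just one common choice; your evaluator may prefer other methods for your design.

    from scipy import stats

    # Hypothetical knowledge scores for ten pilot participants; each
    # position holds the same participant's pre- and post-program score.
    pre  = [52, 61, 48, 70, 55, 63, 58, 66, 49, 60]
    post = [68, 65, 59, 78, 62, 70, 61, 72, 55, 71]

    # Paired t-test: did mean scores change significantly from pre to post?
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A small p-value suggests the pilot moved scores, but look at the direction and size of the change, not just statistical significance, before scaling up.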

Thursday, September 8, 2011

5 Questions to Ask Before Hiring an Evaluator

Making the decision to hire an evaluator is a big step. Almost as big as deciding to get married! After all, you and your evaluator will be glued at the hip for an extended period of time—at least a year, if not for the full term of your project: three to five years, give or take a no-cost extension or two! Here are five minimum questions you should ask a prospective evaluator before making your final decision.

Question 1: What are your credentials?

Credentials are qualifications and consist of:

• Academic education, degrees and professional training

• Years of experience as an evaluator

• Experience with programs and populations similar to yours

• Experience with various forms of evaluation design and statistical analysis

To answer your question, your evaluation candidate should be able to provide evidence of his/her credentials, including but not limited to:

• Curriculum Vitae (CV) or Summary of Qualifications

• Description of Sample Projects (discipline, population, evaluation design and methods of analyses employed)

• One or two sample reports or published articles authored by the evaluator

Question 2: How familiar are you with the population and community we serve?

Many evaluators work across the nation and internationally. Just because an evaluator is not a member of your community or neighborhood doesn’t mean he/she can’t effectively serve your project, especially using technology. However, evaluators should know something about you, your organization and community, the population, and yes, even the policies and culture of your geographic service area. At interview, he/she should be able to show some familiarity with:

• Your organization, the community where you are located, and the population you serve

• Basic demographics of your population, for example: gender, age, developmental age, race/ethnicity, economic levels, health conditions, or languages spoken

• Recent policy or cultural issues that could negatively impact or positively benefit your project

Question 3: Are you willing to train our staff on evaluation as part of your services?

It is no secret that project personnel who have never worked with an evaluator are often afraid of finding themselves or their projects 'under the microscope'. They might fear they will lose their positions if an evaluator thinks they aren’t doing a good enough job, or that their workload and paperwork will increase or materially change. Some believe evaluation is a waste of much-needed resources and that the dedicated budget should go to services and constituent needs.

More importantly, not all project personnel understand the what, why, how, and how-to of program evaluation, its benefits, its methods or its terminology. Successful projects blend program with evaluation—the two must work hand-in-hand. So, find out if your prospective evaluator will:

• Train staff on the purposes and components of your project evaluation

• Take time to show them how each part of the program fits with the various evaluation components

• Explain how the data will be collected, and how it will be used

• Discuss the benefits of evaluation for your project as well as your organization. Well-leveraged evaluation results can grow your organization, expand your service capacity, increase your organizational capability, and increase your funding success!

• Provide on-going technical assistance

Question 4: Are you registered with an Institutional Review Board?

While the Secretary of Health and Human Services (HHS), in conjunction with the Office of Science and Technology Policy (OSTP), is considering revisions to the rules governing Participant Protection and Confidentiality in some areas affecting evaluation (American Evaluation Association, info@eval.org, September 6, 2011), it is important to understand your project may be subject to review and approval by an Institutional Review Board (IRB). This is particularly true if your project serves ‘vulnerable populations’ such as minor children, minorities, and individuals involved with the criminal justice system, among others.

Obtaining IRB approval can be a labor-intensive and, at times, expensive process. Find out if your prospective evaluator is currently registered with an IRB. If not, ask how he/she would go about securing approval for your project.

Question 5: How do you structure your fees?

Whether your prospective evaluator is an individual, a member of a for-profit or non-profit corporation, or a university faculty member, he or she will expect payment for the services provided. Some evaluators take projects on a flat-fee basis, others charge by the hour, and still others charge based on ‘deliverables’. You also want to know when and how you will pay your evaluator. Some prefer to be paid monthly, others at the time of delivery of a project deliverable, and still others ask for ‘retainers’ paid up front, followed by remaining payments structured over the balance of the contract period (e.g. quarterly).
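
To illustrate, with invented numbers, how a retainer-plus-installments structure might play out over a one-year contract:

    # Hypothetical one-year evaluation contract: $24,000 flat fee, with a
    # 25% retainer at signing and the balance in four equal quarterly payments.
    flat_fee = 24_000
    retainer = flat_fee * 0.25              # $6,000 due up front
    quarterly = (flat_fee - retainer) / 4   # $4,500 per quarter

    print(f"Retainer: ${retainer:,.0f}, then 4 quarterly payments of ${quarterly:,.0f}")

Whatever the structure, get the payment schedule in writing before the contract is signed.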


Consider these questions. Take time to interview your prospective evaluator. Get to know him or her and make sure the two of you 'fit'. If you do, it will be a match made in heaven!