Saturday, April 28, 2012

Another AEA365 Bloggers Series

Believe it or not, evaluators BLOG. Yes, and some of us Facebook and Tweet!  The American Evaluation Association's Executive Director, Susan Kistler, embarked on a fantabulous project this year: highlighting evaluators who blog! 

I joined the ranks of those who blog at the beginning of 2011.  With the economy tight and resources limited, I felt a blog was one way to bring a little in-kind information and education to our clients and others in the fields of evaluation, not-for-profit organizations, and public entities working in health-related, risk-reduction programs and behavioral-health models of change.

Maintaining a blog is not easy--I am more often than not swept away by my 'must do' list.  And face it: a blog is just a blog!  So, I am not up to par in terms of cranking out the words as often as I should.  However, because the American Evaluation Association (AEA) decided to feature those of us who slog as well as blog, readers of my blog have the opportunity to read the informative and eye-opening words, thoughts, strategies and techniques belonging to other evaluators who also are AEA members.

Below, I link you to my AEA piece, published April 27.  Take time to read others in the series (as well as the non-blogger pieces--I've learned so much from my colleagues and I'm sure you will, too!).  I've also cut and pasted the published piece below.  Enjoy!  


Bloggers Series: Catherine (Brehm) Rain on The Evaluation Forum

My name is Catherine (Brehm) Rain, Vice President of Rain & Brehm Consulting Group, Inc., an independent evaluation and consulting firm located in Rockledge, Florida. I blog at The Evaluation Forum.

Rad Resource – The Evaluation Forum: New to our website, The Evaluation Forum focuses on the why and wherefore of evaluation of health promotion and health-related, risk-reduction programming. The blog targets program personnel with some or no background in the principles, practices, purposes and benefits of program evaluation. Content is basic, and covers issues such as hiring an evaluator, program design, and fidelity, with more topics to come. We post new content monthly and expect to increase the frequency of postings this year.

Hot Tips – favorite posts: We added our blog in September 2011. Thus far, my favorite posts include

  • 12/11/2011 Fiddling with Fidelity? Fidelity means, in a word: faithfulness. As a former project director and a current evaluation team member specializing in Process Evaluation, I liken adherence to a grant management plan or a program design to following a recipe for bread pudding. Yes, you can tweak it here and there, if you know what you’re doing. If you don’t, you might end up as I did, with a batch of botched pudding!

Lessons Learned – why I blog: I blog because I am first and foremost a writer—I write two other blogs unrelated to evaluation. Chiefly and with relevance to The Evaluation Forum: I blog to bring basic information to clients and program personnel so that they (a) grow their knowledge about evaluation; (b) apply evaluation principles to program design and implementation; and in so doing (c) maximize outcomes.

Lessons Learned: You have to commit to a blog in the same way you do to a subscription newsletter: post often, whether you have time for it or not. It is an adjustment. It also takes time to develop a following—if you want one. Linking posts to our Facebook page has added a ‘friendly community’ factor, as well. Sometimes, folks are a little shy of evaluation and its impact on their organization or project. Finding us online or on Facebook with helpful hints or solid information they can use meaningfully may be the first step we can take as professionals to help our clients and community succeed! (It’s also nice to be ‘liked’!)

This winter, we’re running a series highlighting evaluators who blog. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

AEA365: Catherine Rain on Going Public: Communicating Findings like a Pro!

The American Evaluation Association (AEA) sponsors a tip-a-day series, written by established and emerging evaluators.

We enjoyed the distinct pleasure of having an article featured on March 2, 2012, titled Going Public: Communicating Findings like a Pro! I've cut and pasted the piece into this blog.

However, I also share the link to AEA365, so that you can read all of the great posts and learn from others working in the field of evaluation! As well, you can click the links embedded in the post to access the resources described under Rad Resources. Enjoy!

Catherine Rain on Going Public: Communicating Findings like a Pro!

I’m Catherine Rain of Rain and Brehm Consulting Group, Inc., a research and evaluation firm in Rockledge, Florida. Ever look at an evaluation project overflowing with new learning and fantastic results and wonder: What now? What can we do with this valuable information beyond writing a report or publishing in a peer-reviewed journal?

How about taking it public with a communications strategy, targeting various segments of the community and the field?

Such campaigns:
  • Educate policy-makers and decision-makers, as well as beneficiaries of services;
  • Leverage data, bringing recognition and benefactors to the program or service;
  • Positively re-frame attitudes about problems and risks affecting the community; and
  • Demonstrate accountability and transparency among tax-supported providers of programs and services.
Okay: sounds like you might need to hire an expensive marketing firm, right? Wrong! With a couple of no-cost tools, you can become the marketing pro!

Lesson Learned: Planning a communications strategy is similar to planning an evaluation. Carrying it forward, beginning to end, resembles the process of designing, implementing and evaluating a program or service. Communicating with various ‘markets’ requires you to shape messages in the same way you tailor those directed at an evaluated population: with relevance, in a language they understand, and with sensitivity to their culture, values and traditions!

Rad Resource: The Pink Book, otherwise known by its longer name, Making Health Communication Programs Work, developed by the National Cancer Institute (NCI) at the National Institutes of Health (NIH), provides you with everything you need to know about planning, designing, implementing and evaluating a communication strategy. While the book addresses health communications, the strategies easily transfer to any evaluated discipline. Note: hard copies of the book are no longer available; however, you can download or print the document in HTML and PDF versions (with the option to print a single page or the whole document).

Lesson Learned: As with evaluation and program implementation: Planning remains the key to success!

Rad Resource: Under contract, our firm authored the publication Translating Evaluation Results to Published Documents, which summarizes Stage 1—Planning and Strategy Development—contained in the Pink Book. We analogize the approach to the steps taken when designing and developing a program management or evaluation plan.

Hot Tip: There is a time and resource cost to producing an effective communications strategy.

Rad Resource: Funders often require evaluators to disseminate results and lessons learned. Ask if you can fund the communications strategy as a dissemination budget line item.

Hot Tip: According to our colleague and AEA Executive Director Susan Kistler, evaluation roles are changing! We need to redefine our approaches with clients in meaningful ways. What better than to extend our communication avenues?

Rad Resource: Susan Kistler's The Future of Evaluation: 5 Predictions (building on 10 others!).

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Friday, February 3, 2012

AEA365 Tip-A-Day Series

The American Evaluation Association (AEA) sponsors a tip-a-day series, written by established and emerging evaluators.

We enjoyed the distinct pleasure of having an article featured on February 2, 2012, titled Grasshopper Moments: The Kung-Fu Masters of Process Evaluation. I've cut and pasted the piece into this blog.

However, I also share the link to AEA365, so that you can read all of the great posts and learn from others working in the field of evaluation! As well, you can click the links embedded in the post to access the resources described under Rad Resources. Enjoy!

Catherine (Brehm) Rain on Grasshopper Moments: The Kung-Fu Masters of Process Evaluation

I’m Catherine (Brehm) Rain of Rain and Brehm Consulting Group, Inc., an independent research and evaluation firm in Rockledge, Florida. I specialize in Process Evaluation, which answers the questions Who, What, When, Where and How in support of the Outcome Evaluation. Field evaluations occur in chaotic environments where change is a constant. Documenting and managing change using process methods helps inform and explain outcomes.

Lesson Learned: If you don’t know what events influenced a program, or how, chances are you won’t be able to explain the reasons for its success or failure.

Lesson Learned: I’m a technology fan, but I’m also pretty old-school. Like Caine in the legendary TV show Kung Fu, I frequently conjure up the process evaluation ‘masters’ of the 1980s and ‘90s to strengthen the foundation of my practice and to regenerate those early ‘Grasshopper’ moments of my career.

Old-school? Or enticingly relevant? You decide, Grasshopper! I share a few with you.

Hot Tip: Process evaluation ensures you answer questions of fidelity (to the grant, program and evaluation plan): did you do what you set out to do with respect to needs, population, setting, intervention and delivery? When these questions are answered, a feedback loop is established so that necessary modifications to the program or the evaluation can be made along the way.

Rad Resource: Workbook for Designing a Process Evaluation, produced by the State of Georgia, contains hands-on tools and walk-through mechanics for creating a process evaluation. The strategies incorporate the research of several early masters, including three I routinely follow: Freeman, Hawkins and Lipsey.

Hot Tip: Life is a journey—and so is a long-term evaluation. Stuff happens. However, it is often in the chaotic that we find the nugget of truth, the unknown need, or a new direction to better serve constituents. A well-documented process evaluation assists programs to ‘turn on a dime’, adapt to changing environments and issues, and maximize outcome potential.

Rad Resource: Principles and Tools for Evaluating Community-Based Prevention and Health Promotion Programs by Robert Goodman includes content on the FORECAST Model designed by two of my favorites (Goodman & Wandersman), which enables users to plot anticipated activities against resultant deviations or modifications in program and evaluation.

Hot Tip: If you give process evaluation short shrift, you may end up with a Type III error, primarily because the program you evaluated is not the program you thought you evaluated!

Rad Resource: Process Evaluation for Public Health Interventions and Research: An Overview by Linnan and Steckler discusses Type III error avoidance as a function of process evaluation. As well, the authors discuss the historical evolution of process evaluation by several masters including but not limited to Cook, Glanz and Pirie.

Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Tuesday, January 31, 2012

Creating a Research-Based Program Fidelity Instrument

Webster's defines fidelity as (a) the quality of being faithful and (b) accuracy in details (www.merriam-webster.com/dictionary/fidelity). When you implement a research-based program, you want to make sure that you are matching the developer's design, step-for-step, faithfully and accurately.

In a previous blog we discussed the importance of ensuring that the program you deliver reaches the correct population, in the right setting, with the right amount of service hours, and with fidelity. Some of that information can be determined from demographics and dosage sheets (the number of hours per session times the number of sessions).
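Because dosage is simple arithmetic, it lends itself to a quick spreadsheet or script check. Here is a minimal Python sketch under assumed values--the session length, the minimum-hours threshold, and the field names are all hypothetical, not taken from any particular program or dosage sheet.

```python
# Minimal sketch: compute dosage (hours per session x sessions attended)
# from hypothetical attendance records. Field names and thresholds are
# illustrative assumptions, not part of any specific program design.

HOURS_PER_SESSION = 1.5      # assumed session length, in hours
MIN_REQUIRED_HOURS = 12.0    # assumed developer minimum dosage

attendance = [
    {"participant_id": "P001", "sessions_attended": 10},
    {"participant_id": "P002", "sessions_attended": 6},
]

for record in attendance:
    dosage_hours = record["sessions_attended"] * HOURS_PER_SESSION
    status = "meets minimum" if dosage_hours >= MIN_REQUIRED_HOURS else "below minimum"
    print(f'{record["participant_id"]}: {dosage_hours:.1f} hours ({status})')
```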

Other information, however, might not be as readily available via quantitative measures. And generally speaking, most developers do not include a fidelity instrument with their programs or curricula. That means it is up to you to design a qualitative instrument that will adequately assess whether or not the facilitator followed the developer's design and met critical elements of a curriculum.

Steps to Designing A Fidelity Instrument

1. Read the Developer's Introduction:
Most of the time, the developer will describe in narrative, at the beginning of the program, the key elements of his or her product. We consider key elements to be the components of the program that must or should be in place in order for the program to work as designed.

Key element information could include the research basis (or theoretical underpinnings) of the program, a description of the population tested, goals, research results, and modifications made to materials over time. It may also describe the minimum number of sessions participants must receive to achieve stated goals, the frequency of exposure, expectations about the organization of the class or group as well as the facilitator skills required with respect to training, application of training, and group management. Programs requiring student interaction (e.g. discussion groups, role plays) might also recommend establishing a set of mutually agreed-upon rules to follow. Other details can include a description of class or group activities, such as community service, the conduct of a media campaign, or parent involvement and when those should occur.

2. Make a List of the Suggested Key Elements Contained in the Introduction:
After reading the introduction, make a list of the Key Elements. Key elements we usually identify as worthy of tracking through evaluation include (but are not limited to):

a. Sequencing of sessions - to achieve effectiveness, must the group receive sessions in the proper sequence, or can a creative facilitator 'mix it up' without confusing the group or negatively impacting outcomes?

b. Number of sessions, duration, and timing - to replicate the developer's findings, participants should receive the minimum number of sessions recommended, the minimum number of hours suggested, and at prescribed intervals. Some programs (like Botvin's Life Skills) permit delivery on a daily basis over two weeks, three times a week, or once a week. Other programs (such as Second Step) use a therapeutic model, meaning students receive information and then process it over a few days or a full week before receiving the next session. Second Step students might receive only one or two sessions per week over multiple weeks.

c. Facilitator training and skills acquisition - Some programs offer no training while others provide several hours of in-person, on- or off-site training. It also is not unusual to find organizations familiar with the program conducting their own training (e.g. schools and non-profit organizations). It is important to know whether facilitators received training, and if so, the number of hours and type. We also examine facilitator dexterity: how well do they know the curriculum; are they at ease with content; is their training evident in the way they manage participants, invite discussion, guide activities? Do they display excellent facilitator skills: being non-judgmental, content-informed, and, above all, enthused?

d. Group or classroom setting - Most programs we've worked with recommend seating participants in a circle. A circle invites openness, offers great opportunity to bring the group into focus, encourages collaboration, and facilitates discussion or observation of role plays. However--and this is a Big However--we've found most school administrators are not too keen on rearranging classrooms. The logistics of setting up/taking down before the next class comes in may prove chaotic. If the classroom is left with chairs or desks in a circle rather than in rows at the end of the day, maintenance personnel might end up being responsible for putting the class back in order before the start of the next school day. Chances are school-based programs--especially those delivered to middle and high school students--may not meet the circle-organization standard most developers envision. This may be one of those things you will need to let go, in terms of fidelity.

However, you should pay attention to other issues within the classroom or group setting. For example: Group Rules. Group rules consist of agreements not to speak when others are speaking, to raise hands, be on time, complete assignments, and participate in role plays and activities. Group Rules also apply to issues of confidentiality. In some programs, students disclose private information during discussion. Many programs suggest facilitators encourage students to formally agree to keep "what happens in Vegas, in Vegas." Facilitators should also be trained to intervene in the discussion and to refer students for services when issues involve the health, mental health or safety of the disclosing student (or others).

e. Participant enthusiasm and adoption of behaviors - Sometimes kids (or grown-ups) just don't get it. If so, either the facilitator requires re-training or the materials require modification. Years ago, middle school students in a low-performing school consistently scored poorly on knowledge surveys, which included very basic definitions of behavior. When interviewing facilitators, we learned these students had difficulty pronouncing certain words as well as understanding what they meant. To take corrective action, facilitators set up 'vocabulary flash cards' containing the difficult-to-understand words and their definitions, which they used with students each time new terms appeared in sessions. While this represented a deviation in fidelity, it served as an important if not imperative modification to facilitate student acquisition of knowledge and attitudinal change.

3. Arrange Topics on an Instrument--Preferably Likert-Scaled:
We arrange major qualifier topics (such as those explored in Items a-e above) as headings. Beneath each heading, we identify a series of sub-topic qualifiers (such as "students participated in discussions"; "classroom facilitator used non-judgmental statements"; "facilitator collected assignments"; "facilitator reviewed major components of previous session before beginning new session"; "facilitator referred to Group Rules when necessary").

Each of these contains a two-part observation response: The first identifies whether, from observation, the qualifier was apparent: Yes, No, or Does Not Apply. You'd use Does Not Apply when the activity or qualifier was not called for in the session you observed. For example, the session you observed might not have included Role Play; therefore, this qualifier and others associated with the conduct of Role Play would not apply.

The second part of the response applies only if the answer to the first question was "Yes". You'd then score the degree to which the qualifier achieved fidelity. We use a Likert Scale, usually with five responses. When checked off, these items later can be entered into a database and quantitatively analyzed.
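To illustrate how those two-part items might later be entered and summarized quantitatively, here is a minimal Python sketch. The item wording, the Yes/No/Does Not Apply coding, and the summary rule (a simple mean of the 1-5 ratings on items observed as "Yes") are illustrative assumptions, not a prescribed scoring method.

```python
# Minimal sketch of two-part fidelity observation items:
# part 1 records whether the qualifier was apparent (Yes / No / Does Not Apply);
# part 2 records a 1-5 Likert rating only when part 1 is "Yes".
# Item text and the summary rule (mean of rated items) are illustrative assumptions.

observations = [
    {"item": "Students participated in discussions", "observed": "Yes", "rating": 4},
    {"item": "Facilitator used non-judgmental statements", "observed": "Yes", "rating": 5},
    {"item": "Facilitator guided role plays", "observed": "Does Not Apply", "rating": None},
    {"item": "Facilitator referred to Group Rules when necessary", "observed": "No", "rating": None},
]

applicable = [o for o in observations if o["observed"] != "Does Not Apply"]
rated = [o["rating"] for o in applicable if o["observed"] == "Yes" and o["rating"] is not None]

print(f"Applicable items: {len(applicable)}")
print(f"Items observed as 'Yes': {len(rated)} of {len(applicable)}")
if rated:
    print(f"Mean fidelity rating (1-5) across rated items: {sum(rated) / len(rated):.2f}")
```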

A Word About Observations
Observation is part and parcel of process evaluation. What better way for a process evaluator to see for him or herself whether a program is running as designed, by a skilled and knowledgeable facilitator engaging with enthusiastic and adaptive participants?

However--another Big However--unless you are free to come and go at will and stop in when the mood (or your management plan) dictates, observation may not always work. In some instances, we have come across excellent and effective facilitators who crumble under the eye of an evaluator (no matter how friendly or back-of-the-room we remain). We have discovered others who perform well only when observed, putting on the proverbial 'Dog and Pony Show' while we're there--and delivering run-of-the-mill, less than stellar performance thereafter.

Once, we observed a facilitator who so knocked our socks off, we came back raving. Wow! What a super guy! Wow! His students are so lucky! Wow! Can't wait for results to come in!

Uh-huh. When results for the group of students he worked with did come in: Yikes! Their scores were so poor--worse than the comparison groups' scores--that we said to ourselves, "Gee, it's almost as if they didn't participate in the program at all."

Guess what. They didn't--except on the one day we observed.

We now use a combination of three methods to assess fidelity, rather than relying on evaluator observation alone.

First, we try to capture two on-site evaluator observations.

Second, individuals within the grantee organization who have trained others or who are considered experts in the program stop in and visit with the facilitator and students. Their purpose is not to observe or score, but to answer questions and offer support and resources.

Third, we ask facilitators to complete a self-assessment rating survey (on-line). This survey not only asks facilitators to document Key Elements of the program, but also asks for their feedback. Does this program work? Do you like it? How did your students respond? Did you have everything you needed to do your job? Did you have enough time to implement everything the developer expected you to in the space of a session? Would you recommend this program to others at your school?

Managing fidelity can be time consuming. But when you take the time to put it in place, you will find it absolutely explains a lot (think of the Dog and Pony guy described above who never really, truly implemented the program!) and adds greater understanding of what worked, what didn't, why and why not!

Sunday, December 11, 2011

Fiddling with Fidelity?

I'm a huge fan of bread pudding.

First, I like the way it tastes. Second, I love the idea that this particular dessert gives me something useful to do with stale bread, other than to throw it out or to the birds. A couple of months ago, I gathered from the recesses of the fridge all of the last bits of lonely bread items (heels, an errant hot dog bun, a forgotten hamburger bun, a pair of deflated-looking biscuits) and whipped up a batch of bread pudding. As it baked, my mouth watered. I couldn't wait to sit down to a nice big slice with my coffee after dinner that evening.


Imagine my disappointment when I took the first bite, and instead of savoring its cinnamon-y sweet flavor, I spit it out and said "Eeeeewwwww!"


Gallantly, my husband nibbled, then looked at me. "What the heck is wrong with this?"


After racking my brain--what had I done differently?--I realized I'd forgotten to add the required dose of sugar. I had not been faithful to the recipe. I had not been accurate in the details of concocting what I thought would be a pleasant treat to enjoy at the end of the day.


I had fiddled with fidelity.


Right now you might be wondering: what is fidelity and what does bread pudding have to do with program evaluation?

Fidelity is an integral part of Process Evaluation: the who, what, where, when, why and how of a program. Webster's defines fidelity as (a) the quality of being faithful and (b) accuracy in details (www.merriam-webster.com/dictionary/fidelity). In theory, if you deliver a research-based program as designed, your results will mirror those of the developer. Mess with fidelity, fiddle with a component here and there, and your end results--outcomes--may fall as flat and yucky as my bad batch of bread pudding.

Key Process Questions to Ask Before and During Program Implementation

Before and during program implementation, you and your evaluator should routinely ask these key process evaluation questions related to fidelity:

Are we following the grant management plan? The grant management plan should identify:


  • Who you serve (population)

  • What you deliver (curriculum, interventions, training, etc.)

  • Where you deliver (schools, communities, neighborhoods or regionally)

  • When and at what dosage you deliver it (frequency and duration of services)

  • Why you deliver the service (to change behavior, attitudes, skills, knowledge) and

  • How you intend to effect the proposed change (ideally, using a research-proven strategy, intervention, or activity designed to promote desired change).

If you discover you are serving a different population than planned, you may have to go back to the drawing board. Stuff happens; all is not lost. However, you cannot blindly substitute populations and expect to get the same results.


Years ago we evaluated an after-school program designed specifically for middle school females. Post-award and over time, the grantee learned seventh and eighth grade females had better things to do with their time (in the girls' opinion) than to receive information about female health issues, academic aspirations, and self-esteem. However, 5th grade girls (and their parents) were chomping at the bit to gain entry to the program. To boost attendance, the grantee eagerly accepted them. This set the program timeline back as adaptations were made; however, in making the necessary developmental changes to the program activities to accommodate a different population, the grantee was able to produce results in keeping with the original application and project goals.


Are we meeting the developer's model? When you select a particular research-based program, you do so presumably to achieve outcomes of positive change related to the participant behaviors you wish to impact. The key factors you and your evaluator will take into consideration include (but are not limited to):



  • Number of sessions, and frequency and duration of sessions (dosage)

  • Ordering of sessions (must the program be delivered sequentially or can it be delivered out-of-sequence with the same results)

  • Use of handouts, worksheets, and team projects

  • Facilitator training and access to materials

  • Facilitator commitment to the program

  • Delivery environment and setting

  • Appropriateness of materials to the target population (developmental, gender or cultural application)

Most research-based programs are just that: field-tested with a variety of populations in varying doses (frequency and duration of services) to determine what works, and what doesn't. When a research-based program is delivered with less frequency and duration than the developer prescribes, or if important topics are left out or irrelevant information added, chances are the program will not produce the desired results.


Several years ago, one of forty schools participating in a very large grant routinely failed to produce results. Using Key Informant interviews, we determined that the site facilitators didn't like the information contained in the developer's drug prevention component. So, they created their own materials, which proved to be superficial and opinion- rather than fact-based. As a result, students could not correctly answer knowledge questions. And, lack of knowledge negatively affected student responses to attitudinal survey items. Essentially, these facilitators changed an important ingredient of a well-researched and proven program, and in doing so, rendered it ineffective. Kind of like forgetting to add the sugar to the bread pudding--or worse, substituting powdered or brown sugar for white granulated.


There are times when adjusting or adapting a management plan and even a research-based program is warranted. We'll address those situations in a future blog. The important thing to remember for now is to do everything in your power to stay faithful to your plan and to the developer's model. Anything you can do to avoid fiddling with fidelity increases the likelihood you will achieve program outcomes and meet initiative goals!





Monday, October 3, 2011

7 Qualities of an Effective Universal Program Design

Whether you’re designing a series of program interventions as part of grant-funding or you are developing a single program, it is important to consider the qualities of effective universal program design before you implement the program with a broad-based audience.

A universal program is one delivered to general members of a population. These programs address ‘universal’ problems commonly occurring among certain populations (such as drinking and driving among high school students or bullying among middle school students). We know not all high school students drink and drive and not all middle school students bully. However, an estimated proportion of these populations will do just that, if they do not receive prevention programs that help them develop the knowledge, attitudes, skills and abilities necessary to avoid these or other harmful, risk behaviors.

Below, we’ve listed what we believe to be the best qualities of an effective universal program design.

Quality 1: You’ve defined the program goals.

While the research community is getting better at predicting who might develop risk behaviors (thus permitting the development of ‘targeted interventions’), it is far simpler and less expensive to direct programs at students ‘universally’.

Goals of universal programs aim primarily to prevent a problem behavior from developing in the first place. However, given the ages at which some youth become involved in risk behaviors, universal program designers cannot ignore the fact that some kids may be experimenting with certain harmful activities while others may be actively involved in them. Therefore, goals of an effective universal program should include preventing, reducing, and/or eliminating the identified problem behavior.

Quality 2: You’ve identified the problem and the pervasiveness of the problem.

What is the problem your program addresses? And is it a real problem? Often the problems we ‘perceive’ turn out to be real problems—but you have to review the statistics to assess whether the problem is pervasive enough to address through a universal program.

Statistical sources include the Centers for Disease Control and Prevention, your local health department, state departments of Education or Children and Family Services, and local schools and law enforcement. These entities generate publicly accessible databases identifying, among other things, rates of substance abuse, violence, crime, family disruption, and morbidity and mortality associated with the problem behavior.

Once you’ve determined the problem is real and worth the time, cost and effort of designing and implementing a program, you need to assess the underlying causes and the evidence-based solutions to the problem.

Quality 3: You’ve identified underlying or contributing causes and evidence-based solutions to the problem behavior.

Underlying or contributing causes to a problem behavior include a variety of factors. Ideally, your program will address more than one contributing factor. Using childhood obesity as the problem example, consider some of the factors influencing the condition:


  • Family history of obesity

  • Inactivity

  • Poor coping skills

  • Poverty and poor eating habits

  • Family traditions and culture

Before you can identify an evidence-based solution to the problem behavior, you must identify which factors seem most likely associated with the behavior in your population. To find out, conduct a needs assessment, talk to professionals working with the population, and ask others to share data and information. Some of these factors can be verified statistically by the same sources (e.g. Centers for Disease Control) that helped you identify the pervasiveness of the problem.


Next, you need to know what works and what doesn’t. That means reading the literature, consuming everything you can about the problem and solutions, so that you can structure your program to replicate or improve upon solutions. For example, in the case of childhood obesity, solutions might include



  • Access to health assessments to determine medical causes, treatments and other related health issues affecting the child

  • Family education programs: cooking, meal preparation, budgeting, family attitudes

  • Increased opportunities for the child to engage in physical activity and exercise

When you review these solutions, you begin to understand why targeting one factor might not prove effective. If a child engages in increased physical activity and exercise, yet does not first see a medical specialist, you won't know if the child has other health problems, such as undiagnosed high blood pressure or diabetes, that place the child further at risk. Sending a kid to the gym for aerobic exercise when he has other health problems could cause him and you significant difficulty. Or, if a child is otherwise healthy and does increase physical activity, but mom continues to serve foods at home that are high in fat or calories, chances are the child will not achieve her weight goal.


Quality 4: You’ve chosen a change model to underpin the conceptual framework of your program.


Change is dynamic. It takes time, energy and effort. It is reciprocal. There are setbacks. Above all, it relies on the individual to first form the intention to change and second to decide that the ‘benefits’ of change outweigh the ‘costs’.


Therefore, design your program to incorporate a theoretical change model taking these and other dynamics into account. You can find basic information on change models in Theory at a Glance: A Guide for Health Promotion Practice (U.S. Department of Health and Human Services), at www.cancer.gov/cancertopics/cancerlibrary/theory.pdf.


Quality 5: You’ve addressed the keys to behavioral change: Knowledge, Attitude, Skills and Abilities.


A knowledge program alone will not generally change behavior. However, it might help an individual shape the intention to change—for example: Now that I know I have a problem, I plan to do something about it. An attitudinal shift may help the individual think differently about their problem, problems in their families, or among their peers—for example: I think my friends and I could have more fun if we didn’t drink at parties. Altered attitudes go a long way toward promoting change—but as a stand-alone, they may not effect an actual behavioral change.



The most effective programs provide individuals with components that increase their knowledge, shape positive attitudes, and imbue participants with skills and abilities to change behavior (or avoid the behavior). Bottom line: if you know you have a problem, if you want to make a change, you won’t get very far without knowing how to make it happen. Effective universal programs commonly provide the tools that permit an individual to put a plan into action.


Quality 6: Your program is age and developmentally appropriate, and therefore meaningful to the population.


Make sure your program is appropriate to the population you serve. If you are working with students, readability is an issue. Work with a school educator to ensure the language you use and the concepts you present can be meaningfully understood by your population. Make sure your topics are relevant to the population: a session on sexually transmitted diseases might be appropriate for middle and high school students, but not for elementary grades. While the consequences of losing a driver’s license might have great impact on high school students, middle school students can’t envision that far into the future (yes, two to three years down the road seems like a lifetime to kids!).


Quality 7: You’ve built in monitoring measures, piloted the program, and made revisions.


Pilot your program before ‘broadcasting’ or disseminating it to a wide population. Use pre-post measures to see if the pilot population achieves the goals and objectives you envisioned. Monitor the implementation through observation. Some things that appear great on paper fall absolutely flat in practice. Be willing to go back to the drawing board to revise your program as necessary.
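For the pre-post piece of the pilot, a paired comparison is one common way to gauge change. The sketch below uses made-up scores and SciPy's paired t-test to show the general idea; your actual measures and analysis would depend on your pilot design and data.

```python
# Minimal sketch: paired pre-post comparison for a pilot group.
# Scores are hypothetical; scipy.stats.ttest_rel performs a paired t-test.
from scipy import stats

pre_scores = [52, 61, 48, 70, 55, 63, 58, 66]    # hypothetical pre-test scores
post_scores = [60, 68, 55, 74, 57, 71, 65, 69]   # hypothetical post-test scores

mean_change = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

print(f"Mean pre-post change: {mean_change:.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```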

Thursday, September 8, 2011

5 Questions to Ask Before Hiring an Evaluator

Making the decision to hire an evaluator is a big step. Almost as big as deciding to get married! After all, you and your evaluator will be glued at the hip for an extended period of time—at least a year, if not for the full term of your project: three to five years, give or take a no-cost extension or two! Here are five questions, at minimum, you should ask a prospective evaluator before making your final decision.

Question 1: What are your credentials?

Credentials are qualifications and consist of:

• Academic education, degrees and professional training

• Years of experience as an evaluator

• Experience with programs and populations similar to yours

• Experience with various forms of evaluation design and statistical analysis

To answer your question, your evaluation candidate should be able to provide evidence of his/her credentials, including but not limited to:

• Curriculum Vitae (CV) or Summary of Qualifications

• Description of Sample Projects (discipline, population, evaluation design and methods of analyses employed)

• One or two sample reports or published articles authored by the evaluator

Question 2: How familiar are you with the population and community we serve?

Many evaluators work across the nation and internationally. Just because an evaluator is not a member of your community or neighborhood doesn’t mean he/she can’t effectively serve your project, especially using technology. However, evaluators should know something about you, your organization and community, the population and yes, even the policies and culture of your geographic service area. He/she should be able, at interview, to show some familiarity with:

• Your organization, the community where you are located, and the population you serve

• Basic demographics of your population, for example: gender, age, developmental age, race/ethnicity, economic levels, health conditions, or languages spoken

• Recent policy or cultural issues that could negatively impact or positively benefit your project

Question 3: Are you willing to train our staff on evaluation as part of your services?

It is no secret that project personnel who have never worked with an evaluator are often afraid of finding themselves or their projects 'under the microscope'. They might fear they will lose their positions if an evaluator thinks they aren’t doing a good enough job, or that their workload and paperwork will increase or materially change. Some believe evaluation is a waste of much-needed resources and that the dedicated budget should go to services and constituent needs.

More importantly, not all project personnel understand the what, why, how, and how-to of program evaluation, its benefits, its methods or its terminology. Successful projects blend program with evaluation—the two must work hand-in-hand. So, find out if your prospective evaluator will:

• Train staff on the purposes and components of your project evaluation

• Take time to show them how each part of the program fits with the various evaluation components

• Explain how the data will be collected, and how it will be used

• Discuss the benefits of evaluation for your project as well as your organization. Well-leveraged evaluation results can grow your organization, expand your service capacity, increase your organizational capability, and increase your funding success!

• Provide on-going technical assistance

Question 4: Are you registered with an Institutional Review Board?

While the Secretary of Health and Human Services (HHS), in conjunction with the Office of Science and Technology Policy (OSTP), is considering revisions to the rules governing Participant Protection and Confidentiality in some areas affecting evaluation (American Evaluation Association, info@eval.org, September 6, 2011), it is important to understand your project may be subject to review and approval by an Institutional Review Board (IRB). This is particularly true if your project serves ‘vulnerable populations’ such as minor children, minorities, and individuals involved with the criminal justice system, among others.

Obtaining IRB approval can be a labor-intensive and, at times, expensive process. Find out if your prospective evaluator is currently registered with an IRB. If not, ask how he/she would go about securing one for your project.

Question 5: How do you structure your fees?

Whether your prospective evaluator is an individual, a member of a for-profit or non-profit corporation, or a university faculty member, he or she will expect payment for the services provided. Some evaluators take projects on a flat-fee basis, others charge by the hour, and still others charge based on ‘deliverables’. You also want to know when and how you will pay your evaluator. Some prefer to be paid monthly, others at the time of delivery of a project deliverable, and still others ask for ‘retainers’ paid up-front, followed by remaining payments structured over the balance of the contract period (e.g. quarterly).


Consider these questions. Take time to interview your prospective evaluator. Get to know him or her and make sure the two of you 'fit'. If you do, it will be a match made in heaven!