Webster's defines fidelity as (a) the quality of being faithful and (b) accuracy in details (www.merriam-webster.com/dictionary/fidelity). When you implement a research-based program, you want to make sure that you are matching the developer's design, step for step, faithfully and accurately.
In a previous blog, we discussed the importance of ensuring that the program you deliver reaches the correct population, in the right setting, with the right number of service hours, and with fidelity. Some of that information can be determined from demographics and dosage sheets (the number of hours per session multiplied by the number of sessions).
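If it helps to make the dosage arithmetic concrete, here is a minimal sketch in Python; the session length and session count are hypothetical numbers, not figures from any particular program.

```python
# Hypothetical dosage-sheet arithmetic: total dosage = hours per session x number of sessions.
hours_per_session = 1.5    # assumed length of one session, in hours
sessions_delivered = 12    # assumed number of sessions a participant attended

total_dosage_hours = hours_per_session * sessions_delivered
print(f"Total dosage: {total_dosage_hours} hours")  # prints: Total dosage: 18.0 hours
```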
Other information, however, might not be as readily available via quantitative measures. And generally speaking, most developers do not include a fidelity instrument with their programs or curricula. That means it is up to you to design a qualitative instrument that will adequately assess whether the facilitator followed the developer's design and met the critical elements of a curriculum.
Steps to Designing a Fidelity Instrument
1. Read the Developer's Introduction:
Most of the time, the developer will describe key elements of his or her product in a narrative at the beginning of the program. We consider key elements to be the components of the program that must or should be in place in order for the program to work as designed.
Key element information could include the research basis (or theoretical underpinnings) of the program, a description of the population tested, goals, research results, and modifications made to materials over time. It may also describe the minimum number of sessions participants must receive to achieve stated goals, the frequency of exposure, expectations about the organization of the class or group, as well as the facilitator skills required with respect to training, application of training, and group management. Programs requiring student interaction (e.g., discussion groups, role plays) might also recommend establishing a set of mutually agreed-upon rules to follow. Other details can include a description of class or group activities, such as community service, the conduct of a media campaign, or parent involvement, and when those should occur.
2. Make a List of the Suggested Key Elements Contained in the Introduction: After reading the introduction, make a list of the Key Elements. Key elements we usually identify as worthy of tracking through evaluation include (but are not limited to):
a. Sequencing of sessions - to achieve effectiveness, must the group receive sessions in the proper sequence, or can a creative facilitator 'mix it up' without confusing the group or negatively impacting outcomes?
b. Number of sessions, duration, and timing - to replicate the developer's findings, participants should receive the minimum number of sessions recommended, the minimum number of hours suggested, and at prescribed intervals. Some programs (like Botvin's Life Skills) permit delivery on a daily basis over two weeks, three times a week, or once a week. Other programs (such as Second Step) use a therapeutic model, meaning students receive information and then process it over a few days or a full week before receiving the next session. Second Step students might receive only one or two sessions per week over multiple weeks.
c. Facilitator training and skills acquisition - Some programs offer no training, while others provide several hours of in-person, on- or off-site training. It also is not unusual to find organizations familiar with the program conducting their own training (e.g., schools and non-profit organizations). It is important to know whether facilitators received training and, if so, the number of hours and type. We also examine facilitator dexterity: how well do they know the curriculum; are they at ease with the content; is their training evident in the way they manage participants, invite discussion, and guide activities? Do they display excellent facilitator skills: being non-judgmental, content-informed, and, above all, enthused?
d. Group or classroom setting - Most programs we've worked with recommend seating participants in a circle. A circle invites openness, offers a great opportunity to bring the group into focus, encourages collaboration, and facilitates discussion or observation of role plays. However--and this is a Big However--we've found most school administrators are not too keen on rearranging classrooms. The logistics of setting up and taking down before the next class comes in may prove chaotic. If the classroom is left with chairs or desks in a circle rather than in rows at the end of the day, maintenance personnel might end up being responsible for putting the class back in order before the start of the next school day. Chances are school-based programs--especially those delivered to middle and high school students--may not meet the circle-organization standard most developers envision. This may be one of those things you will need to let go, in terms of fidelity.
However, you should pay attention to other issues within the classroom or group setting. For example: Group Rules. Group rules consist of agreements not to speak when others are speaking, to raise hands, be on time, complete assignments, and participate in role plays and activities. Group Rules also apply to issues of confidentiality. In some programs, students disclose private information during discussion. Many programs suggest facilitators encourage students to formally agree to "keep in Vegas what happens in Vegas." Facilitators should also be trained to intervene in the discussion, and to refer students for services, when issues involve the health, mental health, or safety of the disclosing student (or others).
e. Participant enthusiasm and adoption of behaviors - Sometimes kids (or grown-ups) just don't get it. If so, either the facilitator requires re-training or the materials require modification. Years ago, middle school students in a low-performing school consistently scored poorly on knowledge surveys, which included very basic definitions of behavior. When interviewing facilitators, we learned these students had difficulty pronouncing certain words as well as understanding what they meant. To take corrective action, facilitators set up 'vocabulary flash cards' containing the difficult-to-understand words and their definitions, which they used with students each time new terms appeared in sessions. While this represented a deviation in fidelity, it served as an important, if not imperative, modification to facilitate student acquisition of knowledge and attitudinal change.
3. Arrange Topics on an Instrument--Preferably Likert-Scaled: We arrange major qualifier topics (such as those explored in Items a-e above) as heading topics. Beneath each heading, we identify a series of sub-topic qualifiers (such as "students participated in discussions"; "classroom facilitator used non-judgmental statements"; "facilitator collected assignments"; "facilitator reviewed major components of previous session before beginning new session"; "facilitator referred to Group Rules when necessary").
Each of these contains a two-part observation response. The first part identifies whether, from observation, the qualifier was apparent: Yes, No, or Does Not Apply. You'd use Does Not Apply when the activity or qualifier was not called for in the session you observed, for example, Role Plays. The session you observed might not have included Role Play; therefore, this qualifier and others associated with the conduct of Role Play would not apply.
The second part applies only if the answer to the first question was "Yes". You'd then score the degree to which the qualifier achieved fidelity. We use a Likert Scale, usually with five responses. When checked off, these items later can be entered into a database and quantitatively analyzed.
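To show how such two-part responses might be captured and later analyzed, here is a minimal Python sketch; the record layout, example qualifiers, and scores are our own illustrative assumptions, not part of any developer's instrument.

```python
# Minimal sketch: recording two-part observation responses and summarizing the Likert scores.
# Field names, qualifiers, and scores below are illustrative assumptions only.
from statistics import mean

# Each record: (heading topic, sub-topic qualifier, observed? Yes/No/Does Not Apply, Likert 1-5 or None)
observations = [
    ("Group setting",      "Students participated in discussions",               "Yes",            4),
    ("Group setting",      "Facilitator referred to Group Rules when necessary", "Yes",            5),
    ("Facilitator skills", "Facilitator used non-judgmental statements",         "Yes",            3),
    ("Facilitator skills", "Facilitator guided role plays",                      "Does Not Apply", None),
]

# Only items marked "Yes" carry a second-part fidelity score; "Does Not Apply" items are excluded.
scored = [score for (_, _, observed, score) in observations if observed == "Yes"]
print(f"Items scored: {len(scored)}, mean fidelity rating: {mean(scored):.1f}")
# prints: Items scored: 3, mean fidelity rating: 4.0
```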
A Word About Observations
Observation is part and parcel of process evaluation. What better way for a process evaluator to see for him or herself whether a program is running as designed, by a skilled and knowledgeable facilitator engaging with enthusiastic and adaptive participants?
However--another Big However--unless you are free to come and go at will, stopping in when the mood (or your management plan) dictates, observation may not always work. In some instances, we have come across excellent and effective facilitators who crumble under the eye of an evaluator (no matter how friendly or back-of-the-room we remain). We have discovered others who perform well only when observed, putting on the proverbial 'Dog and Pony Show' while we're there. Thereafter, it's run-of-the-mill, less-than-stellar performance.
Once, we observed a facilitator who so knocked our socks off, we came back raving. Wow! What a super guy! Wow! His students are so lucky! Wow! Can't wait for results to come in!
Uh-huh. When results for the group of students he worked with did come in: Yikes! Their scores were so poor--worse than the comparison groups' scores--that we said to ourselves, "Gee, it's almost as if they didn't participate in the program at all."
Guess what. They didn't--except on the one day we observed.
We now use a combination of three methods to assess fidelity, rather than relying on evaluator observation alone.
First, we try to capture two on-site evaluator observations.
Second, individuals within the grantee organization who have trained others or who are considered experts in the program stop in and visit with the facilitator and students. Their purpose is not to observe or score, but to answer questions and offer support and resources.
Third, we ask facilitators to complete an online self-assessment rating survey. This survey not only asks facilitators to document Key Elements of the program, but also asks them for their feedback. Does this program work? Do you like it? How did your students respond? Did you have everything you needed to do your job? Did you have enough time to implement everything the developer expected you to in the space of a session? Would you recommend this program to others at your school?
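As a rough illustration of how these sources might be pulled together, here is a small Python sketch comparing evaluator observation scores with facilitator self-assessments; the names, numbers, and the one-point gap threshold are assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch: compare observed fidelity ratings with facilitator self-assessments.
# All names, scores, and the 1.0-point gap threshold are hypothetical.
fidelity_data = {
    "Facilitator A": {"observations": [4.2, 3.8], "self_assessment": 4.5},
    "Facilitator B": {"observations": [2.9, 3.1], "self_assessment": 4.8},
}

for name, scores in fidelity_data.items():
    observed_avg = sum(scores["observations"]) / len(scores["observations"])
    # Flag cases where self-report sits well above what observers saw (the 'Dog and Pony' pattern).
    gap = scores["self_assessment"] - observed_avg
    flag = "  <-- review: self-report well above observed" if gap > 1.0 else ""
    print(f"{name}: observed avg {observed_avg:.1f}, self-assessment {scores['self_assessment']:.1f}{flag}")
```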
Managing fidelity can be time-consuming. But when you take the time to put it in place, you will find it absolutely explains a lot (think of the Dog and Pony guy described above who never really, truly implemented the program!) and adds greater understanding of what worked, what didn't, why, and why not!