
Implementation Research: A Synthesis of the Literature

Publication Year: 2007
Authored By: National Center for Mental Health Promotion and Youth Violence Prevention

Implementation Research: A Synthesis of the Literature, by Dean Fixsen, Sandra Naoom, Karen Blase, Robert Friedman, and Frances Wallace, was published in 2005 by the National Implementation Research Network at the Louis de la Parte Florida Mental Health Institute, University of South Florida, and was produced with support from the William T. Grant Foundation. This summary was created by the National Center for Mental Health Promotion and Youth Violence Prevention and is being distributed with permission from the National Implementation Research Network. The opinions expressed in the publication are those of the authors and do not necessarily reflect those of the William T. Grant Foundation. The complete version of Implementation Research: A Synthesis of the Literature is available online at http://www.schools.utah.gov/fsp/College-and-Career-Ready/Meetings/2011-F...

Why and How Implementation Research: A Synthesis of the Literature Was Created

The National Implementation Research Network prepared this synthesis because more is known about developing effective evidence-based intervention programs than about effectively implementing these programs. The analysis includes 377 articles and publications that presented rigorous evaluations, literature reviews, and theoretical discussions. The articles were selected from a literature search for research on program implementation published in English from 1970 to 2005.

Practices and Programs

Implementation Research: A Synthesis of the Literature distinguishes between practices and programs by using the following definitions.

Evidence-based practices are skills, techniques, and strategies that can be used by a practitioner. Examples of evidence-based practices include cognitive behavior therapy, cognitive mapping, the Good Behavior Game, systematic desensitization, token economy motivation systems, and social skills teaching strategies.

Evidence-based programs consist of a collection of practices that are done within known parameters (philosophy, values, service delivery structure, and treatment components). . . . Such programs, for example, may seek to integrate a number of intervention practices (e.g., social skills training, behavioral parent training, cognitive behavior therapy) within a specific service delivery setting (e.g., office-based, family-based, foster home, group home, classroom), and organizational context (e.g., hospital, school, not-for-profit community agency, business), for a given population (e.g., children with severe emotional disturbances, adults with co-occurring disorders, children at risk of developing severe conduct disorders). Examples of evidence-based programs include Assertive Community Treatment, Functional Family Therapy, Multisystemic Therapy, and Supported Employment.

Implementation requirements for practices and programs appear to be similar. In this summary, therefore, the term programs will be used, because most Safe Schools/Healthy Students (SS/HS) projects implement programs rather than practices.

What Is Implementation?

Implementation is “a specified set of activities designed to put into practice an activity or program.” Implementation can take place at a number of levels: the practitioner, the agency, or the community. Evidence-based programs and practices will not be effective unless they are fully implemented. Effective implementation—that is, “performance implementation”—is often elusive. Many programs settle instead for “paper implementation,” which creates policies and procedures that can be tracked on paper but have no effect, or “process implementation,” in which trainings and changes in procedures, culture, and language pay “lip service” to implementation but do not create actual changes in practice that benefit the intended audience (the clients or consumers).

Conceptual Framework of Implementation

This analysis produced a conceptual framework with five essential components:

  1. a Source: an organization or individual who has created and evaluated the program to be implemented. [In SS/HS, the Source is usually the developer.]
  2. a Destination: the practitioners or organization that will adopt and house the program. [In SS/HS, this is usually a school or district and its partners.]
  3. a Communication Link: individuals or a group (“purveyors”) who will conduct the implementation. [In SS/HS, this is the SS/HS initiative and its partners.]
  4. a Feedback Mechanism: a way to provide “a regular flow of reliable information.” [In SS/HS, this is usually accomplished through partnership meetings, evaluation, and other means of communication.]
  5. a Sphere of Influence: the “social, economic, political, historical, and psychosocial factors that impinge directly on people, organizations, or systems.”

Stages of Implementation

The review revealed six stages of the implementation process:

  1. Exploration and Adoption: This is the process by which an individual, organization, or community (a) comes to understand a need; identifies a program to satisfy that need; assesses the potential match between the need and the practice or program; determines what is needed to implement that program; and examines the resources available in the community; and (b) makes a decision to adopt (or not adopt) the practice or program.
  2. Program Installation: This stage includes the tasks that must be accomplished before the new program can actually begin to function. These tasks include putting new policies into place, obtaining necessary resources, hiring or training staff, and so on.
  3. Initial Implementation: Implementation has begun but is not yet fully in place. Many implementation attempts end during this stage, overwhelmed by inertia and other problems.
  4. Full Implementation: “At this point, the implemented program becomes fully operational with full staffing complements, full client loads, and all of the realities of ‘doing business.’”
  5. Innovation: This is the stage at which staff members have enough experience with the program to refine and expand it in response to the unique challenges and circumstances of the community in which it is implemented. Desirable changes that expand program effectiveness are “innovations” and should become part of the standard practice of the program. Some changes, however, represent “program drift” and are a threat to fidelity. Research seems to indicate that “adaptations made after a model had been implemented with fidelity were more successful than modifications made before full implementation.”
  6. Sustainability: “The goal during this stage is the long-term survival and continued effectiveness of the implementation site in the context of a changing world.”

Unfortunately, “most evaluations of attempted program implementation occur during the initial implementation stage, not the full operation stage. Thus, evaluations of nearly implemented programs may result in poor results, not because the program at an implementation site is ineffective, but because the results at the implementation site were assessed before the program was completely implemented and fully operational.”

Core Implementation Components

One of the goals of the synthesis was to establish the core implementation components, that is, the most essential and indispensable components of an implementation. Knowing what is essential and indispensable will allow implementers to focus on the activities necessary to implement a program effectively rather than on nonessential activities that waste time and resources and do not contribute to the success of the implementation. The researchers identified six core implementation components:

  1. Staff selection
  2. Preservice and inservice training
  3. Ongoing consultation and coaching
  4. Staff and program evaluation
  5. Facilitative administrative support
  6. Systems interventions

These core implementation components must be sustained if the effectiveness of the program is to be sustained. “A given practice or program may require more or less of any given component in order to be implemented successfully and some practices may be designed specifically to eliminate the need for one or more of the components.” For the most part, all core implementation components must be present if an implementation is to succeed. Because the core implementation components complement one another, a weakness in one component can be overcome by strengths in other components.

The core implementation components can be provided by contracting with an outside organization (such as the developer of the evidence-based program), by the implementing organization with the close consultation and help of an outside organization, or by developing self-sustaining implementation sites (either an entirely new organization or a new “department” in an existing organization).

Staff Selection (Core Implementation Component 1)

Staff selection (which precedes training) is important for implementation. Implementation Research: A Synthesis of the Literature states:

Beyond academic qualifications or experience factors, certain practitioner characteristics are difficult to teach in training sessions so must be part of the selection criteria (e.g., knowledge of the field, common sense, social justice, ethics, willingness to learn, willingness to intervene, good judgment). . . . Some programs are purposely designed to minimize the need for careful selection. . . . Others have specific requirements for practitioner qualifications and competencies.

The bottom line is that unless the practitioners who are carrying out the interventions with clients are doing so effectively and with fidelity, the program will not be effective.

Staff selection extends beyond practitioners. Every staff member (practitioners, trainers, coaches/consultants, evaluators, and administrators) must be selected to suit his or her role.

Preservice and Inservice Training (Core Implementation Component 2)

Implementation Research: A Synthesis of the Literature stresses the importance of training.

Innovations such as evidence-based practices and programs represent new ways of providing treatment and support. Practitioners (and others) at an implementation site need to learn when, where, how, and with whom to use new approaches and new skills. Preservice and inservice training are efficient ways to provide knowledge of background information, theory, philosophy, and values; introduce the components and rationales of key practices; and provide opportunities to practice new skills and receive feedback in a safe training environment.

However, research demonstrates that training, in and of itself, does not result in changes in practitioner behavior or improvement in client outcomes. Training combined with ongoing consultation and coaching (discussed below), however, can produce these changes.

Ongoing Consultation and Coaching (Core Implementation Component 3)

The research reveals that “Most skills needed by successful practitioners can be introduced in training but really are learned on the job with the help of a consultant/coach.” Training without coaching (or coaching without training) is insufficient to produce actual changes in practice. This is because

  • “Newly-learned behavior is crude compared to performance by a master practitioner. . . . Training is usually designed to introduce the learner to the essential elements of a new set of skills. . . . With experience and effective coaching, a practitioner develops a personal style that is comfortable for the practitioner while still incorporating the core intervention components of the evidence-based practice.”
  • “Newly-learned behavior is fragile and needs to be supported in the face of reactions from consumers and others in the service setting.” Clients and stakeholders may react negatively to changes in practice or programs. Coaches need to support practitioners in the face of these negative reactions.
  • “Newly-learned behavior is incomplete and will need to be shaped to be most functional in a service setting.” “Preservice workshop training can be used to develop entry-level knowledge and skills. Then, coaching can help practitioners put the segmented basic knowledge and skills into the whole clinical setting.”

Staff and Program Evaluation (Core Implementation Component 4)

Core Implementation Component 4 includes two types of evaluation.

Staff evaluation is designed to assess the use and outcomes of the skills that are reflected in the selection criteria, taught in training, and reinforced and expanded in the consultation and coaching process.

Program evaluation . . . assesses key aspects of the overall performance of the organization to help assure continuing implementation of the core intervention components over time.

Evaluating fidelity requires measuring a combination of context, compliance, and competence. Implementation Research: A Synthesis of the Literature defines these three terms as follows:

  • Context refers to the prerequisites that must be in place for a program or practice to operate (e.g., staffing qualifications or numbers, practitioner-consumer ratio, supervisor-practitioner ratio, location of service provision, prior completion of training).
  • Compliance refers to the extent to which the practitioner uses the core intervention components prescribed by the evidence-based program or practice and avoids those proscribed by the program or practice.
  • Competence refers to the level of skill shown by the [practitioner] in using the core intervention components as prescribed while delivering the treatment to a consumer.

Evaluation information on context, compliance, and competence provides essential feedback for coaches, administrators, and those responsible for effectively implementing evidence-based practices or programs.

Facilitative Administrative Support and Systems Interventions (Core Implementation Components 5 and 6)

Implementation Research: A Synthesis of the Literature defines the final two core implementation components as follows.

  • Facilitative administration provides leadership and makes use of a range of data inputs to inform decision-making, support the overall processes, and keep staff organized and focused on the desired clinical outcomes.
  • Systems interventions are strategies to work with external systems to ensure the availability of the financial, organizational, and human resources required to support the work of the practitioners.

These components provide an essential context for those who actually implement the evidence-based practice or program. To quote from Implementation Research: A Synthesis of the Literature:

The core implementation components appear to be essential to changing the behavior of practitioners and other personnel who are key providers of evidence-based practices within an organization. The core components do not exist in a vacuum. They are contained within and supported by an organization that establishes facilitative administrative structures and processes to select, train, coach, and evaluate the performance of practitioners and other key staff members; carries out program evaluation functions to provide guidance for decision-making; and intervenes in external systems to assure ongoing resources and support for the evidence-based practices within the organization. Thus, as shown in Figure 1, the core implementation components must be present for implementation to occur with fidelity and good outcomes. The organizational components must be present to enable and support those core components over the long term. And, all of this must be accomplished over the years in the context of capricious but influential changes in governments, leadership, funding priorities, economic boom-bust cycles, shifting social priorities, and so on.

Table 1 suggests some possible fidelity outcomes and sustainability outcomes for different combinations of strong or weak core implementation components and organizational components within the context of policy and funding environments that generally are enabling or hindering.

Findings and Conclusions

The authors of Implementation Research: A Synthesis of the Literature reached four major conclusions about the state of research on implementing evidence-based programs and practices, which are excerpted below.

  1. “The best evidence points to what does not work with respect to implementation.”
    • “Information dissemination alone (research literature, mailings, promulgation of practice guidelines) is an ineffective implementation method, and
    • Training (no matter how well done) by itself is an ineffective implementation method.”
  2. “There is good evidence that successful implementation efforts designed to achieve beneficial outcomes for consumers require a longer-term multilevel approach.” (That is, an approach involving the core implementation components.)
  3. “There is little evidence related to organizational and system influences on implementation, their specific influences, or the mechanisms for their impact on implementation efforts. Yet, there seems to be little doubt about the importance of these organizational and influence factors among those who have attempted broad-scale implementation.”
  4. “The most noticeable gap in the available literature concerns interaction effects among implementation factors and their relative influences over time.” That is, the relationship among (1) implementation stages, components, and approaches, and (2) adoption rates, program and practitioner effectiveness, and sustainability.

Figure 1. Multilevel Influences on Successful Implementation

Table 1