
Part 3: Intervention Planning and Implementation

Ah, you’ve reached the finish line (well, almost)! 

Let’s take a quick look at what you’ve accomplished thus far.

 

Up to this point, you have:

  • Identified the ABCs of the problem behavior

  • Developed a function hypothesis for the behavior

  • Decided on a replacement behavior to teach the student

  • Selected preventive, teaching and consequence strategies to increase the desired behavior and decrease the problem behavior

  • Documented all of this information on one of these templates, or something of your own

 

Go, you! That is some pretty impressive work!

 

Now that you have the bulk of your information, it’s time to begin planning how you will implement your strategies.

 

To appropriately plan for the implementation of your intervention, you will need to:

  • Identify each task to be completed and the resources required for each

  • Identify who will be responsible for implementing each aspect of the intervention

  • Determine how you will monitor the student’s progress while on the intervention

  • Set a date to review the chosen plan to determine its effectiveness

Specifying Responsibilities and Resources

Deciding who will implement each step of the intervention is simple. Since you have already identified what strategies you want to use and where the intervention will take place (i.e., where the behavior usually occurs), choose whoever “makes sense” to implement the strategies in those specific settings.

 

You want to take into consideration:

  • The difficulty of the intervention (which should be addressed prior to strategy selection, anyway)

  • Any training that needs to be done to teach the “implementer(s)” prior to starting

  • Who is closest in proximity to the student during specified times of the day

  • With check-in, check-out (CICO) interventions, which staff members have the best relationship with the student

 

Typically, the selected strategies should be feasible enough for the classroom teacher to implement (that is the point of the simple problem-solving process, right?). Additionally, if you are having a student meet with another staff member during any portion of the intervention, it is usually that person who is responsible during the time they are with the student.

After you have outlined who will perform each step, decide on a date to begin implementation.
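
To make this concrete, here is a minimal Python sketch (all tasks, roles, and dates are hypothetical, not taken from an actual plan) of how the pieces of an implementation plan (tasks, resources, responsible staff, and dates) might be recorded as structured data:

```python
from datetime import date

# Hypothetical implementation-plan outline: one entry per task,
# mirroring the planning steps above.
plan = [
    {"task": "Pre-teach the replacement behavior",
     "resources": "social skills lesson materials",
     "who": "classroom teacher",
     "start": date(2019, 2, 4)},
    {"task": "Collect daily behavior ratings",
     "resources": "daily rating form",
     "who": "classroom teacher",
     "start": date(2019, 2, 4)},
]

review_date = date(2019, 3, 4)  # date the team will review effectiveness

for step in plan:
    print(f"{step['start']}: {step['who']} -> {step['task']} ({step['resources']})")
print(f"Plan review scheduled for {review_date}")
```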

 

Let’s look at an example:

Suzy is talking out and disrupting class in order to get attention.

Her teacher requested to meet with her grade-level team to problem-solve why it is occurring. They have gathered preliminary data and have completed a competing behavior pathway for her behavior, identifying the three-term contingency, function hypothesis, and replacement behaviors.

[Competing behavior pathway diagram for Suzy — adapted from MO SW-PBS Tier 3 Workbook]

The team and Suzy’s teacher made sure that the strategies were easy enough for the teacher to implement on a daily basis.

Here is the written plan:

[Suzy’s written intervention plan — source: MO SW-PBS]

 

You can document your plan using the Function-Based Problem-Solving Worksheet (FBPSW) from earlier. Find it on the Resource Page.


 

Next, you will need to decide how to monitor your plan.

Plan for Progress Monitoring

Often overlooked during the intervention planning process, the simple act of monitoring an intervention’s progress increases the likelihood that it will have successful outcomes (Durlak & DuPre, 2008).

 

To adequately evaluate the progress of your intervention, you will need three types of data:

  1. Student Behavior Data

  2. Implementation Fidelity Data

  3. Social Validity Data

 


Student Behavior Data

1. Decide which behavior(s) you are going to measure. You know which behavior(s) you are targeting with the proposed plan, but which ones will you actually measure over time? Usually, the behaviors you measured during baseline data collection are the ones you choose to progress monitor. Note: It’s best practice to measure progress on a desired behavior in addition to the problem behavior. Sometimes you may find that a student is actually increasing a certain skill while another behavior shows less (or no) progress. Being able to look at these data side by side can help inform future intervention decisions when it is time to review the plan.

 

Make sure the person collecting the data is aware of how to accurately measure it. You will want to include your full behavior description on your data forms to leave little room for measurement error.

 

Decide whether the specific behaviors you are targeting can be combined into one category (e.g., disruptive behavior, off-task behavior). For example, if I am targeting talking out and out-of-seat behavior, I might record both under a single “disruptive behavior” category.

2.  Develop a measurable goal for the desired (i.e., replacement) behavior(s). Taking into consideration where the student is functioning at baseline and the length of the intervention, develop a reasonable goal for the student. If you haven’t collected baseline data, you will need to do this now. You can find how to do this here.

 

Your goal should include the following (MO SW-PBS):

  • The condition (Where does the behavior occur? In what specific conditions does it occur?)

  • The specific behavior (What is the replacement behavior?)

  • The criteria (What is a reasonable expectation based on where the student is functioning in relation to his/her same-aged peers? Where do they need to function to be successful?)

Here are some examples:

[Example goal statements — source: MO SW-PBS]

 

Note: All of the information needed for goal development can be obtained from your completed problem-solving template.
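
As a quick illustration, here is a minimal Python sketch (the condition, behavior, and criterion values are hypothetical) showing how the three goal components above combine into a single goal statement:

```python
# Hypothetical goal, stored as its three components.
goal = {
    "condition": "during whole-class instruction",                     # where/when
    "behavior": "raise her hand and wait to be called on",             # replacement behavior
    "criterion": "in at least 4 of 5 opportunities per class period",  # expectation
}

def render_goal(student, goal):
    """Combine condition, behavior, and criterion into one statement."""
    return (f"{goal['condition'].capitalize()}, {student} will "
            f"{goal['behavior']} {goal['criterion']}.")

print(render_goal("Suzy", goal))
# During whole-class instruction, Suzy will raise her hand and wait
# to be called on in at least 4 of 5 opportunities per class period.
```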


3. Select a measurement tool. You’ll need to determine what data you need to adequately measure the student’s progress toward the above goal, and how you plan to collect it.

 

There are quite a few ways to collect measurable data for the purposes of monitoring a student’s plan:

  • Teacher Ratings (e.g., teacher completes a daily form indicating the portion of time the student engaged in a behavior)

  • Direct Observations (e.g., frequency count, duration recording)

  • Permanent products (e.g., student self-monitoring worksheets, behavior charts)

  • School-collected data (e.g., office discipline referrals, curriculum-based measurement data)


To select a method, you’ll want to consider:

  • The specific types of data you need to adequately monitor a student’s progress (i.e., what questions will you need to answer when you review their progress)

  • The data that are already available (i.e., routinely collected) in the school setting that provide some of the information you need

  • The feasibility of the method that you choose

  • Where you will collect intervention data

    • This will depend on where the behavior is problematic (e.g., if a student exhibits the behavior during specials, data collection should extend to the student’s specials classes)

 

To learn more about data collection tools, click here.
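
For example, here is a minimal Python sketch (the counts are hypothetical) of how daily frequency counts for the problem behavior and the replacement behavior could be summarized side by side for a progress review:

```python
from statistics import mean

# Hypothetical daily frequency counts: talk-outs (problem behavior)
# and hand raises (replacement behavior).
daily_data = [
    {"date": "2019-02-04", "talk_outs": 9, "hand_raises": 1},
    {"date": "2019-02-05", "talk_outs": 7, "hand_raises": 3},
    {"date": "2019-02-06", "talk_outs": 5, "hand_raises": 4},
]

# Summarize both behaviors together: a skill may be increasing
# even while the problem behavior shows less progress.
print(f"Avg talk-outs/day:   {mean(d['talk_outs'] for d in daily_data):.1f}")
print(f"Avg hand raises/day: {mean(d['hand_raises'] for d in daily_data):.1f}")
```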

 


Implementation Fidelity

Implementation fidelity (also called “treatment integrity” or “intervention fidelity”) refers to the degree to which an intervention is consistently delivered as it was originally designed. In other words, “fidelity” is simply how well you (i.e., the implementer) are sticking to the initial plan. Measuring how faithfully a plan is adhered to is a crucial aspect of any behavior plan. It is well documented that plans implemented with low fidelity are just not as successful as those implemented to a high degree. This should be pretty straightforward: if an intervention to change behavior is not delivered accurately and consistently, it is less likely to be successful.

Implementing with high fidelity (i.e., 80% or more) allows you to attribute increases or decreases in student performance to the intervention itself. When interventions are implemented with low fidelity, the inconsistent delivery may itself be contributing to a plan’s ineffectiveness. In other words, it is more difficult to discern the root cause of intervention outcomes. For example, if a student’s behavior appears to be getting worse, decisions need to be made about how to proceed with the intervention. If fidelity is low, it is not clear whether the intervention needs to be amended or changed altogether, because it has not been implemented as intended.

 

There are a few ways you can monitor fidelity (these are similar to general data collection methods):

  • Self-Monitoring

  • Direct Observations

  • Permanent Product (i.e., daily intervention materials)

At a minimum, fidelity data should be collected on a weekly or bi-weekly basis. However, when possible, developing a simple system to collect daily fidelity data is the ideal choice. Collecting measurable fidelity information does not need to be complicated. It is imperative that data collection procedures are feasible for the implementer (e.g., the teacher) to maximize the potential for successful results.

 

It is actually very easy to collect daily fidelity data if an electronic data collection system (e.g., Google Forms) is being used. Asking a simple question or two at the end of a rating form is really all you need. Here is a very general question you can use:

[Example of a general daily fidelity question — image]

If you need more detailed information, you can also ask about specific components of the plan. For example, for a break card intervention, you can ask the teacher to answer the following:

  • Were you able to provide the student with an immediate break, when requested?

  • Were you able to praise the student each time they used the strategy?

  • Were you able to provide the student’s rewards, when earned?
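
As a sketch of how simple this can be, the following Python snippet (with hypothetical yes/no answers) rolls a week of responses to the three questions above into a fidelity percentage and checks it against the 80% benchmark mentioned earlier:

```python
# Hypothetical daily yes/no answers to the three component questions
# above (True = the component was delivered as planned that day).
daily_fidelity = {
    "Mon": [True, True, True],
    "Tue": [True, False, True],
    "Wed": [True, True, False],
    "Thu": [True, True, True],
    "Fri": [True, True, True],
}

# Fidelity = components implemented / components planned.
implemented = sum(sum(day) for day in daily_fidelity.values())
planned = sum(len(day) for day in daily_fidelity.values())
fidelity = 100 * implemented / planned

print(f"Weekly fidelity: {fidelity:.0f}%")  # 87%
print("Meets the 80% benchmark" if fidelity >= 80 else "Below the 80% benchmark")
```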

 

Here are some examples for time-out-of-class interventions:

[Example fidelity questions for time-out-of-class interventions — images]

Social Validity

The social validity of an intervention refers to how well it is accepted by its stakeholders (Wolf, 1978). For our current purpose, we are referring to direct stakeholders (Schwartz & Baer, 1991), which include those directly affected by the intervention (e.g., parents, student, teachers).

The degree to which those involved in a student’s plan actually find it relevant, appropriate, and feasible can have significant implications for its outcomes. During the planning process, the teacher usually informally provides information about the social validity of the proposed intervention. However, it is equally important not to overlook parents and students when planning for and evaluating these same interventions.

 

Assessing and monitoring social validity can be as simple as having a conversation with the parent and student about the following:

  • Are you satisfied with the progress of the intervention?

  • Do you feel that the intervention is meaningfully impacting your student’s overall success?

  • Have you been able to obtain the support you need during this process?

  • Have your considerations been heard by staff? Are they addressed in the plan?

  • What suggestions do you have, moving forward with the plan?
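
If you would like a numeric record to track alongside those conversations, here is a minimal Python sketch (the ratings are hypothetical) assuming the first four questions above are answered on a 1–5 agreement scale:

```python
from statistics import mean

# Hypothetical 1-5 ratings (5 = strongly agree) for the first four
# social validity questions above, from the parent and the teacher.
ratings = {
    "parent":  [4, 5, 3, 4],
    "teacher": [5, 4, 4, 4],
}

for respondent, scores in ratings.items():
    print(f"{respondent}: mean acceptability = {mean(scores):.1f}/5")
```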

 

Social validity data can provide helpful insights into the reasons behind a student’s performance on a plan, as well as increase the willingness of parents and/or teachers to be involved in the implementation process. It is best practice to obtain this information prior to beginning the intervention and routinely as the intervention is reviewed and/or changed (e.g., bi-weekly, bi-monthly). Of course, any suggestions made by the parent, teacher, or student should be considered as soon as a review of the plan is feasible.

 

 


Well, there you go! It's time to start your intervention! When you are ready to review your student's progress, Part 4 can help you out!


 

Continue to Part 4
 

How can I make sure I implement my chosen intervention with fidelity?

 

In a nutshell, in order to know whether you are implementing with high fidelity, you’ll first need to know how you are actually supposed to be implementing the intervention. You should be able to refer to the student’s behavior plan for this. If the strategies are not as detailed as you require, consider seeking assistance from your building or district staff about how to effectively implement the evidence-based practices your team has chosen for a student.

 

You can find more on this at the IRIS Center.


References


Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327–350.


The IRIS Center. (2014). Evidence-Based Practices (Part 2): Implementing a Practice or Program with Fidelity. Retrieved from https://iriscenter.staging.wpengine.com/module/ebp_02/
 

Missouri School-Wide Positive Behavior Support (2018). Tier 3 team workbook: 2018-2019. Retrieved from http://pbismissouri.org/wp-content/uploads/2018/05/MO-SW-PBS-Tier-3-2018-04.24.18.pdf?x30198

 

NJ PBSIS (2015). Monitoring progress. Retrieved from http://www.njpbs.org/problem_solving/monitor_progress.html
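
Schwartz, I. S., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art? Journal of Applied Behavior Analysis, 24, 189–204.

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.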
