I. Introduction
Teachers who must respond to student problem behavior during class often lose instructional time (Sugai & Horner, 1994; Walker, Ramsey, & Gresham, 2003). When more than one student engages in problem behavior in the classroom, it can create a challenging environment that impedes student learning and achievement (Greenwood, Horton, & Utley, 2002; Ruhl & Berlinghoff, 1992). As a result, most classroom behavior interventions focus on reducing problem behavior, such as disruption and off-task behavior, and on increasing academic engagement (Ennis, Blair, & George, 2015; Logan, Bakeman, & Keefe, 1997; Wilson & Lipsey, 2007). One intervention for problem behavior in the classroom is teaching students self-monitoring skills. Self-monitoring, a type of self-management intervention (Schloss & Smith, 1998), has been widely used in educational settings to improve a variety of academic and non-academic behaviors in students with and without disabilities (Dunlap et al., 1995; Fuchs et al., 1990; Ganz, 2008; Guzman, Goldberg, & Swanson, 2018; Mathes & Bender, 1997; Rafferty et al., 2011). Self-monitoring has been shown to increase academic engagement, decrease disruption, and enhance academic performance during academic periods (Bruhn, McDaniel, & Kreigh, 2015; Carr & Punzo, 1993; DiGangi, Maag, & Rutherford, 1991; Ha & Choi, 2021; Levendoski & Cartledge, 2000; Miller et al., 2015; Todd, Horner, & Sugai, 1999), and it has been used as a Tier 2 intervention within School-Wide Positive Behavioral Interventions and Supports (SWPBIS; Bruhn et al., 2017).
Self-monitoring procedures place students in the role of observers and data collectors (Amato-Zech, Hoff, & Doepke, 2006) and use audio, visual, or tactile cues to prompt students to rate or score their behavior (Axelrod et al., 2009; Brooks et al., 2003; Edwards et al., 1995; Holifield et al., 2010; Petscher & Bailey, 2006). These procedures have typically focused on teaching students to self-monitor on-task behavior or academic performance. For example, Rock (2005) trained students to self-record attention (engagement) and academic performance (productivity and accuracy) during independent math seatwork using a self-monitoring work plan and a timing device. The students were instructed to record a checkmark on their self-monitoring sheet if their current behavior matched their goal behavior for paying attention, and to record the number of problems completed or pages read on the recording sheet at the end of each 5-min interval as a measure of performance. Academic engagement and productivity increased with the self-monitoring work plan for all students.
One of the benefits of using self-monitoring in the classroom is that it is easy for teachers to implement, placing few demands on teachers’ time and resources (Moore et al., 2013). Self-monitoring allows students to collect data on their own behavior and easily monitor their progress, permitting immediate behavior recording during targeted activities (Ganz, 2008). This means that the teacher does not lose instructional time. Several studies have used behavior rating scales to implement self-monitoring procedures (e.g., Smith et al., 1992). Smith et al. (1992) showed that using behavior rating scales for self-recording effectively decreased disruptive behavior in high school students with behavioral or learning disabilities. The authors gave the participants a point card on which the students rated their behavior on a 6-point scale every 10 min. The students earned points for matching their ratings with the teacher’s rating of their behavior.
Different behavior rating scales have been used effectively as self-monitoring tools by students within the school system (Dalton, Martella, & Marchand-Martella, 1999). The Daily Behavior Report Card (DBRC) is a type of behavior rating scale typically used by teachers that combines direct behavior observation with perceptual measurement of behavior (Dalton et al., 1999). For self-monitoring purposes, DBRCs can be customized to individual students' needs and list target behaviors based on their behavioral or academic goals. Scoring of DBRCs is similar to other behavior rating scales in that students rate their behavior using a binary yes/no or a Likert-type scale; the scale can be designed with numbers or symbols (e.g., a smiley face; Vannest et al., 2010). Another rating scale used for self-monitoring is the Direct Behavior Rating (DBR). The DBR combines behavior rating scales with systematic direct observation: a scale is used to rate a directly observed target behavior, and ratings are recorded immediately at the end of an observation (Chafouleas, Riley-Tillman, & Christ, 2009). The DBR has also been used as a self-monitoring tool. Chafouleas and colleagues (2012) implemented self-monitoring and a group contingency with a group of eighth-grade general education students. The authors used a DBR consisting of an 11-point scale with three qualitative anchors (0 = Not at all, 5 = Some, and 10 = Totally). During the intervention, students used the DBR to rate their performance on three target behaviors (preparedness, engagement, and homework completion). The results showed that self-monitoring with the DBR and group contingency reduced problem behavior and increased academic engagement.
Recently, the Individualized Behavior Rating Scale Tool (IBRST; Iovannone et al., 2014; Narozanick & Blair, 2019) has emerged as a viable tool for monitoring student progress toward intervention goals and for helping students self-monitor their progress. The IBRST typically uses a 5-point Likert-type scale to record the perceived dimension of a target behavior to increase and a target behavior to decrease. The IBRST is considered a type of DBR because the ratings are completed close to the time of occurrence by the person who directly observes the behaviors. However, the IBRST differs from the DBR in that it does not use global rating scales for general behaviors. Instead, teachers can determine the measurement characteristics for recording, based on frequency, duration, percentage, latency, or intensity, to develop a 5-point scale for individually defined target behaviors (Iovannone et al., 2014). In addition, it is designed to collect data across multiple days; therefore, a line graph can be created by plotting the data points to examine trends or changes in behavior over consecutive days. Although the IBRST was originally developed for Prevent-Teach-Reinforce (PTR), a standardized function-based intervention model (Dunlap et al., 2019; Kulikowski et al., 2015; Sullivan et al., 2021), it has been used in other school-based behavioral interventions to facilitate teacher involvement in monitoring student progress (Narozanick & Blair, 2019). The IBRST has been shown to have adequate inter-rater reliability (.72-.83; Iovannone et al., 2014) and concurrent validity (.70; Barnes et al., 2019).
Iovannone et al. (2014) tested the inter-rater agreement scores of two independent observers using the IBRST. The authors recruited 19 students from a variety of schools to participate in the study; all of the participants had severe behavioral issues. The students’ teachers were trained to score the students’ behavior and to create an individualized IBRST for each student. The results indicated that the IBRST was an efficient tool for teachers to improve classroom behavior observation and data recording practices. However, given that research on the IBRST has concentrated solely on teachers as data collectors, research is needed to determine whether students can effectively use the IBRST as a self-monitoring tool. If the IBRST can be used effectively as a self-monitoring tool, it would reduce the demands placed on teachers when the IBRST is used for progress monitoring in the classroom.
Self-monitoring requires the student to measure their behavior periodically, and the schedule used in self-monitoring interventions depends on the self-monitoring tool. Daily report cards ask students to record their behavior once an observational period has ended (Chafouleas et al., 2006), whereas more traditional self-monitoring interventions ask students to record behavior multiple times during an instructional period (McDougall & Brady, 1998). Although self-monitoring has produced positive outcomes regardless of the schedule or frequency of self-recording during an instructional period, there is limited information on whether intervention effects can be maintained over time as self-monitoring procedures are faded. Therefore, the current study examined elementary school students' use of the IBRST as a self-monitoring tool to decrease disruptive behavior and increase on-task behavior within the classroom setting. The following questions were addressed: (a) to what extent will the use of the IBRST as a self-monitoring tool decrease student disruptive behavior and increase on-task behavior during class time; (b) to what extent will the improved student behaviors be maintained during fading phases and at a 1-week follow-up; and (c) will the teachers and students find the self-monitoring intervention to be acceptable and effective?
II. Method
Following university IRB, district IRB, and school-level approval, three students and their corresponding teachers (two teachers for the three students) were recruited to participate in this study. The students were recruited through teacher nomination based on: (a) teacher report of problem behavior occurring at least two times per day on 3 out of 5 school days and (b) enrollment in grades 2-4 (ages 7-10). Students were excluded from the study if they had a disability diagnosis or if they engaged in problem behavior that put themselves or others in danger (e.g., aggression toward others or self-injurious behavior). Selection criteria for the teachers included: (a) consenting to receive training and implement the intervention, (b) nominating at least one student in the class who engaged in problem behavior, and (c) not currently implementing a self-monitoring intervention in the classroom. Teachers were excluded from the study if they taught special education classes.
Although self-monitoring using the IBRST is often used as a Tier 2 intervention, in the current study it was implemented for all students in the classrooms at the teachers’ request. However, to evaluate it as a Tier 2 support, data were collected for three second-grade male students and their corresponding teachers. All three students were 7 years old and had no known disabilities but were considered at risk for developing severe problem behavior. All students scored below grade level in reading on a districtwide assessment. They were participating in SWPBIS without receiving any supplemental targeted behavior support.
Gary was a White student. During the beginning of the new school year and before the intervention, Gary had received five office discipline referrals (ODRs), one in-school suspension, and zero out-of-school suspensions. Gary engaged in the highest frequency of problem behavior during reading instruction. His primary problem behavior was talking out without the teacher's permission during instructional time.
Jorge was an African American student. Before the intervention, Jorge had received four ODRs, but no in-school or out-of-school suspensions. He also engaged in high levels of disruptive calling-out behavior during instructional times, particularly during writing instruction.
Jerry was an African American student. Before beginning the intervention, Jerry had never received any ODRs but had received two classroom referrals for disruptive behavior. His teacher was concerned with his high frequency of inappropriate manipulation of objects that disrupted his and other students’ work. Jerry engaged in the problem behavior across all instructional periods; however, his problem behavior occurred at a higher rate during reading.
Two second-grade classroom teachers participated in the study. Each classroom served 20 students, the majority of whom had Hispanic or African American backgrounds (40%-50% Hispanic, 25%-30% African American). Teacher 1 was Gary’s teacher; she was in her 30s and had been teaching for two years. Her classroom management strategies included posting the rules and expectations on the wall, teaching expectations, arranging the seating so that students with visual and hearing needs were closer to the front, and using a color level system. Teacher 2 was Jorge’s and Jerry’s teacher. She was in her 30s and had three years of teaching experience. She used the same classroom management strategies as Teacher 1 and provided specific positive verbal feedback to students for following expectations during instruction.
Once the consent forms were obtained, the first author interviewed the teachers (approximately 10 min) to identify the students’ possible target problem behaviors and problematic instructional periods. Two observations were conducted for each student during the 30-min potential target instructional times to identify their current levels of problem behavior. If the students met the inclusion criteria, the first author obtained verbal assent from the students to participate in the research. This study took place in two second-grade classrooms of a high-need public elementary school in which 85% of students were eligible for free or reduced-price lunch. The school was located in a suburban area of a large city and had been implementing SWPBIS for 5 years. Just before the study began, the school scored 86% on the Benchmarks of Quality (BoQ; Childs, Kincaid, & George, 2011), indicating above-average implementation fidelity of Tier 1 SWPBIS. The study was conducted during regular classroom periods, and the most problematic academic period for each target student (i.e., the class reading or writing instructional period) was targeted for intervention.
The dependent variables in this study were problem behavior and on-task behavior. Direct observation and the Individualized Behavior Rating Scale Tool (IBRST; Iovannone et al., 2014) were used to collect data on both behaviors. Direct observation data were collected during 30-min instructional periods, three to five times per week for each student. An electronic timer signaled the intervals for interval recording, and data were collected with paper and pencil. The participating teachers completed the IBRST at the end of each data collection session. To collect direct observational data, the first author and teacher jointly identified and defined each student's problem behavior and on-task behavior. Problem behavior included calling out without teacher permission (Gary and Jorge), standing up from one’s seat without teacher permission (Gary and Jorge), and inappropriately manipulating objects (Jerry; e.g., tapping a writing instrument, bouncing an eraser on the desk) and was measured using a frequency recording system. On-task behavior was defined as raising a hand with eyes directed toward the teacher while sitting in the assigned seat. The frequency of on-task behavior was recorded and then converted to a percentage by dividing the frequency of on-task behavior by the total number of opportunities and multiplying by 100.
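As a brief illustration of this conversion (the counts below are hypothetical and not data from the study), a session with 18 on-task responses out of 24 opportunities would be scored as follows:

\[
\text{percentage of on-task behavior} = \frac{\text{on-task responses}}{\text{total opportunities}} \times 100 = \frac{18}{24} \times 100 = 75\%
\]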
As supplementary data, the IBRST data on student behavior collected by the teachers were used. Although the IBRST is typically created with a 5-point rating system, the teachers in the current study used a 6-point rating scale for both problem behavior and on-task behavior to provide a more sensitive rating system. Each teacher collected data at the end of each session during the targeted instructional period. The tool consisted of three sections: (a) the student’s name, (b) definitions of the target behaviors and instructions on when and how to rate them, and (c) a rating section. The rating section included two behavior rating scales, one for problem behavior and the other for on-task behavior. The teachers developed the IBRST in collaboration with the first author, using frequency and percentage anchors that reflected a very bad day, a so-so day, and a very good day. For example, a very good day for Gary was characterized by 0-2 instances of problem behavior and 86%-100% on-task behavior. More specifically, ratings for problem behavior were established as 1 (0-1 instances), 2 (2-3 instances), 3 (4-5 instances), 4 (6-7 instances), 5 (8-9 instances), and 6 (10 or more), whereas ratings for on-task behavior were established as 1 (0-15%), 2 (16-30%), 3 (31-55%), 4 (56-70%), 5 (71-85%), and 6 (86-100%). The ratings of problem behavior and on-task behavior on the IBRST were used to report the data collected by the teachers.
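For readers who prefer a concrete illustration, the sketch below expresses Gary's 6-point anchors as a simple scoring function. It is an illustrative sketch only; the IBRST in the study was completed as a paper form, and the function names and example values are hypothetical.

```python
# Illustrative sketch: converting a session's observed counts to the 6-point
# IBRST anchors described above (Gary's scale). Not software from the study;
# the IBRST was a paper form. Function names and example values are hypothetical.

def problem_behavior_rating(instances: int) -> int:
    """Return the IBRST rating (1-6) for a count of problem behaviors."""
    upper_bounds = [1, 3, 5, 7, 9]  # ratings 1-5; 10 or more instances -> 6
    for rating, upper in enumerate(upper_bounds, start=1):
        if instances <= upper:
            return rating
    return 6

def on_task_rating(percent_on_task: float) -> int:
    """Return the IBRST rating (1-6) for a percentage of on-task behavior."""
    upper_bounds = [15, 30, 55, 70, 85]  # ratings 1-5; 86%-100% -> 6
    for rating, upper in enumerate(upper_bounds, start=1):
        if percent_on_task <= upper:
            return rating
    return 6

# A hypothetical session with 4 call-outs and 78% on-task behavior
# would be rated 3 for problem behavior and 5 for on-task behavior.
print(problem_behavior_rating(4), on_task_rating(78.0))
```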
A research assistant assessed treatment integrity during approximately 30% (33%-36%) of the intervention sessions, including fading phases, focusing on the teachers’ fidelity to intervention implementation and the students’ correct completion of the self-monitoring tool (IBRST) as planned. A 17-item implementation fidelity checklist with a yes/no scoring system was used to assess each teacher’s adherence to the self-monitoring intervention implementation steps (e.g., providing the IBRST sheet prior to the instructional period, reviewing expectations for the instructional period, stating that the instructional period had started, setting the timer, providing positive praise for correct use of the IBRST) during intervention sessions. The percentage of steps implemented was calculated based on the total number of steps. Fidelity averaged 90% (range = 80%-100%) across teachers.
Both the teachers and research staff reviewed the students’ completed IBRSTs at the end of each intervention session to check rating accuracy. Correspondence between the IBRSTs completed by the teachers and the students was examined by calculating the percentage of agreement between student and teacher ratings. Because the students rated their behaviors twice during the 30-min instructional periods, the mean of the two self-check ratings was compared with the teacher’s rating. The analyses indicated that the levels of correspondence between student and teacher ratings were high across students, with an overall mean agreement of 97.5%. Across all students, behaviors, and phases, the agreement was 100% except during the initial intervention phase for Gary, whose agreement averaged 95.5% (range = 62.5%-100%) across behaviors during this phase.
The study team collected two types of social validity data: (a) one from the teachers and (b) one from the students. The teachers’ acceptability of and satisfaction with the self-monitoring intervention using the IBRST were assessed with a modified version of the Intervention Rating Profile-15 (IRP-15; Martens et al., 1985). The IRP-15 included 15 questions answered on a 5-point Likert-type scale, addressing ease of implementation, the likelihood of recommendation and future use, and the perceived effect of the intervention on the student target behaviors. The researcher-created student questionnaire asked how much the students enjoyed using self-monitoring and whether they would like to use the self-monitoring procedure again; it consisted of five questions scored on a 5-point Likert-type scale.
Two independent observers collected data during a minimum of 30% of sessions in each phase for each student participant to assess interobserver agreement (IOA). The total count IOA method was used for both problem behavior and on-task behavior, and item-by-item IOA was used for teacher fidelity of intervention implementation. The mean IOA for problem behavior during baseline was 90.6% (87.5%-100%) for Gary, 100% for Jorge, and 100% for Jerry. The mean IOA for on-task behavior during intervention was 90% (80%-100%), 90% (80%-100%), and 100% for Gary, Jorge, and Jerry, respectively. The mean IOA for problem behavior during intervention was 91.6% (66%-100%) for Gary, 100% for Jorge, and 100% for Jerry. IOA for Gary's first intervention observation was 66%, which prompted additional training for the research assistant. The mean IOA on implementation fidelity was 100% for Teacher 1 and 90% (range = 80%-100%) for Teacher 2.
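The study does not report the exact computation, but total count IOA is conventionally calculated as the smaller of the two observers' total counts divided by the larger, multiplied by 100. With hypothetical counts of 7 and 8 instances recorded by the two observers:

\[
\text{IOA} = \frac{\text{smaller total count}}{\text{larger total count}} \times 100 = \frac{7}{8} \times 100 = 87.5\%
\]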
This study employed a multiple baseline design across participants, consisting of baseline, self-monitoring with the IBRST, fading, and follow-up phases. The teachers implemented self-monitoring with the IBRST class-wide; however, data were collected only on the target students.
Before baseline, the researcher (first author) provided the teachers with a 1-hr training on using the IBRST and implementing the self-monitoring procedures. Training included a brief background on self-monitoring and behavior rating scales. The researcher used behavioral skills training (BST; Miltenberger, 2012) procedures to train the teachers. During instruction, the researcher explained each score criterion and how to score the student’s behavior. The researcher then modeled scoring for the individual teacher by working through an example. During rehearsal, the researcher collected fidelity data; if the teacher scored less than 90%, training continued until the teacher scored at least 90% on two consecutive occasions. The feedback portion consisted of the researcher identifying the steps the teacher completed correctly and the steps that needed more training. Training also included instruction on how to score the target student using the score criteria created by the teacher and researcher.
Before baseline, the classroom rules were established and practiced in the classroom; these rules were required to align with the school-wide expectations. During baseline, the teachers conducted their classrooms as usual, teaching school-wide expectations and class rules and using the color level system. No self-monitoring with the IBRST was implemented during this condition. However, the teachers monitored the target students’ behaviors using the IBRST, and the observers collected direct observational data 3-5 days per week. After each targeted instructional period, the teachers were provided with the IBRST to complete for the target students.
Two days before implementation of self-monitoring, the entire class received an approximately 20-min training on using the IBRST. The teachers trained the students, using an information sheet provided by the researcher, on how to use the IBRST and self-monitor their own behavior during class activities. The steps for student training included: (a) instructing the students on how to circle the point on the scale corresponding to the amount of the target behavior that occurred during the instructional period, (b) modeling how to match the behavior scale with the point system, (c) having the students practice the skill by scoring the teacher as she role-played the target behaviors, and (d) providing positive feedback for correct use of the IBRST. During the modeling portion of the training, the teachers modeled examples of on-task behavior. The teachers instructed the students to return their IBRST sheets after the training. The researcher monitored the training provided by the teachers to help them reach 100% fidelity.
During this phase, the teachers conducted lessons as usual except that they implemented the self-monitoring procedure with the IBRST. Before the instructional period started, the researcher provided the teachers with the fidelity checklist and reminded them to review the implementation steps. The students were asked to self-record their own behavior twice during class: at the end of 15 min and at the end of 30 min. At the beginning of the instructional time, the teachers stated the expectations and rules for the classroom, which were posted on the classroom wall. The teachers provided each target student with his IBRST and the rest of the class with the class-wide IBRST.
After the expectations and rules were stated and the materials were passed out, the teachers asked whether the students had any questions before starting the first 15-min timer. When the instructional period started, the teachers said, "You will start self-monitoring your behavior now," and then started the timer, which provided an audible beep. The teachers then taught their lessons until the 15-min timer went off, indicating the end of the first self-monitoring interval. When most of the class was on-task during instruction, the teachers praised the class for being on-task. If a student engaged in problem behavior during this time, the teacher provided verbal reminders about the class expectations and rules.
When the timer signaled, the teachers instructed their classes to stop working on their activity, place the IBRST sheets in front of them, and rate themselves based on the last 15 min of class. After the students rated themselves, the teachers announced that the second 15-min period had begun and started the second 15-min timer. The teachers again continued their lessons until the second 15-min timer sounded and then implemented the procedures described above, giving the students 2 min to complete their ratings before collecting the IBRST sheets. While the teachers completed the IBRST sheets for the target students at the end of each 15-min interval, they ensured that the students had rated themselves. The teachers did not check the accuracy of the students’ ratings; they only checked whether the students had circled two numbers, one for on-task behavior and one for problem behavior. Although not explicitly trained to do so, the students were self-graphing their data, because graphing is a by-product of using the IBRST. For example, if a student scored himself a 3 for calling out during the first 15-min period and a 2 during the second 15-min period, the student would chart two data points on the sheet showing a decrease in behavior. With two ratings in each class period, the rating sheet created a data path over consecutive days, and the pattern of the data became clearer with each additional rating.
To begin fading, the participating students had to reach their goal of at least a 30% decrease in problem behavior and a 30% increase in on-task behavior from the baseline condition for three consecutive sessions. Except for Gary, the target students reached this criterion during the initial intervention phase and participated in the fading procedures before follow-up. Parent involvement was added to Gary's self-monitoring intervention during the later part of the intervention phase due to increases in his problem behavior. Gary’s parents were encouraged to review the self-monitoring checklist completed by Gary, to problem solve with him on how to improve his classroom behavior if his goals were not met, and to provide positive feedback to Gary if the goals were met.
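For illustration, assuming the 30% criterion was computed as a proportional change from each student's baseline mean (the study does not report the exact computation), Gary's baseline means of 49.3% on-task behavior and 7.3 instances of problem behavior would translate into the following goals:

\[
\text{on-task goal} \ge 49.3\% \times 1.30 \approx 64.1\%, \qquad \text{problem-behavior goal} \le 7.3 \times 0.70 \approx 5.1 \text{ instances}
\]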
When the students reached their goals of reducing problem behavior and increasing on-task behavior, the teachers started fading the use of self-monitoring. The fading process gradually decreased the frequency of the students' use of self-monitoring with the IBRST. The first phase of fading decreased self-monitoring from twice to once per instructional period, and the second phase decreased self-monitoring to two times per week. If a student engaged in problem behavior during the fading phase, they were to return to the last successful fading phase until they had again achieved their goal of a 30% decrease in problem behavior and a 30% increase in on-task behavior from baseline. However, none of the students were required to return to a previous fading phase. Follow-up data were collected one week after the intervention ended.
III. Results
Figure 1 displays direct observational data on the percentage of on-task behavior and the number of problem behavior incidents across three student participants. For all three participants, the baseline showed a lower level of on-task behavior and a higher level of problem behavior compared to the intervention levels. Implementing self-monitoring with the IBRST intervention immediately increased on-task behavior for all three participating students.
For Gary, baseline data averaged 49.3% (range = 42%-57%) for on-task behavior and 7.3 instances (range = 6-8) of problem behavior. Implementation of self-monitoring with the IBRST resulted in an increase in on-task behavior, which averaged 89.6% (range = 75%-100%). After the intervention was implemented, problem behavior immediately decreased to an average of 3.8 occurrences per session (range = 2-6). After showing an increasing trend during the initial intervention sessions, Gary's on-task behavior remained at 100% during the later part of the intervention. However, after dramatic decreases during the first three sessions, his problem behavior showed an increasing trend during the next three sessions. When parent involvement was added, his problem behavior decreased to zero or one occurrence during the later intervention sessions. During both phases 1 and 2 of fading, Gary’s behavior was stable, with 100% on-task behavior and no more than one instance of problem behavior in any session. On-task and problem behavior continued to be stable during the 1-week follow-up, with one data point at 100% on-task behavior and only one instance of problem behavior.
During baseline, Jorge’s on-task behavior averaged 56.1% (range = 50%-66%), and his problem behavior averaged 20.8 instances per session (range = 19-23). Once the self-monitoring with the IBRST phase began, an immediate change occurred, with on-task behavior averaging 100% and problem behavior averaging 0.5 instances. Jorge's data also remained stable during both phase 1 and phase 2 of fading, with 100% on-task behavior and 0 instances of problem behavior in all sessions. During the 1-week follow-up, his data continued to show no instances of problem behavior and 100% on-task behavior.
During baseline, Jerry performed at a low to moderate level, with on-task behavior averaging 25.3% (range = 25%-50%) and problem behavior averaging 10.3 instances (range = 8-10). There was an immediate level change once the self-monitoring phase began: on-task behavior increased to an average of 100%, and problem behavior decreased to an average of 0.5 instances. Finally, Jerry's on-task behavior remained high at 100%, and his problem behavior remained low at an average of less than one instance, across all fading and follow-up phases.
Figure 2 presents the student behavior rating scale data collected by the teachers using the IBRST. Across target students, behavioral patterns similar to those in the direct observational data were observed for both target behaviors across phases: the teachers consistently rated on-task behavior as occurring at low rates during baseline and high rates during intervention, and they rated problem behavior as occurring at high rates during baseline and low rates during intervention. Once the intervention was implemented, teacher-completed IBRST ratings of on-task behavior increased by 2-3 points, whereas ratings of problem behavior decreased by 3-4 points, on average, across students.
At the end of the study, the teachers and students were given social validity questionnaires. The teachers rated their satisfaction with the intervention as high, with an average rating of 4.5 (range = 4-5) across teachers. The results indicated that the teachers were willing to implement self-monitoring with the IBRST with other students. Both teachers found the intervention easy to implement and were willing to recommend it to other teachers. The results also indicated that the teachers perceived the intervention as increasing on-task behavior and decreasing problem behavior. All students also rated their satisfaction with self-monitoring as high, with a mean of 5, the highest possible rating. The responses indicated that the students found the IBRST easy to use, felt that it helped them stay on-task, and reported that self-monitoring helped them improve their behavior during class time.
IV. Discussion
The current study examined the use of the IBRST as a self-monitoring tool with three 2nd-grade students at risk for developing severe problem behavior at a high-need public elementary school. The study focused on examining whether self-monitoring using the IBRST would increase on-task behavior and decrease problem behavior and whether the students’ improved levels of behaviors would be maintained during fading and follow-up phases. The results indicated that self-monitoring with the IBRST dramatically increased on-task behavior and decreased problem behavior across all three students. The fading phase data showed that with less frequent use of self-monitoring with the IBRST, the students' levels of both behaviors were maintained at the levels shown during the initial intervention phase. Although data were limited, two students maintained their improved behaviors at a 1-week follow-up without the intervention. The correspondence between direct observational data and teacher-collected IBRST data was high across behaviors and phases for all three students. The social validity data indicated that teachers and students were highly satisfied with the procedures and outcomes of self-monitoring using the IBRST.
This study supports previous findings that self-monitoring effectively increases on-task behavior and reduces problem behavior during academic periods (Chafouleas et al., 2006; Cook et al., 2017; McDougall & Brady, 1998). In particular, the study adds to the current literature on using rating scales as a self-monitoring tool for at-risk students needing Tier 2 intervention supports within SWPBIS (Bruhn et al., 2017; Chafouleas et al., 2012). The results of the study suggest that the IBRST may be an appropriate rating scale tool that can effectively prompt students to stay on-task and improve academic performance when used with the self-monitoring procedures. Although the teachers did not provide feedback to students on the accuracy of their self-monitoring during the intervention, the correspondence checks between student ratings and teacher ratings indicated that the target students used the IBRST correctly while self-monitoring their own behavior.
A notable finding of the current study is the impact of parent involvement on the target behaviors for one of the participating students, Gary. Although Gary’s problem behavior decreased and his on-task behavior increased during the initial intervention phase, he did not reach his goal of a 30% decrease in problem behavior. Consequently, Gary’s teacher collaborated with his parents to make the intervention more successful by using the IBRST as a home-school note during the later sessions of the initial intervention phase. This result indicates that the IBRST can be used as a communication tool between school and home, similar to daily behavior report cards (Jurbergs, Palcic, & Kelley, 2010), which may promote parent-teacher collaboration in implementing interventions and enhance intervention outcomes. The results of the study also suggest that schools looking to implement self-monitoring with the IBRST should plan fading phases that systematically reduce the intensity of supports. As indicated by the data during the initial intervention and fading phases, the participating students did not require frequent self-monitoring of their behavior to increase and maintain their on-task behavior. During the intervention, the students self-recorded their behavior only twice per class period, once midway through the period (at 15 min) and once toward the end (at 30 min).
Although the self-monitoring with the IBRST intervention successfully increased on-task behavior and reduced problem behavior in the classroom for all target students, the results are limited to only three participating students. Due to the low number of participants, it is uncertain whether other students would have had similar results. In addition, parental involvement occurred with only one student. Another limitation is that two of the students had the same teacher. Even though the student who received the intervention second did not receive an IBRST self-monitoring sheet during baseline, he was exposed to the teacher’s prompts to his peers to complete the IBRST sheet. Being exposed to an academic period in which his peers were using the IBRST might have been a confounding factor that affected his behavior. However, considering that his baseline data were stable and his intervention data consistently showed increases in on-task behavior and decreases in problem behavior, the impact of exposure to the IBRST prior to intervention was likely minimal.
Due to the school schedule and state testing dates, only one 1-week follow-up data point was collected for two students. Given the limited follow-up data, it is unknown whether self-monitoring with the IBRST would have long-term maintenance effects. It is also important to mention that the color system the teachers used in their classrooms could be considered a modified form of self-monitoring; the teachers’ use of this system might have been a confounding factor that affected the students’ use of the IBRST and its outcomes.
A consideration for future research is to provide students with feedback on their ratings of their behavior using the IBRST. Although the current study compared the student-collected IBRST with the teacher-collected IBRST to examine the accuracy of the student ratings, no feedback was given to students on whether they were using the self-monitoring tool correctly. Feedback on students’ ratings would help ensure that students use the IBRST correctly and effectively and might enhance the outcomes of the self-monitoring intervention (Bruhn et al., 2015). Another area for future research is involving students in goal setting when using the IBRST. In the current study, the participating students self-graphed their data on the IBRST, which helped them self-evaluate changes in their target behaviors over time. Given that goal setting can enhance the outcomes of self-monitoring by having students take ownership of their behavior (McKenna, 2020), future researchers interested in replicating the study should consider evaluating a goal-setting component within self-monitoring with the IBRST. Future researchers should also consider examining the long-term maintenance effects of using the IBRST as a self-monitoring tool. Research that includes additional measures of academic performance, generalization assessments, and long-term follow-up would contribute to the current literature on the IBRST.
Although the study has limitations, it contributes greatly to the current literature on self-monitoring tools. This is the first study to evaluate the IBRST as a self-monitoring tool for students at risk for developing severe problem behavior who require a Tier 2 level of intervention within SWPBIS. As indicated by the social validity data, the study provides evidence that self-monitoring using the IBRST can be a feasible and effective intervention that teachers can implement in the general classroom setting. Although data are limited to one student, this study is also the first to use an in-class self-monitoring behavior rating procedure that involved parents. Although two students were successful without parent involvement, involving parents in addressing academic and behavioral issues through self-monitoring with the IBRST may enhance intervention outcomes and promote greater or faster behavior changes in students.