Article

Using the Individualized Behavior Rating Scale Tool (IBRST) as a Self-Monitoring Tool to Improve Classroom Behavior*

Dominique Martinez1, Kwang-Sun Cho Blair2,**, Marissa Novotny3
1Behavioral Innovations
2University of South Florida
3Dash ABA
**Corresponding Author: kwangsun@usf.edu

ⓒ Copyright 2022 Korean Association for Behavior Analysis. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Nov 22, 2022; Revised: Dec 18, 2022; Accepted: Dec 22, 2022

Published Online: Dec 31, 2022

ABSTRACT

Research on self-monitoring in the classroom setting has shown improvement in student behavior. Behavior rating scales can be used by teachers to monitor student progress or used by students as a self-monitoring tool. The current study examined the impact of using the Individualized Behavior Rating Scale Tool (IBRST) as a self-monitoring tool on classroom behavior in three 2nd-grade students at a high-need public elementary school implementing School-Wide Positive Behavioral Interventions and Supports (SWPBIS). A multiple baseline design across participants was used to evaluate the intervention outcomes for the three students at risk for developing severe problem behavior. Results indicated that self-monitoring with the IBRST successfully decreased disruptive behavior and increased on-task behavior in all three students during the targeted academic period. In addition, improved levels of behaviors were maintained during fading with all three students and at 1-week follow-up with two students. The results also indicated that both teachers and students had high levels of satisfaction with the procedures and outcomes of self-monitoring using the IBRST. The results of the study have meaningful implications on using self-monitoring tools such as the IBRST in the school setting where SWPBIS is implemented.

요약

교실 환경에서 자기 점검에 대한 연구는 학생 행동의 개선에 효과가 있음을 보여주고 있다. 자기점검 중재를 실행하는데 있어 행동 평가 척도는 교사가 학생의 진전 상황을 점검하거나 학생이 자기 점검을 하는 도구로 사용될 수 있다. 본 연구는 학교 차원의 긍정적 행동 중재 및 지원 프로그램을 실행하고 있는, 저소득층 학생들이 주로 교육을 받고 있는 미국의 한 공립 초등학교에서 실행 한 것이다. 2학년 학생 3명을 대상으로 자기점검 도구로 개별화행동평가척도도구(IBRST)를 사용한 것이 학생들의 교실행동에 어떤 변화를 주는지 중다기초선 설계를 사용하여 그 효과를 검증해 본 연구이다. 연구결과, IBRST를 사용한 자기점검은 수업시간 중 성공적으로 대상 학생 모두의 방해행동을 감소시키고 참여행동을 증가시킨 것으로 나타났다. 또한, 용암법을 적용하여 중재를 순차적으로 제거해가는 기간에 대상 학생 모두의 개선된 행동은 유지되었고 2명 학생의 경우 중재를 중단한 후, 1주후 후속 관찰에서 변화된 행동이 유지되고 있는 것을 보여주었다.교사와 학생 모두 IBRST를 사용한 자기점검 중재 절차와 결과에 대해 높은 수준의 만족도를 보여주었다.

Keywords: 개별화행동평정척도도구; IBRST; 자기점검; 2차중재
Keywords: individualized behavior rating scale tool; self-monitoring; Tier 2 intervention

I. Introduction

Teachers responding to student problem behavior during class often lose instructional time (Sugai & Horner, 1994; Walker, Ramsey, & Gresham, 2003). When more than one student engages in problem behavior in the classroom, it can create a challenging environment that impedes student learning and achievement (Greenwood, Horton, & Utley, 2002; Ruhl & Berlinghoff, 1992). As a result, most interventions for classroom behavior focus on reducing problem behavior, such as disruption and off-task behavior, as well as increasing academic engagement (Ennis, Blair, & George, 2015; Logan, Bakeman, & Keefe, 1997; Wilson & Lipset, 2007). One intervention for problem behavior in the classroom is teaching self-monitoring skills to students. Self-monitoring, a type of self-management intervention (Schloss & Smith, 1998), has been widely used in educational settings for improving a variety of academic and non-academic behaviors in students with and without disabilities (Dunlap et al., 1995; Fuchs et al., 1990; Ganz, 2008; Guzman, Goldberg, & Swanson, 2018; Mathes & Bender, 1997; Rafferty et al., 2011). Self-monitoring has been shown to be an effective intervention to increase academic engagement, decrease disruption, and enhance academic performance during academic periods (Bruhn, McDaniel, & Kreigh, 2015; Carr & Punzo, 1993; DiGangi, Maag, & Rutherford, 1991; Ha & Choi, 2021; Levendoski & Cartledge, 2000; Miller et al., 2015; Todd, Horner, & Sugai, 1999), and has been used as a Tier 2 intervention within School-Wide Positive Behavioral Interventions and Supports (SWPBIS; Bruhn et al., 2017).

Self-monitoring procedures utilize students as observers and data collectors (Amato-Zech, Hoff, & Doepke, 2006) and require audio, visual, or tactile cues to prompt students to rate or score their behavior (Axelrod et al., 2009; Brooks et al., 2003; Edwards et al., 1995; Holifield et al., 2010; Petscher & Bailey, 2006). The focus has been on teaching students to self-monitor on-task behavior or performance. For example, Rock (2005) trained students to self-record attention (engagement) and academic performance (productivity and accuracy) during independent math seatwork by using a self-monitoring work plan and a timing device. The students were instructed to record a checkmark on their self-monitoring sheet if their present behavior resembled their goal behavior for paying attention. They were asked to record the number of problems completed or pages read on the recording sheet for performance at the end of each 5-min interval. Academic engagement and productivity increased with a self-monitoring work plan for all students.

One of the benefits of using self-monitoring in the classroom is that it is easy for teachers to implement, placing few demands on teachers’ time and resources (Moore et al., 2013). Self-monitoring allows students to collect data on their own behavior and easily monitor their progress, allowing immediate behavior recording during targeted activities (Ganz, 2008). This means that the teacher does not lose instructional time. Several studies have used behavior rating scales to implement self-monitoring procedures (e.g., Smith et al., 1992). Smith et al. (1992) showed that using behavior rating scales in self-recording effectively decreased disruptive behavior in high school students with behavior or learning disabilities. The authors gave the participants a point card on which the students rated their behavior on a 6-point scale every 10 min. The students earned points for matching their ratings with the teacher’s rating of their behavior.

Different behavior rating scales have been used as effective self-monitoring tools by students within the school system (Dalton, Martella, & Marchand-Martella, 1999). The Daily Behavior Report Card (DBRC) is a type of behavior rating scale typically used by teachers, which incorporates direct behavior observation and perceptual measurement of behavior (Dalton et al., 1999). For self-monitoring purposes, DBRCs can be customized for individual students' needs and list target behaviors based on their behavioral or academic goals. Scoring of DBRCs is similar to that of a behavior rating scale in that students score themselves using a binary yes/no or a Likert-type scale system. The scale can be designed with numbers or symbols (e.g., smiley face; Vannest et al., 2010). Another rating scale used for self-monitoring is the Direct Behavior Rating (DBR). The DBR combines behavior rating scales with systematic direct observation and has also been used as a self-monitoring tool (Chafouleas, Riley-Tillman, & Christ, 2009): a scale is used to rate a directly observed target behavior, and the ratings are recorded immediately at the end of an observation. Chafouleas and colleagues (2012) implemented self-monitoring and a group contingency with a group of eighth-grade general education students. The authors used a DBR consisting of an 11-point scale with three qualitative anchors (0 = Not at all, 5 = Some, and 10 = Totally). During the intervention, students used the DBR to rate their performance on three target behaviors (preparedness, engagement, and homework completion). The results showed that self-monitoring with DBR and group contingency reduced problem behavior and increased academic engagement.

Recently, the Individualized Behavior Rating Scale Tool (IBRST; Iovannone et al., 2014; Narozanick & Blair, 2019) has emerged as a viable tool for monitoring student progress toward intervention goals or for helping students self-monitor their progress. The IBRST typically uses a 5-point Likert-type scale to record the perceived dimension of a target behavior to increase and a target behavior to decrease. The IBRST is considered a type of DBR because the ratings of the behaviors are completed close to the times of occurrence by the person who directly observes the behaviors. However, the IBRST has unique features compared to the DBR because it does not use global rating scales for general behaviors. Instead, teachers can determine the measurement characteristics for recording based on frequency, duration, percentage, latency, or intensity to develop a 5-point scale for individually defined target behaviors (Iovannone et al., 2014). In addition, it is designed to collect data across multiple days; therefore, a line graph can be created by plotting the data points to examine trends or changes in behavior over consecutive days. Although the IBRST was originally developed for Prevent-Teach-Reinforce (PTR), a standardized function-based intervention model (Dunlap et al., 2019; Kulikowski et al., 2015; Sullivan et al., 2021), it has been used in other school-based behavioral interventions to facilitate teacher involvement in monitoring student progress (Narozanick & Blair, 2019). The IBRST has been shown to have adequate inter-rater reliability (.72-.83; Iovannone et al., 2014) and concurrent validity (.70; Barnes et al., 2019).

Iovannone et al. (2014) tested the inter-rater agreement scores of two independent observers using the IBRST. The authors recruited 19 students from a variety of schools to participate in the study. All of the participants had severe behavioral issues. The students’ teachers were trained to score the students’ behavior and create an individualized IBRST for each student. The results indicated that the IBRST was an efficient tool for teachers to improve classroom behavior observation and data recording practices. However, given that research on the IBRST has concentrated solely on teachers as data collectors, research is needed to determine whether students can effectively use the IBRST as a self-monitoring tool. If the IBRST can effectively be used as a self-monitoring tool, it will decrease the demands placed on teachers while using the IBRST as a progress monitoring tool in the classroom.

Self-monitoring requires the student to measure their behavior periodically. The schedule used in self-monitoring interventions depends on the self-monitoring tool. Daily report cards ask students to record their behavior once an observational period has ended (Chafouleas et al., 2006). More traditional self-monitoring interventions ask students to record behavior multiple times during an instructional period (McDougall & Brady, 1998). Although self-monitoring has produced positive outcomes regardless of the schedule or frequency of recording during an instructional period, there is limited information on whether the intervention effects can be maintained over time as the self-monitoring procedures are faded. Therefore, the current study aimed to examine the use of the IBRST by elementary school students as a self-monitoring tool to decrease disruptive behavior and increase on-task behavior within the classroom setting. The following questions were addressed in the study: (a) to what extent will the use of the IBRST as a self-monitoring tool decrease student disruptive behavior and increase on-task behavior during class time; (b) to what extent will the improved student behaviors be maintained during fading phases and at a 1-week follow-up; and (c) will the teachers and students find the self-monitoring intervention to be acceptable and effective?

II. Method

1. Participants and Setting

Following university IRB, district IRB, and school-level approval, three students and their corresponding teachers (two teachers for the three students) were recruited to participate in this study. The students were recruited through teacher nomination based on: (a) teacher report of problem behavior at least two times per day for 3 out of the 5 school days and (b) enrollment in grades 2-4 (ages 7-10). Students were excluded from the study if they had a disability diagnosis or if they engaged in problem behavior that put themselves or others in danger (e.g., aggression towards others or self-injurious behavior). Selection criteria for the teachers included: (a) consenting to receive training and implement the intervention, (b) nominating at least one student in the class who engaged in problem behavior, and (c) not currently implementing a self-monitoring intervention in the classroom. Teachers were to be excluded from the study if they taught special education classes.

Although self-monitoring using the IBRST is often used as a Tier 2 intervention, it was implemented for all students in the classrooms in the current study at the teachers’ request. However, to evaluate it as more of a Tier 2 support, three second-grade male students and their corresponding teachers participated in this study. All three students were 7 years old and had no known disabilities but were considered at risk for developing severe problem behavior. All students scored below grade level in reading on a districtwide assessment. They were participating in SWPBIS without receiving any supplemental targeted behavior support.

Gary was a White student. During the beginning of the new school year and before the intervention, Gary had received five office discipline referrals (ODRs), one in-school suspension, and zero out-of-school suspensions. Gary engaged in the highest frequency of problem behavior during reading instruction. His primary problem behavior was talking out without the teacher's permission during instructional time.

Jorge was an African American student. Before the intervention, Jorge had received four ODRs, but no in-school or out-of-school suspensions. He also engaged in high levels of disruptive, calling out behavior during instructional times, particularly during writing instructional periods.

Jerry was an African American student. Before beginning the intervention, Jerry had never received any ODRs but had received two classroom referrals for disruptive behavior. His teacher was concerned with his high frequency of inappropriate manipulation of objects that disrupted his and other students’ work. Jerry engaged in the problem behavior across all instructional periods; however, his problem behavior occurred at a higher rate during reading.

Two 2nd-grade classroom teachers participated in the study. Each classroom served 20 students, with the majority of the students having Hispanic or African American backgrounds (40%-50% Hispanic, 25%-30% African American). Teacher 1 was Gary’s teacher; she was in her 30s and had been working as a teacher for two years. Her classroom management strategies included: posting the rules and expectations on the wall, teaching expectations, arranging the seating so students with visual and hearing needs were closer to the front, and using a color level system. Teacher 2 was Jorge’s and Jerry’s teacher. She was in her 30s and had three years of teaching experience. She had the same classroom management strategies as those of Teacher 1. She provided specific positive verbal feedback to students for following expectations during instruction.

Once the consent forms were obtained, the first author interviewed the teachers (approximately 10 min) to identify the students’ possible target problem behavior and problematic instructional periods. Two observations were conducted for each student during the 30-min potential target instructional times to identify their current levels of problem behavior. If the students met the inclusion criteria, the first author obtained verbal assent from the students to participate in the research. This study took place in two 2nd-grade classrooms of a high-need public elementary school with 85% of students eligible for free or reduced lunch. The school was located in a suburban area of a large city and had been implementing SWPBIS for 5 years. Just before the study began, the school scored an 86% on the Benchmarks of Quality (BoQ; Childs, Kincaid, & George, 2011), indicating above average implementation fidelity of Tier 1 SWPBIS. The study was conducted during regular classroom periods. The most problematic academic period for each target student (i.e., class reading and writing instructional period) was targeted for intervention.

2. Measurement

The dependent variables in this study were problem behavior and on-task behavior. Direct observation and the Individualized Behavior Rating Scale Tool (IBRST; Iovannone et al., 2014) were used to collect data on problem behavior and on-task behavior. Direct observation data were collected during 30-min instructional periods, three to five times per week for each student. An electronic timer identified the time intervals for interval recording, and paper and pencil were used to collect data. The participating teachers collected the IBRST data at the end of each data collection session. To collect direct observational data, the first author and teacher jointly identified and defined each student's problem behavior and on-task behavior. Problem behavior included calling out without teacher permission (Gary and Jorge), standing up from one’s seat without teacher permission (Gary and Jorge), and inappropriately manipulating objects (Jerry; e.g., tapping writing instrument, bouncing eraser on the desk) and was measured using a frequency recording system. On-task behavior focused on hand-raising with eyes directed toward the teacher and sitting in one’s assigned seat. The frequency of on-task behavior was recorded and then converted to a percentage of on-task behavior by dividing the frequency of on-task behavior by the total number of opportunities and multiplying by 100.
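The conversion from frequency to percentage described above is a simple proportion; as an illustrative sketch (function and variable names are hypothetical, not from the study):

```python
def percent_on_task(on_task_count, total_opportunities):
    """Convert a frequency count of on-task behavior to a percentage:
    frequency divided by the total number of opportunities, times 100."""
    if total_opportunities <= 0:
        raise ValueError("total_opportunities must be positive")
    return on_task_count / total_opportunities * 100

# e.g., 17 on-task responses across 20 opportunities -> 85.0%
```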

As supplementary data, the IBRST data on student behavior collected by the teachers were used. Although the IBRST is typically created using a 5-point rating system, the teachers in the current study used a 6-point rating scale for both problem behavior and on-task behavior to have a more sensitive rating system. Each teacher collected data on the targeted instructional period at the end of each session. The tool consisted of three sections: (a) student name, (b) definitions of the target behaviors and instructions on when and how to rate behavior, and (c) a rating section. The rating section included two behavior rating scales, one for problem behavior and the other for on-task behavior. The teachers developed the IBRST in collaboration with the first author based on frequency and percentage, using a very bad day, a so-so day, and a very good day to set the anchors. For example, a very good day for Gary was characterized by 0-2 instances of problem behavior and 86%-100% on-task behavior. More specifically, ratings for problem behavior were established as 1 (0-1 instances), 2 (2-3 instances), 3 (4-5 instances), 4 (6-7 instances), 5 (8-9 instances), and 6 (10 or more), whereas ratings for on-task behavior were established as 1 (0-15%), 2 (16-30%), 3 (31-55%), 4 (56-70%), 5 (71-85%), and 6 (86-100%). The ratings of each problem behavior and on-task behavior on the IBRST were used to report data collected by the teachers.
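The 6-point anchors above define a straightforward lookup from an observed count (problem behavior) or percentage (on-task behavior) to a rating. A minimal sketch using the anchors reported in this section (names and structure are illustrative; integer counts and percentages are assumed):

```python
# 6-point IBRST anchors as reported above: (rating, low, high).
# Values beyond the last anchor receive the top rating of 6.
PROBLEM_ANCHORS = [(1, 0, 1), (2, 2, 3), (3, 4, 5), (4, 6, 7), (5, 8, 9)]
ON_TASK_ANCHORS = [(1, 0, 15), (2, 16, 30), (3, 31, 55), (4, 56, 70), (5, 71, 85)]

def rate(value, anchors, top_rating=6):
    """Return the rating whose [low, high] range contains value;
    values above the last anchor map to the top rating."""
    for rating, low, high in anchors:
        if low <= value <= high:
            return rating
    return top_rating

# e.g., 4 instances of problem behavior -> rating 3; 90% on-task -> rating 6
```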

3. Treatment integrity

Treatment integrity assessment focused on teachers’ fidelity to intervention implementation and students’ correct completion of the self-monitoring tool (IBRST) as planned. A research assistant assessed treatment integrity during approximately 30% (33%-36%) of the intervention sessions, including fading phases. A 17-item implementation fidelity checklist with a yes/no scoring system was used to assess the teacher’s adherence to the self-monitoring intervention implementation steps (e.g., providing the IBRST sheet prior to the instructional period, reviewing expectations for the instructional period, announcing that the instructional period has started, setting the timer, providing positive praise for the correct use of the IBRST) during intervention sessions. The percentage of steps implemented was calculated based on the total number of steps. Fidelity averaged 90% (range = 80%-100%) across teachers.

Both the teachers and research staff reviewed the students’ completed IBRSTs at the end of each intervention session to check their rating accuracy. Correspondence between the IBRSTs completed by the teachers and students was examined by calculating the percentage of agreement between student ratings and teacher ratings. Because the students rated their behaviors twice during the 30-min instructional periods, their mean ratings across the two self-checks were used for comparison with teacher ratings. The analyses of the IBRSTs indicated that the levels of correspondence between student ratings and teacher ratings were high across students. The overall agreement between their ratings averaged 97.5%. Across all students, behaviors, and phases, the agreement was 100% except for the initial intervention phase for Gary, whose agreement averaged 95.5% (range = 62.5%-100%) across behaviors during this phase.

4. Social validity

The study team collected two types of social validity data: (a) one from teachers and (b) one from students. The teachers’ acceptability of and satisfaction with the self-monitoring intervention with the IBRST were assessed using a modified version of the Intervention Rating Profile-15 rating scale (IRP-15; Martens et al., 1985). The IRP-15 included 15 questions that were answered using a 5-point Likert-type scale. This tool asked questions about the ease of implementation, the likelihood of recommendation and use, and the perceived effect of the intervention on the student target behaviors. The researcher-created student questionnaire asked the students how much they enjoyed using self-monitoring and whether they would like to use the self-monitoring procedure again. It consisted of five questions scored on a 5-point Likert-type scale.

5. Interobserver agreement (IOA)

Two independent observers collected data for a minimum of 30% of sessions in each phase for each student participant to assess IOA. The total count IOA method was used for both problem behavior and on-task behavior. Item-by-item IOA was used for teacher fidelity of intervention implementation. The mean IOA for problem behavior during baseline was 90.6% (87.5%-100%) for Gary, 100% for Jorge, and 100% for Jerry. The mean IOA for on-task behavior during intervention was 90% (80%-100%), 90% (80%-100%), and 100% for Gary, Jorge, and Jerry, respectively. The mean IOA for problem behavior during intervention was 91.6% (66%-100%) for Gary, 100% for Jorge, and 100% for Jerry. IOA for the first intervention observation for Gary was 66%, which prompted additional training for the research assistant. The mean IOA on implementation fidelity averaged 100% for Teacher 1 and 90% (range = 80%-100%) for Teacher 2.
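The two agreement indices described above can be computed directly. A brief sketch of both calculations, assuming simple counts and paired rating lists (names are illustrative, not from the study):

```python
def total_count_ioa(count_a, count_b):
    """Total count IOA: the smaller observer total divided by the
    larger observer total, multiplied by 100."""
    if count_a == count_b:  # includes the 0/0 case
        return 100.0
    return min(count_a, count_b) / max(count_a, count_b) * 100

def item_by_item_ioa(items_a, items_b):
    """Item-by-item IOA: percentage of paired items scored identically."""
    agreements = sum(a == b for a, b in zip(items_a, items_b))
    return agreements / len(items_a) * 100

# e.g., observers counting 6 vs. 8 problem behaviors -> 75.0% total count IOA
```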

6. Experimental Design and Procedures

This study employed a multiple baseline design across participants, which consisted of baseline, self-monitoring with the IBRST, fading, and follow-up phases. The teachers implemented self-monitoring with the IBRST class-wide; however, data were collected only on the target students.

1) Teacher training

Before baseline, the researcher (first author) provided teachers with a 1-hr training on using the IBRST and implementing self-monitoring procedures. Training included a brief background on self-monitoring and behavior rating scales. The researcher used behavioral skills training (BST; Miltenberger, 2012) procedures to train the teachers. During instruction, the researcher explained each score criterion and how to score the student’s behavior. The researcher then modeled scoring for the individual teacher by giving the teacher an example. During rehearsal, the researcher collected fidelity data. If the teacher scored less than 90%, training continued until the teacher reached the 90% criterion; to complete the training, the teacher had to score at least 90% on two consecutive rehearsals. The feedback portion consisted of the researcher identifying the steps the teacher completed correctly and the steps that needed more training. Training also included instruction on how to score the target student using the score criteria created by the teacher and researcher.

2) Baseline

Before baseline, the classroom rules, which were required to align with the school-wide expectations, were established and practiced in the classroom. During baseline, the teachers conducted their classrooms as usual, teaching school-wide expectations and class rules using the color level system. No self-monitoring with the IBRST was implemented during this condition. However, the teachers monitored the target students’ behaviors using the IBRST, and the observers collected direct observational data 3-5 days per week. After each targeted instructional period, the teachers were provided with the IBRST to complete for the target students.

3) Student training

Two days before implementation of self-monitoring, the entire class received an approximately 20-min training on using the IBRST. Teachers trained students, using an information sheet provided by the researcher, on how to use the IBRST and self-monitor their own behavior during class activities. The steps for student training included: (a) instructing the students on how to circle the corresponding point on the scale according to the amount of the target behavior that occurred during the instructional period, (b) modeling how to match the behavior scale to the point system, (c) practicing the skill by asking the students to score the teacher role-playing the target behaviors, and (d) providing positive feedback for correct use of the IBRST. During the modeling portion of the training, the teachers modeled examples of on-task behavior. The teachers instructed the students to return their IBRSTs after training. The researcher monitored the training provided by the teachers to help them reach 100% fidelity.

4) Intervention

During this phase, the teachers conducted lessons as usual, except for implementing the self-monitoring procedure with the IBRST. Before the instructional period started, the researcher provided the teachers with the fidelity checklist and reminded them to review the implementation steps. The students were asked to self-record their own behavior twice during class: at the end of 15 min and at the end of 30 min. At the beginning of the instructional time, the teachers stated the expectations and rules for the classroom, which were posted on the wall. The teachers provided the target students with their individualized IBRSTs and the rest of the class with the class-wide IBRST.

After the expectations and rules were stated and the materials were passed out, the teachers asked whether the students had any questions before starting the first 15-min timer. When the instructional period started, the teachers said, "You will start self-monitoring your behavior now," and then began the timer, which provided an audible beep. The teachers then taught their lessons until the 15-min timer went off, indicating the end of the first self-monitoring interval. When most of the class was on-task during instruction, the teachers praised the class for being on-task. If a student engaged in problem behavior during this time, the teacher provided verbal reminders about the class expectations and rules.

When the timer signaled, the teachers instructed their classes to stop working on their activity, place the IBRST sheets in front of them, and rate themselves based on the last 15 min of class. After students rated themselves, the teachers announced that the second 15-min period had begun and started the second 15-min timer. Again, the teachers continued their lessons until the second 15-min timer sounded, implementing the procedures described above, and gave the students 2 min to complete their ratings before collecting the IBRST sheets. While the teachers completed the IBRST sheets for target students at the end of each 15-min interval, they ensured that their students had rated themselves. The teachers did not check the accuracy of the students’ ratings; they only checked whether the students had circled two numbers: one for on-task behavior and one for problem behavior. Although not explicitly trained to do so, the students were in effect self-graphing their data, as graphing is a byproduct of using the IBRST. For example, if a student scored himself a 3 for calling out during the first 15-min period and then a 2 during the second 15-min period, the student would inadvertently chart two data points on the sheet showing a decrease in behavior. With two ratings in each class period, the rating sheet created a data path over consecutive days, and the pattern of the data became clearer with each additional rating.

To fade the intervention, the participating students had to reach their goal of at least a 30% decrease in problem behavior and a 30% increase in on-task behavior from the baseline condition for three consecutive sessions. Except for Gary, the target students reached their criterion during the intervention and participated in fading procedures before follow-up. Parent involvement was added to Gary's self-monitoring intervention during the later phase of intervention due to increases in his problem behavior. Gary’s parents were encouraged to review the self-monitoring checklist completed by Gary, problem solve with him on how to improve his classroom behavior if his goals were not met, and provide positive feedback to Gary if the goals were met.

5) Fading and follow-up

When the students reached their goals of reducing problem behavior and increasing on-task behavior, the teachers started fading the use of self-monitoring. The fading process gradually decreased the frequency of using the self-monitoring with the IBRST by the students. The first phase of fading involved decreasing self-monitoring from twice to once per instructional period, and the second phase decreased self-monitoring to two times per week. If a student engaged in problem behavior during the fading phase, they were to return to the last successful phase of fading until they had achieved their goal of a 30% decrease in problem behavior and a 30% increase in on-task behavior from baseline data. However, none of the students were required to return to the last fading phase. Follow-up data were collected one week after the intervention ended.

III. Results

1. Direct Observational Data

Figure 1 displays direct observational data on the percentage of on-task behavior and the number of problem behavior incidents across three student participants. For all three participants, the baseline showed a lower level of on-task behavior and a higher level of problem behavior compared to the intervention levels. Implementing self-monitoring with the IBRST intervention immediately increased on-task behavior for all three participating students.

Figure 1. Frequency of problem behavior and percentage of on-task behavior across students and phases.

For Gary, baseline data averaged 49.3% (range = 42%-57%) for on-task behavior and 7.3 (range = 6-8) instances of problem behavior. Implementation of self-monitoring with the IBRST resulted in an increase in on-task behavior, which averaged 89.6% (range = 75%-100%). After the intervention was implemented, problem behavior immediately decreased to an average of 3.8 occurrences per session (range = 2-6). After showing an increasing trend during initial intervention sessions, Gary's on-task behavior remained at 100% during the later phase of the intervention. However, after dramatic decreases during the first three sessions, his problem behavior showed an increasing trend during the next three sessions. When parent involvement was added, his problem behavior decreased to zero or one occurrence during the later intervention sessions. During both phases 1 and 2 of fading, Gary’s behavior was stable, with 100% on-task behavior and no more than one instance of problem behavior in all sessions. On-task and problem behavior continued to be stable during the 1-week follow-up, with one data point at 100% on-task behavior and only one instance of problem behavior.

During baseline, Jorge's on-task behavior occurred at 56.1% (range = 50%-66%), and problem behavior occurred an average of 20.8 instances per session (range = 19-23). Once self-monitoring with the IBRST began, an immediate change occurred: on-task behavior averaged 100%, and problem behavior averaged 0.5 instances per session. Jorge's data also remained stable during both phase 1 and phase 2 of fading, with 100% on-task behavior and 0 instances of problem behavior in all sessions. During the 1-week follow-up, his data continued to show no instances of problem behavior and 100% on-task behavior.

During baseline, Jerry performed at a low to moderate level of on-task behavior, averaging 25.3% (range = 25%-50%), and engaged in problem behavior an average of 10.3 instances per session (range = 8-10). There was an immediate level change once the self-monitoring phase began: on-task behavior increased to an average of 100%, and problem behavior decreased to an average of 0.5 instances. Finally, Jerry's on-task behavior remained high at 100%, and his problem behavior remained low at an average of less than 1 instance across all fading and follow-up phases.

2. IBRST Data

Figure 2 presents student behavior rating scale data collected by teachers using the IBRST. Across target students, behavioral patterns similar to those of the direct observational data were observed in both target behaviors across phases; the teachers consistently rated on-task behavior as occurring at low rates during baseline and high rates during intervention. They also rated problem behavior as occurring at high rates during baseline and low rates during intervention. Once the intervention was implemented, teacher-completed IBRST ratings of on-task behavior increased by 2-3 points, whereas ratings of problem behavior decreased by 3-4 points, on average, across students.

Figure 2. Teacher-completed IBRST ratings of problem behavior and on-task behavior across students and phases.
3. Social Validity

At the end of the study, the teachers and students completed social validity questionnaires. The teachers rated their satisfaction with the intervention as high; the average rating was 4.5 (range = 4-5) across teachers. The results indicated that the teachers were willing to implement self-monitoring with the IBRST with other students. Both teachers found the intervention easy to implement and were willing to recommend it to other teachers. The teachers also reported that the intervention resulted in an increase in on-task behavior and a decrease in problem behavior. All students likewise rated their satisfaction with self-monitoring as high, with a mean of 5, the highest rating. Their responses indicated that the IBRST helped them stay on-task, was easy to use, and helped them improve their behavior during class time.

IV. Discussion

The current study examined the use of the IBRST as a self-monitoring tool with three 2nd-grade students at risk for developing severe problem behavior at a high-need public elementary school. The study focused on examining whether self-monitoring using the IBRST would increase on-task behavior and decrease problem behavior and whether the students’ improved levels of behaviors would be maintained during fading and follow-up phases. The results indicated that self-monitoring with the IBRST dramatically increased on-task behavior and decreased problem behavior across all three students. The fading phase data showed that with less frequent use of self-monitoring with the IBRST, the students' levels of both behaviors were maintained at the levels shown during the initial intervention phase. Although data were limited, two students maintained their improved behaviors at a 1-week follow-up without the intervention. The correspondence between direct observational data and teacher-collected IBRST data was high across behaviors and phases for all three students. The social validity data indicated that teachers and students were highly satisfied with the procedures and outcomes of self-monitoring using the IBRST.

This study supports previous findings that self-monitoring effectively increases on-task behavior and reduces problem behavior during academic periods (Chafouleas et al., 2006; Cook et al., 2017; McDougall & Brady, 1998). In particular, the study adds to the current literature on using rating scales as a self-monitoring tool for at-risk students needing Tier 2 intervention supports within SWPBIS (Bruhn et al., 2017; Chafouleas et al., 2012). The results of the study suggest that the IBRST may be an appropriate rating scale tool that can effectively prompt students to stay on-task and improve academic performance when used with the self-monitoring procedures. Although the teachers did not provide feedback to students on the accuracy of their self-monitoring during the intervention, the correspondence checks between student ratings and teacher ratings indicated that the target students used the IBRST correctly while self-monitoring their own behavior.

A notable finding of the current study is the impact of parent involvement on target behaviors for one of the participating students, Gary. Although Gary's problem behavior decreased and his on-task behavior increased during the initial intervention phase, he did not reach his goal of a 30% decrease. Therefore, Gary's teacher collaborated with his parents to make the intervention more successful by using the IBRST as a home-school note during the later sessions of the initial intervention phase. This result indicates that the IBRST can be used as a communication tool between school and home, similar to daily behavior report cards (Jurbergs, Palcic, & Kelley, 2010), which can promote parent-teacher collaboration in implementing interventions and enhance their outcomes. The results of the study also suggest that schools looking to implement self-monitoring with the IBRST should plan fading phases that systematically reduce the intensity of supports. As indicated by the data during the initial intervention and fading phases, the participating students did not require frequent self-monitoring to increase and maintain their on-task behavior. During the intervention, the students self-recorded their behavior only twice per class period, once midway through (at 15 min) and once toward the end (at 30 min).

Although self-monitoring with the IBRST successfully increased on-task behavior and reduced problem behavior in the classroom for all target students, the results are limited to the three participating students. Given the small number of participants, it is uncertain whether other students would have had similar results. In addition, parental involvement occurred with only one student. Another limitation is that two of the students had the same teacher. Although the student who received the intervention second did not receive an IBRST self-monitoring sheet during baseline, he was exposed to the teacher's prompts to his peers to complete the IBRST sheet. Being exposed to the academic period in which his peers were using the IBRST might have been a confounding factor affecting his behavior. However, given that his baseline data were stable and his intervention data consistently showed increases in on-task behavior and decreases in problem behavior, the impact of exposure to the IBRST prior to intervention was likely minimal.

Due to the school schedule and state testing dates, only one 1-week follow-up data point was collected for two students. Given the limited follow-up data, it is unknown whether self-monitoring with the IBRST would have long-term maintenance effects. It is also important to note that the color system the teachers used in their classrooms could be considered a modified form of self-monitoring; their use of this system might have been a confounding factor affecting the students' use of the IBRST and its outcomes.

A consideration for future research is to provide students with feedback on their ratings of their behavior using the IBRST. Although the current study compared student-collected IBRST data with teacher-collected IBRST data to examine the accuracy of the student ratings, no feedback was given to students on whether they were using the self-monitoring tool correctly. Feedback on students' ratings would ensure that students use the IBRST correctly and effectively and might enhance the outcomes of the self-monitoring intervention (Bruhn et al., 2015). Another area for future research is involving students in goal setting when using the IBRST. In the current study, the participating students self-graphed their data on the IBRST, which helped them self-evaluate changes in their target behaviors over time. Given that goal setting can enhance the outcomes of self-monitoring by having students take ownership of their behavior (McKenna, 2020), future researchers interested in replicating the study should consider evaluating a goal-setting component within self-monitoring with the IBRST. Future researchers should also consider examining the long-term maintenance effects of using the IBRST as a self-monitoring tool. Future research using additional measures of academic performance, generalization assessments, and long-term follow-up would contribute to the current literature on the IBRST.

Although the study has limitations, it contributes greatly to the current literature on self-monitoring tools. This is the first study to evaluate the IBRST as a self-monitoring tool for students at risk for developing severe problem behavior who required a Tier 2 intervention within SWPBIS. As indicated by the social validity data, the study provides evidence that self-monitoring using the IBRST can be a feasible and effective intervention that teachers can implement in the general classroom setting. Although data are limited to one student, this study is also the first to use an in-class self-monitoring behavior rating procedure that involved parents. Although two students were successful without parent involvement, involving parents in addressing academic and behavioral issues through self-monitoring with the IBRST may enhance intervention outcomes and promote more significant or faster behavior changes in students.

Notes

* This article served as the first author’s master’s thesis in the Applied Behavior Analysis Program of the Department of Child and Family Studies at the University of South Florida

References

1.

Amato-Zech, N. A., Hoff, K. E., & Doepke, K. J. (2006). Increasing on-task behavior in the classroom: Extension of self-monitoring strategies. Psychology in the Schools, 43(2), 211-221.

2.

Axelrod, M. I., Zhe, E. J., Haugen, K. A., & Klein, J. A. (2009). Self-management of on-task homework behavior: A promising strategy for adolescents with attention and behavior problems. School Psychology Review, 38(3), 325-333.

3.

Barnes, S. A., Iovannone, R., Blair, K. C., Crosland, K., & George, H. P. (2019). An evaluation of the prevent-teach-reinforce model within a multi-tiered intervention system. Preventing School Failure: Alternative Education for Children and Youth, 64, 128-141.

4.

Brooks, A., Todd, A. W., Tofflemoyer, S., & Horner, R. H. (2003). Use of functional assessment and a self-management system to increase academic engagement and work completion. Journal of Positive Behavior Interventions, 5(3), 144–152.

5.

Bruhn, A., McDaniel, S., & Kreigh, C. (2015). Self-monitoring interventions for students with behavior problems: A systematic review of current research. Behavioral Disorders, 40(2), 102-121.

6.

Bruhn, A. L., Woods-Groves, S., Fernando, J., Choi, T., & Troughton, L. (2017). Evaluating technology-based self-monitoring as a tier 2 intervention across middle school settings. Behavioral Disorders, 42(3), 119-131.

7.

Carr, S. C., & Punzo, R. P. (1993). The effects of self-monitoring of academic accuracy and productivity on the performance of students with behavioral disorders. Behavioral Disorders, 18(4), 241-250.

8.

Chafouleas, S. M., Riley-Tillman, T. C., & Christ, T. J. (2009). Direct Behavior Rating (DBR): An emerging method for assessing social behavior within a tiered intervention system. Assessment for Effective Intervention, 34(4), 195–200.

9.

Chafouleas, S. M., Riley-Tillman, T. C., & Sassu, K. A. (2006). Acceptability and reported use of daily behavior report cards among teachers. Journal of Positive Behavior Interventions, 8(3), 174-182.

10.

Chafouleas, S. M., Sanetti, L. M. H., Jaffery, R., & Fallon, L. M. (2012). An evaluation of a class-wide intervention package involving self-management and a group contingency on classroom behavior of middle school students. Journal of Behavioral Education, 21(1), 34-57.

11.

Childs, K. E., Kincaid, D., & George, H. P. (2011). The revised school-wide PBS benchmarks of quality (BoQ). OSEP Technical Assistance Center on Positive Behavioral Interventions and Supports.

12.

Cook, S. C., Rao, K., & Collins, L. (2017). Self-monitoring interventions for students with EBD: Applying UDL to a research-based practice. Beyond Behavior, 26(1), 19-27.

13.

Dalton, T., Martella, R. C., & Marchand-Martella, N. E. (1999). The effects of a self-management program in reducing off-task behavior. Journal of Behavioral Education, 9(3-4), 157-176.

14.

DiGangi, S. A., Maag, J. W., & Rutherford, R. B. (1991). Self-graphing of on-task behavior: Enhancing the reactive effects of self-monitoring on on-task behavior and academic performance. Learning Disability Quarterly, 14(3), 221–230.

15.

Dunlap, G., Clarke, S., Jackson, M., Wright, S., Ramos, E., & Brinson, S. (1995). Self-monitoring of classroom behaviors with students exhibiting emotional and behavioral challenges. School Psychology Quarterly, 10(2), 165–177.

16.

Edwards, L., Salant, V., Howard, V. F., Brougher, J., & McLaughlin, T. F. (1995). Effectiveness of self-management on attentional behavior and reading comprehension for children with attention deficit disorder. Child & Family Behavior Therapy, 17(2), 1–17.

17.

Ennis, C. R., Blair, K. C., & George, H. P. (2015). An evaluation of group contingency interventions and the role of teacher preference. Journal of Positive Behavior Interventions, 18(1), 17-28.

18.

Fuchs, D., Fuchs, L. S., Bahr, M. W., Fernstrom, P., & Stecker, P. M. (1990). Prereferral intervention: A prescriptive approach. Exceptional Children, 56(6), 493-513.

19.

Ganz, J. B. (2008). Self-monitoring across age and ability levels: Teaching students to implement their own positive behavioral interventions. Preventing School Failure: Alternative Education for Children and Youth, 53(1), 39–48.

20.

Greenwood, C. R., Horton, B. T., & Utley, C. A. (2002). Academic engagement: Current perspectives on research and practice. School Psychology Review, 31(3), 328–349.

21.

Guzman, G., Goldberg, T. S., & Swanson, H. L. (2018). A meta-analysis of self-monitoring on reading performance of K–12 students. School Psychology Quarterly, 33(1), 160-168.

22.

Ha, T., & Choi, J. (2021). The effects of self-monitoring checklist containing self-management intervention on work productivity for student with intellectual disability who placed in special schools’ majoring courses. Journal of Behavior Analysis and Support, 8(2), 129-149.

23.

Holifield, C., Goodman, J., Hazelkorn, M., & Heflin, L. J. (2010). Using self-monitoring to increase attending to task and academic accuracy in children with autism. Focus on Autism and Other Developmental Disabilities, 25(4), 230-238.

24.

Iovannone, R., Greenbaum, P. E., Wang, W., Dunlap, G., & Kincaid, D. (2014). Interrater Agreement of the Individualized Behavior Rating Scale Tool. Assessment for Effective Intervention, 39(4), 195–207.

25.

Jurbergs, N., Palcic, J. L., & Kelley, M. L. (2010). Daily behavior report cards with and without home-based consequences: improving classroom behavior in low income, African American children with ADHD. Child & Family Behavior Therapy, 32(3), 177-195.

26.

Kulikowski, L. L., Blair, K. S. C., Iovannone, R., & Crosland, K. (2015). An evaluation of the prevent-teach-reinforce (PTR) model in a community preschool classroom. Journal of Behavior Analysis and Support, 2(1), 1–22.

27.

Levendoski, L. S., & Cartledge, G. (2000). Self-monitoring for elementary school children with serious emotional disturbances: Classroom applications for increased academic responding. Behavioral Disorders, 25(3), 211–224.

28.

Logan, K. R., Bakeman, R., & Keefe, E. B. (1997). Effects of instructional variables on engaged behavior of students with disabilities in general education classrooms. Exceptional Children, 63(4), 481–497.

29.

Martens, B. K., Witt, J. C., Elliott, S., & Darveaux, D. X. (1985). Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice, 16(2), 191-198.

30.

Mathes, M. Y., & Bender, W. N. (1997). The effects of self-monitoring on children with attention-deficit/ hyperactivity disorder who are receiving pharmacological interventions. Remedial and Special Education, 18(2), 121-128.

31.

McKenna, K. M. (2020). Self-monitoring with goal setting: Decreasing disruptive behavior in children with attention-deficit/hyperactivity disorder [Unpublished doctoral dissertation, University of Connecticut]. Dissertations 2573.

32.

McDougall, D., & Brady, M. P. (1998). Initiating and fading self-management interventions to increase math fluency in general education classes. Exceptional Children, 64(4), 151–166.

33.

Miller, L. M., Dufrene, B. A., Olmi, D. J., Tingstrom, D., & Filce, H. (2015). Self-monitoring as a viable fading option in check-in/check-out. Journal of School Psychology, 53(2), 121-135.

34.

Miltenberger, R. G. (2012). Behavior modification: Principles and procedures (6th ed.). Cengage Learning.

35.

Moore, D. W., Anderson, A., Glassenbury, M., Lang, R., & Didden, R. (2013). Increasing on-task behavior in students in a regular classroom: Effectiveness of a self-management procedure using a tactile prompt. Journal of Behavioral Education, 22(4), 302-311.

36.

Narozanick, T., & Blair, K. C. (2019). Evaluation of the class pass intervention: An application to improve classroom behavior in children with disabilities. Journal of Positive Behavior Interventions, 21(3),159–170.

37.

Petscher, E. S., & Bailey, J. S. (2006). Effects of training, prompting, and self-monitoring on staff behavior in a classroom for students with disabilities. Journal of Applied Behavior Analysis, 39, 215-226.

38.

Rafferty, L. A., Arroyo, J., Ginnane, S., & Wilczynski, K. (2011). Self-monitoring during spelling practice: Effects on spelling accuracy and on-task behavior of three students diagnosed with attention deficit hyperactivity disorder. Behavior Analysis in Practice, 4(1), 37-45.

39.

Rock, M. L. (2005). Use of strategic self-monitoring to enhance academic engagement, productivity, and accuracy of students with and without exceptionalities. Journal of Positive Behavior Interventions, 7(2), 3–17.

40.

Ruhl, K. L., & Berlinghoff, D. H. (1992). Research on improving behaviorally disordered students' academic performance: A review of the literature. Behavioral Disorders, 17(3), 178-190.

41.

Schloss, P. J., & Smith, M. A. (1998). Applied behavior analysis in the classroom. Allyn and Bacon.

42.

Smith, D. J., Young, K. R., Nelson, J. R., & West, R. P. (1992). The effect of a self-management procedure on the classroom and academic behavior of students with mild handicaps. School Psychology Review, 21(2), 59-72.

43.

Sugai, G., & Horner, R. H. (1994). Including students with severe behavior problems in general education settings: Assumptions, challenges, and solutions. In J. Marr, G. Sugai, & G. Tindal (Eds.), The Oregon conference monograph (pp. 102-120). University of Oregon.

44.

Sullivan, K., Crosland, K., Iovannone, R., Blair, K. C., & Singer, L. (2021). Evaluating the effectiveness of prevent-teach-reinforce (PTR) for high-school students with emotional and behavioral disorders. Journal of Positive Behavior Interventions, 23(1), 3-16.

45.

Todd, A. W., Horner, R. H., & Sugai, G. (1999). Self-monitoring and self-recruited praise effects on problem behavior, academic engagement, and work completion in a typical classroom. Journal of Positive Behavior Interventions, 1(2), 66–122.

46.

Vannest, K. J., Davis, J. L., Davis, C. R., Mason, B. A., & Burke, M. D. (2010). Effective intervention for behavior with a daily behavior report card: A meta-analysis. School Psychology Review, 39(4), 654–672.

47.

Walker, H. M., Ramsey, E., & Gresham, F. M. (2003). Heading off disruptive behavior: How early intervention can reduce defiant behavior and win back teaching time. American Educator, (Winter), 6-21, 45-46.

48.

Wilson, S. J., & Lipsey, M. W. (2007). School-based interventions for aggressive and disruptive behavior: Update of a meta-analysis. American Journal of Preventive Medicine, 33(2S), S130-S143.