Elgin ISD Special Ed. Job Alikes

Progress Monitoring – December 2014

Progress Monitoring

What is progress monitoring?

Progress monitoring is a scientifically based practice that is used to assess students’ academic performance and evaluate the effectiveness of instruction. Progress monitoring can be implemented with individual students or an entire class.

How does progress monitoring work?

To implement progress monitoring, the student’s current levels of performance are determined and goals are identified for learning that will take place over time. The student’s academic performance is measured on a regular basis (weekly or monthly). Progress toward meeting the student’s goals is measured by comparing expected and actual rates of learning. Based on these measurements, teaching is adjusted as needed. Thus, the student’s progression of achievement is monitored and instructional techniques are adjusted to meet the individual student’s learning needs.

What are the benefits of progress monitoring?

When progress monitoring is implemented correctly, the benefits are great for everyone involved. Some benefits include:
  • accelerated learning because students are receiving more appropriate instruction;
  • more informed instructional decisions;
  • documentation of student progress for accountability purposes;
  • more efficient communication with families and other professionals about students’ progress;
  • higher expectations for students by teachers; and
  • fewer Special Education referrals.

Overall, the use of progress monitoring results in more efficient and appropriately targeted instructional techniques and goals, which together move all students to faster attainment of important state standards of achievement.

Who should be practicing progress monitoring?

Anyone who is interested in improving results for children should be implementing progress monitoring. Whether you are a regular educator, special educator, related service provider, administrator, or family member, you should be interested in implementing research-based progress monitoring practices.

What challenges face progress monitoring?

  • Educators and families need information about the effectiveness of progress monitoring that would encourage them to adopt the practice.
  • Teachers and other practitioners need support in translating progress monitoring research into easily implemented, usable strategies.
  • Technical assistance on progress monitoring must transfer knowledge in ways that accommodate differences in background, training, and beliefs, as well as differences in the nature and philosophy of the instructional programs and practices already in place.
  • This information dissemination must take place in a variety of formats, in usable forms, and at different levels of specificity.

Are there other names for progress monitoring?

    Progress monitoring is a relatively new term. Some other terms you may be more familiar with are Curriculum-Based Measurement and Curriculum-Based Assessment. Whatever method you decide to use, it is most important that you ensure it is a scientifically based practice that is supported by significant research.

    How Is CBM Used for Describing Present Levels of Performance on the IEP?

    The IEP team can transform the student’s average initial scores on CBM tests into an IEP statement of present level of performance. Because neither test administration nor scoring procedures differ and because the difficulty level of the tests remains the same over time, CBM scores can be compared across testing occasions. Current performance can be compared to subsequent performance later in the year. Thus, present level of performance can be written in the same fashion as a measurable, long-term goal that includes the learner behavior and conditions or stimulus materials. However, instead of projecting what constitutes student mastery, present performance merely describes the student’s current level of attainment in an academic area affected by student disability. When the ARD Committee knows how children typically read or perform mathematics calculations at particular ages or grades, the present level of performance written with CBM data also suggests how substantially the disability affects student performance in that academic area. Usually, the first three to six CBM scores are averaged to determine the PLAAFP.

    Sample PLAAFP for Reading

    Given randomly selected passages at the third-grade level, J. R. currently reads aloud 65 words correct per minute.

    Sample PLAAFP for Math

    Given 25 problems of 1-digit x 1-digit multiplication, J. R. currently writes 20 correct digits in 3 minutes.
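    The averaging step for a PLAAFP can be sketched in a few lines of Python. This is only an illustration: the probe scores below are hypothetical, chosen so the average matches J. R.'s 65 words correct per minute.

```python
# Sketch: averaging the first three to six CBM probe scores into a baseline.
# The scores below are hypothetical oral-reading probes (words correct per minute).

def plaafp_baseline(scores):
    """Average the initial CBM scores to estimate present level of performance."""
    if not 3 <= len(scores) <= 6:
        raise ValueError("Use the first three to six CBM scores.")
    return sum(scores) / len(scores)

initial_probes = [62, 66, 64, 68]
print(plaafp_baseline(initial_probes))  # 65.0 -> "reads aloud 65 words correct per minute"
```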

    How Is CBM Used for Developing Long-Term Goals and Short-Term Objectives?

    Instructional programming is addressed first by establishing expected year-end goals. Because the CBM tests represent skills the student is expected to master by the end of the year, the IEP team also can write a measurable CBM goal statement that reflects long-term mastery. Teams can refer to normative CBM information for assistance in establishing ambitious, yet realistic goals for students (e.g., Deno, Fuchs, Marston, & Shin, 2001; L. S. Fuchs, Fuchs, Hamlett, Walz, & Germann, 1993). A goal line on the CBM graph is depicted by connecting the student’s average initial performance (i.e., baseline) to the end-of-year goal and shows the rate of progress the student must maintain across the year in order to meet the long-term goal. By subtracting the average current performance from the long-term goal and dividing the difference by the number of weeks occurring between baseline and goal, the IEP team also can figure the weekly rate of improvement, or short-term objective, that the student needs to achieve in order to stay on track toward meeting the long-term goal.
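    The goal-line arithmetic just described can be sketched in Python. This is a minimal illustration using the J. R. reading figures (baseline 65, year-end goal 115) and an assumed 36-week instructional year; the function names are hypothetical.

```python
# Minimal sketch of the goal-line arithmetic described above.
# Figures mirror the J. R. reading example; the 36-week year is an assumption.

def weekly_rate(baseline, goal, weeks):
    """Weekly rate of improvement needed to stay on the goal line."""
    return (goal - baseline) / weeks

def expected_level(baseline, rate, weeks_elapsed):
    """Benchmark level the student should reach after a given number of weeks."""
    return baseline + rate * weeks_elapsed

rate = weekly_rate(baseline=65, goal=115, weeks=36)
print(round(rate, 2))                          # 1.39 words correct per minute per week
print(round(expected_level(65, rate, 18), 1))  # 90.0 -> midyear benchmark on the goal line
```

    The second function is the "benchmark fashion" use of the goal line: at any week, the expected level is simply the baseline plus the weekly rate times the weeks elapsed.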

    In fact, the teacher can use the goal line in benchmark fashion to determine at any point in time the level at which the student should be performing in order to make adequate progress toward the goal. The CBM graph showing both student performance data and the goal line, then, provides an efficient and effective visual tool for communicating student progress with parents or other professionals (Deno, 2003).

    Sample IEP Long-Term Goal in Reading

    In 36 weeks, given randomly selected passages at the third-grade level, J. R. will read aloud 115 words correct per minute.

    Sample IEP Short-Term Objective in Reading

    Given randomly selected passages at the third-grade level, J. R. will read aloud 1.4 additional words correct per minute each week [(115 – 65)/36 ≈ 1.4].


    By the end of the first six weeks, given randomly selected passages at the third-grade level, J. R. will read aloud 8.4 additional words correct per minute (6 weeks x 1.4 words per week).

    Sample IEP Long-Term Goal in Math

    In 36 weeks, given 25 problems of 1-digit x 1-digit multiplication (substitute the targeted skill as appropriate), J. R. will write 40 correct digits in 3 minutes.

    Sample IEP Short-Term Objective in Math

    Given 25 problems of 1-digit x 1-digit multiplication, J. R. will write .6 additional correct digits in 3 minutes each week [(40 – 20)/36 ≈ .56].
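    The short-term-objective arithmetic in both samples can be checked with a few lines of Python. This is a sketch; the baseline and goal figures come from the examples above, and the 36-week year is assumed.

```python
# Sketch verifying the short-term-objective arithmetic from the samples above.
# Baseline and goal figures come from the J. R. examples; 36 weeks is assumed.

def short_term_objective(baseline, goal, weeks=36):
    """Weekly gain needed to move from the baseline to the long-term goal."""
    return (goal - baseline) / weeks

reading = short_term_objective(65, 115)  # words correct per minute per week
math = short_term_objective(20, 40)      # correct digits per week

print(round(reading, 2))                # 1.39 -> about 1.4 additional words per week
print(round(math, 2))                   # 0.56 -> about .6 additional correct digits per week
print(round(6 * round(reading, 1), 1))  # 8.4  -> cumulative gain after six weeks at 1.4/week
```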

    Common Progress-Monitoring Measures

    Progress can be monitored by a variety of methods. From a norm-referenced standpoint, it is possible to use widely available assessments such as the Test of Word Reading Efficiency (TOWRE; Torgesen et al., 1999) or the Woodcock-Johnson Achievement Battery (Woodcock, McGrew, & Mather, 2001). With such tests, alternate forms are available to demonstrate student improvement over time, but there are usually at least three months between administrations (Fletcher et al., 2007). Other measures, such as the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good, Simmons, & Kame'enui, 2001), have been reviewed by the National Center for Student Progress Monitoring and vary considerably in reliability, validity, and other key progress-monitoring standards.

    CBM, one approach to progress monitoring, has the best-supported measures in the research base. According to Fuchs and Fuchs (2006),

    More than 200 empirical studies published in peer-review journals (a) provide evidence of CBM's reliability and validity for assessing the development of competence in reading, spelling, and mathematics and (b) document CBM's capacity to help teachers improve student outcomes at the elementary grades (p. 1).

    CBM is a form of classroom assessment that 1) describes academic competence in reading, spelling, and mathematics; 2) tracks academic development; and 3) improves student achievement (Fuchs & Stecker, 2003). It can be used to determine the effectiveness of the instruction for all students and to enhance educational programs for students who are struggling (McMaster & Wagner, 2007). Finally, findings of over 200 empirical studies indicate that CBM produces accurate, meaningful information about students’ academic levels and growth, is sensitive to student improvement, and that, when teachers use CBM to inform their instructional decisions, students achieve better (Fuchs & Fuchs, 2006).

    Fuchs and Stecker (2003) warn that most classroom assessment is based on mastery of a series of short-term instructional objectives or "mastery measurement." To implement this type of assessment, the teacher determines the educational sequence for the school year and designs criterion-referenced tests to match each step in that educational sequence. According to Fuchs and Stecker, problems with mastery measurement include: 1) the hierarchy of skills is logical, not empirical; 2) assessment does not reflect maintenance or generalization; 3) measurement methods are designed by teachers, with unknown reliability and validity; and 4) the measurement framework is highly associated with a set of instructional methods. CBM combats these problems by making no assumptions about an instructional hierarchy (so it fits with any instructional approach) and by incorporating automatic tests of retention and generalization. According to Fuchs and Fuchs (2006), CBM and mastery measurement have another significant difference:

    CBM also differs from mastery measurement because it is standardized; that is, the progress monitoring procedures for creating tests, for administering and scoring those tests, and for summarizing and interpreting the resulting database are prescribed. By relying on standardized methods and by sampling the annual curriculum on every test, CBM produces a broad range of scores across individuals of the same age. The rank ordering of students on CBM corresponds with rank orderings on other important criteria of student competence. For example, students who score high (or low) on CBM are the same students who score high (or low) on the annual state tests. For these reasons, CBM demonstrates strong reliability and validity. At the same time, because each CBM test assesses the many skills embedded in the annual curriculum, CBM yields descriptions of students' strengths and weaknesses on each of the many skills contained in the curriculum. These skills profiles also demonstrate reliability and validity (p. 2).

    The tasks measured by CBM include 1) pre-reading (phoneme segmentation fluency; letter sound fluency); 2) reading (word identification fluency; passage reading fluency; maze fluency); 3) mathematics (computation; concepts and applications); 4) spelling; and 5) written expression (correct word sequences).

    Elements of Effective Progress-Monitoring Measures

    To be effective, progress-monitoring measures must be available in alternate forms, comparable in difficulty and conceptualization, and representative of the performance desired at the end of the year (Fuchs, Compton, Fuchs et al., 2008). Measures that vary in difficulty and conceptualization over time could produce inconsistent results that are difficult to quantify and interpret. Likewise, using the same measure for each administration may produce a testing effect, wherein performance on a subsequent administration is influenced by student familiarity with the content.

    By using measures that have alternate forms and are comparable in difficulty and conceptualization, a teacher can use slope (i.e., academic performance across time) to quantify rate of learning (Fuchs & Fuchs, 2008). Slope can also be used to measure a student’s response to a specific instructional program, signaling a need for program adjustment when responsiveness is inadequate (Fuchs et al., 2008).
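    As a sketch of how slope quantifies rate of learning, the least-squares slope of a series of weekly probe scores can be computed and compared against the goal-line rate. The weekly scores below are hypothetical, and the comparison threshold reuses the reading example's figures.

```python
# Sketch: rate of learning as the least-squares slope of weekly CBM scores.
# The weekly scores below are hypothetical oral-reading probes
# (words correct per minute).

def slope(scores):
    """Ordinary least-squares slope across equally spaced weekly scores."""
    n = len(scores)
    weeks = range(n)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

weekly_scores = [65, 66, 68, 67, 70, 71, 73, 74]
growth = slope(weekly_scores)
goal_rate = (115 - 65) / 36  # goal-line rate from the reading example

print(round(growth, 2))     # 1.31 words correct per minute per week
print(growth >= goal_rate)  # False: growth is below the goal-line rate,
                            # signaling a possible program adjustment
```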

    Effective progress-monitoring measures should also be short and easily administered by a classroom teacher, special education teacher, or school psychologist (Fuchs & Stecker, 2003). According to Fletcher et al. (2007), there is much research to support the use of short, fluency-based probes in deficit areas such as word reading fluency and accuracy, mathematics, and spelling. However, for areas such as reading comprehension and composition, there is less research support for specific kinds of probes because these domains demonstrate less rapid change and require methods for assessing progress over longer periods of time (Fletcher et al., 2007; McMaster & Wagner, 2007).

    This document was developed through Cooperative Agreement (#H326W30003) between the American Institutes for Research and the U.S. Department of Education, Office of Special Education Programs. The contents of this document do not necessarily reflect the views or policies of the Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.