Special Education Today

Olds: Evidence that progress monitoring improves outcomes

Monitor progress...ask how the PM data are being used for instruction!

John Wills Lloyd
Jul 11, 2021

The present post appeared originally as “Does monitoring progress help?” on my blog, TeachEffectively, on 5 May 2015. I’m republishing it here because I think it is still relevant.—JohnL

Does it actually help to monitor students’ progress and adjust instruction on the basis of how they are doing? Deborah Simmons and her colleagues provided compelling evidence that, within a tier-2 implementation of the Early Reading Intervention (ERI) program at the Kindergarten level, it surely does. 

In the May 2015 issue of the Journal of Learning Disabilities (the article was published online earlier), Professor Simmons and her team described a study in which they compared the reading performance of children for whom teachers had adjusted the pacing of instruction, either providing additional practice on lessons or skipping lessons, to the reading performance of children who had not received such adjustments. The adjustments were based on frequent assessments of students’ progress through the ERI program.
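
To make the idea concrete, here is a hypothetical sketch of the kind of pacing decision the study describes, written in Python. The thresholds, function name, and wording below are invented for illustration; the actual progression rules are specified in Simmons et al. (2015).

    # Hypothetical mastery-based pacing rule (illustrative only; not the ERI study's actual rule).
    def pacing_decision(mastery_score, repeat_cutoff=0.60, skip_cutoff=0.90):
        """Return a pacing adjustment based on an assumed 0-1 mastery-check score."""
        if mastery_score < repeat_cutoff:
            return "repeat the lesson block with additional practice"
        if mastery_score >= skip_cutoff:
            return "skip ahead to the next lesson block"
        return "continue with the standard progression"

    # Example: a student scoring 0.95 on the mastery check would be accelerated.
    print(pacing_decision(0.95))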

[Figure 2 from Simmons et al. (2015)]

Among the children who received the adjustments, Simmons and her colleagues identified four different groups. The graphic here, taken from Figure 2 of Simmons et al. (2015), depicts the four groups, as described in the following list.

  1. One group needed no modifications in the program; they followed the standard progression (“standard progression with targeted review”).

  2. A second group needed minimal repetition and the extra work they received was distributed throughout the course of the program (“minimal lesson repetition with targeted review”). 

  3. A third group essentially needed extra lessons early in the program, but then progressed pretty much normally the rest of the way (“decelerated repetitions to standard progression”). 

  4. A fourth group took off early and was able to skip many lessons throughout the program (“early/sustained acceleration”). 

It is important to note that these groups were not just in one particular teacher’s classroom (the Robins, Cardinals, etc.). There were a total of 136 children in these four groups, drawn from schools in three different states. That’s the number just in the adjusted-instruction group; it doesn’t count the children in the group that didn’t receive adjustments.

To create a comparison group, Simmons and her colleagues used sophisticated matching techniques to find children who had the same demographic characteristics (e.g., gender, ethnic background, English-language learner status) and pre-test literacy skills (e.g., blending, letter naming, sound matching). They aligned the comparison students, who had received the ERI but without adjusted instruction, with those in the four groups who had experienced changes in their instruction, and compared their outcomes after the 126-lesson, tier-2 ERI program. Children in both conditions had equivalent doses of instruction in otherwise comparable conditions (e.g., small-group lessons of 3–5 children with one interventionist).
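
Readers curious about what such matching looks like in practice might find a toy example helpful. The snippet below illustrates only the general idea of pairing each adjusted-instruction student with the comparison student whose pretest score is closest; the data are invented, and the authors’ actual procedure also balanced demographic variables and used more sophisticated methods.

    # Toy nearest-score matching on one pretest measure (invented data; not the authors' method).
    adjusted_group = {"Ana": 14, "Ben": 22, "Cal": 9}           # pretest literacy scores
    comparison_pool = {"Dee": 15, "Eli": 21, "Fay": 10, "Gus": 30}

    matches = {}
    for student, score in adjusted_group.items():
        # Pick the closest unmatched comparison student.
        best = min((c for c in comparison_pool if c not in matches.values()),
                   key=lambda c: abs(comparison_pool[c] - score))
        matches[student] = best

    print(matches)  # {'Ana': 'Dee', 'Ben': 'Eli', 'Cal': 'Fay'}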

The results are intriguing and informative. The students in the early/sustained acceleration group markedly outperformed their peers; among the many significant differences, for five measures (letter-sound knowledge, sound matching, blending, word identification, and oral reading fluency) the effect sizes were near or greater than 1.0. 
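
For readers less familiar with effect sizes: a standardized mean difference of the Cohen’s d type (the article reports its own computations) is roughly

    d = (M_adjusted − M_comparison) / SD_pooled

so an effect size near 1.0 means the adjusted-instruction students scored, on average, about one full standard deviation above their matched peers. By Cohen’s common benchmarks (0.2 small, 0.5 medium, 0.8 large), effects of that magnitude are large.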

For the lower-starting students (i.e., the decelerated repetitions to standard progression group), there were very clear benefits: many of the differences were significant, and effect sizes greater than 0.45 appeared on letter-sound knowledge, segmentation fluency, blending, word identification, and oral reading fluency.

The results for the other two groups were mixed and difficult to characterize. None of the differences were significant, so it is difficult to interpret the effect sizes. Part of this is because of the small number of children in at least one of the groups: the standard progression group included only nine children.

What’s the take-away? Here’s another illustration of the benefits of monitoring children’s performance and adjusting instruction on the basis of those data. Monitor progress. Use the data to guide instruction.

Reference

Simmons, D. C., Kim, M., Kwok, O., Coyne, M. D., Simmons, L. E., Oslund, E., Fogarty, M., Hagan-Burke, S., Little, M. E., & Rawlinson, D. (2015). Examining the effects of linking student performance and progression in a Tier 2 kindergarten reading intervention. Journal of Learning Disabilities, 48(3), 255–270. https://doi.org/10.1177/0022219413497097


Annmarie Urso
Jul 13, 2021

If there is one practice in schools today that is the most detrimental to children's academic progress it is the failure to adjust instruction on the basis of how students are doing. I work to train preservice teachers to "know better to do better" and I also advocate for children who are referred for special education services. Districts, as a whole, are diligently progress monitoring and collecting the data, but many do not use the data to adjust instructional or programmatic practices! Now, if I could find a genie in a bottle, this would be one of my three wishes!
