Effective instruction in reading for students with EBD
Do studies of kids with EBD receiving reading instruction show that it helps their reading?
Writing in the Journal of Behavioral Education, Argnue Chitiyo, Maria B. Sciuchetti, Holmes W. Finch, and Goodson C. Dzenga (2025) reported the results of a meta-analysis of 27 studies of reading interventions for students with emotional or behavioral disorders. They found that the studies showed beneficial effects of instruction on students’ reading performance for both decoding and comprehension measures. Although they were unable to examine which instructional programs, practices, and procedures were more effective than others, they did report several more specific findings.
Professor Chitiyo, who is a member of the faculty at Ball State University in Indiana (US), and his colleagues employed typical meta-analytic procedures (search for and select relevant studies, code the characteristics of the studies, calculate effect sizes, analyze the data). An important feature of their meta-analysis is that they integrated studies that employed very different research methods (group-contrast and single-case designs), a challenging task, but they employed respected procedures for doing so.1
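For readers unfamiliar with the "calculate effect sizes" step: for group-contrast studies, meta-analysts commonly compute a standardized mean difference such as Hedges' g. The paper does not spell out its exact formulas, so the sketch below is only a generic illustration of the standard calculation, with hypothetical numbers.

```python
import math

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Hedges' g) with small-sample correction."""
    # Pooled standard deviation across treatment and control groups
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd          # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c - 2) - 1)      # small-sample correction factor
    return j * d

# Hypothetical reading-comprehension scores from one imagined study
print(round(hedges_g(52.0, 10.0, 20, 45.0, 10.0, 20), 3))  # → 0.686
```

Single-case studies require different metrics (e.g., Tau-U or between-case standardized mean differences), which is part of why combining the two design traditions in one meta-analysis is challenging.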
One of the things they were not able to do, however, was to compare the effects of different interventions. The studies they integrated tested different interventions (e.g., repeated reading, peer-assisted learning, Corrective Reading, story mapping, cognitive mapping, and others). With so many different interventions and only 27 studies, they did not have enough studies of each type to compute an average effect size for each type of intervention.
There were enough studies to permit Professor Chitiyo and colleagues to examine some other matters, though. They found that the effects of the reading interventions (taken as a whole) differed depending on
Research method: Group-contrast studies yielded higher effect sizes than did single-case studies;
Intervention agent: Interventions delivered by research team members had higher effects than those delivered by teachers (although the average effect size for each type of agent was substantial);
Outcome measure: Studies that targeted comprehension had larger average effects than those that targeted fluency and phonemic awareness (although, again, all three outcomes were substantially influenced by the interventions).
They did not find differences for other moderators.2 They examined setting (school vs. clinic), format (curriculum-based vs. instruction-based), group size (tutoring vs. group), and gender, but the differences in effect sizes were not statistically significant.
The finding of a difference for intervention agent is interesting. The result reported by Chitiyo et al. replicated a finding from an earlier meta-analysis of reading comprehension interventions for students with learning disabilities (Talbott et al., 1994). As Talbott and her colleagues noted, there are many possible interpretations of the greater effects for researcher-delivered interventions. It may be that students respond better to having a different “teacher” when the researchers conduct the intervention. Perhaps the researchers run the interventions with higher fidelity than the teachers. Perhaps the researchers do not communicate to teachers well enough how to run the interventions for teachers to conduct them with high fidelity. One can imagine other reasons for the finding. All of these explanations are speculative, however. Objective observations and experimental studies would be needed to establish what accounts for the difference. So, I caution readers not to make much of the finding about differences in outcomes depending on who implemented the interventions.
One additional caution: Chitiyo and colleagues found an overall effect size in their examination of the research on reading interventions for students with EBD. That is, they reported that the collection of diverse reading interventions that have been studied with this population was more effective than whatever was happening in comparison conditions. We don’t know what students in the control groups got. We don’t know much at all about what was happening in “baseline” in the single-case studies. It could be, for all we know, that the control conditions were worse than nothing. Maybe kids in the control groups got some literacy intervention that actually would be a mistake for teachers to provide. If we knew that the control conditions in the studies were essentially, for example, “nothing,” then the findings of this meta-analysis would be that these diverse interventions are “better than nothing.” If we knew that the students in the control groups got “The Old Grey Mare Method,” then we could say that the interventions were “better than the Old Grey Mare.”
So, what’s the takeaway? It appears that teaching reading skills (both decoding and comprehending skills) benefits students with EBD. This result confirms findings of other studies of the same general question (e.g., Burke et al., 2015; Coleman & Vaughn, 2000; Garwood et al., 2014; Roberts et al., 2020). I encourage readers interested in this topic to read some of the previous studies as well as the current one by Chitiyo et al. (2025).
References
Burke, M. D., Boon, R. T., Hatton, H., & Bowman-Perrott, L. (2015). Reading interventions for middle and secondary students with emotional and behavioral disorders: A quantitative review of single-case studies. Behavior Modification, 39(1), 43-68. https://doi.org/10.1177/0145445514547958
Chitiyo, A., Sciuchetti, M. B., Finch, H. W., & Dzenga, G. C. (2025). A meta-analysis of reading interventions for students with emotional/behavioral disorders. Journal of Behavioral Education. Advance online publication. https://doi.org/10.1007/s10864-025-09597-5
Coleman M., & Vaughn S. (2000). Reading interventions for students with emotional/behavioral disorders. Behavioral Disorders, 25(1), 93–104. https://doi.org/10.1177/019874290002500201
Cooper, H. (2020). Reporting quantitative research in psychology: How to meet APA style journal article reporting standards (2nd ed.). American Psychological Association.
Garwood, J. D., Brunsting, M. A., & Fox, L. C. (2014). Improving reading comprehension and fluency outcomes for adolescents with emotional-behavioral disorders: Recent research synthesized. Remedial & Special Education, 35(3), 181-194. https://doi.org/10.1177/0741932513514856
Roberts, G. J., Cho, E., Garwood, J. D., Goble, G. H., Robertson, T., & Hodges, A. (2020). Reading interventions for students with reading and behavioral difficulties: A meta-analysis and evaluation of co-occurring difficulties. Educational Psychology Review, 32(1), 17-47. https://www.jstor.org/stable/pdf/48728023.pdf
Talbott, E., Lloyd, J. W., & Tankersley, M. (1994). Effects of reading comprehension interventions for students with learning disabilities. Learning Disability Quarterly, 17(3), 223-232. https://doi.org/10.2307/1511075
Footnotes
1. They do not document whether they adhered to guidance about conducting reviews from widely recognized sources such as the Campbell Collaboration or the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Nor do they report following the guidance for reporting meta-analyses recommended by the American Psychological Association (Cooper, 2020).
2. The published report of the coding for these variables was brief, so I am, to some extent, guessing at the meanings of the codes. Importantly, Chitiyo and his team did not, as far as I found, report data about the trustworthiness of their codes or coding system. Meta-analysts usually need to explain how reliable and valid their codes are.