The following is a comparison of two approaches to reading assessment that I believe lie at the crux of the ‘reading wars’. I think it is important to recognize that teachers do all of this work each day.
If you’re a teacher working with students in kindergarten through third grade, you know that assessment is essential to the learning process. But have you ever stopped to consider the different approaches to assessment that are available to you? In this blog post, I will be comparing two approaches to assessment: bottom-up (basic skills) and top-down (constructivist) assessment.
I don’t mean to create a binary bias here – either this or that. It seems to me that we should be drawing on the best research from all sides. This brief analysis does not even touch on other theoretical backgrounds, including that of Vygotsky – whom I deeply appreciate. So keep in mind that this is just a blog post, not a comprehensive analysis of all the theoretical underpinnings of reading assessment.
Basically, the ‘bottom-up’ assessments represent a standardized approach that focuses on analyzing the individual components of reading and thinking (people often refer to this as the science of reading, though the research is actually much broader). One example of this type of assessment is DIBELS (Dynamic Indicators of Basic Early Literacy Skills), which includes six measures of early literacy development such as initial sound fluency and oral reading fluency. This approach is often score-centered and positivist, assuming that an objective reality can be measured. It is grounded in behaviourism and quantitative by nature.
Top-down assessment approaches, on the other hand, are more comprehensive. One example is the Observation Survey of Early Literacy Achievement (OSELA), which was designed by Marie Clay for classroom teachers to use as they carefully observe their students’ reading and writing skills. It often includes running records based on students’ observable reading behaviours. This approach is meant to be child-centered and progressive, valuing activities, problem solving, and projects. It also recognizes that reality is organized and experienced by the individual, and that evaluation should be a collaborative process between teachers and students. Qualitative underpinnings inform this type of assessment as well, though components are quantified through error analyses such as running records and PM Benchmarks.
So, what are the strengths and limitations of each approach? DIBELS is quick and easy to administer, providing a fast overview of students’ decoding skills. However, it does not support teacher mediation, dynamic assessment practices, or self-regulation, nor does it account for each individual student’s experiences. Students living in poverty, racialized students, and students learning English as an Additional Language (EAL) can be unfairly disadvantaged by these assessments. Standardized assessments such as these are also limited in that they separate reading skills from the larger context of reading as a whole, and they don’t take into account cultural context or the interactive flow of information between teacher and student. See more in this blog post: Key Issues in the Science of Reading.
OSELA, on the other hand, has the advantage of being a more comprehensive assessment tool with a high degree of reliability. It places a strong emphasis on the role of the teacher as a facilitator, rather than a dispenser of knowledge. However, it can be difficult to quantify the data collected through OSELA and share it with policymakers, parents, and the public. It also requires that teachers have a strong understanding of reading behaviours.
So, what’s the best approach for the classroom? I believe that both quantitative and qualitative methods need to be used, and that multiple theories of literacy learning must interact in our unique pedagogies as educators. It is incumbent on all educators to adopt a literacy perspective that includes a range of pedagogical practices and daily opportunities for reading and writing. What I love about this approach is that it values professional judgement and reflects the entire range of research on reading. Ultimately, the goal of literacy assessment should be to identify students’ abilities in pursuit of personal and social benefits, and to inform instructional next steps.
In short, both types of assessment have a role if we are teaching ethically, triangulating our data, and supporting the whole student.
The following are images of a basic document that I put together to compare the research-based approaches for language and literacy that are often discussed today. I organized each approach by its purpose, philosophical base, educational psychology theoretical base, and literacy development theory. Then I compared the strengths and limitations of assessments like DIBELS and OSELA.
Please keep in mind that this is highly generalized and leaves out much nuance and research. However, I find that it illustrates some of the differences that may help clear up some confusion about many of the choices we make in terms of language and literacy instruction and assessment:
Regardless of your assessment choices, it is always important to know why you are using a given tool in the first place, and to ask the following questions:
- What is the purpose of your assessment?
- What are the basic assumptions and elements?
- What are the strengths and weaknesses of each tool?
- What particular role does the teacher play in the assessment process?
References (not a comprehensive list)
Cole. (2006). Scaffolding beginning readers: Micro and macro cues teachers use during student oral reading. The Reading Teacher, 59(5), 450–459. https://doi.org/10.1598/RT.59.5.4

Li, & Zhang, M. (2008). Reconciling DIBELS and OSELA: What every childhood educator should know. Journal of Research in Childhood Education, 23(1), 41–51. https://doi.org/10.1080/02568540809594645

Pearson, P. D. (2004). The reading wars. Educational Policy, 18, 216–252.

Pearson, P. D. (2006). Foreword. In K. S. Goodman (Ed.), The truth about DIBELS: What it is, what it does (pp. v–xix). Portsmouth, NH: Heinemann.

Reynolds, M., & Wheldall, K. (2007). Reading Recovery 20 years down the track: Looking forward, looking back. International Journal of Disability, Development and Education, 54(2), 199–223.