Curriculum-based measurement

Curriculum-based measurement, or CBM, is also referred to as a general outcome measure (GOM) of a student's performance in either basic skills or content knowledge.

Early history

CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota.[1] Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that were (a) easy to construct, (b) brief to administer and score, (c) technically adequate (with reliability and various types of validity evidence for use in making educational decisions), and (d) available in alternate forms so that time-series data could be collected on student progress.[2] This focus on the three language arts areas was eventually expanded to include mathematics, though the technical research in mathematics continues to lag behind that published in the language arts. A still later development was the application of CBM to middle and secondary content areas: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (using the maze task), while Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning.[3]

Increasing importance

Early research on CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and, finally, benchmarking. Indeed, with the implementation of the No Child Left Behind Act in 2001 and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to, and relevant for understanding, students' progress toward and achievement of state standards.

Key feature

Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects of interventions, which include both instruction and curriculum. This requirement exposes one of the most important conundrums surrounding CBM: to evaluate the effects of a curriculum, a measurement system needs to provide an independent "audit" and not be biased toward only that which is taught. Early work framed this difference as mastery monitoring versus experimental analysis. In mastery monitoring, measurement is embedded in the curriculum itself, so the metric is forced to be the number (and rate) of curriculum units traversed in learning. Experimental analysis instead relies on metrics such as oral reading fluency (words read correctly per minute) and correct word or letter sequences per minute (in writing or spelling), both of which can serve as GOMs. In mathematics, the metric is often digits correct per minute. Note that the metric of CBM is typically rate-based, to focus on "automaticity" in learning basic skills.[4]
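
The rate-based nature of these metrics is straightforward to compute. The sketch below is purely illustrative (it is not part of any published CBM system) and simply shows how scores such as words read correctly per minute or digits correct per minute might be derived from timed probes; all numbers and function names are invented for the example.

```python
def words_correct_per_minute(words_attempted: int, errors: int, seconds: int) -> float:
    """Oral reading fluency: words read correctly, scaled to a per-minute rate."""
    correct = words_attempted - errors
    return correct / (seconds / 60)


def digits_correct_per_minute(digits_correct: int, seconds: int) -> float:
    """Mathematics CBM metric: correct digits, scaled to a per-minute rate."""
    return digits_correct / (seconds / 60)


# Illustrative probes: a one-minute reading passage (112 words attempted, 7 errors)
# and a two-minute computation sheet (46 correct digits).
print(words_correct_per_minute(112, 7, 60))   # 105.0 words correct per minute
print(digits_correct_per_minute(46, 120))     # 23.0 digits correct per minute
```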

Recent advancements

The most recent advancements of CBM have occurred in three areas. First, CBM has been applied to students with low-incidence disabilities; this work is best represented by Zigmond in the Pennsylvania Alternate Assessment and Tindal in the Oregon and Alaska Alternate Assessments. The second advancement is the use of generalizability theory with CBM, best represented by the work of John Hintze, in which the focus is partitioning the error term into components such as time, grade, setting, and task. Finally, Yovanoff, Tindal, and colleagues at the University of Oregon have applied item response theory (IRT) to the development of statistically calibrated equivalent forms in their progress-monitoring system.[5]
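
As a rough illustration of the idea behind statistically calibrated equivalent forms, the sketch below uses the Rasch (one-parameter IRT) model: once item difficulties are placed on a common scale, alternate probes can be checked for comparable expected scores across the ability range. The model choice, item difficulties, and ability values here are assumptions for illustration only and do not reproduce any particular published progress-monitoring system.

```python
import math


def rasch_probability(theta: float, difficulty: float) -> float:
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))


def expected_score(theta: float, difficulties: list[float]) -> float:
    """Expected raw score on a form for a student with ability theta."""
    return sum(rasch_probability(theta, b) for b in difficulties)


# Two alternate forms whose (invented) calibrated item difficulties are closely matched.
form_a = [-1.2, -0.4, 0.1, 0.6, 1.3]
form_b = [-1.1, -0.5, 0.2, 0.5, 1.3]

for theta in (-1.0, 0.0, 1.0):
    print(theta, round(expected_score(theta, form_a), 2),
          round(expected_score(theta, form_b), 2))
```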

Critique

Curriculum-based measurement emerged from behavioral psychology, yet several behaviorists have become disenchanted with its lack of attention to the dynamics of the process.[6][7]

References

  1. Deno, S.L. (1985). Curriculum-based measurement: The emerging alternative. Exceptional Children, 52(3), 219–232.
  2. Skinner, Neddenriep, Bradley-Klug, & Ziemann (2002). Advances in Curriculum-Based Measurement: Alternative Rate Measures for Assessing Reading Skills in Pre- and Advanced Readers. The Behavior Analyst Today, 3(3), 270–283.
  3. Espin, C., & Tindal, G. (1998). Curriculum-based measurement for secondary students. In M.R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 214–253). New York: Guilford Press.
  4. Hale, A.D., Skinner, C.H., Williams, J., Hawkins, R., Neddenriep, C.E., & Dizer, J. (2007). Comparing Comprehension Following Silent and Aloud Reading across Elementary and Secondary Students: Implication for Curriculum-Based Measurement. The Behavior Analyst Today, 8(1), 9–23.
  5. Stewart, R.M., Martella, R.C., Marchand-Martella, N.E., & Benner, G.J. (2005). Three-Tier Models of Reading and Behavior. JEIBI, 2(3), 115–124.
  6. Williams, R.L., Skinner, C.H., & Jaspers, K. (2008). Extending Research on the Validity of Brief Reading Comprehension Rate and Level Measures to College Course Success. The Behavior Analyst Today, 8(2), 163–174.
  7. Ardoin et al. Evaluating Curriculum-Based Measurement from a Behavioral Assessment Perspective. The Behavior Analyst Today, 9(1), 36–49.
