Single subject design

Single subject design, or single-case research design, is a research design most often used in applied fields of psychology, education, and human behavior, in which the subject serves as his or her own control rather than being compared with another individual or group. Researchers use single subject designs because they are sensitive to individual organism differences, whereas group designs are sensitive to group averages. A study using single subject design may still include a large number of subjects; because each subject serves as his or her own control, the design remains a single subject design.[1] These designs are used primarily to evaluate the effect of a variety of interventions in applied research.[2]

Requirements of a single subject design

The following are requirements of single-subject designs:[3]

  • Continuous Assessment: The behavior of the individual is observed repeatedly over the course of the intervention. This ensures that any treatment effects are observed long enough to convince the scientist that the treatment produces a lasting effect.
  • Baseline Assessment: Before the treatment is implemented, the behavior is repeatedly assessed until the researcher is convinced that the behavior would not improve over time without an intervention.
  • Stability of Performance: The only way to be confident that the behavior would not improve without an intervention is to observe that the baseline rate (or other dimension of behavior) is stable, i.e., not trending toward improvement over time. Likewise, clinicians are interested in treatments that yield stable performance improvements, so a single-subject design makes repeated observations during the treatment condition until performance stabilizes.
  • Trend in Data: Continuous assessment allows the researcher to look for behavioral trends. If a treatment reverses a baseline trend (e.g., things were getting worse over time during baseline but the treatment reversed this trend), this is powerful evidence suggesting (though not proving) a treatment effect.
  • Variability in Data: Because behavior is assessed repeatedly, the single-subject design allows the researcher to see how consistently the treatment changes behavior from day to day. Large-group statistical designs do not typically provide this information, because repeated assessments are not usually taken and the behavior of individuals in the groups is not scrutinized; instead, group means are reported. (A sketch of how level, variability, and trend might be computed from a repeated series appears after this list.)
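
As an illustration of these checks, the sketch below computes a level, a variability index, and a linear trend for a repeated baseline series. This is only a minimal example; the data values, the stability threshold, and the use of an ordinary least-squares slope are illustrative assumptions rather than a standard prescribed by single subject methodology.

    import numpy as np

    def describe_baseline(observations, stability_cv=0.1):
        """Summarize a repeated baseline series: level, variability, and trend.

        observations : repeated measurements of the target behavior
        stability_cv : illustrative threshold on the coefficient of variation
                       below which the series is treated as stable
        """
        y = np.asarray(observations, dtype=float)
        sessions = np.arange(len(y))

        level = y.mean()                       # average level of responding
        variability = y.std(ddof=1) / level    # coefficient of variation
        slope, _ = np.polyfit(sessions, y, 1)  # linear trend across sessions

        return {
            "level": level,
            "coefficient_of_variation": variability,
            "slope_per_session": slope,
            "stable": variability < stability_cv,
        }

    # Hypothetical baseline data: rate of the target behavior over five sessions
    print(describe_baseline([12, 11, 13, 12, 12]))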

Phases within single subject design

  • Baseline: this phase is one in which the researcher collects data on the dependent variable without any intervention in place.
  • Intervention: this phase is one in which the researcher introduces an independent variable (the intervention) and then collects data on the dependent variable.
  • Reversal: this phase is one in which the researcher removes the independent variable (reversal) and then collects data on the dependent variable.

It is important that the data are stable (steady trend and low variability) before the researcher moves to the next phase. Single-subject designs produce or approximate three levels of knowledge: (1) descriptive, (2) correlational, and (3) causal.[4]
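
To make the phase structure concrete, the sketch below organizes hypothetical session data from an A-B-A (baseline, intervention, reversal) sequence and compares the mean level of the dependent variable in each phase. The data values and the choice of phase means as the summary are assumptions made only for illustration.

    from statistics import mean

    # Hypothetical session-by-session data for an A-B-A (reversal) sequence
    phases = {
        "baseline (A1)":    [14, 15, 13, 14, 15],  # no intervention in place
        "intervention (B)": [8, 7, 6, 6, 5],       # independent variable introduced
        "reversal (A2)":    [13, 14, 14, 15, 13],  # independent variable withdrawn
    }

    # Compare the level of the dependent variable across phases
    for name, data in phases.items():
        print(f"{name}: mean level = {mean(data):.1f}")

If the intervention changes behavior and the reversal returns it toward baseline levels, this pattern of phase means mirrors the functional relation described under Interpretation of data below.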

Flexibility of the design

Single subject designs are often preferred because they are highly flexible and highlight individual differences in response to intervention effects.[5] In general, single subject designs have been shown to reduce interpretation bias for counselors conducting therapy.[6]

Interpretation of data

In order to determine the effect of the independent variable on the dependent variable, the researcher graphs the collected data and visually inspects the differences between phases. If there is a clear distinction between baseline and intervention, and the data then return to the same trend and level during reversal, a functional relation between the variables is inferred.[7] Sometimes visual inspection of the data demonstrates results that statistical tests fail to find.[8][9]

Researchers utilizing single subject design begin with graphic analysis. During baseline, data on the behavior of interest are repeatedly collected and graphed. This provides a visual representation of the subject's behavior before application of the intervention. It is critical that several data points (three to five are often recommended[10]) be collected during baseline so that the researcher can describe the effects of the intervention on the target behavior.
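
As a rough illustration of the kind of line graph used for visual inspection, the sketch below plots hypothetical baseline and intervention data with a dashed phase-change line. The data, labels, and use of the matplotlib library are assumptions made for this example; they are not part of any prescribed procedure.

    import matplotlib.pyplot as plt

    # Hypothetical data: sessions 1-5 are baseline, sessions 6-12 are intervention
    baseline = [14, 15, 13, 14, 15]
    intervention = [12, 10, 8, 7, 6, 6, 5]
    sessions = range(1, len(baseline) + len(intervention) + 1)

    plt.plot(sessions, baseline + intervention, marker="o", color="black")
    plt.axvline(x=len(baseline) + 0.5, linestyle="--", color="gray")  # phase change
    plt.xlabel("Session")
    plt.ylabel("Frequency of target behavior")
    plt.title("Hypothetical A-B graph for visual inspection")
    plt.show()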

In interpreting the data, the general strategy of all single subject research is to use the subject as his or her own control. Experimental logic argues that the subject's baseline behavior would match the behavior seen in the intervention phase unless the intervention does something to change it. To rule out confounds, this logic requires replication: it is within-subject replication that allows for the determination of functional relationships. Thus the goals are:

  • Prediction
  • Verification
  • Replication

Meta-analysis of single subject research

Currently, several efforts exist to combine single subject effects to determine the effect size of well researched interventions.[11] Meta-analysis, like all research, has the ability to change a profession. For example, Gresham and colleagues (2004), in a meta-analytic review of JABA articles, found that functional analysis did not produce greater effect sizes compared to simple contingency management programs.[12] Researchers are currently debating the most effective and accurate way of conducting a meta-analysis of single subject designs. The two methods being debated are the percentage of nonoverlapping data (PND) and the percentage of data points exceeding the median (PEM).[13][14][15] Van den Noortgate and colleagues have argued that meta-analyses that analyze all linear trends in data do not work, since they do not distinguish between effects on level and effects on slope.[16][17]
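
As a rough sketch of what these two summary statistics measure, the example below computes PND (the percentage of intervention-phase points that exceed the most extreme baseline point) and PEM (the percentage of intervention-phase points that exceed the baseline median), assuming here that the intervention is intended to increase the behavior. The function names and data are illustrative assumptions, and published procedures include additional rules (for example, for behaviors that should decrease) that are not shown.

    from statistics import median

    def pnd(baseline, intervention):
        """Percentage of nonoverlapping data: share of intervention points
        above the highest baseline point (assumes improvement = increase)."""
        ceiling = max(baseline)
        exceeding = sum(1 for x in intervention if x > ceiling)
        return 100.0 * exceeding / len(intervention)

    def pem(baseline, intervention):
        """Percentage of intervention points exceeding the baseline median
        (again assuming improvement = increase)."""
        mid = median(baseline)
        exceeding = sum(1 for x in intervention if x > mid)
        return 100.0 * exceeding / len(intervention)

    # Hypothetical data for a behavior the intervention aims to increase
    baseline = [3, 4, 2, 3, 4]
    intervention = [5, 6, 4, 7, 8, 7]
    print(f"PND = {pnd(baseline, intervention):.0f}%, PEM = {pem(baseline, intervention):.0f}%")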

Limitations of single subject design

Research designs are traditionally preplanned, so that most of the details about to whom and when the intervention will be introduced are decided before the study begins. In single-subject designs, however, these decisions are often made as the data are collected.[18] In addition, there are no widely agreed upon rules for altering phases, which can lead to conflicting ideas about how a research experiment should be conducted in single-subject design.

The major criticisms of single subject designs are:

  • Carry-over effects: results from the previous phase carry over into the next phase.
  • Order effects: the ordering (sequence) of interventions or treatments affects the results.
  • Irreversibility: in some withdrawal designs, once a change in the independent variable occurs, the effect on the dependent variable cannot be undone by simply removing the independent variable.
  • Ethical problems: withdrawal of treatment in the withdrawal design can at times present ethical and feasibility problems.

History

Historically, single subject designs have been closely tied to the experimental analysis of behavior and applied behavior analysis.[19]

References

  1. Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis (2nd ed.). Prentice Hall. ISBN 0-13-142113-1.
  2. Kazdin, p. 191.
  3. Kazdin, pp. 103-110.
  4. Tripodi, T. (1998). A Primer on Single-Subject Design for Clinical Social Workers. Washington, DC: National Association of Social Workers (NASW) Press.
  5. Thompson, C. K. (1986). Flexibility of single-subject experimental designs. Part III: Using flexibility to design or modify experiments. The Journal of Speech and Hearing Disorders, 51(3), 214-225.
  6. Moran, D. J., & Tai, W. (2001). Reducing biases in clinical judgment with single subject treatment design. The Behavior Analyst Today, 2(3), 196-206.
  7. Backman, C. L., & Harris, S. R. (1999). Case studies, single-subject research, and N of 1 randomized trials: Comparisons and contrasts. American Journal of Physical Medicine & Rehabilitation, 78(2), 170-176.
  8. Bobrovitz, C. D., & Ottenbacher, K. J. (1998). Comparison of visual inspection and statistical analysis of single-subject data in rehabilitation research. Journal of Engineering and Applied Science, 77(2), 94-102.
  9. Nishith, P., Hearst, D. E., Mueser, K. T., & Foa, E. (1995). PTSD and major depression: Methodological and treatment considerations in a single case design. Behavior Therapy, 26(2), 297-299.
  10. Alberto, P. A., & Troutman, A. C. (2006). Applied Behavior Analysis for Teachers (7th ed.). Upper Saddle River, NJ: Pearson Education.
  11. Busse, R. T., Kratochwill, T. R., & Elliott, S. N. (1995). Meta-analysis for single-case consultation outcomes: Applications to research and practice. Journal of School Psychology, 33, 269-285.
  12. Gresham, F., McIntyre, L. L., Olson-Tinker, H., Dolstra, L., McLaughlin, V., & Van, M. (2004). Relevance of functional behavioral assessment research for school-based behavioral intervention and positive behavioral support. Research in Developmental Disabilities, 25, 19-37.
  13. Gao, Y.-J., & Ma, H.-H. (2006). Effectiveness of interventions influencing academic behaviors: A quantitative synthesis of single-subject researches using the PEM approach. The Behavior Analyst Today, 7(4), 572-578.
  14. Van den Noortgate, W., & Onghena, P. (2007). Aggregating single case results. The Behavior Analyst Today, 8(2), 196-209.
  15. Chen, C. W., & Ma, H. H. (2007). Effects of the treatment of disruptive behaviors: A quantitative synthesis of single subject designs using the PEM approach. The Behavior Analyst Today, 8(4), 380-397.
  16. Van den Noortgate, W., & Onghena, P. (2003). Hierarchical linear models for the quantitative integration of effect sizes in single-case research. Behavior Research Methods, Instruments, & Computers, 35, 1-10.
  17. Van den Noortgate, W., & Onghena, P. (2007). Aggregating single case results. The Behavior Analyst Today, 8(2), 196-209.
  18. Kazdin, p. 284.
  19. Kazdin, p. 291.

Further reading

Kazdin, Alan (1982). Single-Case Research Designs. New York: Oxford University Press. ISBN 0195030214.