Scale analysis (statistics)
In statistics, scale analysis refers to a set of methods for analysing survey data in which responses to a set of questions (items) are combined to measure a latent variable. Items can be dichotomous (e.g. yes/no, agree/disagree, correct/incorrect) or polytomous (e.g. disagree strongly/disagree/neutral/agree/agree strongly). Any measurement constructed from such data should be reliable, valid, and homogeneous, yielding comparable results across different studies.
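As a minimal illustration of combining item responses into a single score, the Python sketch below sums five hypothetical polytomous (Likert-type) items for one respondent, reverse-scoring a negatively worded item first; the item names and scoring range are assumptions made for this example, not part of the article.

```python
# Minimal sketch (assumed example data): combining polytomous item responses
# into a single scale score by summing. Items are scored 1-5
# (disagree strongly ... agree strongly); reverse-keyed items are recoded first.

responses = {            # one respondent's answers to five hypothetical items
    "item1": 4,
    "item2": 5,
    "item3_reversed": 2,  # negatively worded, so it must be reverse-scored
    "item4": 3,
    "item5": 4,
}

def reverse_score(value, low=1, high=5):
    """Recode a reverse-keyed item so a high score always means 'more' of the latent trait."""
    return high + low - value

total = sum(
    reverse_score(v) if name.endswith("_reversed") else v
    for name, v in responses.items()
)
print(total)  # summed scale score used as a proxy for the latent variable
```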
Measurement models
Measurement is the assignment of numbers to objects in such a way that the relations between the numbers represent the relations between the objects (Michell, 1990).
Traditional models
- Likert scale
- Reliability analysis, see also Classical test theory and Cronbach's alpha (a computational sketch of Cronbach's alpha follows this list)
- Factor analysis
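Because reliability analysis via Cronbach's alpha is listed above, here is a minimal sketch of the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total scores). The respondent-by-item data matrix is an assumption invented for the example.

```python
# Minimal sketch of Cronbach's alpha on an assumed respondents-by-items matrix.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)

def cronbach_alpha(data):
    """data: list of respondents, each a list of k item scores."""
    k = len(data[0])
    items = [[row[j] for row in data] for j in range(k)]  # column-wise item scores
    totals = [sum(row) for row in data]                   # each respondent's total score

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_var_sum = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Assumed toy data: 4 respondents answering 3 Likert items.
scores = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
]
print(round(cronbach_alpha(scores), 3))
```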
Modern models based on item response theory
- Guttman scale
- Mokken scale
- Rasch model (see the sketch after this list)
- (Circular) Unfolding analysis
- Circumplex model
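As an illustration of the item response theory models listed above, the sketch below evaluates the Rasch model's item response function, P(X = 1 | θ, b) = exp(θ − b) / (1 + exp(θ − b)), where θ is a person's ability and b is the item's difficulty. The specific ability and difficulty values are assumptions chosen for the example.

```python
# Minimal sketch of the Rasch model item response function:
# P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
# where theta is the person's ability and b is the item difficulty.
import math

def rasch_probability(theta, difficulty):
    """Probability of a correct (or 'agree') response under the Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Assumed example: one person (ability 0.5) facing three items of increasing difficulty.
for b in (-1.0, 0.0, 1.5):
    print(f"difficulty {b:+.1f}: P(correct) = {rasch_probability(0.5, b):.3f}")
```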
Other models
References
- Michell, J. (1990). An Introduction to the Logic of Psychological Measurement. Hillsdale, NJ: Lawrence Erlbaum Associates.