Structural information theory

Structural information theory (SIT) is a theory about human perception and, in particular, about perceptual organization, that is, about the way the human visual system organizes a raw visual stimulus into objects and object parts. SIT was initiated, in the 1960s, by Emanuel Leeuwenberg[1][2][3] and has been developed further by Hans Buffart, Peter van der Helm, and Rob van Lier. It has been applied to a wide range of research topics, mostly in visual form perception but also in, for instance, visual ergonomics, data visualization, and music perception.

SIT began as a quantitative model of visual pattern classification. Nowadays, it also includes quantitative models of symmetry perception and amodal completion, and it is theoretically founded in formalizations of visual regularity and viewpoint dependency. SIT has been argued[4] to be the best defined and most successful extension of Gestalt ideas. It is the only Gestalt approach providing a formal calculus that generates plausible perceptual interpretations.

The simplicity principle

Although visual stimuli are fundamentally multi-interpretable, the human visual system usually has a clear preference for only one interpretation. To explain this preference, SIT introduced a formal coding model starting from the assumption that the perceptually preferred interpretation of a stimulus is the one with the simplest code. A simplest code is a code with minimum information load, that is, a code that enables a reconstruction of the stimulus using a minimum number of descriptive parameters. Such a code is obtained by capturing a maximum amount of visual regularity and yields a hierarchical organization of the stimulus in terms of wholes and parts.

The assumption that the visual system prefers simplest interpretations is called the simplicity principle.[5] Historically, the simplicity principle is an information-theoretical descendant of the Gestalt law of Prägnanz,[6] which was based on the natural tendency of physical systems to settle into stable minimum-energy states. Furthermore, like the later-proposed minimum description length principle in algorithmic information theory (AIT), it can be seen as a formalization of Occam's razor, according to which the best hypothesis for a given set of data is the one that leads to the largest compression of the data.

Structural versus algorithmic information theory

Since the 1960s, SIT (in psychology) and AIT (in computer science) have evolved independently as viable alternatives to Shannon's classical information theory, which was developed in communication theory.[7] In Shannon's approach, things are assigned codes whose lengths are based on their probabilities in terms of frequencies of occurrence (as, e.g., in Morse code). In many domains, including perception, however, such probabilities are hardly quantifiable, if at all. Both SIT and AIT circumvent this problem by turning to descriptive complexities of individual things.
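
The contrast can be made concrete with a toy calculation (the frequencies are hypothetical, not from the SIT literature): in Shannon's scheme an optimal code length follows from an item's occurrence probability, whereas SIT and AIT assign a complexity to each individual item's description, with no probabilities involved.

```python
import math

# Hypothetical occurrence frequencies for three signal types; in
# perception, such probabilities are generally not available.
freqs = {"a": 0.5, "b": 0.25, "c": 0.25}

# Shannon: an item's optimal code length follows from its probability,
# L(x) = -log2 p(x), so frequent items get short codes (as in Morse code).
shannon_lengths = {s: -math.log2(p) for s, p in freqs.items()}
print(shannon_lengths)  # {'a': 1.0, 'b': 2.0, 'c': 2.0}

# SIT and AIT instead assign each individual item a complexity derived
# from its shortest description, so no occurrence statistics are needed.
```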

Although SIT and AIT share many starting points and objectives, there are also several relevant differences:

  • First, SIT makes the perceptually relevant distinction between structural and metrical information, whereas AIT does not;
  • Second, SIT encodes for a restricted set of perceptually relevant kinds of regularities, whereas AIT encodes for any imaginable regularity;
  • Third, in SIT, the relevant outcome of an encoding is a hierarchical organization, whereas in AIT, it is a complexity value.

Simplicity versus likelihood

In visual perception research, the simplicity principle contrasts with the Helmholtzian likelihood principle,[8] which assumes that the preferred interpretation of a stimulus is the one with the highest probability of being correct in this world. As shown within a Bayesian framework using AIT findings,[9] the simplicity principle would imply that perceptual interpretations are fairly veridical (i.e., truthful) in many worlds rather than, as assumed by the likelihood principle, highly veridical in only one world. In other words, whereas the likelihood principle suggests that the visual system is a special-purpose system (i.e., dedicated to one world), the simplicity principle suggests that it is a general-purpose system (i.e., suited to many worlds).

Crucial to the latter finding is the distinction between, and integration of, viewpoint-independent and viewpoint-dependent factors in vision, as proposed in SIT's empirically successful model of amodal completion.[10] In the Bayesian framework, these factors correspond to prior probabilities and conditional probabilities, respectively. In SIT's model, however, both factors are quantified in terms of complexities, that is, complexities of objects and spatial relationships, respectively. This approach is consistent with neuroscientific ideas about the distinction and interaction between the ventral ("what") and dorsal ("where") streams in the brain.[11]
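
A toy sketch of this correspondence (with hypothetical complexity values, not those of the actual model in [10]): under Bayes' rule the preferred hypothesis maximizes the product of prior and conditional probabilities, and in the complexity formulation it minimizes the sum of the corresponding prior and conditional complexities.

```python
# Toy correspondence (hypothetical numbers, not the model in [10]):
# Bayes prefers the hypothesis H maximizing p(H) * p(D|H); the
# complexity analogue prefers the H minimizing C(H) + C(D|H), with
# complexity C playing the role of -log p.

# Two candidate interpretations of a partly occluded shape, each with
# (viewpoint-independent object complexity, viewpoint-dependent
#  complexity of the object-occluder relationship):
candidates = {
    "completed square": (2, 1),  # simple object, generic viewpoint
    "notched square":   (5, 0),  # complex object, coincidental fit
}

preferred = min(candidates, key=lambda h: sum(candidates[h]))
print(preferred)  # completed square
```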

SIT versus connectionism and dynamic systems theory

On the one hand, a representational theory like SIT seems opposed to dynamic systems theory (DST). On the other hand, connectionism can be seen as something in between: it leans toward DST in its use of differential equations and toward theories like SIT in its representation of information. In fact, the analyses provided by SIT, connectionism, and DST correspond to what Marr called the computational, the algorithmic, and the implementational levels of description, respectively. According to Marr, such analyses are complementary rather than opposite.

What SIT, connectionism, and DST have in common is that they describe nonlinear system behavior, that is, a minor change in the input may yield a major change in the output. Their complementarity expresses itself in that they focus on different aspects:

  • First, DST focuses primarily on how the state of a physical system as a whole (in this case, the brain) develops over time, whereas both SIT and connectionism focus primarily on what a system does in terms of information processing; according to both SIT and connectionism, this information processing (which, in this case, can be said to constitute cognition) thrives on interactions between bits of information.
  • Second, regarding these interactions between bits of information, connectionism focuses primarily on the nature of concrete interaction mechanisms (assuming existing bits of information suited for any input), whereas SIT focuses primarily on the nature of the (assumed to be transient, i.e., input-dependent) bits of information involved and on the nature of the outcome of the interaction between them (modelling the interaction itself in a more abstract way).

Modelling principles

In SIT, candidate interpretations of a stimulus are represented by symbol strings, in which identical symbols refer to identical perceptual primitives (e.g., blobs or edges). Every substring of such a string represents a spatially contiguous part of an interpretation, so that the entire string can be read as a reconstruction recipe for the interpretation and, thereby, for the stimulus. These strings are then encoded (i.e., searched for visual regularities) to find the interpretation with the simplest code.
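
As a minimal illustration (using a toy load measure, not SIT's full coding model), hand-written candidate codes for one string can be compared by counting the primitive symbols each code needs:

```python
# Toy comparison of hand-written candidate codes for the string
# "abababab" in SIT-like notation; the information load is counted
# here simply as the number of primitive (lowercase) symbols.

def load(code: str) -> int:
    """Count primitive symbols; operators and digits carry no load."""
    return sum(ch.isalpha() and ch.islower() for ch in code)

candidates = {
    "abababab": "verbatim: no regularity captured",
    "2*(abab)": "iteration: two repeats of 'abab'",
    "4*(ab)":   "iteration: four repeats of 'ab'",
}

for code, gloss in candidates.items():
    print(f"{code:10} load = {load(code)}   ({gloss})")

# The simplest code, 4*(ab) with load 2, organizes the stimulus as one
# whole (the repeat) built from one part ('ab').
```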

In SIT's formal coding model, this encoding is modelled by way of symbol manipulation. In psychology, this has led to critical statements of the sort "SIT assumes that the brain performs symbol manipulation". Such statements, however, fall in the same category as statements such as "physics assumes that nature applies formulas such as Einstein's E = mc² or Newton's F = ma" and "DST models assume that dynamic systems apply differential equations". That is, these statements ignore that the very concept of formalization means that things are represented by symbols and that relationships between these things are captured by formulas or, in the case of SIT, by simplest codes.

Visual regularity

To obtain simplest codes, SIT applies coding rules that capture the kinds of regularity called iteration, symmetry, and alternation. These have been shown[12] to be the only regularities that satisfy the formal accessibility criteria of

  • (a) being so-called holographic regularities that
  • (b) allow for so-called hierarchically transparent codes.

A crucial difference with respect to the traditional, so-called transformational, formalization of visual regularity is that, holographically, mirror symmetry is composed of many relationships between symmetry pairs rather than one relationship between symmetry halves. Whereas the transformational characterization may be better suited for object recognition, the holographic characterization seems more consistent with the buildup of mental representations in object perception.
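
The difference can be shown with a small sketch (a toy illustration, not the formal treatment in [13]): for a mirror-symmetric string, the holographic view yields one identity relation per symmetry pair, whereas the transformational view yields a single mapping between the two halves.

```python
# Toy contrast for mirror symmetry in a string (not the formal account
# in [13]): holographically, the symmetry consists of one identity
# relation per symmetry pair; transformationally, it is one mapping
# between the string's two halves.

def symmetry_pairs(s: str) -> list[tuple[str, str]]:
    """Return the mirrored element pairs of a string."""
    n = len(s)
    return [(s[i], s[n - 1 - i]) for i in range(n // 2)]

stimulus = "abcddcba"
pairs = symmetry_pairs(stimulus)
print(pairs)  # [('a', 'a'), ('b', 'b'), ('c', 'c'), ('d', 'd')]
print(f"{len(pairs)} holographic pair relations vs 1 transformational relation")
```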

The perceptual relevance of the criteria of holography and transparency has been verified in the so-called holographic approach to visual regularity.[13] This approach provides an empirically successful model of the detectability of single and combined visual regularities, whether or not perturbed by noise. Furthermore, the transparent holographic regularities have been shown to lend themselves to transparallel processing, which means that, in the process of selecting a simplest code from among all possible codes, O(2^N) codes can be taken into account as if only one code of length N were concerned.[14] This supports the computational tractability of simplest codes and, thereby, the feasibility of the simplicity principle in perceptual organization.
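
To give a feel for the size of this search space (a toy counting exercise; the hyperstring mechanism of [14] is not sketched here), even the number of ways to chunk a string of length N into contiguous parts, before any coding rule is applied, is already 2^(N-1):

```python
# Toy counting exercise: the number of ways to chunk a string of
# length N into contiguous, non-empty parts is 2**(N-1), so the set of
# candidate codes grows exponentially with N. (The hyperstring
# mechanism behind transparallel processing [14] is not sketched here.)

def chunkings(s: str):
    """Yield every partition of s into contiguous, non-empty chunks."""
    if not s:
        yield []
        return
    for i in range(1, len(s) + 1):
        for rest in chunkings(s[i:]):
            yield [s[:i]] + rest

for n in range(1, 6):
    count = sum(1 for _ in chunkings("a" * n))
    print(n, count, 2 ** (n - 1))  # the two counts match for every n
```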

References

  1. ^ Leeuwenberg, E. L. J. (1968). Structural information of visual patterns: an efficient coding system in perception. The Hague: Mouton.
  2. ^ Leeuwenberg, E. L. J. (1969). Quantitative specification of information in sequential patterns. Psychological Review, 76, 216-220.
  3. ^ Leeuwenberg, E. L. J. (1971). A perceptual coding language for visual and auditory patterns. American Journal of Psychology, 84, 307-349.
  4. ^ Palmer, S. E. (1999). Vision science: Photons to phenomenology. Cambridge, MA: MIT Press.
  5. ^ Hochberg, J. E., & McAlister, E. (1953). A quantitative approach to figural "goodness". Journal of Experimental Psychology, 46, 361-364.
  6. ^ Koffka, K. (1935). Principles of gestalt psychology. London: Routledge & Kegan Paul.
  7. ^ Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423, 623-656.
  8. ^ von Helmholtz, H. L. F. (1962). Treatise on Physiological Optics (J. P. C. Southall, Trans.). New York: Dover. (Original work published 1909)
  9. ^ van der Helm, P. A. (2000). Simplicity versus likelihood in visual perception: From surprisals to precisals. Psychological Bulletin, 126, 770-800.
  10. ^ van Lier, R. J., van der Helm, P. A., & Leeuwenberg, E. L. J. (1994). Integrating global and local aspects of visual occlusion. Perception, 23, 883-903.
  11. ^ Ungerleider, L. G., & Mishkin, M. (1982). Two cortical visual systems. In D. J. Ingle, M. A. Goodale, & R. J. W. Mansfield (Eds.), Analysis of Visual Behavior (pp. 549-586). Cambridge, MA: MIT Press.
  12. ^ van der Helm, P. A., & Leeuwenberg, E. L. J. (1991). Accessibility, a criterion for regularity and hierarchy in visual pattern codes. Journal of Mathematical Psychology, 35, 151-213.
  13. ^ van der Helm, P. A., & Leeuwenberg, E. L. J. (1996). Goodness of visual regularities: A nontransformational approach. Psychological Review, 103, 429-456.
  14. ^ van der Helm, P. A. (2004). Transparallel processing by hyperstrings. Proceedings of the National Academy of Sciences USA, 101(30), 10862-10867.