Semantic memory

Semantic memory is one of the two types of declarative or explicit memory (our memory of facts or events that is explicitly stored and retrieved).[1] Semantic memory refers to general world knowledge that we have accumulated throughout our lives.[2] This general knowledge (facts, ideas, meanings and concepts) is intertwined with experience and dependent on culture. Semantic memory is distinct from episodic memory, which is our memory of experiences and specific events that occur during our lives and that we can recreate at any given point.[3] For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat. We can learn about new concepts by applying knowledge learned from things in the past.[4] The counterpart to declarative, or explicit, memory is procedural memory, or implicit memory.[5]

History

The idea of semantic memory was first introduced following a 1972 conference between Endel Tulving, of the University of Toronto, and W. Donaldson on the role of organization in human memory. Tulving constructed a proposal to distinguish between episodic memory and what he termed semantic memory.[6] He was mainly influenced by the ideas of Reiff and Scheerer, who in 1959 had distinguished two primary forms of memory: one titled remembrances and the other memoria.[7] The remembrance concept dealt with memories that contained the experiences of an autobiographic index, whereas the memoria concept dealt with memories that lacked such an index.[8] Semantic memory was to reflect our knowledge of the world around us. It holds generic information that is more than likely acquired across various contexts and can be used across different situations. According to Madigan in his book titled Memory, semantic memory is the sum of all knowledge you have obtained, whether it be your vocabulary, your understanding of mathematics, or all the facts you know. In his chapter titled "Episodic and Semantic Memory", Endel Tulving adopted the term semantic from linguists to refer to a system of memory for "words and verbal symbols, their meanings and referents, the relations between them, and the rules, formulas, or algorithms for influencing them".[9] The use of semantic memory is quite different from that of episodic memory: semantic memory refers to general facts and meanings we share with others, whereas episodic memory refers to unique and concrete personal experiences. Tulving's proposal of this distinction between semantic and episodic memory was widely accepted, mainly because it allowed the separate conceptualization of knowledge of the world.[10] Tulving discusses these separate systems of conceptualization of episodic and semantic memory in his book Elements of Episodic Memory.[11] He states that episodic and semantic memory differ with regard to several factors, including:

  1. the characteristics of their operations,
  2. the kind of information they process, and
  3. their application to the real world as well as the memory laboratory.

Before this proposal by Tulving, this area of human memory had been neglected by experimental psychologists. Since then, a number of experimenters have conducted tests to determine the validity of Tulving's hypothesized distinction between episodic and semantic memory.

Recent research has focused on the idea that when people access a word's meaning, the sensorimotor information used to perceive and act on the concrete object the word refers to is automatically activated. In the theory of grounded cognition, the meaning of a particular word is grounded in the sensorimotor systems.[12] For example, when one thinks of a pear, knowledge of the grasping, chewing, sights, sounds, and tastes used to encode episodic experiences of a pear is reinstated through sensorimotor simulation. A grounded simulation approach refers to context-specific re-activations that integrate the important features of episodic experience into a current depiction. Such research has challenged previously used amodal views, in which the brain encodes multiple inputs, such as words and pictures, as abstract representations that integrate into a larger conceptual idea. Under these earlier views, semantic memory representations were treated as amodal redescriptions of modality-specific states rather than as representations within modality-specific systems themselves. Some amodal accounts of category-specific semantic deficits remain, even though researchers are beginning to find support for theories in which knowledge is tied to modality-specific brain regions. This research defines a clear link between episodic experiences and semantic memory. The concept that semantic representations are grounded across modality-specific brain regions is supported by the fact that episodic and semantic memory appear to function in different yet mutually dependent ways. The distinction between semantic and episodic memory has become part of the broader scientific discourse. For example, it has been speculated that semantic memory captures the stable aspects of our personality, while episodes of illness may have a more episodic nature.[13]

Empirical evidence

Kihlstrom (1980): Experiment 1

In this study, four groups of university students, varying in their levels of hypnotic susceptibility, were hypnotized. While under hypnosis they learned a list of 16 common words using a multi-trial free recall method. Once the subjects were able to recall the list perfectly twice in succession, they were told that after awakening they would not remember having learned any of the words while under hypnosis. However, when given a prearranged signal from the experimenter, they would remember not only having learned the words but also the words from the list themselves.

During stage one of the experiment (after the subjects were awakened), the number of words recalled by the subjects was used as a measure of performance on the episodic task of free recall. Most subjects remembered learning the list of words.

During the second stage, semantic memory performance was assessed. Each subject was given a semantic free association test (in which stimulus words were given to elicit the learned words).

As mentioned previously, the subjects represented various levels of hypnotic susceptibility as determined by their scores on the Stanford Hypnotic Susceptibility Scale. They were grouped according to their score.

The semantic free association probabilities were roughly the same across the hypnotized groups. However, the episodic free recall probabilities differed significantly across the groups: the percentage recalled increased as the subjects' hypnotizability decreased. The subjects in the very high susceptibility group recalled almost nothing, whereas the medium and low groups recalled 86% of the learned words.

That performance on the free association test was unrelated to hypnotic susceptibility, while free recall was, shows that the amnesia induced after hypnosis affected memory for the word events that occurred in the study phase rather than the semantic associations themselves.

This study provides evidence that supports the episodic/semantic distinction hypothesized by Tulving.

Jacoby and Dallas (1981)[14]

This study was not designed solely to provide evidence for the distinction between semantic and episodic memory stores. However, it used the experimental dissociation method, which provides evidence for Tulving's hypothesis.

Part one

Subjects were presented with 60 words (one at a time) and were asked different questions.

Part Two

In the second phase of the experiment, 60 "old" words seen in stage one and 20 "new" words not shown in stage one were presented to the subjects one at a time.

The subjects were given one of two tasks:

Results:

Conclusion:

The results display a strong dissociation between performance on the episodic and semantic tasks, thus supporting Tulving's hypothesis.

Kelley et al. (2014)[15]

This experiment examined whether the serial position functions observed when participants reconstructed the order of books or movies differed as a function of whether they "remembered" the item in question or just "knew" that the particular item occurred in a distinct order. One hundred eighty undergraduate students from Lake Forest College completed the study in groups of approximately 40 students in a classroom setting. Each session lasted approximately 15 minutes.

"Part One"

Participants received a list of 7 book titles and 18 film titles. The instructions were to provide a familiarity rating for each book or film on a scale of 1-5.

"Part Two"

Participants received three separate free construction of order tasks. They were asked to reconstruct the original order of release for the books/films. Each book/film was reordered randomly and paired with a letter from the alphabet.

"Results"

The serial position functions observed, especially the recency effect, did not differ as a function of whether the participant had episodic awareness of the learning event. When recreating the order of the 7 books, "remember" serial position functions were indistinguishable from "know" serial position functions, with one exception: a significant difference in familiarity ratings when considering just the second-to-last and final positions. The data from the movies paralleled the data from the books: when reconstructing the order of movies, "remember" serial position functions were essentially indistinguishable from "know" serial position functions.

"Conclusion"

The authors concluded that, contrary to Tulving's (1985b)[16] original hypothesis, "remember-know" judgments do not accurately reflect the memory system supporting performance.

Models

The essence of semantic memory is that its contents are not tied to any particular instance of experience, as in episodic memory. Instead, what is stored in semantic memory is the "gist" of experience, an abstract structure that applies to a wide variety of experiential objects and delineates categorical and functional relationships between such objects.[17] Thus, a complete theory of semantic memory must account not only for the representational structure of such "gists", but also for how they can be extracted from experience. Numerous models of semantic memory have been proposed; they are summarized below.

Network models

Networks of various sorts play an integral part in many theories of semantic memory. Generally speaking, a network is composed of a set of nodes connected by links. The nodes may represent concepts, words, perceptual features, or nothing at all. The links may be weighted such that some are stronger than others or, equivalently, have a length such that some links take longer to traverse than others. All these features of networks have been employed in models of semantic memory, examples of which are found below.

Teachable Language Comprehender (TLC)

One of the first examples of a network model of semantic memory is the Teachable Language Comprehender (TLC).[18] In this model, each node is a word, representing a concept (like "Bird"). With each node is stored a set of properties (like "can fly" or "has wings") as well as pointers (i.e., links) to other nodes (like "Chicken"). A node is directly linked to those nodes of which it is either a subclass or superclass (i.e., "Bird" would be connected to both "Chicken" and "Animal"). Thus, TLC is a hierarchical knowledge representation in that high-level nodes representing large categories are connected (directly or indirectly, via the nodes of subclasses) to many instances of those categories, whereas nodes representing specific instances are at a lower level, connected only to their superclasses. Furthermore, properties are stored at the highest category level to which they apply. For example, "is yellow" would be stored with "Canary", "has wings" would be stored with "Bird" (one level up), and "can move" would be stored with "Animal" (another level up). Nodes may also store negations of the properties of their superordinate nodes (i.e., "NOT-can fly" would be stored with "penguin"). This provides an economy of representation in that properties are only stored at the category level at which they become essential, that is, at which point they become critical features (see below).

Processing in TLC is a form of spreading activation.[19] That is, when a node becomes active, that activation spreads to other nodes via the links between them. In that case, the time to answer the question "Is a chicken a bird?" is a function of how far the activation between the nodes for "Chicken" and "Bird" must spread, i.e., the number of links between the nodes "Chicken" and "Bird".
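To make the hierarchy and the spreading-activation retrieval concrete, the following Python sketch encodes a tiny TLC-style network. It is an illustrative toy under assumed node names and property lists, not Collins and Quillian's original program; response time is approximated simply by the number of links traversed.

```python
# Toy sketch of a TLC-style hierarchical network (illustrative only).
# Each node stores its own properties plus a pointer to its superclass;
# properties are kept at the highest level to which they apply.
NETWORK = {
    "animal":  {"isa": None,     "props": {"can move"}},
    "bird":    {"isa": "animal", "props": {"has wings", "can fly"}},
    "canary":  {"isa": "bird",   "props": {"is yellow"}},
    "penguin": {"isa": "bird",   "props": {"NOT-can fly"}},
    "chicken": {"isa": "bird",   "props": set()},
}

def is_a(instance, category):
    """Return (answer, links_traversed); more links = slower response in TLC."""
    node, links = instance, 0
    while node is not None:
        if node == category:
            return True, links
        node = NETWORK[node]["isa"]
        links += 1
    return False, links

def has_property(instance, prop):
    """Walk up the hierarchy, letting lower-level negations override."""
    node = instance
    while node is not None:
        if "NOT-" + prop in NETWORK[node]["props"]:
            return False
        if prop in NETWORK[node]["props"]:
            return True
        node = NETWORK[node]["isa"]
    return False

print(is_a("chicken", "bird"))             # (True, 1) -> fast verification
print(is_a("chicken", "animal"))           # (True, 2) -> slower verification
print(has_property("penguin", "can fly"))  # False, via the stored negation
```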

The original version of TLC did not put weights on the links between nodes. This version performed comparably to humans in many tasks, but failed to predict that people would respond faster to questions regarding more typical category instances than those involving less typical instances.[20] Collins and Quillian later updated TLC to include weighted connections to account for this effect.[21] This updated TLC is capable of explaining both the familiarity effect and the typicality effect. Its biggest advantage is that it clearly explains priming: you are more likely to retrieve information from memory if related information (the "prime") has been presented a short time before. There are still a number of memory phenomena for which TLC has no account, including why people are able to respond quickly to obviously false questions (like "is a chicken a meteor?"), when the relevant nodes are very far apart in the network.[22]

Semantic networks

TLC is an instance of a more general class of models known as semantic networks. In a semantic network, each node is to be interpreted as representing a specific concept, word, or feature. That is, each node is a symbol. Semantic networks generally do not employ distributed representations for concepts, as may be found in a neural network. The defining feature of a semantic network is that its links are almost always directed (that is, they only point in one direction, from a base to a target) and the links come in many different types, each one standing for a particular relationship that can hold between any two nodes.[23] Processing in a semantic network often takes the form of spreading activation (see above).
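A minimal sketch of such a network with typed, directed links might look like the following; the relation names and triples are illustrative assumptions rather than any standard knowledge base.

```python
# Sketch of a typed, directed semantic network: each link has a relation type
# and points from a base node to a target node.
edges = [
    ("canary", "is-a", "bird"),
    ("bird", "is-a", "animal"),
    ("canary", "has-property", "yellow"),
    ("bird", "has-part", "wings"),
]

def related(source, relation):
    """Return all targets reachable from `source` by links of one relation type."""
    return [t for (s, r, t) in edges if s == source and r == relation]

print(related("canary", "is-a"))      # ['bird']
print(related("bird", "has-part"))    # ['wings']
```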

Semantic networks see the most use in models of discourse and logical comprehension, as well as in Artificial Intelligence.[24] In these models, the nodes correspond to words or word stems and the links represent syntactic relations between them. For an example of a computational implementation of semantic networks in knowledge representation, see Cravo and Martins (1993).[25]

Feature models

Feature models view semantic categories as being composed of relatively unstructured sets of features. The semantic feature-comparison model, proposed by Smith, Shoben, and Rips (1974),[26] describes memory as being composed of feature lists for different concepts. According to this view, the relations between categories are not directly retrieved but indirectly computed. For example, subjects might verify a sentence by comparing the feature sets that represent its subject and predicate concepts. Such computational feature-comparison models include those proposed by Meyer (1970),[27] Rips (1975),[28] and Smith et al. (1974).[26]
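A minimal sketch of this kind of feature comparison, under assumed feature lists and a simple overlap measure (not the specific parameters of Smith, Shoben, and Rips), is shown below.

```python
# Illustrative feature-comparison sketch: category membership is computed
# from the overlap of unstructured feature sets rather than retrieved directly.
FEATURES = {
    "robin":  {"has wings", "has feathers", "flies", "sings", "small"},
    "bird":   {"has wings", "has feathers", "flies"},
    "mammal": {"has fur", "gives milk"},
}

def feature_overlap(concept, category):
    """Proportion of the category's features shared by the concept (0..1)."""
    shared = FEATURES[concept] & FEATURES[category]
    return len(shared) / len(FEATURES[category])

# A sentence like "A robin is a bird" is verified if the overlap is high enough.
print(feature_overlap("robin", "bird"))    # 1.0 -> fast "true"
print(feature_overlap("robin", "mammal"))  # 0.0 -> fast "false"
```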

Early work in perceptual and conceptual categorization assumed that categories had critical features and that category membership could be determined by logical rules for the combination of features. More recent theories have accepted that categories may have an ill-defined or "fuzzy" structure[29] and have proposed probabilistic or global similarity models for the verification of category membership.[30]

Associative models

The "association"—a relationship between two pieces of information—is a fundamental concept in psychology, and associations at various levels of mental representation are essential to models of memory and cognition in general. The set of associations among a collection of items in memory is equivalent to the links between nodes in a network, where each node corresponds to a unique item in memory. Indeed, neural networks and semantic networks may be characterized as associative models of cognition. However, associations are often more clearly represented as an N×N matrix, where N is the number of items in memory. Thus, each cell of the matrix corresponds to the strength of the association between the row item and the column item.

Learning of associations is generally believed to be a Hebbian process: whenever two items in memory are simultaneously active, the association between them grows stronger, and either item becomes more likely to activate the other. See below for specific operationalizations of associative models.
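A toy Hebbian update on such an N×N association matrix might look like the following sketch; the items and learning rate are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative Hebbian update on an N x N association matrix:
# whenever two items are active together, their association strengthens.
items = ["dog", "bone", "cat"]
n = len(items)
A = np.zeros((n, n))          # A[i, j] = strength of association i -> j
LEARNING_RATE = 0.1           # assumed value, for illustration only

def coactivate(i, j, matrix, rate=LEARNING_RATE):
    """Strengthen the (symmetric) association between two co-active items."""
    matrix[i, j] += rate
    matrix[j, i] += rate

# "dog" and "bone" occur together three times, "dog" and "cat" once.
for _ in range(3):
    coactivate(items.index("dog"), items.index("bone"), A)
coactivate(items.index("dog"), items.index("cat"), A)

print(A[items.index("dog"), items.index("bone")])  # ~0.3: stronger association
print(A[items.index("dog"), items.index("cat")])   # 0.1:  weaker association
```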

Search of Associative Memory (SAM)

A standard model of memory that employs association in this manner is the Search of Associative Memory (SAM) model.[31] Though SAM was originally designed to model episodic memory, its mechanisms are sufficient to support some semantic memory representations as well.[32] The SAM model contains a short-term store (STS) and a long-term store (LTS), where the STS is a briefly activated subset of the information in the LTS. The STS has limited capacity and affects the retrieval process by limiting the amount of information that can be sampled and the time the sampled subset remains in an active mode. Retrieval from the LTS is cue-dependent and probabilistic: a cue initiates the retrieval process, and items are sampled from memory probabilistically. The probability of an item being sampled depends on the strength of its association with the cue, so more strongly associated items are more likely to be sampled, and eventually one item is chosen. The buffer size is defined as r rather than being a fixed number, and as items are rehearsed in the buffer their associative strengths grow linearly as a function of the total time spent inside the buffer.[33] In SAM, when any two items simultaneously occupy the working memory buffer, the strength of their association is incremented; thus, items that co-occur more often are more strongly associated. Items in SAM are also associated with a specific context, where the strength of that association is determined by how long each item is present in that context. In SAM, then, memories consist of a set of associations between items in memory and between items and contexts. A given set of items and/or a context is therefore likely to evoke some subset of the items in memory, and the degree to which items evoke one another—either by virtue of their shared context or their co-occurrence—is an indication of the items' semantic relatedness.
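The cue-dependent, probabilistic sampling rule can be sketched as follows. The cue-to-item strengths are made-up numbers, and the sketch omits the rest of SAM (the buffer, recovery, and strength-increment rules).

```python
import random

# Sketch of SAM-style sampling: the probability of sampling an item from the
# long-term store is proportional to its association strength with the cue.
cue_strengths = {        # assumed strengths between one cue and stored items
    "apple": 3.0,
    "pear":  1.5,
    "car":   0.5,
}

def sample_item(strengths, rng=random):
    """Sample one item with probability strength_i / sum of all strengths."""
    total = sum(strengths.values())
    r = rng.uniform(0, total)
    cumulative = 0.0
    for item, s in strengths.items():
        cumulative += s
        if r <= cumulative:
            return item
    return item  # fallback for floating-point edge cases

# Strongly associated items are sampled most often.
samples = [sample_item(cue_strengths) for _ in range(10000)]
print(samples.count("apple") / len(samples))  # roughly 0.6 (= 3.0 / 5.0)
```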

In an updated version of SAM, pre-existing semantic associations are accounted for using a semantic matrix. During an experiment, these semantic associations remain fixed, reflecting the assumption that semantic associations are not significantly affected by the episodic experience of a single experiment. The two measures of semantic relatedness used in this model are latent semantic analysis (LSA) and word association spaces (WAS).[34] The LSA method holds that similarity between words is reflected in their co-occurrence in a local context.[35] WAS was developed by analyzing a database of free association norms; in WAS, "words that have similar associative structures are placed in similar regions of space."[36]

ACT-R: a production system model

The ACT (Adaptive Control of Thought)[37] (and later ACT-R (Adaptive Control of Thought-Rational)[38]) theory of cognition represents declarative memory (of which semantic memory is a part) with "chunks", which consist of a label, a set of defined relationships to other chunks (i.e., "this is a _", or "this has a _"), and any number of chunk-specific properties. Chunks, then, can be mapped as a semantic network, given that each node is a chunk with its unique properties, and each link is the chunk’s relationship to another chunk. In ACT, a chunk’s activation decreases as a function of the time since the chunk was created and increases with the number of times the chunk has been retrieved from memory. Chunks can also receive activation from Gaussian noise, and from their similarity to other chunks. For example, if "chicken" is used as a retrieval cue, "canary" will receive activation by virtue of its similarity to the cue (i.e., both are birds, etc.). When retrieving items from memory, ACT looks at the most active chunk in memory; if it is above threshold, it is retrieved, otherwise an "error of omission" has occurred, i.e., the item has been forgotten. There is, additionally, a retrieval latency, which varies inversely with the amount by which the activation of the retrieved chunk exceeds the retrieval threshold. This latency is used in measuring the response time of the ACT model, to compare it to human performance.[39]
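A stripped-down sketch of this retrieval rule is given below. It includes only base-level activation and noise (not similarity-based activation), and the decay exponent, noise level, threshold, and latency factor are illustrative assumptions rather than ACT-R's calibrated parameters.

```python
import math
import random

# Simplified sketch of ACT-style chunk retrieval: activation grows with use,
# decays with time, is perturbed by noise, and must exceed a threshold.
DECAY = 0.5            # assumed decay exponent
THRESHOLD = 0.0        # assumed retrieval threshold
NOISE_SD = 0.25        # assumed Gaussian noise standard deviation
LATENCY_F = 1.0        # assumed latency scaling factor

def base_level_activation(use_times, now):
    """Base-level activation from past uses (more recent/frequent = higher)."""
    return math.log(sum((now - t) ** -DECAY for t in use_times))

def retrieve(chunks, now):
    """Return (chunk, latency) for the most active chunk, or (None, None)."""
    best, best_a = None, float("-inf")
    for name, use_times in chunks.items():
        a = base_level_activation(use_times, now) + random.gauss(0, NOISE_SD)
        if a > best_a:
            best, best_a = name, a
    if best_a < THRESHOLD:
        return None, None                       # "error of omission": forgotten
    latency = LATENCY_F * math.exp(-(best_a - THRESHOLD))  # higher activation -> faster
    return best, latency

chunks = {"canary": [1.0, 5.0, 9.0], "chicken": [2.0]}   # past retrieval times
print(retrieve(chunks, now=10.0))
```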

While ACT is a model of cognition in general, and not memory in particular, it nonetheless posits certain features of the structure of memory, as described above. In particular, ACT models memory as a set of related symbolic chunks which may be accessed by retrieval cues. While the model of memory employed in ACT is similar in some ways to a semantic network, the processing involved is more akin to an associative model.

Statistical models

Some models characterize the acquisition of semantic information as a form of statistical inference from a set of discrete experiences, distributed across a number of "contexts". Though these models differ in specifics, they generally employ an (Item × Context) matrix where each cell represents the number of times an item in memory has occurred in a given context. Semantic information is gleaned by performing a statistical analysis of this matrix.

Many of these models bear similarity to the algorithms used in search engines (for example, see Griffiths, et al., 2007[40] and Anderson, 1990[41]), though it is not yet clear whether they really use the same computational mechanisms.

Latent Semantic Analysis (LSA)

Perhaps the most popular of these models is Latent Semantic Analysis (LSA).[42] In LSA, a T × D matrix is constructed from a text corpus where T is the number of terms in the corpus and D is the number of documents (here "context" is interpreted as "document" and only words—or word phrases—are considered as items in memory). Each cell in the matrix is then transformed according to the equation:

\mathbf{M}_{t,d}'=\frac{\ln{(1 + \mathbf{M}_{t,d})}}{-\sum_{i=0}^D P(i|t) \ln{P(i|t)}}

where P(i|t) is the probability that context i is active, given that item t has occurred (this is obtained simply by dividing the raw frequency, \mathbf{M}_{t,d} by the total of the item vector, \sum_{i=0}^D \mathbf{M}_{t,i}). This transformation—applying the logarithm, then dividing by the information entropy of the item over all contexts—provides for greater differentiation between items and effectively weights items by their ability to predict context, and vice versa (that is, items that appear across many contexts, like "the" or "and", will be weighted less, reflecting their lack of semantic information). A Singular Value Decomposition (SVD) is then performed on the matrix \mathbf{M}', which allows the number of dimensions in the matrix to be reduced, thus clustering LSA's semantic representations and providing for indirect association between items. For example, "cat" and "dog" may never appear together in the same context, so their close semantic relationship may not be well-captured by LSA's original matrix \mathbf{M}. However, by performing the SVD and reducing the number of dimensions in the matrix, the context vectors of "cat" and "dog"—which would be very similar—would migrate toward one another and perhaps merge, thus allowing "cat" and "dog" to act as retrieval cues for each other, even though they may never have co-occurred. The degree of semantic relatedness of items in memory is given by the cosine of the angle between the items' context vectors (ranging from 1 for perfect synonyms to 0 for no relationship). Essentially, then, two words are closely semantically related if they appear in similar types of documents.
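The log-entropy transform, SVD, and cosine comparison described above can be sketched on a toy corpus as follows; the corpus, the handling of zero-entropy terms, and the number of retained dimensions are assumptions made for illustration.

```python
import numpy as np

# Toy LSA sketch: term-by-document counts -> log-entropy weighting -> SVD.
docs = ["the cat sat", "the dog sat", "the cat and the dog", "stocks fell"]
terms = sorted({w for d in docs for w in d.split()})
M = np.array([[d.split().count(t) for d in docs] for t in terms], dtype=float)

# P(i|t): raw frequency divided by the item's row total; then the transform
# M'_{t,d} = ln(1 + M_{t,d}) divided by the entropy of the term over documents.
P = M / M.sum(axis=1, keepdims=True)
with np.errstate(divide="ignore", invalid="ignore"):
    plogp = np.where(P > 0, P * np.log(P), 0.0)
entropy = -plogp.sum(axis=1)
entropy[entropy == 0] = 1.0          # terms in a single document: avoid /0
M_prime = np.log(1 + M) / entropy[:, None]

# Reduce to k dimensions with SVD; semantic relatedness = cosine similarity.
U, S, Vt = np.linalg.svd(M_prime, full_matrices=False)
k = 2                                 # assumed number of retained dimensions
term_vecs = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "cat" and "dog" appear in similar documents, so their vectors end up close.
print(cosine(term_vecs[terms.index("cat")], term_vecs[terms.index("dog")]))
```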

Hyperspace Analogue to Language (HAL)

The Hyperspace Analogue to Language (HAL) model[43][44] considers context only as the words that immediately surround a given word. HAL computes an N×N matrix, where N is the number of words in its lexicon, using a 10-word reading frame that moves incrementally through a corpus of text. As in SAM (see above), any time two words are simultaneously in the frame, the association between them is increased; that is, the corresponding cell in the N×N matrix is incremented. The amount by which the association is incremented varies inversely with the distance between the two words in the frame (specifically, \Delta = 11 - d, where d is the distance between the two words in the frame). As in LSA (see above), the semantic similarity between two words is given by the cosine of the angle between their vectors (dimension reduction may be performed on this matrix as well). In HAL, then, two words are semantically related if they tend to appear with the same words. Note that this may hold true even when the words being compared never actually co-occur (e.g., "chicken" and "canary").
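A sketch of HAL-style counting with a moving window is shown below. The corpus is a toy example, only forward co-occurrences are counted (the full HAL procedure also tracks preceding context), and the Δ = 11 − d weighting follows the description above.

```python
import numpy as np

# Sketch of HAL-style co-occurrence: a window slides over the text and each
# pair of words inside it is incremented by 11 - d, where d is their distance.
text = "the canary sang while the chicken pecked the yellow canary".split()
vocab = sorted(set(text))
index = {w: i for i, w in enumerate(vocab)}
WINDOW = 10                                   # HAL's 10-word reading frame

C = np.zeros((len(vocab), len(vocab)))
for pos, word in enumerate(text):
    # Look ahead up to WINDOW words; closer neighbours get larger increments.
    for d in range(1, WINDOW + 1):
        if pos + d >= len(text):
            break
        neighbour = text[pos + d]
        C[index[word], index[neighbour]] += (WINDOW + 1) - d   # delta = 11 - d

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two words count as related if they co-occur with the same words,
# even if they rarely co-occur with each other.
print(cosine(C[index["canary"]], C[index["chicken"]]))
```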

Other statistical models of semantic memory

The success of LSA and HAL gave birth to a whole field of statistical models of language. A more up-to-date list of such models may be found under the topic Measures of semantic relatedness.

Location of semantic memory in the brain

The cognitive neuroscience of semantic memory is a somewhat controversial issue with two dominant views.

On the one hand, many researchers and clinicians believe that semantic memory is stored by the same brain systems involved in episodic memory. These include the medial temporal lobes (MTL) and hippocampal formation. In this system, the hippocampal formation "encodes" memories, or makes it possible for memories to form at all, and the cortex stores memories after the initial encoding process is completed.

Recently, new evidence has been presented in support of a more precise interpretation of this hypothesis. The hippocampal formation includes, among other structures: the hippocampus itself, the entorhinal cortex, and the perirhinal cortex. These latter two make up the "parahippocampal cortices". Amnesics with damage to the hippocampus but some spared parahippocampal cortex were able to demonstrate some degree of intact semantic memory despite a total loss of episodic memory. This strongly suggests that encoding of information leading to semantic memory does not have its physiological basis in the hippocampus.[45]

Other researchers believe the hippocampus is only involved in episodic memory and spatial cognition. This raises the question of where semantic memory may be located. Some believe semantic memory lives in the temporal neocortex. Others believe that semantic knowledge is widely distributed across all brain areas. To illustrate this latter view, consider your knowledge of dogs. Researchers holding the 'distributed semantic knowledge' view believe that your knowledge of the sound a dog makes exists in your auditory cortex, whilst your ability to recognize and imagine the visual features of a dog resides in your visual cortex. Recent evidence supports the idea that the temporal pole, bilaterally, is the convergence zone for unimodal semantic representations into a multimodal representation. These regions are particularly vulnerable to damage in semantic dementia, which is characterised by a global semantic deficit.

Neural correlates and biological workings

The hippocampal areas are important to semantic memory's involvement with declarative memory. The left inferior prefrontal cortex (PFC) and the left posterior temporal areas are other areas involved in semantic memory use. Temporal lobe damage affecting the lateral and medial cortices has been related to semantic impairments. Damage to different areas of the brain affects semantic memory differently.[46]

Neuroimaging evidence suggests that left hippocampal areas show an increase in activity during semantic memory tasks. During semantic retrieval, two regions in the right middle frontal gyrus and an area of the right inferior temporal gyrus similarly show an increase in activity.[46] Damage to areas involved in semantic memory results in various deficits, depending on the area and type of damage. For instance, Lambon Ralph, Lowe, and Rogers (2007) found that category-specific impairments can occur, in which patients have different knowledge deficits for one semantic category over another, depending on the location and type of damage. Category-specific impairments might indicate that knowledge relies differentially upon sensory and motor properties encoded in separate areas (Farah and McClelland, 1991).[47]

Category-specific impairments can involve cortical regions where living and nonliving things are represented and where feature and conceptual relationships are represented. Depending on the damage to the semantic system, one type might be favored over the other; in many cases one domain is spared relative to the other (e.g., representation of living and nonliving things over feature and conceptual relationships, or vice versa).[48]

Different diseases and disorders can affect the biological workings of semantic memory. A variety of studies have attempted to determine their effects on various aspects of semantic memory. For example, Lambon Ralph, Lowe, and Rogers (2007) studied the different effects that semantic dementia and herpes simplex virus encephalitis have on semantic memory. They found that semantic dementia produces a more generalized semantic impairment, whereas deficits resulting from herpes simplex virus encephalitis tend to be more category-specific. Other disorders that affect semantic memory, such as Alzheimer's disease, have been observed clinically as errors in naming, recognizing, or describing objects; researchers have attributed such impairment to degradation of semantic knowledge (Koenig et al., 2007).[49]

Various neuroimaging findings and other research point to semantic memory and episodic memory resulting from distinct areas in the brain. Still other research suggests that both semantic memory and episodic memory are part of a single declarative memory system, yet represent different sectors within the greater whole. Different areas within the brain are activated depending on whether semantic or episodic memory is accessed. Experts still debate whether the two types of memory arise from distinct systems or whether the neuroimaging merely makes it appear that way because different mental processes are activated during retrieval.[50]

Disorders

In order to understand semantic memory disorders, one must first understand how these disorders affect memory. Semantic memory disorders fractionate into two categories: semantic category-specific impairments and modality-specific impairments. Understanding these types of impairments gives insight into how semantic memory disorders function.

Semantic category specific impairments

Category-specific impairments can result from widespread, patchy damage or from localized damage, and they can be broken down into four areas of decreased functioning: perceptual and functional features, topographic organization, informativeness, and intercorrelations (Warrington and Shallice, 1984).[51] Alzheimer's disease is a semantic memory disorder that results in errors describing and naming objects, though not necessarily category-specific ones.[52] Semantic dementia is another disorder associated with semantic memory. Semantic dementia is a language disorder characterized by a deterioration in understanding and recognizing words. Its impairments include difficulty generating familiar words, difficulty naming objects, and difficulties with visual recognition. Research suggests that the temporal lobe might be responsible for category-specific impairments in semantic memory disorders. In addition to category-specific impairments, modality-specific impairments are included in disorders of semantic memory (Cohen et al., 2002).[53]

Modality specific impairments

Semantic memory is also discussed in reference to modality. Different components represent information from different sensorimotor channels. Modality specific impairments are divided into separate subsystems on the basis of input modality. Examples of different input modalities include visual, auditory and tactile input. Modality specific impairments are also divided into subsystems based on the type of information. Visual vs. verbal and perceptual vs. functional information are examples of information types.[54] Modality specificity can account for category specific impairments in semantic memory disorders. Damage to visual semantics primarily impairs knowledge of living things, and damage to functional semantics primarily impairs knowledge of nonliving things.

Semantic refractory access and semantic storage disorders

Semantic memory disorders fall into two groups: semantic refractory access disorders and semantic storage disorders, which are contrasted according to four factors: temporal factors, response consistency, frequency, and semantic relatedness. A key feature of semantic refractory access disorders is temporal distortion: decreases in response time to certain stimuli are noted when compared to natural response times. Response consistency is the next factor: in access disorders, inconsistencies are seen in comprehending and responding to stimuli that have been presented many times, and temporal factors affect this consistency. In storage disorders, by contrast, inconsistent responses to specific items are not seen as they are in refractory access disorders. Stimulus frequency determines performance at all stages of cognition; extreme word frequency effects are common in semantic storage disorders, while in semantic refractory access disorders word frequency effects are minimal. Semantic relatedness is tested by comparing 'close' and 'distant' groups. 'Close' groupings contain words that are related because they are drawn from the same category; for example, a list of clothing types would be a 'close' grouping. 'Distant' groupings contain words with broad categorical differences, such as unrelated words. Comparing close and distant groups shows that in access disorders semantic relatedness has a negative effect, which is not observed in semantic storage disorders. Category-specific and modality-specific impairments are important components of both access and storage disorders of semantic memory.[55]

Present and future research

Semantic memory has seen a resurgence of interest in the past 15 years, due in part to the development of functional neuroimaging methods such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), which have been used to address some of the central questions about our understanding of semantic memory.

Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) allow cognitive neuroscientists to explore different hypotheses concerning the neural network organization of semantic memory. By using these neuroimaging techniques, researchers can observe the brain activity of participants while they perform cognitive tasks. These tasks can include, but are not limited to, naming objects, deciding whether two stimuli belong to the same object category, or matching pictures to their written or spoken names.[56]

Rather than any one brain region playing a dedicated and privileged role in the representation or retrieval of all sorts of semantic knowledge, semantic memory is a collection of functionally and anatomically distinct systems, in which each attribute-specific system is tied to a sensorimotor modality (e.g. vision) and, even more specifically, to a property within that modality (e.g. color). Neuroimaging studies also suggest a distinction between semantic processing and sensorimotor processing.

A new idea that is still at the early stages of development is that semantic memory, like perception, can be subdivided into types of visual information – color, size, form, and motion. Thompson-Schill (2003)[57] found that the left or bilateral ventral temporal cortex appears to be involved in retrieval of knowledge of color and form, the left lateral temporal cortex in knowledge of motion, and the parietal cortex in knowledge of size.

Neuroimaging studies suggest a large, distributed network of semantic representations that are organized minimally by attribute, and perhaps additionally by category. These networks include "extensive regions of ventral (form and color knowledge) and lateral (motion knowledge) temporal cortex, parietal cortex (size knowledge), and premotor cortex (manipulation knowledge). Other areas, such as more anterior regions of temporal cortex, may be involved in the representation of nonperceptual (e.g. verbal) conceptual knowledge, perhaps in some categorically-organized fashion."[58] It is suggested that within the temporoparietal network, the anterior temporal lobe is relatively more important for semantic processing, and posterior language regions are relatively more important for lexical retrieval.

See also

References

  1. Squire, L (1992). "Declarative and Nondeclarative Memory: Multiple Brain Systems Supporting Learning and Memory". Journal of Cognitive Neuroscience 4 (3): 232–243. doi:10.1162/jocn.1992.4.3.232.
  2. McRae, Ken; Jones, Michael (2013). Reisberg, Daniel, ed. The Oxford Handbook of Cognitive Psychology. New York, NY: Oxford University Press. pp. 206–216. ISBN 9780195376746.
  3. Tulving, Endel (2002). "Episodic Memory: From Mind to Brain". Annual Review of Psychology 53: 1–25. doi:10.1146/annurev.psych.53.100901.135114. PMID 11752477.
  4. Saumier, D.; Chertkow, H. (2002). "Semantic Memory". Current science 2: 516–522. doi:10.1007/s11910-002-0039-9.
  5. Tulving, E.; Schacter, D.L. (1990). "Priming and human memory systems". Science 247: 301–306. doi:10.1126/science.2296719. PMID 2296719.
  6. Klein, Stanley B (2013). "Episodic Memory and Autonoetic Awareness". Frontiers in Behavioral Neuroscience 7 (3): 1–12.
  7. Reiff, R.; Scheerer, M. (1959). Memory and Hypnotic Age Regression: Developmental Aspects of Cognitive Function Explored Through Hypnosis. New York, NY: International Universities Press.
  8. Ramachandran, V.S. (1994). "Memory". Encyclopedia of Human Behavior 1: 137–148.
  9. Tulving, Endel (1972). Episodic and Semantic Memory: Organization of Memory (E. Tulving & W. Donaldson ed.). New York, NY: Academic Press. pp. 382–403.
  10. Tulving, Endel (1987). "Episodic and Semantic Memory". The Social Sciences Citation Index: Citation Classic.
  11. Tulving, Endel (1984). "Précis of Elements of Episodic Memory". The Behavioral and Brain Sciences 7 (2): 223–268. doi:10.1017/s0140525x0004440x.
  12. Pecher, D; Zwann, R.A. (2005). Grounding Cognition: The Role of Perception and Action in Memory, Language, and Thinking. Cambridge: Cambridge University Press.
  13. Ormel, J., Laceulle, O.M., Jeronimus, B.F. (2014). "Why Personality and Psychopathology Are Correlated: A Developmental Perspective Is a First Step but More Is Needed". European Journal of Personality 28 (4): 396–98. doi:10.1002/per.1971.
  14. Jacoby, L. L.; Dallas, M. (1981). "On the relationship between autobiographical memory and perceptual learning". Journal of Experimental Psychology: General 110 (3): 306–340. doi:10.1037/0096-3445.110.3.306.
  15. Murphy, Gregory L; Hampton, James A; Milovanovic, Goran S (2012). "Semantic Memory Redux: An Experimental Test of Hierarchical Category Representation". Journal of Memory and Language 67: 521–539. doi:10.1016/j.jml.2012.07.005.
  16. Tulving, Endel (1985). "Memory and Consciousness". Canadian Psychology/Psychologie Canadienne 26 (1): 1–12. doi:10.1037/h0080017.
  17. Kintsch, W (1988). "The role of knowledge in discourse comprehension: A construction-integration model". Psychological Review 95: 163–182. doi:10.1037/0033-295x.95.2.163.
  18. Collins, A. M.; Quillian, M. R. (1969). "Retrieval time from semantic memory". Journal of Verbal Learning and Verbal Behavior 8: 240–247. doi:10.1016/s0022-5371(69)80069-1.
  19. Collins, A. M. & Quillian, M. R. (1972). How to make a language user. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 309-351). New York: Academic Press.
  20. Rips, L. J.; Shoben, E. J.; Smith, F. E. (1973). "Semantic distance and the verification of semantic relations". Journal of Verbal Learning and Verbal Behavior 14: 665–681. doi:10.1016/s0022-5371(73)80056-8.
  21. Collins, A. M.; Loftus, E. F. (1975). "A spreading-activation theory of semantic processing". Psychological Review 82 (6): 407–428. doi:10.1037/0033-295x.82.6.407.
  22. Glass, A. L.; Holyoak, K. J.; Kiger, J. I. (1979). "Role of antonymy relations in semantic judgments". Journal of Experimental Psychology: Human Learning & Memory 5 (6): 598–606. doi:10.1037/0278-7393.5.6.598.
  23. Arbib, M. A. (Ed.). (2002). Semantic networks. In The Handbook of Brain Theory and Neural Networks (2nd ed.), Cambridge, MA: MIT Press.
  24. Barr, A. & Feigenbaum, E. A. (1982). The handbook of artificial intelligence. Los Altos, CA: William Kaufman.
  25. Cravo, M. R.; Martins, J. P. (1993). "SNePSwD: A newcomer to the SNePS family". Journal of Experimental & Theoretical Artificial Intelligence 5: 135–148. doi:10.1080/09528139308953764.
  26. Smith, E. E.; Shoben, E. J.; Rips, L. J. (1974). "Structure and process in semantic memory: A featural model for semantic decisions". Psychological Review 81: 214–241. doi:10.1037/h0036351.
  27. Meyer, D. E. (1970). "On the representation and retrieval of stored semantic information". Cognitive Psychology 1 (3): 242–299. doi:10.1016/0010-0285(70)90017-4.
  28. Rips, L. J. (1975). "Inductive judgments about natural categories". Journal of Verbal Learning & Verbal Behavior 14 (6): 665–681. doi:10.1016/s0022-5371(75)80055-7.
  29. McCloskey, M. E.; Glucksberg, S. (1978). "Natural categories: Well defined or fuzzy sets?". Memory & Cognition 6 (4): 462–472. doi:10.3758/bf03197480.
  30. McCloskey, M.; Glucksberg, S. (1979). "Decision processes in verifying category membership statements: Implications for models of semantic memory". Cognitive Psychology 11 (1): 1–37. doi:10.1016/0010-0285(79)90002-1.
  31. Raaijmakers, J. G. W.; Shiffrin, R. M. (1981). "Search of associative memory". Psychological Review 8 (2). pp. 98–134.
  32. Kimball, D. R., Smith, T. A. & Kahana, M. J. (in press). The fSAM model of false recall. Psychological Review.
  33. Raaijmakers, J.G.; Shiffrin R.M. (1980). "SAM: A theory of probabilistic search of associative memory". The psychology of learning and motivation:Advances in research and theory 14: 207–262. doi:10.1016/s0079-7421(08)60162-0.
  34. Sirotin, Y.B.; Kahana, M.J. (2005). "Going beyond a single list: Modeling the effects of prior experience on episodic free recall". Psychonomic Bulletin & Review 12 (5): 787–805. doi:10.3758/bf03196773.
  35. Landauer, T.K.; Dumais, S.T. (1997). "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge". Psychological Review 104: 211–240. doi:10.1037/0033-295x.104.2.211.
  36. Steyvers, M.; Shiffrin, R.M.; Nelson, D.L. (2004). "Word association spaces for predicting semantic similarity effects in episodic memory". Experimental cognitive psychology and its applications: Festschrift in honor of Lyle Bourne, Walter Kintsch, and Thomas Landauer: 237–249.
  37. Anderson, J. R. (1983). The Architecture of Cognition. Cambridge, MA: Harvard University Press.
  38. Anderson, J. R. (1993b). Rules of the mind. Hillsdale, NJ: Erlbaum.
  39. Anderson, J. R.; Bothell, D.; Lebiere, C.; Matessa, M. (1998). "An integrated theory of list memory". Journal of Memory and Language 38: 341–380. doi:10.1006/jmla.1997.2553.
  40. Griffiths, T. L.; Steyvers, M.; Firl, A. (2007). "Google and the mind: Predicting fluency with PageRank". Psychological Science 18 (12): 1069–1076. doi:10.1111/j.1467-9280.2007.02027.x.
  41. Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Lawrence Erlbaum Associates.
  42. Landauer, T. K.; Dumais, S. T. (1997). "A solution to Plato's problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge". Psychological Review 104: 211–240. doi:10.1037/0033-295x.104.2.211.
  43. Lund, K., Burgess, C. & Atchley, R. A. (1995). Semantic and associative priming in a high-dimensional semantic space. Cognitive Science Proceedings (LEA), 660-665.
  44. Lund, K.; Burgess, C. (1996). "Producing high-dimensional semantic spaces from lexical co-occurrence". Behavior Research Methods, Instruments, and Computers 28 (2): 203–208. doi:10.3758/bf03204766.
  45. Vargha-Khadem; et al. (1997). "Differential Effects of Early Hippocampal Pathology on Episodic and Semantic Memory". Science 277 (5324): 376–380. doi:10.1126/science.277.5324.376. PMID 9219696.
  46. Burianova, H.; Grady, C. L. (2007). "Common and Unique Neural Activations in Autobiographical, Episodic, and Semantic Retrieval". Journal of Cognitive Neuroscience 19 (9): 1520–34. doi:10.1162/jocn.2007.19.9.1520.
  47. Lambon Ralph, M.; Lowe, C.; Rogers, T.T. (2007). "Neural Basis of Category-specific Semantic Deficits for Living Things: Evidence from semantic dementia, HSVE and a Neural Network Model". Brain: A Journal of Neurology 130 (4): 1127–37. doi:10.1093/brain/awm025.
  48. Garrard; et al. (2001). "Longitudinal Profiles of Semantic Impairment for Living and Nonliving Concepts in Dementia of Alzheimer's Type". Journal of Cognitive Neuroscience 13 (7): 892–909. doi:10.1162/089892901753165818.
  49. Lambon Ralph, M. A. (2007). "Neural Basis of Category-specific Semantic Deficits for Living Things: Evidence from semantic dementia, HSVE and a Neural Network Model". Brain: A Journal of Neurology 130 (4): 1127–37. doi:10.1093/brain/awm025.
  50. Rajah, M.N.; McIntosh, A.R. (2005). "Overlap in the Functional Neural Systems Involved in Semantic and Episodic Memory Retrieval". Journal of Cognitive Neuroscience 17 (3): 470–482. doi:10.1162/0898929053279478.
  51. Warrington, E. K.; Shallice, T. (1984). "Category specific semantic impairments". Brain 107: 829–853. doi:10.1093/brain/107.3.829. PMID 6206910.
  52. Laws, KR; Adlington, RL; Gale, TM; Moreno-Martinez, FJ; Sartori, G (2007). "A meta-analytic review of category naming in Alzheimer's disease". Neuropsychologia 45 (12): 2674–82. doi:10.1016/j.neuropsychologia.2007.04.003.
  53. Cohen, G., Johnston, R. & Plunkett, K. (2002). Exploring cognition: Damaged brains and neural networks. Erlbaum: Psychology Press.
  54. Valentine, T., Brennen, T. & Bredart, S. (1996). The Cognitive psychology of proper names: On the importance of being Ernest. London: Routledge.
  55. McCarthy, R. (1995). Semantic knowledge and semantic representations. Erlbaum: Psychology Press.
  56. Eiling, Yee; Chrysikou, Evangelia G; Thompson-Schill, Sharon L (2013). "Semantic Memory". The Oxford Handbook of Cognitive Neuroscience. New York, NY: Oxford UP. pp. 353–369.
  57. Thompson-Schill, S.L. (2003). "Neuroimaging studies of semantic memory: inferring "how" from "where"". Neuropsychologia 41: 280–292. doi:10.1016/s0028-3932(02)00161-6.
  58. Thompson-Schill, S.L. (2003). "Neuroimaging studies of semantic memory: inferring "how" from "where"". Neuropsychologia 41: 280–292. doi:10.1016/s0028-3932(02)00161-6.

Further reading

  • John Hart, Michael A. Kraut. 2007. Neural Basis of Semantic Memory. Publisher-Cambridge University Press. ISBN 0521848709, 9780521848701
  • Rosaleen McCarthy. 1995. Semantic Knowledge And Semantic Representations: A Special Issue Of Memory. Publisher Psychology Press. ISBN 0863779360, 9780863779367
  • Frank Krüger. 2000. Coding of temporal relations in semantic memory. Publisher-Waxmann Verlag. ISBN 3893259430, 9783893259434
  • Sandra L. Zoccoli. 2007. Object Features and Object Recognition: Semantic Memory Abilities During the Normal Aging Process. Publisher-ProQuest. ISBN 0549321071, 9780549321071
  • Wietske Vonk. 1979. Retrieval from semantic memory. Publisher Springer-Verlag.
  • Sarí Laatu. 2003. Semantic memory deficits in Alzheimer's disease, Parkinson's disease and multiple sclerosis: impairments in conscious understanding of concept meanings and visual object recognition. Publisher-Turun Yliopisto
  • Laura Eileen Matzen. 2008. Semantic and Phonological Influences on Memory, False Memory, and Reminding. Publisher-ProQuest. ISBN 0549909958, 9780549909958
  • William Damon, Richard M. Lerner, Nancy Eisenberg. 2006. Handbook of Child Psychology, Social, Emotional, and Personality Development. Publisher John Wiley & Sons. ISBN 0471272906, 9780471272908
  • Omar, Rohani; Hailstone, Julia C.; Warren, Jason D. (2012). "Semantic Memory for Music in Dementia". Music Perception: An Interdisciplinary Journal 29 (5): 467–477. doi:10.1525/mp.2012.29.5.467. 
  • Vanstone, Ashley D.; Sikka, Ritu; Tangness, Leila; Sham, Rosalind; Garcia, Angeles; Cuddy, Lola L. (2012). "Episodic and Semantic Memory for Melodies in Alzheimer's Disease". Music Perception: An Interdisciplinary Journal 29 (5): 501–507. doi:10.1525/mp.2012.29.5.501. 
  • Smith, Edward E. (2000). "Neural Bases of Human Working Memory". Current Directions in Psychological Science 9 (2): 45–49. doi:10.1111/1467-8721.00058. 

External links
