The N170 is a component of the event-related potential that reflects the neural processing of faces.
When potentials evoked by images of faces are compared to those elicited by other visual stimuli, the former show increased negativity 130-200 ms after stimulus presentation. This response is maximal over occipito-temporal electrode sites, which is consistent with a source located at the fusiform and inferior-temporal gyri. The N170 generally displays right-hemisphere lateralization and has been linked with the structural encoding of faces[1].
The N170 was first described by Shlomo Bentin and colleagues in 1996[2], who measured ERPs from participants viewing faces and other objects. They found that human faces and face parts (such as eyes) elicited different responses than other stimuli, including animal faces, body parts, and cars.
Earlier work by Bötzel and Grüsser, first reported in 1989[3], also sought a component of the ERP corresponding to the processing of human faces. They showed observers line drawings (in one experiment) and black-and-white photographs (in two additional experiments) of faces, trees, and chairs. Compared with the other stimulus classes, faces elicited a larger positive component approximately 150 ms after stimulus onset, maximal at central electrode sites (at the top of the head). The topography of this effect and its lack of lateralization led to the conclusion that this face-specific potential did not arise in face-selective areas of the occipito-temporal region, but instead in the limbic system. Subsequent work referred to this component as the vertex positive potential (VPP)[4].
In an attempt to reconcile these two apparently conflicting results, Joyce and Rossion[5] recorded ERPs from 53 scalp electrodes while participants viewed faces and other visual stimuli. After recording, they re-referenced the data to several commonly used reference sites, including the nose and the mastoid process. They found that the N170 and the VPP can be accounted for by the same dipole arrangement arising from the same neural generators, and therefore reflect the same underlying process.
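Re-referencing of the kind Joyce and Rossion performed is an offline arithmetic operation: every channel is re-expressed relative to a new reference signal. A minimal sketch (the channel layout and values here are hypothetical, for illustration only):

```python
import numpy as np

def rereference(data, ref_channels):
    """Re-reference EEG data to the mean of the given reference channels.

    data: array of shape (n_channels, n_samples)
    ref_channels: indices of the channel(s) to use as the new reference
    """
    ref = data[ref_channels].mean(axis=0)  # the new reference signal
    return data - ref                      # subtract it from every channel

# toy example: 3 channels, 4 time samples (arbitrary values)
eeg = np.array([[1.0, 2.0, 3.0, 4.0],
                [0.5, 1.0, 1.5, 2.0],
                [2.0, 2.0, 2.0, 2.0]])
# re-reference to channel 2 (standing in for, e.g., a nose electrode)
rereferenced = rereference(eeg, [2])
```

After re-referencing, the reference channel itself is flat at zero, and every other channel's waveform shape is shifted by the reference signal, which is why the same underlying generators can appear as an occipito-temporal negativity (N170) or a vertex positivity (VPP) depending on the reference chosen.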
Three of the most studied modulators of the N170 are face inversion, facial race, and emotional expression.
It has been established that inverted faces (i.e., those presented upside-down) are more difficult to perceive[6] (the Thatcher effect is a good illustration of this). In their landmark study, Bentin et al. found that inverted faces increased the latency of the N170 component[2]. Jacques and colleagues further studied the time course of the face inversion effect (FIE) using an adaptation paradigm[7]. When the same stimulus is presented multiple times, the neuronal response decreases over time; when a different stimulus is presented, the response recovers. The conditions under which a “release from adaptation” occurs therefore provide a way to measure stimulus similarity. In their experiment, Jacques et al. found that the release from adaptation was smaller and occurred 30 ms later for inverted faces, indicating that the neuronal population encoding face identity requires additional processing time to extract the identity of inverted faces.
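The logic of the adaptation paradigm can be illustrated with a deliberately simplified toy model (the `base` and `decay` parameters are hypothetical, not estimates from the study): the response to a repeated stimulus decays, and a change of stimulus restores it.

```python
def adaptation_response(stimuli, base=1.0, decay=0.5):
    """Toy model of repetition suppression: the response decays with each
    immediate repeat of the same stimulus and recovers fully when the
    stimulus changes. Parameters are illustrative only."""
    responses, repeats, prev = [], 0, None
    for s in stimuli:
        repeats = repeats + 1 if s == prev else 0  # count consecutive repeats
        responses.append(base * decay ** repeats)  # suppressed response
        prev = s
    return responses

adaptation_response(["A", "A", "A", "B"])  # → [1.0, 0.5, 0.25, 1.0]
```

In the real experiment it is the size and timing of that recovery (the release from adaptation) on stimulus change, rather than the raw response, that indexes how similar the neural representations of the two stimuli are.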
In an experiment examining the effects of race on the N170’s amplitude, an “other-race effect” was found in conjunction with face inversion. Vizioli and colleagues examined how inversion impairs face recognition while subjects process same-race (SR) or other-race (OR) faces.[8] The research team designed the experiment on the premise that visual expertise plays a critical role in inversion, hypothesizing that viewers’ greater expertise with SR faces (which supports holistic processing) should produce a stronger FIE for SR than for OR stimuli. The authors recorded EEGs from Western Caucasian and East Asian subjects (two separate groups) who were presented with pictures of Western Caucasian, East Asian, and African American faces in upright and inverted orientations. All facial stimuli were cropped to remove external features (i.e., hair, beards, hats, etc.). Both groups displayed a later N170 with larger amplitude (over the right hemisphere) for inverted than for upright SR faces, but showed no inversion effect for OR faces (including the African American stimuli). Moreover, no race effects were observed on the peak amplitude of the N170 for upright faces in either group, and there were no significant latency differences among the stimulus races; inversion, however, both increased the N170’s amplitude and delayed its onset. The authors concluded that subjects’ lack of experience with inverted faces makes such stimuli more difficult to process than faces shown in their canonical orientation, regardless of the race of the stimulus.
Besides modulation by inversion and race, emotional expression has also been a focus of N170 face research. In an experiment conducted by Righart and de Gelder, ERP results showed that the early stages of face processing may be affected by emotional scenes when subjects categorize fearful and happy facial expressions[9]. In this paradigm, subjects viewed color pictures of happy or fearful faces that were centrally overlaid on pictures of natural scenes. To control for low-level features such as color, and for other elements that could carry meaning, all scene pictures were scrambled by randomizing the positions of pixels across the image. The results showed that emotion effects were associated with the N170: its (negative) amplitude was larger for faces appearing in a fearful context than for faces placed in happy or neutral scenes. In particular, N170 amplitudes over left occipito-temporal sites were markedly increased for intact fearful faces appearing in a fearful scene, and less so when a fearful face was presented in a happy or neutral scene. Similar results were obtained for intact happy faces, but the amplitudes were not as large as those related to fearful scenes or expressions[10]. Righart and de Gelder concluded that information from task-irrelevant scenes is rapidly combined with information from facial expressions, and that subjects use contextual information at an early stage of processing when they need to discriminate or categorize facial expressions.
Given the ease and rapidity with which humans can recognize faces, a great deal of neuroscientific research has endeavored to understand how and where the brain processes them. Early research on prosopagnosia, or “face blindness”, found that damage to the occipito-temporal region impaired or abolished the ability to recognize faces. Convergent evidence for the importance of this region in face processing came through the use of fMRI, which found that a region of the fusiform gyrus, the “fusiform face area”, responds selectively to images of faces.
An investigation of the N170 undertaken by Itier and Taylor[11] used ERP source-localization techniques to estimate the location of the neural generator of the N170. They concluded that the N170 arose from the posterior superior temporal sulcus. However, it should be noted that these techniques are fraught with potential sources of error, and there is disagreement on the validity of inferences drawn from such findings[12].
Halgren and colleagues[13] instead used magnetoencephalography (MEG) to investigate the time course and location of face processing in the human brain. MEG and EEG are complementary techniques: the former measures the magnetic fields produced by neural activity, and the latter the electric fields. While the electric fields generated by neurons are distorted by the skull and other tissue, these tissues are magnetically “invisible”, allowing more accurate localization of the signal source. The time course and polarity differences in the MEG response to faces (versus other objects) were very similar to the N170. When the authors applied source-localization techniques to the MEG results, however, they identified the fusiform gyrus as the source of this signal.
In 2007, Guillaume Thierry and colleagues[14] presented evidence that called into question the face-specificity of the N170. Most earlier experiments found an N170 when responses to frontal views of faces were compared to those elicited by other objects that could appear in more variable poses and configurations. In their study, they introduced a new factor: stimuli could be faces or non-faces, and either class could have high or low inter-stimulus similarity. Similarity was measured by calculating the correlation between pixel values in pairs of same-category stimuli. When ERPs were compared across these conditions, they found a typical N170 effect in the low-similarity non-face vs. high-similarity face comparison. However, high-similarity non-faces showed a significant N170, while low-similarity faces did not. These results led the authors to conclude that the N170 is actually a measure of stimulus similarity, and not face processing per se.
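A pixel-correlation similarity measure of the kind described can be sketched as follows (this is a generic illustration, not the authors’ published code; the toy 2×2 “images” are hypothetical):

```python
import numpy as np

def pixel_similarity(img_a, img_b):
    """Pearson correlation between the pixel values of two equally sized images."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    return np.corrcoef(a, b)[0, 1]

def category_similarity(images):
    """Mean pairwise pixel correlation within one stimulus category."""
    corrs = [pixel_similarity(images[i], images[j])
             for i in range(len(images))
             for j in range(i + 1, len(images))]
    return float(np.mean(corrs))

# toy 2x2 "images"
imgs = [np.array([[0, 1], [2, 3]]),
        np.array([[0, 2], [4, 6]]),   # perfectly correlated with the first
        np.array([[3, 2], [1, 0]])]   # perfectly anti-correlated with the first
```

A category whose members all look alike (e.g., cropped frontal faces) yields a high mean pairwise correlation, while a visually heterogeneous category yields a low one, which is the confound Thierry et al. manipulated.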
In response to this, Rossion and Jacques[1] measured similarity as above for several object categories used in a previous study of the N170. They found that faces elicited a larger N170 than other classes of objects that had similar or higher similarity values, such as houses, cars, and shoes. While it remains uncertain why Thierry et al. observed an effect of similarity on the N170, Rossion and Jacques speculate that lower similarity leads to more variance in the latency of the response. Since ERP components are measured by averaging the results from many individual trials, high latency variance effectively “smears” the response, reducing the amplitude of the average. Rossion and Jacques also offer criticism of the methodology used by Thierry and colleagues, arguing that their failure to find a difference between high-similarity faces and high-similarity non-faces was due to a poor choice of electrode sites.
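The latency-variance “smearing” argument can be demonstrated with a small simulation (the waveform shape, timing constants, and jitter values below are illustrative assumptions, not parameters from either study): averaging single trials whose peaks jitter in time flattens the averaged component without any change in single-trial amplitude.

```python
import numpy as np

def erp_average(n_trials, jitter_sd, rng):
    """Average many single-trial negative deflections with latency jitter.

    Each trial is a Gaussian-shaped deflection of amplitude -1 centered
    near 170 ms; jitter_sd is the trial-to-trial latency SD in ms.
    All parameters are illustrative, not empirical.
    """
    t = np.arange(0, 400)                       # time axis in ms
    trials = []
    for _ in range(n_trials):
        peak = 170 + rng.normal(0, jitter_sd)   # jittered peak latency
        trials.append(-np.exp(-((t - peak) ** 2) / (2 * 15.0 ** 2)))
    return np.mean(trials, axis=0)

rng = np.random.default_rng(0)
low_jitter = erp_average(200, 2, rng)    # low latency variance
high_jitter = erp_average(200, 40, rng)  # high latency variance
# the high-jitter average has a broader, shallower (less negative) peak
```

Every single trial reaches the same -1 peak; only the averaged waveform differs, which is Rossion and Jacques’s point about how low stimulus similarity could reduce the measured N170 amplitude.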