Illusory superiority is a cognitive bias that causes people to overestimate their positive qualities and abilities and to underestimate their negative qualities, relative to others. This is evident in a variety of areas including intelligence, performance on tasks or tests, and the possession of desirable characteristics or personality traits. It is one of many positive illusions relating to the self, and is a phenomenon studied in social psychology.
Illusory superiority is often referred to as the above-average effect. Other terms include superiority bias, leniency error, sense of relative superiority, the primus inter pares effect,[1] and the Lake Wobegon effect (named after Garrison Keillor's fictional town where "all the children are above average"). The phrase "illusory superiority" was first used by Van Yperen and Buunk in 1991.[1]
Illusory superiority has been found in individuals' comparisons of themselves with others in a wide variety of different aspects of life, including performance in academic circumstances (such as class performance, exams and overall intelligence), in working environments (for example in job performance), and in social settings (for example in estimating one's popularity, or the extent to which one possesses desirable personality traits, such as honesty or confidence), as well as everyday abilities requiring particular skill.[1]
For illusory superiority to be demonstrated by social comparison, two logical hurdles have to be overcome. One is the ambiguity of the word "average". It is logically possible for nearly all of the set to be above the mean if the distribution of abilities is highly skewed. An example is that the mean number of human legs is slightly lower than two, because of the small minority that have one or no legs. Hence experiments usually compare subjects to the median of the peer group, since by definition it is impossible for a majority to exceed the median.
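The mean-versus-median distinction can be sketched in a few lines of Python. The leg counts below are hypothetical numbers chosen for illustration, not survey data; they show how a skewed distribution lets nearly everyone sit above the mean while, by definition, no majority can exceed the median.

```python
# Illustrative sketch: in a skewed distribution, almost everyone can
# legitimately be "above the mean", but a majority can never be above
# the median.
from statistics import mean, median

# Hypothetical leg counts for 1,000 people: a small minority with one
# or no legs pulls the mean slightly below two.
legs = [2] * 995 + [1] * 3 + [0] * 2

m = mean(legs)       # 1.993 -- slightly below 2
md = median(legs)    # 2

above_mean = sum(1 for x in legs if x > m)      # 995 people exceed the mean
above_median = sum(1 for x in legs if x > md)   # 0 people exceed the median
print(m, md, above_mean, above_median)
```

This is why comparing subjects to the median of the peer group, rather than the mean, closes the logical loophole.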
A further problem in inferring inconsistency is that subjects might interpret the question in different ways, so it is logically possible that a majority of them are, for example, more generous than the rest of the group each on their own understanding of generosity.[2] This interpretation is confirmed by experiments which varied the amount of interpretive freedom subjects were given. Even when subjects evaluate themselves on a specific, well-defined attribute, however, illusory superiority remains.[3]
One of the main effects of illusory superiority in IQ is the Downing effect. This describes the tendency of people with a below-average IQ to overestimate their IQ, and of people with an above-average IQ to underestimate their IQ. The propensity to predictably misjudge one's own IQ was first noted by C. L. Downing, who conducted the first cross-cultural studies on perceived 'intelligence'. His studies also showed that the ability to accurately estimate others' IQ was proportional to one's own IQ. This means that the lower the IQ of an individual, the less capable they are of appreciating and accurately appraising others' IQ. Therefore, individuals with a lower IQ are more likely to rate themselves as having a higher IQ than those around them. Conversely, people with a higher IQ, while better at appraising others' IQ overall, are still likely to rate people of similar IQ to themselves as having higher IQs.
The disparity between actual IQ and perceived IQ has also been noted between genders by British psychologist Adrian Furnham, in whose work there was a suggestion that, on average, men are more likely to overestimate their intelligence by 5 points, while women are more likely to underestimate their IQ by a similar margin.[4][5]
Illusory superiority has been found in studies comparing memory self-report, such as Schmidt, Berg & Deelman's research in older adults. This study involved participants aged between 46 and 89 comparing their own memory to that of peers of the same age group, to that of 25-year-olds, and to their own memory at age 25. Participants exhibited illusory superiority when comparing themselves to both peers and younger adults; however, the researchers asserted that these judgements were only slightly related to age.[6]
In Kruger and Dunning's experiments participants were given specific tasks (such as solving logic problems, analyzing grammar questions, and determining whether or not jokes were funny), and were asked to evaluate their performance on these tasks relative to the rest of the group, enabling a direct comparison of their actual and perceived performance.[7]
Results were divided into four groups depending on actual performance and it was found that all four groups evaluated their performance as above average, meaning that the lowest-scoring group (the bottom 25%) showed a very large illusory superiority bias. The researchers attributed this to the fact that the individuals who were worst at performing the tasks were also worst at recognizing skill in those tasks. This was supported by the fact that, given training, the worst subjects improved their estimate of their rank as well as getting better at the tasks.[7]
The paper, titled "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments," won a 2000 Ig Nobel Prize.[8]
In 2003 Dunning and Joyce Ehrlinger, also of Cornell University, published a study that detailed a shift in people's views of themselves influenced by external cues. Participants in the study (Cornell University undergraduates) were given tests of their knowledge of geography, some intended to positively affect their self-views, some intended to affect them negatively. They were then asked to rate their performance, and those given the positive tests reported significantly better performance than those given the negative.[9]
Daniel Ames and Lara Kammrath extended this work to sensitivity to others, and the subjects' perception of how sensitive they were.[10] Work by Burson, Larrick, and Klayman has suggested that the effect is not so obvious and may be due to noise and bias levels.[11]
Dunning, Kruger, and coauthors' latest paper on this subject comes to qualitatively similar conclusions after making some attempt to test alternative explanations.[12]
In a survey of faculty at the University of Nebraska, 68% rated themselves in the top 25% for teaching ability.[13]
In a similar survey, 87% of MBA students at Stanford University rated their academic performance as above the median.[14]
Findings of illusory superiority in research have also explained phenomena such as the large amount of stock market trading (as each trader thinks they are the best, and most likely to succeed),[15] and the number of lawsuits that go to trial (because, due to illusory superiority, many lawyers have an inflated belief that they will win a case).[16]
One of the first studies that found the effect of illusory superiority was carried out in 1976 by the College Board in the USA.[17] A survey was attached to the SAT exams (taken by approximately one million students per year), asking the students to rate themselves relative to the median of the sample (rather than the average peer) on a number of vague positive characteristics. In ratings of leadership ability, 70% of the students put themselves above the median. In ability to get on well with others, 85% put themselves above the median, and 25% rated themselves in the top 1%.
More recent research[18] has found illusory superiority in a social context, with participants comparing themselves to friends and other peers on positive characteristics (such as punctuality and sensitivity) and negative characteristics (such as naivety or inconsistency). This study found that participants rated themselves more favorably than their friends, but rated their friends more favorably than other peers. These findings were, however, affected by several moderating factors.
Research by Perloff and Fetzer,[19] Brown,[20] and Tajfel and Turner[21] also found similar effects of participants rating friends higher than other peers. Tajfel and Turner attributed this to an "ingroup bias" and suggested that this was motivated by the individual's desire for a "positive social identity".
In Zuckerman and Jost's study, participants were given detailed questionnaires about their friendships and asked to assess their own popularity. By using social network analysis, they were able to show that the participants generally had exaggerated perceptions of their own popularity, particularly in comparison to their own friends.[22]
Researchers have also found the effects of illusory superiority in studies of relationship satisfaction. For example, one study found that participants perceived their own relationships as better on average than others' relationships, while believing that the majority of people were happy with their relationships. This study also found that the higher participants rated their own relationship happiness, the more superior they believed their relationship to be. The illusory superiority exhibited by participants also served to increase their own relationship satisfaction. In men especially, satisfaction was related both to the perception that one's own relationship was superior and to the assumption that few others were unhappy in their relationships, whereas women's satisfaction was particularly related to the assumption that most others were happy with their relationships.[23]
Illusory superiority effects have been found in a self-report study of health behaviors (Hoorens & Harris, 1998). The study involved asking participants to estimate how often they, and their peers, carried out healthy and unhealthy behaviors. Participants reported that they carried out healthy behaviors more often than the average peer, and unhealthy behaviors less often, as would be expected given the effect of illusory superiority. These findings were for both past self-report of behaviors and expected future behaviors.[24]
Svenson (1981) surveyed 161 students in Sweden and the United States, asking them to compare their driving safety and skill to the other people in the experiment. For driving skill, 93% of the US sample and 69% of the Swedish sample put themselves in the top 50% (above the median). For safety, 88% of the US group and 77% of the Swedish sample put themselves in the top 50%.[25]
McCormick, Walkey and Green (1986) found similar results in their study, asking 178 participants to evaluate their position on eight different dimensions relating to driving skill (examples include the "dangerous-safe" dimension and the "considerate-inconsiderate" dimension). Only a small minority rated themselves as below average (the midpoint of the dimension scale) on any dimension, and when all eight dimensions were considered together, almost 80% of participants had evaluated themselves as being above the average driver.[26]
Subjects describe themselves in positive terms compared to other people, and this includes describing themselves as less susceptible to bias than other people. This effect is called the bias blind spot and has been demonstrated independently.
A vast majority of the literature on self-esteem originates from studies on participants in the United States. However, research that only investigates the effects in one specific population is severely limited as this may not be a true representation of human psychology as a whole. As a result, more recent research has focused on investigating quantities and qualities of self-esteem around the globe. The findings of such studies suggest that illusory superiority varies between cultures.
While a great deal of evidence suggests that we compare ourselves favorably to others on a wide variety of traits, the links to self-esteem are uncertain. The theory that those with high self-esteem maintain this high level by rating themselves above others does carry some evidence: it has been reported that non-depressed subjects rate their control over positive outcomes higher than that of a peer, despite identical levels of performance between the two individuals.[27]
Furthermore, it has been found that non-depressed students will also actively rate peers below themselves, as opposed to rating themselves higher; students were able to recall a great deal more negative personality traits about others than about themselves.[28]
The data suggests those with a positive self view are more likely to display the above-average effect, as opposed to those with a negative self appraisal. Similarly, those with low self-esteem appear to engage in far less illusory superiority, showing more realism in their self rating.
These results go against a basic humanistic principle within psychology. In particular, Carl Rogers, a pioneer of humanistic psychology, claimed that those with low self-esteem will be far more likely to attempt to belittle others, with the aim of strengthening their fragile self-view. On the other hand, Rogers hypothesized that those with high self-esteem will have no need to put others down or below themselves, and would therefore be unlikely to exhibit illusory superiority.
These studies, however, made no distinction between people with legitimate and illegitimate high self-esteem, and other studies have found that absence of positive illusions may coexist with high self-esteem[29] and that self-determined individuals with a personality oriented towards growth and learning are less prone to these illusions.[30] Thus it may be that while illusory superiority is associated with illegitimate high self-esteem, people with legitimate high self-esteem do not exhibit it.
Psychology has traditionally assumed that generally accurate self-perceptions are essential to good mental health.[2] This was challenged by a 1988 paper by Taylor and Brown, who argued that mentally healthy individuals typically manifest three cognitive illusions, namely illusory superiority, illusion of control and optimism bias.[2] This idea rapidly became very influential, with some authorities concluding that it would be therapeutic to deliberately induce these biases.[31] Since then, further research has both undermined that conclusion and offered new evidence associating illusory superiority with negative effects on the individual.[2]
One line of argument was that in the Taylor and Brown paper, the classification of people as mentally healthy or unhealthy was based on self-reports rather than objective criteria.[31] Hence it was not surprising that people prone to self-enhancement would exaggerate how well-adjusted they are. One study claimed that "mentally normal" groups were contaminated by defensive deniers who are the most subject to positive illusions.[31] A longitudinal study found that self-enhancement biases were associated with poor social skills and psychological maladjustment.[2] In a separate experiment where videotaped conversations between men and women were rated by independent observers, self-enhancing individuals were more likely to show socially problematic behaviors such as hostility or irritability.[2] A 2007 study found that self-enhancement biases were associated with psychological benefits (such as subjective well-being) but also inter- and intra-personal costs (such as anti-social behavior).[32]
The degree to which people view themselves as more desirable than the average person links to reduced activation in their orbitofrontal cortex and dorsal anterior cingulate cortex. This is suggested to link to the role of these areas in processing "cognitive control".[33]
Alicke and Govorun (2005) note that five main mechanisms have been proposed to explain why illusory superiority (which they refer to as the better-than-average effect) occurs.[17]
This is the idea that when making a comparison with a peer an individual will select their own strengths and the other's weaknesses in order that they appear better on the whole. This theory was first tested by Weinstein (1980); however, this was in an experiment relating to optimistic bias, rather than the better-than-average effect. The study involved participants rating certain behaviors as likely to increase or decrease the chance of a series of life events happening to them. It was found that individuals showed less optimistic bias when they were allowed to see others' answers.[34]
Perloff and Fetzer (1986) suggested that when comparing themselves to an average peer on a particular ability or characteristic, an individual would choose a comparison target (the peer being compared) that scored less well on that ability or characteristic, so that the individual would appear to be better than average. To test this theory, Perloff and Fetzer asked participants to compare themselves to a specific comparison target (a close friend), and found that illusory superiority decreased when specific targets were given rather than vague constructs such as the "average peer". However, these results are not completely reliable; they could be affected by the fact that individuals like their close friends more than an "average peer" and may as a result rate their friends as above average, in which case the friend would not be an objective comparison target.[19]
The second explanation for how the better-than-average effect works is egocentrism. This is the idea that an individual places greater importance and significance on their own abilities, characteristics and behaviors than those of others. Egocentrism is therefore a less overtly self-serving bias. According to egocentrism, individuals will overestimate themselves in relation to others because they believe that they have an advantage that others do not have, as an individual considering their own performance and another's performance will consider their performance to be better, even when they are in fact equal. Kruger (1999) found support for the egocentrism explanation in his research involving participant ratings of their ability on easy and difficult tasks. It was found that individuals were consistent in their ratings of themselves as above the median in the tasks classified as "easy" and below the median in the tasks classified as "difficult", regardless of their actual ability. In this experiment the better-than-average effect was observed when it was suggested to participants that they would be successful, but also a worse-than-average effect was found when it was suggested that participants would be unsuccessful.[35]
The third explanation for the better-than-average effect is focalism, the idea that greater significance is placed on the object that is the focus of attention. Most studies of the better-than-average effect place greater focus on the self when asking participants to make comparisons (the question will often be phrased with the self being presented before the comparison target – e.g. "compare yourself to the average person..."). According to focalism this means that the individual will place greater significance on their own ability or characteristic than that of the comparison target. This also means that in theory if, in an experiment on the better-than-average effect, the questions were phrased so that the self and other were switched (e.g. "compare the average peer to yourself") the better-than-average effect should be lessened.[36]
Research into focalism has focused primarily on optimistic bias rather than the better-than-average effect. However, two studies found a decreased effect of optimistic bias when participants were asked to compare an average peer to themselves, rather than themselves to an average peer.[37][38]
Windschitl, Kruger & Simms (2003) have conducted research into focalism, focusing specifically on the better-than-average effect, and found that asking participants to estimate their ability and likelihood of success in a task produced results of decreased estimations when they were asked about others' chances of success rather than their own.[39]
This idea, put forward by Giladi and Klar, suggests that when making comparisons any single member of a group will be evaluated to rank above that group's statistical mean performance level or the median performance level of its members. Research has found this effect in many different areas of human performance and has even generalized it beyond individuals' attempts to draw comparisons involving themselves.[40] Findings of this research therefore suggest that rather than individuals evaluating themselves as above average in a self-serving manner, the better-than-average effect is actually due to a general tendency to evaluate any single person or object as better than average.
Alicke and Govorun proposed this idea, that rather than individuals consciously reviewing and thinking about their own abilities, behaviors and characteristics and comparing them to those of others, it is likely that people instead have what they describe as an "automatic tendency to assimilate positively-evaluated social objects toward ideal trait conceptions".[17] For example, if an individual evaluated themselves as honest, they would be likely to exaggerate that characteristic towards their perceived ideal position on a scale of honesty. Importantly, Alicke has noted that this ideal position is not always the top of the scale; in the case of honesty, for example, someone who is always brutally honest may be regarded as rude. Instead, the ideal is a balance perceived differently by different individuals.
The better-than-average effect may not have wholly social origins: judgements about inanimate objects suffer similar distortions.[40]
While illusory superiority has been found to be somewhat self-serving, this does not mean that it will predictably occur: it is not constant. Instead, the strength of the effect is moderated by many factors, the main examples of which have been summarized by Alicke and Govorun (2005).[17]
This is a phenomenon that Alicke and Govorun have described as "the nature of the judgement dimension" and refers to how subjective (abstract) or objective (concrete) the ability or characteristic being evaluated is.[17] Research by Sedikides & Strube (1997) has found that people are more self-serving (the effect of illusory superiority is stronger) when the event in question is more open to interpretation,[41] for example social constructs such as popularity and attractiveness are more interpretable than characteristics such as intelligence and physical ability.[42] This has been partly attributed also to the need for a believable self-view.[43]
The idea that ambiguity moderates illusory superiority has empirical research support from a study involving two conditions: in one, participants were given criteria for assessing a trait as ambiguous or unambiguous, and in the other participants were free to assess the traits according to their own criteria. It was found that the effect of illusory superiority was greater in the condition where participants were free to assess the traits.[44]
The effects of illusory superiority have also been found to be strongest when people rate themselves on abilities at which they are totally incompetent. These subjects have the greatest disparity between their actual performance (at the low end of the distribution) and their self-rating (placing themselves above average). This Dunning–Kruger effect is interpreted as a lack of metacognitive ability to recognize their own incompetence.[7]
The method used in research into illusory superiority has been found to affect the strength of the effect found. Most studies of illusory superiority involve a comparison between an individual and an average peer, for which there are two methods: direct comparison and indirect comparison. A direct comparison, which is more commonly used, involves the participant rating themselves and the average peer on the same scale, from "below average" to "above average",[45] and results in participants being far more self-serving.[46] Researchers have suggested that this occurs because of the closer comparison between the individual and the average peer; however, use of this method means that it is impossible to know whether a participant has overestimated themselves, underestimated the average peer, or both.
The indirect method of comparison involves participants rating themselves and the average peer on separate scales, and the illusory superiority effect is found by subtracting the average-peer score from the individual's score (with a higher score indicating a greater effect). While the indirect comparison method is used less often, it is more informative in terms of whether participants have overestimated themselves or underestimated the average peer, and can therefore provide more information about the nature of illusory superiority.[45]
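The indirect method's scoring can be sketched as follows. The ratings below are hypothetical 1-7 scale values invented for illustration, not data from the cited studies; the point is that keeping self and peer ratings on separate scales lets a researcher see where the gap comes from.

```python
# Minimal sketch of indirect-comparison scoring (hypothetical data):
# each participant rates themselves and the "average peer" separately,
# and the illusory-superiority score is the difference.

def superiority_score(self_rating: int, peer_rating: int) -> int:
    """Positive score -> participant places self above the average peer."""
    return self_rating - peer_rating

# Hypothetical 1-7 ratings on a trait such as honesty.
ratings = [
    {"self": 6, "peer": 4},  # rates self above peer
    {"self": 5, "peer": 5},  # no effect
    {"self": 7, "peer": 3},  # strong effect
]

scores = [superiority_score(r["self"], r["peer"]) for r in ratings]
print(scores)                     # [2, 0, 4]
print(sum(scores) / len(scores))  # mean effect across participants: 2.0
```

Unlike the direct method's single "below average / above average" scale, the two separate ratings reveal whether a positive score reflects an inflated self-rating, a deflated peer rating, or both.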
The nature of the comparison target is one of the most fundamental moderating factors of the effect of illusory superiority, and there are two main issues relating to the comparison target that need to be considered.
First, research into illusory superiority is distinct in terms of the comparison target because an individual compares themselves with a hypothetical average peer rather than a tangible person. Alicke et al. (1995) found that the effect of illusory superiority was still present but was significantly reduced when participants compared themselves with real people (also participants in the experiment, who were seated in the same room), as opposed to when participants compared themselves with an average peer. This suggests that research into illusory superiority may itself be biasing results and finding a greater effect than would actually occur in real life.[45]
Further research into the differences between comparison targets involved four conditions where participants were at varying proximity to an interview with the comparison target: watching live in the same room; watching on tape; reading a written transcript; or making self-other comparisons with an average peer. It was found that when the participant was further removed from the interview situation (in the tape observation and transcript conditions) the effect of illusory superiority was found to be greater. Researchers asserted that these findings suggest that the effect of illusory superiority is reduced by two main factors, individuation of the target and live contact with the target.
Second, Alicke et al.'s (1995) studies investigated whether the negative connotations of the word "average" may have an effect on the extent to which individuals exhibit illusory superiority, namely whether the use of the word "average" increases illusory superiority. Participants were asked to evaluate themselves, the average peer, and a person whom they had sat next to in the previous experiment on various dimensions. It was found that they placed themselves highest, followed by the real person, followed by the average peer; however, the average peer was consistently placed above the midpoint of the scale, suggesting that the word "average" did not have a negative effect on the participant's view of the average peer.[45]
An important moderating factor of the effect of illusory superiority is the extent to which an individual believes they are able to control and change their position on the dimension concerned. According to Alicke & Govorun positive characteristics that an individual believes are within their control are more self-serving, and negative characteristics that are seen as uncontrollable are less detrimental to self-enhancement.[17] This theory was supported by Alicke's (1985) research, which found that individuals rated themselves as higher than an average peer on positive controllable traits and lower than an average peer on negative uncontrollable traits. The idea, suggested by these findings, that individuals believe that they are responsible for their success and some other factor is responsible for their failure is known as the self-serving bias.
Personality characteristics vary widely between people and have been found to moderate the effects of illusory superiority; one of the main examples is self-esteem. Brown (1986) found that in self-evaluations of positive characteristics, participants with higher self-esteem showed greater illusory superiority bias than participants with lower self-esteem.[47] Similar findings come from a study by Suls, Lemos & Stewart (2002), who in addition found that participants pre-classified as having high self-esteem interpreted ambiguous traits in a self-serving way, whereas participants pre-classified as having low self-esteem did not.[18]
In contrast to what is commonly believed, research has found that better-than-average effects are not universal. In fact, much recent research has found the opposite effect in many tasks, especially more difficult ones.[48]