Lip sync, lip-sync, or lip-synch (short for lip synchronization) is a technical term for matching lip movements with sung or spoken vocals. The term can refer to a number of different techniques and processes used in live performances and recordings.
In live concert performances, lip-synching is a commonly used shortcut, but it can be considered controversial. In film production, lip-synching is often part of the post-production phase. Dubbing foreign-language films and making animated characters appear to speak both require elaborate lip-synching. Strategy video games make extensive use of lip-synced sound files to create an immersive environment.
Though lip-synching, also called miming, can be used to make it appear as though actors have musical ability (e.g., The Partridge Family) or to misattribute vocals (e.g., Milli Vanilli), it is more often used by recording artists to create a particular effect, to enable them to perform live dance numbers, or to cover for illness or other deficiencies during live performance. Sometimes television producers require lip-synched performances for short guest appearances, as they need less rehearsal time and greatly simplify sound mixing. Other artists lip-synch because they are less confident singing live; lip-synching eliminates the risk of hitting wrong notes. The practice of lip-synching during live performances is frowned upon by many, who view it as a crutch used only by lesser talents.
Because the film track and music track are recorded separately during the creation of a music video, artists usually lip-sync to their songs and often imitate playing musical instruments as well. Artists sometimes also move their lips faster than the recorded track so that, when the footage is slowed to match the music, the final clip has a slow-motion effect, which is considered difficult to achieve. Similarly, some artists have been known to lip-sync backwards for music videos so that, when the footage is reversed, the singer appears to sing forwards while time moves backwards for his or her surroundings.
Artists often lip-sync certain portions of strenuous dance numbers in both live and recorded performances, because vigorous dancing leaves little breath for sustained singing. For example, Will Smith avoided lip-syncing during his dance and vocal performance of "Wild Wild West" at the 1999 MTV Movie Awards, where it was evident that he was nearly out of breath by the song's conclusion. Artists may also lip-sync in situations where their back-up bands and sound systems cannot be accommodated, such as the Macy's Thanksgiving Day Parade, which features popular singers lip-synching while riding floats, or to disguise a lack of singing ability, particularly in live or non-studio environments. Some singers habitually lip-sync during live performances, both in concert and on television, and some switch between live singing and lip-synching during a single song.
Michael Jackson's performance on the television special Motown 25: Yesterday, Today, Forever (1983) changed the scope of the live stage show. Ian Inglis, author of Performance and Popular Music: History, Place and Time (2006), notes that the fact "Jackson lip-synced 'Billie Jean' is, in itself, not extraordinary, but the fact that it did not change the impact of the performance is extraordinary; whether the performance was live or lip-synced made no difference to the audience".[2] In 1989, a New York Times article observed that at "Bananarama's recent concert at the Palladium", the "first song had a big beat, layered vocal harmonies and a dance move for every line of lyrics", yet "the drum kit was untouched until five songs into the set" and the backup vocals (and, it seemed, some of the lead vocals as well) were on tape along with the beat. The article also noted that the "British band Depeche Mode...add vocals and a few keyboard lines to taped backup onstage", although this practice is commonplace in the genre of electronic music.[3]
Chris Nelson of The New York Times reported that by the 1990s, "[a]rtists like Madonna and Janet Jackson set new standards for showmanship, with concerts that included not only elaborate costumes and precision-timed pyrotechnics but also highly athletic dancing. These effects came at the expense of live singing."[4] Edna Gundersen of USA Today reported: "The most obvious example is Madonna's Blond Ambition World Tour, a visually preoccupied and heavily choreographed spectacle. Madonna lip-syncs the duet Now I'm Following You, while a Dick Tracy character mouths Warren Beatty's recorded vocals. On other songs, background singers plump up her voice, strained by the exertion of non-stop dancing."[5]
Similarly, in reviewing Janet Jackson's Rhythm Nation World Tour, Michael MacCambridge of the Austin American-Statesman commented "[i]t seemed unlikely that anyone—even a prized member of the First Family of Soul Music—could dance like she did for 90 minutes and still provide the sort of powerful vocals that the '90s super concerts are expected to achieve."[6]
The music video for Electrasy's 1998 single "Morning Afterglow" featured lead singer Alisdair McKinnell lip-syncing the entire song backwards. When the footage was reversed, the music played forwards while the apartment around him appeared to tidy itself, with bookcases 'un-knocking' themselves over.
In 2004, US pop singer Ashlee Simpson appeared on the live comedy TV show Saturday Night Live, where during her performance "she was revealed to apparently be lip-synching". According to her manager-father, "his daughter needed the help because acid reflux disease had made her voice hoarse". He stated: "Just like any artist in America, she has a backing track that she pushes so you don’t have to hear her croak through a song on national television." During the incident, vocals from a song she had already performed began to play while she was "holding her microphone at her waist"; she made "some exaggerated hopping dance moves, then walked off the stage".[1]
During the 2008 Beijing Olympics, CTV News reported that a "nine-year-old Chinese girl's stunning performance at the Beijing Olympics opening ceremony has been marred by revelations she was lip-synching". The article states that "Lin Miaoke was lip-synching Friday to a version of 'Ode to the Motherland' sung by seven-year-old Yang Peiyi, who was deemed not pretty enough to perform as China's representative".[7]
During Super Bowl XLIII, "Jennifer Hudson's flawless performance of the national anthem" was lip-synched "to a previously recorded track", as, apparently, was Faith Hill's performance before her. The singers lip-synched "at the request of Rickey Minor, the pregame show producer", who argued that "There's too many variables to go live."[8] Subsequent Super Bowl national anthems were performed live.
During the 2009 final of The X Factor, Cheryl Cole partly mimed one of her new songs. The same year, US pop singer Britney Spears was " 'extremely upset' over the savaging she has received after lip-synching at her Australian shows".[9]
Teenage viral video star Keenan Cahill lip-syncs popular songs on his YouTube channel. His popularity grew as he featured guests such as rapper 50 Cent in November 2010 and David Guetta in January 2011, making his channel one of the most popular on YouTube by January 2011.[10][11][12]
In 1981, Wm. Randy Wood started lip sync contests at the Underground Nightclub in Seattle, Washington, to attract customers. The contests proved so popular that he took them nationwide; by 1984 he had contests running in over 20 cities, and after submitting a show proposal he went to work for Dick Clark Productions as consulting producer for the TV series Puttin' on the Hits. The show received a 9.0 rating in its first season and was twice nominated for a Daytime Emmy Award. In the United States, the hobby reached its peak during the 1980s, when several game shows, such as Puttin' on the Hits and Lip Service, were created. The Family Channel had a Saturday morning show called Great Pretenders in which kids lip-synced their favorite songs.
In film production, lip-synching is often part of the post-production phase. Most films today contain scenes where the dialogue has been re-recorded afterwards; lip-synching is the technique used when animated characters speak; and lip-synching is essential when films are dubbed into other languages. In many musical films, actors recorded their songs in advance and lip-synched during filming. Marni Nixon sang for Deborah Kerr in The King and I, for Audrey Hepburn in My Fair Lady, and for Natalie Wood in West Side Story. In the 1952 MGM classic Singin' in the Rain, lip-synching is a major plot point.
Automated dialogue replacement, also known as "ADR" or "looping", is a film sound technique in which dialogue is re-recorded after principal photography. Sometimes actors lip-synch during filming and the sound is added later to reduce costs.
Another manifestation of lip-synching is the art of making an animated character appear to speak in a prerecorded track of dialogue. Animating speech involves two steps: working out the timing of each sound in the dialogue track (the breakdown), and then animating the lips and mouth to match it. The earliest examples of lip sync in animation were attempted by Max Fleischer in his 1926 short My Old Kentucky Home. The technique continues to this day, with animated films and television shows such as Shrek, Lilo & Stitch, and The Simpsons using lip-synching to make their artificial characters talk. Lip-synching is also used in comedies such as This Hour Has 22 Minutes and in political satire, replacing all or part of the original wording. It has been used in conjunction with translation of films from one language to another, as in Spirited Away. Lip-synching can be a very difficult issue in translating foreign works for a domestic release, as a straightforward translation of the lines often overruns or underruns the available mouth movements.
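As a rough illustration of the breakdown step, the sketch below maps a timed phoneme track onto per-frame mouth shapes (visemes). The phoneme timings and the viseme table are invented for the example; real production pipelines use much larger viseme sets and timings extracted from the actual recorded dialogue.

```python
# A minimal sketch of an animation lip-sync breakdown: converting timed
# phonemes into a per-frame schedule of mouth shapes (visemes).
# All timings and the viseme table below are illustrative assumptions.

# Hypothetical breakdown of the word "hello": (phoneme, start, end) in seconds
phoneme_track = [
    ("HH", 0.00, 0.08),
    ("EH", 0.08, 0.20),
    ("L",  0.20, 0.30),
    ("OW", 0.30, 0.50),
]

# A coarse phoneme-to-viseme table (real pipelines use larger sets)
VISEMES = {"HH": "rest", "EH": "open", "L": "tongue_up", "OW": "round"}

FPS = 24  # frames per second of the animation

def breakdown_to_frames(track, fps=FPS):
    """Convert timed phonemes into a per-frame viseme schedule."""
    end_time = track[-1][2]
    frames = []
    for frame in range(int(end_time * fps) + 1):
        t = frame / fps
        viseme = "rest"  # default mouth shape between phonemes
        for phoneme, start, end in track:
            if start <= t < end:
                viseme = VISEMES.get(phoneme, "rest")
                break
        frames.append((frame, viseme))
    return frames

for frame, viseme in breakdown_to_frames(phoneme_track):
    print(f"frame {frame:2d}: {viseme}")
```

An animator (or an automated rig) would then pose the character's mouth to the scheduled viseme on each frame.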
Quality film dubbing requires that the dialogue first be translated in such a way that the words can match the actor's lip movements. This is often hard to achieve if the translation is to stay true to the original dialogue. Elaborately lip-synched dubbing is also a lengthy and expensive process.
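A toy sketch of this constraint, checking whether a translated line plausibly fits the on-screen mouth movement; the speaking rate and tolerance are rough, assumed figures, not values from any real dubbing workflow:

```python
# Check whether a translated line's estimated spoken duration fits the
# actor's visible mouth movement. The constants are assumptions.

SYLLABLES_PER_SECOND = 5.0   # assumed average speaking rate
TOLERANCE = 0.15             # allow +/-15% overrun or underrun

def fits_mouth_movement(translated_syllables, mouth_open_seconds):
    """Return True if the translation's estimated duration is close
    enough to the duration of the visible mouth movement."""
    estimated = translated_syllables / SYLLABLES_PER_SECOND
    return abs(estimated - mouth_open_seconds) <= TOLERANCE * mouth_open_seconds

# Example: an 11-syllable translation for a 2-second mouth movement
print(fits_mouth_movement(11, 2.0))  # True: ~2.2 s is within 15% of 2.0 s
```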
In English-speaking countries, many foreign TV series (especially Japanese anime such as Pokémon) are dubbed for television broadcast, whereas cinematic releases of films tend to come with subtitles instead. The same is true of countries in which the local language is not spoken widely enough to make expensive dubbing commercially viable; in other words, there is not enough of a market for it.
However, most non-English-speaking countries with a large enough population dub all foreign films into their national language for cinematic release. In such countries, people are accustomed to dubbed films, so less-than-optimal matches between the lip movements and the voice generally go unnoticed. Dubbing is preferred by some because it allows the viewer to focus on the on-screen action without reading subtitles.
Early video games did not use voice sounds, owing to technical limitations. In the 1970s and early 1980s, most video games used simple electronic sounds such as bleeps and simulated explosions. At most, these games featured generic jaw or mouth movement to convey a communication process alongside text. As games became more advanced in the 1990s and 2000s, however, lip sync and voice acting became a major focus of many games.
Lip sync is a minor focus in role-playing video games. Because of the sheer amount of information these games convey, the majority of communication is done through scrolling text. Older RPGs rely solely on text, using static portraits to give a better sense of who is speaking. Some games, such as Grandia II, make use of voice acting, but their simple character models have no mouth movement to simulate speech. RPGs for hand-held systems are still largely text-based, with lip sync and voice files generally reserved for full motion video cutscenes. Newer RPGs, however, usually use some degree of voice-over. These games are typically for computers or modern console systems and include titles such as Mass Effect and The Elder Scrolls IV: Oblivion. In such fully voiced games, lip sync is crucial.
Unlike RPGs, strategy video games make extensive use of sound files to create an immersive battle environment. Most such games simply play a recorded audio track on cue, with some providing static portraits to accompany the voice. StarCraft used full motion video character portraits with several generic speaking animations that did not synchronize with the lines spoken in the game. The game did, however, make extensive use of recorded speech to convey its plot, with the speaking animations giving a good sense of the flow of the conversation. Warcraft III used fully rendered 3D models to animate speech with generic mouth movements, both as character portraits and as in-game units. Like the FMV portraits, the 3D portraits did not synchronize with the actual spoken lines, while in-game models tended to simulate speech by moving their heads and arms rather than through actual lip synchronization. Similarly, Codename Panzers uses camera angles and hand movements to simulate speech, as its characters have no mouth movement at all.
The first-person shooter (FPS) is a genre that generally places much more emphasis on graphical display, mainly because the camera is almost always very close to character models. With increasingly detailed character models requiring animation, FPS developers devote considerable resources to creating realistic lip synchronization for the many lines of speech in most FPS games. Early 3D models used basic up-and-down jaw movements to simulate speech. As technology progressed, mouth movements began to closely resemble real human speech. Medal of Honor: Frontline dedicated a development team to lip sync alone, producing the most accurate lip synchronization in games at that time. Since then, games such as Medal of Honor: Pacific Assault and Half-Life 2 have used code that dynamically generates mouth movements from the recorded dialogue, as if the sounds were being spoken by a live person, resulting in remarkably lifelike characters. Gamers who create their own videos using character models with no lip movements, such as the helmeted Master Chief from Halo, improvise by moving the characters' arms and bodies and bobbing their heads (see Red vs. Blue).
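A simple form of such audio-driven animation derives a jaw-opening value each frame from the loudness of the dialogue audio. The sketch below illustrates the idea; real engines are far more sophisticated, and the gain, window size, and smoothing factor here are arbitrary assumptions rather than values from any shipping game.

```python
# Illustrative audio-driven mouth animation: compute one jaw-openness
# value per animation frame from the RMS loudness of the dialogue audio.

import math

def jaw_openness(samples, sample_rate=44100, fps=30, smoothing=0.5):
    """Return one jaw-openness value in [0, 1] per animation frame,
    based on the RMS loudness of the corresponding audio window."""
    window = sample_rate // fps  # audio samples per video frame
    values = []
    prev = 0.0
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        target = min(1.0, rms * 4.0)               # arbitrary gain, clamped
        prev = prev + smoothing * (target - prev)  # smooth the jaw motion
        values.append(prev)
    return values

# Usage with a synthetic one-second "voice" signal:
tone = [0.3 * math.sin(2 * math.pi * 220 * t / 44100) for t in range(44100)]
print(jaw_openness(tone)[:5])
```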
A lip synchronization problem, known as a lip sync error, can arise when television video and audio signals are transported via different facilities (e.g., a geosynchronous satellite radio link and a landline) that have significantly different delay times. In such cases the earlier of the two signals must be delayed electronically to restore synchronization.
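The sketch below shows the principle of that electronic delay: the earlier signal is buffered long enough for the slower path to catch up. The path delays and frame rate are illustrative assumptions.

```python
# Delay compensation for a lip sync error: hold back the earlier signal
# (here, audio) until the slower path (here, video) catches up.

from collections import deque

VIDEO_DELAY_MS = 250   # assumed video path delay (e.g. satellite link)
AUDIO_DELAY_MS = 10    # assumed audio path delay (e.g. landline)
FPS = 30               # frames (and audio blocks) per second

# The earlier signal must be held back by the difference in path delays.
offset_ms = VIDEO_DELAY_MS - AUDIO_DELAY_MS
offset_frames = round(offset_ms * FPS / 1000)   # about 7 frames at 30 fps

audio_buffer = deque()

def synchronize(audio_block):
    """Queue incoming audio and release it offset_frames later,
    so it lines up with the delayed video path."""
    audio_buffer.append(audio_block)
    if len(audio_buffer) > offset_frames:
        return audio_buffer.popleft()  # now aligned with the video
    return None  # still filling the delay buffer
```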
Lip sync issues have become a serious problem for the television industry worldwide. Lip sync problems are not only annoying, but can lead to subconscious viewer stress, which in turn leads to viewer dislike of the television program being watched.[13] Television industry standards organizations have become involved in setting standards for lip sync errors.[14]
Miming the playing of a musical instrument is the instrumental equivalent of lip-synching. A notable example occurred at President Obama's inauguration, where a John Williams piece, recorded two days earlier, was mimed by musicians Yo-Yo Ma and Itzhak Perlman, who wore earpieces to hear the playback.[15]