Lip sync

Lip-sync or lip-synch (short for lip synchronization) is a technical term for matching lip movements with voice. The term can refer to: a technique often used in the production of film, video and television programs; the synchronization of visual and audio signals during post-production and transmission; the common practice of performers, including singers, performing to recorded audio as a form of entertainment; and the matching of lip movements of animated characters (including computer facial animation). In live concert performances lip-synching is generally considered controversial, although in many instances it is required from a production standpoint to ensure broadcast quality, or a performer may be harmonizing with their own recorded vocals.

Lip-synching in music

Though lip-synching, also called miming, can be used to make it appear as though actors have musical ability (e.g., The Partridge Family) or to misattribute vocals (e.g., Milli Vanilli), it is more often used by recording artists to create a particular effect, to enable them to perform live dance numbers, or to cover for illness or other deficiencies during live performance. Television producers sometimes require lip-synched performances for short guest appearances, as they need less rehearsal time and greatly simplify sound mixing. Some artists, however, lip-synch because they are not confident singing live, and lip-synching eliminates the possibility of hitting bad notes. The practice of lip-synching during live performances is frowned upon by some, who view it as a crutch used only by lesser talents.

Because the film track and music track are recorded separately during the creation of a music video, artists usually lip-synch to their songs and often imitate playing musical instruments as well. Artists also sometimes mime to a sped-up version of the track so that, when the footage is slowed down, the lips still match the song while the rest of the motion appears in slow motion; this is widely considered difficult to achieve. By contrast, Bruce Springsteen's hit "Streets of Philadelphia" used only the instruments as a backing track; the vocals were recorded during filming with a microphone attached to the singer, giving the video a different feel.

Artists often lip-synch during strenuous dance numbers in both live and recorded performances, because the breath control needed for strenuous dancing makes it difficult to sing well at the same time. They may also lip-synch in situations in which their back-up bands and sound systems cannot be accommodated, such as the Macy's Thanksgiving Day Parade, which features popular singers lip-synching while riding floats, or to disguise a lack of natural singing ability, particularly in live or non-studio environments.

Some singers habitually lip-synch during live performance, both in concert and on television. Others sing the lead part over a complete recording or over pre-recorded music and backing vocals. When this is done, the live vocals are sometimes less audible than the backing track. Some groups lip-synch supporting or shared vocal parts in order to maintain vocal harmony or to balance volume among several singers. Some artists switch between live singing and lip-synching during the performance of a single song.

Lip synching contests and game shows

In 1981 Wm. Randy Wood started lip sync contests at the Underground Nightclub in Seattle, Washington to attract customers. The contests were so popular that he took them nationwide, and by 1984 he had contests running in over 20 cities. After submitting a show proposal, he went to work for Dick Clark Productions as consulting producer for the TV series Puttin' on the Hits. The show received a 9.0 rating in its first season and was twice nominated for a Daytime Emmy Award. In the United States, this hobby reached its peak during the 1980s, when several game shows, such as Puttin' on the Hits and Lip Service, were created.

Lip-synching in films

In film production, lip synching is often part of the post-production phase. Most films today contain scenes in which the dialogue has been re-recorded afterwards; lip-synching is the technique used when animated characters speak; and lip synching is essential when films are dubbed into other languages. In many musical films, actors recorded their songs beforehand in a studio session and lip-synched to them during filming. In other cases the singing voice was dubbed by someone else, as when Marni Nixon sang for Deborah Kerr in The King and I, for Audrey Hepburn in My Fair Lady, and for Natalie Wood in West Side Story; this kind of vocal dubbing is a major plot point in the 1952 MGM classic Singin' in the Rain.

ADR

Automated dialogue replacement, also known as "ADR" or "looping," is a film sound technique involving the re-recording of dialogue after photography. Sometimes actors lip-synch during filming and sound is added later to reduce costs.

Animation

Another manifestation of lip synching is the art of making a character appear to speak in a prerecorded track of dialogue. Lip-synching an animated character involves working out the timing of the speech (the breakdown) as well as actually animating the lips and mouth to match the dialogue track. The earliest examples of lip-sync in animation were attempted by Max Fleischer in his 1926 short My Old Kentucky Home. The technique continues to this day, with animated films and television shows such as Shrek, Lilo & Stitch, and The Simpsons using lip-synching to make their artificial characters talk. Lip synching is also used in comedies such as This Hour Has 22 Minutes and in political satire, replacing all or part of the original wording. It has been used in conjunction with translation of films from one language to another, for example Spirited Away. Lip-synching can be a very difficult issue in translating foreign works for a domestic release, as a straightforward translation of the lines often runs longer or shorter than the on-screen mouth movements.
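As a rough illustration of the breakdown step described above, the sketch below maps a timed phoneme list onto per-frame mouth shapes ("visemes"). The phoneme timings, the phoneme-to-viseme table and the frame rate are invented for the example and are not taken from any real production pipeline.

    # Minimal sketch: turn a timed phoneme breakdown into per-frame mouth shapes.
    FPS = 24  # assumed animation frame rate

    # (phoneme, start_seconds, end_seconds) for the word "hello" (hypothetical timings)
    BREAKDOWN = [
        ("HH", 0.00, 0.08),
        ("EH", 0.08, 0.20),
        ("L",  0.20, 0.30),
        ("OW", 0.30, 0.50),
    ]

    # Simplified phoneme-to-viseme (mouth shape) table, purely illustrative
    PHONEME_TO_VISEME = {"HH": "rest", "EH": "open", "L": "tongue_up", "OW": "round"}

    def breakdown_to_frames(breakdown, fps=FPS):
        """Return a list of (frame_number, viseme) covering the dialogue track."""
        end_time = max(end for _, _, end in breakdown)
        frames = []
        for frame in range(int(round(end_time * fps))):
            t = frame / fps
            viseme = "rest"  # default mouth shape when no phoneme is active
            for phoneme, start, end in breakdown:
                if start <= t < end:
                    viseme = PHONEME_TO_VISEME.get(phoneme, "rest")
                    break
            frames.append((frame, viseme))
        return frames

    for frame, viseme in breakdown_to_frames(BREAKDOWN):
        print(f"frame {frame:02d}: {viseme}")

An animator, or an exposure-sheet tool, would then assign a drawing or blend-shape weight to each viseme on those frames.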

Language dubbing

Quality film dubbing requires that the dialogue first be translated in such a way that the words used can match the lip movements of the actor. This is often hard to achieve if the translation is to stay true to the original dialogue. Elaborately lip-synched dubbing is also a lengthy and expensive process.
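As a much simplified illustration of the timing constraint that dubbing translators work under, the sketch below compares the rough spoken duration of a candidate translated line with the duration of the original on-screen delivery. The syllable-counting rule, the assumed speaking rate and the tolerance are placeholders, not industry figures.

    import re

    # Crude syllable estimate: count vowel groups per word (toy heuristic only).
    def estimate_syllables(text):
        return sum(len(re.findall(r"[aeiouy]+", word.lower())) or 1
                   for word in text.split())

    SYLLABLES_PER_SECOND = 4.0  # assumed average speaking rate

    def fits_mouth_movements(original_duration_s, translated_line, tolerance=0.2):
        """True if the translated line's estimated duration is within `tolerance`
        (as a fraction) of the original on-screen delivery."""
        estimated = estimate_syllables(translated_line) / SYLLABLES_PER_SECOND
        return abs(estimated - original_duration_s) <= tolerance * original_duration_s

    # Hypothetical check: the actor's mouth moves for 1.5 seconds.
    print(fits_mouth_movements(1.5, "I never told anyone"))  # True: close enough
    print(fits_mouth_movements(1.5, "No"))                   # False: far too short

In practice the translator also has to consider where the actor's lips close (for consonants such as m, b and p) and where they stay open, not just the overall length of the line.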

In English-speaking countries, many foreign TV series (especially Japanese anime such as Pokémon) are dubbed for television broadcast. Cinematic releases of foreign films, however, tend to be subtitled instead. The same is true of countries whose language is not spoken widely enough to make expensive dubbing commercially viable (in other words, there is not a large enough market for it).

However, most non-English-speaking countries with a large enough population dub all foreign films into their national language before releasing them to cinemas. In such countries, people are so accustomed to dubbed films that less-than-optimal matches between the lip movements and the speech generally go unnoticed. Dubbed versions are also preferred by some viewers because they allow the viewer to focus on the on-screen action, whereas subtitles require following both the on-screen action and the text at once.

Lip-synching in video games

Early video games did not feature prominent use of voice, being mainly text-based. At most, games featured some generic jaw or mouth movement to convey a communication process in addition to text. However, as games have become more advanced, lip sync and voice acting have become a major focus of many games.

Role-playing games

Lip sync is a minor focus in role-playing games. Because of the sheer amount of information conveyed in these games, the majority of communication is done through scrolling text. Older RPGs rely solely on text, using static portraits to give a better sense of who is speaking. Some games, such as Grandia II, make use of voice acting, but because of the simple character models there is no mouth movement to simulate speech. RPGs for hand-held systems are still largely text-based, with the rare use of lip sync and voice files reserved for full motion video cutscenes. Newer RPGs, however, usually feature some degree of voice-over. These games are typically for computers or modern console systems and include titles such as Mass Effect and The Elder Scrolls IV: Oblivion. In these fully voiced games, lip sync is crucial.

Strategy games

Unlike RPGs, strategy games make extensive use of sound files to create an immersive battle environment. Most games simply play a recorded audio track on cue, with some providing static portraits to accompany the respective voice. StarCraft used full motion video character portraits with several generic speaking animations that did not synchronize with the lines spoken in the game. The game did, however, make extensive use of recorded speech to convey the plot, with the speaking animations giving a good idea of the flow of the conversation. Warcraft III used fully rendered 3D models, both as character portraits and as in-game units, to animate speech with generic mouth movements. Like the FMV portraits, the 3D portraits did not synchronize with the actual spoken lines, while in-game models tended to simulate speech by moving their heads and arms rather than using actual lip synchronization. Similarly, the game Codename Panzers uses camera angles and hand movements to simulate speech, as the characters have no actual mouth movement.

First-person shooters

The FPS is a genre that generally places much more emphasis on graphical display, mainly because the camera is almost always very close to the character models. Because increasingly detailed character models require animation, FPS developers devote considerable resources to creating realistic lip synchronization for the many lines of speech used in most FPS games. Early 3D models used basic up-and-down jaw movements to simulate speech. As technology progressed, mouth movements began to closely resemble real human speech movements. Medal of Honor: Frontline dedicated a development team to lip sync alone, producing some of the most accurate lip synchronization in games at that time. Since then, games such as Medal of Honor: Pacific Assault and Half-Life 2 have used systems that dynamically generate mouth movements from the recorded dialogue, as if the lines were being spoken by a live person, resulting in notably life-like characters. Detailed lip synching was also showcased in a video demonstrating the lip synching technology used in the team-based FPS Team Fortress 2. Gamers who create their own videos using character models with no lip movements, such as the helmeted Master Chief from Halo, improvise by moving the characters' arms and bodies and bobbing the head to indicate speech (see Red vs. Blue).
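As a simplified illustration of the dynamic approach described above, and not the actual system used in any of the games named, the sketch below derives a jaw-opening value for every animation frame from the loudness of the recorded dialogue. The frame rate, window size and gain are arbitrary assumptions.

    import math

    FPS = 30  # assumed animation frame rate

    def jaw_openings(samples, sample_rate, fps=FPS, gain=4.0):
        """Map an audio waveform (floats in [-1, 1]) to one jaw-opening value
        per animation frame, in the range [0, 1]. Louder audio opens the mouth
        wider; real phoneme/viseme-based systems are considerably more involved."""
        window = max(1, sample_rate // fps)  # audio samples per animation frame
        openings = []
        for start in range(0, len(samples), window):
            chunk = samples[start:start + window]
            rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
            openings.append(min(1.0, rms * gain))  # clamp at fully open
        return openings

    # Hypothetical half second of 16 kHz audio: silence, then a loud vowel-like tone.
    sr = 16000
    samples = [0.0] * (sr // 4) + [0.6 * math.sin(2 * math.pi * 220 * t / sr)
                                   for t in range(sr // 4)]
    for frame, value in enumerate(jaw_openings(samples, sr)):
        print(f"frame {frame:02d}: jaw open {value:.2f}")

Driving the mouth from loudness alone gives only a passable flapping motion; matching particular mouth shapes to particular sounds requires analysing which phonemes are being spoken.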

Television transmission synchronization

A common lip synchronization problem, also known as lip sync error, arises when television video and audio signals are transported via different facilities (e.g., a geosynchronous satellite radio link and a landline) that have significantly different delay times. In such cases it is necessary to delay the earlier of the two signals electronically to compensate for the difference in propagation times. See also audio video sync and audio synchronizer.
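A minimal sketch of that compensation, assuming the propagation delay of each path is known in advance (the figures below are invented): the earlier-arriving signal is held back by the difference so that the two emerge aligned.

    from collections import deque

    def alignment_delay_ms(video_path_delay_ms, audio_path_delay_ms):
        """Return (signal_to_delay, delay_ms): the earlier-arriving signal must
        be delayed by the difference in propagation delays."""
        diff = video_path_delay_ms - audio_path_delay_ms
        if diff > 0:
            return "audio", diff   # audio arrives first; hold it back
        return "video", -diff      # video arrives first (or they already match)

    class DelayLine:
        """Fixed-length FIFO that delays a stream by a whole number of units
        (frames or sample blocks)."""
        def __init__(self, delay_units):
            self.buffer = deque([None] * delay_units)

        def push(self, item):
            self.buffer.append(item)
            return self.buffer.popleft()  # comes out delay_units pushes later

    # Hypothetical figures: video via satellite (~270 ms), audio via landline (~20 ms).
    which, ms = alignment_delay_ms(270, 20)
    print(f"delay the {which} signal by {ms} ms")  # delay the audio signal by 250 ms

An audio synchronizer in a broadcast chain does essentially this: it tracks the video delay and buffers the audio to match, rather than trying to speed up the later signal.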

Lip sync issues have become a serious problem for the television industry worldwide. Lip sync problems are not only annoying, but can lead to subconscious viewer stress, which in turn leads to viewer dislike of the television program being watched. Television industry standards organizations have become involved in setting limits on acceptable lip sync error.

Miming

Miming the playing of a musical instrument is the instrumental equivalent of lip-synching. A notable example is the John Williams piece performed at President Obama's 2009 inauguration, where the audience heard a recording made two days earlier, mimed by musicians including Yo-Yo Ma and Itzhak Perlman. The musicians wore earpieces to hear the playback.

