Subtitle (captioning)

Example of a television broadcast with subtitles

Subtitles are textual versions of the dialog in films and television programs, usually displayed at the bottom of the screen. They can be either a written translation of dialog in a foreign language or a written rendering of the dialog in the same language, with or without added information to help viewers who are deaf or hard-of-hearing follow the dialog. Television teletext subtitles, which are hidden unless requested by the viewer from a menu or by selecting the relevant teletext page (e.g. p888), always carry additional sound representations for deaf and hard-of-hearing viewers. Teletext subtitle language follows the original audio, except in multilingual countries, where the broadcaster may provide subtitles in additional languages on other teletext pages.

Sometimes, mainly at film festivals, subtitles may be shown on a separate display below the screen, thus saving the film-maker from creating a subtitled copy for perhaps just one showing. Television subtitling for the deaf and hard-of-hearing is closed captioning.

Creation of subtitles

Today, professional subtitlers usually work with specialized computer software and hardware, with the video digitally stored on a hard disk, making each individual frame instantly accessible. Besides creating the subtitles, the subtitler usually also tells the computer software the exact positions where each subtitle should appear and disappear. For cinema film, this task is traditionally done by separate technicians. The end result is a subtitle file containing the actual subtitles as well as position markers indicating where each subtitle should appear and disappear. These markers are usually based on timecode if the work is for electronic media (e.g. TV, video, DVD), or on film length (measured in feet and frames) if the subtitles are to be used for traditional cinema film.
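The two marker conventions above can be sketched in a few lines. The snippet below is an illustrative sketch, not taken from any real subtitling package; it assumes standard 35 mm film (16 frames per foot) projected at 24 frames per second.

```python
FRAMES_PER_FOOT = 16  # 35 mm film carries 16 frames per foot
FPS = 24              # standard cinema projection rate

def timecode_to_feet_frames(hours, minutes, seconds, frames):
    """Convert an HH:MM:SS:FF timecode into a (feet, frames) film length."""
    total_frames = ((hours * 60 + minutes) * 60 + seconds) * FPS + frames
    return divmod(total_frames, FRAMES_PER_FOOT)

# One minute of screen time is 1440 frames, i.e. 90 feet of 35 mm film.
print(timecode_to_feet_frames(0, 1, 0, 0))  # → (90, 0)
```

Electronic media keep the timecode as-is; only traditional cinema work needs the feet-and-frames conversion.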

The finished subtitle file is used to add the subtitles to the picture: either rendered directly into the picture (open subtitles); embedded in the vertical interval and later superimposed on the picture by the end user with the help of an external decoder or a decoder built into the TV (closed subtitles on TV or video); or converted to TIFF or BMP graphics that are later superimposed on the picture by the end user (closed subtitles on DVD).

Individuals can also create subtitles using freely available subtitle-creation software like Subtitle Workshop, and then hardcode them onto a video file with programs such as VirtualDub in combination with VSFilter, which can also be used to show subtitles as softsubs in many software video players.
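The hardcoding step can also be approximated with other tools. As a hedged sketch, the snippet below builds a command line for ffmpeg's `subtitles` filter (which requires an ffmpeg build with libass support) to burn an SRT file into the picture; the file names are hypothetical.

```python
import shutil

def hardcode_subs_cmd(video_in, srt_file, video_out):
    """Build an ffmpeg command that burns (hardcodes) an SRT subtitle
    file into the picture, producing open subtitles."""
    return [
        "ffmpeg", "-i", video_in,
        "-vf", f"subtitles={srt_file}",  # render the subtitles into each frame
        video_out,
    ]

cmd = hardcode_subs_cmd("episode.avi", "episode.srt", "episode_hardsub.avi")
print(" ".join(cmd))
if shutil.which("ffmpeg") is None:
    print("ffmpeg not found; install it to run the command above")
```

Because the text becomes part of the picture, hardcoded subtitles cannot be switched off later, unlike the closed-subtitle routes described above.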

For multimedia-style webcasting, see SMIL (Synchronized Multimedia Integration Language).

Same language captions

Same language captions, i.e., without translation, are primarily intended as an aid for people who are deaf or hard-of-hearing. Subtitles in the same language as the dialog are sometimes edited for reading speed and readability. This is especially true if they cover a situation where many people are speaking at the same time, or where speech is unstructured or contains redundancy.

Closed captions

Main article: Closed captioning
The "CC in a TV" symbol was created by Jack Foley while he was senior graphic designer at WGBH, the Boston public broadcaster that invented captioning for television. The symbol is in the public domain, so anyone who captions TV programs can use it.

Closed captioning is the American term for closed subtitles specifically intended for people who are deaf and hard-of-hearing. These are a transcription rather than a translation, and usually also contain descriptions of important non-dialog audio, such as "(sighs)" or "(door creaks)". From the expression "closed captions", the word "caption" has in recent years come to mean a subtitle intended for the hard-of-hearing, be it "open" or "closed". In British English, "subtitles" usually refers to subtitles for the hard-of-hearing (HoH), as translation subtitles are so rare in British cinema and on TV; however, the term "HoH subtitles" is sometimes used when there is a need to distinguish between the two.

Realtime

Programs such as news bulletins, current affairs programs, sport, some talk shows and political and special events utilize realtime or online captioning.[1] Live captioning is increasingly common, especially in the United Kingdom and the United States, as a result of regulations stipulating that virtually all TV must eventually be accessible to people who are deaf and hard-of-hearing.

Pre-prepared

Some programs may be prepared in their entirety several hours before broadcast, but with insufficient time to prepare a timecoded caption file for automatic play-out. Pre-prepared captions look very similar to offline captions, although the accuracy of cueing may be compromised slightly as the captions are not locked to program timecode.[1]

Newsroom captioning involves the automatic transfer of text from the newsroom computer system to a device which outputs it as captions. It works, but it is suitable as an exclusive system only for programs which have been scripted in their entirety on the newsroom computer system, such as short interstitial updates.[1]

In the United States and Canada, some broadcasters have used it exclusively and simply left uncaptioned sections of the bulletin for which a script was unavailable.[1] Newsroom captioning limits captions to pre-scripted material, and therefore does not cover the weather and sports segments (which are typically not pre-scripted), last-second breaking news or changes to the scripts, ad-lib conversations among the broadcasters, or emergency and other live remote broadcasts by reporters in the field. By failing to cover items such as these, newsroom-style captioning (or use of the teleprompter for captioning) typically results in coverage of less than 30% of a local news broadcast.[2]

Live

Anything which is purely live and unscripted must be captioned by Communication Access Real-Time Translation (CART) stenographers, who use a computer with either a stenotype or Velotype keyboard to transcribe stenographic input for presentation as captions within two to three seconds of the audio it represents.[1] More recently, however, operators have begun using voice recognition software and revoicing the dialog. Voice recognition technology has advanced so quickly in the United Kingdom that, as of 2005, about 50% of all live captioning there is done through voice recognition. Realtime captions look different from offline captions, as they are presented as a continuous flow of text as people speak.[1]

Realtime stenographers are the most highly skilled in their profession. Stenography is a system of rendering words phonetically, and English, with its multitude of homophones (e.g. there, their, they're), is particularly unsuited to easy transcription. Stenographers working in courts and inquiries usually have 24 hours in which to deliver their transcripts, so they may enter the same phonetic stenographic codes for a variety of homophones and fix up the spelling later. Realtime stenographers must deliver their transcriptions accurately and immediately; they must therefore develop techniques for keying homophones differently, and remain accurate under the pressure of delivering a correct product on immediate demand.[1]

Submissions to recent captioning-related inquiries have revealed concerns from broadcasters about captioning sports. In the general absence of sport captioning, the Australian Caption Centre submitted three examples of sport captioning to the National Working Party on Captioning (NWPC) in November 1998, each performed on tennis, rugby league and swimming programs:

  1. Heavily reduced: Captioners ignore commentary and provide only scores and essential information such as “try” or “out”.
  2. Significantly reduced: Captioners use QWERTY input to type summary captions yielding the essence of what the commentators are saying, delayed due to the limitations of QWERTY input.
  3. Comprehensive realtime: Captioners use stenography to caption the commentary in its entirety.[1]

The NWPC concluded that the standard it accepts is the comprehensive realtime method, which gives viewers access to the commentary in its entirety. Also, not all sports are live: many events are pre-recorded hours before they are broadcast, allowing a captioner to caption them using offline methods.[1]

Hybrid

Because different programs are produced under different conditions, captioning methodology must be determined on a case-by-case basis. Some bulletins may have a high incidence of truly live material, or the captioning facility may be given insufficient access to video feeds and scripts, making stenography unavoidable. Other bulletins may be pre-recorded just before going to air, making pre-prepared text preferable.[1]

In Australia and the United Kingdom, hybrid methodologies have proven to be the best way to provide comprehensive, accurate and cost-effective captions on news and current affairs programs. News captioning applications currently available are designed to accept text from a variety of inputs: stenography, Velotype, QWERTY, ASCII import, and the newsroom computer. This allows one facility to handle a variety of online captioning requirements and to ensure that captioners properly caption all programs.[1]

Current affairs programs usually require stenographic assistance. Even though the segments which comprise a current affairs program may be produced in advance, they are usually produced just before on-air time, and their duration makes QWERTY input of text unfeasible.[1]

News bulletins, on the other hand, can often be captioned without stenographic input (unless there are live crosses or ad-libbing by the presenters). This is because:

  1. Most items are scripted on the newsroom computer system and this text can be electronically imported into the captioning system.
  2. Individual news stories are of short duration, so even if they are made available only just prior to broadcast, there is still time to QWERTY in text.[1]

Offline

As a woman walks away from a man, she says, "Thank you." He screams, "Let me help you!" The pop-up closed captions, an offline style, appear staggered below each of the two characters to identify the speakers. The dialog appears in all caps, which is typical but not universal of closed-caption dialog, and "{{screaming}}" is a non-dialog identifier that lets viewers who are deaf or hard-of-hearing understand the tone of the man's voice; it is rendered in lowercase italics, typical but not universal of closed-caption identifiers. The double braces ("{{" and "}}") are uncommon, however, since most closed-caption identifiers use parentheses ("(" and ")") or square brackets ("[" and "]").

For non-live, or pre-recorded programs, television program providers can choose offline captioning. Captioners gear offline captioning toward the high-end television industry, providing highly customized captioning features, such as pop-on style captions, specialized screen placement, speaker identifications, italics, special characters, and sound effects.[3]

Offline captioning involves a five-step design and editing process, and does much more than simply display the text of a program. Offline captioning helps the viewer follow a story line, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Offline captioning is the preferred presentation style for entertainment-type programming.[3]

SDH

"SDH" is an American term the DVD industry introduced. It is an acronym for "Subtitles for the deaf and hard-of-hearing", and refers to regular subtitles in the original language where important non-dialog audio has been added, as well as speaker identification, useful when the viewer cannot otherwise visually tell who is saying what.

The only significant difference for the user between "SDH" subtitles and "closed captions" is their appearance: SDH subtitles are usually displayed with the same proportional font used for the translation subtitles on the DVD, whereas closed captions are displayed as white text on a black band, which blocks a large portion of the view. Closed captioning is falling out of favor, as many users have no difficulty reading SDH subtitles, which are text with a contrasting outline. In addition, DVD subtitles can specify many colors on the same character: primary, outline, shadow, and background. This allows subtitlers to display subtitles on a usually translucent band for easier reading; however, this is rare, since most subtitles use an outline and shadow instead, in order to block a smaller portion of the picture. Closed captions may still be preferable to DVD subtitles, since many SDH subtitles present all of the text centered, while closed captions usually specify position on the screen: centered, left-aligned, right-aligned, top, etc. This is very helpful for speaker identification and overlapping conversation. Some SDH subtitles do have positioning, but it is not as common.

DVDs for the US market now sometimes have three forms of English subtitles: SDH subtitles; English subtitles, helpful for hearing viewers whose first language may not be English (although these are usually an exact transcript and not edited into simple English); and closed caption data that is decoded by the end-user's closed caption decoder.

High definition disc media (HD DVD, Blu-ray disc) uses SDH subtitles as the sole method because the technical specifications do not require HD to support line 21 closed captions. Some Blu-ray discs, however, are said to carry a closed caption stream that only displays through standard definition connections. Many HDTVs allow the end user to customize the captions, including the ability to remove the black band.

Use by those not deaf or hard-of-hearing

Although same-language subtitles and captions are produced primarily with the deaf and hard-of-hearing in mind, many hearing film and television viewers choose to use them, often because the presence of closed captioning and subtitles ensures that not one word of dialog is missed. Films and television shows often have subtitles displayed in the same language if the speaker has a speech disability and/or an accent. In addition, captions may reveal information that would otherwise be difficult to pick up, such as song lyrics; dialog spoken quietly or by those with unfamiliar accents; or supportive, minor dialog from background characters. It is argued that such additional information and detail enhances the overall experience and gives the viewer a better grasp of the material. Furthermore, people learning a foreign language may sometimes use same-language subtitles to better understand the dialog without having to resort to a translation.

Asia

In some Asian television programming, captioning is considered a part of the genre, and has evolved beyond simply capturing what is being said. The captions are used artistically; it is common to see the words appear one by one as they are spoken, in a multitude of fonts, colors, and sizes that capture the spirit of what is being said. Languages like Japanese also have a rich vocabulary of onomatopoeia which are used in captioning.

East Asia

In some East Asian countries, such as China, Korea and Japan, subtitling is common in some genres of television. In these languages, written text is less ambiguous than spoken text, so subtitling may offer a distinct advantage to aid comprehension. Furthermore, the various spoken dialects of Chinese are mutually incomprehensible, but all understand the one standard form of written Chinese. Subtitling means someone who only understands one dialect could watch a show filmed in another. Subtitling is also common in taped interviews during news broadcasts, as accents in East Asian languages can be difficult to understand.

South Asia

In some South Asian countries, such as India and Pakistan, Same Language Subtitles (SLS) are common for films and music videos. In India, 84% of people are early- or non-literate.[6] With SLS, "[r]eading becomes automatic, subconscious, in everyday entertainment."[7] SLS are karaoke-style subtitles, "highlighted in perfect timing, as they are sung [or spoken]. This association of the spoken and written word is a proven method to improve reading skills."[8]

Translation

Subtitles can be used to translate dialog from a foreign language into the native language of the audience. It is the quickest and cheapest method of translating content, and is usually praised for making it possible to hear the original dialog and voices of the actors.

Translation of subtitling is sometimes very different from the translation of written text. Usually, when a film or a TV program is subtitled, the subtitler watches the picture and listens to the audio sentence by sentence. The subtitler may or may not have access to a written transcript of the dialog. Especially in commercial subtitles, the subtitler often interprets what is meant, rather than translating how it is said; that is, meaning is more important than form. The audience does not always appreciate this, and it can be frustrating to those who know some of the spoken language, because spoken language may contain verbal padding or culturally implied meanings that are lost if not adapted in the written subtitles. The subtitler also condenses the dialog when necessary to achieve an acceptable reading speed; that is, purpose is more important than form.

Especially in fansubs, the subtitler may translate both form and meaning. The subtitler may also choose to display a note in the subtitles, usually in parentheses ("(" and ")"), and leave it on the screen even after the character has finished speaking, in order to both preserve form and achieve an acceptable reading speed. For example, the Japanese language has multiple first-person pronouns (see Japanese pronouns), and using one instead of another implies a different degree of politeness. In order to compensate when translating to English, the subtitler may reformulate the sentence, add appropriate words and/or use notes.

Captioning

A subtitler uses a computer to produce German teletext subtitles for Swiss television. Since the Swiss speak German, French, Italian, and Romansh,[9] Swiss television often has captions in multiple languages.

Realtime

Realtime translation subtitling, which usually involves a simultaneous interpreter listening to the dialog and quickly translating while a stenographer types, is rare. The unavoidable delay, typing errors, lack of editing, and high cost mean that there is very little need for realtime translation subtitling. Allowing the interpreter to speak directly to the viewers is usually both cheaper and quicker; however, the translation is then not accessible to people who are deaf and hard-of-hearing.

Offline

Some subtitlers purposely provide edited subtitles or captions to match the needs of their audience: learners of the spoken dialog as a second or foreign language, visual learners, beginning readers who are deaf or hard-of-hearing, and people with learning and/or mental disabilities. For example, for many of its films and television programs, PBS displays standard captions representing the program audio word-for-word if the viewer selects "CC1" using the television remote control or on-screen menu; however, it also provides edited captions presenting simplified sentences at a slower rate if the viewer selects "CC2". Programs with a very diverse audience also often have captions in another language; this is common with popular Spanish soap operas. Since CC1 and CC2 share bandwidth, the FCC recommends that translation subtitles be placed in CC3. CC4, which shares bandwidth with CC3, is also available, but programs very seldom use it.

Subtitles vs. dubbing and lectoring

The two alternative methods of 'translating' films in a foreign language are dubbing, in which other actors record over the voices of the original actors in a different language, and lectoring, a form of voice-over for fiction material where a narrator tells the audience what the actors are saying while their voices can be heard in the background. Lectoring is common for television in Russia, Poland, and a few other East European countries, while cinemas in these countries commonly show films dubbed or subtitled.

The preference for dubbing or subtitling in various countries is largely based on decisions taken in the late 1920s and early 1930s. With the arrival of sound film, film importers in Germany, Italy, France and Spain decided to dub the foreign voices, while the rest of Europe elected to display the dialog as translated subtitles. The choice was largely financial (subtitling is inexpensive and quick, while dubbing is very expensive and thus requires a very large audience to justify the cost), but during the 1930s it also became a political preference in Germany, Italy and Spain: an expedient form of censorship that ensured foreign views and ideas could be stopped from reaching the local audience, as dubbing makes it possible to create a dialog which is totally different from the original. In Spain, compulsory dubbing was also used to promote the Spanish language (Castilian) among the non-Spanish-speaking population (languages such as Galician, Catalan and Basque were banned and their speakers persecuted during Franco's dictatorship).

Dubbing is still the norm and favored form in these four countries, but the proportion of subtitling is slowly growing, mainly to save cost and turnaround-time, but also due to a growing acceptance among younger generations, who are better readers and increasingly have a basic knowledge of English (the dominant language in film and TV) and thus prefer to hear the original dialogue.

Nevertheless, in Spain, for example, only public TV channels show subtitled foreign films, usually late at night. It is extremely rare for any Spanish TV channel to show subtitled versions of TV programs, series or documentaries. In addition, only a small proportion of cinemas show subtitled films. Films in Galician, Catalan or Basque are always dubbed, not subtitled, when they are shown in the rest of the country. Some non-Spanish-speaking TV stations subtitle interviews in Spanish; others do not.

In many Latin American countries, local network television will show dubbed versions of English-language programs and movies, while cable stations (often international) more commonly broadcast subtitled material. Preference for subtitles or dubbing varies according to individual taste and reading ability, and theaters may order two prints of the most popular films, allowing moviegoers to choose between dubbing and subtitles. Animation and children's programming, however, is nearly universally dubbed, as in other regions.

In the traditional subtitling countries, dubbing is generally regarded as something very strange and unnatural and is only used for animated films and TV programs intended for pre-school children. As animated films are "dubbed" even in their original language and ambient noise and effects are usually recorded on a separate sound track, dubbing a low quality production into a second language produces little or no noticeable effect on the viewing experience. In dubbed live-action television or film, however, viewers are often distracted by the fact that the audio does not match the actors' lip movements. Furthermore, the dubbed voices may seem detached, inappropriate for the character, or overly expressive, and some ambient sounds may not be transferred to the dubbed track, creating a less enjoyable viewing experience.

Subtitling as a practice

In several countries or regions nearly all foreign language TV programs are subtitled, instead of dubbed, notably in:

It is also common for television services in minority languages to subtitle their programs in the dominant language as well. Examples include the Welsh S4C and Irish TG4, which subtitle in English, and the Swedish-language FST5 in Finland, which subtitles in Finnish.

In Wallonia (Belgium), films are usually dubbed, but sometimes they are played on two channels at the same time: one dubbed (on La Une) and the other subtitled (on La Deux). Due to low ratings, however, this is no longer done much.

In Australia, one free-to-air (FTA) network, SBS, airs its foreign-language shows subtitled in English.

Categories

Subtitles in the same language on the same production can be in different categories:

Types

While distributing content, subtitles can appear in one of three types: hard (burned permanently into the video frames), prerendered (carried as separate images overlaid during playback, as on DVDs), or soft (carried as separate timed text rendered by the player).

In another categorization, digital video subtitles are sometimes called internal if they are embedded in a single video file container along with the video and audio streams, and external if they are distributed as a separate file (which is less convenient, but easier to edit or change).

Comparison table
Feature | Hard | Prerendered | Soft
Can be turned off/on | No | Yes | Yes
Multiple subtitle variants (for example, languages) | Yes, though all are displayed at the same time | Yes | Yes
Editable | No | Difficult, but possible | Yes
Player requirements | None | Majority of players support DVD subtitles | Usually requires installation of special software
Visual appearance, colors, font quality | Low; depends on video resolution/compression | Low | Low to high; depends on player and subtitle file format
Transitions, karaoke and other special effects | Highest | Low | Depends on player and subtitle file format, but generally poor
Distribution | Inside original video | Separate low-bitrate video stream, commonly multiplexed | Relatively small subtitle file or instructions stream, multiplexed or separate
Additional overhead | None, though subtitles added by re-encoding the original video may degrade overall image quality, and the sharp edges of text may introduce artifacts in surrounding video | High | Low

Subtitle formats

For software video players

Comparison table
Name | Extension | Type | Text styling | Metadata | Timings | Timing precision
AQTitle | .aqt | Text-based | No | No | Framings | Dependent on frame
JACOSub | .jss | Text-based | Yes | No | Elapsed time | 10 milliseconds
MicroDVD | .sub | Text-based | No | No | Framings | Dependent on frame
MPEG-4 Timed Text | .ttxt | XML | Yes | No | Elapsed time | 1 millisecond
MPSub | .sub | Text-based | No | Yes | Sequential time | 10 milliseconds
Ogg Writ | N/A (mixed with audio/video stream) | Text-based | Yes | Yes | Sequential granules | Dependent on bitstream
Phoenix Subtitle | .pjs | Text-based | No | No | Framings | Dependent on frame
PowerDivX | .psb | Text-based | No | No | Elapsed time | 1 second
RealText | .rt | HTML-based | Yes | No | Elapsed time | 10 milliseconds
SAMI | .smi | HTML-based | Yes | Yes | Framings | Dependent on frame
Structured Subtitle Format | .ssf | XML | Yes | Yes | Elapsed time | 1 millisecond
SubRip | .srt | Text-based | No | No | Elapsed time | 1 millisecond
(Advanced) SubStation Alpha | .ssa or .ass (advanced) | Text-based | Yes | Yes | Elapsed time | 10 milliseconds
SubViewer | .sub | Text-based | No | Yes | Elapsed time | 10 milliseconds
Universal Subtitle Format | .usf | XML | Yes | Yes | Elapsed time | 1 millisecond
VobSub | .sub + .idx | Image-based | N/A | N/A | Elapsed time | 1 millisecond
XSUB | N/A (embedded in .divx container) | Image-based | N/A | N/A | Elapsed time | 1 millisecond

There are still many more, less common formats. Most of them are text-based and have the extension .txt.
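Most of the text-based formats in the table are simple enough to parse by hand. As an illustrative sketch (not tied to any particular library), the snippet below reads the widely supported SubRip (.srt) layout: a numeric index, a `start --> end` timing line with millisecond precision, then one or more text lines, with cues separated by blank lines.

```python
import re

SRT_SAMPLE = """\
1
00:00:01,000 --> 00:00:04,000
Hello, world.

2
00:00:05,500 --> 00:00:07,250
(door creaks)
"""

TIME = r"(\d{2}):(\d{2}):(\d{2}),(\d{3})"

def parse_srt(text):
    """Return a list of (start_ms, end_ms, text) cues from SRT data."""
    cues = []
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        m = re.match(TIME + r" --> " + TIME, lines[1])
        h1, m1, s1, ms1, h2, m2, s2, ms2 = map(int, m.groups())
        start = ((h1 * 60 + m1) * 60 + s1) * 1000 + ms1
        end = ((h2 * 60 + m2) * 60 + s2) * 1000 + ms2
        cues.append((start, end, "\n".join(lines[2:])))
    return cues

print(parse_srt(SRT_SAMPLE)[0])  # → (1000, 4000, 'Hello, world.')
```

Image-based formats such as VobSub or XSUB cannot be handled this way; they carry pre-rendered pictures rather than text.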

For media

Subtitles created for TV broadcast are stored in a variety of file formats. The majority of these formats are proprietary to the vendors of subtitle insertion systems.

Broadcast subtitle formats include

.PAC .RAC .CHK .AYA .890 .CIP .CAP .ULT .USF .CIN .L32 .ST4 .ST7 .TIT .STL

The EBU format defined by Technical Reference 3264-E[10] is an 'open' format intended for subtitle exchange between broadcasters. Files in this format have the extension .stl.

The Timed Text format currently being drafted by the W3C (called DFXP[11]) is also proposed as an 'open' format for subtitle exchange.

Subtitles as a source of humor

Occasionally, movies will use subtitles as a source of humor.

One unintentional source of humor in subtitles comes from illegal DVDs produced in non-English-speaking countries (especially China). These DVDs often contain poorly worded subtitle tracks, possibly produced by machine translation, with humorous results. One of the better-known examples is a copy of Star Wars Episode III: Revenge of the Sith whose opening title was subtitled "Star war: The backstroke of the west".

Controversy

One recent controversy about the necessity of subtitles involved the Mel Gibson movie The Passion of the Christ. All the dialog in this film was in Aramaic, Latin and Hebrew. Gibson initially intended not to include subtitles in the belief that the audience already knew the story, but the distributors insisted that he include them, arguing that audiences would refuse to watch a film whose dialog was entirely untranslated.

Another controversy arising out of bad subtitling involved Bollywood's Lagaan: in one of the foreign-release prints, the English subtitles referred to the Hindu god Hanuman as "Monkey". This resulted in widespread protests, leading distributors to change the subtitling and issue an apology.

Yet another controversy surrounded Dreamworks' original DVD release of the anime Ghost in the Shell 2: Innocence, which did not have a subtitle track, only "closed captioning for the hearing impaired", which included text descriptions of audio events such as "[tires screech]" and "[birds cry]". So much customer dissatisfaction followed that when Dreamworks issued a revised DVD which finally included proper subtitles, it offered free replacements to customers who had already bought the original release.

Natsuko Toda, a prominent subtitle translator in Japan, is known for her stylized and sometimes dogmatic Japanese translations. In Star Wars Episode I: The Phantom Menace, she called the character Jar Jar Binks a native of the planet "Local" when Obi-Wan Kenobi says he is 'a local'. There were some more significant errors in the theatrical releases of the Lord of the Rings trilogy, which Toda translated without reading the original books. One of the most prominent contextual errors she made was killing off a king, the father of Boromir, in The Fellowship of the Ring, with her misguided translation. Such errors caused an uproar among the well-established fans of the books in Japan, which eventually led to a fan petition to the Japanese distributor to correct the mistakes. Word reached director Peter Jackson, and when the DVD versions came out, these mistakes were corrected with the assistance of Taeko Nakamura, a translator who was an assistant to Teiji Seta, the translator of the Lord of the Rings books into Japanese.

New technology and research

ESIST is a non-profit organization whose members are interested in subtitling research.

Media Movers, Inc. has developed proprietary software which performs automated timing (spotting) for audio/video content. It is also researching "automated translation" into multiple languages for any content.

TM Systems received Emmy awards in 2002 and 2007 for their dubbing and subtitling software.[12]

See also

References

  1. 1.00 1.01 1.02 1.03 1.04 1.05 1.06 1.07 1.08 1.09 1.10 1.11 1.12 Department of Communications, Information Technology and the Arts; Australian Caption Centre (1999-02-26). "Submissions to the captioning standards review | Department of Communications, Information Technology and the Arts" (Microsoft Word). Retrieved on 2007-04-04. "http://www.dcita.gov.au/__data/assets/word_doc/10835/Australian_Caption_Centre.doc"
  2. Caption Colorado (2002). "Caption Colorado". Retrieved on 2007-10-24. ""Real-time" vs. Newsroom Captioning
    Caption Colorado offers "real-time" closed captioning that utilizes unique technologies coupled with the talents of highly skilled captioners who use stenographic court reporting machines to transcribe the audio on-the-fly, as the words are spoken by the broadcasters. real-time captioning is not limited to pre-scripted materials and, therefore, covers 100% of the news, weather and sports segments of a typical local news broadcast. It will cover such things as the weather and sports segments which are typically not pre-scripted, last second breaking news or changes to the scripts, ad lib conversations of the broadcasters, emergency or other live remote broadcasts by reporters in-the-field. By failing to cover items such as these, newsroom style captioning (or use of the TelePrompTer for captioning) typically results in coverage of less than 30% of a local news broadcast. … 2002"
  3. 3.0 3.1 Caption Colorado (2002). "Caption Colorado". Retrieved on 2007-10-24. "Offline Captioning
    For non-live, or pre-recorded programs, you can choose from two presentation style models for offline captioning or transcription needs in English or Spanish.

    Premiere Offline Captioning
    Premiere Offline Captioning is geared toward the high-end television industry, providing highly customized captioning features, such as pop-on style captions, specialized screen placement, speaker identifications, italics, special characters, and sound effects.

    Premiere Offline involves a five-step design and editing process, and does much more than simply display the text of a program. Premiere Offline helps the viewer follow a story line, become aware of mood and feeling, and allows them to fully enjoy the entire viewing experience. Premiere Offline is the preferred presentation style for entertainment-type programming. … 2002"
  4. Federal Communications Commission (FCC) (2008-05-01). Closed Captioning and the DTV Transition (swf). Event occurs at 1m58s. "In addition to passing through closed caption signals, many converter boxes also include the ability to take over the captioning role that the tuner plays in your analog TV set. To determine whether your converter box is equipped to generate captions in this way, you should refer to the user manual that came with the converter box. If your converter box is equipped to generate captions in this way, then follow the instructions that came with the converter box to turn the captioning feature on/off via your converter box or converter box remote control. When you access the closed captions in this way, you also will be able to change the way your digital captions look. The converter box will come with instructions on how to change the caption size, font, caption color, background color, and opacity. This ability to adjust your captions is something you cannot do now with an analog television and analog captions."
  5. The Digital TV Transition - Audio and Video (2008-05-01). "What you need to know about the DTV Transition in American Sign Language: Part 3 - Closed Captioning - Flash Video". The Digital TV Transition: What You Need to Know About DTV. Federal Communications Commission (FCC). Retrieved on 2008-05-01. "http://www.dtv.gov/video/DTV_ASL-Part3.swf"
  6. Brij Kothari; Stuart Gannes & Kenneth Keniston (2006-12-21). ":: PlanetRead ::". SLS (Same Language Subtitles). PlanetRead. Retrieved on 2007-01-25. "http://www.planetread.org/images/res_01.jpg"
  7. Brij Kothari; Stuart Gannes & Kenneth Keniston (2006-12-21). ":: PlanetRead ::". SLS (Same Language Subtitles). PlanetRead. Retrieved on 2007-01-25. "Reading becomes automatic, subconscious, in everyday entertainment"
  8. Brij Kothari; Stuart Gannes & Kenneth Keniston (2006-12-21). ":: PlanetRead ::". SLS (Same Language Subtitles). PlanetRead. Retrieved on 2007-01-25. "http://www.planetread.org/images/paper_01_new-02.jpg"
  9. Switzerland's Federal Authorities (2006-08-08). "SR 101 Art. 4 Landessprachen (Bundesverfassung der Schweizerischen Eidgenossenschaft)" (in German). admin.ch - Homepage. The Federal Authorities of the Swiss Confederation. Retrieved on 2007-09-11. "1. Titel: Allgemeine Bestimmungen… Art. 4 Landessprachen
    Die Landessprachen sind Deutsch, Französisch, Italienisch und Rätoromanisch.

    1. Title: General Provisions… Article 4 National Languages
    The country's languages are German, French, Italian and Romansh." (in English)
  10. EBU Tech 3264: Specification of the EBU Subtitling data exchange format. http://www.ebu.ch/CMSimages/en/tec_doc_t3264_tcm6-10528.pdf
  11. W3C Timed Text (TT) Working Group. http://www.w3.org/AudioVideo/TT/
  12. Television Academy announces recipients of the 2007 Primetime Emmy Engineering Awards

Further reading

  1. "A semiolinguistic study of subtitling for an Automatically Processed Concise Writing (©APCW-ECAO) with an audiovisual application." Paris-X Nanterre University; National Center for Scientific Research (CNRS/ENS), France, 2005. Doctoral thesis (summa cum laude), available for download (PDF) at http://www.paulmemmi.com/?article=2&lang=en

External links