Audio mastering
From Wikipedia, the free encyclopedia
Audio mastering, a form of audio post-production, is the art of optimizing a recording's sound to meet industry standards and of transferring the recorded audio, as a cohesive program, to a data storage device that will be used in the production of copies.
The tonal balance of this cohesive audio program is fine-tuned through equalization and compression. Further tasks such as editing, pre-gapping, leveling, fading in and out, noise reduction, and other signal restoration and enhancement processes can also be applied as part of the mastering stage.
A mastering engineer may also be required to take additional steps, such as creating a PMCD (Pre-Mastered Compact Disc), in which the cohesive material is transferred to a master disc for mass replication.
A well-structured PMCD is crucial for a successful transfer to the glass master that will generate the stampers used for reproduction.
Thus, the chosen medium is used as the source from which all copies will be made (via methods such as pressing, duplication or replication).
History
Pre-1940s
In the earliest days of the recording industry, all phases of the recording and mastering process were entirely achieved by mechanical processes. Performers sang and/or played into a large acoustic horn and the master recording was created by the direct transfer of acoustic energy from the diaphragm of the recording horn to the mastering lathe, which was typically located in an adjoining room. The cutting head, driven by the energy transferred from the horn, inscribed a modulated groove into the surface of a rotating cylinder or disc. These masters were usually made from either a soft metal alloy or from wax; this gave rise to the colloquial term "waxing", referring to the cutting of a record.
After the introduction of the microphone and electronic amplification in the mid-1920s, the mastering process became electro-mechanical, and electrically driven mastering lathes came into use for cutting master discs (the cylinder format by then having been superseded).
However, until the introduction of tape recording, master recordings were almost always cut direct-to-disc. Artists performed live in a specially-designed studio and as the performance was underway, the signal was routed from the microphones via a mixing desk in the studio control room to the mastering lathe, where the disc was cut as the performance took place. Only a small minority of recordings were mastered using previously recorded material sourced from other discs.
Advances
The recording industry was revolutionized by the introduction of magnetic tape in the late 1940s, which enabled master discs to be cut separately in time and space from the actual recording process. Although tape and other technical advances dramatically improved audio quality of commercial recordings in the post-war years, the basic constraints of the electro-mechanical mastering process remained, and the inherent physical limitations of the main commercial recording media -- the 78rpm disc and the later 7" single and LP record -- meant that the audio quality, dynamic range and running time of master discs was still relatively limited compared to later media such as the compact disc.
Running times were constrained by the diameter of the disc and the density with which grooves could be inscribed on the surface without cutting into each other. Dynamic range was also limited by the fact that, if the signal level coming from the master tape was too high, the highly sensitive cutting head might jump off the surface of the disc during the cutting process.
From the 1950s until the advent of digital recording in the late 1980s, the mastering process typically went through several stages. Once the studio recording on multi-track tape was complete, a final mix was prepared and dubbed down to the master tape -- usually either a single-track monophonic or two-track stereo tape.
Prior to the cutting of the master disc, the master tape was often subjected to further electronic treatment by a specialist mastering engineer. After the advent of tape it was found that, especially for pop recordings, master recordings could be 'optimized' by making fine adjustments to the balance and equalization prior to the cutting of the master disc.
Mastering became a highly skilled craft and it was widely recognized that good mastering could make or break a commercial pop recording. As a result, during the peak years of the pop music boom from the 1950s to the 1980s, the best mastering engineers were in high demand.
In large recording companies such as EMI, the mastering process was usually controlled by specialist staff technicians who were conservative in their work practices. These big companies were often reluctant to change their recording and production processes -- for example, EMI was very slow to take up innovations in multi-track recording and did not install 8-track recorders at its Abbey Road Studios until the late 1960s, more than a decade after the first commercial 8-track recorders were installed by American independent studios. As a result, by the time The Beatles were making their groundbreaking recordings in the mid-Sixties, they often found themselves at odds with EMI's mastering engineers, who were unwilling to meet the group's demands to 'push' the mastering process. The engineers feared that if levels were set too high, the stylus would jump out of the groove when the record was played on consumers' equipment.
Digital technology
In the 1990s, the old electro-mechanical processes were largely superseded by digital technology, with digital recordings transferred to digital masters by an optical etching process that employs laser technology. The digital audio workstation (DAW) became common in many mastering facilities, allowing the off-line manipulation of recorded audio via a graphical user interface (GUI).
Process
The process of audio mastering varies depending on the specific needs of the audio to be processed. Steps of the process typically include but are not limited to:
- Transferring the recorded audio tracks into the digital audio workstation (DAW).
- Adjusting volume levels and tonal balance to create a "cohesive" program that meets industry standards.
- Sequencing the separate songs or tracks, including the spaces between them, as they will appear on the final product (for example, an audio CD).
- Transferring the audio to the final master format (for example, a Red Book-compatible audio CD or a CD-ROM).
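As a rough illustration, the sequencing step above can be sketched in code. This is a simplification, not how a DAW or CD-authoring tool actually works: it assumes mono tracks represented as lists of float samples, a fixed sample rate, and a uniform silent gap between tracks (the conventional Red Book pre-gap is two seconds). The function name is illustrative.

```python
def sequence_tracks(tracks, gap_seconds=2.0, sample_rate=44100):
    """Concatenate tracks, inserting a silent pre-gap before each
    track after the first, to form one continuous program."""
    gap = [0.0] * int(gap_seconds * sample_rate)  # silence as zero samples
    program = []
    for i, track in enumerate(tracks):
        if i > 0:
            program.extend(gap)  # space between tracks
        program.extend(track)
    return program
```

In real CD authoring the gaps live in the disc's subcode rather than as literal audio, and gap lengths are often varied per track for artistic pacing.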
Examples of possible actions taken during mastering:
- Apply noise reduction to eliminate hum and hiss.
- Peak-limit the tracks. They may be set to a higher output level according to the needs and demands of the record's artists and producers, but the overall audio output should never exceed 0 dBFS.
- Equalize audio between tracks to ensure program cohesiveness.
- Widen the stereo field or stereo image as needed.
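The peak-limiting rule above can be illustrated with a minimal sketch. Note that this is peak normalization to a ceiling rather than a true limiter (a real limiter applies gain reduction only where the signal exceeds a threshold, usually with look-ahead and a smoothed gain envelope). Samples are assumed to be floats where 1.0 corresponds to 0 dBFS, and the function name is illustrative.

```python
def peak_limit(samples, ceiling_dbfs=-0.3):
    """Scale the whole track so its loudest peak sits at ceiling_dbfs."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)  # silence: nothing to scale
    ceiling_linear = 10 ** (ceiling_dbfs / 20.0)  # dBFS -> linear amplitude
    gain = ceiling_linear / peak
    return [s * gain for s in samples]
```

Keeping the ceiling slightly below 0 dBFS leaves headroom for inter-sample peaks that can appear during digital-to-analog conversion.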
The guidelines above mainly describe the digital mastering process and are not specific instructions to be applied in every situation. The mastering engineer must examine the type of input media, the expectations of the source producer or recipient, and the limitations of the end medium, and process the material accordingly. General rules of thumb can rarely be applied.
RMS in music, average loudness
The root mean square (RMS) level, in audio production terminology, is a measure of average level and is reported by many software tools. In practice, a larger (less negative) RMS number means a higher average level; e.g., -9 dBFS RMS is 2 dB louder than -11 dBFS RMS. The maximum possible RMS value is therefore zero. The loudest records of modern music measure around -7 to -9 dBFS RMS, the softest around -12 to -16 dBFS RMS. The RMS level is no absolute guarantee of loudness, however; the perceived loudness of signals with similar RMS levels can vary widely, since loudness perception depends on several factors, including the spectrum of the sound (see the Fletcher-Munson curves) and the density of the music (e.g., a slow ballad versus fast rock).
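The RMS figures above can be reproduced with a short calculation, sketched here under the assumption that samples are floats where 1.0 corresponds to 0 dBFS:

```python
import math

def rms_dbfs(samples):
    """Return the average (RMS) level of the samples in dBFS."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 20.0 * math.log10(math.sqrt(mean_square))

# A full-scale sine wave measures about -3 dBFS RMS even though its
# peaks touch 0 dBFS: RMS reflects average level, not peak level.
sine = [math.sin(2 * math.pi * i / 1000) for i in range(1000)]
```

This is why a dense, heavily compressed track can read -8 dBFS RMS while a dynamic recording with identical peak levels reads -14 dBFS RMS.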
Audio mastering tools