A synth (or synthesiser) is an electronic instrument capable of producing a variety of sounds by generating and combining signals of different frequencies. A modern digital synthesizer uses a microprocessor to calculate mathematical functions that generate signals of different frequencies. There are three main types of synthesizer, which differ in operation: analog, digital and software-based. Synthesizers create electrical signals rather than direct acoustic sounds; these signals are then amplified through a loudspeaker or a set of headphones.
Synthesizers are typically controlled with a piano-style keyboard, in which each key functions as a switch to turn electronic circuits on and off. Although keyboards are the most common control interface, other devices such as saxophone-style wind controllers, MIDI-equipped electric guitars, drum pads or computers are used to control synthesizers. Synthesizers can produce a wide range of sounds, which can either imitate other instruments or generate unusual new timbres.
The first electric synthesizer was invented in 1876 by Elisha Gray, who is best known for his development of a telephone prototype.[1][2] Robert Moog created a revolutionary synthesizer, used on Wendy Carlos's Switched-On Bach (1968), a popular recording that introduced many musicians to the sound of synthesizers. In the 1970s, the development of miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments, which made them easier to use in live performances. By the early 1980s, companies such as Yamaha began selling compact, modestly priced synthesizers such as the DX7, and MIDI (Musical Instrument Digital Interface) was developed, which made it easier to integrate and synchronize synthesizers with other electronic instruments. In the 1990s, complex synthesizers no longer required specialist hardware and began to appear as software for the PC, often as hardware emulators with on-screen knobs and panels.
A modern digital synthesizer uses a microprocessor to calculate mathematical functions that generate signals of different frequencies. These frequencies are played through an output device such as a loudspeaker or set of headphones. In most conventional synthesizers, simulations of real instruments are built from several components, each representing the acoustic response of a different part of the instrument: the sounds produced during different parts of a performance, or the behavior of the instrument under different playing conditions (changes in pitch, intensity of playing, fingering). The distinctive timbre, intonation, and attack of a real instrument can therefore be created by mixing these components in a way that resembles the natural behavior of the actual instrument. Nomenclature varies by synthesizer methodology and manufacturer, but the components are often referred to as oscillators or partials. A higher-fidelity reproduction of a natural instrument can typically be achieved with more oscillators, but this requires more computing power and programming effort. Most synthesizers use between one and four oscillators per voice by default.
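As an illustration, mixing several oscillator components into one voice can be sketched in a few lines of Python; the frequencies, amplitudes and the `partial`/`mix` helpers below are arbitrary choices for illustration, not any particular synthesizer's implementation:

```python
import math

SAMPLE_RATE = 44100

def partial(freq, amp, n_samples, sample_rate=SAMPLE_RATE):
    """One sine-wave component (partial) at a given frequency and amplitude."""
    return [amp * math.sin(2 * math.pi * freq * n / sample_rate)
            for n in range(n_samples)]

def mix(*components):
    """Sum the components sample by sample to form the composite voice."""
    return [sum(samples) for samples in zip(*components)]

# A toy "voice" built from a fundamental plus two quieter overtones.
voice = mix(partial(440.0, 1.0, 1000),
            partial(880.0, 0.5, 1000),
            partial(1320.0, 0.25, 1000))
```

Shaping the relative amplitudes of such partials over time is what lets a synthesizer approximate the attack and timbre of a natural instrument.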
The device used to trigger musical sounds in the synthesizer is called the controller. Performers often play a synthesizer by depressing keys on a musical keyboard; however, a number of other controllers are used, including saxophone-style MIDI wind controllers and MIDI guitar synthesizer controllers. Most electronic keyboards use a keyboard matrix circuit, in which the key switches are wired into rows and columns. On electric and electronic keyboards, there is an electric switch under each key; depressing a key closes a circuit, which triggers the tone generation mechanism.
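A software model of this row-and-column scanning scheme might look like the following sketch; the `switches` mapping and the key-numbering convention are assumptions for illustration, standing in for the electrical contacts of a real matrix:

```python
# A minimal software model of a keyboard matrix scan. Switch states are
# held in a dict here; in hardware each entry would be a physical contact.

def scan_matrix(switches, n_rows, n_cols):
    """Drive each row in turn and read every column; return pressed keys.

    `switches` maps (row, col) -> True for closed contacts.
    """
    pressed = []
    for row in range(n_rows):          # corresponds to energizing one row line
        for col in range(n_cols):      # corresponds to sampling each column line
            if switches.get((row, col)):
                pressed.append(row * n_cols + col)  # key number
    return pressed

# Two keys held down on an 8x8 matrix (64 keys):
keys = scan_matrix({(0, 3): True, (5, 1): True}, 8, 8)
# keys -> [3, 41]
```

Wiring keys in a matrix lets an 8x8 grid cover 64 keys with only 16 wires, which is why the scheme is near-universal in electronic keyboards.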
There are three main types of synthesizers: analog, digital and software. In addition, some synthesizers rely upon combinations of these three types and are known as hybrid synthesizers.
Wavetable synthesis uses digital recordings of existing sounds, known as samples, replayed at a range of pitches.[3] Sample playback replaces the oscillator circuit found in other synthesizers.[4] Most music workstations process sounds using effects such as filters, low-frequency oscillation, and ring modulators. Sample playback changes the pitch of a sample by replaying it at a different speed rather than recomputing the waveform: to shift the frequency of a sound one octave higher, it is played at double speed; inversely, to shift it one octave lower, it is played at half speed. Instruments dedicated to recording and playing samples are known as samplers.
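This speed-based pitch shifting can be sketched in Python, with linear interpolation filling in values between stored sample points; the waveform and helper name below are illustrative:

```python
def resample(samples, speed):
    """Replay `samples` at `speed` times the original rate.

    speed=2.0 raises the pitch one octave; speed=0.5 lowers it one octave.
    Linear interpolation estimates values between stored sample points.
    """
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += speed
    return out

tone = [0.0, 1.0, 0.0, -1.0] * 100           # a crude stored waveform
octave_up = resample(tone, 2.0)              # fewer samples, twice the pitch
octave_down = resample(tone, 0.5)            # more samples, half the pitch
```

Note that replaying faster also shortens the sound; early samplers accepted this side effect, and modern time-stretching algorithms exist precisely to decouple pitch from duration.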
Due to the nature of digital sound storage, anti-aliasing and interpolation techniques are used to achieve a natural-sounding waveform. This is especially important if more than one note is played, or if arbitrary tone intervals are used. The calculations on sample data must be very precise (for high quality, around 64 bits), especially when many parameters are combined to create a specific sound, because rounding errors accumulate across the many calculations taking place.
Wavetable synthesis is used in certain digital music synthesizers to implement real-time additive synthesis and direct digital synthesis with minimum hardware. The technique was first developed by Wolfgang Palm in 1978, and has since been used in other synthesizers built by Yamaha, Korg and Waldorf Instruments.[5] It is commonly used in low-end MIDI instruments such as educational keyboards, and low-end sound cards.[3]
Physical modeling synthesis is the synthesis of sound by using a set of equations and algorithms to simulate a real instrument, or some other physical source of sound. When an initial set of parameters is run through the physical simulation, the simulated sound is generated. Although physical modeling was not a new concept in acoustics and synthesis, it wasn't until the development of the Karplus-Strong algorithm and the increase in DSP power in the late 1980s that commercial implementations became feasible.
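The Karplus-Strong algorithm itself is simple enough to sketch: a burst of noise circulates in a delay line whose length sets the pitch, and a simple averaging filter makes the tone decay like a plucked string. The implementation below is a minimal illustration, not a production synthesis engine:

```python
import random
from collections import deque

def karplus_strong(freq, n_samples, sample_rate=44100):
    """Karplus-Strong plucked string: noise circulates in a delay line
    whose length sets the pitch; averaging adjacent samples acts as a
    low-pass filter, so high frequencies die away fastest, as on a string."""
    period = int(sample_rate / freq)            # delay-line length sets pitch
    buf = deque(random.uniform(-1.0, 1.0) for _ in range(period))
    out = []
    for _ in range(n_samples):
        first = buf.popleft()
        out.append(first)
        buf.append(0.5 * (first + buf[0]))      # simple averaging filter
    return out

note = karplus_strong(220.0, 44100)             # about one second of a plucked A3
```

The appeal of physical modeling is visible even here: a handful of physically motivated operations produces a convincingly string-like decay with no stored samples at all.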
Digital synthesizers generate a digital sample, corresponding to a sound pressure, at a given sampling frequency (typically 44,100 samples per second). In the most basic case, each digital oscillator is modeled by a counter. For each sample, the counter of each oscillator is advanced by an amount that varies depending on the frequency of the oscillator. For harmonic oscillators, the counter indexes a table containing the oscillator's waveform. For random-noise oscillators, the most significant bits index a table of random numbers. The values indexed by each oscillator's counter are mixed, processed, and then sent to a digital-to-analog converter, followed by an analog amplifier.
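The counter scheme described above, often called a phase accumulator, can be sketched as follows; the table size, frequencies and mixing weights are arbitrary choices for illustration:

```python
import math

SAMPLE_RATE = 44100
TABLE_SIZE = 1024
# One cycle of a sine wave stored as the oscillator's waveform table.
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq, n_samples, table=SINE_TABLE, sample_rate=SAMPLE_RATE):
    """Phase-accumulator oscillator: a counter advances each sample by an
    increment proportional to the frequency and indexes the waveform table."""
    phase = 0.0
    increment = freq * len(table) / sample_rate   # table steps per sample
    out = []
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])
        phase += increment
    return out

# Mix two oscillators and scale, standing in for the mixing stage that
# would feed the digital-to-analog converter:
mixed = [0.5 * (a + b) for a, b in zip(oscillator(440.0, 512),
                                       oscillator(660.0, 512))]
```

A noise oscillator would differ only in indexing a table of random numbers instead of a stored waveform cycle.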
To eliminate the difficult multiplication step in the envelope generation and mixing, some synthesizers perform all of the above operations in a logarithmic coding, and add the current ADSR and mix levels to the logarithmic value of the oscillator, to effectively multiply it. To add the values in the last step of mixing, they are converted to linear values. Some digital synthesizers now exist in the form of software synthesizers, which synthesize sound using conventional computer hardware, usually a sound card. Others use a specialized digital signal processor.
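The trick can be illustrated numerically: adding logarithms is equivalent to multiplying the underlying values, so envelope and mix scaling reduce to additions plus one final conversion back to linear (the base-2 encoding and the level values below are arbitrary choices for illustration):

```python
import math

def to_log(x):
    """Encode a positive amplitude logarithmically (base 2 chosen here)."""
    return math.log2(x)

def from_log(l):
    """Convert back to a linear value; a table lookup in real hardware."""
    return 2.0 ** l

osc_amp = 0.8        # oscillator output level
adsr_level = 0.5     # current envelope (ADSR) level
mix_level = 0.25     # channel mix level

# Direct computation: two multiplies per sample.
direct = osc_amp * adsr_level * mix_level

# Log-domain computation: the multiplies become additions.
log_sum = to_log(osc_amp) + to_log(adsr_level) + to_log(mix_level)
linear = from_log(log_sum)
```

On the small fixed-point hardware of early digital synthesizers, trading multipliers for adders and lookup tables was a substantial saving.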
A fingerboard synthesizer is a synthesizer with a ribbon controller or other fingerboard-like user interface used to control parameters of the sound processing. A ribbon controller is similar to a touchpad. However, most ribbon controllers only register linear motion. Although it could be used to operate any sound parameter, a ribbon controller is most commonly associated with pitch control or pitch bending.
Older fingerboards used resistors with a long wire pressed to the resistive plate. Modern ribbon controllers do not contain moving parts. Instead, a finger pressed down and moved along it creates an electrical contact at some point along a pair of thin, flexible longitudinal strips whose electric potential varies from one end to the other. Various fingerboard instruments were developed, such as the Ondes Martenot, Hellertion, Heliophon, Trautonium, Electro-Theremin, Fingerboard-Theremin and the Persephone.
A ribbon controller is used as an additional controller on the Yamaha CS-80 and CS-60, the Korg Prophecy, Kurzweil synthesizers, Moog synthesizers and many others. A ribbon controller can also serve as the main MIDI controller in place of a keyboard, as on the Continuum.
The earliest digital synthesis was performed by software synthesizers on mainframe computers using methods exactly like those described in digital synthesis, above. Music was coded using punch cards to describe the type of instrument, note and duration. The formants of each timbre were generated as a series of sine waves, converted to fixed-point binary suitable for digital-to-analog converters, and mixed by adding and averaging. The data was written slowly to computer tape and then played back in real time to generate the music.
Today, a variety of software is available to run on modern high-speed personal computers. DSP algorithms are commonplace, and permit the creation of fairly accurate simulations of physical acoustic sources or electronic sound generators (oscillators, filters, VCAs, etc.). Some commercial programs offer quite lavish and complex models of classic synthesizers, from the Yamaha DX7 to the original Moog modular. Other programs allow the user complete control of all aspects of digital music synthesis, at the cost of greater complexity and difficulty of use.
The first electric synthesizer was invented in 1876 by Elisha Gray,[1] who is best known for his development of a telephone prototype. The "Musical Telegraph" was a chance by-product of his telephone technology. Gray accidentally discovered that he could control sound from a self-vibrating electromagnetic circuit, and in doing so invented a basic single-note oscillator. The Musical Telegraph used steel reeds whose oscillations were created and transmitted, over a telephone line, by electromagnets. Gray also built a simple loudspeaker device into later models, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible.
Other early synthesizers used technology derived from electronic analog computers, laboratory test equipment, and early electronic musical instruments. Ivor Darreg created his microtonal 'Electronic Keyboard Oboe' in 1937. Another early synthesizer was the ANS synthesizer, constructed by the Russian scientist Evgeny Murzin from 1937 to 1958. Only two models were built; the only surviving example is currently stored at Lomonosov University in Moscow.[6] It has been used in many Russian films, such as Solaris (1972), to produce unusual, "cosmic" sounds.
RCA produced experimental devices to synthesize voice and music in the 1950s. The Mark II Music Synthesizer, housed at the Columbia-Princeton Electronic Music Center in New York City in 1958, was only capable of producing music once it had been completely programmed.[2] The vacuum tube system had to be manually patched to create each type of sound. It used a paper tape sequencer punched with holes to control pitch sources and filters, similar to a mechanical player piano, but capable of generating a wide variety of sounds. In 1959, Daphne Oram at the BBC Radiophonic Workshop produced a novel synthesizer using her "Oramics" technique, driven by drawings on a 35 mm film strip; it was used for a number of years at the BBC.[7] Hugh Le Caine, John Hanert, Raymond Scott, composer Percy Grainger (with Burnett Cross), and others built a variety of automated electronic-music controllers during the late 1940s and 1950s.
By the 1960s, synthesizers were developed which could be played in real time, but were usually confined to studios due to their size. These synthesizers were usually configured using a modular design, with standalone signal sources and processors being connected with patch cords or by other means, and all controlled by a common controlling device.
Some early analog synthesizers were monophonic, producing only one tone at a time. A few, such as the Moog Sonic Six, ARP Odyssey and EML 101, were capable of producing two different pitches at a time when two keys were pressed. Polyphony (multiple simultaneous tones, which enables chords) was only obtainable with electronic organ designs at first. Popular electronic keyboards combining organ circuits with synthesizer processing included the ARP Omni and Moog's Polymoog and Opus 3. During the late 1970s and early 1980s, DIY (do-it-yourself) designs were published in hobby electronics magazines (notably the Formant modular synth, a DIY clone of the Moog system, published by Elektor) and kits were supplied by companies such as Paia in the US, and Maplin Electronics in the UK.
Most early synthesizers were experimental modular designs. Don Buchla, Hugh Le Caine, Raymond Scott and Paul Ketoff were among the first to build such instruments, in the late 1950s and early 1960s. Buchla later produced a commercial modular synthesizer, the Buchla Music Easel.[8] Robert Moog, who had been a student of Peter Mauzey and one of the RCA Mark II engineers, created a revolutionary synthesizer that could be used by musicians. Moog designed the circuits used in his synthesizer while he was at Columbia-Princeton. The Moog synthesizer was first displayed at the Audio Engineering Society convention in 1964.[9] Like the RCA Mark II, it required more experience to set up new sounds, but it was smaller and more intuitive than what had come before. Less like a machine and more like a musical instrument, the Moog synthesizer was at first a curiosity, but by 1968 had caused a sensation.
Moog also established standards for control interfacing, with a logarithmic 1-volt-per-octave pitch control and a separate pulse triggering signal. This standardization allowed synthesizers from different manufacturers to operate simultaneously. Pitch control is usually performed either with an organ-style keyboard or a music sequencer, which produces a series of control voltages over a fixed time period and allows some automation of music production. Other early commercial synthesizer manufacturers included ARP, who also started with modular synthesizers before producing all-in-one instruments, and British firm EMS.
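The 1-volt-per-octave convention maps control voltage to frequency exponentially: each added volt doubles the pitch. A small sketch of the relationship (the 0 V reference pitch below is an assumption for illustration):

```python
def cv_to_freq(volts, base_freq=261.63):
    """1-volt-per-octave control: each added volt doubles the frequency.

    base_freq is the pitch produced at 0 V; middle C (261.63 Hz) is
    assumed here purely for illustration."""
    return base_freq * 2.0 ** volts

cv_to_freq(0.0)       # 261.63 Hz
cv_to_freq(1.0)       # one octave up
cv_to_freq(1 / 12)    # one semitone up
```

Because the mapping is logarithmic in frequency, equal voltage steps correspond to equal musical intervals, which is what made keyboards and sequencers from different manufacturers interchangeable.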
Micky Dolenz of The Monkees bought the third Moog synthesizer ever made, and the band became the first to release an album featuring music from a Moog, Pisces, Aquarius, Capricorn & Jones Ltd., in 1967.[10] It also became the first album featuring a synthesizer to hit #1 on the charts. During the late 1960s, hundreds of other popular recordings used Moog synthesizer sounds. The Moog synthesizer even spawned a subculture of record producers who made novelty "Moog" recordings, depending on the odd new sounds made by their synthesizers (which were not always Moog units) to draw attention and sales.
In 1970, Moog designed an innovative synthesizer with a built-in keyboard and without modular design - the analog circuits were retained, but made interconnectable with switches in a simplified arrangement called "normalization". Though less flexible than a modular design, normalization made the instrument more portable and easier to use. This first pre-patched synthesizer, the Minimoog, became highly popular, with over 12,000 units sold.[11] The Minimoog also influenced the design of nearly all subsequent synthesizers, with integrated keyboard, pitch wheel and modulation wheel, and a VCO->VCF->VCA signal flow.
In the 1970s, miniaturized solid-state components allowed synthesizers to become self-contained, portable instruments, which soon began to be used in live performances. Electronic synthesizers quickly became a standard part of the popular-music repertoire. The first movie to make use of synthesized music was the James Bond film On Her Majesty's Secret Service in 1969. After the release of the film, a large number of movies were made with synthesized music. A few of them, such as John Carpenter's The Thing (1982), used only synthesized music in their scores.[12]
By 1976, the first true music synthesizers to offer polyphony had begun to appear, most notably the Yamaha GX1, CS-50, CS-60 and CS-80, and the Oberheim Four-Voice. These early instruments were very complex, heavy, and costly. Another feature that began to appear was the recording of knob settings in a digital memory, allowing sounds to be changed quickly. When microprocessors first appeared on the scene in the early 1970s, they were expensive and difficult to apply.
The first practical polyphonic synth, and the first to use a microprocessor as a controller, was the Sequential Circuits Prophet-5 introduced in late 1977.[13] For the first time, musicians had a practical polyphonic synthesizer that allowed all knob settings to be saved in computer memory and recalled by pushing a button. The Prophet-5 was also physically compact and lightweight, unlike its predecessors. This basic design paradigm became a standard among synthesizer manufacturers, slowly pushing out the more complex and recondite modular design. One of the first real-time polyphonic digital music synthesizers was the Coupland Digital Music Synthesizer. It was much more portable than a piano but never reached commercial production.
The Fairlight CMI (Computer Musical Instrument) was the first polyphonic digital sampling synthesizer.[14] It was designed in 1978 by the founders of Fairlight, Peter Vogel and Kim Ryrie, and based on a dual microprocessor computer designed by Tony Furse in Sydney, Australia. The Fairlight CMI gave musicians the ability to modify volume, attack, decay, and special effects like vibrato. Waveforms could also be modified on a computer monitor using a light pen.[15] It rose to prominence in the early 1980s and competed in the market with the Synclavier from New England Digital. The first buyers of the new system were Herbie Hancock, Peter Gabriel, Richard James Burgess, Todd Rundgren, Nick Rhodes of Duran Duran, producer Rhett Lawrence, Stevie Wonder and Ned "EBN" Liben of Ebn Ozn, who acted as Fairlight's New York expert liaison to the American musician community.[16]
The Kurzweil K250, first produced in 1983, was also a successful polyphonic digital music synthesizer.[17] It was noted for its ability to reproduce several instruments synchronously; the Kurzweil K250 also had a velocity-sensitive keyboard. It was priced at US$ 10,000.[18]
Most new synthesizers since the mid-1980s have been digital. Japanese manufacturers Yamaha and Casio both influenced digital synthesizers during the 1980s and 1990s. John Chowning, a professor at Stanford University, exclusively licensed his FM synthesis patent to Yamaha in 1975.[19] Yamaha subsequently released their first FM synthesizers, the GS-1 and GS-2, which were costly and heavy. These were followed by a pair of smaller, preset versions, the CE20 and CE25 Combo Ensembles, targeted primarily at the home organ market and featuring four-octave keyboards.[20] Yamaha's third generation of digital synthesizers, the DX7 and DX9 (1983), was a commercial success. Both models were compact, reasonably priced, and dependent on custom digital integrated circuits to produce FM tonalities. The DX7 was the first mass-market all-digital synthesizer.[21] It became indispensable to many music artists of the 1980s, and demand soon exceeded supply.[22] The DX7 sold over 200,000 units within three years.[23]
After the introduction of the DX series, Bo Tomlyn, original DX7 project manager Mike Malizola, and Chuck Monte founded Key Clique, Inc., which sold thousands of ROM cartridges with new FM/DX7 sounds to DX7 owners. The popularity of the DX series led to the demise of the heavy, electro-mechanical Rhodes piano during the 1980s, until its comeback in the 1990s. Yamaha later licensed its FM technology to other manufacturers. When the Stanford patent expired, many personal computers already contained an audio input-output system with a built-in 4-operator FM digital synthesizer.
Following the success of Yamaha's licensing of Stanford's FM synthesis patent, Yamaha signed a contract with Stanford University in 1989 to jointly develop digital waveguide synthesis. As such, most patents related to the technology are owned by Stanford or Yamaha. The first commercial physical modeling synthesizer was Yamaha's VL-1 in 1994.[24] Analog synthesizers have also revived in popularity since the 1980s. In recent years, the two trends have sometimes been combined as analog modeling synthesizers, or digital synthesizers that model analog synthesis using digital signal processing techniques. New analog instruments now also accompany the large number from the digital world.
Synthesizers became easier to integrate and synchronize with other electronic instruments and controllers with the introduction of Musical Instrument Digital Interface (MIDI) in 1983.[25] First proposed in 1981 by engineer Dave Smith of Sequential Circuits, the MIDI standard was developed by a consortium now known as the MIDI Manufacturers Association.[26] MIDI is an opto-isolated serial interface and communication protocol.[26] It provides for the transmission from one device or instrument to another of real-time performance data. This data includes note events, commands for the selection of instrument presets (i.e. sounds, or programs or patches, previously stored in the instrument's memory), the control of performance-related parameters such as volume, effects levels and the like, as well as synchronization, transport control and other types of data. MIDI interfaces are now almost ubiquitous on music equipment and are commonly available on personal computers (PCs).[26]
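As an example of the performance data MIDI carries, a Note On message consists of one status byte (0x90 plus the channel number) followed by two data bytes giving the note number and velocity; a minimal sketch:

```python
def note_on(channel, note, velocity):
    """Build the three bytes of a MIDI Note On message.

    Status byte: 0x90 plus the channel (0-15); the two data bytes carry
    the note number and velocity, each restricted to 0-127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

msg = note_on(0, 60, 100)   # middle C on channel 1 at moderate velocity
```

Releasing the key sends a corresponding Note Off (status 0x80), and the same three-byte pattern covers most channel voice messages, which is why the protocol was cheap to implement on the microcontrollers of the early 1980s.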
The General MIDI (GM) software standard was devised in 1991 to serve as a consistent way of describing a set of over 200 tones (including percussion) available to a PC for playback of musical scores.[27] For the first time, a given MIDI preset would consistently produce an instrumental sound on any GM-conforming device. The Standard MIDI File (SMF) format (extension .mid) combined MIDI events with delta times, a form of time-stamping, and became a popular standard for exchange of music scores between computers. In the case of SMF playback using integrated synthesizers (as in computers and cell phones), the hardware component of the MIDI interface design is often unneeded.
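Delta times in an SMF are stored as variable-length quantities: seven bits per byte, most significant group first, with the high bit set on every byte except the last. A sketch of the encoding:

```python
def encode_delta(ticks):
    """Encode an SMF delta time as a variable-length quantity:
    7 bits per byte, high bit set on all bytes except the last."""
    parts = [ticks & 0x7F]           # least significant 7 bits, final byte
    ticks >>= 7
    while ticks:
        parts.append((ticks & 0x7F) | 0x80)   # continuation bytes
        ticks >>= 7
    return bytes(reversed(parts))

encode_delta(0)     # b'\x00'
encode_delta(128)   # b'\x81\x00'
```

This scheme keeps short gaps between events (the common case) down to a single byte while still allowing arbitrarily long pauses.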
Open Sound Control (OSC) is a proposed replacement for MIDI designed for networking. In contrast with MIDI, OSC allows thousands of synthesizers or computers to share music performance data over the Internet in real time.
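A minimal sketch of an OSC message following the OSC 1.0 encoding rules: strings are null-terminated and padded to four-byte boundaries, a type tag string beginning with a comma describes the arguments, and numeric arguments are big-endian. The address `/synth/1/freq` is a hypothetical example:

```python
import struct

def osc_string(s):
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode() + b'\x00'
    return b + b'\x00' * (-len(b) % 4)

def osc_message(address, value):
    """A minimal OSC message carrying one 32-bit float argument (',f')."""
    return osc_string(address) + osc_string(',f') + struct.pack('>f', value)

msg = osc_message('/synth/1/freq', 440.0)
# This packet could be sent over UDP to any OSC-aware synthesizer.
```

The slash-separated address space is what gives OSC its flexibility over MIDI's fixed message set: a receiver can expose any hierarchy of named parameters.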
The synthesizer has had a large impact on modern music over the past forty years.[28] The first significant influence of the instrument came during the 1970s and 1980s. Wendy Carlos's Switched-On Bach (1968), recorded using Moog synthesizers, influenced numerous musicians of that era; it is one of the most popular classical music recordings ever made, and the first to go Platinum.[29]
The synthesizer's notable influence during the late 1970s and 1980s led to mainstream popularity among renowned music artists. The first major artists to fully use the synthesizer included Wendy Carlos,[29] Jean Michel Jarre, Vangelis, Tangerine Dream, Kitaro, Stevie Wonder, Peter Gabriel, Kate Bush, Kraftwerk, Ultravox and Yellow Magic Orchestra. English musician Gary Numan was influenced by Kraftwerk, Ultravox and David Bowie. Numan's 1979 hit Are 'Friends' Electric? used synthesizers heavily.[30] Numan continued to use synthesizers throughout most of his career, including the 1980 hit Cars.[31]
The emergence of Synthpop, a subgenre of New Wave that lasted from the late 1970s to the mid 1980s, can be largely credited to the synthesizer. Its influence on the movement in the United Kingdom during the 1980s was evident in the work of Nick Rhodes, keyboardist of Duran Duran, who used Roland Jupiter-4 and Jupiter-8 synthesizers.[32] The synthesizer technology and Germanic ambience of Kraftwerk, and of David Bowie during his Berlin period (1976-77), were both crucial in the development of the genre.[33] By 1981, many artists had adopted the Synthpop sound and experienced chart success, such as Depeche Mode, Visage, OMD, and Ultravox.[33] Duran Duran and Spandau Ballet were classed as leaders of the genre in 1981. Many other acts followed, including Soft Cell, Culture Club, Eurythmics and Blancmange, by which time synthesizers were one of the most important instruments in the music industry.[33]
The synthesizer introduced many recognizable sounds in the 1980s. OMD's Enola Gay (1980) used distinctive electronic percussion and a synthesized melody. Soft Cell used a synthesized melody in their 1981 hit Tainted Love.[34] Other chart hits include Depeche Mode's Just Can't Get Enough (1981),[34] and The Human League's Don't You Want Me.[35] The sounds varied between artists and songs, but all were distinctively produced using synthesizers.[36]