MUSIC-N
MUSIC-N refers to a family of computer music programs and programming languages descended from or influenced by MUSIC, a program written by Max Mathews in 1957 at Bell Labs. MUSIC is widely considered to be the first computer program for making music (in actuality, sound) on a digital computer, and it was certainly the first program to gain wide acceptance in the music research community as viable for that task. The world's first computer music was generated in Australia by programmer Geoff Hill on the CSIRAC computer, which was designed and built by Trevor Pearcey and Maston Beard. However, unlike the MUSIC-series programs, CSIRAC did not produce digital audio; its music was effectively created by a computer-controlled (analogue) audio device (thus making it a predecessor to MIDI).
MUSIC had a number of descendants, e.g.:
- MUSIC II, MUSIC III, MUSIC IV (all developed at Bell Labs)
- MUSIC IV-B (developed at Princeton University to run on an IBM mainframe)
- MUSIC IV-BF (re-written in FORTRAN, therefore portable)
- MUSIC V (the last of the Bell Labs line)
- MUSIC 360 (written by Barry Vercoe at MIT, descended from MUSIC IV-BF)
- Csound (descended from MUSIC 360 and in wide use today)
- CMix / Real-Time Cmix (by Paul Lansky, Brad Garton, and others)
- CMusic (by F. Richard Moore)
Less obviously, MUSIC can be seen as the parent program for:
- Max/MSP (perhaps obvious, given that the program was named after Max Mathews)
- Pure Data
- SuperCollider
- JSyn
- Common Lisp Music
- ChucK
- Any other computer synthesis language that relies on a modular system (e.g. Reaktor).
All MUSIC-N derivative programs share a (more or less) common design: a library of functions built around simple signal-processing and synthesis routines, written as opcodes or unit generators. The user combines these simple opcodes into an instrument (usually through a text-based instruction file, but increasingly through a graphical interface) that defines a sound, which is then "played" by a second file, called the score, specifying notes, durations, pitches, amplitudes, and other parameters relevant to the musical informatics of the piece. Some variants of the language merge the instrument and score, though most still distinguish between control-level functions (which operate on the music) and functions that run at the sampling rate of the audio being generated (which operate on the sound). A notable exception is ChucK, which unifies audio-rate and control-rate timing into a single framework, allowing arbitrarily fine time granularity and a single mechanism for managing both. This yields more flexible and readable code, at the cost of reduced system performance.
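To make this split concrete, here is a minimal sketch in Csound (one of the descendants listed above) of an orchestra/score pair; the instrument design, table numbers, and note values are invented for illustration, though the opcodes and score statements are standard Csound:

    ; orchestra file: instruments are built from unit generators (opcodes)
    sr     = 44100            ; audio sampling rate
    ksmps  = 32               ; samples per control period
    nchnls = 1                ; mono output
    0dbfs  = 1                ; full-scale amplitude is 1.0

    instr 1
      asig oscili p5, p4, 1   ; table-lookup oscillator: amplitude p5, frequency p4, f-table 1
      out  asig               ; send the signal to the output
    endin

    ; score file: builds function tables and schedules note events
    f 1 0 8192 10 1           ; f-table 1: one cycle of a sine wave (GEN10)
    i 1 0   1   440 0.5       ; instr 1: start 0 s, duration 1 s, 440 Hz, amplitude 0.5
    i 1 1.5 1   660 0.5
    e                         ; end of score

The p-fields of each i-statement (p4, p5, ...) are the note-level parameters referred to above; the instrument reads them by position.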
MUSIC and its descendants embody a number of highly original (and to this day largely unchallenged) assumptions about the best way to create sound on a computer. Many of Mathews's design decisions (such as the use of pre-calculated arrays for waveform and envelope storage, and a scheduler that runs in musical time rather than at the audio rate) remain the norm for most hardware and software synthesis and audio DSP systems today.
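Both conventions are easy to see in a Csound score, for example: f-statements pre-compute waveform and envelope tables before any audio is rendered, and a t-statement makes the scheduler interpret note times in beats rather than seconds. The fragment below is an illustrative sketch (it assumes the same orchestra header as the sketch above; table sizes and values are arbitrary):

    instr 2
      kenv oscili p5, 1/p3, 2   ; scan the stored envelope table once over the note's duration
      asig oscili kenv, p4, 1   ; table-lookup oscillator reading the stored sine wave
      out  asig
    endin

    f 1 0 8192 10 1                    ; pre-calculated sine wave (GEN10)
    f 2 0 1024 7 0 100 1 824 1 100 0   ; pre-calculated rise/sustain/decay envelope (GEN07)
    t 0 120                            ; tempo statement: score times below are in beats at 120 BPM
    i 2 0 2 440 0.4                    ; start and duration in beats, not seconds or samples
    i 2 2 2 330 0.4
    e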