Concatenative synthesis

Concatenative synthesis is a technique for synthesising sounds by concatenating short samples of recorded sound (called units). The duration of the units is not strictly defined and varies between implementations, typically from about 10 milliseconds up to 1 second. It is used in speech synthesis and in musical sound synthesis to generate user-specified sequences of sound from a database (corpus) built from recordings of other sequences.
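
At its core, the technique selects from the database the units that best match a target description and splices them together. The following Python sketch illustrates this selection-and-concatenation step under simplifying assumptions (fixed 50 ms units and a two-value feature vector of energy and zero-crossing rate; real systems use variable unit lengths and richer spectral descriptors) and is not taken from any particular implementation:

    import numpy as np

    def split_into_units(signal, sr, unit_dur=0.05):
        # Cut the recording into fixed-length units and describe each unit by
        # a simple feature vector: RMS energy and zero-crossing rate.
        hop = int(sr * unit_dur)
        units, features = [], []
        for start in range(0, len(signal) - hop + 1, hop):
            u = signal[start:start + hop]
            rms = np.sqrt(np.mean(u ** 2))
            zcr = np.mean(np.abs(np.diff(np.sign(u)))) / 2
            units.append(u)
            features.append((rms, zcr))
        return units, np.array(features)

    def concatenative_resynthesis(target, corpus, sr):
        # Rebuild the target sound unit by unit, each time concatenating the
        # corpus unit whose features are closest to the target unit's features.
        corpus_units, corpus_feats = split_into_units(corpus, sr)
        target_units, target_feats = split_into_units(target, sr)
        out = [corpus_units[np.argmin(np.linalg.norm(corpus_feats - f, axis=1))]
               for f in target_feats]
        return np.concatenate(out) if out else np.zeros(0)

A call such as concatenative_resynthesis(voice, orchestra_recording, sr) would render the target's coarse energy contour out of orchestral grains; practical systems also weigh how well consecutive units join, not only how well each unit matches its target.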

In speech

In speech synthesis, the database typically contains units such as phones, diphones, syllables or words cut from recorded speech; at synthesis time, units matching the target utterance are selected and joined, as in diphone synthesis and unit selection synthesis.

In music

Concatenative synthesis for music began to develop in the 2000s, in particular through the work of Schwarz [1] and of Zils and Pachet [2] (so-called musaicing). The basic techniques are similar to those used for speech, but differ because of the differing nature of the material: for example, the corpus is segmented not into phonetic units but often into subunits of musical notes or events.[1][3][4]
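
Where a speech synthesiser might segment its corpus into diphones, a musical system may instead cut the corpus at note onsets or other event boundaries. A simplified sketch of such event-based segmentation is given below; the energy-jump onset detector, frame length and threshold are illustrative choices only, not those of the cited systems:

    import numpy as np

    def segment_at_onsets(signal, sr, frame=0.02, jump=2.0):
        # Split the recording where short-term energy rises sharply, so that
        # units roughly follow note/event boundaries instead of a fixed grid.
        hop = int(sr * frame)
        energy = np.array([np.sum(signal[i:i + hop] ** 2)
                           for i in range(0, len(signal) - hop + 1, hop)])
        boundaries = [0]
        for i in range(1, len(energy)):
            if energy[i] > jump * (energy[i - 1] + 1e-12):
                boundaries.append(i * hop)
        boundaries.append(len(signal))
        return [signal[a:b] for a, b in zip(boundaries[:-1], boundaries[1:]) if b > a]

The resulting variable-length units can then be described by features and selected in the same way as in the fixed-grid sketch above.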

See also

References

  1. Schwarz, Diemo (2004-01-23), Data-Driven Concatenative Sound Synthesis, retrieved 2010-01-15
  2. Zils, A. and Pachet, F. (2001), "Musical Mosaicing" (PDF), Proceedings of the COST G-6 Conference on Digital Audio Effects (DaFx-01), University of Limerick: 39–44, retrieved 2011-04-27
  3. Schwarz, D. (2005), "Current research in Concatenative Sound Synthesis" (PDF), Proceedings of the International Computer Music Conference (ICMC)
  4. Maestre, E. and Ramírez, R. and Kersten, S. and Serra, X. (2009), "Expressive Concatenative Synthesis by Reusing Samples from Real Performance Recordings", Computer Music Journal 33 (4): 23–42, doi:10.1162/comj.2009.33.4.23