Voice activity detection

Voice activity detection (VAD), also known as speech activity detection or speech detection, is a technique used in speech processing in which the presence or absence of human speech is detected.[1] The main uses of VAD are in speech coding and speech recognition. It can facilitate speech processing, and can also be used to deactivate some processes during non-speech sections of an audio session: it can avoid unnecessary coding/transmission of silence packets in Voice over Internet Protocol applications, saving on computation and on network bandwidth.

VAD is an important enabling technology for a variety of speech-based applications. Therefore, various VAD algorithms have been developed that provide varying features and compromises between latency, sensitivity, accuracy and computational cost. Some VAD algorithms also provide further analysis, for example whether the speech is voiced, unvoiced or sustained. Voice activity detection is usually language independent.

It was first investigated for use on time-assignment speech interpolation (TASI) systems.

Algorithm overview

The typical design of a VAD algorithm is as follows:[1]

  1. There may first be a noise reduction stage, e.g. via spectral subtraction.
  2. Then some features or quantities are calculated from a section of the input signal.
  3. A classification rule is applied to classify the section as speech or non-speech; often the rule is simply that a calculated feature value exceeds a threshold.
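As a minimal sketch of steps 2 and 3, a frame's short-time energy can serve as the feature and a fixed threshold as the classification rule (the function name, frame length and threshold here are illustrative, not from any standard):

```python
import numpy as np

def vad_energy(signal, frame_len=160, threshold_db=-40.0):
    """Classify each frame as speech (True) or non-speech (False)
    by comparing its log energy to a fixed threshold in dB."""
    n_frames = len(signal) // frame_len
    decisions = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        # Short-time energy in dB (epsilon avoids log of zero)
        energy_db = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)
        decisions.append(energy_db > threshold_db)
    return decisions
```

For example, one loud sinusoidal frame followed by one near-silent frame yields the decisions [True, False].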

There may be some feedback in this sequence, in which the VAD decision is used to improve the noise estimate in the noise reduction stage, or to adaptively vary the threshold(s). These feedback operations improve the VAD performance in non-stationary noise (i.e. when the noise varies a lot).[1]
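Such decision feedback can be sketched by letting non-speech frames update a running noise-floor estimate, with the threshold tracking that floor plus a margin (the smoothing constant and margin below are illustrative assumptions):

```python
import numpy as np

def vad_adaptive(signal, frame_len=160, margin_db=6.0, alpha=0.95):
    """Energy VAD with decision feedback: frames judged non-speech
    refresh a noise-floor estimate, so the threshold adapts to
    slowly varying (non-stationary) noise."""
    noise_db = None
    decisions = []
    for i in range(len(signal) // frame_len):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        energy_db = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)
        if noise_db is None:
            noise_db = energy_db          # initialise from the first frame
        speech = energy_db > noise_db + margin_db
        if not speech:
            # Feedback: only non-speech frames update the noise estimate
            noise_db = alpha * noise_db + (1 - alpha) * energy_db
        decisions.append(speech)
    return decisions
```

Because the noise estimate is frozen while speech is detected, long utterances do not drag the threshold upward.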

A representative set of recently published VAD methods formulates the decision rule on a frame-by-frame basis using instantaneous measures of the divergence distance between speech and noise. Measures used in VAD methods include spectral slope, correlation coefficients, log likelihood ratios, cepstral and weighted cepstral distances, and modified distance measures.
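As one illustrative example of such a per-frame feature (a correlation coefficient), the normalised lag-1 autocorrelation tends toward 1 for strongly correlated signals such as voiced speech, and toward 0 for white noise:

```python
import numpy as np

def lag1_autocorr(frame):
    """Normalised lag-1 autocorrelation of a frame: near 1 for
    strongly correlated signals (e.g. voiced speech), near 0 for
    white noise. The feature choice is illustrative."""
    frame = frame - np.mean(frame)
    denom = np.dot(frame, frame) + 1e-12  # epsilon guards silent frames
    return float(np.dot(frame[:-1], frame[1:]) / denom)
```

A sinusoid sampled well above its frequency scores close to 1, while Gaussian noise scores close to 0, so a threshold on this value separates the two.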

Independently of the choice of VAD algorithm, a compromise must be made between having voice detected as noise or noise detected as voice (between false negatives and false positives). A VAD operating in a mobile phone must be able to detect speech in the presence of a range of very diverse types of acoustic background noise. In these difficult detection conditions it is often preferable that a VAD should fail-safe, indicating speech detected when the decision is in doubt, to lower the chance of losing speech segments. The biggest difficulty in the detection of speech in this environment is the very low signal-to-noise ratios (SNRs) that are encountered. It may be impossible to distinguish between speech and noise using simple level detection techniques when parts of the speech utterance are buried below the noise.
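One common way to bias a VAD toward fail-safe behaviour is a "hangover" period: after any speech frame, the detector keeps reporting speech for a few further frames, trading extra false positives for less clipped speech. A minimal sketch (the hangover length is an illustrative assumption):

```python
def apply_hangover(raw_decisions, hangover=5):
    """Fail-safe smoothing: after any frame classified as speech,
    keep reporting speech for `hangover` further frames, so errors
    lean toward retained noise rather than clipped speech."""
    out = []
    counter = 0
    for speech in raw_decisions:
        if speech:
            counter = hangover
            out.append(True)
        elif counter > 0:
            counter -= 1
            out.append(True)   # still inside the hangover period
        else:
            out.append(False)
    return out
```

A single detected speech frame thus keeps the output active for several following frames before the detector falls back to non-speech.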

Applications

For a wide range of applications such as digital mobile radio, Digital Simultaneous Voice and Data (DSVD) or speech storage, it is desirable to provide a discontinuous transmission of speech-coding parameters. Advantages can include lower average power consumption in mobile handsets, higher average bit rate for simultaneous services like data transmission, or a higher capacity on storage chips. However, the improvement depends mainly on the percentage of pauses during speech and the reliability of the VAD used to detect these intervals. On the one hand, it is advantageous to have a low percentage of speech activity. On the other hand, clipping (the loss of milliseconds of active speech) should be minimized to preserve quality. This is the crucial problem for a VAD algorithm under heavy noise conditions.
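Discontinuous transmission can be sketched as a per-frame schedule: speech frames are sent in full, while during silence only an occasional silence-descriptor ("SID") frame is sent so the receiver can synthesise comfort noise. The frame labels and SID period below are illustrative, not taken from any particular codec standard:

```python
def dtx_schedule(vad_flags, sid_period=8):
    """Discontinuous-transmission sketch: returns one of
    "SPEECH", "SID" or "NODATA" per frame."""
    out = []
    silence_run = 0
    for speech in vad_flags:
        if speech:
            silence_run = 0
            out.append("SPEECH")
        elif silence_run % sid_period == 0:
            out.append("SID")      # periodic noise-parameter update
            silence_run += 1
        else:
            out.append("NODATA")   # nothing transmitted this frame
            silence_run += 1
    return out
```

The bandwidth saving is then roughly the fraction of "NODATA" frames, which is why a low percentage of speech activity is advantageous.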

Use in telemarketing

One controversial application of VAD is in conjunction with predictive dialers used by telemarketing firms. In order to maximize agent productivity, telemarketing firms set up predictive dialers to call more numbers than they have agents available, knowing most calls will end up in either “Ring – No Answer” or answering machines. When a person answers, they typically speak briefly (“Hello”, “Good evening”, etc.) and then there is a brief period of silence. Answering machine messages usually contain 3–15 seconds of continuous speech. By setting VAD parameters correctly, dialers can determine whether a person or a machine answered the call, and if it's a person, transfer the call to an available agent. If it detects an answering machine, the dialer hangs up. Often, the system correctly detects a person answering the call, but no agent is available. This leaves the called party frustratedly repeating “Hello? Hello?” into the phone, and, combined with the overall volume of such calls, created the impetus to develop “Do Not Call” lists across the US.
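The duration heuristic described above can be sketched as follows; the frame length and greeting-duration cut-off are illustrative assumptions, not values from any actual dialer:

```python
def classify_answer(speech_flags, frame_ms=20, live_max_ms=2000):
    """Measure the length of the first continuous burst of speech
    after the call is answered: a short greeting followed by silence
    suggests a live person, while several seconds of continuous
    speech suggests an answering-machine message."""
    burst = 0
    for speech in speech_flags:
        if speech:
            burst += 1
        elif burst > 0:
            break                      # the first burst has ended
    return "live" if burst * frame_ms <= live_max_ms else "machine"
```

A 0.6-second “Hello” followed by silence classifies as "live", whereas four seconds of uninterrupted speech classifies as "machine".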

Performance evaluation

To evaluate a VAD, its output using test recordings is compared with that of an “ideal” VAD – created by hand-annotating the presence/absence of voice in the recordings. The performance of a VAD is commonly evaluated on the basis of the following four parameters:[2]

  1. FEC (front end clipping): clipping introduced in passing from noise to speech activity.
  2. MSC (mid speech clipping): clipping due to speech misclassified as noise.
  3. OVER: noise interpreted as speech due to the VAD flag remaining active in passing from speech activity to noise.
  4. NDS (noise detected as speech): noise interpreted as speech within a silence period.
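Frame-level scoring against a hand-annotated reference can be sketched as follows; this simplified version reports only two aggregate rates (speech clipping and false alarms) rather than the exact parameter set of any published evaluation:

```python
def score_vad(reference, decisions):
    """Compare a VAD's per-frame decisions against a hand-annotated
    reference. Returns (clipping_rate, false_alarm_rate): the fraction
    of reference speech frames the VAD missed, and the fraction of
    reference noise frames the VAD marked as speech."""
    assert len(reference) == len(decisions)
    speech = sum(1 for r in reference if r)
    noise = len(reference) - speech
    clipped = sum(1 for r, d in zip(reference, decisions) if r and not d)
    false_alarm = sum(1 for r, d in zip(reference, decisions) if not r and d)
    # max(..., 1) guards against recordings with no speech or no noise
    return (clipped / max(speech, 1), false_alarm / max(noise, 1))
```

With three reference speech frames of which one is missed, and two reference noise frames of which one is accepted, the rates are one third and one half respectively.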

Although the method described above provides useful objective information concerning the performance of a VAD, it is only an approximate measure of the subjective effect. For example, the effects of speech signal clipping can at times be hidden by the presence of background noise, depending on the model chosen for the comfort noise synthesis, so some of the clipping measured with objective tests is in reality not audible. It is therefore important to carry out subjective tests on VADs, the main aim of which is to ensure that the clipping perceived is acceptable. This kind of test requires a certain number of listeners to judge recordings containing the processing results of the VADs being tested. The listeners give marks on a number of perceptual features, such as how audible any clipping of the speech is.

These marks, obtained by listening to several speech sequences, are then used to calculate average results for each feature considered, thus providing a global estimate of the behavior of the VAD being tested. To conclude, whereas objective methods are very useful in an initial stage to evaluate the quality of a VAD, subjective methods are more significant. However, since they are more expensive (they require the participation of a number of people for a few days), they are generally only used when a proposal is about to be standardized.

Implementations

See also

References

  1. Ramírez, J.; Górriz, J. M.; Segura, J. C. (2007). "Voice Activity Detection. Fundamentals and Speech Recognition System Robustness" (PDF). In M. Grimm and K. Kroschel. Robust Speech Recognition and Understanding. pp. 1–22. ISBN 978-3-902613-08-0.
  2. Beritelli, F.; Casale, S.; Ruggeri, G.; Serrano, S. (March 2002). "Performance evaluation and comparison of G.729/AMR/fuzzy voice activity detectors". IEEE Signal Processing Letters 9 (3): 85–88. doi:10.1109/97.995824.
  3. Freeman, D. K. (May 1989). "The voice activity detector for the Pan-European digital cellular mobile telephone service". Proc. International Conference on Acoustics, Speech, and Signal Processing (ICASSP-89). pp. 369–372. doi:10.1109/ICASSP.1989.266442.
  4. Benyassine, A.; Shlomot, E.; Huan-yu Su; Massaloux, D.; Lamblin, C.; Petit, J.-P. (Sep 1997). "ITU-T Recommendation G.729 Annex B: a silence compression scheme for use with G.729 optimized for V.70 digital simultaneous voice and data applications". IEEE Communications Magazine 35 (9): 64–73. doi:10.1109/35.620527.
  5. ETSI (1999). "GSM 06.42, Digital cellular telecommunications system (Phase 2+); Half rate speech; Voice Activity Detector (VAD) for half rate speech traffic channels". 8.0.1. ETSI.
  6. Cohen, I. (Sep 2003). "Noise spectrum estimation in adverse environments: improved minima controlled recursive averaging". IEEE Transactions on Speech and Audio Processing 11 (5): 466–475. doi:10.1109/TSA.2003.811544.
  7. "LibVAD - multi platform Voice Activity Detection library".
This article is issued from Wikipedia - version of Friday, January 1, 2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.