Wave field synthesis

From Wikipedia, the free encyclopedia

The WFS principle, illustrated as a 60-second animation

Wave field synthesis (WFS) is a spatial audio rendering technique that aims to create a virtual copy of an acoustic environment. It produces "artificial" wave fronts synthesized by a large number of individually driven loudspeakers. Such wave fronts seem to originate from a virtual starting point, the virtual source. In contrast to traditional spatialization techniques such as stereo, the localization of virtual sources in WFS does not depend on or change with the listener's position.



Physical fundamentals

WFS is based on the Huygens principle, which states that any wave front can be regarded as a superposition of elementary spherical waves; therefore any wave front can be synthesized from such elementary waves. In practice, a computer controls a large array of individual loudspeakers and activates each one at exactly the moment when the desired virtual wave front would pass through it.
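In the simplest case the per-loudspeaker driving signals therefore reduce to delays and amplitude weights derived from the geometry. The following Python sketch illustrates this for a linear array and a virtual point source behind it; the array layout, the 1/sqrt(r) amplitude weighting, and all names are illustrative assumptions, not a specific WFS driving function.

    import numpy as np

    C = 343.0  # speed of sound in air, m/s (assumed)

    def driving_delays(speaker_positions, virtual_source):
        """Delays and gains so that each speaker fires exactly when the desired
        spherical wave front from the virtual source would pass through it."""
        speakers = np.asarray(speaker_positions, dtype=float)
        source = np.asarray(virtual_source, dtype=float)
        r = np.linalg.norm(speakers - source, axis=1)   # source-to-speaker distances
        delays = (r - r.min()) / C                      # relative firing delays in seconds
        gains = 1.0 / np.sqrt(r)                        # simple distance-based amplitude weight
        return delays, gains

    # Example: 32 speakers spaced 15 cm apart along the x-axis,
    # virtual source about 2 m behind the array
    speakers = [(0.15 * i, 0.0) for i in range(32)]
    delays, gains = driving_delays(speakers, virtual_source=(2.4, -2.0))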

The basic procedure was developed in 1988 by Professor Berkhout at the Delft University of Technology.[1] Its mathematical basis is the Kirchhoff–Helmholtz integral, which states that the sound pressure within a source-free volume is completely determined if the sound pressure and particle velocity are known at all points on its surface.

P(\omega, z) = \iint_{\partial V} \left[ G(\omega, z \mid z')\,\frac{\partial}{\partial n} P(\omega, z') - P(\omega, z')\,\frac{\partial}{\partial n} G(\omega, z \mid z') \right] \mathrm{d}z'

Therefore any sound field can be reconstructed if the sound pressure and the acoustic particle velocity are reproduced at all points on the surface of the volume. This fully spatial approach is the principle of holophony.

According to that approach, the entire surface of the volume would have to be covered with closely spaced monopoles and dipoles, each individually driven by its own signal. Moreover, the listening area would have to be anechoic in order to comply with the source-free volume assumption. In this form the problem is therefore not feasible.

According to the Rayleigh II integral, the sound pressure at every point of a half-space is determined if the sound pressure is known at every point of a plane. Because our acoustic perception is most accurate in the horizontal plane, the procedure is to this day generally reduced to a horizontal loudspeaker row around the listener.

The starting point of the synthesized wave front can be any point within the horizontal plane of the loudspeakers. It represents the virtual acoustic source, which hardly differs in its behaviour from a real acoustic source at that position. Unlike the phantom sources of conventional reproduction, it no longer shifts when the listener moves within the rendition room. Convex as well as concave wave fronts can be produced; the virtual acoustic source can even be located inside the loudspeaker arrangement.

Procedure advantages

Using the level and time information stored in the impulse response of the recording room, or with the help of a model-based mirror-source approach, wave field synthesis can establish a sound field with very stable positions of the acoustic sources. In principle, the holophonic approach could even establish a virtual copy of the original sound field that is indistinguishable from the real sound: changes of the listener's position in the rendition area would produce exactly the same acoustic impression as a corresponding change of location in the recording room. Even with loudspeaker lines alone, the reproduced sound field can be "walked through".

The Moving Picture Experts Group standardized the object-oriented transmission standard MPEG-4, which allows separate transmission of content (the dry recorded audio signal) and form (the impulse response or the acoustic model). Each virtual acoustic source needs its own (mono) audio channel. The spatial sound field in the recording room consists of the direct wave of the acoustic source and a spatially distributed pattern of mirror (image) sources caused by reflections from the recording-room surfaces. Reducing that spatial mirror-source distribution to a few transmission channels inevitably causes a significant loss of spatial information; this spatial distribution can be synthesized much more accurately on the rendition side.
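As an illustration of the model-based ("form") part, the first-order image sources of a point source in a rectangular room can be computed directly from the room geometry and then rendered as additional virtual sources. The sketch below is only an assumed illustration of the mirror-source idea; the function name, the rectangular room, and the single reflection coefficient are hypothetical, and the MPEG-4 representation itself is not shown.

    # Mirror (image) source sketch for a rectangular room with walls at
    # x = 0, x = Lx and y = 0, y = Ly. Each returned image source could be
    # rendered as its own virtual source on the reproduction side.
    # First-order reflections only; names and values are illustrative.

    def first_order_image_sources(source, room, reflection_coeff=0.8):
        sx, sy = source
        lx, ly = room
        return [
            ((-sx, sy), reflection_coeff),            # reflection at wall x = 0
            ((2 * lx - sx, sy), reflection_coeff),    # reflection at wall x = Lx
            ((sx, -sy), reflection_coeff),            # reflection at wall y = 0
            ((sx, 2 * ly - sy), reflection_coeff),    # reflection at wall y = Ly
        ]

    # Direct source at (2.0, 1.5) m in a 5 m x 4 m room
    mirrors = first_order_image_sources((2.0, 1.5), (5.0, 4.0))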

Compared with conventional, channel-oriented rendition procedures, WFS rendition provides a clear advantage: virtual acoustic sources called "virtual panning spots", which carry the signal content of the associated channels, can be positioned far beyond the physical rendition area. That reduces the influence of the listener's position, because the relative changes in angles and levels are much smaller than with physical loudspeaker boxes placed close by. This extends the sweet spot considerably; it can now cover nearly the entire rendition area. Wave field synthesis is thus not only compatible with conventional transmission methods, it also clearly improves their reproduction.

Remaining problems

The most perceptible difference from the original sound field is, to this day, the reduction to the horizontal plane of the loudspeaker lines. It is particularly noticeable because, owing to the acoustic damping required in the rendition area, hardly any mirror sources occur outside this plane. Without this acoustic treatment, however, the source-free volume condition of the mathematical approach would be violated.

Since WFS attempts to simulate the acoustics of the recording room, the acoustics of the rendition area must be suppressed. One possibility is to make the walls correspondingly absorptive. The second possibility is reproduction in the near field; for that, the loudspeakers must be placed very close to the listening zone, or the diaphragm surface must be very large.

The "truncation effect" is also disturbing. Because the resulting wave front is composed of elementary waves, a sudden pressure change occurs where no further speakers deliver their contribution at the end of the speaker row. The resulting "shadow wave" can be weakened if the levels of the outer loudspeakers are reduced. For virtual acoustic sources in front of the loudspeaker arrangement, however, this pressure change precedes the actual wave front, which makes it clearly audible.
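One common way to reduce the level of the outer loudspeakers is a tapering window over the array. The sketch below applies a raised-cosine fade to the outermost speakers; the taper width is an arbitrary illustrative choice, not a standard value.

    import numpy as np

    # Raised-cosine taper that fades out the outermost loudspeakers of a
    # finite array to weaken the truncation ("shadow wave") artefact.

    def truncation_taper(n_speakers, taper_fraction=0.2):
        gains = np.ones(n_speakers)
        n_taper = max(1, int(n_speakers * taper_fraction))
        ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_taper) / n_taper))
        gains[:n_taper] = ramp            # fade-in at the left end of the array
        gains[-n_taper:] = ramp[::-1]     # fade-out at the right end
        return gains

    weights = truncation_taper(32)  # multiplied into the per-speaker driving signals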

A further problem, so far, is the high cost and effort. A large number of individual transducers must be placed very close together; otherwise spatial aliasing artefacts become audible. They arise because an unlimited number of elementary waves cannot be produced, as the mathematical approach would require.

The discretisation causes position-dependent narrow-band dips in the frequency response within the rendition range. Their frequency depends on the angle of the virtual acoustic source and on the angle of the listener relative to the reproducing loudspeaker arrangement:

f_{\mathrm{alias}} = \frac{c}{\Delta x \left| \sin\Theta^{\mathrm{sec}} - \sin\Theta^{\mathrm{v}} \right|}

For aliasing-free rendition over the entire audio range, this would require a spacing of the individual emitters of less than 2 cm. Fortunately, our hearing is not particularly sensitive to this effect, so that with an emitter spacing of 10–15 cm it is hardly disturbing. On the other hand, the size of the emitter field limits the representation range; outside its borders no virtual acoustic sources can be produced. For this reason the restriction of the procedure to the horizontal plane seems justified today.
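As a numerical illustration of the aliasing formula above (a sketch only, with arbitrarily chosen angle values):

    import numpy as np

    # Evaluate the spatial aliasing frequency for a given loudspeaker spacing dx
    # and the two angles from the formula above (listener direction theta_sec,
    # virtual-source direction theta_v). c is the speed of sound in m/s.

    def f_alias(dx, theta_sec_deg, theta_v_deg, c=343.0):
        denom = dx * abs(np.sin(np.radians(theta_sec_deg)) - np.sin(np.radians(theta_v_deg)))
        return float('inf') if denom == 0.0 else c / denom

    print(f_alias(dx=0.15, theta_sec_deg=30.0, theta_v_deg=0.0))  # about 4.6 kHz
    print(f_alias(dx=0.02, theta_sec_deg=30.0, theta_v_deg=0.0))  # about 34 kHz, above the audio band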

Research and market maturity

The more recent development of WFS began in 1988 at the Delft University of Technology. In the context of the CARROUSO project, funded by the European Union (January 2001 to June 2003), ten institutes across Europe were involved in this research. The WFS sound system IOSONO was developed by the Fraunhofer Institute for Digital Media Technology (IDMT) at the Technische Universität Ilmenau. Such loudspeaker rows have been installed with good success in some cinemas and theatres and in public spaces. For home-audio applications the procedure has not yet prospered; besides the high effort, large acceptance problems remain. If those problems are solved, the prospects for creating virtual acoustic environments become very interesting.

References

  • Berkhout, A. J.: "A Holographic Approach to Acoustic Control", J. Audio Eng. Soc., vol. 36, December 1988, pp. 977–995
  • Berkhout, A. J.; de Vries, D.; Vogel, P.: "Acoustic Control by Wave Field Synthesis", J. Acoust. Soc. Am., vol. 93, May 1993, pp. 2764–2778
