# THE AUDITORY MODELING TOOLBOX

Applies to version: 0.9.9


# DEMO_MAY2011 - Demo of the model estimating the azimuths of concurrent speakers

demo_may2011 generates figures showing the results of the model estimating the azimuth positions of three concurrent speakers. It also returns the estimated azimuths.

Set demo to one of the following flags to show other conditions:

- `1R`: one speaker in a reverberant room
- `2`: two speakers in free field
- `3`: three speakers in free field (default)
- `5`: five speakers in free field
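The flags above can be tried out along the following lines. This is a hedged sketch, not the verbatim demo code: it assumes AMT 0.9.9 is installed and initialized (e.g. via `amt_start`), and that the demo reads a workspace variable named `demo`, as suggested by "Set demo to the following flags" above; the exact calling convention may differ in your installation.

```matlab
% Sketch: run the May (2011) localization demo under a chosen condition.
% Assumption: the demo script inspects a variable called 'demo'.
demo = '2';      % two speakers in free field (see flag list above)
demo_may2011;    % generates the figures and the estimated azimuths

% With the default condition ('3', three speakers in free field),
% the demo reports the estimated azimuths in a variable such as azEst.
```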

Time-frequency-based azimuth estimates

This figure shows the azimuth estimates in the time-frequency domain for three speakers.

Interaural time differences (ITDs)

This figure shows the ITDs in the time-frequency domain estimated from the mixed signal of three concurrent speakers.

Interaural level differences (ILDs)

This figure shows the ILDs in the time-frequency domain estimated from the mixed signal of three concurrent speakers.

Interaural coherence

This figure shows the interaural coherence in the time-frequency domain estimated from the mixed signal of three concurrent speakers.

Frame-based azimuth estimates

This figure shows the azimuth directions in the time domain estimated from the mixed signal of three concurrent speakers.

GMM pattern

This figure shows the pattern and the histogram obtained from the GMM (Gaussian mixture model) estimator for the mixed signal of three concurrent speakers.

This code produces the following output:

```
azEst =

  -29.8977   29.9242    0.0926
```


## References:

T. May, S. van de Par, and A. Kohlrausch. A probabilistic model for robust localization based on a binaural auditory front-end. IEEE Trans. Audio Speech Lang. Process., 19(1):1-13, 2011.