THE AUDITORY MODELING TOOLBOX

Applies to version: 1.5.0

TAKANEN2013_FORMBINAURALACTIVITYMAP - Steer the "what" cues on a topographic map using the "where" cues

Usage:

[activityMap, colorGains, colorMtrx, levels] = takanen2013_formbinauralactivitymap(thetaL,thetaR,eL,eR,fs,fc,printFigs);

Input parameters:

thetaL "where" cue of the left hemisphere
thetaR "where" cue of the right hemisphere
eL "what" cue of the left hemisphere
eR "what" cue of the right hemisphere
fs sampling rate
fc characteristic frequencies
printFigs boolean value describing whether the computations in contralateralcomparison are illustrated or not
printMap optional boolean value describing whether the resulting activity map is plotted (by default) or not.

Output parameters:

activityMap  Matrix describing in which of the six frequency ranges there is activation at a given location on the map at a specific time instant
colorGains   Matrix describing the signal-level-dependent gains for the different activation values in the activityMap
colorMtrx    RGB color codes employed for the different frequency ranges on the binaural activity map
levels       Vector specifying the left/right locations on the map
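
As an illustration of the interface, the sketch below calls the function with synthetic cue matrices. The matrix dimensions (time samples x frequency channels), the cue value ranges, and the use of erbspace to generate the characteristic frequencies are assumptions made for demonstration only; in practice the cues are produced by the preceding stages of the takanen2013 model.

  % Hypothetical example: dimensions and value ranges are assumptions,
  % not taken from this documentation.
  fs = 20000;                       % sampling rate (Hz)
  fc = erbspace(80, 16000, 36);     % assumed characteristic frequencies
  nSamples  = fs;                   % one second of model output
  nChannels = numel(fc);

  % Synthetic "where" and "what" cues (normally computed by the
  % preceding cue-extraction stages of the takanen2013 model)
  thetaL = rand(nSamples, nChannels);
  thetaR = rand(nSamples, nChannels);
  eL     = rand(nSamples, nChannels);
  eR     = rand(nSamples, nChannels);

  [activityMap, colorGains, colorMtrx, levels] = ...
      takanen2013_formbinauralactivitymap(thetaL, thetaR, eL, eR, fs, fc, 0);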

Description:

This function takes as input the "where" cue values of the left and right hemisphere and the corresponding "what" cues. The principle is to create an image of each "what" cue on the map at the location specified by the corresponding "where" cue, and to enhance the contrast between the hemispheres with a method denoted as contralateral comparison. At each position of the map, several frequency-selective neurons are assumed to exist. For illustrative purposes, the frequencies are divided into six ranges and a different color is used for each of them. In the resulting map, the color indicates the frequency range, the brightness of the color indicates the energy, and the location indicates the direction of the activity.
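
As a rough illustration of how the outputs could be turned into such a map, the sketch below (continuing from the example above) assembles an RGB image in which each active position is colored according to its frequency range and scaled by the level-dependent gain. The indexing convention assumed here (activityMap entries as row indices into colorMtrx, colorGains as per-entry brightness) is only a guess for demonstration; the reference rendering is produced by the model's own plotting, enabled via printMap.

  % Hypothetical rendering sketch -- the layout of activityMap and
  % colorGains is an assumption, not the documented format.
  [nFrames, nPos] = size(activityMap);
  img = zeros(nFrames, nPos, 3);
  for t = 1:nFrames
      for p = 1:nPos
          k = activityMap(t, p);        % assumed: frequency-range index (0 = no activation)
          if k > 0
              img(t, p, :) = colorGains(t, p) * colorMtrx(k, :);
          end
      end
  end
  img = min(max(img, 0), 1);            % clamp to the valid RGB range
  image(levels, (0:nFrames-1)/fs, img); % left/right position vs. time
  xlabel('Activation location (left \leftrightarrow right)');
  ylabel('Time (s)');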

References:

M. Takanen, O. Santala, and V. Pulkki. Visualization of functional count-comparison-based binaural auditory model output. Hearing Research, 309:147-163, 2014. PMID: 24513586.