Assumption Of Normal Hearing Acuity

Problem:

Variance in human response is acknowledged in nearly every instance, yet subjects are not preclassified according to subclinical hearing differences.

Solution:

A small but growing literature shows that individuals with clinically normal hearing can be classified into widely differing groups on the basis of subtle, subclinical differences that can be demonstrated with physiological measurements.
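
A minimal sketch of what such preclassification might look like in practice, assuming a physiological screening measure such as distortion-product otoacoustic emission (DPOAE) amplitude. The measure, the 6 dB cutoff, and the group names are illustrative assumptions, not values from the literature:

 # Hypothetical pre-screening step: bucket audiometrically "normal" subjects
 # by a physiological measure of cochlear function before an earcon study.
 # The cutoff below is a placeholder, not a published value.
 def classify_subject(dpoae_amplitude_db: float) -> str:
     """Group a subject by DPOAE amplitude (dB SPL)."""
     return "high-reserve" if dpoae_amplitude_db >= 6.0 else "low-reserve"

 subjects = {"s01": 9.2, "s02": 3.1, "s03": 7.5}
 groups = {sid: classify_subject(amp) for sid, amp in subjects.items()}
 print(groups)  # {'s01': 'high-reserve', 's02': 'low-reserve', 's03': 'high-reserve'}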

Examples:

Differentiation of earcons seems successful when there are only a few, but as their number or complexity grows beyond three, the ability to tell them apart may depend strongly on the extent of depletion of cochlear reserve.


I don't see why this would differ greatly from vision, where some scenes are 'emotive' (for example) to some people and not to others. I expect that 'expertise' in a domain will 'overpower' some of the possible confounds.

And that comment is exactly why the above needs to be translated from gobbledygook. 'Cause I can read that gobbledygook and I'm getting a radically different understanding of it than you are.


Can you translate from gobbledygook? And perhaps relate this to programming or computers somehow?

It seems to be more in the HumanComputerInteraction/usability domain. An "earcon" (by punny analogy to eye-con perhaps?) is an abstract synthetic snippet of sound used to convey information, such as a simple chime new-mail sound or a HotPlug? connect/disconnect sound. The more different event sounds you have in use at a time, the harder they get to distinguish; all the more so when the sound is parameterized, such as varying the length or pitch of notes in a scale to indicate stock movement. The gobbledygook just points out that various people with "normal" hearing acuity may have wildly varying bandwidth for these sorts of audio cues. -jh
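
To make the stock-movement example above concrete, here is a minimal sketch of a parameterized earcon in Python: a short sine tone whose pitch shifts with a stock's percent change. The base pitch, the one-semitone-per-percent mapping, and the file name are arbitrary illustrative choices, not a standard:

 # Sketch of a parameterized earcon: map percent change to the pitch of a
 # short sine tone and write it as a 16-bit mono WAV. Constants are arbitrary.
 import math, struct, wave

 def earcon(percent_change: float, path: str = "earcon.wav",
            rate: int = 44100, dur: float = 0.3) -> None:
     # Base pitch 440 Hz; shift roughly one semitone per percent of movement.
     freq = 440.0 * (2 ** (percent_change / 12.0))
     n = int(rate * dur)
     with wave.open(path, "wb") as w:
         w.setnchannels(1)
         w.setsampwidth(2)   # 16-bit samples
         w.setframerate(rate)
         frames = b"".join(
             struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate)))
             for i in range(n)
         )
         w.writeframes(frames)

 earcon(+3.0)   # rising stock -> higher-pitched chime
 earcon(-3.0, path="earcon_down.wav")  # falling stock -> lower pitch

The point of the page then becomes visible in code: two listeners with "normal" hearing may differ sharply in how many such pitch steps they can reliably distinguish.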


This page was created as part of the SonificationDesignPatterns.
