Why do we hear what we hear? James D. Johnston Chief Scientist, DTS, Inc.


First, some notes

The talk I'm about to give presents ideas gathered from a variety of papers and experiments, done by many people, over a long period of time.
– It is not inviolate.
– It is a discussion of phenomena.
– The mechanism is, in most cases, unknown once one gets beyond the basilar membrane.
– There will be revisions as time goes on. There will ALWAYS be revisions.

The auditory system
– Periphery
– CNS

What am I calling "peripheral"?

– HRTFs, including ear canal and middle ear functions
– Cochlear analysis
– Reduction of sound into partial loudness as a function of time

Partial Loudness?

First, two terms:
– Intensity: Sound Pressure Level. MEASURED.
– Loudness: Sensation Level. Perceived.

The inner ear reduces the sounds that reach your eardrum to partial loudness. That information, a time/frequency analysis yielding loudness vs. frequency vs. time, is what goes down the auditory nerve.
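To make the measured/perceived distinction concrete, here is a minimal sketch (mine, not from the talk) contrasting sound pressure level, a measured quantity, with a rough loudness estimate in sones using Stevens' rule of thumb for a 1 kHz tone well above threshold:

```python
import math

P_REF = 20e-6  # reference pressure, 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa: a MEASURED quantity."""
    return 20.0 * math.log10(pressure_pa / P_REF)

def sones_at_1khz(spl):
    """Rough PERCEIVED loudness via Stevens' rule of thumb:
    loudness doubles for every 10 dB above 40 dB SPL.
    Only a ballpark, and only for a 1 kHz tone well above threshold."""
    return 2.0 ** ((spl - 40.0) / 10.0)

print(spl_db(20e-6))       # 0 dB SPL: the threshold-of-hearing reference
print(sones_at_1khz(60))   # a 60 dB SPL 1 kHz tone is about 4 sones
```

Note how a 20 dB increase in the measured level only quadruples the loudness estimate: the measured and perceived scales are very different animals.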

And Part of the CNS?

Everything else:
– Reduction from partial loudness to auditory features
– Reduction of auditory features to auditory objects
– Storage in short-term and long-term memory

Anything more about the CNS?

It's extremely flexible:
– It can consciously change what it does (leaving aside for now the definition of consciousness).
– Its "output" is what finally matters to us.
– It evolved to do a distinctly excellent job of associating information from all senses and knowledge into the final result.
  – All the time
  – Everywhere

What actually gets to the CNS?

Whatever is detected by the auditory periphery.
– We will leave out extremely intense LF and VHF signals, which can be detected by other means; these are extreme conditions and should not generally be experienced by a listener.

How does the auditory periphery deal with the sound waves in the atmosphere?

What does the periphery do?

– First, the periphery adds directional information via HRTF and ITD.
– Then the cochlea does a time/frequency analysis.
– The time/frequency analysis is converted into loudness via compression in each "band", introducing:
  – Differences between loudness and intensity
  – The Haas (precedence) effect
– The partial loudness across frequency is encoded into a kind of biological PPM and transmitted across the auditory nerve. (No, it's really not that simple, but it will do for now.)
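As an illustration of the ITD cue mentioned above, this sketch estimates the interaural time difference using Woodworth's classical spherical-head approximation. The head radius is a typical textbook value I'm assuming, not a figure from the talk:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed typical adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def itd_woodworth(azimuth_deg):
    """Woodworth's spherical-head approximation of interaural time
    difference: ITD = (r/c) * (sin(theta) + theta), azimuth in radians.
    Ignores frequency dependence and real head shape; a sketch only."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)

# A source directly to one side (90 degrees) gives the maximum ITD,
# on the order of 650-700 microseconds for this head size:
print(itd_woodworth(90) * 1e6)
```

Delays this small are still well within what the binaural system resolves, which is part of why first arrivals carry so much directional weight.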

A Key Point or Two

– The auditory periphery analyzes all signals in a time/frequency tiling called "ERBs" or "Barks".
– Due to the mechanics of the cochlea, first arrivals have very strong, seemingly disproportionate influence on what you actually hear.
  – But this is actually useful in the real world.
– Signals inside an ERB mutually compress.
– Signals outside an ERB do not mutually compress.
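Both tilings named here have standard published approximations. A sketch using Glasberg and Moore's ERB bandwidth formula and Zwicker's Bark-scale formula (my choice of formulas; the talk does not specify which fits he prefers):

```python
import math

def erb_hz(f_hz):
    """Equivalent rectangular bandwidth at centre frequency f,
    per Glasberg & Moore (1990): ERB = 24.7 * (4.37 * f/1000 + 1) Hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def bark(f_hz):
    """Zwicker's critical-band (Bark) scale approximation:
    z = 13 * atan(0.00076 f) + 3.5 * atan((f / 7500)^2)."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

# The analysis bands get wider with frequency: about 133 Hz wide
# at 1 kHz, but several hundred Hz wide at 4 kHz.
print(erb_hz(1000))
print(erb_hz(4000))
```

The key consequence for the compression point above: two tones 50 Hz apart fall in one band at 1 kHz but the same analysis at low frequencies can resolve them separately.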

Then what?

The short-term loudness, called partial loudness, is, roughly speaking, integrated across a short amount of time (200 milliseconds or less).
– Level-roving experiments show that when delays of over 200 milliseconds exist between two sources, the ability to discern fine differences in loudness or timbre is reduced.
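One common way to model an integration window like this is a first-order leaky integrator. The sketch below is my illustration with an assumed 200 ms time constant; it is not the actual loudness model used in the literature, just the shape of the idea:

```python
def smooth_loudness(samples, dt, tau=0.2):
    """First-order leaky integrator as a crude stand-in for a ~200 ms
    loudness-integration window. `samples` are instantaneous
    partial-loudness values spaced `dt` seconds apart; `tau` is the
    assumed time constant (both values are illustrative)."""
    alpha = dt / (tau + dt)
    out, state = [], 0.0
    for x in samples:
        state += alpha * (x - state)   # exponential approach to input
        out.append(state)
    return out

# A sudden step in partial loudness is not perceived at full level
# instantly; the smoothed value climbs over a couple hundred ms.
step = [0.0] * 5 + [1.0] * 95
smoothed = smooth_loudness(step, dt=0.01)
```

This is also one intuition for the level-roving result: once two presentations are separated by more than the integration window, the fine-grained comparison has to go through (lossier) memory instead.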

What happens after this Loudness Memory?

Deep inside the CNS, in a fashion that I would not even care to speculate on, it seems clear that these partial loudness sensations are analyzed into both monaural and binaural auditory features:
– There is a great deal of data "loss" at this juncture.
– This memory can last "seconds or so".
– The analysis from partial loudness to features can be very strongly guided by learning, experience, and cognition.

And then?

These features are turned into what I refer to as "auditory objects":
– These can be committed to long-term memory.
– There is another substantial reduction in data rate.
– This process can be entirely steered by attention, cognition, other stimuli, etc.

A schematic of sorts:

(Mbit/sec) -> Loudness "integration" -> (Mbit/sec) -> Feature Analysis -> (kbit/sec) -> Auditory Object Analysis -> (bit/sec)

Cognitive and other feedback steers every stage of this chain.

Something to notice

Look at the amount of information lost at each step. You can guide the loss of information. Consider the implications.
– You control what gets lost and what stays.
– This is true both consciously and unconsciously.

You WILL integrate the input from all of your senses.
– It's how people work.
– Even when they try not to.

What does this imply?

1. If you listen to something differently (for different features or objects):
   a) You will REMEMBER different things.
   b) This is not an illusion.
2. If you have reason to assume things may be different:
   a) You will most likely listen differently.
   b) Therefore, you will remember different things.

So what?

What this all means, in effect, is that any test of auditory stimuli that wants to distinguish only in terms of the auditory stimuli must:
1. Have a falsifiable nature (i.e. be able to distinguish between perception and an actual effect).
2. Isolate the subject from changes in any stimuli other than the audio.
3. Be time-proximate.
4. Have controls.
5. Have trained, comfortable listeners.

Controls? What? NOW what are you on about?

A control is a test condition that tests the test. There can be many kinds of controls:
– A positive control: a condition that a subject should be able to detect. If they don't, you have a problem.
– A negative control: A vs. A is the classical negative control. If your subject hears a difference, you have a problem.
– Anchoring elements: conditions that relate scoring of this test to results in other tests. These can vary depending on need, and may not be obligatory.
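One standard way to judge whether a listener is doing better than the A-vs-A negative control is a one-sided binomial test on a run of ABX-style trials. This is ordinary statistics, not a procedure prescribed in the talk; the trial counts below are made up for illustration:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers out of `trials` forced-choice ABX trials purely
    by guessing (chance rate p = 0.5). A small value suggests the
    listener really is hearing a difference."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 happens by pure guessing about 3.8% of the time:
print(abx_p_value(12, 16))   # ~0.038
# 9 out of 16 is entirely unremarkable:
print(abx_p_value(9, 16))
```

The same arithmetic cuts the other way on the negative control: a subject who reliably "hears" a difference between A and A is telling you the test, not the audio, is producing the result.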

Do I have to have controls?

YES. Well, unless you don't want to know how good your test is, of course.

How does all this apply to High Fidelity and such?

When somebody guides your listening, you will change what you listen to. If you know something is changed in the system, you will expect changes in the output, and probably refocus. This is normal human behavior.
– It is something everyone does.
– It goes along with cognition, and is very nearly a property of cognition.

A word on cables

When you are doing something like "auditioning cables", if you do that kind of thing, remember:
– First, remove and replace the existing cables. RCA connectors need to be moved around once in a while to "wipe" the corrosion.
– Have a third party swap cables without your knowledge. See if you can tell which is which.
– If you can, then it's up to your preference. But that means either one of the cables is broken, or one of the cables does deliberate frequency shaping or other modification.
– Remember to remove and replace the connecting cables. DO THAT FIRST.

Is that all?

Not even close, but we're talking about basics today. Before the break, I will attempt a demo that shows the effects of expectation. After the break, several of us will discuss some of the more "interesting" audio products available.