SPEECH PROCESSING AND BRAIN SIGNATURES OF SPEECH, PARTICULARLY IN DISTINGUISHING TRUE/FALSE OR YES/NO RESPONSES


 Speech processing can refer either to a device that receives and interprets speech and then performs a command in response, or to a machine that interprets brain-wave signals associated with imagined speech and then performs a command.
 The brain interprets speech quickly, understanding a statement through its semantics, grammar, and intonation (Buzo). The goal is to create a machine that can do the same and allow locked-in syndrome (LIS) patients to communicate effectively.
 There is little difference between the psychophysiological responses and brain signatures of an objectively true statement and those of a delusional (subjectively true) statement (Langleben).
 The brain shows increased activity in response to sounds whose pitch or loudness falls outside the range of everyday speech (Zatorre).
 By routing brain signatures through a brain-machine interface (BMI, or BCI) to a speech synthesizer, individuals in a locked-in state can form speech and potentially engage in verbal conversation (Guenther).
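To illustrate the point about sounds outside the range of everyday speech, here is a minimal sketch of flagging audio frames whose estimated pitch falls outside a typical speech fundamental range. The 16 kHz sample rate, the 80-400 Hz speech band, the lag search range, and the crude autocorrelation pitch estimator are all illustrative assumptions, not anything specified in the slides.

```python
import numpy as np

FS = 16_000                  # assumed sample rate (Hz)
SPEECH_F0 = (80.0, 400.0)    # assumed rough range of everyday speech fundamentals (Hz)

def estimate_pitch(frame, fs=FS):
    """Crude autocorrelation-based pitch estimate for one audio frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search lags corresponding to 50-1000 Hz fundamentals.
    lo, hi = int(fs / 1000), int(fs / 50)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

def outside_speech_range(frame, fs=FS, band=SPEECH_F0):
    """True if the frame's estimated pitch lies outside the speech band."""
    f0 = estimate_pitch(frame, fs)
    return f0 < band[0] or f0 > band[1]

# 50 ms test frames: a voice-like 150 Hz tone vs. a 900 Hz whistle-like tone.
t = np.arange(int(0.05 * FS)) / FS
voice_like = np.sin(2 * np.pi * 150 * t)
whistle = np.sin(2 * np.pi * 900 * t)
print(outside_speech_range(voice_like), outside_speech_range(whistle))
```

A real system would of course use a robust pitch tracker and also consider loudness; this only shows the thresholding idea.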

To create these communication devices, the first step is to build a binary communication device through the processing of yes/no thinking. This is done by first using semantic classical conditioning to attach cortically evoked responses to the meaning of a word or sentence (Ruf).
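The binary yes/no step above can be sketched as a two-class decoder over evoked responses. The sketch below is a toy, with synthetic data standing in for real cortical recordings: the deflection shape, trial counts, and noise level are all assumptions. It classifies each trial by projecting it onto the difference of the two class-mean templates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic evoked responses: 40 training trials per class, 64 samples each.
# "Yes" trials carry a small positive deflection, "no" trials a negative one.
n_trials, n_samples = 40, 64
deflection = np.hanning(n_samples)
yes_trials = 0.5 * deflection + rng.normal(0, 1.0, (n_trials, n_samples))
no_trials = -0.5 * deflection + rng.normal(0, 1.0, (n_trials, n_samples))

# Template = difference of class means; threshold = projection of their midpoint.
template = yes_trials.mean(axis=0) - no_trials.mean(axis=0)
threshold = 0.5 * (yes_trials.mean(axis=0) + no_trials.mean(axis=0)) @ template

def classify(trial):
    """Return 'yes' if the trial projects above the midpoint threshold."""
    return "yes" if trial @ template > threshold else "no"

# Evaluate on fresh trials drawn from the same distributions.
test_yes = 0.5 * deflection + rng.normal(0, 1.0, (20, n_samples))
test_no = -0.5 * deflection + rng.normal(0, 1.0, (20, n_samples))
correct = sum(classify(t) == "yes" for t in test_yes) + \
          sum(classify(t) == "no" for t in test_no)
print(f"held-out accuracy: {correct / 40:.2f}")
```

In practice the conditioned responses are far noisier and higher-dimensional, so real BCI decoders average over many trials and use regularized classifiers rather than a single template.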

CITATIONS
 Besserve et al., "Extraction of functional information from ongoing brain electrical activity", Tübingen, Germany.
 Buzo et al., "Word Error Rate Improvement and Complexity Reduction in Automatic Speech Recognition", 2011 Speech Technology and Human-Computer Dialogue.
 Guenther et al., "Brain-machine interfaces for real-time speech synthesis", 2011 Annual International Conference of the IEEE.
 Henig, R., "Looking for the Lie", New York Times.
 Langleben et al., "True lies: delusions and lie-detection technology", Neuroethics Publications, Center for Neuroscience & Society.
 Mozsary et al., "Comparison of feature extraction methods for speech recognition in noise-free and in traffic noise environment", Speech Technology and Human-Computer Dialogue.
 Ruf et al., "Semantic Classical Conditioning and Brain-Computer Interface Control: Encoding of Affirmative and Negative Thinking".
 Schipor et al., "Towards a multimodal emotion recognition framework to be integrated in a Computer Based Speech Therapy System", 2011, Speech Technology and Human-Computer Dialogue.
 Sundaram et al., "Experiments in context-independent recognition of non-lexical 'yes' or 'no' responses", 2011, Acoustics, Speech and Signal Processing.
 Zatorre et al., "Lateralization of phonetic and pitch discrimination in speech processing".