Hardware and system development:

Hardware and system development: Synchronisation and Data communication Verena V. Hafner, Claas-Norman Ritter, Guido Schillaci Project Meeting Erlangen, November 30, 2016

Deliverable D4.3 (M36) “Human-Robot Interaction”

T4.1 Learning internal models for interaction and prediction
Ego-noise prediction: Nao head movements
Figure: top: 4 MFCC features; bottom: head yaw (initial joint configuration, rotation applied to the joints from the initial positions)

T4.1 Learning internal models for interaction and prediction
Ego-noise experiments (head movements, Mel log-filterbank energies)
Figure: left: coherent; center: not coherent; right: shifted Mel log-filterbank energies

Main Achievements Y3
• Internal model framework for ego-noise prediction on the Nao robot (UBER, FAU)
• Robot egosphere for visual and auditory attention (UBER, INRIA)
• Linking natural robot behaviour to audio information (BGU, UBER)
• Synchronisation of audio, video and motor signals on the Nao (ALD, UBER)
• Integration inside the (12-mic) Nao robot

T1.2 Design of adaptive robomorphic microphone array
Preparatory work: UBER helped tackle the synchronisation of audio signals, motor signals and camera data streams, as well as the communication between NaoQi and MATLAB; both issues were addressed in two workshops (code camps) during the reporting period. UBER exploited the functionality provided by Modularity and implemented a set of Modularity filters for gathering audio, visual and motor data from the robot, for aligning asynchronous data into fixed buffers, and for training internal models and classifying ego-noise, as reported in Deliverable D4.2.
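Aligning asynchronous streams into fixed buffers might look like the following sketch. `align_to_reference` is a hypothetical helper (not from the Modularity code base) that resamples a fast stream (e.g. 10 ms motor data) onto the timestamps of a slower reference stream (e.g. 40 ms audio frames) by nearest-timestamp matching:

```python
from bisect import bisect_left

def align_to_reference(ref_times, times, values):
    """For each reference timestamp, pick the sample from (times, values)
    whose timestamp is closest; clamp at the stream boundaries.

    Illustrative sketch only; assumes both timestamp lists are sorted."""
    aligned = []
    for t in ref_times:
        i = bisect_left(times, t)
        if i == 0:
            aligned.append(values[0])
        elif i == len(times):
            aligned.append(values[-1])
        else:
            before, after = times[i - 1], times[i]
            # choose whichever neighbouring sample is closer in time
            aligned.append(values[i] if after - t < t - before else values[i - 1])
    return aligned
```

With motor samples every 10 ms and audio frames every 40 ms, each audio frame would be paired with the motor sample nearest its timestamp.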

T1.2 Design of adaptive robomorphic microphone array
12-mic signal integration:
• external sound card feeding the signal back into the robot
• synchronising with the Aldebaran Modularity framework

T4.1 Learning internal models for interaction and prediction
In parallel, we worked on the synchronisation of audio, video and motor commands on the Nao; after two workshops on synchronisation in Berlin and Erlangen, it is now ready to be integrated into our internal model framework.

Tech details
• Data available on the Nao: motor/sensor (≈ 10 ms), images (≈ max 33 ms), audio (≈ max 40 ms); data must be time-stamped
• Audio: only 4 channels on the v5 head
• No internal audio with the 12-microphone prototype head
• But a 16-channel Linux-compatible USB sound card has been working on the Nao since the beginning of April 2016

Tech details - efforts
• Recompiled the kernel with USB audio support; set up the package for all Naos
• lsl (lab streaming layer) as the protocol for sending synchronised data to outside processing
• NaoQi interfaces to the egosphere (can receive external data), e.g. a Python script reading MATLAB DOA outputs (BGU + IMPERIAL), NaoQi methods for communicating with the dialogue manager (ALD) and the face tracker (INRIA)
• Egosphere as the central hub for behavioural mechanisms
• Solved many open issues, such as an initial data-processing delay of up to several seconds plus an increasing delay
• The synchronised data will also be used for the audio tracker (IMP)
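One common way to keep a processing delay from growing without bound, as in the issue mentioned above, is to consume only the newest samples and silently drop stale ones. This is a minimal illustrative sketch, not the actual EARS implementation; `LatestBuffer` is a hypothetical name:

```python
from collections import deque

class LatestBuffer:
    """Bounded buffer that keeps only the newest samples.

    If the consumer falls behind, older samples are discarded by the
    deque's maxlen, so the end-to-end delay stays bounded instead of
    accumulating as it would with an unbounded queue."""

    def __init__(self, maxlen=4):
        self._buf = deque(maxlen=maxlen)

    def push(self, timestamp, sample):
        self._buf.append((timestamp, sample))

    def pop_latest(self):
        """Return the newest (timestamp, sample) and discard everything older."""
        if not self._buf:
            return None
        item = self._buf[-1]
        self._buf.clear()
        return item
```

A consumer that calls `pop_latest()` on each processing cycle always works on fresh data, trading completeness for bounded latency.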

Data Synchronisation Timing Diagram


Overview about Developed Algorithms

Internal Models Architecture
Addressed task: learning and predicting ego-noise
Concept: Self-Organizing Maps and Hebbian learning; data synchronisation
Main benefit over existing approaches: adaptivity, online learning, multimodal body representation
Demonstration: presented in D4.2; live demo at the review meeting in Berlin; video at IROS and RoboCup; live demo at the IROS tech tour in Berlin
H. Löllmann: Overview about Developed Algorithms
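The concept named above (Self-Organizing Maps plus Hebbian learning) can be illustrated with a toy sketch: a tiny 1-D SOM quantises each modality, and a Hebbian co-activation table links motor units to auditory units so that a motor state predicts the expected ego-noise unit. All names, sizes and parameters here are illustrative, not the project's actual implementation:

```python
import math
import random

def bmu(weights, x):
    """Index of the best-matching unit for input vector x."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train_som(data, n_units=8, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a minimal 1-D Self-Organizing Map on a list of vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)   # shrinking neighbourhood
        for x in data:
            b = bmu(weights, x)
            for i in range(n_units):
                # neighbourhood function pulls units near the winner toward x
                h = math.exp(-((i - b) ** 2) / (2.0 * sigma ** 2))
                weights[i] = [w + lr * h * (v - w) for w, v in zip(weights[i], x)]
    return weights

def hebbian_link(link, motor_unit, audio_unit, lr=0.1):
    """Strengthen the association between co-active motor and audio units."""
    link[motor_unit][audio_unit] += lr

def predict_audio_unit(link, motor_unit):
    """Predict the ego-noise (audio) unit most associated with a motor state."""
    return max(range(len(link[motor_unit])), key=lambda j: link[motor_unit][j])
```

Because both the SOM and the Hebbian table update sample by sample, the model can learn online and adapt as the robot's body or motors change, matching the "adaptivity / online learning" benefit listed above.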

T4.1 Learning internal models for interaction and prediction
• Body representation - adaptive model
• Prediction / forward model
• Sensory attenuation
• Sense of agency / self-other distinction
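Sensory attenuation and self-other distinction from a forward model can be sketched as a residual test: if the observed signal is close to what the forward model predicted from the motor command, it is attributed to the robot itself (ego-noise) and can be attenuated. This is a conceptual sketch with an illustrative threshold, not the deliverable's actual method:

```python
def self_other_distinction(predicted, observed, threshold=0.2):
    """Compare observed sensory data with the forward-model prediction.

    Returns (is_self_generated, residual): a small mean squared residual
    suggests the signal was caused by the robot's own movement, while a
    large residual points to an external source."""
    residual = [o - p for o, p in zip(observed, predicted)]
    mse = sum(r * r for r in residual) / len(residual)
    return mse < threshold, residual
```

The residual itself is what remains after attenuation: subtracting the predicted ego-noise leaves the externally caused part of the signal.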

Demo 1 showing internal models for ego-noise prediction


Egosphere and attention mechanisms
Addressed task: visual and auditory saliency detection; robot behaviours emerging from attentive mechanisms; analysis of participants' perception of robot behaviour
Concept: multimodal saliency maps; synchronisation; short-term memory mechanisms; habituation and inhibition; display of the robot's internal attentive state
Main benefit over existing approaches: intuitive human-robot interaction; integration of different algorithms + interfacing robot skills
Demonstration: final real-time prototype; live demo at the review meeting in Berlin
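A minimal sketch of the habituation-and-inhibition idea: the most salient stimulus wins attention, is then habituated (inhibition of return), and habituation slowly recovers so attention shifts between stimuli instead of locking onto the single strongest one. Function names and constants are illustrative, not from the egosphere code:

```python
def attend(saliency, habituation, habituate=0.5, recover=0.1):
    """Select the attention target from a saliency map with habituation.

    saliency: dict mapping stimulus name to raw saliency value
    habituation: dict of current habituation levels (mutated in place)"""
    # effective saliency = raw saliency minus accumulated habituation
    scores = {k: s - habituation.get(k, 0.0) for k, s in saliency.items()}
    target = max(scores, key=scores.get)
    # all stimuli slowly recover from habituation ...
    for k in habituation:
        habituation[k] = max(0.0, habituation[k] - recover)
    # ... while the attended one is inhibited so attention can move on
    habituation[target] = habituation.get(target, 0.0) + habituate
    return target
```

Calling this repeatedly on a constant saliency map makes attention alternate between stimuli, which is the behaviour a human interaction partner perceives as natural gaze shifting.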

Demo showing visual and auditory attention using the robot egosphere, and its integration into the EARS demo with algorithms from WP2 and WP3.


Synchronisation and Data communication
Addressed task: synchronisation and data communication
Concept: buffering and synchronisation of multimodal data
Main benefit over existing approaches: a prerequisite for many other algorithms
Demonstration: real-time prototype

Data Synchronisation Timing Diagram


More algorithms
• simulation of robomorphic microphone positions
• goal babbling for efficient sensorimotor learning
• sensory prediction attenuation for self-other distinction
• development of a sense of object permanence