
Slide 1: Self-Organized Recurrent Neural Learning for Language Processing
www.reservoir-computing.org
April 1, 2009 - March 31, 2012. Status as of June 2009.

Slide 2: The task

[Figure: writing/speech source → feature stream → AI machine. Image credits: www.georgholzer.at, introspectreangel.wordpress.com, coli.uni-saarland.de/~steiner/, compuskills.com.cy]

- Speech recognition and handwriting recognition are essentially the same problem.
- Humans can do it, but only after years of learning: thus, it is a very difficult problem.
- No human-level AI solution is in sight.

Slide 3: Mission

Establish neurodynamical architectures as a viable alternative to statistical methods for speech and handwriting recognition.

State of the art:
- Speech recognition is treated as a statistical data-analysis problem.
- This leads to data-driven, feedforward, "serial" learning and representation techniques (HMMs).
- Performance appears to asymptote well below human performance.
(Figure from Rabiner 1990, the classical speech recognition tutorial)

ORGANIC alternative:
- Speech recognition is treated as an achievement of human brains.
- This leads to neural computation and cognitive neuroscience modelling with recurrent dynamics (cyclic top-down and bottom-up paths).
- Potential to come closer to human performance.
(Figure from Dominey et al. 1995)

Slide 4: Basic paradigm: reservoir computing (RC)

- Also known as Echo State Networks and Liquid State Machines.
- Discovered in 2000; now an established paradigm in computational neuroscience and machine learning.
- RC makes training of recurrent neural networks practically feasible for the first time: a major enabling technology.
- RC is biologically plausible.
- The consortium comprises pioneers and leading investigators of the RC field.

Principle of RC (illustrated by the sketch below):
- Use a large, fixed, random recurrent network as an excitable medium.
- Excite it with the input signal.
- Read out the desired output through trainable output weights (shown in red in the slide figure).
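To make the three-step principle concrete, here is a minimal echo state network sketch in Python/NumPy. It is an illustration under assumptions, not the project's core Engine: the delay task, the reservoir size, the spectral radius of 0.9, and all variable names are chosen only for this example.

```python
# Minimal echo state network (ESN), illustrating the RC principle:
# a large fixed random reservoir, excited by an input signal, with
# only a linear readout being trained. Task and hyperparameters are
# illustrative assumptions, not taken from the project.
import numpy as np

rng = np.random.default_rng(0)

# 1. Large, fixed, random recurrent network ("excitable medium").
n_in, n_res = 1, 300
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius to 0.9

def run_reservoir(u):
    """Excite the reservoir with the input sequence u, collect all states."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states[t] = x
    return states

# 2. Excite with an input signal. Toy task (assumed): reproduce the
#    input delayed by 10 time steps.
time = np.arange(3000) * 0.1
u = np.sin(time)
y = np.roll(u, 10)                             # target: u delayed by 10 steps
X = run_reservoir(u)

# 3. Train only the readout weights (the "red" weights on the slide),
#    here by ridge regression; the reservoir weights never change.
washout, beta = 100, 1e-6                      # discard initial transient
Xw, yw = X[washout:], y[washout:]
W_out = np.linalg.solve(Xw.T @ Xw + beta * np.eye(n_res), Xw.T @ yw)

pred = Xw @ W_out
print("NRMSE:", np.sqrt(np.mean((pred - yw) ** 2)) / np.std(yw))
```

The key design choice is that the recurrent weights stay fixed: because only W_out is learned, training reduces to a linear regression, which is what makes recurrent networks practically trainable in this paradigm.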

Slide 5: Scientific objectives

- Basic blueprints: design and proof-of-principle tests of fundamental architecture layouts for hierarchical neural systems that can learn multi-scale sequence tasks.
- Reservoir adaptation: investigate mechanisms for the unsupervised adaptation of reservoirs (a sketch of one candidate mechanism follows this list).
- Spiking vs. non-spiking neurons, role of noise: clarify the functional implications of spiking vs. non-spiking neurons and of noise.
- Single-shot model extension, lifelong learning capability: develop learning mechanisms that allow a learning system to be extended in "single-shot" learning episodes, enabling lifelong learning.
- Working memory and grammatical processing: extend the basic paradigm with a neural, index-addressable working memory.
- Interactive systems: extend the adaptive capabilities of human-robot cooperative interaction systems with on-line and lifelong learning.
- Integration of dynamical mechanisms: integrate biological mechanisms of learning, optimization, adaptation, and stabilization into coherent architectures.
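As a concrete illustration of the reservoir-adaptation objective, the sketch below implements intrinsic plasticity (IP), one well-known unsupervised rule from the RC literature (Triesch 2005; Schrauwen et al. 2008): each tanh neuron adapts a private gain and bias so that its output distribution approaches a Gaussian with a chosen mean and standard deviation. This is only one candidate mechanism under assumed settings, not a statement of what the project will deploy.

```python
# Intrinsic plasticity (IP) for tanh reservoir neurons: an unsupervised
# online rule adapting per-neuron gain a and bias b so that each output
# distribution approaches a Gaussian with mean mu and std sigma.
# One candidate mechanism only; all settings here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_res = 200
W = rng.normal(0.0, 1.0 / np.sqrt(n_res), (n_res, n_res))  # fixed recurrent weights
w_in = rng.uniform(-0.5, 0.5, n_res)                        # fixed input weights

a = np.ones(n_res)                 # per-neuron gain, adapted online
b = np.zeros(n_res)                # per-neuron bias, adapted online
eta, mu, sigma = 1e-3, 0.0, 0.2    # IP rate and target output statistics

x = np.zeros(n_res)
for t in range(20000):
    u = np.sin(0.1 * t)            # illustrative driving input
    net = w_in * u + W @ x         # net input to each neuron
    x = np.tanh(a * net + b)       # neuron outputs
    # Gradient rule from Schrauwen et al. (2008), driving each neuron's
    # output statistics towards the Gaussian target:
    db = -eta * (-mu / sigma**2
                 + (x / sigma**2) * (2 * sigma**2 + 1 - x**2 + mu * x))
    a += eta / a + db * net
    b += db

# After adaptation, neuron outputs should roughly match the target stats.
print("output mean: %.3f  std: %.3f  (targets: %.1f, %.1f)"
      % (x.mean(), x.std(), mu, sigma))
```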

Slide 6: Community service and dissemination objectives

- High-performing, well-formalized core Engine: collaborative development of a well-formalized, high-performing core Engine, which will be made publicly accessible.
- Compliance with FP6 unification initiatives: ensure that the Engine integrates with the standards set in the FACETS FP6 IP, and integrate it with other existing code.
- Benchmark repository: create a database of temporal, multi-scale benchmark data sets that can serve as an international touchstone for comparing algorithms.

Slide 7: Consortium

- Jacobs University Bremen, Machine Learning group (Herbert Jaeger): recurrent neural networks, nonlinear dynamics, pattern recognition.
- Technical University Graz, Computational Neuroscience group (Wolfgang Maass): spiking neurodynamics, generic neural microcircuits, reinforcement learning.
- INSERM Lyon, Human and Robot Interactive Cognitive Systems Team (Peter F. Dominey): cognitive neuroscience, human cortical sequence processing and speech recognition.
- Universiteit Gent, Reservoir Computing Lab (Benjamin Schrauwen): reservoir computing applications, algorithm design.
- Universiteit Gent, Speech Processing Group (Jean-Pierre Martens): speech recognition methods research and application development.
- Planet intelligent systems GmbH, Research and Development (Welf Wustlich): text and handwriting recognition solutions, address recognition.

Slide 8: Workpackages and collaboration scheme

