
Hierarchical Neural Network for Text Based Learning
Janusz A. Starzyk, Basawaraj
Ohio University, Athens, OH

Introduction
- The traditional approach is to describe the semantic network structure and/or the transition probabilities of an associated Markov model; biological networks, in contrast, learn their structure.
- Different neural network structures share a common goal: a simple and efficient solution to the given problem.
- Sparsity is essential; network size and training time matter for large data sets.
- A hierarchical structure of identical processing units was proposed [1].
- Layered organization and a sparse structure are biologically inspired.
- Neurons on different layers interact through trained links.

References
1. Mountcastle, V. B., et al., "Response Properties of Neurons of Cat's Somatic Sensory Cortex to Peripheral Stimuli," J. Neurophysiology, vol. 20, 1957, pp. 374-407.
2. Rogers, T. T., McClelland, J. L., Semantic Cognition: A Parallel Distributed Processing Approach, MIT Press, 2004.
3. Grossberg, S., "How does the cerebral cortex work? Learning, attention and grouping by the laminar circuits of visual cortex," Spatial Vision, vol. 12, 1999, pp. 163-186.
4. Starzyk, J. A., Liu, Y., "Hierarchical spatio-temporal memory for machine learning based on laminar minicolumn structure," 11th ICCNS, Boston, 2007.

Hierarchical Network
- A hierarchical neural network structure for text learning is obtained through self-organization.
- A similar representation for a text-based semantic network was used in [2].
- An input layer takes in characters, then learns and activates words stored in memory.
- Direct activation of words carries a large computational cost for large dictionaries; extending the scheme to phrases, sentences, or paragraphs would render such a network impractical, and the required memory would also be very large.
- This leads to a sparse hierarchical structure in which higher layers represent more complex concepts.
- Basic nodes in this network are capable of differentiating input sequences; sequence learning is a prerequisite to building spatio-temporal memories.
- This is performed using laminar minicolumn LTM cells [3] (Fig. 1).
- In such networks, the interconnection scheme is obtained naturally through sequence learning and structural self-organization; no prior assumption about locality of connections or structure sparsity is made.
- The machine learns only inputs useful to its objectives, a process regulated by reinforcement signals and self-organization.

Network Simplification
- The proposed approach uses intermediate neurons to lower the computational cost: intermediate neurons decrease the number of activations associated with higher-level neurons.
- This concept can be extended to associations of words.
- A small number of rules for concurrent processing are used, and the network can arrive at a local optimum of structure and performance.
- The network topology is self-organizing, through addition and removal of neurons and redirection of neuron connections.
- Neurons are described by their sets of input and output neurons.
- Local optimization criteria are checked by searching the set SL_A before the structure is updated when creating or merging neurons, where IL_X is the input list of neuron X and OL_A is the output list of neuron A.

Rules for Self-Organization
- A few simple rules are used for self-organization; a minimal code sketch of these rules follows the figure captions below.
- Fig. 2: If ..., create a new node C (diagram: neurons A and B with inputs X1-X4 and a new intermediate node C; the condition is given graphically in the original figure).
- Fig. 3: If ..., create a new node C.
- Fig. 4: Neuron "A" with a single output is merged with neuron "B", and "A" is removed.
- Fig. 5: Neuron "B" with a single input is merged with neuron "A", and "B" is removed.
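The poster itself contains no code, so the following is only a minimal Python sketch of how rules like those in Figs. 2-5 might operate on neurons described by input and output lists. All names (Neuron, connect, merge_single_output, merge_single_input, create_intermediate) are hypothetical, and the creation condition of Figs. 2-3, which the poster gives only graphically, is approximated here as "two higher-level neurons share at least min_shared inputs".

```python
# Minimal sketch (not from the poster): neurons described by their input and
# output lists, with merge rules in the spirit of Figs. 4-5 and an
# intermediate-node rule in the spirit of Figs. 2-3.  All names are
# hypothetical; the creation condition is an assumption.

class Neuron:
    _next_id = 0

    def __init__(self, name=None):
        self.name = name or f"n{Neuron._next_id}"
        Neuron._next_id += 1
        self.inputs = set()    # IL: neurons feeding this neuron
        self.outputs = set()   # OL: neurons this neuron feeds

def connect(src, dst):
    src.outputs.add(dst)
    dst.inputs.add(src)

def disconnect(src, dst):
    src.outputs.discard(dst)
    dst.inputs.discard(src)

def merge_single_output(a):
    """Fig. 4: neuron A with a single output B is merged into B and removed."""
    if len(a.outputs) != 1:
        return False
    (b,) = a.outputs
    for x in list(a.inputs):
        disconnect(x, a)
        connect(x, b)
    disconnect(a, b)
    return True

def merge_single_input(b):
    """Fig. 5: neuron B with a single input A is merged into A and removed."""
    if len(b.inputs) != 1:
        return False
    (a,) = b.inputs
    for y in list(b.outputs):
        disconnect(b, y)
        connect(a, y)
    disconnect(a, b)
    return True

def create_intermediate(a, b, min_shared=2):
    """Figs. 2-3 (condition assumed): if A and B share enough inputs, route the
    shared inputs through a new node C so that A and B each receive a single
    activation from C instead of one activation per shared input."""
    shared = a.inputs & b.inputs
    if len(shared) < min_shared:
        return None
    c = Neuron()
    for x in shared:
        disconnect(x, a)
        disconnect(x, b)
        connect(x, c)
    connect(c, a)
    connect(c, b)
    return c

if __name__ == "__main__":
    # Inputs X1..X4 feeding two higher-level neurons A and B, as in Fig. 2.
    xs = [Neuron(f"X{i}") for i in range(1, 5)]
    a, b = Neuron("A"), Neuron("B")
    for x in xs[:3]:
        connect(x, a)          # A listens to X1, X2, X3
    for x in xs:
        connect(x, b)          # B listens to X1, X2, X3, X4
    c = create_intermediate(a, b)
    # The three shared inputs (6 links) are replaced by 3 links into C plus
    # 2 links out of C (5 links), and A, B each get one activation from C.
    print(c.name, sorted(x.name for x in c.inputs), sorted(y.name for y in c.outputs))
```

In this representation, checking the local optimization criterion before such an update would amount to inspecting the input and output sets of the affected neurons, in the spirit of the SL_A search described above.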
Implementation
Batch mode:
- All words used for training are available at initiation.
- Network simplification and optimization is done by processing all the words in the training set.
- The total number of neurons is 23% higher than the reference (6000).
Dynamic mode:
- Words used for training are added incrementally, one word at a time.
- Simplification and optimization is done by processing one word at a time.
- The total number of neurons is 68% higher than the reference (6000).
(A toy sketch contrasting the two processing loops appears after the conclusion.)

Results and Conclusion
- Tests were run with dictionaries of up to 6000 words.
- The percentage reduction in the number of interconnections increases (by up to 65-70%) as the number of words increases.
- The time required to process network activation for all the words decreases as the number of words increases (a reduction by a factor of 55 in batch mode and 35 in dynamic mode at 6000 words).
- The dynamic implementation takes longer than the batch implementation, mainly because of the additional bookkeeping overhead, and its savings in connections and activations are smaller.
- A combination of both methods is advisable for continuous learning and self-organization.
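For illustration only, here is a toy sketch contrasting the batch and dynamic training loops described in the Implementation section. add_word and simplify are hypothetical stand-ins for the real word-learning and self-organization steps; the sketch only shows where the simplification pass sits in each loop (once over the whole training set in batch mode, after every word in dynamic mode, which is the source of the extra bookkeeping overhead noted above).

```python
# Toy contrast (not from the poster) of the two training modes.  The network
# here is a bare dict; add_word() and simplify() are hypothetical stand-ins.

def add_word(net, word):
    net["words"].add(word)

def simplify(net, new_words):
    # Stand-in for structural self-organization over the given words.
    net["passes"] += 1
    net["words_processed"] += len(new_words)

def train_batch(words):
    net = {"words": set(), "passes": 0, "words_processed": 0}
    for w in words:            # all training words available at initiation
        add_word(net, w)
    simplify(net, words)       # one simplification pass over the whole set
    return net

def train_dynamic(words):
    net = {"words": set(), "passes": 0, "words_processed": 0}
    for w in words:            # words arrive one at a time
        add_word(net, w)
        simplify(net, [w])     # simplify after every word: extra bookkeeping
    return net

if __name__ == "__main__":
    vocab = ["cat", "car", "care", "cart"]
    print(train_batch(vocab)["passes"], train_dynamic(vocab)["passes"])  # 1 4
```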

