

When do behavioural interventions work and why? Towards identifying principles for clinical intervention in developmental language disorders from a neurocomputational perspective
Anna Fedor*¹, Wendy Best², Jackie Masterson³, Michael S. C. Thomas¹

Introduction

Our questions
- What are the principles that underlie effective interventions for developmental disorders of language and cognition?
- Are the best interventions specific to problem domains, specific to deficit types, and/or dependent on when in development they take place?
- How are atypical internal representations reshaped by alternative training regimes?
- What are the neurocomputational mechanisms of development and intervention?
- Different ways of intervening have not yet been investigated with developmental models.

Interventions in practice
- The types of intervention employed are diverse, and multiple factors feed into clinical decision making.
- Studies in the domain of word retrieval conflict in their conclusions regarding the optimum intervention (Ebbels et al., 2012).
- Further evidence is needed to distil the active ingredients of interventions with children with language needs (Lindsay et al., 2011).

Model aims
- We require a simple modelling environment in which to begin investigating the principles of intervention from a neurocomputational perspective.
- The initial model is drawn from the field of language development (acquisition of inflectional morphology; Forrester & Plunkett, 1994).
- Aim: create typically and atypically developing models, then expose the atypical models to new training environments to attempt to rescue behaviour / normalise internal representations.

Results

Typical development (TD)
- The model learnt the categories in both tasks in fewer than 1,000 training epochs.
- Since the training set contained only 10% of the items, this demonstrates successful generalisation to the rest of the input space.
- More training was needed to learn the islands task.
- We identified four phases of development (Figure 2).

Atypical development
- Low connectivity of hidden units:
  - Marked deficit (see Figure 3): never reaches TD performance.
  - High individual variability, depending on the location of the missing connections.
- Shallow sigmoid:
  - Developmental delay: slower learning, but the model usually reaches TD performance.
  - Lower individual variability.

Intervention
- Improvement score: the improvement in performance due to intervention (a model with a deficit compared with the same model with intervention).
- Type of deficit:
  - Shallow sigmoid: intervention usually increased the speed of learning.
  - Low connectivity: heterogeneous response; intervention did not help in many cases, but in some cases it increased performance.
- Timing of intervention:
  - In phase 4 (the 'adult' state), most interventions had no effect.
  - Before the adult state, some interventions for some deficits were more effective at earlier phases, but this was not uniform.
- Type of intervention (Table 3):
  - Generally the best: random items (Intervention 1) and transect (Intervention 2).
  - Generally the worst: separate patches of the categories (Interventions 3 and 4).
  - Deficit-specific intervention: items from around the boundaries of the categories (Intervention 5) increased performance in the shallow sigmoid case in both problems.
  - Task-specific intervention: bigger corners helped learning the diagonal with both deficits.

Conclusions & Discussion
- Does timing of interventions matter?
  - The effect of the timing of intervention was not uniform across conditions before the adult phase.
  - Interventions were generally ineffective in the adult phase.
- Are there interventions that generally work?
  - Best interventions across deficits and tasks: random items and items from the transect; both provide a representative sample from all categories.
  - Worst interventions across deficits and tasks: separate patches.
- Are there interventions that are especially effective for improving a certain deficit?
  - Deficit-specific intervention: items from the boundaries of the categories for the shallow sigmoid deficit (this helps to sharpen the category boundaries).
- Are there interventions that are especially effective for improving performance in a certain task?
  - Task-specific intervention: bigger corners for the diagonal task; this provides more of the same kind of information.
- Modelling can elucidate the principles that guide clinical interventions by aiding our theoretical understanding of the key issues.
- Future work: scale up the model and apply it to a more realistic rendition of language-acquisition tasks.

References
Ebbels, S. H. et al. (2012). Effectiveness of semantic therapy for word-finding difficulties in pupils with persistent language impairments: a randomized control trial. IJLCD, 47(1).
Forrester, N. & Plunkett, K. (1994). Learning the Arabic plural: the case for minority default mappings in connectionist networks. In A. Ram & K. Eiselt (Eds.), Proceedings of the 16th Annual Conference of the Cognitive Science Society (pp. ). Mahwah, NJ: Erlbaum.
Law, J., Campbell, C., Roulstone, S., Adams, C. & Boyle, J. (2007). Mapping practice onto theory: The speech and language practitioner's construction of receptive language impairment. IJLCD, 43, 245–63.
Lindsay, G., Dockrell, J., Law, J., & Roulstone, S. (2011). Better communication research programme: 1st interim report. Technical report, UK Department for Education.
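For concreteness, the training environment described under Methods (10,000 items in a 2D input space on [-0.5, +0.5], of which 10% form the training set) can be sketched roughly as below. The diagonal labelling rule here is a hypothetical stand-in for illustration; the actual target patterns for the diagonal and islands problems are those shown in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def diagonal_labels(xy):
    # Hypothetical 'regular' category rule: which side of the main
    # diagonal an item falls on (the real targets are in Table 1).
    return (xy[:, 0] + xy[:, 1] > 0).astype(int)

# Input space: 10,000 items on a grid over [-0.5, +0.5] in 2D (as in Methods)
grid = np.linspace(-0.5, 0.5, 100)
xx, yy = np.meshgrid(grid, grid)
items = np.column_stack([xx.ravel(), yy.ravel()])

# Training set: a random 10% sample of the input space
idx = rng.choice(len(items), size=len(items) // 10, replace=False)
train_x = items[idx]
train_y = diagonal_labels(train_x)
```

The islands problem would replace `diagonal_labels` with an idiosyncratic, patchy category layout over the same grid.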
Affiliations & Acknowledgements
* Corresponding author
¹ Developmental Neurocognition Lab, Department of Psychological Sciences, Birkbeck College London, Malet Street, London WC1E 7HX, UK
² Division of Psychology & Language Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
³ Department of Psychology and Human Development, Institute of Education, University of London, 20 Bedford Way, London WC1H 0AL, UK
This work was supported by ESRC grant RES.

Methods

The model
- A simple neural network model with 50 hidden units (Figure 1).
- Input units represent dimensions in a 2D space.
- Output units represent the category of items in the input space (e.g., inflectional categories, lexical categories, semantic categories).

Training
- Two learning problems (Table 1):
  - Regular categories: diagonal.
  - Idiosyncratic categories: islands.
- Input values varied between -0.5 and +0.5 on each input unit.
- The input space consisted of 10,000 items.
- The training set consisted of 10% of the input space.
- The model was trained to learn the categories with the backpropagation algorithm.

Developmental deficits
- Low connectivity of the hidden units (C = 30% instead of 100%).
- Insensitive processing units: shallow sigmoid (temperature T = 0.5 instead of 1).
- We have explored other computational constraints, such as processing noise, a low learning rate, and a low number of hidden units.
- Two-by-two design: deficit vs. learning task (Table 2).

Intervention
- Intervention was modelled as items added to the original training set (intervention complements normal experience).
- Interventions were designed either to add sampling across the input space, to add training in areas that were 'prototypical' or central to each category, or to add training in areas that demarcated category boundaries (see Table 2).
- Interventions were applied at different time points during development.
- 10 replications were run in each condition.

Table 1. Target patterns, training patterns and target activations for the diagonal and the islands problems. In the first and second columns, different colours represent different target categories; in the remaining figures, red = active, blue = inactive.

Figure 1. Network architecture (not all hidden units shown).

Table 2. Training scenarios (deficit vs. learning task):

  Deficit vs. learning task | Low connectivity | Shallow sigmoid
  Diagonal problem          | Scenario 1       | Scenario 3
  Islands problem           | Scenario 2       | Scenario 4

Figure 2. Developmental trajectories and phases for learning the diagonal (left) and the islands (right). Top figure: performance (blue) and mean square error (red) across development; phase boundaries are indicated by green vertical lines. Second to fourth rows of figures: snapshots of the activation patterns of Output units 1 to 3 at the phase boundaries. Activation values are colour-coded as temperature plots: red and blue indicate activation close to one and zero, respectively.

Table 3. Summary of the results. Numbers in bold represent phases in which a particular intervention was successful according to the t-tests; below these, numbers list the phases in which more than 7 networks improved.

Figure 3. Developmental trajectories and internal representations in a typical case, an atypical case with low connectivity, and the same atypical case with intervention. Top figure: developmental trajectories; vertical lines show the epochs at which snapshots were taken. Coloured figures: snapshots of the activation pattern of Unit 2 in the three cases.
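The two deficits described in the Methods can be sketched as a minimal numpy network: a temperature parameter on the logistic activation (T = 0.5 gives the shallow sigmoid) and a binary mask over the weights (C = 0.3 gives low connectivity). This is an illustrative reconstruction, not the authors' code; in particular, applying the mask only to the input-to-hidden weights is an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

N_IN, N_HID, N_OUT = 2, 50, 3  # 2D input, 50 hidden units, 3 category units

def sigmoid(z, T=1.0):
    # Temperature-scaled logistic. T = 1 is the typical model;
    # T = 0.5 flattens the curve -- the 'shallow sigmoid' deficit.
    return 1.0 / (1.0 + np.exp(-T * z))

def make_net(connectivity=1.0):
    # 'Low connectivity' deficit: only a fraction of connections exist
    # (C = 0.3 instead of 1.0). The mask is applied to the
    # input-to-hidden weights here for illustration.
    w1 = rng.normal(0.0, 0.5, (N_IN, N_HID))
    mask = (rng.random((N_IN, N_HID)) < connectivity).astype(float)
    w2 = rng.normal(0.0, 0.5, (N_HID, N_OUT))
    return {"w1": w1 * mask, "w2": w2, "mask": mask}

def forward(net, x, T=1.0):
    h = sigmoid(x @ net["w1"], T)
    return h, sigmoid(h @ net["w2"], T)

def train_epoch(net, x, y, lr=0.1, T=1.0):
    # One pass of plain backpropagation on squared error.
    h, out = forward(net, x, T)
    err = out - y
    d_out = err * T * out * (1.0 - out)
    d_hid = (d_out @ net["w2"].T) * T * h * (1.0 - h)
    net["w2"] -= lr * (h.T @ d_out) / len(x)
    net["w1"] -= lr * ((x.T @ d_hid) / len(x)) * net["mask"]  # absent weights stay absent
    return float((err ** 2).mean())
```

A typically developing model would use `make_net(1.0)` with `T=1.0` throughout; the two deficit conditions correspond to `make_net(0.3)` and `T=0.5`, respectively.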
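The intervention scheme (extra items appended to the original training set, so that intervention complements normal experience) and the improvement score from the Results can be sketched as follows. Only the random-items scheme (Intervention 1) is implemented; the other schemes (transect, patches, boundary items, bigger corners) would sample different regions of the input space. The function names are illustrative, not the poster's.

```python
import numpy as np

rng = np.random.default_rng(2)

def improvement_score(acc_deficit, acc_with_intervention):
    # Improvement in performance due to intervention: the same deficit
    # model, with versus without the extra training items.
    return acc_with_intervention - acc_deficit

def add_intervention(train_x, train_y, label_fn, kind="random", n=100):
    # Intervention items are appended to, not substituted for, the
    # original training set. 'random' mirrors Intervention 1.
    if kind == "random":
        extra_x = rng.uniform(-0.5, 0.5, (n, 2))
    else:
        raise NotImplementedError(kind)
    return (np.vstack([train_x, extra_x]),
            np.concatenate([train_y, label_fn(extra_x)]))
```

Each deficit model would then be retrained on the augmented set from a given phase onward, and its accuracy compared with the untreated run of the same network.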