Self Organizing Maps: Parametrization of Parton Distribution Functions


Self Organizing Maps: Parametrization of Parton Distribution Functions
Daniel Perry, University of Virginia
DIS 2010, April 19-23, 2010, Florence, Italy

Outline
- Introduction
- Algorithm
- SOMPDFs
- Conclusions/Outlook

History/Organization of work
2005: An interdisciplinary Physics/Computer Science group was formed to investigate new computational methods in theoretical particle physics (NSF).
2006-2007: PDF parametrization code, SOMPDF.0, written in Python, C++, and Fortran. Preliminary results discussed at conferences: DIS 2006, ...
2008: First analysis published: J. Carnahan, H. Honkanen, S. Liuti, Y. Loitiere, P. Reynolds, Phys. Rev. D79, 034022 (2009).
2009: New group formed (K. Holcomb, D. Perry, S. Taneja + JLab). Rewriting, reorganization, and translation of the first code into a uniform language, Fortran 95.
2010: Implementation of error analysis. Extension to new data analyses.
Group website: http://faculty.virginia.edu/sompdf/

Introduction
The study of hadron structure in the LHC era involves a large set of increasingly complicated and diverse observations, from which PDFs, TMDs, GPDs, etc. are extracted and compared to theoretically predicted patterns. Experimental observations can be linked to the momentum, spin, and spatial configurations of the hadrons' constituents.
Conventional approaches tend to explain such patterns in terms of the microscopic properties of the theory (based on two-body interactions).
We attack the problem from a different perspective: we study the global behavior of systems as they evolve from a large and varied number of initial conditions. This goal is within reach with modern High Performance Computing.

SOMs
SOMs were developed by T. Kohonen in the 1980s (T. Kohonen, Self-Organizing Maps, Springer, 1995, 1997, 2006).
SOMs are a type of neural network whose nodes/neurons (map cells) are tuned to a set of input signals/data/samples according to a form of adaptation (similar to regression). The various nodes form a topologically ordered map during the learning process.
The learning process is unsupervised: no "correct response" reference vector is needed. The nodes are decoders of the input signals and can be used for pattern recognition.
Two-dimensional maps are used to cluster/visualize high-dimensional data.
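The matching step at the heart of a SOM is finding, for each input, the map cell whose reference vector is most similar. A minimal sketch in Python (the array shapes and the Euclidean metric are illustrative assumptions):

```python
import numpy as np

def best_matching_unit(weights, x):
    """Return the (row, col) index of the map cell whose reference
    vector is closest to the input sample x (Euclidean distance).

    weights : array of shape (rows, cols, dim), one reference vector per cell
    x       : array of shape (dim,), one input sample
    """
    d2 = np.sum((weights - x) ** 2, axis=-1)   # squared distance per cell
    return np.unravel_index(np.argmin(d2), d2.shape)
```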

[Figure: map representation of 5 initial samples: blue, yellow, red, green, magenta]

Learning
Map cells V_i that are close to the "winner" neuron c activate each other to "learn" from the input x. At training iteration n,

w_i(n+1) = w_i(n) + h_ci(n) [x(n) - w_i(n)],

where h_ci(n) is the neighborhood function, which decreases both with n and with the map distance between cell i and the winner. Two versions are commonly used: a "bubble" function, constant over a neighborhood of the winner whose size decreases with n, and a Gaussian whose width decreases with n.
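A hedged sketch of this update step in Python, using the Gaussian version of the neighborhood (the decay constants and the Gaussian form are illustrative assumptions, not the collaboration's exact choices):

```python
import numpy as np

def neighborhood(dist2, n, sigma0=3.0, tau=1000.0):
    """Gaussian neighborhood h_ci(n): decreases with the training
    iteration n and with the squared map distance dist2 from the winner."""
    sigma = sigma0 * np.exp(-n / tau)           # radius shrinks with n
    return np.exp(-dist2 / (2.0 * sigma ** 2))

def update_weights(weights, x, winner, n, alpha0=0.1, tau=1000.0):
    """One Kohonen learning step: every cell moves toward the input x,
    weighted by its map distance from the winner cell."""
    rows, cols, _ = weights.shape
    alpha = alpha0 * np.exp(-n / tau)           # learning rate decays with n
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    dist2 = (ii - winner[0]) ** 2 + (jj - winner[1]) ** 2
    weights += alpha * neighborhood(dist2, n)[..., None] * (x - weights)
```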

- Initialization: functions are placed on the map.
- Training: the "winner" node is selected.
- Learning: adjacent nodes readjust according to a similarity criterion.
- Final step: clusters of similar functions form; the input data get distributed over the map (see the colors example above). A training-loop sketch follows below.
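Combining the pieces, a minimal training loop might look like the following (map size, iteration count, and random initialization are assumptions; best_matching_unit and update_weights are the sketches defined earlier):

```python
import numpy as np

def train_som(samples, rows=6, cols=6, n_iter=5000, seed=0):
    """Minimal SOM training loop: initialize, pick a winner for a
    random sample, let the winner's neighborhood learn, repeat."""
    rng = np.random.default_rng(seed)
    # Initialization: random reference vectors placed on the map
    weights = rng.random((rows, cols, samples.shape[1]))
    for n in range(n_iter):
        x = samples[rng.integers(len(samples))]  # random training sample
        winner = best_matching_unit(weights, x)  # Training: select winner
        update_weights(weights, x, winner, n)    # Learning: neighbors adapt
    return weights  # Final step: similar inputs now cluster on the map
```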

SOMPDF Method
Initialization: a set of database/input PDFs is formed by selecting at random from existing PDF sets and varying their parameters. Baryon number and momentum sum rules are imposed at every step. These input PDFs are used to initialize the map.
Training: a subset of the input PDFs is used to train the map. Similarity is tested by comparing the PDFs at given (x, Q²) values. The new map PDFs are obtained by averaging the neighboring PDFs with the "winner" PDFs.
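The initialization step could be sketched as below; the sampling scheme, the pdf_from_params evaluator, and the way the momentum sum rule is enforced are schematic assumptions, not the SOMPDF code itself:

```python
import numpy as np

def init_input_pdf(base_params, pdf_from_params, rng, spread=0.1):
    """Generate one input PDF by randomly varying the parameters of an
    existing parametrization, then rescale the overall normalization so
    the momentum sum rule (integral of x * f(x) over x equals 1) holds,
    assuming f scales linearly with the 'norm' parameter."""
    params = {k: v * (1.0 + spread * rng.standard_normal())
              for k, v in base_params.items()}
    xs = np.linspace(1e-4, 1.0, 2000)
    f = pdf_from_params(params, xs)        # hypothetical PDF evaluator
    momentum = np.trapz(xs * f, xs)        # current momentum integral
    params["norm"] = params.get("norm", 1.0) / momentum
    return params
```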

χ² minimization through a genetic algorithm
- Once the first map is trained, the χ² per map cell is calculated.
- We take a subset of PDFs that have the best χ² from the map and form a new initialization set that includes them.
- We train a new map, calculate the χ² per map cell, and repeat the cycle.
- We iterate until the χ² stops varying.
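Schematically, the cycle might be coded as follows; train_map, chi2, and random_pdfs are hypothetical helpers, and the selection size and convergence tolerance are illustrative:

```python
def sompdf_cycle(init_set, data, n_keep=10, tol=1e-3, max_cycles=50):
    """Genetic-style iteration: train a map, score each cell by chi^2
    against the data, keep the best PDFs as seeds for the next map,
    and stop once the best chi^2 stops varying."""
    best_chi2 = float("inf")
    for _ in range(max_cycles):
        som = train_map(init_set)               # hypothetical: cell -> PDF
        scores = {cell: chi2(pdf, data) for cell, pdf in som.items()}
        survivors = sorted(scores, key=scores.get)[:n_keep]
        if best_chi2 - scores[survivors[0]] < tol:
            break                               # chi^2 stopped improving
        best_chi2 = scores[survivors[0]]
        # new initialization set: best PDFs plus fresh random variations
        init_set = [som[c] for c in survivors] + random_pdfs(len(init_set) - n_keep)
    return som
```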

Error Analysis
- The treatment of experimental errors is complicated by the incompatibility of the various experimental χ² definitions.
- The treatment of theoretical errors is complicated because the errors themselves, and their correlations, are not well known.
- In our approach we define a statistical error on an ensemble of SOMPDF runs.
- An additional evaluation using the Lagrange multiplier method is in progress.
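A minimal sketch of the ensemble error, assuming each run returns its best-fit PDF evaluated on a common x grid (the array layout is an assumption):

```python
import numpy as np

def ensemble_error(runs):
    """Pointwise statistical error band from an ensemble of SOMPDF runs.

    runs : array of shape (n_runs, n_x), one row per run, each row being
           that run's best-fit PDF on a common x grid
    Returns the mean PDF and its standard deviation across runs."""
    runs = np.asarray(runs)
    return runs.mean(axis=0), runs.std(axis=0, ddof=1)
```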

Preliminary results
[Figure: Q² = 7.0 GeV²]

Mixing
The need to avoid functional bias has led to a method of input PDF generation in which each input PDF is a weighted average of several different randomized PDF parametrizations. These mixed PDFs are then used to train the map instead of the single-function versions. A sketch of the mixing step follows below.
Advantages:
- No bias from any single parametrization.
- More diverse and random input PDFs for comparison.
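As a sketch (the convex weighting scheme is an assumption, and each parametrization is a hypothetical callable returning PDF values on a grid):

```python
import numpy as np

def mixed_input_pdf(parametrizations, xs, rng):
    """Build one input PDF as a random convex combination of several
    different randomized parametrizations, so that no single functional
    form biases the input set."""
    w = rng.random(len(parametrizations))
    w /= w.sum()                                 # weights sum to 1
    return sum(wi * f(xs) for wi, f in zip(w, parametrizations))
```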

No mixing: single function (GRV92)
[Figure: Q² = 7.5 GeV²]

Conclusions/Future Work
- We presented a new computational method, Self-Organizing Maps, for parametrizing nucleon PDFs.
- The method works well: we succeeded in minimizing the χ² and in performing error analyses.
- Next: larger sets of maps for error analysis; examination of the effect of the SOM parameters on the results (training length, map size, neighborhood function, ...).
- Near future: applications to more varied sets of data where predictivity is important (polarized scattering, x → 1, low x, ...).
- More distant future: study of the connection with complexity theory (GPDs), ...