“Statistical Approach Using Neural Nets”: Nuclear Masses and Half-lives
E. Mavrommatis, S. Athanassopoulos, A. Dakos (University of Athens); K. A. Gernoth (UMIST); J. W. Clark (Washington University, Saint Louis)
NuPECC Town Meeting, GSI, 2003
Contents
- Introduction
- ANNs for global modeling of nuclear properties
- Nuclear masses
- Half-lives of β⁻-decaying nuclides
- Conclusions - Prospects
Global models
- Hamiltonian-based:
  - Masses: Möller et al. (FRDM); Pearson et al. (HFBCS-1)
  - Half-lives: Möller et al. (FRDM); Klapdor et al.
- Statistical: neural networks, …
(Models compared in terms of number of parameters and input.)
Artificial Neural Networks (ANNs)
Systems of neuron-like units that are arranged in some architecture and connected with each other through weights.
ANNs have many applications to scientific problems: J. W. Clark, T. Lindenau & F. Ristig, Scientific Applications of NNs (Springer, 1999).
We focus on the task of approximating a fundamental mapping from one set of physical quantities to another: the ANN is trained on a subset of the existing database to create a statistical model for subsequent use in prediction.
Neural network models
We use multi-layered feed-forward supervised neural networks.
1. Architecture and dynamics: [I-H1-H2-…-HL-O], activation function
2. Training algorithms: back-propagation (SB), modified back-propagation (MB1)
3. Data sets: learning, validation, test
4. Coding at input and output interfaces
5. Performance measures
Neural network elements
[Figure: feed-forward network with input units χ1, χ2, connection weights (e.g. w13, w36), and output unit o]
We use multi-layered feed-forward supervised neural networks.
1. Independent variables of a known pattern are presented at the input units.
2. The information proceeds towards the output unit.
3. The output value o is compared with the target value t.
4. The connection weights change so that a cost function proportional to (t−o)² is reduced.
5. The procedure repeats for the next pattern.
6. Learning continues until the error criterion is satisfied.
Supervised learning (on-line updating)
[Flow diagram: training data (input patterns, target outputs) → feed-forward neural network → calculate error → adjust weights to reduce error → error criterion satisfied? If no, get next pattern; if yes, stop.]
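The on-line supervised learning loop above can be sketched in a few lines. This is a minimal illustration, not the networks used in the talk: a [1-8-1] net with sigmoid hidden units fitted to a toy function, with an assumed learning rate and no momentum term; each pattern is presented in turn, the output is compared with the target, and the weights are moved down the gradient of a cost proportional to (t−o)².

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy learning set: y = sin(x) on [0, pi], a stand-in for a physical mapping
X = rng.uniform(0.0, np.pi, size=(100, 1))
T = np.sin(X)

# [1-8-1] architecture; the weights are the adjustable parameters
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
eta = 0.1  # learning rate (assumed value)

for epoch in range(500):
    for x, t in zip(X, T):           # on-line updating: one pattern at a time
        h = sigmoid(x @ W1 + b1)     # forward pass to the hidden layer
        o = h @ W2 + b2              # linear output unit
        err = o - t                  # compare output with target value
        # back-propagate: gradients of a cost proportional to (t - o)^2
        dW2 = np.outer(h, err); db2 = err
        dh = (W2 @ err) * h * (1.0 - h)
        dW1 = np.outer(x, dh); db1 = dh
        W2 -= eta * dW2; b2 -= eta * db2
        W1 -= eta * dW1; b1 -= eta * db1

# RMS error over the learning set (the sigma_RMS performance measure)
pred = sigmoid(X @ W1 + b1) @ W2 + b2
rms = float(np.sqrt(np.mean((pred - T) ** 2)))
print(rms)
```

Learning stops here after a fixed number of epochs; in practice the error criterion on a validation set decides when to stop, as in the flow diagram.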
References

Calculations of ΔΜ with artificial neural networks:
- S. Gazula, J. W. Clark, and H. Bohr, Nucl. Phys. A540 (1992) 1
- K. A. Gernoth, J. W. Clark, J. S. Prater, and H. Bohr, Phys. Lett. B300 (1993) 1
- K. A. Gernoth and J. W. Clark, Comp. Phys. Commun. 88 (1995) 1
- E. Mavrommatis, S. Athanassopoulos, K. A. Gernoth, and J. W. Clark, Condensed Matter Theories, Vol. 15, G. S. Anagnostakos et al., eds. (Nova Science Publishers, N.Y. 2000)
- J. W. Clark, E. Mavrommatis, S. Athanassopoulos, A. Dakos, and K. A. Gernoth, in Proceedings of the Conference on “Fission Dynamics of Atomic Clusters and Nuclei”, D. M. Brink et al., eds. (World Scientific, Singapore 2002) p. 76
- S. Athanassopoulos, E. Mavrommatis, K. A. Gernoth, and J. W. Clark, submitted to Phys. Rev. C

Calculations of T1/2 with artificial neural networks:
- E. Mavrommatis, A. Dakos, K. A. Gernoth, and J. W. Clark, Condensed Matter Theories, Vol. 13, J. Da Providencia and F. B. Malik, eds. (Nova Science Publishers, Commack, NY 1998)
- E. Mavrommatis, S. Athanassopoulos, A. Dakos, K. A. Gernoth, and J. W. Clark, in Proceedings of the International Conference on “Structure of the Nucleus at the Dawn of the 21st Century”, Bonsignori et al., eds. (World Scientific, Singapore 2001)
- A. Dakos, E. Mavrommatis, K. A. Gernoth, and J. W. Clark, to be submitted for publication
Results (Mass Excess ΔΜ)

σ_RMS (MeV) on learning, validation, and test sets; network architecture (I-H1-…-HL-O) with [P] parameters:
- ( ) [421], Z & N in binary, A & Z−N in analog: (O) …, (N) …
- (4-40-1) [245]: 1.068
- ( )+ [281], Z & N in analog and parity: (O) 2.280, (NB) 2.158, (N) …
- Möller et al. (FRDM), ADNT 59 (1995): (O) 0.735, (N) 0.697, (NB) …
- Pearson et al. (HFBCS-1), ADNT 77 (2001): (NB) …
- ( )* [281], Z & N in analog and parity: (O) 0.962, (N) 1.485, (NB) …
- ( )** [281], Z & N in analog and parity: (NB) …
Nuclear Masses with ANNs

Mass excess ΔΜ [binding energies, separation energies, Q-values]
Experimental values from NUBASE (G. Audi et al., Nucl. Phys. A624 (1997) 1)

Net: [ ]** [281]
- Data sets: learning: 1303, validation: 351 from mixed MN (FRDM); prediction: NUBASE 158 (NB)
- Training: as below
- Coding: 4 input units: Z, N in analog, Z, N parities; 1 output unit: ΔΜ in analog (S3 scaling)
- Performance measure: σ_RMS

Net: [ ]* [281]
- Data sets: learning: 1323 (O), validation: 351 (N) from MN 1981; prediction: NUBASE 158 (NB)
- Training: modified back-propagation algorithm; modification of learning and momentum parameters
- Coding: 4 input units: Z, N in analog, Z, N parities; 1 output unit: ΔΜ in analog (S3 scaling)
- Performance measure: σ_RMS
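The 4-unit input coding above (Z and N as analog values plus their parities) can be sketched as follows. The normalization constants are hypothetical, chosen only for illustration, and the “S3” output scaling is not reproduced here.

```python
def encode_nucleus(Z, N, z_max=120.0, n_max=180.0):
    """Return the 4 input-unit activations for a nuclide (Z, N).

    z_max and n_max are assumed normalization constants, not the
    values used in the talk.
    """
    return [
        Z / z_max,  # Z in analog (scaled to roughly [0, 1])
        N / n_max,  # N in analog
        Z % 2,      # Z parity (0 = even, 1 = odd)
        N % 2,      # N parity
    ]

print(encode_nucleus(26, 30))  # 56Fe
```

The single analog output unit would then carry the (scaled) mass excess ΔΜ, and σ_RMS is computed on the rescaled predictions.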
Separate O & N data sets (MN): net [ ]*
Mixed data sets (O & N) (MN): net [ ]**
Nuclear Half-lives of β⁻-decaying Nuclides with ANNs

Half-life T1/2 (lnT1/2) [ground state, β⁻ mode, branching ratio = 1]
Experimental values from NUBASE (G. Audi et al., Nucl. Phys. A624 (1997) 1)

Best net: [ ]* [191]
- Data sets: T1/2 ≤ 10⁶ s [learning: 518, prediction: 174] (base B)
- Training: standard back-propagation (with momentum term)
- Coding: 16 input units: Z, N in binary; 1 input unit: Q in analog; 1 output unit: lnT1/2 in analog
- Performance measure: σ_RMS (compared with Möller et al. and Klapdor et al.)
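The 16+1 input coding above can be sketched as follows. The 8-bit width per number and the Q-value normalization are assumptions made for illustration; the slides specify only that Z and N are coded in binary over 16 units and Q in analog over one unit.

```python
def encode_half_life_input(Z, N, Q, q_max=20.0):
    """Return the 17 input-unit activations: Z, N in binary, Q in analog.

    The 8-bit width and q_max (in MeV) are assumed values.
    """
    z_bits = [(Z >> i) & 1 for i in range(7, -1, -1)]  # Z as 8 bits, MSB first
    n_bits = [(N >> i) & 1 for i in range(7, -1, -1)]  # N as 8 bits, MSB first
    return z_bits + n_bits + [Q / q_max]               # Q in analog

vec = encode_half_life_input(27, 33, 5.2)
print(len(vec))  # 17 input-unit activations
```

The single analog output unit then carries lnT1/2, so the network learns the half-life on a logarithmic scale.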
Conclusions
- Global models based on ANNs for nuclear masses are approaching the accuracy of models based on Hamiltonian theory.
- Global models based on ANNs for half-lives of β⁻ decay are promising.

Prospects
- Further development of global models based on ANNs for nuclear masses, half-lives, etc. (optimization techniques, pruning, construction, etc.)
- Further investigation of models of the mass differences DM = ΔM_exp − ΔM_FRDM
- Further insight into the statistical interpretation and modeling with ANNs
- The inverse problem
Neural network modeling, as well as other statistical strategies based on new algorithms of artificial intelligence, may prove to be a useful asset in the further exploration of nuclear phenomena far from β-stability.