1
CERN, 18 December 2013, Genève. Scientific collaboration with CERN. J. Chaskalovic, Institut Jean le Rond d'Alembert, University Pierre and Marie Curie. F. Assous, Mathematics Department, Bar-Ilan and Ariel Universities.
2
Agenda: the team; theoretical and numerical approaches for charged particle beams; current topics; our future projects; Data Mining for CERN.
3
The team
4
Joel Chaskalovic: dual expertise
– PhD in Theoretical Mechanics (University Pierre & Marie Curie) and engineer of the École Nationale des Ponts & Chaussées.
– Associate Professor in Mathematical Modeling applied to Engineering Sciences (University Pierre & Marie Curie).
– Director of Data Mining and Media Research, Publicis Group (1993-2007).
Franck Assous: academic and industrial experience
– PhD in Applied Mathematics (Dauphine University, Paris 9).
– Associate Professor in Applied Mathematics, Bar-Ilan and Ariel Universities (Israel).
– Scientific Consultant, CEA, France (1990-2002).
5
Theoretical and numerical approaches for particle accelerators
6
Current topics: a new method to evaluate asymptotic numerical models by Data Mining techniques; a new paraxial model; Data Mining as a tool to evaluate the quality of models.
7
A new method to evaluate asymptotic numerical models by Data Mining techniques
8
The physical problem. Physical framework: collisionless charged particle beams (accelerators, free-electron lasers, …).
9
The mathematical model
10
Approximate models, each exploiting given physical/experimental assumptions:
– Poisson model: neglect the time derivatives of the fields.
– Magneto-static model: neglect the time derivative.
– Darwin model: neglect the transverse part of the displacement current.
– Paraxial model: use the paraxial property.
11
How we derive a paraxial model:
1. Write the equations in the beam frame.
2. Introduce a scaling of the equations.
3. Define a small parameter.
4. Use expansion techniques and retain the first orders.
5. Build an ad hoc discretization.
6. Run simulations and obtain numerical results.
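Step 4 above (expand in the small parameter and retain only the first orders) can be sketched as follows; the expansion coefficients and the field they stand for are purely illustrative, not taken from the actual paraxial model.

```python
# Sketch of step 4: represent a field as an expansion sum(c_k * eta**k)
# in the small parameter eta, then truncate it at a chosen order.

def truncate_expansion(coeffs, order):
    """Keep the terms up to eta**order of the expansion."""
    return coeffs[: order + 1]

def evaluate(coeffs, eta):
    """Evaluate the (possibly truncated) expansion at a given eta."""
    return sum(c * eta**k for k, c in enumerate(coeffs))

# Hypothetical expansion coefficients of some field component.
full = [1.0, 0.5, 0.1, 0.02]
first_order = truncate_expansion(full, 1)   # zero- and first-order terms
second_order = truncate_expansion(full, 2)  # adds the eta**2 correction

eta = 0.1  # for paraxial beams, eta is small
print(evaluate(first_order, eta))   # 1.05
print(evaluate(second_order, eta))  # 1.051
```

The gap between the two truncations shrinks like eta**2, which is exactly the kind of difference the Data Mining comparison of models is meant to quantify.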
12
The asymptotic expansions
13
The first paraxial model (axisymmetric case): zero-order, first-order, and second-order equations.
14
Numerical Results
15
But… fundamental questions. Despite a theoretical result (controlled accuracy): How many terms should be retained in the asymptotic expansion to get a "precise" model? How to compare the different orders of approximation: what does each order of the asymptotic expansion bring to the numerical results? Which variables are responsible for the improvement between models M_i and M_(i+1)? → Use of a Data Mining methodology.
16
Data processing: 100 time steps × 1,250 space nodes → 125,000 rows, 26 columns.
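The flattening step described above can be sketched as follows; the column names are hypothetical placeholders, since the actual 26 variables are not listed on the slide.

```python
# Sketch: each (time step, space node) pair becomes one row of the
# mining table; the 26 columns would hold the physical variables of
# the compared models (names below are illustrative only).

N_STEPS, N_NODES = 100, 1250
columns = ["t_n", "node"] + [f"var_{k}" for k in range(24)]  # 26 columns

rows = [
    {"t_n": t, "node": x}   # the physical variables would be filled here
    for t in range(N_STEPS)
    for x in range(N_NODES)
]

print(len(rows), len(columns))  # 125000 26
```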
17
The Database
18
Data modeling. For the computation of a variable X, compare its values under the two models M_1 and M_2. When the comparison indicator is close to 1: the numerical results of M_1 and M_2 are equivalent. When it is very small or very large compared to 1: the numerical results of M_1 and M_2 differ significantly.
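The indicator can be sketched as a small classification function; the tolerance used here is an illustrative assumption, not the one used in the study.

```python
# Sketch of the model-comparison indicator: classify the ratio of a
# variable X computed under models M1 and M2.

def equivalence(x1, x2, tol=0.1):
    """Return 'equivalent' when x1/x2 is close to 1,
    'different' when it is much smaller or larger."""
    ratio = x1 / x2
    if abs(ratio - 1.0) <= tol:
        return "equivalent"
    return "different"

print(equivalence(1.02, 1.00))  # equivalent
print(equivalence(5.0, 1.0))    # different
```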
19
Data Mining exploration (F. Assous and J. Chaskalovic, J. Comput. Phys., 2011). E_z^(2) is the most discriminating predictor (expected, because E_z^(1) = 0). The second most important predictor is E_r^(2) (unexpected, because E_r^(1) ≠ 0). B_z^(2) appears as a non-significant predictor (unexpected, because B_z^(1) = 0). Significant differences between V_r^(1) and V_r^(2).
20
Future developments. Which is the best asymptotic expansion? Globally the second order is better than the first order; but locally, can we determine when and where the first order is better? → Data experiments and Data Mining.
21
On a new paraxial model
22
Revisiting the scaling. The characteristic longitudinal dimension L_z is chosen different from the characteristic transverse dimension L_r: z = L_z z′, r = L_r r′, with L_z ≠ L_r.
23
The new paraxial model (axisymmetric case): zero-order and first-order equations. (F. Assous and J. Chaskalovic, CRAS, 2012)
24
Future developments. Numerical simulations. Validation and characterization, by Data Mining techniques, of the significant differences between the two asymptotic models (L_z = L_r) and (L_z ≠ L_r). Comparison with experimental data.
25
Data Mining: a tool to evaluate the quality of models
26
The four sources of error: the modeling error, the approximation error, the discretization error, the parameterization error.
27
The famous theorems of calculus Rolle’s theorem Lagrange’s theorem Taylor’s theorem
28
The discretization error is the error corresponding to the difference of order between two numerical models (MN_1) and (MN_2) from a given family of approximation methods. Suppose we solve a given mathematical model (E) with P_1 and P_2 finite elements: the Bramble-Hilbert theorem predicts the convergence order of each.
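The orders predicted by the Bramble-Hilbert theorem can be checked numerically by comparing errors on two meshes; the error values below are illustrative, not results from the actual P_1/P_2 runs.

```python
import math

# Sketch of an observed-order check: if the error behaves like C*h**p,
# halving the mesh size h divides it by 2**p, so p can be recovered
# from two error measurements.

def observed_order(e_coarse, e_fine, ratio=2.0):
    """Estimated convergence order when the mesh size is divided by `ratio`."""
    return math.log(e_coarse / e_fine) / math.log(ratio)

# Hypothetical L2 errors on meshes of size h and h/2.
p1_order = observed_order(4.0e-3, 1.0e-3)   # ~2, as expected for P1
p2_order = observed_order(8.0e-4, 1.0e-4)   # ~3, as expected for P2
print(round(p1_order), round(p2_order))     # 2 3
```

Rows of the database where this predicted gap between P_1 and P_2 does not show up are exactly the "surprising" rows studied below.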
29
The discretization error: P_1 and P_2 finite element methods for the numerical approximation of the Vlasov-Maxwell equations.
30
The P_1 – P_2 finite element database.
31
"Surprising" rows w.r.t. the Bramble-Hilbert theorem: if |E_r^(2) − E_r^(1)| ≤ 0.65 (5% of max |E_r^(2) − E_r^(1)|), then P_1 and P_2 give results of the same order. This concerns 14% of the dataset.
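The flagging rule above can be sketched as follows; the field values are synthetic, chosen only so that the 5%-of-max threshold is visible.

```python
# Sketch of the "same order" rule: flag rows where the P1 and P2
# results for E_r agree within 5% of the maximum observed gap.

def same_order_rows(er1, er2, frac=0.05):
    """Indices of rows where |er2 - er1| <= frac * max |er2 - er1|."""
    gaps = [abs(a - b) for a, b in zip(er1, er2)]
    threshold = frac * max(gaps)
    return [i for i, g in enumerate(gaps) if g <= threshold]

er_p1 = [0.0, 1.0, 2.0, 3.0]
er_p2 = [0.1, 1.2, 15.0, 3.05]        # max gap 13.0 -> threshold 0.65
print(same_order_rows(er_p1, er_p2))  # [0, 1, 3]
```

On the real database the same rule selects the 14% of rows called "surprising" above.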
32
Kohonen maps (self-organizing maps)
33
Cluster analysis with Kohonen maps
34
P_1 vs P_2: rules of the cluster "P_1 – P_2 same order".
35
An example: equivalent results between P_1 and P_2 finite elements. E_r^(1) and E_r^(2) are equivalent on 14% of the data. Data Mining techniques identified the number of time steps t_n as the most discriminating predictor; the computed critical threshold is t_n = 42 out of 100 time steps. P_2 finite elements are therefore overqualified at the beginning of the propagation.
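How a decision tree finds a discriminating threshold on t_n can be sketched with a brute-force split search; the labels below are synthetic, not the real study data (where the computed threshold was t_n = 42).

```python
# Sketch of a one-variable decision-tree split: try every cut point on
# t_n and keep the one that best separates "P1 and P2 equivalent" rows
# (label True) from the rest.

def best_threshold(t_n, label):
    """Cut point minimizing misclassifications of the rule `t < cut`."""
    best, best_err = None, float("inf")
    for cut in sorted(set(t_n)):
        # predict "equivalent" below the cut, "different" above it
        err = sum((t < cut) != y for t, y in zip(t_n, label))
        if err < best_err:
            best, best_err = cut, err
    return best

steps = list(range(10))
equiv = [t < 4 for t in steps]       # equivalent only at early time steps
print(best_threshold(steps, equiv))  # 4
```

Real tree implementations use impurity criteria such as Gini or entropy rather than the raw error count, but the principle of scanning candidate cut points is the same.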
36
Future developments. Physical interpretation of the above results: the threshold t_n = 42. Robustness of the results: comparison with other Data Mining technologies (neural networks, Kohonen maps, etc.). Extension to other physical unknowns. Sensitivity with respect to the data. Coupling of errors.
37
Data Mining for CERN
38
CERN and Data Mining. "The Large Hadron Collider experiments involve about 150 million sensors delivering data 40 million times per second. There are around 600 million collisions per second and, after filtering, 100 collisions of interest per second remain. Consequently, 25 PB of data must be stored each year." (source: Wikipedia)
39
Data Mining: the keys to a relevant exploitation of the data. Project management, business expertise, software engineering, data exploration.
40
Data Mining is a discovery process: Data Mining, not Data Analysis. Data Scan: inventory of potential and explicative variables. Data Management: collection, arrangement and presentation of the data in the right way for mining. Data Modeling: learning, clustering, forecasting.
41
Data Mining principles. Supervised Data Mining: one or more target variables must be explained in terms of a set of predictor variables (segmentation by decision trees, neural networks, etc.). Unsupervised Data Mining: no variable to explain; all available variables are considered to create groups of individuals with homogeneous behavior (typology by Kohonen maps, clustering, etc.).
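The unsupervised side can be illustrated with a minimal one-dimensional Kohonen-style training loop; the data, map size and learning rate are illustrative, and for brevity only the winning unit is updated (a full self-organizing map would also pull its neighbours).

```python
import random

# Minimal sketch of Kohonen-style competitive learning in 1-D:
# each sample attracts its best-matching unit.

def train_som(data, n_units=4, lr=0.5, epochs=20, seed=0):
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # best matching unit: the weight closest to the sample
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            # move the winner toward the sample (neighbours omitted here)
            weights[bmu] += lr * (x - weights[bmu])
    return sorted(weights)

data = [0.05, 0.1, 0.5, 0.55, 0.9, 0.95]
units = train_som(data)
print(units)  # the units settle near the groups of similar values
```

Grouping rows of the P_1 – P_2 database this way is what produces clusters such as "P_1 – P_2 same order" above.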
42
Future developments: Data Mining with CERN. Outlooks: accuracy comparison of asymptotic models; choice of a given accuracy order; accuracy comparison of numerical methods; curvature of the trajectories; non-relativistic beams; etc.
43
Thank you!