INFSO-RI-508833 Enabling Grids for E-sciencE Fusion Status Report Francisco Castejón CIEMAT. Madrid, Spain.

Outline
– Strategy.
– Fusion deployment and VO setup (the problem of the name).
– Present applications: computing in plasma physics.
– Future applications in the grid.
– Data storage and handling.
– Conclusions.

Strategy
Computing:
– Identify common codes suitable for the GRID (ongoing).
– Adapt codes to the GRID (ongoing).
– Set up the VO (ongoing).
– Production phase.
Data handling:
– Define strategies for data storage and database organisation.
– Protocol for data access.

ITER: making decisions in real time!
– Data acquisition and storage (Grid, supercomputers).
– Data analysis and reduction: artificial intelligence, neural networks, pattern recognition.
– Simulations: large codes on different platforms (Grid, supercomputers).
– Decision for the present/next shot.
One half-hour shot every hour and a half: decisions in real time.

ITER Partners
– Distributed participation.
– Data access.
– Remote control rooms?

International Tokamak (ITPA) and Stellarator (SIA) collaborations
– USA: Alcator C-Mod (MIT), DIII-D (San Diego), NSTX (Princeton), NCSX (Princeton), HSX (Wisconsin), QPS (Oak Ridge) – USA Fusion Grid.
– Russia: T-10 (Kurchatov), Globus (Ioffe), T-11M (TRINITI), L-2 (Gen. Inst. Phys.) – EGEE project.
– EU: JET (EFDA), ASDEX (Ger.), TORE SUPRA (Fra.), MAST (UK), TEXTOR (Ger.), TCV (Switz.), FTU (Italy), W7-X (Ger.), TJ-II (Spain) – EGEE project.
– Japan: JT-60 (Naka), LHD (Toki), CHS (Nagoya), H-J (Kyoto) – GRID project?
– China, Brazil, Korea, India: KSTAR (Korea), TCABR (Bra.), HT-7 (China), U2A (China), SST-1 (India) – EGEE project.

PARTNERS and Resources for the VO
– SW Federation: CIEMAT, BIFI, UCM, INTA (Spain).
– Kurchatov Institute (Russia).
– Culham Laboratory – UKAEA (UK).
– KISTI (South Korea).
– ENEA (Italy).
– CEA-Cadarache (France).
– …
Experience in using and developing fusion applications; experience in porting applications and developing Grid technologies.
– Connection with EELA (some fusion partners: Brazil, Mexico, Argentina).
– Needed: bring in IPP-Max Planck (Germany) and other EFDA Associations.
– Also needed: contacts with the USA, China, Japan, …

VO Deployment
– Present: CIEMAT: 27 KSpecInts; BIFI: 8 KSpecInts; INTA: 6 KSpecInts. Resource Broker at BIFI (Spain). VO Manager: I. Campos (BIFI, Spain). http://www-fusion.ciemat.es/collaboration/egee/
– Within less than 6 months: JET: 38 KSpecInts; BIFI: 32 KSpecInts; CEA-Cadarache? KISTI? INTA? ENEA?
– Beginning of 2007: JET: 32 additional cores; BIFI: 32 additional cores; CIEMAT? CEA-Cadarache? (second phase already committed).

VO Deployment: the problem of the name
The Russian Grid has adopted the same VO name, "fusion", as we have. Jobs sent through our resource broker end up on that Grid, so our VO deployment is hindered. They should change the name in the short term (~1 week); a suitable name would be Fusion-RDIG. Otherwise we have to change our name, with consequences for the Russian certificates.

COMPUTING in the GRID: Present Applications
– Applications with distributed calculations: Monte Carlo, separate estimates, …
– Multiple ray tracing: e.g. TRUBA.
– Stellarator optimisation: VMEC.
– Transport and kinetic theory: Monte Carlo codes.
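As an illustration of how such embarrassingly parallel calculations map onto the EGEE middleware, the sketch below writes one job description (JDL) per independent case and submits each with the LCG-2 command-line tools. The JDL attribute names and the edg-job-submit command are the standard LCG-2 ones, but the executable, file names and seed handling are placeholders, not the actual production scripts.

```python
#!/usr/bin/env python
# Minimal sketch: farm N independent Monte Carlo cases onto the grid,
# one LCG-2 job per random seed.  Executable name and file names are
# placeholders; only the JDL attributes and the submit command are standard.
import subprocess

N_JOBS = 100                      # number of independent cases (illustrative)
EXECUTABLE = "mc_transport.sh"    # hypothetical wrapper around the MC code

for seed in range(N_JOBS):
    jdl = f"""
Executable    = "{EXECUTABLE}";
Arguments     = "--seed {seed}";
StdOutput     = "std.out";
StdError      = "std.err";
InputSandbox  = {{"{EXECUTABLE}", "input.dat"}};
OutputSandbox = {{"std.out", "std.err", "result_{seed}.dat"}};
"""
    jdl_file = f"case_{seed}.jdl"
    with open(jdl_file, "w") as f:
        f.write(jdl)
    # LCG-2 submission command (gLite would use glite-wms-job-submit instead)
    subprocess.run(["edg-job-submit", jdl_file], check=True)
```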

Multiple Ray Tracing: TRUBA
– Single ray (1 PE): Hamiltonian ray tracing equations.
– Beam simulation: a bunch of rays with the beam waist far from the critical layer (… rays), or a bunch of rays with the beam waist close to the critical layer (… rays) × (… wave numbers) ≈ 10^5 rays: a GRID problem.

TRUBA: Multiple Ray Tracing
TRUBA for EBW:
– Real geometry in TJ-II, coming from a supercomputer (VMEC).
– A single non-relativistic ray takes about 18'.
– A single relativistic ray takes about 40'.
– Some problems with the geometry libraries.
– Ported to the grid using GridWay (for the moment).
See: J. L. Vázquez-Poletti, "Massive Ray Tracing in Fusion Plasmas on EGEE", User Forum, 2006.
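For readers unfamiliar with the single-ray computation, here is a minimal sketch of Hamiltonian ray tracing for a toy unmagnetised O-mode dispersion relation in a 1D linear density ramp. TRUBA itself integrates the EBW dispersion in the full 3D TJ-II geometry, so the dispersion relation, profile and numbers below are placeholders chosen only to show the structure of the ray equations.

```python
# Minimal sketch of Hamiltonian ray tracing for one ray (1 PE).
# Toy dispersion relation D = c^2 k^2 + wpe^2(x) - w^2 (unmagnetised O-mode)
# with a linear density ramp; all profiles and parameters are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

c = 3.0e8                 # speed of light [m/s]
w = 2 * np.pi * 28e9      # wave frequency [rad/s] (placeholder, ~28 GHz)
L = 0.2                   # density scale length [m]

def wpe2(x):
    """Plasma frequency squared: linear ramp reaching cutoff at x = L."""
    return w**2 * np.clip(x, 0.0, None) / L

def dwpe2_dx(x):
    return w**2 / L if x > 0.0 else 0.0

def ray_rhs(t, y):
    """Ray equations dx/dt = -D_k/D_w, dk/dt = D_x/D_w for D(x,k,w) = 0."""
    x, k = y
    D_w = -2.0 * w
    D_k = 2.0 * c**2 * k
    D_x = dwpe2_dx(x)
    return [-D_k / D_w, D_x / D_w]

# launch from the vacuum side towards increasing density
k0 = w / c
sol = solve_ivp(ray_rhs, (0.0, 5e-9), [-0.05, k0], max_step=1e-12)
print("turning point x_max ~", sol.y[0].max(), "m (cutoff expected near x = L)")
```

Each such integration is independent of every other ray, which is what makes the ~10^5-ray beam simulation a natural grid workload.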

Optimised Stellarators: QPS and NCSX (supercomputer optimisation).

Stellarator optimisation in the Grid
– Many different magnetic configurations are in operation nowadays: optimisation is a necessity, based on knowledge of stellarator physics.
– The plasma configuration may be optimised numerically by variation of the field parameters: 120 Fourier parameters are varied with VMEC (Variational Moments Equilibrium Code).
– Every variant is computed on a separate processor (~10').

VMEC on the Kurchatov GRID
– Run on the LCG-2-based Russian Data Intensive Grid (RDIG) consortium resources.
– About … cases computed (about … were not VMEC-computable, i.e. no equilibrium). Each case took about 20 minutes, with up to 70 simultaneous jobs running on the grid.
– A genetic algorithm is used to select the optimum case.
See: V. Voznesensky, "Genetic Optimisations in Grid", User Forum, 2006.
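A minimal sketch of such an evolutionary loop is shown below, assuming the boundary Fourier coefficients are packed into a flat array and that each candidate is scored by an independent VMEC run farmed out as a grid job. Here the VMEC run is replaced by a dummy cost function, crossover is omitted, and the population size, mutation amplitude and fitness targets are illustrative assumptions rather than the Kurchatov implementation.

```python
# Minimal sketch of a genetic-style optimisation over stellarator boundary
# Fourier coefficients.  In the real workflow each `fitness` evaluation is
# an independent VMEC run dispatched as a separate grid job; here it is a
# placeholder, and all GA parameters are illustrative only.
import numpy as np

N_COEFF = 120        # Fourier parameters being varied (as on the slide)
POP, GEN = 70, 50    # ~70 candidates can run simultaneously on the grid
rng = np.random.default_rng(0)

def fitness(coeffs):
    """Placeholder for a VMEC run plus physics targets (neoclassical
    transport, stability, ...).  Lower is better."""
    return float(np.sum(coeffs**2))          # dummy quadratic cost

population = rng.normal(0.0, 0.1, size=(POP, N_COEFF))
for generation in range(GEN):
    scores = np.array([fitness(c) for c in population])   # grid-farmed step
    parents = population[np.argsort(scores)[:POP // 2]]   # selection
    children = parents[rng.integers(0, len(parents), POP - len(parents))]
    children = children + rng.normal(0.0, 0.02, children.shape)  # mutation
    population = np.vstack([parents, children])

best = population[np.argmin([fitness(c) for c in population])]
print("best cost:", fitness(best))
```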

Kinetic Transport
– Follow independent particle orbits with Monte Carlo techniques.
– Particles are distributed according to the experimental density and ion temperature profiles (Maxwellian distribution function).
– A problem well suited to cluster and GRID technologies.
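The Monte Carlo loading step can be pictured with the sketch below: radial positions are sampled with a weight proportional to the density profile, and velocities are drawn from a Maxwellian at the local ion temperature. The analytic profiles, particle count and species mass are placeholders, not the experimental TJ-II profiles used in the real calculation.

```python
# Minimal sketch of Monte Carlo particle loading: radial positions weighted
# by a model density profile, velocities from a local Maxwellian.  Profile
# shapes and parameters are placeholders for the experimental TJ-II data.
import numpy as np

M_ION = 1.67e-27      # proton mass [kg]
E_CHARGE = 1.602e-19  # elementary charge [C], converts eV -> J
N_PART = 10_000       # real runs need ~1e7 particles for statistics

def n_profile(rho):   # model density profile vs normalised radius rho in [0, 1]
    return 1.0 - rho**2

def Ti_profile(rho):  # model ion temperature profile [eV]
    return 100.0 * (1.0 - rho**2) + 10.0

rng = np.random.default_rng(1)

# sample rho by rejection against the density-weighted radial measure ~ rho * n(rho)
rho = np.empty(N_PART)
filled = 0
while filled < N_PART:
    cand = rng.uniform(0.0, 1.0, N_PART)
    keep = cand[rng.uniform(0.0, 1.0, N_PART) < cand * n_profile(cand)]
    take = min(len(keep), N_PART - filled)
    rho[filled:filled + take] = keep[:take]
    filled += take

# Maxwellian velocities at the local ion temperature
vth = np.sqrt(E_CHARGE * Ti_profile(rho) / M_ION)       # thermal speed per particle
v = rng.normal(0.0, 1.0, (N_PART, 3)) * vth[:, None]    # 3 velocity components
print("mean |v| [m/s]:", np.linalg.norm(v, axis=1).mean())
```

Each loaded particle is then followed along its orbit independently, which is why the problem splits cleanly across cluster or grid nodes.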

Kinetic Transport
– Example of an orbit in the real 3D TJ-II geometry (single PE): ~1 GBy of data, 24 h × 512 PEs.
– Distribution function of parallel velocity at a given position (data analysis).

Kinetic Transport
– No collisions: 0.5 ms of trajectory takes 1 s of CPU.
– With collisions: 1 ms of trajectory takes 4 s of CPU.
– Particle life: … ms; a single particle takes ~10 min.
– Necessary statistics for TJ-II: 10^7 particles.

COMPUTING in the GRID: Future Applications
– EDGE2D application for tokamaks.
– Transport analysis of multiple shots (typically 10^4 shots), or predictive transport with multiple models, e.g. ASTRA: CIEMAT (Spa) + IPP (Ger) + Kurchatov (Rus) + EFDA (EU) + …
– Neutral particle dynamics: EIRENE: CIEMAT (Spa) + IPP (Ger).

JET – flagship of worldwide fusion: EDGE2D / equilibrium code.

Cross-section of present EU D-shaped tokamaks compared to the ITER project.
EDGE2D: determine the plasma shape from measurements: plasma current, pressure, magnetic field, …
– The EDGE2D code solves the 2D fluid equations for the conservation of energy, momentum and particles in the plasma edge region.
– Ions, electrons and all ionisation stages of multiple species are considered.
– The interaction with the vessel walls is simulated by coupling to Monte Carlo codes, which provide the neutral and impurity sources.
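For orientation, the conservation laws referred to above have the generic fluid form sketched below for each species $a$ (particle, parallel momentum and energy balance). This is a schematic form only; the actual EDGE2D equations include multi-species coupling terms, drifts and the field-aligned geometry of the edge region, with the sources on the right-hand side supplied by the coupled Monte Carlo neutral/impurity codes.

```latex
\begin{align}
  \frac{\partial n_a}{\partial t} + \nabla\cdot\left(n_a \mathbf{v}_a\right) &= S_a^{\mathrm{ion}}, \\
  \frac{\partial}{\partial t}\left(m_a n_a v_{\parallel a}\right)
    + \nabla\cdot\left(m_a n_a v_{\parallel a}\,\mathbf{v}_a\right)
    &= -\nabla_{\parallel} p_a + R_{\parallel a} + S_a^{\mathrm{mom}}, \\
  \frac{\partial}{\partial t}\left(\tfrac{3}{2} n_a T_a\right)
    + \nabla\cdot\left(\tfrac{5}{2} n_a T_a \mathbf{v}_a + \mathbf{q}_a\right)
    &= Q_a + S_a^{E}.
\end{align}
```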

Massive Transport Calculations
For instance: enhanced heat confinement in TJ-II – lower heat diffusivity for low electron density and high absorbed power density. A different case runs on every PE.
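The "one case per PE" organisation amounts to enumerating a Cartesian product of scan parameters and handing each combination to an independent job, as in the hedged sketch below. The parameter names, ranges and the transport-code wrapper are placeholders, not the actual ASTRA/TJ-II setup.

```python
# Minimal sketch of a massive transport scan: build the Cartesian product of
# scan parameters and hand one combination to each processing element.
# Parameter ranges and the solver wrapper `run_transport_case` are placeholders.
from itertools import product

densities = [0.5e19, 1.0e19, 2.0e19]      # line-averaged electron density [m^-3]
powers = [200e3, 400e3, 600e3]            # absorbed heating power [W]
models = ["model_A", "model_B"]           # hypothetical transport models

cases = list(product(densities, powers, models))
print(f"{len(cases)} independent cases, one per PE")

def run_transport_case(case_id, n_e, p_abs, model):
    """Placeholder: write the input file and launch one transport run."""
    print(f"case {case_id}: n_e={n_e:.1e}, P={p_abs / 1e3:.0f} kW, {model}")

for case_id, (n_e, p_abs, model) in enumerate(cases):
    run_transport_case(case_id, n_e, p_abs, model)
```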

EIRENE Code
Trajectory of a He atom in TJ-II (vertical and horizontal projections). It starts at the green point and is absorbed in the plasma by an ionisation process. The real 3D geometry of the TJ-II vacuum chamber is considered.

DATA HANDLING
Storage: large data flux: 10^4 sensors × … kHz sampling = 1-10 GBy per second of raw data × 0.5 h ≈ 3 TBy per shot in ITER, every 1.5 h.
Data access and sharing in large cooperative experiments – a strategy to be defined:
– Database with distributed access, or distributed storage?
– The evolution of technologies until ITER operates.
If distributed storage: we need a standard representation for experimental data in the LCG-2/gLite CE middleware.
Storage should allow some basic processing: neural networks, clustering, …
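As a back-of-the-envelope check of these figures (the channel count is taken from the slide, while the sampling rate and sample size are illustrative assumptions):

```latex
\[
  10^{4}\ \text{channels} \times 10^{5}\ \tfrac{\text{samples}}{\text{s}}
  \times 2\ \tfrac{\text{bytes}}{\text{sample}} \approx 2\ \tfrac{\text{GB}}{\text{s}},
  \qquad
  2\ \tfrac{\text{GB}}{\text{s}} \times 1800\ \text{s} \approx 3.6\ \text{TB per shot},
\]
```

which is consistent with the 1-10 GBy/s and few-TBy-per-shot orders of magnitude quoted above.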

DAS Tools: Visualisation, DAQ and Processing
Grid-aware protocols to be added for:
– Data navigation and mining.
– Data exchange.
– Data search.
– Event catching.

Conclusions
– The Fusion VO on the grid is almost ready (pending the problem of the name).
– An effort is being made to bring in more partners, inside and outside EFDA.
– Several applications are already running on the grid, and future grid applications have been identified.
– A deep discussion and investigation of the handling of large amounts of data is needed; a cookbook for data handling on the grid is desirable.