A Grid fusion code for the Drift Kinetic Equation solver
A.J. Rubio-Montero, E. Montes, M. Rodríguez, F. Castejón, R. Mayo
CIEMAT. Avda. Complutense, 22, Madrid (Spain)

Abstract
Several Drift Kinetic Equation solver (DKES) codes are used to simulate neoclassical transport in stellarator and tokamak fusion reactors. These applications offer an approach closer to the analytic solution than Monte Carlo methods, but they usually require a large amount of memory, which makes their porting to the Grid difficult. This work describes the gridification of a DKES code widely accepted by the fusion community and evaluates the resulting performance and portability gains. The tests and results have been applied to the TJ-II Flexible Heliac at the National Fusion Laboratory (Spain), using mainly the EELA-2 infrastructure.

Impact, objectives and Grid added value
Calculating neoclassical transport is a fundamental part of the complete simulation cycle of plasma behaviour inside fusion reactors. It is also commonly used to determine the energy flux to the plasma wall for a given magnetic field and, therefore, the efficiency of a given coil configuration before it is built (e.g. ITER). Neoclassical transport can be calculated by means of Monte Carlo methods, which have been successfully deployed on the Grid to solve a wide range of scientific challenges in many disciplines, physics among them; for this specific case, however, they can only offer an estimation of the perpendicular diffusive part of the transport matrix. DKE solvers, on the other hand, provide correct quantitative results for the complete transport matrix, at the cost of high computation time and memory consumption, and are usually executed on shared-memory computers. The objective is therefore to offer scientists a new framework, called DKEsG (Drift Kinetic Equation solver for Grids), to easily retrieve all the neoclassical transport coefficients by:
- Reducing execution time and increasing precision through the use of heterogeneous resources from different Grid infrastructures.
- Filling a database with the complete "configuration – transport matrix – state" for several fusion reactors in order to:
  - Bring results to the fusion community
  - Avoid performing the same simulation twice and ease re-analysis
  - Ease interaction with other fusion applications to build complex workflows

Original variational DKES code [1]
- Applied to TJ-II, HSX, CHS, LHD, ATF, VII-AS, VII-X, NCSX, QPS, ITER and other devices.
- Implemented for now-obsolete shared-memory hosts (CRAY-1, CRAY X-MP) and modified to run on SGI Origin 3000 MIPS-based machines.
- Requires continuous manual configuration, code recompilation and the handling of a large number of output data files.
- Calculates only diffusion coefficients, for a given number of collisionality/energy and radial electric field values.

Current status [2]
- All the code in a nutshell
- Algorithms from the DKES software brought up to date and extended
- Ported and optimised for Linux x86-32, x86-64 and ia-64 platforms
- Only one binary per architecture and algorithm
- No software has to be installed on the resources
- Grid-enabled: runs on Globus- and gLite-based Grids, tested on EGEE-III, EELA-2 and regional infrastructures (a minimal submission sketch is shown below).
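The DKEsG tasks behave as ordinary single-processor Grid jobs, so they can be handed to a DRMAA-compliant metascheduler such as GridWay. The following is a minimal sketch, assuming the standard org.ggf.drmaa Java binding (some older bindings take a String[] instead of a List for the arguments); the binary name dkesg.x86_64, its input file and the output file names are hypothetical placeholders, not the actual DKEsG interface.

```java
import java.util.Arrays;
import org.ggf.drmaa.DrmaaException;
import org.ggf.drmaa.JobInfo;
import org.ggf.drmaa.JobTemplate;
import org.ggf.drmaa.Session;
import org.ggf.drmaa.SessionFactory;

/** Minimal sketch: submit one DKEsG-like task through a DRMAA-compliant
 *  metascheduler (e.g. GridWay) and wait for its completion.
 *  Binary, argument and file names are hypothetical. */
public class SubmitDkesTask {
    public static void main(String[] args) throws DrmaaException {
        Session session = SessionFactory.getFactory().getSession();
        session.init(null);                          // attach to the default DRM system

        JobTemplate jt = session.createJobTemplate();
        jt.setRemoteCommand("dkesg.x86_64");         // hypothetical per-architecture binary
        jt.setArgs(Arrays.asList("input_tj2.dat"));  // hypothetical input file
        jt.setJobName("dkesg_task");
        jt.setOutputPath(":dkesg_task.out");         // ':' prefix = path on the submit host
        jt.setErrorPath(":dkesg_task.err");

        String jobId = session.runJob(jt);
        JobInfo info = session.wait(jobId, Session.TIMEOUT_WAIT_FOREVER);
        System.out.println("Job " + jobId + " finished, exit status "
                           + (info.hasExited() ? info.getExitStatus() : -1));

        session.deleteJobTemplate(jt);
        session.exit();
    }
}
```

Because the template only names an executable shipped with the job, no DKEsG-specific software has to be pre-installed on the remote resources, which matches the deployment model described above.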
Future
- High-level implementation on the Java DRMAA standard
- Suitable to run on Globus/gLite Grids through GridWay, or on local clusters through PBS, SGE or LSF
- Integration into complex application-level workflows and databases
- Independent GUI
- A new module to calculate transport coefficients has been developed
- Fill the coefficients database and offer it as a virtual observatory

References
[1] W.I. van Rij and S.P. Hirshman, "Variational bounds for transport coefficients in three-dimensional toroidal plasmas", Phys. Fluids B 1 (3) (1989).
[2] A.J. Rubio-Montero, E. Montes, F. Castejón, L.A. Flores, M. Rodríguez, R. Mayo, "Calculations of Neoclassical Transport on the Grid", Proceedings of the First EELA-2 Conference, Vol. 1.

Some tests calculating diffusion coefficients
- Task description:
  - 5 collisionality variations per task
  - Fixed electric field
  - 100 Legendre and 343 Fourier polynomials
  - ~450 seconds on a Xeon (2006) at 3.2 GHz
  - ~285 seconds on an Itanium 2 (2005) at 1.5 GHz
- 1 ms of simulation requires more than 10^6 jobs; at ~450 s per job this is about 4.5×10^8 s, i.e. more than 14 years on a single Xeon processor.
- 10^3 jobs experimentally took 4 h 20 min on a heterogeneous Grid with only 60 available processors, ~34% less time than on the 50 processors of the SGI Origin (see the bulk-submission sketch below).

The neoclassical transport coefficients for every available magnetic surface, temperature and density are indexed in a public database by fusion device.

[Figure: diffusion coefficients calculated by DKES as a function of collisionality, for a single electric field.]

[Table: jobs completed and performance (jobs per second) on the SGI 3800 (50 MPI processors at 600 MHz), the CIEMAT internal Grid (limited to 25 slots among 5 heterogeneous resources) and EELA + internal + others (limited to 60 slots among 12 available resources); the reported relative performance figures are 36.8% and 105.2%.]
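Because each test task differs only in the collisionality values it scans, the 10^3-job experiment maps naturally onto a DRMAA parametric (bulk) submission. The sketch below extends the previous example under the same assumptions about the org.ggf.drmaa binding; the wrapper name dkesg_wrapper.sh and the convention that the bulk index selects the collisionality set are hypothetical.

```java
import java.util.Arrays;
import java.util.List;
import org.ggf.drmaa.DrmaaException;
import org.ggf.drmaa.JobTemplate;
import org.ggf.drmaa.Session;
import org.ggf.drmaa.SessionFactory;

/** Minimal sketch: launch a parametric sweep of DKEsG-like tasks
 *  (e.g. 1000 collisionality scans) as a DRMAA bulk job.
 *  Command, arguments and file-name scheme are hypothetical. */
public class SubmitDkesSweep {
    public static void main(String[] args) throws DrmaaException {
        Session session = SessionFactory.getFactory().getSession();
        session.init(null);

        JobTemplate jt = session.createJobTemplate();
        jt.setRemoteCommand("dkesg_wrapper.sh");  // hypothetical wrapper selecting the binary
        // The DRMAA placeholder is replaced by the bulk index (1..1000) at run time;
        // the wrapper is assumed to map that index to a set of collisionality values.
        jt.setArgs(Arrays.asList(JobTemplate.PARAMETRIC_INDEX));
        jt.setOutputPath(":dkesg." + JobTemplate.PARAMETRIC_INDEX + ".out");
        jt.setErrorPath(":dkesg." + JobTemplate.PARAMETRIC_INDEX + ".err");

        // Submit 1000 independent tasks and wait until all of them finish.
        List jobIds = session.runBulkJobs(jt, 1, 1000, 1);
        session.synchronize(jobIds, Session.TIMEOUT_WAIT_FOREVER, true);

        session.deleteJobTemplate(jt);
        session.exit();
    }
}
```

Splitting the sweep into independent single-processor jobs in this way is what allows DKEsG to exploit opportunistic slots on heterogeneous Grid resources instead of a dedicated shared-memory machine.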