OPTIMIZATION OF DIESEL INJECTION USING GRID COMPUTING
Miguel Caballer, Universidad Politécnica de Valencia

INTRODUCTION
CaviGrid studies the fluid dynamics of fuel injection in turbo diesel engines through multi-parametric studies of the geometry of the orifices of the cavity.
Impact:
– Reduction of fuel consumption, by making much wider sets of simulation configurations tractable.

INTRODUCTION
Individual simulation features:
– CPU time: ≈ 4-6 days
– Input file: ≈ MB
– Memory: RAM ≈ 2 GB, disk ≈ 10 GB

OPENFOAM
CaviGrid uses the OpenFOAM tool:
– Open-source tool.
– Widely used Computational Fluid Dynamics (CFD) software package.
– Supports parallel execution: MPI, OpenMP.
– Non-trivial installation: many dependencies, and a long and difficult compilation.
– Issues with different versions of GCC.

GRID SOLUTION
The Grid is used in the CaviGrid application to launch a set of multi-parametric simulations; the same scheme can be used with any other OpenFOAM application.
OpenFOAM saves the simulation results every specified number of time steps (see the sketch below):
– These saved steps can be used as checkpoint data.
– The results can be read before the whole simulation ends.
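The checkpoint mechanism relies on OpenFOAM's output layout: each saved step is written to a directory named after the simulation time, so the latest restart point is simply the largest numeric directory in the case folder. A minimal Python sketch (the helper name is ours, not part of CaviGrid):

    import os

    def latest_time_dir(case_dir):
        # Return the most recent OpenFOAM time directory (e.g. "0.005"),
        # or None if the case has produced no output yet.
        times = []
        for name in os.listdir(case_dir):
            if os.path.isdir(os.path.join(case_dir, name)):
                try:
                    times.append((float(name), name))
                except ValueError:
                    pass  # skip "constant", "system" and other non-time entries
        return max(times)[1] if times else None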

GRID SCHEME
Two steps:
– Input file generation. Specific to each application:
  – Create the OpenFOAM input files and upload them to an SE.
  – Modify the relevant parameter for each experiment; CaviGrid evaluates the influence of the position of the needle on the internal flow for different levels of injection pressure and a set of discharge pressures (a generation sketch follows this list).
– Launch the set of jobs. General for any OpenFOAM application.
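A minimal sketch of the input-generation step, with hypothetical parameter values (the real generator writes complete OpenFOAM case files before uploading them to the SE); it enumerates every combination of geometry, injection pressure level and discharge pressure:

    # Hypothetical values; the real experiment defines geometries and pressures.
    geometries = ["geo1", "geo2", "geo3", "geo4", "geo5"]
    discharge_sets = {
        "injection_level_1": [1.0e7 + i * 1.0e6 for i in range(15)],  # 15 pressures
        "injection_level_2": [5.0e6 + i * 1.0e6 for i in range(4)],   #  4 pressures
    }

    cases = [
        {"geometry": geo, "injection": inj, "discharge": p}
        for geo in geometries
        for inj, pressures in discharge_sets.items()
        for p in pressures
    ]
    print(len(cases))  # 5 * (15 + 4) = 95 cases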

OPENFOAM BINARIES
A pre-packaged OpenFOAM file is used:
– About 100 MB.
– Currently stored in an SE, but it can be stored in any accessible location.
– Contains all the applications and libraries needed, so it can be executed on (almost) any WN (see the unpacking sketch below).
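On the WN the package only has to be fetched and unpacked. A minimal sketch, assuming the package is a gzipped tar file that has already been copied from the SE (file and directory names are illustrative):

    import tarfile

    # Unpack the self-contained ~100 MB OpenFOAM package into the working area.
    with tarfile.open("openfoam-pack.tar.gz") as tar:
        tar.extractall(path="openfoam")
    # The package carries all needed binaries and libraries, so nothing has to
    # be installed or compiled on the worker node itself.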

GRID JOB
1. Download OpenFOAM and the input file.
2. If available, also download the last simulation step stored in the SE:
– In case of failure, the job restarts from the last available checkpoint.
– The user can access the SE to download the results available so far.
3. Launch the OpenFOAM solver in the background.
4. Monitor the application and the simulation steps saved to disk.
5. Package each saved simulation step and upload it to the SE, in the same directory as the initial file (steps 3-5 are sketched below).
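A Python sketch of steps 3-5, under stated assumptions: the solver path, case name and upload helper are illustrative, and the real job is a grid wrapper script rather than this exact code:

    import os, subprocess, tarfile, time

    def saved_steps(case_dir):
        # Numeric time directories written so far (cf. latest_time_dir above).
        steps = set()
        for name in os.listdir(case_dir):
            if os.path.isdir(os.path.join(case_dir, name)):
                try:
                    float(name)
                    steps.add(name)
                except ValueError:
                    pass
        return steps

    case = "case"
    solver = subprocess.Popen(["openfoam/bin/mySolver", "-case", case])  # step 3
    uploaded = saved_steps(case)  # steps restored from the checkpoint, if any

    while solver.poll() is None:  # step 4: monitor the running application
        time.sleep(60)
        for step in sorted(saved_steps(case) - uploaded):
            pkg = "step-%s.tar.gz" % step
            with tarfile.open(pkg, "w:gz") as tar:  # step 5: package the step
                tar.add(os.path.join(case, step))
            # upload_to_se(pkg): copy next to the initial file on the SE with
            # the site's data-management client (illustrative placeholder).
            uploaded.add(step)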

CAVIGRID SIMULATIONS
Sequential multi-parametric simulations:
– 5 different geometries.
– 2 levels of injection pressure.
– 2 sets of discharge pressures:
  – A first set with 15 pressures for the first level of injection pressure.
  – A second set with 4 pressures for the second level of injection pressure.
– Total number of cases: 5 × (15 + 4) = 95.
– More than one year of CPU time.
– Total output files: > 50 GB.

CAVIGRID WMS SIMULATION TIMES
Total response time for all the simulations: ≈ 22 days.
Problems found:
– Slow CEs, or CEs with too many queued jobs:
  – The WMS server does not make a good CE selection.
  – Solution: use a white list of CEs.
– A job monitor was developed (sketched below):
  – A script run from the UI.
  – It cancels jobs that have not started running for a long period of time and resubmits them.
– Network issues.
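A sketch of what such a monitor can look like, assuming the standard gLite CLI commands (glite-wms-job-status, glite-wms-job-cancel) and the usual "Current Status:" line in their output; the real script would also track for how long a job has been queued:

    import subprocess

    QUEUED = ("Submitted", "Waiting", "Ready", "Scheduled")

    def job_status(job_id):
        # Parse the "Current Status:" line printed by glite-wms-job-status.
        out = subprocess.run(["glite-wms-job-status", job_id],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Current Status:" in line:
                return line.split(":", 1)[1].strip()
        return "Unknown"

    def cancel_if_stuck(job_id):
        # Cancel jobs still queued (e.g. on a slow CE); the caller resubmits.
        status = job_status(job_id)
        if status and status.split()[0] in QUEUED:
            subprocess.run(["glite-wms-job-cancel", "--noint", job_id])
            return True
        return False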

CAVIGRID IN DIRAC
Access to more resources:
– Homogeneous access to different kinds of resources: Grid and Cloud.
– CaviGrid has a model that fits an IaaS better: it can use pre-configured VMIs with OpenFOAM installed.
Better workload management:
– The job monitor will no longer be necessary.
Pilot jobs:
– CaviGrid does not fit well with pilot jobs: its jobs are too long (4-6 days).
– Other OpenFOAM applications with smaller jobs can fit pilot jobs, which make it possible to reuse downloaded data across jobs:
  – OpenFOAM binaries.
  – Input data files.
(A DIRAC submission sketch follows.)
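For reference, submitting one CaviGrid case through the DIRAC Python API could look roughly like this (the wrapper script name and CPU-time limit are illustrative; a DIRAC client installation and a valid proxy are assumed):

    from DIRAC.Core.Base import Script
    Script.parseCommandLine()  # initialize the DIRAC client environment

    from DIRAC.Interfaces.API.Dirac import Dirac
    from DIRAC.Interfaces.API.Job import Job

    job = Job()
    job.setName("cavigrid-case-042")
    job.setExecutable("run_openfoam.sh")   # wrapper performing the job steps above
    job.setInputSandbox(["run_openfoam.sh"])
    job.setCPUTime(6 * 24 * 3600)          # up to 6 days of CPU time

    print(Dirac().submitJob(job))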