
GRID Applications for the LHC ALICE Experiment
Máté Ferenc Nagy, Eötvös University applied-physics student
Supervisor: Gergely Barnaföldi, MTA KFKI RMKI, Hungarian ALICE Group

Overview
Part I.: The CERN GRID – the GRID; GRID implementation: the middleware; AliEn, the ALICE GRID; RMKI resources; a job's life; ROOT and AliROOT.
Part II.: GP-GPU Applications in Physics – GP-GPU; RMKI test cluster; architecture; summary; animations / questions.

GRID What is a GRID? A distributed system: a group of interconnected computers that share their resources through a common interface. Why would we need such a thing? "Anything that can be calculated on paper has already been done." The demand for computing power is enormous, and a centralized solution would be too expensive.

GRID, the CERN way The solution? Within the LCG-EGEE (LHC Computing Grid – Enabling Grids for E-sciencE) programme, the gLite middleware has been developed (see the "Where is the middleware?" slide). What is considered a resource? CPU and HDD.

Where is the middleware? [Diagram: three machines, each drawn as a hardware / operating system / software stack; the middleware forms a layer between the operating systems and the application software, spanning all machines.]

The role of the middleware It distributes the resources available on a machine and advertises them (free vs. used). It schedules incoming jobs for execution and reports their status. It serves the jobs with whatever package or library dependencies they might have.

Middleware – à la ALICE gLite: the background service; all higher layers give commands to this layer. AliEn (Alice Environment): a module extending the regular gLite services (faster, stronger, better and, not least, user-friendly). VOBOX (Virtual Organisation box): a general-purpose element in the middleware. It is not mandatory to install a VOBOX on a site, and each experiment uses it to different ends. In ALICE, AliEn commands are given from here (e.g. job submission, result downloading, file operations, ...).

Storage – à la ALICE The handling of storage does not follow the standard procedure: gLite supports many storage types and topologies, but not xrootd, the ALICE standard. File operations therefore cannot be invoked through the underlying gLite middleware; they have to be handled by the VOBOX. (This is where its foreseen generality comes in handy.)

Storage and computing elements at RMKI SE (Storage Element): two machines with 20 TB of disk space each; per machine: 4× RAID 5+1 arrays built from 1 TB HDDs. CE (Computing Element): mostly Intel Xeon processors with the standard 2 GB RAM/core; 13 dual-core and 100 quad-core machines are brought to work, time-shared with the CMS experiment.

A job's life [Diagram: sites "A", "B" and "C", each running gLite with a VOBOX, a CE and an SE; the sites report to the CERN BDII database, the SEs to an xrootd global redirector; a Job Agent (JA) executes the job at a site.] The job description states WHAT we run (it contains the executable), ON what (it contains the necessary input files) and WHERE we run (optional restrictions on site properties).
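In AliEn such a description is written as a JDL file. A minimal, hypothetical sketch (the field values are illustrative, not taken from the talk):

Executable   = "analysis.sh";                             // WHAT we run
InputFile    = {"LF:/alice/some/path/input.root"};        // ON what
Packages     = {"VO_ALICE@AliRoot::v4-16-Rev-08"};        // dependencies the middleware must serve
Requirements = member(other.GridPartitions, "Analysis");  // WHERE: optional site restrictions

Submitted from the VOBOX, such a file ends up in the AliEn task queue, from which a matching Job Agent pulls it to a suitable site.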

ROOT and AliROOT ROOT: a C++ based framework, meant for 3-dimensional design work with capable mathematical support. AliROOT: every version contains the up-to-date schematics of the ALICE detector; the physical behaviour of materials is simulated by Geant.
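To give a flavour of the framework, a minimal ROOT macro sketch (the file and histogram names are illustrative):

// hist.C – run with: root -l hist.C
// Fills a histogram with 10k Gaussian samples and draws it.
void hist() {
    TH1F *h = new TH1F("h", "Toy distribution;x;entries", 100, -5., 5.);
    h->FillRandom("gaus", 10000);   // samples ROOT's built-in Gaussian
    h->Draw();
}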

PART II. GP-GPU applications in the CERN GRID

GP-GPU Stands for: General-Purpose Graphics Processing Unit. Advantages: an X-fold speed increase is achievable; better performance vs. cost and wattage ratios. Recent events: OpenCL (Open Computing Language); DirectX 11 (Compute Shader 5.0). Drawbacks: longer development time; not all problems can be parallelized.
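To make the parallelization point concrete, a minimal CUDA sketch (illustrative, not from the talk): each GPU thread processes one array element, the kind of data-parallel pattern GPUs excel at.

#include <cstdio>
#include <cuda_runtime.h>

// SAXPY: y[i] = a*x[i] + y[i]; one GPU thread per array element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;                      // device (GPU) buffers
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);   // 256 threads per block
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);        // expect 5.0 = 3*1 + 2
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}

Compiled with nvcc, this launches about a million threads at once; the equivalent CPU loop would process the elements one (or a few) at a time.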

New Cards Up the Sleeve Both major GPU manufacturers have introduced their new-generation graphics cards, which deliver massive computing power for parallelized algorithms.

Fermi (GT300) Architecture The C2070 Tesla cards will feature 512 CUDA cores, each able to run a single thread of execution. Peak computational power: 1600 GFLOPS SP (single precision), 650 GFLOPS DP (double precision). (1 FLOPS = 1 floating-point operation per second.) For comparison, high-end Intel Xeon processors possess ~130 GFLOPS SP/DP computing power.
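A card's actual capabilities can be queried at runtime; a minimal CUDA sketch using the standard cudaGetDeviceProperties call:

#include <cstdio>
#include <cuda_runtime.h>

// Lists basic properties of every CUDA-capable device in the machine.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, d);
        printf("Device %d: %s\n", d, p.name);
        printf("  multiprocessors: %d, clock: %.0f MHz\n",
               p.multiProcessorCount, p.clockRate / 1000.0);  // clockRate is in kHz
        printf("  global memory: %zu MB\n", p.totalGlobalMem >> 20);
    }
    return 0;
}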

Cypress (RV870) Architecture The AMD ATi Radeon 5870 features 1600 shader processors, each able to run a single thread of execution. Peak computational power: 2700 GFLOPS SP, 500 GFLOPS DP.
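These headline figures follow from unit count, clock rate and operations per cycle; for the Radeon 5870 (assuming the 850 MHz reference clock and a multiply-add counted as 2 FLOPs):

\[
P_{\text{peak}} = N_{\text{units}} \cdot f_{\text{clock}} \cdot \frac{\text{FLOPs}}{\text{cycle}}
= 1600 \cdot 0.85\,\text{GHz} \cdot 2 \approx 2700\ \text{GFLOPS (SP)}
\]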

RMKI GPU test cluster One machine with 3 GTX 295 (dual-GPU) cards. Acquisition of 2 new high-end desktop configurations in rack mounts: Intel Core i7 quad-core 2.66 GHz processors, 12 GB RAM, 3 high-speed (x16 PCI-E) slots for GPUs. One machine will be equipped with AMD ATi Radeon 5970 (dual-GPU) cards for OpenCL computing in the near future; the second will contain Tesla C2070 cards for CUDA/OpenCL.

Summary The RMKI GRID has been operating since 2002; ALICE computations have been running since 2006, and from this year at even larger scales. The xrootd-based, ALICE-dedicated storage has been working since Q1 of this year, fully meeting the requirements of a T2 site. The Scientific Linux 5 migration is nearly finished, leaving us one step from data taking (thanks to Szabolcs Hernáth). Scientific activity: TDK work has just started.

Animations
AMD Havok clothing simulation: a simple Newton equation solver with a high number of interactions.
NVIDIA CUDA fluid dynamics: a Navier–Stokes equation solver on a realistic-size grid.
NVIDIA CS 5.0 ocean rendering: FFT calculations on a high-resolution grid.
NVIDIA CS 5.0 brute-force N-body simulation.
NVIDIA OpenCL O(N·log N) N-body simulation using cut-off distance optimization.
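For a flavour of what the brute-force N-body demo computes, a minimal CUDA kernel sketch (illustrative, not NVIDIA's demo code; the softening term eps2 avoids the singularity at zero separation):

#include <cuda_runtime.h>

// Brute-force O(N^2) gravitational accelerations (G normalized to 1):
// each thread owns one body and sums the pull of all the others.
__global__ void nbody_accel(int n, const float4 *pos,   // xyz position, mass in .w
                            float3 *acc, float eps2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float3 a = make_float3(0.f, 0.f, 0.f);
    for (int j = 0; j < n; ++j) {
        float4 pj = pos[j];
        float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
        float r2 = dx*dx + dy*dy + dz*dz + eps2;   // softened squared distance
        float inv_r = rsqrtf(r2);
        float s = pj.w * inv_r * inv_r * inv_r;    // m_j / r^3
        a.x += dx * s; a.y += dy * s; a.z += dz * s;
    }
    acc[i] = a;
}

Launched as, e.g., nbody_accel<<<(n + 255) / 256, 256>>>(n, d_pos, d_acc, 1e-4f). The O(N·log N) variant in the last demo replaces the inner loop over all bodies with a neighbour search within the cut-off distance.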