Report on Vector Prototype
J. Apostolakis, R. Brun, F. Carminati, A. Gheata
10 September 2012

Simple observation: HEP transport is mostly local!
– ATLAS volumes sorted by transport time show that 50 per cent of the time is spent in 50 out of 7100 volumes; the same behavior is observed for most HEP geometries.
– Locality is not exploited by the classical transport approach.
– The existing code is very inefficient (low instructions per cycle, IPC), with cache misses due to fragmented code.

A playground for new ideas
A simulation prototype aiming to explore parallelism and efficiency issues:
– Basic idea: simple physics, realistic geometry. Can we implement a parallel transport model on threads that exploits data locality and vectorisation?
– Clean re-design of the data structures and steering to easily exploit parallel architectures and to share all the main data structures among threads.
– Can we make it fully non-blocking from generation to digitization and I/O?
Events and primary tracks are independent:
– Transport together a vector of tracks coming from many events.
– Study how scattering/gathering of vectors of tracks and hits impacts the simulation data flow.
Start with toy physics to develop the appropriate data structures and steering code:
– Keep in mind that the application should eventually be tuned based on realistic numbers.

Volume-oriented transport model
We implemented a model where all particles traversing a given geometry volume are transported together as a vector until the volume gets empty:
– Same volume -> local (vs. global) geometry navigation, same material and same cross sections.
– Load balancing: distribute all particles of a volume type into smaller work units called baskets and hand one basket at a time to a transport thread (see the sketch below).
– Steering methods work with vectors, allowing for future auto-vectorisation.
Particles exiting a volume are distributed to the baskets of the neighbor volumes until they exit the setup or disappear:
– Like a champagne cascade, but lower glasses can also fill top ones…
– No direct communication between threads, to avoid synchronization issues.
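To make the basketizing idea concrete, here is a minimal C++ sketch; the names (Track, Basket, Basketizer) and the plain std::queue are illustrative assumptions, not the prototype's actual classes. Each logical volume keeps one open basket, and a basket is injected into the work queue once it reaches a size threshold.

// Minimal sketch of the basket idea (illustrative names, not the prototype's classes):
// tracks are grouped per logical volume and handed to transport threads only
// when a basket reaches its size threshold.
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

struct Track {
  int    event;               // event the track belongs to
  int    volume;              // current logical volume id
  double x, y, z, px, py, pz;
};

struct Basket {
  int                volume;  // all tracks in the basket share this volume
  std::vector<Track> tracks;
};

class Basketizer {
public:
  explicit Basketizer(std::size_t nvolumes, std::size_t threshold = 16)
      : fBaskets(nvolumes), fThreshold(threshold) {
    for (std::size_t i = 0; i < nvolumes; ++i) fBaskets[i].volume = (int)i;
  }

  // Add a track to the open basket of its current volume; when the basket is
  // full, move it to the work queue and start a fresh one for that volume.
  void AddTrack(const Track &t, std::queue<Basket> &workQueue) {
    Basket &b = fBaskets[t.volume];
    b.tracks.push_back(t);
    if (b.tracks.size() >= fThreshold) {
      workQueue.push(std::move(b));
      b = Basket{t.volume, {}};
    }
  }

private:
  std::vector<Basket> fBaskets;    // one open basket per logical volume
  std::size_t         fThreshold;  // basket size triggering injection
};

The actual prototype exchanges baskets via concurrent queues (as noted in the benchmark slide); the plain std::queue here only keeps the sketch self-contained.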

Event injection
– Realistic geometry + event generator.
– Events are injected in the volume containing the interaction point (IP).
– More events are better: they cut event tails and fill the pipeline better!

Transport model
[Diagram: an event factory feeds track containers (any volume) to the scheduler, which fills track baskets (tracks in a single volume type) consumed by the vectorized physics processes and geometry transport.]
– Events are injected into a track container taken by a scheduler thread.
– The scheduler holds a track "basket" for each logical volume; tracks go to the right basket.
– Baskets are filled up to a threshold, then injected into a work queue.
– Transport threads pick baskets first in, first out (see the worker sketch below).
– Physics processes and geometry transport work with vectors.
– Tracks are transported to boundaries, then dumped into a per-thread track collection.
– Track collections are pushed to a queue and picked up by the scheduler.
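A minimal sketch of the transport-thread side of this flow; the BlockingQueue stand-in, TransportToBoundary and all other names are illustrative assumptions (the prototype uses its own concurrent queues and stepping code). A worker repeatedly pops a basket, steps its tracks as a vector and hands the crossing tracks back to the scheduler.

// Illustrative worker loop: take a basket first in, first out, "transport" its
// tracks as a vector up to the next boundary, collect crossing tracks per
// thread and push the collection back for re-basketizing by the scheduler.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

struct Track { int volume; };
struct Basket { int volume; std::vector<Track> tracks; };
using TrackCollection = std::vector<Track>;  // crossing tracks, any volume

template <class T>
class BlockingQueue {                        // stand-in for a concurrent queue
public:
  void Push(T item) {
    { std::lock_guard<std::mutex> lk(fMutex); fQueue.push(std::move(item)); }
    fCond.notify_one();
  }
  T Pop() {
    std::unique_lock<std::mutex> lk(fMutex);
    fCond.wait(lk, [this] { return !fQueue.empty(); });
    T item = std::move(fQueue.front());
    fQueue.pop();
    return item;
  }
private:
  std::mutex              fMutex;
  std::condition_variable fCond;
  std::queue<T>           fQueue;
};

// Placeholder for the vectorized stepping: moves every track of the basket to
// the next volume boundary and reports the tracks that crossed it.
void TransportToBoundary(Basket &basket, TrackCollection &crossing) {
  for (Track &t : basket.tracks) {
    t.volume += 1;                           // pretend the track entered the next volume
    crossing.push_back(t);
  }
  basket.tracks.clear();
}

void TransportWorker(BlockingQueue<Basket> &work,
                     BlockingQueue<TrackCollection> &toScheduler) {
  for (;;) {
    Basket basket = work.Pop();              // baskets are served FIFO
    if (basket.tracks.empty()) break;        // empty basket used as stop signal
    TrackCollection crossing;
    TransportToBoundary(basket, crossing);   // vector stepping on one volume
    toScheduler.Push(std::move(crossing));   // scheduler re-baskets the tracks
  }
}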

Preliminary benchmarks
– Benchmarking 10+1 threads on a 12-core Xeon in HT mode: excellent CPU usage.
– Locks and waits: some overhead due to transitions coming from exchanging baskets via concurrent queues.
– Event re-injection will improve the speed-up.

Evolution of populations
[Plot: track populations over time, showing the transport of the initial buffer of events, the event flush and the "priority" regime.]
– Excellent concurrency overall.
– Vectors "suffer" in the priority regime.

Prototype implementation
[Diagram: data flow between the main scheduler, the worker threads running Stepping(tid, &tracks), the dispatch & garbage-collect thread and the digitize & I/O thread. Transportable, priority and recycled baskets are exchanged through a concurrent deque; full and recycled track collections of crossing tracks (itrack, ivolume) flow back to the scheduler, which loops over the tracks and pushes them to the baskets of their volumes. Generate(N events) injects or flushes events; hits are digitized per event (Digitize(iev)) and written to disk. A sketch of the dispatch loop follows.]
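Continuing with the same illustrative assumptions as the previous sketches (plain STL containers instead of the prototype's concurrent queues, hypothetical names), a rough sketch of the dispatch & garbage-collect side: crossing tracks returned by the workers are re-basketized per volume, full baskets are injected into the work queue, and emptied track collections are recycled rather than reallocated.

// Illustrative dispatch loop: crossing tracks coming back from the workers are
// pushed into the basket of their new volume; full baskets are injected into
// the work queue and emptied collections are kept for reuse.
#include <cstddef>
#include <queue>
#include <stack>
#include <utility>
#include <vector>

struct Track { int volume; };
struct Basket { int volume; std::vector<Track> tracks; };
using TrackCollection = std::vector<Track>;

void DispatchLoop(std::queue<TrackCollection> &fromWorkers,
                  std::queue<Basket> &workQueue,
                  std::vector<Basket> &openBaskets,   // one per logical volume
                  std::stack<TrackCollection> &recycled,
                  std::size_t threshold) {
  while (!fromWorkers.empty()) {
    TrackCollection tracks = std::move(fromWorkers.front());
    fromWorkers.pop();
    for (const Track &t : tracks) {
      Basket &b = openBaskets[t.volume];
      b.tracks.push_back(t);
      if (b.tracks.size() >= threshold) {    // basket full: inject it
        workQueue.push(std::move(b));
        b = Basket{t.volume, {}};
      }
    }
    tracks.clear();
    recycled.push(std::move(tracks));        // recycle the empty collection
  }
}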

Buffered events & re-injection
– Reuse of event slots for re-injected events keeps memory under control (see the sketch below).
– The depletion regime with sparse tracks is just a few per cent of the job.
– Concurrency is still excellent.
– Vectors are preserved much better.
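A minimal sketch of the slot-reuse bookkeeping; EventSlot, EventSlots and their members are assumed names, not the prototype's. A fixed number of event slots stays in flight, and a slot whose last track has finished is refilled with a new event instead of growing the buffer.

// Illustrative event-slot pool: the number of events in flight is fixed, so
// re-injecting a new event into a freed slot keeps memory under control.
#include <vector>

struct EventSlot {
  int eventNumber    = -1;  // which generated event currently occupies the slot
  int tracksInFlight = 0;   // tracks of this event still being transported
};

class EventSlots {
public:
  explicit EventSlots(int nslots) : fSlots(nslots) {}

  // Fill a slot with a freshly generated event (generation itself not shown).
  void Inject(int slot, int eventNumber, int ntracks) {
    fSlots[slot].eventNumber    = eventNumber;
    fSlots[slot].tracksInFlight = ntracks;
  }

  // Called when a track of the event in 'slot' finishes transport. Returns
  // true when the whole event is done and the slot can be reused.
  bool TrackDone(int slot) {
    return --fSlots[slot].tracksInFlight == 0;
  }

private:
  std::vector<EventSlot> fSlots;  // fixed-size buffer of events in flight
};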

Next steps
Include hits, digitization and I/O in the prototype:
– Factories allowing contiguous pre-allocation and reuse of user structures:
  MyHit *hit = HitFactory(MyHit::Class())->NextHit();
  hit->SetP(particle->P());
Introduce realistic EM physics:
– Tune model parameters and better estimate the memory requirements.
Re-design transport models from a "plug-in" perspective:
– E.g. the ability to use fast simulation on a per-track basis.
Look further into auto-vectorization options:
– New compiler features, Intel Cilk array notation, ...
– Check the impact of vector-friendly data restructuring: vector of objects -> object with vectors (see the sketch below).
Push vectors to lower levels:
– Geometry and physics models are the main candidates.
The GPU is a great challenge for simulation code:
– Localize hot-spots with high CPU usage and low data transfer requirements.
– Test performance on data-flow machines.
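To illustrate the "vector of objects -> object with vectors" restructuring, a minimal sketch with illustrative field and function names: the structure-of-arrays layout keeps each coordinate contiguous in memory, giving the compiler unit-stride loops it can auto-vectorize.

// Array-of-structures vs. structure-of-arrays for track data (illustrative).
#include <cstddef>
#include <vector>

// AoS: one object per track; fields of different tracks are interleaved, so a
// loop over x is strided and hard to auto-vectorize.
struct TrackAoS {
  double x, y, z, px, py, pz;
};
using TracksAoS = std::vector<TrackAoS>;

// SoA: one contiguous array per field; a loop over x is unit-stride and a good
// candidate for auto-vectorization.
struct TracksSoA {
  std::vector<double> x, y, z, px, py, pz;
};

// Propagate all tracks by a common step along their momentum direction
// (unnormalized here for brevity); written so the compiler can vectorize it.
void PropagateSoA(TracksSoA &t, double step) {
  const std::size_t n = t.x.size();
  for (std::size_t i = 0; i < n; ++i) {
    t.x[i] += step * t.px[i];
    t.y[i] += step * t.py[i];
    t.z[i] += step * t.pz[i];
  }
}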

Outlook
Existing simulation approaches perform badly on the new computing infrastructures:
– Projecting this into the future looks much worse...
– The technology trends motivate serious R&D in the field.
We have started looking at the problem from different angles:
– Data locality and contiguity, fine-grain parallelism, data flow, vectorization.
– Preliminary results confirm that the direction is good.
We need to understand what can be gained and how, what the impact on the existing code is, and what the challenges and effort to migrate to a new system are…

Thank you!