1
Grid Computing: dealing with GB/s dataflows
David Groep, NIKHEF
Jan Just Keijser, Nikhef, janjust@nikhef.nl
3 May 2012
Graphics: Real Time Monitor, Gidon Moont, Imperial College London, see http://gridportal.hep.ph.ic.ac.uk/rtm/
2
LHC Computing
The Large Hadron Collider: 'the world's largest microscope', looking at the fundamental forces of nature.
27 km circumference, CERN, Genève
Probing matter from the atom down to the nucleus (~10⁻¹⁵ m) and the quarks inside it
~20 PByte of new data per year, ~60 000 modern PC-style computers
5
ATLAS Trigger Design
Level 1 – hardware based, online
– accepts 75 kHz, latency 2.5 µs
– 160 GB/s
Level 2 – 500-processor farm
– accepts 2 kHz, latency ~10 ms
– 5 GB/s
Event Filter – 1600-processor farm
– accepts 200 Hz, ~1 s per event
– incorporates alignment, calibration
– 300 MB/s
From: The ATLAS trigger system, Srivas Prasad
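The rates and bandwidths above imply a roughly constant event size at every trigger level; the short sketch below only redoes that arithmetic with the figures quoted on this slide (nothing else is assumed):

```python
# Rough event-size check from the trigger rates quoted on this slide.
# All figures come from the slide; the division is the only addition here.

levels = {
    # name: (accept rate in Hz, output bandwidth in bytes/s)
    "Level 1":      (75_000, 160e9),
    "Level 2":      (2_000,  5e9),
    "Event Filter": (200,    300e6),
}

for name, (rate_hz, bandwidth_bps) in levels.items():
    event_size_mb = bandwidth_bps / rate_hz / 1e6
    print(f"{name}: ~{event_size_mb:.1f} MB per accepted event")

# Each stage works out to roughly 1.5-2.5 MB per event: the reduction comes
# almost entirely from rejecting events, not from shrinking them.
```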
6
Signal/Background 10⁻⁹
Data volume: (high rate) × (large number of channels) × (4 experiments) → ~20 PetaBytes of new data per year
Compute power: (event complexity) × (number of events) × (thousands of users) → ~60 000 processors
Scale comparison: a stack of CDs holding one year of LHC data would be ~20 km tall – taller than Mt. Blanc (4.8 km) or the cruising altitude of Concorde (15 km), approaching that of a stratospheric balloon (30 km).
8
Scientific Compute e-Infrastructure
From: Key characteristics of SARA and BiG Grid Compute services
Task parallelism (also known as function parallelism or control parallelism) is a form of parallelization of computer code across multiple processors, in which different execution processes (threads) are distributed across the parallel computing nodes.
Data parallelism (also known as loop-level parallelism) is a form of parallelization in which the data is distributed across the parallel computing nodes, each node running the same computation on its own share.
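To make the distinction concrete, here is a minimal sketch (not part of the slide) using only Python's standard library: the data-parallel part maps one function over many inputs, while the task-parallel part runs several different functions at once. The function names are purely illustrative.

```python
# Minimal illustration of data vs. task parallelism (illustrative only).
from concurrent.futures import ProcessPoolExecutor

def analyse_event(event):
    # Placeholder for a per-event computation.
    return event * event

def calibrate():
    # Placeholder task for the task-parallel case.
    return "calibration done"

def build_histograms():
    return "histograms done"

if __name__ == "__main__":
    events = range(10)

    with ProcessPoolExecutor() as pool:
        # Data parallelism: the same function applied to many data items.
        results = list(pool.map(analyse_event, events))

        # Task parallelism: different functions running concurrently.
        futures = [pool.submit(calibrate), pool.submit(build_histograms)]
        statuses = [f.result() for f in futures]

    print(results, statuses)
```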
9
What is BiG Grid?
A collaborative effort of NBIC, NCF and Nikhef that aims to set up a grid infrastructure for scientific research. This research infrastructure combines compute clusters and data storage with specific middleware and software, to enable research that needs more than just raw computing power or data storage. We aim to assist scientists from all backgrounds in exploring and using the opportunities offered by the Dutch e-science grid. http://www.biggrid.nl
10
Nikhef (NDPF): 3336 processor cores, 1600 TByte disk, 160 Gbps network
SARA (GINA+LISA): 3000 processor cores, 1800 TByte disk, 2000 TByte tape, 160 Gbps network
RUG-CIT (Grid): 400 processor cores, 8 800 GByte disk, 10 Gbps network
Philips Research Ehv: 1600 processor cores, 100 TByte disk, 1 Gbps network
11
Virtual Laboratory for e-Science
Avian Alert & FlySafe – Willem Bouten et al., UvA Institute for Biodiversity and Ecosystem Dynamics (IBED)
Data integration for genomics, proteomics, etc. analysis – Timo Breit et al., Swammerdam Institute for Life Sciences
Medical Imaging & fMRI – Silvia Olabarriaga et al., AMC and UvA IvI
Molecular Cell Biology & 3D Electron Microscopy – Bram Koster et al., LUMC Microscopic Imaging group
Image sources: VL-e Consortium Partners
12
BiG Grid
SCIAMACHY – Wim Som de Cerff et al., KNMI
MPI Nijmegen: Psycholinguistics
Image sources: BiG Grid Consortium Partners
13
BiG Grid
LOFAR: LOw Frequency ARray radio telescope
Leiden Grid Initiative: Computational Chemistry
Image sources: BiG Grid Consortium Partners
14
Grid organisation: National Grid Initiatives & the European Grid Initiative
At the national level, a grid infrastructure is offered to national and international users by the NGIs; BiG Grid is the de facto Dutch NGI.
The European Grid Initiative coordinates the efforts of the different NGIs and ensures interoperability.
There are circa 40 European NGIs, with links to South America and Taiwan.
The headquarters of EGI is at the Science Park in Amsterdam.
15
Cross-domain and global e-Science grids
The communities that make up the grid are not under single hierarchical control; they temporarily join forces to solve a particular problem at hand, bringing to the collaboration a subset of their resources and sharing those at their discretion, each under their own conditions.
16
Challenges: scaling up
Grid especially means scaling up:
– Distributed computing on many, different computers,
– Distributed storage of data,
– Large amounts of data (Giga-, Tera-, Petabytes),
– Large numbers of files (millions).
This gives rise to "interesting" problems:
– Remote logins are not always possible on the grid,
– Debugging a program is a challenge,
– Regular filesystems tend to choke on millions of files (one common mitigation is sketched below),
– Storing data is one thing; searching and retrieving turn out to be even bigger challenges.
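One common way around the "millions of files in one directory" problem, shown here as a sketch rather than anything the grid middleware itself prescribes, is to spread files over subdirectories derived from a hash of the filename:

```python
# Sketch: spread many files over hashed subdirectories so no single
# directory holds millions of entries. Illustrative only; grid storage
# systems use their own, more sophisticated namespace schemes.
import hashlib
from pathlib import Path

def hashed_path(base_dir: str, filename: str, levels: int = 2) -> Path:
    """Return base_dir/ab/cd/filename, where 'ab' and 'cd' come from
    the first bytes of the SHA-1 digest of the filename."""
    digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
    parts = [digest[2 * i : 2 * i + 2] for i in range(levels)]
    return Path(base_dir, *parts, filename)

# Example: 10 million files end up spread over at most 256 * 256 = 65 536 directories.
print(hashed_path("/data/experiment", "event_000001.root"))
```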
17
Challenges: security
Why is security so important for an e-Science infrastructure?
– e-Science communities are not under a single hierarchical control;
– As a grid site administrator you are allowing relatively unknown persons to run programs on your computers;
– All of these computers are connected to the internet using an incredibly fast network.
This makes the grid a potentially very dangerous service on the internet.
18
Lessons Learned: Data Management
Storing Petabytes of data is possible, but...
– Retrieving data is harder than you would expect;
– Organising such amounts of data is non-trivial;
– Applications are much smaller than the data they need to process: always bring your application to the data, if possible;
– The "data about the data" (metadata) becomes crucial: location, experimental conditions, date and time;
– Storing the metadata in a database can be a life-saver (a minimal sketch follows below).
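As an illustration of that last point, here is a minimal sketch, using only Python's built-in sqlite3 module, of the kind of metadata catalogue the slide has in mind; the table layout, field names and storage URL are assumptions for illustration, not BiG Grid's actual schema:

```python
# Toy metadata catalogue: record where each file lives and under which
# conditions it was taken, so it can be found again later.
# Schema, field names and URLs are illustrative assumptions.
import sqlite3

conn = sqlite3.connect("metadata.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS files (
        logical_name TEXT PRIMARY KEY,   -- experiment-level file name
        storage_url  TEXT NOT NULL,      -- where the replica actually lives
        conditions   TEXT,               -- e.g. detector/run configuration
        taken_at     TEXT                -- ISO 8601 date and time
    )
""")
conn.execute(
    "INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
    ("run0042_event_data", "srm://se.example.org/data/run0042",
     "magnet on, beam energy 4 TeV", "2012-05-03T10:00:00"),
)
conn.commit()

# The payoff: finding data by its metadata instead of walking the filesystem.
for row in conn.execute(
        "SELECT logical_name, storage_url FROM files WHERE conditions LIKE ?",
        ("%magnet on%",)):
    print(row)
```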
19
Lessons Learned: Job efficiency
A recurring complaint about grid computing is low job efficiency (~94%). It is important to know that:
– Failed jobs almost always fail due to data access issues;
– If you remove the data access issues, job efficiency jumps to ~99%, which is on par with cluster and cloud computing.
Mitigation strategies:
– Replicate files to multiple storage systems;
– Pre-stage data to specific compute sites;
– "Program for failure" (see the sketch below).
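As an illustration of what "program for failure" can mean in practice, the sketch below retries a download against a list of replicas before giving up; the fetch() function and the replica URLs are placeholders, not a real grid data-access client:

```python
# "Program for failure": try each replica of a file in turn, retrying
# transient errors, instead of assuming the first storage system answers.
# fetch() and the replica URLs are placeholders for a real transfer client.
import time

def fetch(url: str) -> bytes:
    # Stand-in for a real transfer; always fails in this sketch.
    raise IOError(f"storage endpoint unreachable: {url}")

def fetch_with_fallback(replicas, retries_per_replica=2, backoff_s=5):
    for url in replicas:
        for attempt in range(retries_per_replica):
            try:
                return fetch(url)
            except IOError as err:
                print(f"attempt {attempt + 1} on {url} failed: {err}")
                time.sleep(backoff_s)
    raise RuntimeError("all replicas failed; job should report and exit cleanly")

replicas = [
    "srm://se1.example.org/data/run0042",   # hypothetical replica locations
    "srm://se2.example.org/data/run0042",
]
# fetch_with_fallback(replicas)  # would raise RuntimeError with the stub fetch() above
```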
20
Lessons Learned: Network bandwidth
All data taken by the LHC at CERN is replicated out to 11 Tier-1 centres; BiG Grid serves as one of those Tier-1s.
We always knew we had a good network, but:
– Having a dedicated optical private network (the LHC OPN) from CERN to the Tier-1 data storage centres turned out to be crucial;
– It turns out that the network bandwidth between the storage and compute clusters is equally important.
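A small back-of-the-envelope calculation shows why the bandwidth matters; the ~20 PB/year and the link speeds are taken from earlier slides, the idealised 100% link utilisation is an assumption:

```python
# How long does it take to move one year of LHC data (~20 PB) over a given link?
# Assumes the link is fully and continuously utilised, which real transfers are not.
PB = 1e15          # bytes
Gbps = 1e9 / 8     # bytes per second

data_bytes = 20 * PB
for link_gbps in (1, 10, 160):
    seconds = data_bytes / (link_gbps * Gbps)
    print(f"{link_gbps:>4} Gbps: {seconds / 86400:.0f} days")

# 1 Gbps -> ~1852 days, 10 Gbps -> ~185 days, 160 Gbps -> ~12 days
```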
21
Questions? http://www.nikhef.nl