Computing Facilities & Capabilities

Presentation transcript:

Computing Facilities & Capabilities
Julian Borrill, Computational Research Division, Berkeley Lab & Space Sciences Laboratory, UC Berkeley

Computing Issues
- Data Volume
- Data Processing
- Data Storage
- Data Security
- Data Transfer
It's all about the data.

Data Volume
Planck data volume drives (almost) everything.
- LFI: 22 detectors with 32.5, 45 & 76.8 Hz sampling
  - 4 x 10^10 samples per year
  - 0.2 TB time-ordered data + 1.0 TB full detector pointing data
- HFI: 52 detectors with 200 Hz sampling
  - 3 x 10^11 samples per year
  - 1.3 TB time-ordered data + 0.2 TB full boresight pointing data
- LevelS (e.g. CTP "Trieste" simulations): 4 LFI detectors with 32.5 Hz sampling
  - 4 x 10^9 samples per year
  - 2 scans x 2 beams x 2 samplings x 7 components + 2 noises
  - 1.0 TB time-ordered data + 0.2 TB full detector pointing data
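These per-year figures follow from simple sample-rate arithmetic. A minimal sketch, assuming 4-byte time-ordered samples, 24-byte pointing records, and an effective ~60 Hz mean LFI rate for the 32.5/45/76.8 Hz mix (none of these byte sizes or the mean rate are stated on the slide), roughly reproduces the quoted volumes:

```python
# Quick consistency check of the per-year data volumes quoted above.
# Assumed (not on the slide): 4-byte TOD samples, 3 x 8-byte pointing
# angles per sample, ~60 Hz effective mean LFI sampling rate.
SECONDS_PER_YEAR = 3.15e7
TB = 1e12

def yearly_volumes(n_detectors, rate_hz, tod_bytes=4, pointing_bytes=24):
    samples = n_detectors * rate_hz * SECONDS_PER_YEAR
    return samples, samples * tod_bytes / TB, samples * pointing_bytes / TB

lfi = yearly_volumes(22, 60.0)
hfi = yearly_volumes(52, 200.0)
print(f"LFI: {lfi[0]:.1e} samples/yr, {lfi[1]:.1f} TB TOD, {lfi[2]:.1f} TB pointing")
# -> ~4e10 samples, ~0.2 TB TOD, ~1.0 TB detector pointing
print(f"HFI: {hfi[0]:.1e} samples/yr, {hfi[1]:.1f} TB TOD")
# -> ~3e11 samples, ~1.3 TB TOD (boresight pointing is per time sample,
#    not per detector, hence only ~0.2 TB)
```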

Data Processing
- Operation count scales linearly (& inefficiently) with # analyses, # realizations, # iterations, # samples:
  100 x 100 x 100 x 100 x 10^11 ~ O(10) Eflop
- NERSC
  - Seaborg: 6080 CPUs, 9 Tf/s
  - Jacquard: 712 CPUs, 3 Tf/s (21 x Magique-II)
  - Bassi: 888 CPUs, 7 Tf/s
  - NERSC-5: O(100) Tf/s, first-byte in 2007
  - O(2 x 10^6) CPU-hours/year => O(4) Eflop/yr (10 GHz / 5%)
- USPDC cluster
  - specification & location TBD, first-byte in 2007
  - O(100) CPUs x 7000 hours/year => O(0.4) Eflop/yr (5 GHz / 3%)
- IPAC
  - small cluster dedicated to ERCSC
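For reference, the Eflop/yr figures follow from back-of-the-envelope arithmetic if "10 GHz / 5%" and "5 GHz / 3%" are read as peak per-CPU rates (in Gflop/s) times sustained efficiency; this reading is an interpretation, not stated explicitly on the slide:

```python
# Rough check of the operation counts above, reading "10 GHz / 5%" as
# ~10 Gflop/s peak per CPU at 5% sustained efficiency (an assumption).
EXA = 1e18

def eflop_per_year(cpu_hours, peak_gflops, efficiency):
    return cpu_hours * 3600 * peak_gflops * 1e9 * efficiency / EXA

# Required operations, as on the slide: 100 x 100 x 100 x 100 x 1e11 flops
print(f"Required : ~{100**4 * 1e11 / EXA:.0f} Eflop")                     # ~10
print(f"NERSC    : ~{eflop_per_year(2e6, 10, 0.05):.1f} Eflop/yr")        # ~3.6 -> O(4)
print(f"USPDC    : ~{eflop_per_year(100 * 7000, 5, 0.03):.1f} Eflop/yr")  # ~0.4
```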

Processing
[Diagram: available compute platforms and their peak rates]
- NERSC Seaborg: 9 Tf/s
- NERSC Jacquard: 3 Tf/s
- NERSC Bassi: 7 Tf/s
- NERSC-5: 100 Tf/s
- USPDC Cluster: 0.5 Tf/s
- ERCSC Cluster: 0.1 Tf/s

Data Storage
- Archive at IPAC: mission data, O(10) TB
- Long-term storage at NERSC using HPSS: mission + simulation data & derivatives, O(2) PB
- Spinning disk at the USPDC cluster & at NERSC using NGF: current active data subset, O(2 - 20) TB
- Processor memory at the USPDC cluster & at NERSC: running job(s), O(1 - 10+) GB/CPU & O(0.1 - 10) TB total
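To see why a running job needs O(GB) per CPU and up to O(TB) in aggregate, consider a sketch of holding a year of HFI data in memory; the 512-CPU job size is an illustrative assumption, not a figure from the slide:

```python
# Aggregate and per-CPU memory for a map-making job that holds a year of
# HFI time-ordered data plus boresight pointing in memory.
hfi_tod_tb      = 1.3   # from the Data Volume slide
hfi_pointing_tb = 0.2   # from the Data Volume slide
n_cpus = 512            # hypothetical job size

aggregate_tb = hfi_tod_tb + hfi_pointing_tb
gb_per_cpu = aggregate_tb * 1e3 / n_cpus
print(f"Aggregate ~{aggregate_tb:.1f} TB, ~{gb_per_cpu:.1f} GB/CPU on {n_cpus} CPUs")
# -> ~1.5 TB total, ~2.9 GB/CPU: within the O(1-10+) GB/CPU, O(0.1-10) TB range
```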

Processing + Storage
[Diagram: compute platforms (peak rate, attached memory/disk) and storage systems]
- NERSC Seaborg: 9 Tf/s, 6 TB
- NERSC Jacquard: 3 Tf/s, 2 TB
- NERSC Bassi: 7 Tf/s, 4 TB
- NERSC-5: 100 Tf/s, 50 TB
- USPDC Cluster: 0.5 Tf/s, 200 GB, plus 2 TB disk
- ERCSC Cluster: 0.1 Tf/s, 50 GB
- NERSC HPSS: 2/20 PB
- NERSC NGF: 20/200 TB
- IPAC Archive: 10 TB

Data Security
- UNIX filegroups
  - special account: user planck, permissions _r__/___/___
  - allows securing of selected data, e.g. mission vs simulation
- Personal keyfob to access the planck account
  - real-time grid-certification of individuals
  - fobs issued & managed by IPAC
  - single system for IPAC, NERSC & the USPDC cluster
- Differentiates access to facilities and to data
  - standard personal account & special planck account
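As a concrete illustration of the filegroup model, a minimal sketch might look like the following; the path, group name lookup, and exact mode bits are hypothetical, since the slide only specifies a shared planck account with read-only permissions:

```python
# Sketch: mission data assigned to a shared "planck" filegroup and made
# read-only for group members, with no world access. Illustrative only.
import grp
import os
import stat

PLANCK_GID = grp.getgrnam("planck").gr_gid
MISSION_DIR = "/project/planck/mission"   # hypothetical path

for root, dirs, files in os.walk(MISSION_DIR):
    for name in files:
        path = os.path.join(root, name)
        os.chown(path, -1, PLANCK_GID)                # keep owner, set planck group
        os.chmod(path, stat.S_IRUSR | stat.S_IRGRP)   # r--r----- : read-only, no world access
    for name in dirs:
        path = os.path.join(root, name)
        os.chown(path, -1, PLANCK_GID)
        # directories also need execute bits so group members can traverse them
        os.chmod(path, stat.S_IRUSR | stat.S_IXUSR | stat.S_IRGRP | stat.S_IXGRP)
```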

Processing + Storage + Security
[Same diagram as Processing + Storage (platforms and storage systems with their capacities), now banner-marked "IPAC KEYFOB REQUIRED"]

Data Transfer
- From DPCs to IPAC: transatlantic; tests being planned
- From IPAC to NERSC: check networks/bandwidth with Bill Johnston
- From NGF to/from HPSS: check bandwidth with David Skinner
- From NGF to memory (most real-time critical):
  - within NERSC: check bandwidths with David Skinner
  - offsite: depends on location; 10 Gb/s to LBL over a dedicated data link on the Bay Area MAN
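For scale, a sketch of the transfer times implied by a 10 Gb/s link, assuming (as an illustration, not a figure from the slide) roughly 30% sustained utilisation:

```python
# Rough transfer-time arithmetic for the links above.
def transfer_hours(terabytes, line_rate_gbps, utilisation=0.3):
    bits = terabytes * 1e12 * 8
    return bits / (line_rate_gbps * 1e9 * utilisation) / 3600

print(f"10 TB mission archive  : ~{transfer_hours(10, 10):.0f} h")   # e.g. IPAC -> NERSC
print(f"1.3 TB HFI year of TOD : ~{transfer_hours(1.3, 10):.1f} h")  # e.g. NGF -> memory
```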

Processing + Storage + Security + Networks
[Same diagram again, now annotated with network links: 10 Gb/s links among the IPAC Archive, NERSC NGF, HPSS and the NERSC platforms, one 30 Gb/s link, and still-undetermined (?) bandwidths to the DPCs and to the ERCSC & USPDC clusters]