Computing at Fermilab D. Petravick Fermilab August 16, 2007

FNAL Scientific Computing
– CDF
– D0
– Lattice QCD
– CMS Tier 1 center
– Experimental Astrophysics
– Theoretical Astrophysics (partner w/ Kavli Inst., U. Chicago)
– Accelerator Simulations
– Neutrino Program

HEP Computing at Fermilab
Fermilab has a substantial and long-established program in capacity computing as part of its experimental HEP program. Roughly:
– Event simulation
– Online filtering (1:10,000)
– “Production” (instrumental -> physical quantities)
– Analysis: study of the data
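As a rough illustration of how these four stages chain together, a toy sketch follows (hypothetical names, numbers, and calibration throughout; this is not Fermilab software):

```python
# Toy sketch of the capacity-computing chain above -- illustrative only.
import random

def simulate_event(i):
    """Event simulation: produce a synthetic detector event."""
    return {"id": i, "adc_counts": random.expovariate(1.0)}

def online_filter(event, rejection=10_000):
    """Online filtering: keep roughly 1 event in `rejection`."""
    return random.randrange(rejection) == 0

def produce(event):
    """'Production': instrumental quantities -> physical quantities."""
    event["energy_gev"] = 0.5 * event["adc_counts"]  # made-up calibration
    return event

def analyze(events):
    """Analysis: study the reduced data set."""
    return sum(e["energy_gev"] for e in events) / max(len(events), 1)

kept = [produce(e)
        for e in (simulate_event(i) for i in range(1_000_000))
        if online_filter(e)]
print(f"kept {len(kept)} of 1,000,000 events; mean energy {analyze(kept):.3f} GeV")
```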

HEP software environment
Large codes, large frameworks, shared between tasks, in a variety of languages:
– Benefits from homogeneous development and deployment environments
– Lap/desktop -> into framework -> batch
– Codes are likely larger, more complex than many HPC codes
Environment is overwhelmingly Linux:
– Scant traction for other environments

Lattice at Fermilab
A national collaboration oversees computing resources:
– Group sees roles for DOE Exascale and local computing
– Relatively large amount of shared data and techniques
– Configurations re-used by different groups (see the sketch below)
Cores at Fermilab:
– Cluster approach
– Coupled with Myrinet and InfiniBand
– Additional procurement in federal FY
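The re-use point is worth unpacking: gauge configurations are expensive to generate, so they are produced once and then shared across analysis groups. A minimal sketch of that pattern (hypothetical names; real configurations are large binary lattice files, not strings):

```python
# Minimal sketch of configuration reuse -- hypothetical throughout.
from functools import lru_cache

@lru_cache(maxsize=None)
def generate_configuration(ensemble: str, trajectory: int) -> str:
    # Stands in for an expensive Monte Carlo gauge-field generation run.
    print(f"generating {ensemble} trajectory {trajectory} (expensive)")
    return f"config({ensemble}, {trajectory})"

def measure(group: str, ensemble: str, trajectory: int) -> str:
    # Each analysis group reads the shared configuration; none regenerate it.
    cfg = generate_configuration(ensemble, trajectory)
    return f"{group}: observable measured on {cfg}"

print(measure("heavy-quark group", "a=0.09fm", 1000))
print(measure("light-quark group", "a=0.09fm", 1000))  # reuses cached config
```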

Cosmological Computing
Cosmological simulations are powerful tools for understanding the growth of structure in the universe and are essential for extracting precision information about fundamental physics, such as dark energy and dark matter, from upcoming astronomical surveys. The roadmap calls for:
– Formation of larger collaborations
– Computational methods and data more diverse than in lattice QCD
– 10,000 cores by
– 12 PB data requirement

Other elements of lab program
Not mentioned in this overview:
– Experimental Astrophysics (Sloan Digital Sky Survey, Dark Energy Survey)
– Other elements of Theoretical Astrophysics
– Accelerator simulations
– Neutrino program
[Figures: Synergia simulation of bunching in the Tevatron; element of CDMS detector; SDSS lensing]

Significant campus Storage and Data Movement
– ~6.5 PB on tape
– Few PB of disk
– Ethernet SAN
– > 40 Gbps disk movement
– 300 TB/mo ingest
– ~60 TB/day tape movement
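A quick back-of-envelope (assuming decimal terabytes, 1 TB = 1e12 bytes, and a 30-day month) turns these volumes into sustained line rates, which makes the networking point on the next slide concrete:

```python
# Back-of-envelope conversion of the figures above into sustained rates.
def tb_per_day_to_gbps(tb_per_day: float) -> float:
    return tb_per_day * 1e12 * 8 / 86_400 / 1e9

ingest = tb_per_day_to_gbps(300 / 30)   # 300 TB/month ingest
tape = tb_per_day_to_gbps(60)           # ~60 TB/day tape movement
print(f"ingest: {ingest:.2f} Gbps sustained")   # ~0.93 Gbps
print(f"tape:   {tape:.2f} Gbps sustained")     # ~5.56 Gbps
```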

Which induces the need for networking: the ESnet MAN, and a Wide Area Network R&D group.

ESnet4 IP + SDN, 2009 Configuration (Est.)
[Map: projected ESnet4 IP core and Science Data Network core topology, showing layer 1 optical nodes, ESnet IP and SDN hubs, lab sites, MANs (San Francisco Bay Area, West Chicago, Long Island), Internet2 circuits, NLR links, LHC-related links, and international IP connections. Uncorrected map.]

HEP capacity, adapted…
HEP has adopted grid techniques as the basis for a worldwide computing infrastructure. Both data and jobs are shipped about. Fermilab leadership across grid scopes:
– FermiGrid (campus)
– Open Science Grid (national)
– WLCG (worldwide)
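"Both data and jobs are shipped about" hides a scheduling decision: run the job where the data already sits, or ship the data to free capacity elsewhere. A toy sketch of that matchmaking (all site names, slot counts, and the policy are invented; the real infrastructure uses Condor/OSG/WLCG middleware, not this code):

```python
# Toy grid matchmaking sketch -- hypothetical sites and policy.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    scope: str          # campus / national / worldwide
    free_slots: int
    has_dataset: bool   # is the input data already staged here?

def broker(sites: list[Site]) -> Site:
    # Prefer data locality, then the most opportunistic free slots.
    candidates = [s for s in sites if s.free_slots > 0]
    return max(candidates, key=lambda s: (s.has_dataset, s.free_slots))

sites = [
    Site("FermiGrid", "campus", 120, True),
    Site("OSG site A", "national", 800, False),
    Site("WLCG Tier-2", "worldwide", 300, False),
]
chosen = broker(sites)
action = "data already local" if chosen.has_dataset else "ship data first"
print(f"run job at {chosen.name} ({chosen.scope}); {action}")
```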

2002 Power Forecast

Computing Centers
Three computing buildings:
– FCC: > 20-year-old, purpose-built
– LCC, GCC: built on former experimental halls w/ substantial power infrastructure
[Photos: LCC, GCC, FCC]

Type of Space
Thinking continues to evolve; the notion is that there is:
– Space for new computing systems
– Space for long-lived, “tape” storage
– Space for generator-backed disk
– Space for other disk

GCC Grid Computing Center
[Diagram: legacy building, first expansion, second expansion; computer rooms CRA, CRB, CRC, and tape room]

Just in Time Delivery
Power by room (kVA, excluding cooling):
FCC: 600
LCC: 710
CRA: 840
CRB: 840
CRC: 840
Tape: 45
Total: 3,875
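The per-room figures are internally consistent with the stated total; a one-line check:

```python
# Sanity check: the per-room kVA figures above sum to the stated total.
rooms = {"FCC": 600, "LCC": 710, "CRA": 840, "CRB": 840, "CRC": 840, "Tape": 45}
total = sum(rooms.values())
assert total == 3875
print(f"total: {total} kVA, excluding cooling")
```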

Summary Fermilab has a diverse scientific computing program. Forecasting has shown a need to continually expand computing facilities. Fermilab has expanded its capacity incrementally. Notwithstanding that, the lab’s program includes substantial future growth.