DOE Facilities - Drivers for Science: Experimental and Simulation Data


DOE Facilities - Drivers for Science: Experimental and Simulation Data
Richard Carlson – ASCR PM
Richard.Carlson@science.doe.gov
Joint Techs Winter 2012, Jan 24, 2012, Baton Rouge, LA

Advanced Scientific Computing Research – Program Office

DOE Facilities

DOE Science

Next-Generation Networks for Science
Mission: The goals of the program are 1) to research, develop, test, and deploy advanced network technologies critical to the networking capabilities unique to DOE's science mission, and 2) to identify scientific principles that lead to an understanding of network and application behavior.
The program's portfolio consists of two main elements: High-Performance Networks and High-Performance Middleware.
DOE ESnet

Data Transfer Basics
Time to transfer 1 TB on various networks:
1 TB/hour uses ¼ of a 10 Gbps network
10 TB/hour uses ¼ of a 100 Gbps network
100 TB/hour uses ¼ of a 1 Tbps network
Conversely, a 1 Tbps network will move 10 PB/day.
Real-time aspects at 100 Gbps:
Voice packet (128 B): 20.5 ns/packet
Jumbo frame Ethernet (9 KB): 720 ns/packet
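A quick sanity check of these figures, as a minimal Python sketch. Treating 1 TB as 10^12 bytes is an assumption (the slide does not state its unit convention), and the per-packet time for small voice packets also depends on framing-overhead assumptions the slide does not spell out:

```python
# Back-of-the-envelope check of the "Data Transfer Basics" numbers.
# Decimal units (1 TB = 1e12 bytes) are an assumption.

def rate_gbps(bytes_moved, seconds):
    """Average rate in Gbps needed to move bytes_moved bytes in the given time."""
    return bytes_moved * 8 / seconds / 1e9

TB = 1e12
HOUR = 3600.0
DAY = 24 * HOUR

# The "1/4 of a network" claims: 1, 10, 100 TB/hour vs 10, 100, 1000 Gbps links
for tb_per_hour, link_gbps in [(1, 10), (10, 100), (100, 1000)]:
    need = rate_gbps(tb_per_hour * TB, HOUR)
    print(f"{tb_per_hour:>3} TB/hour needs {need:6.2f} Gbps "
          f"(~{need / link_gbps:.0%} of a {link_gbps} Gbps link)")

# A 1 Tbps network running flat out for a day
print(f"1 Tbps for a day moves {1e12 / 8 * DAY / 1e15:.1f} PB")

# Serialization delay of a 9 KB jumbo frame at 100 Gbps
print(f"9 KB jumbo frame at 100 Gbps: {9000 * 8 / 100e9 * 1e9:.0f} ns")
```

Each TB/hour figure works out to roughly 22% of the corresponding link, which the slide rounds to ¼, and a fully used 1 Tbps link carries about 10.8 PB in a day.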

CEDPS – SaaS Data Management
Observed transfer rates: 4.1 Gbps and 1.6 Gbps
Moving 322 TB of data from ANL to each remote site:
7.34 days to NERSC
18.56 days to ORNL
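The quoted durations are consistent with the two transfer rates; a small sketch of the arithmetic (pairing 4.1 Gbps with NERSC and 1.6 Gbps with ORNL is inferred from the numbers, not stated explicitly on the slide):

```python
# Days to move 322 TB at a sustained rate. The site/rate pairing below is an
# inference from the quoted durations, not something stated on the slide.

def days_to_move(terabytes, gbps):
    seconds = terabytes * 1e12 * 8 / (gbps * 1e9)
    return seconds / 86400

for site, gbps in [("NERSC", 4.1), ("ORNL", 1.6)]:
    print(f"322 TB to {site} at {gbps} Gbps: {days_to_move(322, gbps):.2f} days")
# -> about 7.3 and 18.6 days, in line with the 7.34 / 18.56 days on the slide
```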

ESnet – DOE’s R&D Network

DOE Distributed Science Complex

Challenges for Next-Gen Program
Develop a fundamental understanding of how DOE scientists use networks and how those networks behave.
Provide scientists with advanced technologies that simplify access to experimental facilities, supercomputers, and scientific data.
Provide dynamic and hybrid networking capabilities to support diverse types of high-end science applications at scale.

Themes and Portfolio Directions (roadmap spanning 2007–2025)
Research Activities
Network/Middleware Core Research:
Fast data movement service
100 Gbps NICs
Application performance analysis service
Science-driven on-demand circuits
100 GE LAN/MAN/WAN integration
Comprehensive data management service
Computational science for ASCR
Radical new network architectures/protocols
Comprehensive scientific workflow services
Scalable network–middleware architectures, protocols, and services
Federated scientific collaborations
Emergent Area Research:
Multi-layer hybrid network control systems
Multi-domain monitoring and measurement systems
Grid infrastructures and data management services
Massively parallel data streams
Massive numbers of independent collaborations
Middleware libraries and APIs for large systems
Challenges:
Effectively identifying performance bottlenecks
Routine movement of terabyte datasets
Understanding complex network infrastructures
Extreme collaborations with 100K+ participants
Risk-informed decision-making through modeling and simulation
Creating hybrid networks
Managing a large collaboration space
Massive Data – ESnet projected data traffic: 20 PB, 100 PB, 1 EB, 10 EB, 50 EB
ESnet backbone capacity: 100 Gbps, 400 Gbps, 1 Tbps, 4 Tbps, 10 Tbps
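For scale, a rough look at the sustained average rate each projected traffic volume implies, assuming the figures are aggregates over a year (the slide does not state a time base, so this is only an assumption):

```python
# Sustained average rate implied by a given traffic volume per year.
# Treating the ESnet traffic milestones as annual aggregates is an assumption;
# the slide does not give a time base for them.

SECONDS_PER_YEAR = 365 * 24 * 3600
PB, EB = 1e15, 1e18

def avg_rate_gbps(bytes_per_year):
    return bytes_per_year * 8 / SECONDS_PER_YEAR / 1e9

for label, volume in [("20 PB", 20 * PB), ("100 PB", 100 * PB),
                      ("1 EB", 1 * EB), ("10 EB", 10 * EB), ("50 EB", 50 * EB)]:
    print(f"{label:>7}/year -> {avg_rate_gbps(volume):8.1f} Gbps sustained average")
```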

Research Questions and Directions
Predictable: develop networks, tools, and services that offer predictable performance to a user.
Guaranteed: develop networks, tools, and services that guarantee some level of performance.
Scientific understanding: identify why the network or application behaves in the observed manner.
Advanced tools and services: allow scientists to complete their tasks without worrying about the infrastructure.