ANSE: Advanced Network Services for Experiments

ANSE: Advanced Network Services for Experiments
Institutes:
–Caltech (PI: H. Newman, Co-PI: A. Barczyk)
–University of Michigan (Co-PI: S. McKee)
–Vanderbilt University (Co-PI: P. Sheldon)
–University of Texas at Arlington (Co-PI: K. De)
Presented by: Artur Barczyk
Funded under the NSF “Network Integration and Applied Innovation” program area
–Integration of advanced network developments from previously funded projects with the mainstream applications in use in the LHC and other research communities
Focus on the LHC, paving the way for others to use the same or a similar approach
Main thrust: integrate advanced networking tools and services with the software stacks of the LHC experiments
–LHC software: PanDA in ATLAS, PhEDEx in CMS
–Networking services/tools: dynamic circuits (DYNES, ION, OSCARS) and monitoring (perfSONAR and MonALISA)
Strategic planning of workflows: network capacity, as well as CPU and storage capacity, treated as a co-scheduled resource (see the sketch below)
Working with the main workflow management developers and operations staff toward deterministic, worldwide distributed workflows in both CMS and ATLAS, within ANSE
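To make the co-scheduling idea concrete, here is a minimal sketch, not ANSE code, of how a workflow manager might fold network measurements into source selection: site names, metric fields, and the scoring rule are all illustrative assumptions.

```python
# Hypothetical sketch: treat measured network throughput as a
# schedulable resource, alongside storage load, when ranking
# candidate source sites that hold the same dataset replica.

def pick_source(replica_sites, metrics):
    """Return the best source site according to a simple score."""
    def score(site):
        m = metrics[site]
        # Prefer high measured throughput to the destination,
        # penalize sites whose storage is already heavily loaded.
        return m["throughput_mbps"] * (1.0 - m["storage_load"])
    return max(replica_sites, key=score)

# Example with made-up perfSONAR/MonALISA-style measurements:
metrics = {
    "T2_US_Caltech":  {"throughput_mbps": 800.0, "storage_load": 0.60},
    "T2_US_Michigan": {"throughput_mbps": 450.0, "storage_load": 0.20},
    "T1_US_FNAL":     {"throughput_mbps": 900.0, "storage_load": 0.85},
}
print(pick_source(list(metrics), metrics))  # -> T2_US_Michigan
```

With these illustrative numbers the raw-fastest site loses to a less loaded one, which is the point of co-scheduling network, CPU, and storage rather than optimizing any single resource.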

ANSE - Relation to DYNES
In brief, DYNES is an NSF-funded project to deploy a ‘cyberinstrument’ linking up to 50 US campuses through the Internet2 dynamic circuit backbone
–Based on the ION service, using OSCARS technology
–Use of OpenFlow, through Internet2’s OS3E network, is being considered/tested
The DYNES instrument is intended as a production-grade ‘starter kit’
–Comes with a disk server, an inter-domain controller (IDC) server, and an FDT (transfer application) installation
–The FDT code includes the OSCARS IDC API -> reserves bandwidth and moves data through the created circuit: “Bandwidth on Demand”
The DYNES system is naturally capable of advance reservation
–But we need the right agent code inside CMS/ATLAS to call the API whenever a transfer involves two DYNES sites (a minimal sketch follows below)
Btw: DYNES is entering production readiness in 2013 (now)
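A minimal sketch of that “agent code” idea, assuming hypothetical names throughout: the site list is invented, and `reserve_circuit` stands in for a wrapper around the OSCARS IDC API used by FDT, not its real interface.

```python
# Hedged sketch: before a transfer, check whether both endpoints
# belong to DYNES sites and, if so, reserve a circuit in advance.

DYNES_SITES = {"umich.edu", "caltech.edu", "vanderbilt.edu", "uta.edu"}

def site_of(host):
    """Map a transfer endpoint to its campus domain,
    e.g. 'gridftp01.aglt2.umich.edu' -> 'umich.edu'."""
    return ".".join(host.split(".")[-2:])

def maybe_reserve(src, dst, gbps, start, end, reserve_circuit):
    """Reserve bandwidth only when both endpoints are DYNES sites;
    otherwise fall back to a routed IP transfer (return None)."""
    if site_of(src) in DYNES_SITES and site_of(dst) in DYNES_SITES:
        # reserve_circuit is a stand-in for the OSCARS IDC API call.
        return reserve_circuit(src, dst, gbps, start, end)
    return None
```

The experiment-side hook is deliberately thin: the data management system (PhEDEx or PanDA) only needs to know when a circuit is worth requesting, while the circuit machinery itself stays inside the DYNES/FDT stack.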

SDN Deployment at Caltech
ANSE is a software development project, but it will make use of infrastructure deployed as part of the DYNES instrument
–This slide shows the Caltech installation only!
–Installations at other DYNES campuses vary; see http://dynes.internet2.edu for details
Caltech:
–1 IDC server
–1 data server
–1 switch (future: OpenFlow-capable)
Plus an earlier SDN installation, aka the DCN testbed

Outlook etc.
Currently, there is no GENI deployment at Caltech
The HEP group is investigating a potential installation of a GENI rack
–Intended use case: network R&D for HEP data distribution
The HEP Networking group at Caltech is active in SDN R&D:
–OLiMPS (OpenFlow Link-layer Multipath Switching), a project funded by DOE-OASCR (a toy illustration of the multipath idea follows below)
Contact:
–Artur Barczyk (HEP networking group)
–Harvey Newman
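For readers unfamiliar with the technique OLiMPS targets, here is a toy illustration of link-layer multipath switching: hash each flow’s 4-tuple onto one of several precomputed paths so parallel streams spread over redundant links. Switch names and paths are invented; this is not the OLiMPS controller code.

```python
import hashlib

# Invented link-layer paths between two sites across a switch fabric.
PATHS = [
    ["sw1", "sw2", "sw5"],
    ["sw1", "sw3", "sw5"],
    ["sw1", "sw4", "sw5"],
]

def path_for_flow(src_ip, dst_ip, src_port, dst_port):
    """Deterministically map a flow 4-tuple to one of the paths."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    idx = int(hashlib.sha1(key).hexdigest(), 16) % len(PATHS)
    return PATHS[idx]

# Parallel FDT streams between the same hosts use different source
# ports, so they can be spread across different paths:
print(path_for_flow("10.0.0.1", "10.0.1.1", 5001, 2811))
print(path_for_flow("10.0.0.1", "10.0.1.1", 5002, 2811))
```

Deterministic per-flow hashing keeps packets of one TCP stream on a single path (avoiding reordering) while still letting a many-stream transfer use the aggregate capacity of all links.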