Presentation transcript:

Xiaolin Andy Li, University of Florida
 Provide 100 Gbps switch
 Internet2 Innovation Platform FLR link to Jacksonville
 Use SDN to enable and optimize data flow (see the sketch below)
 CMS LHC experiment
 iDigBio project
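
The SDN goal above is about steering large science flows (CMS, iDigBio) onto the high-bandwidth research path. As a rough illustration only, the sketch below shows how an OpenFlow 1.3 controller app written against the Ryu framework could pin traffic from a storage subnet to a dedicated port; the subnet, port number, and priority are assumed placeholders, not details from this presentation.

```python
# Minimal Ryu (OpenFlow 1.3) sketch: steer traffic from an assumed CMS/iDigBio
# storage subnet out the switch port facing the 100G research path.
# CMS_SUBNET and RESEARCH_PORT are illustrative placeholders.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

CMS_SUBNET = '10.13.0.0/16'   # assumed science-data subnet
RESEARCH_PORT = 1             # assumed port toward the 100G research wave


class ScienceFlowSteering(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match IPv4 traffic sourced from the science-data subnet...
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=CMS_SUBNET)
        # ...and send it out the port attached to the research path.
        actions = [parser.OFPActionOutput(RESEARCH_PORT)]
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

Such an app would be launched with ryu-manager against the OpenFlow-enabled switches; the actual controller and flow policy used at UF are not described in these slides.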

Campus Research Network (CRN) Upgrade
 Hardware Upgrade
  ◦ Brocade for the major portion of the network.
  ◦ Pronto for Day 1 SDN.
 Dramatic increase in bandwidth (see the back-of-envelope sketch after this list)
  ◦ 200 Gbps for Tier 1 sites, 80 Gbps for Tier 2 sites.
  ◦ Legacy sites remain at 10-20 Gbps.
  ◦ Support higher external bandwidth from FLR via a 100G research wave.
 SDN/OpenFlow capability
  ◦ Day 1: OpenFlow/SDN testbed using Pronto switches.
  ◦ Future: full OpenFlow/SDN support on all major network elements (MLXe, ICX).
 GENI Racks
  ◦ ACIS Data Center (IBM) and UF Data Center (in discussion with Dell).
 Timeline
  ◦ Central node goes live Jan 31st.
  ◦ Other nodes follow in February/March.
  ◦ Pronto nodes in March/April.
  ◦ GENI racks: installation in March/April.
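
As a back-of-envelope illustration of what the bandwidth increase means for data movement, the snippet below computes ideal transfer times for a large dataset at the legacy, Tier 2, and Tier 1 rates quoted above; the 100 TB dataset size and 90% efficiency factor are assumptions for illustration, not figures from the slide.

```python
# Back-of-envelope: ideal transfer time for a large CMS-style dataset at the
# old and new CRN rates.  The 100 TB size and 90% efficiency are illustrative
# assumptions, not numbers from the presentation.
DATASET_TB = 100            # assumed dataset size, terabytes
EFFICIENCY = 0.90           # assumed fraction of line rate actually achieved

def transfer_hours(link_gbps: float) -> float:
    """Hours to move DATASET_TB over a link of link_gbps (single ideal flow)."""
    bits = DATASET_TB * 1e12 * 8                    # terabytes -> bits
    seconds = bits / (link_gbps * 1e9 * EFFICIENCY)
    return seconds / 3600

for rate in (10, 20, 80, 200):   # legacy low/high, Tier 2, Tier 1 (Gbps)
    print(f"{rate:>3} Gbps: {transfer_hours(rate):6.1f} hours")
```

Under these assumptions, a 100 TB dataset that takes roughly a day to move at 10 Gbps moves in a bit over an hour at 200 Gbps.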

CRNv1 to CRNv2 Upgrade

GENI Planning and Integration
 POC for GENI and GatorCloud:
  ◦ Xiaolin “Andy” Li, ECE
 POC for CC-NIE:
  ◦ Erik Deumens, Research Computing, IT
 Deployment:
  ◦ Chris Griffin, CNS, IT
 Research:
  ◦ P. Avery, J. Fortes, R. Figueiredo, S. Asseng, W. Farmerie, S. Ranka, A. George, E. Ford, L. McIntyre, L. Page, D. Wu