Physical Buildout of the OptIPuter at UCSD

What Speeds and Feeds Have Been Deployed Over the Last 10 Years?
[Chart adapted from Scientific American, January 2001: performance per dollar spent vs. number of years, annotated with uplink speed, DWDM capability, and endpoint speed (10 Mb, 1000 Mb, 10000 Mb) across the Wiglaf, Rockstar, and OptIPuter infrastructure generations.]

The UCSD OptIPuter Deployment: UCSD is Prototyping a Campus-Scale OptIPuter
[Campus map, roughly ½ mile across: dedicated fibers between sites link Linux clusters at SIO, SDSC, the SDSC Annex, CRCA, Phys. Sci - Keck, SOM, JSOE, the Preuss School, 6th College, Earth Sciences, Medicine, and Engineering, with a collocation point connecting to CENIC and NLR. Switching: Chiaro Enstara, Juniper T-series (Tbps backplane bandwidth) at Calit2, and Cisco 10GigE.]
Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2

UCSD Packet Test Bed: OptIPuter Year 2

A Different Kind of Experimental Infrastructure
UCSD Campus Infrastructure
– A campus-wide experimental apparatus
Different kinds of cluster endpoints (scaling in the usual dimensions)
– Compute
– Storage
– Visualization
– 300+ nodes available for experimentation (ia32, Opteron, Linux)
– 7 different labs
Clusters and network can be allocated and configured by the researcher at the lowest level
– Machine SW configuration: OS (kernel, networking modules, etc.), middleware, OptIPuter system software, application software
– Root access given to researchers when needed
– As close to chaos as we can get
Networks
– Packet-oriented network: 10 Gbps/site, multiple 10GigE where needed
– Adding lambda capability (Quartzite: Research Instrumentation Award)

What’s Coming Soon?
10 GigE Switching
– Force10 E1200, initially with sixteen 10GigE connections
– Expansion is $6K/port + optics ($2K for grey, $5K for DWDM); see the cost sketch after this slide
– Line cards and grey optics are here; awaiting chassis
– Force10 S50 edge switches: 48-port GigE + two 10GigE uplinks, ~$10K with grey optics
10 GigE NICs
– Neterion PCI-X (Intel OEM) with XFP (just received)
– Myrinet 10G (PCI Express): ready to place order
DWDM
– On order: four 10GigE XFPs, 40 km, channels 31 and 32 (2 each)
– Delayed: expect arrival in March (sigh)
– Following NASA’s lead on the DWDM hardware (very good results on DRAGON)
– Arrived: two 8-channel mux/demux units from Finisar
DWDM Switching
– Expect a wavelength-selective switch this summer
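A quick illustration of those per-port figures (only the prices quoted on the slide are used; the 16-port build-out is an assumed example, not a purchase plan):

```python
# Rough cost sketch using the per-port prices quoted on the slide:
# $6K per 10GigE port plus optics at $2K (grey) or $5K (DWDM).
PORT_COST_K = 6
OPTICS_K = {"grey": 2, "dwdm": 5}

def expansion_cost_k(ports, optics):
    """Total expansion cost in $K for a given optics type (chassis excluded)."""
    return ports * (PORT_COST_K + OPTICS_K[optics])

for optics in ("grey", "dwdm"):
    print(f"16 ports with {optics} optics: ${expansion_cost_k(16, optics)}K")
# 16 ports with grey optics: $128K
# 16 ports with dwdm optics: $176K
```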

What’s Changing II
"Center switching complex" moving to Calit2
– Should be done by end of March
A modest number of endpoints for OptIPuter research will be added
A larger number of "production" resources (e.g. CAMERA) will be added
Increasing emphasis on longer-haul connections
– Connections to UCI

Quartzite: Reconfigurable Networking
NSF Research Instrumentation, Papadopoulos, PI
Packet network is great
– Gives me bigger and faster versions of what I already know
– Even though TCP is challenged on big pipes
– What about lambdas? And switching lambdas?
Existing fiber plant is fixed
– Want to experiment with different topologies? -> "buy" a telecom worker to reconnect cables as needed
Quartzite: Research Instrumentation Award (started 15 Sep)
– Hybrid network "switch stack" at our collocation point
– Packet switch
– Transparent optical switch
– Allows us to build new physical topologies without manual rewiring
– Wavelength-selective switch
– Experimental device from Lucent

Quartzite: DWDM
[Diagram: a single fiber pair carrying DWDM channels at 0.8 nm spacing, using cheap uncooled lasers and 0 W (passive) optical splitters/combiners; 1GigE and 10GigE channels, bonded or separate. Cost annotations: $5K/XFP, $2K/channel (mux/demux), $10K/switch; roughly $14K per connected pair.]

UCSD Quartzite Core at Completion (Year 5 of OptIPuter)
Funded 15 Sep 2004
Physical hardware to enable OptIPuter and other campus networking research
Hybrid network instrument
Reconfigurable network and endpoints

Scalable and Automated Network Mapping for the OptIPuter/Quartzite Network
OptIPuter AHM Meeting, San Diego, CA, January 2006
Praveen Jagadishprasad and Hassan Elmadi (Calit2, UCSD); Phil Papadopoulos and Mason Katz (SDSC)

Network Map (01/16/2006)

Motivation
Management
– Inventory
– Troubleshooting
Programming the network
– Ability to view and manipulate the network as a single entity
– Aid network reconfiguration in a heterogeneous network
– Experimental networks have a high degree of reconfiguration
  – Glimmerglass-based physical changes
  – VLAN-based logical topology changes
– Final goal is to automate the reconfiguration process, focusing on the switch/router configuration process

Automated Discovery
Minimal input needed
– One gateway might be sufficient
SNMP-based discovery
– Not tied to a vendor protocol
– Tested with Cisco, HP, Dell, Extreme, etc.
– Almost all major vendors support SNMP
Fast
– Discovery process is highly threaded
– 3 minutes for the UCSD OptIPuter network (~600 hosts and 20 switches)
Framework-based
– Extensible to include MIBs for specific switch/router models, for example:
  – Cisco VLANs
  – Extreme trunking
(A minimal sketch of a threaded SNMP walk follows this slide.)
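The discovery tool itself is not shown in these slides. As a hedged sketch of the general approach only, the snippet below walks the standard BRIDGE-MIB forwarding table (dot1dTpFdbPort) on several switches in parallel using the net-snmp snmpwalk CLI; the switch addresses, community string, and thread count are placeholder assumptions, not values from the OptIPuter deployment.

```python
# Minimal sketch (not the actual OptIPuter tool): walk the standard
# BRIDGE-MIB forwarding table on several switches concurrently, using
# the net-snmp "snmpwalk" CLI. Switch addresses and the community
# string are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

DOT1D_TP_FDB_PORT = "1.3.6.1.2.1.17.4.3.1.2"   # learned MAC -> bridge port
COMMUNITY = "public"                            # assumed read community

def walk_fdb(switch):
    """Return the raw snmpwalk output of the forwarding table for one switch."""
    cmd = ["snmpwalk", "-v2c", "-c", COMMUNITY, switch, DOT1D_TP_FDB_PORT]
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=30)
        return switch, out.stdout
    except subprocess.TimeoutExpired:
        return switch, ""

if __name__ == "__main__":
    switches = ["10.0.0.1", "10.0.0.2"]          # hypothetical switch IPs
    with ThreadPoolExecutor(max_workers=16) as pool:
        for switch, table in pool.map(walk_fdb, switches):
            print(switch, table.splitlines()[:5])  # show a few entries
```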

Design for Discovery and Mapping
Phase 1 (Layer 3)
– Router discovery
– Subnet discovery
Phase 2 (Layer 2)
– Switch discovery
– Host discovery
– Switch-host mapping
– IP-ARP mapping
Phase 3
– Network mapping
– Form an integrated map through novel algorithms
– Area of research
Phase 4
– Web-based visualization
– Database storage
(A small example of joining the phase-2 tables appears after this slide.)
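For illustration only (the values below are made up, not UCSD data): joining a router's ARP cache (IP to MAC, from the "IP-ARP mapping" step) with a switch's forwarding table (MAC to switch/port) yields the switch-host mapping that phase 3 consumes.

```python
# Illustrative sketch of how phase-2 outputs could be joined: the router's
# ARP cache (IP -> MAC) and each switch's forwarding table (MAC -> port)
# together place hosts on specific switch ports. Data values are made up.
arp_cache = {                      # from routers: IP -> MAC
    "172.16.1.10": "00:11:22:33:44:55",
    "172.16.1.11": "00:11:22:33:44:66",
}
fdb = {                            # from switches: MAC -> (switch, port)
    "00:11:22:33:44:55": ("edge-sw-1", 12),
    "00:11:22:33:44:66": ("edge-sw-2", 3),
}

# Phase-2 result: host IP -> (switch, port), feeding the phase-3 map.
host_location = {ip: fdb[mac] for ip, mac in arp_cache.items() if mac in fdb}
print(host_location)
# {'172.16.1.10': ('edge-sw-1', 12), '172.16.1.11': ('edge-sw-2', 3)}
```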

Future Work
Reliable discovery of logical topology (VLANs)
Automate generation of switch/router configs
– Use physical topology information to aid config generation
– Fixed templates for each switch/router model
– Templates are extended depending on the configuration needed
Batch configuration of switches/routers
– Support custom VLANs with only an end-host specification
– Constructing a spanning tree of the end hosts and intermediate switches/routers
– Schedule dependencies for step-by-step configuration
– Physical topology information is essential
(A sketch of the spanning-tree idea follows this slide.)
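The slides do not give an algorithm; purely as a sketch of the idea under simple assumptions (a hypothetical topology, and a union of shortest paths standing in for a proper spanning tree), the snippet below finds the switches that must carry a VLAN connecting a given set of end hosts.

```python
# Sketch of the "spanning tree of end hosts and intermediate switches" idea:
# given a physical-topology graph and the hosts that should share a VLAN,
# take the union of shortest paths between them; every switch on those
# paths must carry the VLAN. Topology and names are hypothetical.
from collections import deque

topology = {                       # adjacency list: node -> neighbours
    "hostA": ["sw1"], "hostB": ["sw3"],
    "sw1": ["hostA", "sw2"], "sw2": ["sw1", "sw3"], "sw3": ["sw2", "hostB"],
}

def shortest_path(graph, src, dst):
    """Plain BFS shortest path; returns the list of nodes src..dst."""
    prev, seen, q = {}, {src}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in seen:
                seen.add(nbr)
                prev[nbr] = node
                q.append(nbr)
    return []

vlan_hosts = ["hostA", "hostB"]
members = set()
for i, a in enumerate(vlan_hosts):
    for b in vlan_hosts[i + 1:]:
        members.update(shortest_path(topology, a, b))

switches_to_configure = sorted(n for n in members if n.startswith("sw"))
print(switches_to_configure)       # ['sw1', 'sw2', 'sw3']
```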

OptIPuter Network Inventory Management – Logical View
Logical topology adds a VLAN table to the physical topology tables
– A VLAN is composed of trunks
– Each trunk can be a single or multiple port-to-port connections between the same set of switches
– Schema supports retaining the VLAN id when modifying trunks, and vice versa
[Graph: logical topology of a single VLAN]
(A rough sketch of this schema appears after this slide.)
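A rough sketch of that schema as in-memory objects (class and field names are illustrative, not the actual database schema):

```python
# Hypothetical rendering of the logical-topology schema described above:
# a VLAN is composed of trunks, and each trunk is one or more port-to-port
# links between the same pair of switches. The VLAN id is kept even when
# its trunks are edited.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Link:                 # one physical port-to-port connection
    port_a: int
    port_b: int

@dataclass
class Trunk:                # all links of a trunk join the same switch pair
    switch_a: str
    switch_b: str
    links: List[Link] = field(default_factory=list)

@dataclass
class Vlan:
    vlan_id: int
    name: str
    trunks: List[Trunk] = field(default_factory=list)

# Example instance with made-up switch and port names:
quartzite_vlan = Vlan(100, "optiputer-experiment", [
    Trunk("sw-core", "sw-edge-1", [Link(1, 24), Link(2, 25)]),
])
print(quartzite_vlan.vlan_id, [t.switch_b for t in quartzite_vlan.trunks])
```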

Look at Parallel Data Serving
128-node Rockstar cluster (same as the SC2003 build), 1 SCSI drive per file-server node
[Diagram: repeated groups of 8 Lustre clients and 10 Lustre file servers attached to 48-port GigE switches with 10GigE uplinks]

Basic Performance
32, 8, 16, and 4 nodes reading the same 32 GB file
– Under these ideal circumstances, able to read more than 1.4 GB/sec from disk
Writing different 10 GB files from each node: about 700 MB/s
(Back-of-the-envelope numbers from these figures appear below.)
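Back-of-the-envelope arithmetic using only the figures quoted above (an illustration, not the benchmark itself; the even per-client split is an assumption):

```python
# Quoted figures: ~1.4 GB/s aggregate read rate, ~0.7 GB/s aggregate write rate.
AGG_READ_GBPS = 1.4
AGG_WRITE_GBPS = 0.7

for clients in (4, 8, 16, 32):
    share = AGG_READ_GBPS / clients              # assumed even split, GB/s each
    print(f"{clients:2d} readers -> ~{share * 1000:.0f} MB/s per client")

# One 10 GB file written at the aggregate write rate:
print(f"10 GB / {AGG_WRITE_GBPS} GB/s = ~{10 / AGG_WRITE_GBPS:.0f} s")
```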

Why a Hybrid Structure?
Create different physical topologies quickly
Change whether a site/node is connected via packet, lambda, or a hybrid combination
– Want to understand the practical challenges in different circumstances
Circuits don't scale in the Internet sense
Packet switches will be congested for long-haul traffic
– Real QoS is unreachable in the ossified Internet
The engineering compromise is likely a hybrid network
– Packet paths always exist (the Internet scalability argument)
– Circuit paths on demand
– Think private high-speed networks, not just point-to-point

Summary
OptIPuter is addressing a subset of the research needed to figure out how to waste (I mean utilize) bandwidth
Work at multiple levels of the software stack: protocols, virtual machine construction, storage retrieval
Trying to understand how lambdas are presented to applications
– Explicit?
– Hidden?
– Hybrid?
Building an experimental infrastructure as large as our budget will allow
– OptIPuter is already international in scale at 10 gigabits
– Approximating the terabit campus with Quartzite