Southern California Infrastructure
Philip Papadopoulos, Greg Hidley


UCSD Packet Test Bed OptIPuter Year 2

Goals
Expand Our Campus-Wide Research Instrument
 – Support of Researcher Needs: Focus on Application Needs
 – Deployment of Scalable Endpoints
 – Continued Evolution of the Packet-Based OptIPuter
 – Begin Deployment of 10 GigE Technologies
 – Drive Towards Goal of 1:1 Bisection Bandwidth (5:1 for This Year)
Evaluation of Parallel Storage Systems with Remote Access
Expand the UCSD-Based OptIPuter from Campus-Only to Southern California and Eventually to Chicago

Year 2 Accomplishments
Staffing
 – Hired UCSD OptIPuter Project Manager Aaron Chin
 – Additional 1.5 FTE of Effort Towards Infrastructure Deployment: Max Okumoto, Sean O’Connell, David Hutches, Mason Katz
Node Buildout
 – 48-Node (IA-32), 21 TB (300 Spindles) IBM Storage Cluster (JSOE)
 – 128-Node (IA-32) Sun Compute Cluster (SDSC)
 – 22-Node (Opteron) Sun Viz Cluster (SOM) GeoWall2
 – 3-Node (IA-32) Shuttle Viz Cluster (CRCA) Display Wall
 – Expected by 10/1/04: 3 Sun (Opteron) Clusters, Size to Be Determined
Network Buildout
 – 1 Gigabit Transport: Connectivity to Campus VLAN Infrastructure and Border Router (T320)
 – 10 Gigabit Transport: Single Interface for Chiaro
 – Extreme 400 (48-Port GigE with Two 10GigE Uplinks) Layer 2 Fabric, 1 Each at JSOE, CSE and SDSC
 – 5:1 Bisection for the Storage Cluster
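The 5:1 bisection figure above can be sanity-checked with simple oversubscription arithmetic: edge bandwidth divided by uplink bandwidth. This sketch assumes a single active 10GigE uplink behind the 48 GigE node ports — the slides do not say how many of the two uplinks were in service, so that assumption is ours.

```python
# Sketch: oversubscription (bisection) ratio of an edge switch.
# Assumption (not stated on the slide): 48 nodes at 1 GigE each,
# behind a single active 10 GigE uplink.
def oversubscription(node_count, node_gbps, uplink_gbps):
    """Ratio of aggregate node bandwidth to uplink bandwidth."""
    return (node_count * node_gbps) / uplink_gbps

ratio = oversubscription(48, 1, 10)
print(f"{ratio:.1f}:1")  # 4.8:1, i.e. roughly the quoted 5:1
```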

Making the Campus-Wide OptIPuter Usable
Solving Network Issues
 – Converted to Public IP Address Space to Facilitate Off-Campus Connectivity
 – 4× GigE Channel Bonding: Worked with Chiaro to Improve Performance
   – The Load-Balancing Algorithm Among Links Is Handled Differently at Layer 2 (Dell) and Layer 3 (Chiaro)
   – 35% of Bonded Link Capacity Before the “Fix”, 60% Afterwards; L2-to-L2 (Dell-to-Dell) Is 95% for Comparison
 – PVFS: 8 IBM Storage Nodes Running as PVFS Servers
   – Clients Running at SIO, JSOE and SOM
   – Performance Reasonable but Not Significantly Faster than 1 Gigabit
Developing Support for Users
 – Account Request Tools
 – Network Monitoring Facilities
 – Technical Network Configuration Information
 – Node Configuration Information
 – IP Allocation Map
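To put the utilization percentages above in absolute terms, this sketch converts them into achieved throughput over the 4 × 1 GigE bond (4 Gb/s aggregate capacity); only the arithmetic is shown, the percentages are taken directly from the slide.

```python
# Sketch: absolute throughput implied by the quoted utilization
# fractions of a 4 x 1 GigE bonded link (4 Gb/s aggregate).
BOND_GBPS = 4 * 1.0

def achieved_gbps(utilization):
    """Throughput at a given fraction of bond capacity."""
    return utilization * BOND_GBPS

for label, frac in [("before fix", 0.35), ("after fix", 0.60), ("L2-to-L2", 0.95)]:
    print(f"{label}: {achieved_gbps(frac):.1f} Gb/s")
# before fix: 1.4 Gb/s, after fix: 2.4 Gb/s, L2-to-L2: 3.8 Gb/s
```

The last line explains why PVFS performance "not significantly faster than 1 Gigabit" was a real concern: even the post-fix bond left more than a third of the aggregate capacity unused.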

SoCal CalREN-XD OptIPuter Build-Out
Expected Completion: July 2004

Expanding the OptIPuter LambdaGrid
 – Allows us to build OptIPuters at larger diameters (latency metric)
 – Significant enabler for investigating hybrid networks (packet + circuit)
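The "latency metric" for diameter can be made concrete with a back-of-the-envelope propagation-delay calculation. Light in fiber travels at roughly two-thirds of c (about 200,000 km/s); the ~3,300 km route length from San Diego to Chicago used below is our assumption, not a figure from the slides.

```python
# Sketch: propagation delay as the "diameter" of a LambdaGrid.
C_FIBER_KM_S = 300_000 / 1.5  # ~200,000 km/s, light in glass

def one_way_ms(route_km):
    """One-way propagation delay in milliseconds over a fiber route."""
    return route_km / C_FIBER_KM_S * 1000

sd_to_chicago = 3300  # km of fiber route, assumed
print(f"one-way: {one_way_ms(sd_to_chicago):.1f} ms, "
      f"RTT: {2 * one_way_ms(sd_to_chicago):.1f} ms")
# one-way: 16.5 ms, RTT: 33.0 ms
```

Propagation delay of this magnitude dwarfs switching delay, which is why expanding from campus-only to SoCal and then Chicago changes the character of the instrument rather than just its size.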

Year 3 Plans: Enhance Campus OptIPuter
A Substantial Portion of the Physical Build Completes in Year 2
 – Endpoints, Cross-Campus Fiber, Commodity Endpoints
Increase Campus Bandwidth
 – Work Towards More Extensive 10GigE Integration
 – OptIPuter HW Budget Is Limited in Year 3; Focus Is on Network Extension
 – Connect Two Campus Sites with 32-Node Clusters at 10GigE
 – 3:1 Campus Bisection Ratio
Add/Expand a Moderate Number of New Campus Endpoints
 – Add New Endpoints into the Chiaro Network: UCSD Sixth College, JSOE (Engineering) Collaborative Visualization Center, New Calit2 Research Facility
 – Add 3 General-Purpose Sun Opteron Clusters at Key Campus Sites (Compute and Storage); Clusters Will All Have PCI-X (100 MHz, 1 Gbps)
 – Deploy InfiniBand on Our IBM Storage Cluster and on a Previously Donated Sun 128-Node Compute Cluster
Complete Financial Acquisition of the Chiaro Router

Year Three Goals
Integrate New NSF Quartzite MRI
 – Goal: Integration of Packet-Based (SoCal) and Circuit-Based (Illinois) Approaches into a Hybrid System
 – Add Additional O-O-O Switching Capabilities Through a Commercial (Calient or Glimmerglass) All-Optical Switch and the Lucent (Pre-Commercial) Wavelength-Selective Switch
 – Begin CWDM Deployment to Extend Optical Paths Around UCSD and Provide Additional Bandwidth
 – Add Additional 10GigE in Switches and Cluster-Node NICs
The MRI Proposal (Quartzite, Recommended for Funding) Allows Us to Match the Network to the Number of Existing Endpoints
This Is a New Kind of Distributed Instrument
 – 300+ Components Distributed Over the Campus
 – Simple and Centralized Control for Other OptIPuter Users

UCSD Quartzite Core at Completion (Year 5 of OptIPuter)
 – Recommended for Funding
 – Physical HW to Enable OptIPuter and Other Campus Networking Research
 – A Hybrid Network Instrument