SoCal Infrastructure
OptIPuter Southern California Network Infrastructure
Philip Papadopoulos, OptIPuter Co-PI
Program Director, Grids and Clusters, San Diego Supercomputer Center
University of California, San Diego
September 2003

UCSD Heavy Lifters
– Greg Hidley, School of Engineering, Director of Cal-(IT)2 Technology Infrastructure
– Mason Katz, SDSC, Cluster Development Group Leader
– David Hutches, School of Engineering
– Ted O’Connell, School of Engineering
– Max Okumoto, School of Engineering

Year 1 Mod-0, UCSD

Building an Experimental Apparatus
Mod-0 OptIPuter: Ethernet (Packet) Based
– Focused as an Immediately-usable High-bandwidth Distributed Platform
– Multiple Sites on Campus (a Few Fiber Miles)
– Next-generation, Highly-scalable Optical Chiaro Router at Center of Network
Hardware Balancing Act
– Experiments Really Require Large Data Generators and Consumers
– Science Drivers Require Significant Bandwidth to Storage
– OptIPuter Predicated on Price/Performance Curves of > 1 GE Networks
System Issues
– How Does One Build and Manage a Reconfigurable Distributed Instrument?

Raw Hardware
Center of UCSD Network is a Chiaro Internet Router
– Unique Optical Cross-connect Scales to 6.4 Tbit/s Today
– We Have the 640 Gigabit “Starter” System
– Has “Unlimited” Bandwidth from Our Perspective
– Programmable Network Processors
– Supports Multiple Routing Instances (Virtual Cut-through)
– “Wild West” OptIPuter-routed (Campus)
– High-performance Research in Metro (CalREN-HPR) and Wide-area
– Interface to Campus Production Network with Appropriate Protections
Endpoints are Commodity Clusters
– Clustered Commodity-based CPUs, Linux, GigE on Every Node
– Differentiated as Storage vs. Compute vs. Visualization
– > $800K of Donated Equipment from Sun and IBM
– 128-Node (256 Gbit/s) Intel-based Cluster from Sun (Delivered 2 Weeks Ago)
– 48-Node (96 Gbit/s), 21 TB (~300 Spindles) Storage Cluster from IBM (In Process)
– SIO Viz Cluster Purchased by Project
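The per-cluster aggregate figures quoted above are consistent with two Gigabit Ethernet links per node. As a minimal sketch, assuming dual GigE per node (the slide does not actually state NIC counts), the arithmetic checks out:

```python
# Aggregate cluster bandwidth in Gbit/s.
# links_per_node=2 is an assumption (dual GigE), not stated on the slide.
def aggregate_gbits(nodes, links_per_node=2, link_gbits=1):
    return nodes * links_per_node * link_gbits

print(aggregate_gbits(128))  # Sun Intel-based cluster -> 256
print(aggregate_gbits(48))   # IBM storage cluster     -> 96
```

Either way, both clusters together remain far below the router's 640 Gbit/s starter capacity, which is why the slide calls its bandwidth "unlimited" from the project's perspective.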

Raw Campus Fiber Plant: First Find the Conduit

Storewidth Investigations: General Model
[Diagram: a Viz, Compute, or other Clustered Endpoint and a Storage Cluster, each with a Local Cluster Interconnect and an Aggregation Switch, connected through the Chiaro router. Each storage node runs httpd + PVFS, presenting a Large Virtual Disk with Multiple Network and Drive Pipes: Parallel Pipes, Large Bisection, Unified Name Space.]
Symmetric “Storage Service” Baseline:
– 1.6 Gbit/s (200 MB/s) – 6 Clients & Servers (HTTP, 1 GB File)
– 1.1 Gbit/s (140 MB/s) – 7 Clients & Servers (davFS, 1 GB File)
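The storewidth idea is that a large file is striped across several storage servers, and a client pulls stripes from all of them in parallel to aggregate their network pipes. The following is an illustrative sketch of the striping bookkeeping only, not PVFS's actual layout algorithm; the server names and stripe size are assumptions:

```python
# Sketch: map each stripe of a large file to a (server, offset, length)
# fetch. Stripe size and server names are illustrative assumptions.
STRIPE = 64 * 1024 * 1024  # 64 MB stripes (assumed)
SERVERS = ["storage-%d.example.edu" % i for i in range(6)]  # hypothetical

def stripe_plan(file_size):
    """Round-robin each stripe across the storage servers."""
    plan = []
    for offset in range(0, file_size, STRIPE):
        server = SERVERS[(offset // STRIPE) % len(SERVERS)]
        plan.append((server, offset, min(STRIPE, file_size - offset)))
    return plan

# A 1 GB file, as in the baseline measurement, yields 16 stripes
# spread over the 6 servers; fetching them concurrently is what lets
# the aggregate approach 1.6 Gbit/s rather than one server's 1 GigE.
plan = stripe_plan(1024 ** 3)
print(len(plan))  # 16
```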

Year 2 – Mod-0, UCSD

Southern Cal Metro Extension, Year 2

Aggregates
Year 1 (Network Build)
– Chiaro Router Purchased, Installed, Working (Feb)
– 5 Sites on Campus, Each with 4 GigE Uplinks to Chiaro
– Private Fiber, UCSD-only
– ~40 Individual Nodes, Most Shared with Other Projects
– Endpoint Resource Poor, Network Rich
Year 2 (Endpoint Enhancements)
– Chiaro Router: Additional Line Cards, IPv6, Starting 10 GigE Deployment
– 8 Sites on Campus + 3 Metro Sites
– Multiple Virtual Routers for Connection to Campus, CENIC HPR, Others
– > 200 Nodes, Most Donated (Sun and IBM), Most Dedicated to OptIPuter
– InfiniBand Test Network on 16 Nodes + Direct IB Switch to GigE
– Enough Resource to Support Data-intensive Activity, Slightly Network Poor
Year 3+ (Balanced Expansion Driven by Research Requirements)
– Expand 10 GigE Deployments
– Bring Network, Endpoint, and DWDM (Mod-1) Forward Together
– Aggregate at Least a Terabit (Both Network and Endpoints) by Year 5
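The Year 1 numbers give a feel for the scale of the build-out still ahead. A rough back-of-the-envelope check (illustrative arithmetic only, using figures from the slide):

```python
# Year 1 campus network aggregate vs. the Year 5 terabit target.
year1_uplinks_gbits = 5 * 4 * 1   # 5 sites x 4 GigE uplinks each
terabit_target_gbits = 1000       # "at least a terabit" by Year 5

print(year1_uplinks_gbits)                          # 20
print(terabit_target_gbits / year1_uplinks_gbits)   # 50x growth needed
```

A 50x jump over four years explains why the plan leans on 10 GigE deployment and balanced DWDM (Mod-1) expansion rather than more GigE uplinks.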

Managing a Few Hundred Endpoints
Rocks Toolkit Used on over 130 Registered Clusters, Several Top500 Clusters
– Descriptions Easily Express Different System Configurations
– Support IA32 and IA64; Opteron in Progress
OptIPuter is Extending the Base Software
– Integrate Experimental Protocols/Kernels/Middleware into Stack
– Build Visualization and Storage Endpoints
– Adding Common Grid (NMI) Services through Collaboration with GEON/BIRN
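The "descriptions express different system configurations" point is the key to managing storage, compute, and viz endpoints from one toolkit: each appliance type is a small delta on a shared base recipe. The sketch below illustrates that idea in miniature; it is not the actual Rocks description format (Rocks uses XML configuration graphs), and the package and service names are hypothetical:

```python
# Illustrative only: appliance descriptions as deltas on a common base,
# in the spirit of Rocks' configuration graphs. Names are hypothetical.
BASE = {"packages": ["kernel", "ssh"], "services": ["ganglia"]}

APPLIANCES = {
    "compute": {"packages": ["mpi"],    "services": []},
    "storage": {"packages": ["pvfs"],   "services": ["httpd"]},
    "viz":     {"packages": ["opengl"], "services": []},
}

def node_config(appliance):
    """Merge the base description with an appliance-specific delta."""
    delta = APPLIANCES[appliance]
    return {
        "packages": BASE["packages"] + delta["packages"],
        "services": BASE["services"] + delta["services"],
    }

print(node_config("storage"))
# {'packages': ['kernel', 'ssh', 'pvfs'], 'services': ['ganglia', 'httpd']}
```

Extending the stack, as the OptIPuter bullets describe, then amounts to editing the shared base (experimental kernels, Grid/NMI services) or adding new appliance deltas (visualization, storage) rather than hand-configuring hundreds of nodes.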