CENIC is Removing the Inter-Campus Barriers in California: ~$14M Invested in Upgrade; Now Campuses Need to Upgrade (Source: Jim Dolgonas, CENIC)

Presentation transcript:


The “Golden Spike” UCSD Experimental Optical Core: Ready to Couple Users to CENIC L1, L2, L3 Services
Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite MRI PI, OptIPuter co-PI); funded by an NSF MRI Grant
Hardware: Lucent Glimmerglass, Force10 OptIPuter Border Router, Cisco 6509; CENIC L1, L2 services
Currently:
– >= 60 endpoints at 10 GigE
– >= 30 packet switched
– >= 30 switched wavelengths
– >= 400 connected endpoints
Approximately 0.5 Tbps arrive at the “optical” center of the hybrid campus switch

Network Today: Quartzite

Calit2 Sunlight Optical Exchange Contains Quartzite (Source: Maxine Brown, EVL, UIC, OptIPuter Project Manager)

What the Network Enables:
– Data and computing anywhere on campus
– Always-on high-resolution streaming
– Large-scale data movement without impacting the commodity network
– Complete re-factoring of where network-connected resources are located

Campus Fiber Network Based on Quartzite Allowed the UCSD CI Design Team to Architect Shared Resources
Resources connected at N x 10GbE: UCSD storage, OptIPortal, research clusters, digital collections lifecycle management, a petascale data analysis facility, an HPC system, a cluster condo, the UC Grid pilot, and research instruments (DNA arrays, mass spectrometers, microscopes, genome sequencers)
Source: Phil Papadopoulos, SDSC/Calit2

Triton – A Down Payment on Campus-Scale CI
– Standard compute cluster (256 nodes, 2,048 cores, 6 TB RAM)
– Large-memory cluster (28 nodes, 896 cores, 9 TB RAM)
– Large-scale storage: at a baby stage with 180 TB and 4 GB/sec; goal is ~4 PB and 100 GB/sec bandwidth
– Structure managed with Rocks; an open system
– Will also function as a high-performance cloud platform

TritonResource: initial production on compute systems expected ~June 2009; Data Oasis storage system expected fall 2009

Triton Designed for Particular Apps: Overriding Need for Large-Memory Nodes (512 GB and 256 GB; 4 dedicated as DB servers)
A small sampling:
– Regional ocean circulation (Scripps): scalable algorithm plus a single-node optimization step (>150 GB memory needed)
– 3D tomographic reconstruction of EM images (Medicine): 256–512 GB is “on the small side”
– DNA sequence analysis with short sequence reads: >128 GB
– Human heart full-beat simulation (Bioengineering): 100–200 GB
– Drug discovery and design from first principles

Triton Network Connectivity
– Total switch capacity: 512 x 10 Gbit/sec = 5.12 Tbit/s ($150K)
– 32 x 10GbE to campus networks, including at least 5 x 10GbE to the Quartzite OptIPuter; all external-to-UCSD high-speed networks could terminate on Triton at full rate
– Mid-construction: large-memory nodes integrated into the switch (28 nodes, 40 Gbit/s per node)
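As a sanity check on the figures above, the quoted switch capacity follows directly from the port count. This is illustrative arithmetic only; the variable names are ours, not part of the Triton design documents.

```python
# Sanity-check the Triton switch capacity numbers quoted on the slide.
ports = 512
gbit_per_port = 10
total_gbit = ports * gbit_per_port  # aggregate switch capacity in Gbit/s
print(total_gbit / 1000)            # → 5.12 (Tbit/s)

# Campus-facing share: 32 x 10GbE uplinks out of the 512 ports.
campus_uplinks_gbit = 32 * 10
print(campus_uplinks_gbit)          # → 320 (Gbit/s toward campus networks)
```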

The NSF-Funded GreenLight Project: Giving Users Greener Compute and Storage Options
Measure and control energy usage:
– Sun has shown up to a 40% reduction in energy
– Active management of disks, CPUs, etc.
– Temperature measured at 5 levels in each of 8 racks
– Power utilization measured in each of the 8 racks
– Chilled-water cooling systems
UCSD Structural Engineering Dept. conducted Sun MD tests in May 2007; UCSD (Calit2 & SOM) bought two Sun MDs in May 2008
Source: Tom DeFanti, Calit2; GreenLight PI

The GreenLight Project: Instrumenting the Energy Cost of Computational Science
Focus on 5 communities with at-scale computing needs:
– Metagenomics
– Ocean observing
– Microscopy
– Bioinformatics
– Digital media
Measure, monitor, and web-publish real-time sensor outputs:
– Via service-oriented architectures
– Allow researchers anywhere to study computing energy cost
– Enable scientists to explore tactics for maximizing work/watt
Develop middleware that automates the optimal choice of compute/RAM power strategies for a desired greenness
Partnering with the Minority-Serving Institutions Cyberinfrastructure Empowerment Coalition
Source: Tom DeFanti, Calit2; GreenLight PI
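The work/watt figure of merit mentioned above can be made concrete with a small sketch. This is a hypothetical illustration, not GreenLight middleware: the function names, sample data, and trapezoidal integration are our assumptions about how one might reduce time-stamped power readings to a single efficiency number.

```python
# Hypothetical sketch: deriving a work/watt metric from time-stamped
# power samples, as a rack PDU or GreenLight-style sensor might report.

def joules_from_samples(samples):
    """Integrate (timestamp_sec, watts) pairs into energy in joules
    using the trapezoidal rule."""
    energy = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        energy += (p0 + p1) / 2.0 * (t1 - t0)
    return energy

def work_per_watt(units_of_work, samples):
    """Units of work completed per average watt drawn over the run."""
    energy = joules_from_samples(samples)
    duration = samples[-1][0] - samples[0][0]
    avg_watts = energy / duration
    return units_of_work / avg_watts

# Example: 1000 sequence alignments finished while power was sampled
# every 10 seconds (illustrative numbers).
samples = [(0, 400.0), (10, 420.0), (20, 410.0), (30, 405.0)]
print(work_per_watt(1000, samples))
```

Publishing such a number per application run is one way researchers anywhere could compare tactics for maximizing work/watt.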

Research Needed on How to Deploy a Green CI
– Computer architecture: Rajesh Gupta/CSE
– Software architecture: Amin Vahdat, Ingolf Kruger/CSE
– CineGrid Exchange: Tom DeFanti/Calit2
– Visualization: Falko Kuster/Structural Engineering
– Power and thermal management: Tajana Rosing/CSE
– Analyzing power consumption data: Jim Hollan/Cog Sci
– Direct DC datacenters: Tom DeFanti, Greg Hidley (MRI)

New Techniques for Dynamic Power and Thermal Management to Reduce Energy Requirements
Dynamic Thermal Management (DTM):
– Workload scheduling: machine learning for dynamic adaptation to obtain the best temporal and spatial profiles, with closed-loop sensing
– Proactive thermal management reduces thermal hot spots by an average of 60% with no performance overhead
Dynamic Power Management (DPM):
– Optimal DPM for a class of workloads
– Machine learning to adaptively select among specialized policies
– Sensors and performance counters monitor multitasking and within-task adaptation of voltage and frequency
– Measured energy savings of up to 70% per device
NSF Project GreenLight: green cyberinfrastructure in energy-efficient modular facilities; closed-loop power and thermal management
System Energy Efficiency Lab (seelab.ucsd.edu), Prof. Tajana Šimunić Rosing, CSE, UCSD
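The DPM idea above, adapting frequency (and with it voltage) to observed load, can be sketched as a simple closed-loop threshold policy. This is an illustrative toy, not SEELab's published machine-learning approach; the frequency steps and utilization thresholds are hypothetical.

```python
# Illustrative sketch (not SEELab's algorithm): step CPU frequency down
# when utilization is low and up when it is high, the basic mechanism
# behind DPM via dynamic voltage/frequency scaling.

FREQ_STEPS_MHZ = [800, 1600, 2400, 3200]  # hypothetical P-states

def next_frequency(current_mhz, utilization):
    """Choose the next frequency step from observed utilization (0.0-1.0)."""
    i = FREQ_STEPS_MHZ.index(current_mhz)
    if utilization > 0.8 and i < len(FREQ_STEPS_MHZ) - 1:
        return FREQ_STEPS_MHZ[i + 1]  # busy: step up for performance
    if utilization < 0.3 and i > 0:
        return FREQ_STEPS_MHZ[i - 1]  # mostly idle: step down to save power
    return current_mhz                # in the comfort band: hold steady

# Closed loop over a bursty utilization trace.
freq = 1600
for util in [0.1, 0.2, 0.9, 0.95, 0.4, 0.1]:
    freq = next_frequency(freq, util)
print(freq)  # → 1600
```

A learning-based version would replace the fixed thresholds with a policy selected per workload class, which is where the reported up-to-70% per-device savings come from.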