Millennium Overview and Status. David Culler and Jim Demmel, Computer Science Division.

Presentation transcript:

Slide 1: Millennium Overview and Status. David Culler and Jim Demmel, Computer Science Division.

Slide 2 (Intel Visit, 8/28/98): MM Proposed Cluster of Clusters
[Diagram: departmental group clusters (SIMS, C.S., E.E., M.E., BMRC, N.E., IEOR, C.E., MSME, Transport, Business, Chemistry, Astro Physics, Economy, Math) interconnected with NERSC over Gigabit Ethernet.]

Slide 3: Physical Connectivity
[Diagram only; not captured in the transcript.]

Slide 4: Associated Commitments
Intel provides $6M in equipment
Sun provides all Solaris x86 software
Microsoft provides all NT software
Campus provides staff to support the core infrastructure and networking
CS will try to raise funds for the network and campus cluster infrastructure
Departments provide half of the system administration for their own site and $20K for the group cluster
– racks, network, software, ...

Slide 5: Where are we?
Evolving vision => SimMillennium
Hardware deployment
Software availability
Cluster environment
Grants
Networking

Slide 6: The Vision
To work, think, and study in a computationally rich environment with deep information stores and powerful services:
– test ideas through simulation
– explore and investigate data and information
– share, manipulate, and interact through natural actions
Organized in a manner consistent with the University setting.

Slide 7: SimMillennium Project Goals
Enable major advances in Computational Science and Engineering
– simulation, modeling, and information processing are becoming ubiquitous
Explore novel design techniques for large, complex systems
– the fundamental Computer Science problems ahead are problems of scale
Develop fundamentally better ways of assimilating and interacting with large volumes of information
– and with each other
Explore emerging technologies
– networking, OS, devices

Slide 8: Components of the Effort
Community
Cluster-based resources
Connectivity
User interaction
Computational economics

Slide 9: NSF Investment: Cluster Network
Transforms a large collection of individual resources into a powerful system
– can be focused on a single problem
High bandwidth
– scales with the number of processors (Gb/s per processor)
Low latency
Low overhead
Low cost
Simple and flexible
Almost no errors
Low risk
Today: Myrinet
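The design goals above (low latency, low per-message overhead, bandwidth that scales per processor) can be made concrete with a simple LogGP-style cost model of the kind the Berkeley NOW group used to characterize networks like Myrinet. The parameter values below are illustrative assumptions, not measurements from the Millennium hardware:

```python
def message_time_us(n_bytes, latency_us=10.0, overhead_us=3.0, gap_us_per_byte=0.01):
    """One-way time for an n-byte message in a LogGP-style model:
    send overhead + wire latency + per-byte serialization + receive overhead.
    All parameter values here are invented for illustration."""
    return overhead_us + latency_us + n_bytes * gap_us_per_byte + overhead_us

# Small messages are dominated by the fixed latency and overhead terms,
# so for them cutting per-message overhead matters more than raw bandwidth.
small = message_time_us(64)         # a 64-byte packet
large = message_time_us(64 * 1024)  # a 64 KB bulk transfer
```

With these numbers the 64-byte message pays almost the same fixed cost as a larger one, which is why the slide lists low overhead alongside bandwidth as a first-class goal.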

Slide 10: NSF Investment: Inter-cluster Network
Gigabit Ethernet connecting the group clusters and the campus cluster
Bay Networks provides a 70% discount
Campus provides the fiber plant, maintenance, and staff

Slide 11: NSF Investment: UI Technology
Two projection tables
– large field of view in horizontal (or vertical) orientation
Phantom haptic interface
– 3D force feedback
Motion tracker
– untethered position sensing
3D shutter glasses
– low-cost visualization

Slide 12: User Interaction Research Agenda
Expand access to 3D visualization
– explore any data anywhere
– ease development
Develop a lab-bench metaphor for visualization
– two hands, physical icons
Fast prototyping and exchange through informal interfaces
– sketching
Dealing with large volumes of information
– lenses, brushing and linking
3D collaboration and interaction

Slide 13: Computational Economy
How is this vast, integrated pool of resources managed?
Traditional systems approach: empower a global OS to provide an "optimal" allocation to blind applications
– predefined metric, tuned to a fixed workload
– ignores the inherent adaptation of demand
Computer center approach: charging provides director-to-user feedback according to cost
Economic view: decentralized allocation according to perceived value
– pricing provides user-to-user feedback
– compatible niches, a sense of control, cooperation
– the idea has been around; why now?
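The pricing-feedback idea on this slide can be sketched in a few lines: price rises when demand exceeds capacity and falls when resources sit idle, and users respond to price by backing off. The linear demand curve and adjustment step below are invented for illustration; the slide does not specify a mechanism:

```python
def adjust_price(price, demand, capacity, step=0.1):
    """Raise the price when demand exceeds capacity; lower it when idle."""
    return max(0.0, price * (1 + step * (demand - capacity) / capacity))

def demand_at(price, max_demand=200.0, sensitivity=10.0):
    """Invented linear demand curve: users request less as the price rises."""
    return max(0.0, max_demand - sensitivity * price)

capacity = 100
price = 1.0
for _ in range(200):
    price = adjust_price(price, demand_at(price), capacity)
# The price settles near the point where demand matches capacity
# (with these invented parameters, close to 10.0), with no global
# scheduler in the loop: exactly the decentralized feedback the slide argues for.
```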

Slide 14: NSF Investment: Staff Support
Provide enabling technology and let it evolve
– monitoring, enforcement
– exchange
– negotiation tools
Integrate it into the users' environment
Tools and measurements to determine effectiveness

Slide 15: Integrated Research Agenda
Advance the state of Computational Science and Engineering
– immerse a community in a computationally rich environment with the right tools: algorithms, programming, and system support
– a path to exploiting novel techniques and technology
Explore design techniques for robust large-scale distributed systems
– an economic (or ecological) approach
Explore new ways of interacting with information
– large paste-ups, two hands, sketching, 3D collaboration
Investigate new technology
– SMP nodes, Gigabit Ethernet, SANs, VIA
– NT, DCOM, JavaBeans, directory services
– workbench displays, 3D icons, haptics, position sensors

Slide 16: Perspective
A highly leveraged investment in large-scale infrastructure for studying problems of scale
Deep commitment across the campus
A sense of ownership and participation
A rich research agenda

Slide 17: Nuts and Bolts

Slide 18: Current Environment
All projects have servers and several desktops
– the last few are in the current shipment
millennium.berkeley.edu domain established
Solaris/x86 shared servers with 60 GB disks
– MM.millennium.berkeley.edu
– imap, ...
Solaris/x86 sww served from CS servers
NT domain server with 60 GB disks
– MMNT.millennium.berkeley.edu
– Exchange, file service, ...
First cluster ready for use
NSF SimMillennium grant to cover the network
Two of three Millennium staff hired

Slide 19: Hardware Deployment (per Q2 '98)
[Table only; not captured in the transcript.]

Slide 20: Basic Software Tools
Standard Unix tools in /usr/sww/bin
– 484 packages
GNU tools
– gcc, g++ (version 2.8.1), gdb, g...
Sun ProWorks
– Workshop "development environment"
– C, C++, F77
– debugger
Sun Performance Library in /usr/sww/lib
Sun Math Library
NT dialtone

Slide 21: Cluster Developments
4 x 4 PentiumPro cluster ready
Full NOW environment
– GLUnix, MPI, Active Messages, Split-C
– Titanium prototype
Transferring to Cory to debug deployment
The CS / Civil Engineering consortium has been shaking it out
– petc graph partitioner, finite element classes, ...
– ScaLAPACK (???)
Have parts for three 8x2 Pentium II clusters
– AstroPhysics
– Soda Solaris x86 to be shipped out
– Soda NT cluster
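The graph partitioner and finite element classes mentioned on this slide exist because mesh layout determines communication: the partitioner's objective is to minimize the edge cut, the number of mesh edges whose endpoints land on different processors, since each cut edge turns into a message at every solver step. A toy sketch of that objective (the grid and the two candidate partitions below are invented for illustration):

```python
def edge_cut(edges, part):
    """Count mesh edges whose endpoints fall on different processors;
    each such edge becomes communication in a distributed FEM solve."""
    return sum(1 for u, v in edges if part[u] != part[v])

# A tiny 4x2 grid mesh: nodes 0-3 on the top row, 4-7 directly below.
edges = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7),
         (0, 4), (1, 5), (2, 6), (3, 7)]
good = {0: 0, 1: 0, 4: 0, 5: 0, 2: 1, 3: 1, 6: 1, 7: 1}  # contiguous halves
bad = {0: 0, 2: 0, 4: 0, 6: 0, 1: 1, 3: 1, 5: 1, 7: 1}   # alternating stripes

# A partitioner prefers the contiguous split: edge cut of 2 versus 6.
```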

Slide 22: Grants
NSF CISE Research Infrastructure grant
– cluster networking
– inter-cluster networking (campus management)
– computational economy staff support
– visualization
– devoted to CS research with disciplinary applications
Bay Networks 70% discount, up to $5M
– gigabit networks
IBM SUR disk towers
IBM CS extension to PDA access
NERSC/LBL: DOE2000 Initiative
=> NSF Science and Technology Centers in process

Slide 23: Networking
Campus is using Millennium to drive planning
CNS is working closely with us
Testing Gigabit Ethernet
Campbell, Evans, and Davis moving forward
CS will be able to cover the intra-cluster network (Myrinet) and the gigabit switch in the group clusters
– frees up some of the committed resources

Slide 24: Going Forward
Roll out group clusters over the next few quarters
Roll out the gigabit interconnect
Build up the cluster programming tools
Intel Merced pushed out
– campus cluster delayed
– utilize NOW as an alternative
– combine with NPACI infrastructure
Exploring cluster technology
– VIA
– Synfinity, ServerNet, Gigabit Ethernet
Exploring NT

Slide 25: A New Tier to Millennium
[Diagram: the Gigabit Ethernet backbone extended with a wireless infrastructure tier connecting PDAs, cell phones, and future devices.]