Millennium Overview and Status
David Culler and Jim Demmel, Computer Science Division, U.C. Berkeley

Slide 2: MM Proposed Cluster of Clusters
[Figure: SIMS, C.S., E.E., M.E., BMRC, N.E., IEOR, C.E., MSME, NERSC, Transport, Business, Chemistry, Astro Physics, Economy, and Math clusters joined by Gigabit Ethernet]

Slide 3: Physical Connectivity

Slide 4: Associated Commitments
Intel provides $6M in equipment
Sun provides all Solaris x86 software
Microsoft provides all NT software
Campus provides staff to support the core infrastructure and networking
CS will try to raise funds for the network and campus cluster infrastructure
Departments provide half of the system administration for their own side and $20K for a group cluster
– racks, network, software, ...

Slide 5: Where are we?
Evolving vision => SimMillennium
Hardware deployment
Software availability
Cluster environment
Grants
Networking

Slide 6: The Vision
To work, think, and study in a computationally rich environment with deep information stores and powerful services
– test ideas through simulation
– explore and investigate data and information
– share, manipulate, and interact through natural actions
Organized in a manner consistent with the University setting

Slide 7: SimMillennium Project Goals
Enable major advances in Computational Science and Engineering
– simulation, modeling, and information processing becoming ubiquitous
Explore novel design techniques for large, complex systems
– the fundamental computer science problems ahead are problems of scale
Develop fundamentally better ways of assimilating and interacting with large volumes of information
– and with each other
Explore emerging technologies
– networking, OS, devices

Slide 8: Components of the Effort
Community
Cluster-based resources
Connectivity
User interaction
Computational economics

Slide 9: Component 0: Community
An interdisciplinary community with common interests and a shared view of the future
– strong momentum in computational science and engineering
– members of 17 campus units and NERSC in Intel Millennium
– need and commitment required for participation
– key subset represented in the SimMillennium proposal
Strong progress with the Pacific Earthquake Engineering Research Center
New thrust: NSF Science and Technology Center for the Study of Turbulence in Geophysical and Astrophysical Flows

Slide 10: Component 1: Resources (Millennium)
An environment with vast cluster-based computing power and storage (CLUMPS) behind a personal 3D desktop
[Figure: NT 3D desktop, department SMP, group cluster of SMPs, and campus cluster]

Slide 11: Resource Component Support
Computers via Intel Technology 2000 grant
– 200 NT desktops
– 16 department 4-way SMPs
– 8 5x4 group clusters
– 1 ~100x4 campus cluster
– PPro => Pentium II => Merced
Additional storage via IBM SUR grant
– 0.5 TB this year => 4 TB
NT tools via Microsoft grant
Solaris x86 tools via SMCC grant
Campus provides technical staff
Research provides the programming and system support
Aggregate: 200 Gflop/s, 150 GB memory, 8 TB disk (a rough check follows below)
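As a rough consistency check on the aggregate figure, assuming all four tiers count toward the total and roughly 0.25 Gflop/s per processor (a per-processor rate the slide does not state):

$$200 + 16\cdot 4 + 8\cdot(5\cdot 4) + 1\cdot(100\cdot 4) = 824 \text{ processors}, \qquad 824 \times 0.25\ \text{Gflop/s} \approx 206 \approx 200\ \text{Gflop/s}.$$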

Slide 12: NSF Investment: Cluster Network
Transforms a large collection of individual resources into a powerful system
– can be focused on a single problem
High bandwidth
– scales with the number of processors (Gb/s per processor)
Low latency
Low overhead
Low cost
Simple and flexible
Almost no errors
Low risk
Today: Myrinet

Slide 13: Cluster Research Agenda
Applications grow into the resources
– huge range of needs
– require algorithmic innovation, programming tools, and performance
Dealing with the deep memory hierarchy
– new numerical algorithms on CLUMPs
– new compiler techniques for a parallel object language
Fast multi-protocol communication
Global system at large scale
– Unix vs. NT, single system image vs. objects
Exciting technology turnover
– VIA, SANs, Gigabit Ethernet

Slide 14: Component 2: Connectivity
Create a richly interconnected pool of resources owned by members of the community
– enable transportation of huge data sets and computation
– enable remote visualization and collaboration
– enable extensive sharing of resources
Expand networking technology
[Figure: Campus Cluster linked to CS, EE, CE, ME, Astro/Phys, xport, BIO, and Econ/Math clusters]

Slide 15: NSF Investment: Inter-cluster Network
Gigabit Ethernet connecting group clusters and the campus cluster
Bay Networks provides a 70% discount
Campus provides the fiber plant, maintenance, and staff

Slide 16: Inter-Cluster Research Agenda
Vastly expands the scope of the systems challenge
– integrate well-connected resources according to application needs, rather than physical packaging
– resource allocation, management, and administration
Network bandwidth matches display bandwidth
– protocols and run-time systems for visualization, media transport, interaction, and collaboration
Community can share non-trivial resources while preserving a sense of ownership
– bandwidth translates into efficiency of exchange
– data can be anywhere
Important networking technology in its own right
Emulate networks at Internet scale!

Slide 17: Component 3: User Interaction
High-quality 3D graphics emerging on cost-effective platforms
– desktops and dedicated cluster nodes
– NERSC team provides modern scientific visualization support
The gigabit network allows this to be remote
New displays create a "workbench" environment where large volumes of information can be viewed and manipulated
Trackers and haptic interfaces greatly enhance the degrees of user input
– 3D capture

Slide 18: NSF Investment: UI Technology
Two-projection table
– large field of view in horizontal (or vertical) orientation
Phantom haptic interface
– 3D force feedback
Motion tracker
– untethered position
3D shutter glasses
– low-cost visualization

Slide 19: User Interaction Research Agenda
Expand access to 3D visualization
– explore any data anywhere
– ease development
Develop a lab-bench metaphor for visualization
– two hands, physical icons
Fast prototyping and exchange through informal interfaces
– sketching
Dealing with large volumes of information
– lenses, brushing and linking
3D collaboration and interaction

Slide 20: Component 4: Computational Economy
How is this vast, integrated pool of resources managed?
Traditional system approach: empower a global OS to provide "optimal" allocation to blind applications
– predefined metric, tuned to a fixed workload
– ignores the inherent adaptation of demand
Computer center approach
– charging => director-to-user feedback according to cost
Economic view: decentralized allocation according to perceived value
– pricing => user-to-user feedback (a toy sketch follows this slide)
– compatible niches, sense of control, cooperation
– the idea has been around; why now?
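To make the pricing-as-feedback idea concrete, here is a minimal toy sketch, not project code: the demand curve, the adjustment rate alpha, and all the constants are hypothetical. It shows a per-resource price mediating between aggregate demand and fixed cluster capacity.

```c
/* Toy model of price-mediated feedback for one shared resource.
 * All numbers are invented for illustration; not Millennium code. */
#include <stdio.h>

int main(void) {
    double price    = 1.0;    /* credits per CPU-hour */
    double capacity = 100.0;  /* CPU-hours offered per period */
    double alpha    = 0.1;    /* how fast the price reacts */

    for (int period = 0; period < 15; period++) {
        /* hypothetical aggregate demand: users buy less as price rises */
        double demand = 300.0 / price;
        printf("period %2d: price %5.2f, demand %6.1f\n",
               period, price, demand);
        /* excess demand raises the price; slack capacity lowers it */
        price += alpha * price * (demand - capacity) / capacity;
        if (price < 0.01) price = 0.01;
    }
    /* the price settles near 3.0, where demand matches capacity */
    return 0;
}
```

Seen per owner, the same loop is the "user-to-user feedback" on the slide: heavy users bid up the price that lighter users receive for their idle nodes.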

Slide 21: Research Agenda
Natural fit to the academic structure
– members want control over their own resources, and each has varying needs that far exceed their dedicated resources
– incentive for maintaining resources up to par
Address partial or delayed information, component failure, and user satisfaction from the start
Framework for elevating design from resources to services
Rich body of theory, little empirical validation
– experts in several parts of the community
New paradigm for algorithms and performance analysis
Complex, large-scale systems

Slide 22: Basic Approach
The desktop is an active agent conducting automated negotiation for resources
Servers provide resources to the highest bidders (see the allocation sketch below)
– monitor usage and enforce limits within the remote execution environment
– placement based on economic advantage
Higher-level system functions are self-supporting
– resource availability, brokering, directories
Useful applications packaged as services
– may charge more than the resources cost
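A minimal sketch of the "highest bidders" rule, again purely illustrative: the users, node counts, and bids below are hypothetical. Requests are ranked by bid and granted until a server's free nodes run out.

```c
/* Illustrative highest-bidder allocation for one server's nodes.
 * Users, bids, and capacity are hypothetical examples. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *user;
    int nodes;      /* nodes requested */
    double bid;     /* credits offered per node-hour */
} Request;

static int by_bid_desc(const void *a, const void *b) {
    double diff = ((const Request *)b)->bid - ((const Request *)a)->bid;
    return (diff > 0) - (diff < 0);
}

int main(void) {
    Request reqs[] = {
        {"astro",     32, 5.0},
        {"civil-eng", 16, 2.0},
        {"chem",      24, 4.0},
        {"econ",       8, 1.0},
    };
    int n = (int)(sizeof reqs / sizeof reqs[0]);
    int free_nodes = 64;  /* capacity of a hypothetical group cluster */

    qsort(reqs, n, sizeof reqs[0], by_bid_desc);
    for (int i = 0; i < n; i++) {
        if (reqs[i].nodes <= free_nodes) {
            free_nodes -= reqs[i].nodes;
            printf("grant %-10s %2d nodes at bid %.1f (%d left)\n",
                   reqs[i].user, reqs[i].nodes, reqs[i].bid, free_nodes);
        } else {
            printf("defer %-10s (wants %d, only %d free)\n",
                   reqs[i].user, reqs[i].nodes, free_nodes);
        }
    }
    return 0;
}
```

This shows only the ranking step; a real deployment would also need the monitoring and enforcement pieces the slide lists.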

Slide 23: NSF Investment: Staff Support
Provide the enabling technology and let it evolve
– monitoring, enforcement
– exchange
– negotiation tools
Integrate it into the users' environment
Tools and measurements to determine effectiveness

Slide 24: Integrated Research Agenda
Advance the state of computational science and engineering
– immerse a community in a computationally rich environment with the right tools: algorithms, programming, and system support
– path to exploiting novel techniques and technology
Explore design techniques for robust large-scale distributed systems
– economic (or ecologic) approach
Explore new ways of interacting with information
– large paste-ups, two hands, sketching, 3D collaboration
Investigate new technology
– SMP nodes, Gigabit Ethernet, SANs, VIA
– NT, DCOM, JavaBeans, directory services
– workbench displays, 3D icons, haptics, position sensors

Slide 25: Perspective
Highly leveraged investment in a large-scale infrastructure for studying problems of scale
Deep commitment across the campus
Sense of ownership and participation
Rich research agenda

Slide 26: Nuts and Bolts

Slide 27: Current Environment
All projects have servers and several desktops
– last few in the current shipment
millennium.berkeley.edu domain established
Solaris/x86 shared servers with 60 GB disks
– MM.millennium.berkeley.edu
– imap, ...
Solaris/x86 sww served from CS servers
NT domain server with 60 GB disks
– MMNT.millennium.berkeley.edu
– exchange, file, ...
First cluster ready for use
NSF SimMillennium grant to cover the network
Two of three Millennium staff hired

Slide 28: Hardware Deployment (as of Q2 1998)

Slide 29: Basic Software Tools
Standard Unix tools in /usr/sww/bin
– 484 packages
GNU tools
– gcc, g++ (version 2.8.1), gdb, g...
Sun ProWorks
– Workshop "development environment"
– C, C++, F77
– debugger
Sun Performance Library (/usr/sww/lib)
Sun Math Library
NT dialtone

Slide 30: Cluster Developments
4x4 PentiumPro cluster ready
Full NOW environment (a minimal MPI example follows this slide)
– GLUnix, MPI, Active Messages, Split-C
– Titanium prototype
Transferring to Cory to debug deployment
CS / Civil Engineering consortium has been shaking it out
– petc graph partitioner, finite element classes, ...
– ScaLAPACK (???)
Have parts for three 8x2 Pentium II clusters
– AstroPhysics
– Soda Solaris x86, to be shipped out
– Soda NT cluster
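For a flavor of the MPI layer listed above, here is a minimal, self-contained MPI program, shown only as an illustration of the environment (it is not project code): each process reports in and rank 0 collects a reduction across the cluster nodes.

```c
/* Minimal MPI example: each process reports in, rank 0 collects a sum.
 * Illustrative of the MPI environment above, not Millennium code. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total processes */

    printf("hello from rank %d of %d\n", rank, size);

    /* every process contributes its rank; rank 0 receives the total */
    int local = rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d (expected %d)\n", total,
               size * (size - 1) / 2);

    MPI_Finalize();
    return 0;
}
```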

Slide 31: Grants
NSF CISE Research Infrastructure grant
– cluster networking
– inter-cluster networking (campus management)
– computational economy staff support
– visualization
– devoted to CS research with disciplinary applications
Bay Networks 70% discount, up to $5M
– gigabit networks
IBM SUR disk towers
IBM CS extension to PDA access
NERSC/LBL: DOE2000 Initiative
=> NSF Science and Technology Centers in process

Slide 32: Networking
Campus is using Millennium to drive planning
CNS is working closely with us
– testing Gigabit Ethernet
– Campbell, Evans, Davis moving forward
CS will be able to cover the intra-cluster network (Myrinet) and the gigabit switch in the group clusters
– frees up some of the committed resources

Slide 33: Going Forward
Roll out group clusters over the next few quarters
Roll out the gigabit interconnect
Build up the cluster programming tools
Intel Merced pushed out
– campus cluster delayed
– utilize NOW as an alternative
– combine with NPACI infrastructure
Exploring cluster technology
– VIA
– Synfinity, ServerNet, Gigabit Ethernet
Exploring NT

Slide 34: A New Tier to Millennium
[Figure: PDAs, cell phones, and future devices connected through a wireless infrastructure to the Gigabit Ethernet backbone]