High Performance Computing: Past Highlights and Future Trends
David W. Walker, Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6367, USA (+1 865 241-4624, WalkerDW@ornl.gov)
Outline of Talk
Trends in hardware performance.
Advances in algorithms.
Obstacles to efficient parallel programming.
Successes and disappointments from the past two decades of parallel computing.
Future trends: problem-solving environments, petascale computing, alternative algorithmic approaches.
Concluding remarks.
Moore's Law: A Dominant Trend
[Chart: peak performance of leading machines from 1950 to 2000, rising from about 1 KFlop/s to 1 TFlop/s: EDSAC 1, UNIVAC 1, IBM 7090, CDC 6600, CDC 7600, IBM 360/195, Cray 1, Cray X-MP, Cray 2, TMC CM-2, TMC CM-5, Cray T3D, ASCI Red.]
Era of Modern Supercomputing
In 1976 the introduction of the Cray 1 ushered in the era of modern supercomputing.
–ECL chip technology
–Shared memory, vector processing
–Good software environment
–About 100 Mflop/s peak
–Cost about $5 million
The Intel iPSC/1 was introduced in 1985.
–Distributed memory
–More scalable hardware
–8 Mflop/s peak for a 64-processor machine
–Explicit message passing
Competing Paradigms
Shared memory vs. distributed memory
Scalar vs. vector processing
Custom vs. commodity processors
Cluster vs. stand-alone system?
Recent Trends
The Top500 list provides statistics on high-performance computers, based on the performance of the LINPACK benchmark.
Before 1995 the Top500 list was dominated by systems at US government research sites.
Since 1995, commercial and industrial sites have figured more prominently in the Top500. Reasons:
–In 1994 companies such as SGI and Sun began selling symmetric multiprocessor (SMP) systems.
–IBM SP2 systems were also popular with industrial sites.
–Dedicated database systems and web servers became important.
Architectures
[Chart: Top500 systems by architecture: 275 MPP, 120 SMP, 91 constellations, 14 clusters.]
[A sequence of chart slides based on Top500 data (http://www.top500.org): Top500 CPU Technology; Performance in the Top500; Top500 Performance; Top500 Application Areas; Top500 Application Areas by Rmax; Top500 Systems Installed by Area; Top500 Data by Continent; Top500 Systems Installed by Continent; Top500 Rmax by Continent; Top500 Systems Installed by Manufacturer; Future Extrapolations from Top500 Data.]
Some Conclusions from Top500 Data
Rapid turnover in architectures, vendors, and technologies.
But long-term performance trends appear steady; how long will this continue?
Moderately parallel systems are now in widespread commercial use.
The highest-performance systems are still found mostly at government-funded sites working on Grand and National Challenge problems, mostly numerically intensive simulations.
Advances in Algorithms
Advances in algorithms have led to performance improvements of several orders of magnitude in certain areas.
The obvious example is the FFT, which reduces the cost of the discrete Fourier transform from O(N^2) to O(N log N).
Other examples include:
–fast multipole methods
–wavelet-based algorithms
–sparse matrix solvers
–etc.
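To make the FFT gain concrete, here is a minimal sketch (assuming Python with NumPy, which is not part of the original talk) that evaluates the discrete Fourier transform directly in O(N^2) operations and checks it against NumPy's O(N log N) FFT:

```python
import numpy as np

def naive_dft(x):
    """Direct O(N^2) evaluation of the discrete Fourier transform."""
    n = len(x)
    k = np.arange(n)
    # DFT matrix: W[j, k] = exp(-2*pi*i*j*k / n)
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

x = np.random.rand(1024)
# Same result, but O(N^2) work on the left versus O(N log N) on the right.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```

For N = 1024 the direct evaluation already costs roughly a million complex multiplications against about ten thousand for the FFT, which is the kind of algorithmic gain referred to above.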
Problems with High Performance Computing
HPC is "difficult." There is often a large disparity between peak performance and actual performance.
Application developers must be aware of the memory hierarchy and program accordingly.
The lack of a standard software environment and tools has been a problem; there are not many commercial products.
Platforms quickly become obsolete, so it costs a lot of money to stay at the forefront of HPC technology. A Cray Y-MP C90, purchased in 1993 when the list price was $35M, is being sold on the eBay auction web site: "If there are no takers, we'll have to pay Cray about $30,000 to haul it away for salvage." (Mike Schneider, Pittsburgh Supercomputing Center)
Successes of Parallel Computing
Portability. In the early days of HPC each machine came with its own application programming interface, and a number of competing research projects offered "compatibility" interfaces. Standardised APIs are now available:
–MPI for message-passing machines
–OpenMP for shared-memory machines
Libraries. Some success has been achieved in developing parallel libraries. For example:
–ScaLAPACK for dense and banded numerical linear algebra (Dongarra et al.)
–SPRNG for parallel random number generation (NCSA)
–FFTW, developed at MIT by Matteo Frigo and Steven G. Johnson, for parallel fast Fourier transforms
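As an illustration of the portability these standards provide, here is a minimal point-to-point MPI sketch in Python, assuming the mpi4py binding is installed (the talk itself does not prescribe any particular binding or example):

```python
# Run with, e.g.: mpirun -n 2 python ping.py   (assumes mpi4py is available)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 sends a small Python object to rank 1.
    comm.send({"step": 1, "payload": [1.0, 2.0, 3.0]}, dest=1, tag=11)
elif rank == 1:
    data = comm.recv(source=0, tag=11)
    print("rank 1 received:", data)
```

The same code runs unchanged on a workstation cluster or a large MPP, which is the portability argument made above.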
Application Successes
The development of scalable massively parallel computers was motivated largely by a set of Grand Challenge applications:
Climate modelling. The Climate Change Prediction Program seeks to understand the processes that control the Earth's climate and to predict future changes in climate due to natural and human influences.
Tokamak design. The main goal of the Numerical Tokamak Turbulence Project is to develop realistic fluid and particle simulations of tokamak plasma turbulence in order to optimize the performance of fusion devices.
Rational drug design. The goal is to discover and design new drugs based on computer simulations of macro-molecular structure.
Computational fluid dynamics. This is important in the design of aerospace vehicles and cars.
Quantum chromodynamics. Lattice QCD simulations allow first-principles calculations of hadronic properties.
Difficulties and Disappointments
Automatic parallelizing compilers.
–Automatic vectorization was successful on vector machines.
–Automatic parallelization worked quite well on shared-memory machines.
–Automatic parallelization has been less successful on distributed-memory machines, where the compiler must decide how to partition the data and assign it to processes to maximize the number of local memory accesses and minimize communication.
Software tools for parallel computers. There are few standard software tools for parallel computers. Those that exist are mostly research projects; there are few commercial products.
High Performance Fortran. Compilers and tools for HPF were slow in appearing, and HPF was not well suited to irregular problems.
Future Trends in High Performance Computing
Metacomputing using distributed clusters and the Grid:
–multidisciplinary applications
–collaborative applications
–advanced visualization environments
Ultra-high performance computing:
–quantum computing
–macro-molecular computing
–petascale computing
Problem-solving environments: Grid portals to computational resources.
Different algorithmic emphasis:
–"keep it simple": cellular automata and particle-based methods
–automatic performance tuning and "intelligent algorithms"
–interval arithmetic techniques
Metacomputing and the Grid
Metacomputing refers to the use of multiple platforms (or nodes) to seamlessly construct a single virtual computer. In general the nodes may be arbitrarily distant from one another.
Some of the nodes may be specialised for a particular task. The nodes themselves may be sequential or parallel computers, and a software component running on a single node may make use of MPI or OpenMP.
Interaction between nodes is mediated by a software layer such as CORBA, Globus, or Legion. In a common model the nodes are viewed as offering different sets of computing services with known interfaces.
Metacomputing Limitations
This type of distributed metacomputing is limited by the bandwidth of the communication infrastructure connecting the nodes.
It is of limited use for compute-intensive applications: tasks must be loosely coupled. It may be useful for some multi-disciplinary applications.
Important Metacomputing Issues
Resource discovery: how do nodes publicise their services?
Resource scheduling: how to optimise resource use when there are multiple users.
Resource monitoring: the bandwidth between nodes and the load on each must be monitored.
Code mobility: in data-intensive applications it often makes more sense to send the code to the data rather than the other way round.
What is the appropriate software infrastructure for interaction between nodes?
Tele-Presence and Metacomputing
More generally, the nodes of the metacomputer do not have to be computers; they may be people, experimental instruments, satellites, etc.
The remote control of instruments, such as astronomical telescopes or electron microscopes, often involves several collaborators who interact with the instrument and with each other through a thin client interface.
In recent work at Cardiff University, researchers have developed a WAP interface that allows an MPI application running on a network of workstations to be controlled using a mobile phone.
Collaborative Immersive Visualization
Essential feature: the observer appears to be in the same space as the visualized data.
The observer can navigate within the visualization space relative to the data.
Several observers can co-exist in the same visualization space, which is ideal for remote collaboration.
Hardware Options
CAVE: a fully immersive environment. The ORNL system has stereoscopic projections onto three walls and the floor.
ImmersaDesk: projects stereoscopic images onto a single flat-panel display.
Stereoscopic workstation: a stereoscopic viewing device, such as CrystalEyes, can be used on workstations and PCs. Stereo-ready graphics cards are becoming increasingly available.
CAVE
Immersive Visualization and Terascale Computing
Scientific simulations, experiments, and observations generate vast amounts of data that often overwhelm data management, analysis, and visualization capabilities.
Collaborative immersive visualization is becoming important in interpreting and extracting insight from this wealth of data.
An immersive visualization capability is essential in a credible terascale computing facility.
Collaborative Framework for Simulation-Enabled Processes
[Diagram: a web-based collaborative environment with visualization support, built on web-centric data integration and model integration (CORBA & HLA), linking participants (program managers, developers, users, cost analysts, producers, testers, logistics/support analysts, system developers, subsystem/technology developers) with tools, processes, data, and training.]
Research Issues in Collaborative Immersive Visualization
Collaborative use of immersive visualization across a range of hardware platforms within a distributed computing environment requires "resource-aware" middleware.
Data management, analysis, rendering, and visualization should be tailored to the resources available.
Make the visualization system resource-aware so that the tasks of data extraction, processing, rendering, and communication across the network can be optimized.
Permit a wide range of platforms, ranging from CAVEs to laptops, to be used for collaborative data exploration and navigation.
Videmus Prototype
Develop a collaborative immersive environment for navigating and analysing very large numerical data sets.
Provide a suite of resource-aware visualization tools for 3D scalar and vector fields.
Support steering, and the retrieval and annotation of data.
Permit collaborators to interact in the immersive space by audio and gestures.
Make the visualization adapt to network bandwidth: if bandwidth is low, data may be compressed or a lower resolution used.
Use server-side processing to lessen the load on the client and network.
Use software agents in the implementation.
Videmus Architecture
[Diagram: client displays (CAVE, ImmersaDesk, workstation) connected via a server to a data server, compute server, and rendering server, with a Data Request Agent and a Data Dispatch Agent mediating data movement.]
Other Collaborative Visualization Projects
The Electronic Visualization Laboratory at UIC are world leaders, but are not particularly focused on scientific visualization.
NCSA has done a lot of work on middleware for advanced visualization, and on human-factors research.
The Virtual Environments Lab at Old Dominion University: good potential collaborators.
COVISE from the University of Stuttgart.
SNL has projects in VR for mesh generation, and the "Navigating Science" project to develop a method of exploring and analysing scientific literature using virtual reality.
Massive Data Processing and Visualization
Recent acquisition of 9000 CDs of digital data:
–Vector maps
–Scanned maps
–Elevation data
–Imagery
Visualization: desktop to immersive VR environment.
Storage strategies.
Data exchange issues.
Collaborative environment.
[Related systems and data sources shown: Digital Earth Observatory; HPAC data; climate and groundwater data; transportation and energy data; Probe; ESnet3.]
Motivation for Problem-Solving Environments
The aim is scientific insight:
–a better understanding of the fundamental laws of the universe and how these interact to produce complex phenomena;
–new technologies for economic competitiveness and a cleaner, safer environment.
Aspects of Scientific Computing
We use computers to advance science through:
Prediction: as in traditional computational science.
Abstraction: the recognition of patterns and inter-relationships.
–Visualization for steering, navigation, and immersion.
–Data mining.
Collaboration: brings a wide range of expertise to a problem.
Innovative Environments
We seek to support prediction, abstraction, and collaboration in an integrated computing environment (a problem-solving environment) that:
–gives transparent access to heterogeneous distributed resources;
–supports all aspects of software creation and use;
–seamlessly incorporates new hardware and software.
Problem-Solving Environments
PSEs are specific to an application domain, e.g., a PSE for climate modeling, a PSE for materials science, etc.
They provide easy access to distributed computing resources, so the end user can focus on the science and not on computing issues.
They deliver state-of-the-art problem-solving power to the end user.
They increase research productivity.
PSEs and Complexity
Modeling complex physical systems requires complex computer hardware (hierarchical memory, parallelism) and complex computer software (numerical methods, message passing, etc.).
PSEs handle this complexity for the end user.
Vision for PSEs
PSEs herald a new era in scientific computing, both in power and in how resources are accessed and used.
PSEs will become the main gateway for scientists to access terascale computing resources, and will allow users to access these resources from any web connection; they act as web portals to the Grid.
PSE support for collaborative computational science will change the pervading research culture, making it more open and accountable.
Synergies
[Diagram: research hardware (better = bigger and faster), research software (distributed, immersive, collaborative), and research culture (better = more open and accountable) reinforce one another.]
Software Technologies for PSEs
XML is used for interface specification and for defining the component model.
Java is used for platform-independent programming.
CORBA provides for transparent interaction between distributed resources.
Agents are used for user support, and for resource monitoring and discovery.
Wherever possible, use accepted software standards and component-based software engineering.
Main Features of a PSE
Collaborative code development environment.
Intelligent resource management system.
Expert assistance in application design and input data specification.
Electronic notebooks for recording and sharing results of research.
Collaborative Code Development Environment
The collaborative code development environment uses a visual programming tool for seamlessly integrating code from multiple sources.
Applications are created by plugging together software components.
Legacy codes in any major scientific programming language can be handled.
Novel Ideas for PSE Research
Intelligence is important in PSEs for the efficient use and management of resources, for ease of use, and for user support. The PSE must be able to learn what works best.
Living documents are a novel way of electronically publishing research results: readers can replay simulations and experiment with changing input parameters.
Resource-aware visualization refers to the ability of a visualization to adapt to the hardware platform, which may range from a PC to a CAVE.
Why PSEs?
Need: enhanced scientific insight; reduced development costs; improved product quality and industrial efficiency.
Need: a transparent means of integrating distributed computers, instruments, sensors, and people.
Need: improved software productivity to extract maximum benefit from advances in computers, networks, and algorithms.
Why Now?
Confluence of complementary technologies:
–faster networks and communications;
–network software technologies such as CORBA, Java, and XML.
"Big Science" is inherently distributed and collaborative, and needs to migrate to WAN environments to progress.
What's the Problem?
High-level problem-specification languages, often coupled with an expert system; for example, PDE solvers, numerical integration, etc.
Problem composition in the form of a dataflow graph using a GUI.
Typically used in the modelling and simulation of physical systems.
PSE Requirements
Expert assistance in problem specification and input.
Transparent access to distributed heterogeneous resources.
Interactivity and computational steering.
Advanced/immersive visualisation.
Integration with other knowledge repositories and databases.
Technologies for PSEs: Hardware
Increasingly powerful computers.
Increasingly fast networks: gigabit Ethernet, vBNS, etc.
Immersive visualisation platforms: CAVEs, ImmersaDesks, etc.
Technologies for PSEs: Software
CORBA for transparent interaction between distributed resources.
Java for platform-independent programming.
XML for interface specification.
MPI for message passing in SPMD codes.
An Example PSE Architecture
The main PSE sub-systems are:
–the Visual Program Composition Environment (VPCE), for graphically composing applications;
–the Intelligent Resource Management System (IRMS), for scheduling applications on distributed resources.
VPCE Overview
A GUI is used to build an application from software components, each of which is a Java or CORBA object with its interface specified in XML.
Each component may have a performance model and a help file.
An annotated dataflow graph is produced and passed to the IRMS.
IRMS Overview
The IRMS locates software and hardware resources through information servers.
The IRMS then schedules components on appropriate resources, based on performance models and a database of experience from previous runs.
Genetic and neural network algorithms may be used.
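As a purely illustrative sketch of the kind of object the VPCE might hand to the IRMS, the annotated dataflow graph below is a hypothetical Python structure; the component names, the "perf" cost models, and the greedy placement policy are invented for illustration and are not the actual PSE formats or algorithms:

```python
# Hypothetical annotated dataflow graph: each component lists its dependencies
# and a per-resource cost estimate supplied by its performance model.
graph = {
    "mesher":   {"needs": [],         "perf": {"cluster": 40,  "smp": 25}},
    "solver":   {"needs": ["mesher"], "perf": {"cluster": 120, "smp": 300}},
    "renderer": {"needs": ["solver"], "perf": {"cluster": 80,  "smp": 60}},
}

def greedy_schedule(graph):
    """Place each component, in dependency order, on the resource its
    performance model predicts to be fastest."""
    schedule, placed = [], set()
    while len(placed) < len(graph):
        for name, node in graph.items():
            if name not in placed and all(dep in placed for dep in node["needs"]):
                resource = min(node["perf"], key=node["perf"].get)
                schedule.append((name, resource))
                placed.add(name)
    return schedule

print(greedy_schedule(graph))
# -> [('mesher', 'smp'), ('solver', 'cluster'), ('renderer', 'smp')]
```

A real IRMS would combine such performance models with measured load and a history database, and could use the genetic or neural-network approaches mentioned above rather than this greedy rule.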
The PSE Research Community
The European Research Conference on PSEs took place in June 1999 in Spain; the next one is in summer 2001. http://www.cs.cf.ac.uk/euresco99/
EuroTools SIG on PSEs: http://www.irisa.fr/EuroTools/Sigs/
Cardiff PSE project web site: http://www.cs.cf.ac.uk/PSEweb/
US Software Infrastructure
The Grid is a computational and network infrastructure providing pervasive, uniform, and reliable access to distributed resources.
Globus provides core services for grid-enabled computing. http://www.globus.org/
Legion is an object-based metacomputing project. http://legion.virginia.edu/
European Software Infrastructure
UNICORE (Uniform Access to Computing Resources) aims to provide uniform, secure, batch access to distributed resources. http://www.genias.de/unicore/unicore.html
POLDER is a more ambitious metacomputing project. http://www.wins.uva.nl/projects/polder/
European Software Infrastructure
CODINE: a resource management system targeted at optimal use of all software and hardware resources in a heterogeneous networked environment. http://www.genias.de/products/codine/
CCS (Computing Centre Software): resource management for networked high-performance computers. http://www.uni-paderborn.de/pc2/projects/ccs/
European Software Infrastructure
GRD: Global Resource Director for distributed environments, featuring policy management and dynamic scheduling. http://www.genias.de/products/grd/
NWIRE: Netwide Resources, a management system for WAN-based resources. http://www-ds.e-technik.uni-dortmund.de/
COVISE Visualisation Environment
The Collaborative Visualisation and Simulation Environment (COVISE) is a distributed software environment that seamlessly integrates simulations, post-processing, and visualisation.
COVISE supports collaborative working and is available commercially. http://www.hlrs.de/structure/organisation/vis/covise/
Ctadel and PDE Problems
Ctadel (Code-generation Tool for Applications based on Differential Equations using high-level Language specifications) is an environment for the automatic generation of efficient Fortran or HPF programs for PDE-based problems.
It is used in the HIRLAM numerical weather forecast system. http://www.wi.leidenuniv.nl/CS/HPC/ctadel.html
An Environment for Cellular Automata
CAMEL is a CA environment designed for message-passing parallel computers. It hides parallelism issues from the user.
With CARPET, a high-level cellular language, the user specifies only the transition function of a single cell of the system. http://isi-cnr.deis.unical.it:1080/~talia/CA.html
A PSE for Numerical General Relativity
CACTUS is a collaborative software environment for composing applications for the solution of general relativity problems.
It has been used in distributed computing experiments using Globus. Interactive visualisation is important. http://cactus.aei-potsdam.mpg.de
JACO3: Industrial Design PSE
A Java- and CORBA-based collaborative environment for coupled simulations: a CORBA-based high-performance distributed computing environment for coupling simulation codes.
Aimed at the optimal design of complex and expensive products such as airplanes, satellites, or cars. http://www.arttic.com/projects/jaco3/
A PSE for Stochastic Analysis
Promenvir (Probabilistic Mechanical Design Environment) is a metacomputing tool for stochastic analysis.
It can automatically generate a series of stochastic computational experiments and run them on the available resources.
It has been used for optimal design problems in the automobile industry. http://www.cepba.upc.es/promenvir.html
PSE for Engineering Simulations
JULIUS (Joint Industrial Interface for End-User Simulations): an integrated HPC environment for multi-disciplinary engineering simulations.
Aimed at reducing design time for industrial products; the end users are engineers. http://www.6s.org/
Ultra-High Performance Computing
Quantum computing. Based on the principle of superposition, which says that a particle can simultaneously be in multiple quantum states. In theory this allows a quantum computer to perform enormous numbers of operations in parallel, and thus achieve very high performance. But not yet: there are many problems, such as how to couple a quantum computer to its environment.
Molecular computing. Macro-molecules such as DNA can be used to perform massively parallel searches and to provide ultra-high-capacity storage devices. But not yet: a stable molecular computer is still decades away.
Superconductor-based logic. This is the basis of the Hybrid Technology Multi-Threaded (HTMT) architecture, and may lead to petaflop computing (10^15 floating-point operations per second) by 2010.
The Future of Semiconductor Chips
Due to fundamental physical constraints, it is expected that the clock speed of silicon-based semiconductor chips will level out at about 6 GHz by 2010.
About 170,000 of these chips would be needed to reach 1 Pflop/s, plus a small power plant to provide the energy!
For most of us Moore's Law will no longer hold. We can look forward to cheap, long-lasting computers with stable software.
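As a rough check on that figure, and assuming each chip sustains about one floating-point operation per clock cycle (an assumption not stated in the slide): 10^15 flop/s divided by 6 x 10^9 flop/s per chip gives roughly 1.7 x 10^5 chips.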
The HTMT Architecture and Petascale Computing
The HTMT project aims to build a petascale computer by 2010, based on a number of innovative technologies:
–superconductor Rapid Single-Flux Quantum (RSFQ) logic as the basis of 150 GHz processing elements;
–Processing-in-Memory (PIM) chips to reduce memory access bottlenecks;
–an optical interconnection network with a communication speed of 500 Gbps per channel;
–hardware support for latency management.
Superconductor RSFQ Logic
RSFQ devices running at up to 700 GHz have been tested.
RSFQ processing elements are high-speed but have low power consumption.
Digital bits are encoded by a single quantum of magnetic flux, and data are transferred in picosecond SFQ pulses.
The technology uses superconducting niobium; processors must be cooled to liquid-helium temperatures, but the costs are acceptable. This technology is not for the mass market!
Processing-In-Memory Chips
A central issue in the design and use of HPC systems has been that peak processing speeds have increased faster than memory access speeds. This has led to systems with complex memory hierarchies.
PIM designs seek to reduce the memory access bottleneck by integrating processing logic and memory on a single chip.
Commercial and research PIM chips already exist.
HTMT and Hierarchical Memory
The HTMT architecture has a deep memory hierarchy with four levels:
–superconductor processing elements (SPELLs), each with 1 Mbyte of cryogenic RAM (CRAM) cooled by liquid helium;
–SRAM PIMs cooled by liquid nitrogen;
–DRAM PIMs connected to the SRAM PIMs by an optical network;
–holographic 3/2 memory (HRAM).
Latency Management
It takes many processor cycles for a SPELL to access memory in the PIM or HRAM levels of the memory hierarchy, so latency management is crucial to the effective use of the HTMT.
Multithreading is the basis of the HTMT execution model. The PIM-based levels are used for thread context management to keep the SPELLs supplied with work.
HTMT Challenges
The HTMT architecture presents many technical challenges in almost every aspect of its design, and will require a new approach to programming and algorithm design.
To make best use of the memory hierarchy, simple regular algorithms (which may have a higher operation count) will be favored over more complex, inhomogeneous algorithms.
Applications will require a high degree of concurrency to keep the SPELLs busy.
Given the requirements of high concurrency and latency tolerance, highly tuned software libraries and PSEs are likely to be important in using HTMT computers.
The project needs a long-term funding commitment; currently the HTMT project is not funded.
Alternative Algorithmic Approaches
Algorithms with a regular structure but a higher operation count may be better than those with an irregular structure.
Slower algorithms that are more accurate and stable may be better than faster algorithms that are less accurate and stable.
Cellular automata (CA) appear very well suited to future-generation HPC machines.
Interval-based algorithms may play a greater role in the future.
Automatic tuning of numerical libraries.
Intelligent algorithms and "algorithmic bombardment."
The gap between processor speeds and memory access speeds is expected to widen, so latency-tolerant algorithms will continue to be important.
Cellular Automata
Cellular automata offer an alternative to classical PDE-based techniques for solving certain problems.
CAs are highly parallel, very regular in structure, can handle complex geometries, and are numerically stable.
The dynamics of the CA mimics the fine-grain dynamics of the actual physical system being modeled: complex collective global behavior arises from simple components obeying simple interaction rules.
Over the next 10 years CAs will play an increasing role in the simulation of physical (and social) phenomena.
CA for Surface Reactions
A cellular automaton is used to model the reaction of carbon monoxide and oxygen to form carbon dioxide: CO + O -> CO2.
The reactions take place on the surface of a crystal, which serves as a catalyst. This is used in models of catalytic converters.
The Problem Domain
The problem domain is a periodic square lattice representing the crystal surface.
CO and O2 are adsorbed onto the crystal surface from the gas phase.
The parameter y is the fraction of CO in the gas phase, and 1-y is the fraction of O2.
Interaction Rules
Choose a lattice site at random and attempt to place a CO or an O2 there with probabilities y and 1-y, respectively.
If the site is occupied, the CO or O2 bounces off and a new trial begins.
O2 dissociates, so two adjacent empty sites are needed for it.
The following rules determine what happens next.
Interaction Rules for CO
1. CO is adsorbed.
2. Check the 4 neighbors for O.
3. CO and O react.
4. CO2 desorbs.
Interaction Rules for O
1. O2 is adsorbed.
2. O2 dissociates.
3. Check the 6 neighbors for CO.
4. O and CO react.
5. CO2 desorbs.
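A minimal sketch of this automaton (a Ziff-Gulari-Barshad-type model) is given below, assuming Python with NumPy. The lattice size, random seed, and the simplification of reacting with the first suitable neighbour found (rather than following the exact neighbour-scan order on the slides) are choices of mine for illustration:

```python
import numpy as np

EMPTY, CO, O = 0, 1, 2
rng = np.random.default_rng(0)

def neighbors(i, j, n):
    """4-connected neighbors on a periodic square lattice."""
    return [((i - 1) % n, j), ((i + 1) % n, j), (i, (j - 1) % n), (i, (j + 1) % n)]

def react(grid, i, j, species):
    """If a neighboring site holds the partner species, form CO2 and empty both sites."""
    if grid[i, j] != species:
        return                                   # already consumed by an earlier reaction
    partner = O if species == CO else CO
    for ni, nj in neighbors(i, j, grid.shape[0]):
        if grid[ni, nj] == partner:
            grid[i, j] = grid[ni, nj] = EMPTY    # CO2 desorbs, leaving two empty sites
            return

def trial(grid, y):
    """One adsorption trial: deposit CO with probability y, otherwise O2 (two sites)."""
    n = grid.shape[0]
    i, j = rng.integers(n, size=2)
    if grid[i, j] != EMPTY:
        return                                   # site occupied, the molecule bounces off
    if rng.random() < y:                         # CO adsorption
        grid[i, j] = CO
        react(grid, i, j, CO)
    else:                                        # O2 needs a second adjacent empty site
        i2, j2 = neighbors(i, j, n)[rng.integers(4)]
        if grid[i2, j2] != EMPTY:
            return
        grid[i, j] = grid[i2, j2] = O            # O2 dissociates into two O atoms
        react(grid, i, j, O)
        react(grid, i2, j2, O)

grid = np.zeros((64, 64), dtype=np.int8)
for _ in range(200_000):
    trial(grid, y=0.45)                          # y between ~0.39 and ~0.53 gives a steady state
print("coverage: CO %.2f  O %.2f" % ((grid == CO).mean(), (grid == O).mean()))
```

Running such trials for different values of y reproduces the three regimes described on the following slides: a reactive steady state for intermediate y, CO poisoning for large y, and oxygen poisoning for small y.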
Steady State Reaction
For y1 < y < y2 we get a steady state, with y1 ≈ 0.39 and y2 ≈ 0.53.
CO Poisoning: y > y2
Oxygen Poisoning: y < y1
Load Imbalance
Load imbalance is smaller for smaller block sizes.
Load imbalance becomes large as CO poisoning occurs.
[Results shown for a 512x512 lattice with y = 0.53.]
Interval-Based Algorithms
Each quantity is represented by a lower and an upper bound within which it is guaranteed to lie.
The interval representation provides rigorous accuracy information that is absent from the point representation.
The interval approach provides a way to keep track of initial uncertainties in the input data, errors in analytic approximations, rounding errors, etc.
This is important in critical design processes: the space shuttle, aircraft, nuclear reactors, etc.
Interval methods have existed for a long time but had poor performance because they were implemented in software. Recently, compiler and hardware support for interval methods has become available.
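A toy sketch of the idea in Python, not based on any particular interval package and ignoring the outward rounding of endpoints that a production interval library would perform:

```python
class Interval:
    """A closed interval [lo, hi]; the true value is guaranteed to lie inside."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # All endpoint products bound the product of any two values in the intervals.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Uncertain inputs propagate to a rigorous enclosure of the result.
x = Interval(1.9, 2.1)        # a measurement known only to within +/- 0.1
y = Interval(-0.2, 0.3)
print(x * y + x)              # every possible value of x*y + x lies in this interval
```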
Automatically Tuned Numerical Software Libraries
This idea is exemplified by the Automatically Tuned Linear Algebra Software (ATLAS) project of Dongarra et al.
Numerical routines are developed with a large design space spanned by many tunable parameters, such as:
–blocking size
–loop nesting permutations
–loop unrolling depths
–pipelining strategies
–register allocation
–instruction schedules
When ATLAS is installed on a new platform, a set of runs automatically determines the best parameter values for that platform.
Software must be able to dynamically explore its computational environment and intelligently adapt as resource availability changes.
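The flavour of this install-time search can be illustrated with a toy Python sketch that tunes a single parameter, the blocking size of a blocked matrix multiply; the candidate sizes and the timing method are arbitrary choices of mine, and ATLAS itself searches a far larger parameter space with generated C kernels:

```python
import time
import numpy as np

def blocked_matmul(a, b, block):
    """Cache-blocked matrix multiply; `block` is the tunable tile size."""
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(0, n, block):
        for k in range(0, n, block):
            for j in range(0, n, block):
                c[i:i+block, j:j+block] += a[i:i+block, k:k+block] @ b[k:k+block, j:j+block]
    return c

n = 512
a, b = np.random.rand(n, n), np.random.rand(n, n)

# Install-time style search: run each candidate once and keep the fastest.
timings = {}
for block in (32, 64, 128, 256):
    t0 = time.perf_counter()
    blocked_matmul(a, b, block)
    timings[block] = time.perf_counter() - t0
best = min(timings, key=timings.get)
print("timings:", timings, "-> chosen block size:", best)
```

In ATLAS the chosen variants are compiled into the library at install time, so applications simply link against routines already tuned for the host.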
Poly-Algorithmic Approaches
On some advanced-architecture computers the detailed order of arithmetic operations may not be pre-determined.
For some problems it may not be possible to predict a priori which solution method is best, or even which will converge.
Algorithmic bombardment applies several algorithms concurrently in the hope that at least one will converge to a solution.
One could also first try a fast but unreliable method, and then, if a problem occurred, use a slower but more reliable method to fix it.
Poly-algorithmic methods could be made available as black boxes, or could offer the user varying degrees of control over the methods used.
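A sketch of the fallback variant in Python; the choice of Jacobi iteration as the fast-but-unreliable method and a dense direct solve as the reliable fallback is mine, purely for illustration:

```python
import numpy as np

def jacobi(a, b, tol=1e-10, max_iter=500):
    """Fast but unreliable: Jacobi iteration, which may fail to converge."""
    d = np.diag(a)
    r = a - np.diagflat(d)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - r @ x) / d
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi did not converge")

def solve(a, b):
    """Poly-algorithmic driver: try the cheap method first, fall back to a direct solve."""
    try:
        return jacobi(a, b)
    except RuntimeError:
        return np.linalg.solve(a, b)   # slower but reliable fallback

a = np.random.rand(100, 100) + 100 * np.eye(100)   # diagonally dominant: Jacobi converges
b = np.random.rand(100)
print(np.allclose(a @ solve(a, b), b))
```

A bombardment-style driver would instead launch several such solvers concurrently and accept the first to converge.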
Summary: the Past and Present
Parallel computing has transformed computational simulation and modeling, enabling a new paradigm of scientific investigation termed computational science.
Parallel computing is becoming more widespread in commercial and industrial organizations.
But software environments and tools for supporting parallel computing are disappointing and not widely used. Software reusability needs to be improved.
OpenMP and MPI are the de facto standards for shared- and distributed-memory platforms, respectively.
Summary: the Present and the Future
Metacomputing in a distributed environment is attracting a lot of interest, but appears to be of limited use for compute-intensive applications. This may change as network bandwidth improves.
Application-specific problem-solving environments address software reusability, transparent access to distributed computing resources, and data visualization, exploration, and analysis within an integrated software environment.
Multi-disciplinary applications, support for collaboration, and advanced visualization interfaces are becoming more important.
The performance of conventional chips will level off in a few years, but radical new technologies offer further dramatic increases in compute power.
New algorithmic approaches will be needed to exploit future HPC platforms effectively.