1. HPC Middleware on GRID
… as material for the discussion of WG5
GeoFEM/RIST
August 2nd, 2001, ACES/GEM at MHPCC, Kihei, Maui, Hawaii
2. Background
Various Types of HPC Platforms
– MPP, VPP
– PC Clusters, Distributed Parallel MPPs, SMP Clusters
– 8-Way SMP, 16-Way SMP, 256-Way SMP
– Power, HP-RISC, Alpha/Itanium, Pentium, Vector PE
Parallel/Single-PE Optimization is an Important Issue for Efficiency
– Everyone knows that... but it is a big task, especially for application experts such as the geophysics people in the ACES community.
– Machine-dependent optimization/tuning is required.
Simulation Methods such as FEM/FDM/BEM/LSM/DEM etc. have Typical Processes for Computation. How about "Hiding" these Processes from Users? (see the interface sketch below)
– code development becomes efficient, reliable, portable, and maintenance-free; the number of source-code lines is reduced
– accelerates advancement of the applications (= physics)
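As one way to picture what "hiding" a typical process could mean in practice, here is a minimal C++ sketch, not GeoFEM's actual API: the application sees one common sparse matrix-vector call, and the middleware decides which tuned implementation sits behind it. The names SparseMatVec, ScalarKernel, and selectKernel are hypothetical.

```cpp
// A common interface for y = A*x in CRS (compressed row storage) format.
// The caller never sees which tuned kernel runs underneath.
#include <cstddef>
#include <iostream>
#include <vector>

struct SparseMatVec {
    virtual void multiply(const std::vector<double>& val,
                          const std::vector<int>& colIdx,
                          const std::vector<int>& rowPtr,
                          const std::vector<double>& x,
                          std::vector<double>& y) const = 0;
    virtual ~SparseMatVec() = default;
};

// One possible back end: a plain scalar kernel (cache-oriented PCs).
// A vector-PE or SMP build would provide a different derived class.
struct ScalarKernel : SparseMatVec {
    void multiply(const std::vector<double>& val,
                  const std::vector<int>& colIdx,
                  const std::vector<int>& rowPtr,
                  const std::vector<double>& x,
                  std::vector<double>& y) const override {
        for (std::size_t i = 0; i + 1 < rowPtr.size(); ++i) {
            double sum = 0.0;
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; ++k)
                sum += val[k] * x[colIdx[k]];
            y[i] = sum;   // y is pre-sized by the caller
        }
    }
};

// The middleware would pick the implementation for the current platform;
// this sketch always returns the scalar one.
inline const SparseMatVec& selectKernel() {
    static ScalarKernel k;
    return k;
}

int main() {
    // 2x2 example: [[2, 1], [0, 3]] * [1, 1] = [3, 3]
    std::vector<double> val{2.0, 1.0, 3.0};
    std::vector<int> colIdx{0, 1, 1};
    std::vector<int> rowPtr{0, 2, 3};
    std::vector<double> x{1.0, 1.0}, y(2);
    selectKernel().multiply(val, colIdx, rowPtr, x, y);
    std::cout << y[0] << " " << y[1] << "\n";   // prints 3 3
}
```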
3. Background (cont.)
Current GeoFEM provides this environment
– limited to FEM
– not necessarily perfect
GRID as next-generation HPC infrastructure
– Currently, middleware and protocols are being developed to provide a unified interface to various operating systems, computers, ultra-high-speed networks, and databases.
– What is expected of the GRID?
Meta-computing: simultaneous use of supercomputers around the world
Volunteer computing: efficient use of idle computers
Access Grid: research collaboration environment
Data-Intensive Computing: computation with large-scale data
Grid ASP: application services on the Web
4. Similar Research Groups
ALICE (ANL)
CCAforum (Common Component Architecture, DOE)
DOE/ASCI Distributed Computing Research Team
– ESI (Equation Solver Interface Standards)
– FEI (The Finite Element/Equation Solver Interface Specification)
ADR (Active Data Repository) (NPACI)
5. Are they successful? It seems NO.
Very limited targets and processes
– mainly the optimization of linear solvers
Where are the interfaces between Applications and Libraries? (see the sketch below)
– the approach comes from computer/computational science people
– not really easy to use for application people
[Diagram] Computer/Computational Science: linear solvers, numerical algorithms, parallel programming, optimization. Applications: FEM, FDM, spectral methods, MD, MC, BEM.
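To illustrate the kind of interface that would be easy for application people, here is a minimal sketch of a single solver entry point in the spirit of ESI/FEI but not their actual APIs; all names (CRSMatrix, SolverOptions, solve) are hypothetical, and the toy Jacobi iteration only stands in for whatever optimized parallel solver a real library would dispatch to.

```cpp
// Everything about parallel programming, data layout, and tuning would stay
// inside the library; the FEM programmer only fills a matrix and calls solve().
#include <cstddef>
#include <iostream>
#include <vector>

struct CRSMatrix {
    std::vector<double> val;    // nonzero values
    std::vector<int>    colIdx; // column index of each nonzero
    std::vector<int>    rowPtr; // start of each row in val/colIdx
};

struct SolverOptions {
    double tolerance = 1.0e-8;  // convergence criterion
    int    maxIters  = 1000;    // iteration limit
};

// The single entry point an application person would need to learn.
// The toy Jacobi loop below is only a stand-in for the real solver.
bool solve(const CRSMatrix& A, const std::vector<double>& b,
           std::vector<double>& x, const SolverOptions& opt = SolverOptions()) {
    std::vector<double> xNew(x.size());
    for (int it = 0; it < opt.maxIters; ++it) {
        double diff2 = 0.0;
        for (std::size_t i = 0; i + 1 < A.rowPtr.size(); ++i) {
            double diag = 0.0, offDiag = 0.0;
            for (int k = A.rowPtr[i]; k < A.rowPtr[i + 1]; ++k) {
                if (A.colIdx[k] == static_cast<int>(i)) diag = A.val[k];
                else offDiag += A.val[k] * x[A.colIdx[k]];
            }
            xNew[i] = (b[i] - offDiag) / diag;
            diff2 += (xNew[i] - x[i]) * (xNew[i] - x[i]);
        }
        x = xNew;
        if (diff2 < opt.tolerance * opt.tolerance) return true;
    }
    return false;
}

int main() {
    // 2x2 system: [[2, 1], [1, 3]] x = [3, 4], exact solution x = [1, 1]
    CRSMatrix A{{2.0, 1.0, 1.0, 3.0}, {0, 1, 0, 1}, {0, 2, 4}};
    std::vector<double> b{3.0, 4.0}, x{0.0, 0.0};
    bool ok = solve(A, b, x);
    std::cout << (ok ? "converged: " : "not converged: ")
              << x[0] << " " << x[1] << "\n";
}
```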
6. Example of HPC Middleware (1)
Simulation methods include some typical processes.
[Diagram] O(N) ab initio MD: sparse matrix multiplication, nonlinear procedure, FFT, Ewald terms.
7. Example of HPC Middleware (2)
Each individual process could be optimized for various types of MPP architectures.
[Diagram] The O(N) ab initio MD processes (sparse matrix multiplication, nonlinear procedure, FFT, Ewald terms) mapped onto MPP-A, MPP-B, MPP-C.
8. Example of HPC Middleware (3)
Use optimized libraries.
[Diagram] The O(N) ab initio MD code calls each typical process (sparse matrix multiplication, nonlinear procedure, FFT, Ewald terms) through library routines optimized for each architecture (see the sketch below).
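A minimal sketch of the "use optimized libraries" idea, with hypothetical names throughout: the application always asks for a process by name, and a registry keyed by (process, architecture) supplies the implementation tuned for the machine at hand.

```cpp
// The application always asks for a process by name ("fft", "spmv", ...);
// the registry decides which optimized library routine that means on the
// architecture it is running on.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>

using Kernel = std::function<void()>;

// (process name, architecture name) -> optimized implementation.
// The strings and the stand-in lambdas are illustrative only.
std::map<std::pair<std::string, std::string>, Kernel> registry = {
    {{"fft",  "vector-pp"},   [] { std::cout << "long-vector FFT\n"; }},
    {{"fft",  "smp-cluster"}, [] { std::cout << "threaded, cache-blocked FFT\n"; }},
    {{"spmv", "vector-pp"},   [] { std::cout << "jagged-diagonal SpMV\n"; }},
    {{"spmv", "smp-cluster"}, [] { std::cout << "CRS SpMV with row blocking\n"; }},
};

// What the application calls; a real middleware would detect the architecture
// itself instead of taking it as an argument.
void run(const std::string& process, const std::string& arch) {
    registry.at({process, arch})();
}

int main() {
    run("fft",  "vector-pp");
    run("spmv", "smp-cluster");
}
```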
9. Example of HPC Middleware (4)
- Optimized code is generated by a special language/compiler based on analysis data and H/W information.
- The optimum algorithm can be adopted (see the sketch below).
[Diagram] Data for the analysis model and parameters of the H/W feed a special compiler, which generates the typical processes (sparse matrix multiplication, nonlinear procedure, FFT, Ewald terms) of the O(N) ab initio MD code for MPP-A, MPP-B, and MPP-C.
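A minimal sketch of the "parameters of H/W, optimum algorithm" idea: a generator or runtime inspects a few machine parameters and picks a storage scheme and loop ordering accordingly. The HardwareModel fields, the decision rule, and the scheme names are illustrative assumptions, not taken from GeoFEM.

```cpp
// A few machine parameters drive the choice of storage scheme and loop
// ordering for the same mathematical kernel.
#include <iostream>
#include <string>

struct HardwareModel {
    bool vectorProcessor;    // long vector pipes available?
    int  cacheBytesPerPE;    // cache per processing element
    int  smpWaysPerNode;     // threads per node
};

// Decision rule is illustrative: prefer long vectors on a vector PE,
// cache blocking plus threading on an SMP node, plain CRS otherwise.
std::string chooseSpMVScheme(const HardwareModel& hw) {
    if (hw.vectorProcessor)
        return "jagged-diagonal storage, long inner loops";
    if (hw.smpWaysPerNode > 1)
        return "CRS, row blocks sized to " +
               std::to_string(hw.cacheBytesPerPE / 1024) +
               " KB cache, threads over blocks";
    return "plain CRS, cache-blocked rows";
}

int main() {
    HardwareModel vectorPP{true, 0, 1};
    HardwareModel smpCluster{false, 8 * 1024 * 1024, 16};
    std::cout << chooseSpMVScheme(vectorPP)   << "\n";
    std::cout << chooseSpMVScheme(smpCluster) << "\n";
}
```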
10. Example of HPC Middleware (5)
- On network-connected H/W (meta-computing)
- Optimized for each individual architecture
- Optimum load balancing (see the sketch below)
[Diagram] The analysis model space of the O(N) ab initio MD run is distributed over the network-connected machines.
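A minimal sketch of "optimum load balancing" across network-connected machines, assuming each system's sustained performance is known or measured: each MPP receives a share of the analysis space proportional to its speed. The speed ratios and element count below are made-up numbers.

```cpp
// Each network-connected machine receives a share of the analysis space
// proportional to its (estimated or measured) sustained performance.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

// Returns how many of `total` elements each machine should own.
std::vector<long> proportionalShares(long total, const std::vector<double>& speed) {
    double sum = std::accumulate(speed.begin(), speed.end(), 0.0);
    std::vector<long> share(speed.size());
    long assigned = 0;
    for (std::size_t i = 0; i < speed.size(); ++i) {
        share[i] = static_cast<long>(total * speed[i] / sum);
        assigned += share[i];
    }
    share.back() += total - assigned;  // give the rounding remainder to the last machine
    return share;
}

int main() {
    // Three MPPs with relative sustained performance 4 : 2 : 1 (made-up numbers).
    std::vector<double> speed = {4.0, 2.0, 1.0};
    for (long n : proportionalShares(7000000L, speed))
        std::cout << n << " elements\n";
}
```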
11. Example of HPC Middleware (6)
Multi-module coupling through the platform (see the sketch below).
[Diagram] Ab initio MD, classical MD, and FEM modules are coupled through the HPC platform/middleware, which provides modeling, visualization, load balancing, resource management, optimization, and data assimilation.
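A minimal sketch of multi-module coupling through a common platform: each module exposes the same small interface, and the platform drives the time loop and the data exchange. Class and method names (Module, advance, exchangeBoundaryData, runCoupled) are hypothetical; the platform services listed on the slide would hang off the same loop.

```cpp
// Each simulation module exposes the same small interface; the platform owns
// the coupling loop and would also attach visualization, load balancing,
// resource management, and so on to it.
#include <iostream>
#include <memory>
#include <vector>

struct Module {
    virtual void advance(double dt) = 0;      // one time step of this module
    virtual void exchangeBoundaryData() = 0;  // hand data to coupled modules
    virtual ~Module() = default;
};

struct ClassicalMD : Module {
    void advance(double) override { std::cout << "classical MD step\n"; }
    void exchangeBoundaryData() override { std::cout << "MD -> FEM boundary data\n"; }
};

struct FEM : Module {
    void advance(double) override { std::cout << "FEM step\n"; }
    void exchangeBoundaryData() override { std::cout << "FEM -> MD boundary data\n"; }
};

// The platform's coupling loop: advance every module, then exchange data.
void runCoupled(std::vector<std::unique_ptr<Module>>& modules, int steps, double dt) {
    for (int s = 0; s < steps; ++s) {
        for (auto& m : modules) m->advance(dt);
        for (auto& m : modules) m->exchangeBoundaryData();
    }
}

int main() {
    std::vector<std::unique_ptr<Module>> modules;
    modules.push_back(std::make_unique<ClassicalMD>());
    modules.push_back(std::make_unique<FEM>());
    runCoupled(modules, 2, 1.0e-3);
}
```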
12. PETAFLOPS on GRID from GeoFEM's Point of View
Why? When?
– Datasets (mesh, observation, result) could be distributed.
– Problem size could be too large for a single MPP system.
according to G. C. Fox, (TOP500) is about 100 TFLOPS now...
Legion
– Prof. Grimshaw (U. Virginia)
– Grid OS, Global OS
– can handle MPPs connected through a network as one huge MPP (= Super MPP)
[Diagram] MPP-A, MPP-B, MPP-C combined into one Super MPP
– optimization on each individual architecture (H/W)
– load balancing according to machine performance and resource availability
13. PETAFLOPS on GRID (cont.)
GRID + (OS) + HPC MW/PF
Environment for "Electronic Collaboration"
15 "Parallel" FEM Procedure Initial Mesh Data Partitioning Post Proc. Data Input/Output Domain Specific Algorithms/Models Matrix Assemble Linear Solvers VisualizationPre-ProcessingMainPost-Processing