Leibniz Supercomputing Centre, Garching/Munich
Matthias Brehm, HPC Group
June 16

Leibniz Supercomputing Centre (LRZ) of the Bavarian Academy of Sciences and Humanities

• Computing Centre (~175 employees) for the Munich universities
  – All kinds of IT services and support
  – Capacity computing, (virtual) servers

• Regional Computing Centre for all Bavarian universities
  – Capacity computing
  – Backup and Archiving Centre (more than 7 petabytes, 5.5 billion files)
  – Competence Centre (networks, IT management)

• National Supercomputing Centre
  – Integrated into the Gauss Centre for Supercomputing (= JSC + HLRS + LRZ), the legal entity for acting in Europe
  – High-end system (62 TF, 9726 cores)
  – Linux cluster (45 TF, 5000 cores)
  – Grid computing: active in DEISA and PRACE (1IP); WP8 (WP9) leadership: Future Technologies

• Current procurement: multi-petaflop system
  – End of 2011; contract in 2010
  – General-purpose system (Intel- or AMD-based) of thin and fat shared-memory nodes
  – Doubling of the computer cube; cave & visualization; new office space

HPC research activities

• IT management (methods, architectures, tools)
  – Service management: impact analysis, customer service management, SLA management, monitoring, process refinement
  – Virtualization
  – Operational strategies for petaflop systems

• Grids
  – Middleware (IGE, Initiative for Globus in Europe; project leader): services, coordination, provisioning
  – Grid monitoring (D-MON; resources of gLite, Globus, UNICORE)
  – Security and intrusion detection, meta-scheduling, SLAs

• Computational science
  – Munich Computational Sciences Centre (MCSC) & Munich Centre of Advanced Computing (MAC): TU Munich, Univ. Munich, Max Planck Society Garching
  – New programming paradigms for petaflop systems
  – Energy efficiency: hot-water cooling & reuse (heating of buildings); scheduling, sleep mode of idle processors, etc.
  – Automatic performance analysis and system-wide performance monitoring

• Network technologies & network monitoring

• Long-term archiving

• Talks/activities with Russia
  – LSU Moscow: cooperation (Competence Network of HPC & Bavarian Graduate School of Computational Engineering): joint courses, applications in physics, climatology, quantum chemistry, drug design
  – Steklov Institute / State University St. Petersburg: Joint Advanced Student School (JASS): modelling and simulation
  – T-Platforms: cooling technology, energy efficiency

Specific research ideas for collaboration

• Programming models and runtime support
  – PGAS (partitioned global address space): Coarray Fortran (CAF) or UPC
  – Re-implement an essential infrastructure library in CAF, e.g. ARPACK; the sparse case might be a good candidate for load balancing
  – Implement a microbenchmark set (a sketch of the idea follows this slide):
    measure quality of implementation vs. OpenMP / MPI;
    measure quality of implementation for message optimization (message aggregation etc.)
  – Investigate the potential for interoperability between CAF and UPC, CAF and OpenMP, and CAF and MPI:
    what is feasible, and what isn't? The standards don't mention this anywhere (yet)
  – Develop Fortran class libraries for parallel patterns: Fortran is presently the only "OO" and simultaneously parallel language
  – User training

• Scalable visualisation infrastructure
  – Highly scalable visualisation service for HPC
  – Remote visualization and virtualization:
    a location-independent, instant, and cost-effective framework for the analysis of HPC simulation results;
    resource allocation, account management, data transfer and data compression, advance reservation and quality of service
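To make the proposed microbenchmark concrete: since CAF is a Fortran extension, the following is a minimal sketch in C, using MPI one-sided communication (MPI-3 RMA) as a PGAS-style stand-in. It times N element-wise puts against a single aggregated put, which is exactly the message-aggregation effect the benchmark set would quantify. This is an illustrative sketch, not code from the slides; the buffer size and neighbour choice are assumptions.

/*
 * Sketch of a PGAS-style aggregation microbenchmark (hypothetical,
 * illustrative only): fine-grained remote puts vs. one aggregated put.
 */
#include <mpi.h>
#include <stdio.h>

#define N 4096

int main(int argc, char **argv)
{
    int rank, size;
    double *win_buf, local[N], t0, t1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int i = 0; i < N; i++)
        local[i] = (double)rank;

    /* Expose a window of N doubles on every rank: the "global address space". */
    MPI_Win_allocate(N * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &win_buf, &win);

    int target = (rank + 1) % size;   /* assumed neighbour pattern */

    /* Variant 1: N fine-grained puts (element-wise remote access). */
    MPI_Win_fence(0, win);
    t0 = MPI_Wtime();
    for (int i = 0; i < N; i++)
        MPI_Put(&local[i], 1, MPI_DOUBLE, target, i, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);
    t1 = MPI_Wtime();
    if (rank == 0) printf("fine-grained: %g s\n", t1 - t0);

    /* Variant 2: one aggregated put (message aggregation). */
    MPI_Win_fence(0, win);
    t0 = MPI_Wtime();
    MPI_Put(local, N, MPI_DOUBLE, target, 0, N, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);
    t1 = MPI_Wtime();
    if (rank == 0) printf("aggregated:   %g s\n", t1 - t0);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

Built with mpicc and run under mpiexec, the gap between the two timings bounds the cost of fine-grained remote access that a CAF or UPC compiler must either optimize away (by aggregating messages) or pay for; the same pair of variants, expressed in CAF itself, would give the quality-of-implementation comparison the slide proposes.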

Specific research ideas for collaboration

• Energy efficiency
  – Scheduling
  – Dynamic clock adjustment of CPU (and memory)
  – Monitoring and tuning of energy fluxes
  – Cooling technologies, energy reuse

• Performance analysis tools for HPC
  – Automatic performance monitoring and analysis
  – System-wide background monitoring: hardware performance counters, communication behaviour, I/O (a counter-reading sketch follows this slide)
  – Automatic bottleneck detection
  – (System) monitoring using map-reduce techniques

• Optimisation, scalability and porting of codes
  – Scalable and dynamic mesh generation & load balancing (more than ParMETIS)
  – Application areas: geophysics, cosmology, CFD, multi-physics
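As a concrete illustration of the hardware-counter bullet above, here is a minimal sketch assuming the PAPI library is available, using its C event-set API (PAPI_create_eventset, PAPI_add_event, PAPI_start, PAPI_stop). The instrumented loop and the chosen preset events are illustrative assumptions; PAPI_FP_OPS in particular is not available on every CPU, and most error checks are elided for brevity.

/*
 * Sketch (hypothetical): read hardware performance counters around a
 * code region, the kind of sample a system-wide background monitor
 * would collect at each monitoring interval.
 */
#include <papi.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int eventset = PAPI_NULL;
    long long counts[2];

    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI init failed\n");
        return EXIT_FAILURE;
    }
    PAPI_create_eventset(&eventset);
    PAPI_add_event(eventset, PAPI_TOT_CYC);  /* total cycles */
    PAPI_add_event(eventset, PAPI_FP_OPS);   /* floating-point ops (hardware-dependent preset) */

    PAPI_start(eventset);

    /* ... instrumented region, e.g. one monitoring interval ... */
    volatile double x = 0.0;
    for (int i = 0; i < 1000000; i++)
        x += 1e-6;

    PAPI_stop(eventset, counts);
    printf("cycles: %lld  flops: %lld\n", counts[0], counts[1]);
    return 0;
}

A system-wide background monitor of the kind sketched on this slide would run such a measurement per node at a coarse interval and forward the samples to a central service, where map-reduce-style aggregation and automatic bottleneck detection take over.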