1
Introduction to LinkSCEEM and Computing @ SESAME
15 June 2014, ibis Hotel, Amman, Jordan
Presented by Salman Matalgah, Computing Group Leader, SESAME
2
Outline:
- Who we are
- Computing @ SESAME
- IMAN1: National supercomputing center
- LinkSCEEM project
- What we can offer our community
3
Who we are
SESAME Scientific Director: Giorgio Paolucci
SESAME Computing group:
- Salman Matalgah: System & Network Admin, Computing Group Leader
- Mostafa Zoubi: Computing System Engineer
- Zubair Nawaz: LinkSCEEM
SESAME Beamlines:
- Messaoud Harfouche: Beamline Scientist (XAFS)
4
Computing @ SESAME
IT infrastructure and services:
- Systems and services (heterogeneous Windows and Linux setup)
- Networks (office, machine and beamlines)
Scientific computing:
- LinkSCEEM project resources
- SESAME resources: CPU/GPU (Tesla K20)
- IMAN1 resources: National supercomputing center
- Strong collaborations with ESRF and other SR facilities
- ASREN and JUNET
5
IMAN1 is the National supercomputing center, operated by SESAME and JAEC.
- An open door for any scientific computing needs
- Allows external, flexible access to all Jordanian researchers
- Multiple HPC platform resources are available
- Offers a continuous training program on parallel programming and HPC subjects
- Offers strong local and community support
www.iman1.jo
6
LinkSCEEM Project (www.linksceem.eu)
- SESAME is a strategic partner and WP5 leader
- LinkSCEEM project idea
- LinkSCEEM research fields
- LinkSCEEM resources
- SCORE (Synchrotron Computing-Oriented Regional Education)
- MEWEMS (Middle East Women Engaged in Mathematics and Science)
- SESAME has held many large open days @ SESAME, with 120+ participants yearly
- Successful 1st, 2nd and 3rd SESAME-LinkSCEEM summer schools (SR + HPC)
- Regional effects (infrastructure, community, defining challenges)
7
Content
- Introduction - HPC
- Available computational resources
- Access to resources
- High Performance Computing training opportunities
8
High Performance Computing
Supercomputers are large-scale computers that use many processors and a fast interconnect to solve parallel problems.
[Diagram: serial computation vs. parallel computation; scalable systems]
Does your laptop / PC support parallel CPU programming?
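For any multi-core laptop the answer is almost certainly yes. As a quick illustration (not part of the original slides), the Python sketch below checks how many CPU cores the machine exposes and runs a toy workload in parallel using only the standard library:

```python
# Minimal sketch: check the core count and run a toy computation in parallel.
# Standard library only; the "work" here is purely illustrative.
import multiprocessing as mp

def square(x):
    # Stand-in for one independent unit of work (e.g. one simulation step).
    return x * x

if __name__ == "__main__":
    cores = mp.cpu_count()
    print(f"This machine exposes {cores} CPU cores")

    with mp.Pool(processes=cores) as pool:
        results = pool.map(square, range(16))  # slices of work run on different cores
    print(results)
```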
9
High Performance Computing
Supercomputers are large-scale computers that use many processors and a fast interconnect to solve parallel problems.
See how Pelegant, beam dynamics simulation software, is distributed across a small cluster of PS3 devices (see the sketch below).
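Pelegant handles this distribution internally through MPI; purely to illustrate the pattern, here is a hedged mpi4py sketch that scatters a toy "beam" across MPI ranks and gathers the partial results. The particle array and track() function are hypothetical placeholders, not Pelegant's actual interface:

```python
# Illustrative MPI pattern: split a particle set across ranks, process, gather.
# Requires mpi4py and numpy; run e.g. with: mpiexec -n 4 python scatter_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def track(chunk):
    # Hypothetical stand-in for tracking particles through a lattice.
    return chunk * 1.01

if rank == 0:
    particles = np.arange(8000, dtype="d")       # toy "beam"
    chunks = np.array_split(particles, size)     # one slice per rank
else:
    chunks = None

local = comm.scatter(chunks, root=0)             # each rank receives its slice
results = comm.gather(track(local), root=0)      # partial results back to rank 0

if rank == 0:
    beam = np.concatenate(results)
    print(f"Tracked {beam.size} particles on {size} ranks")
```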
10
High Performance Computing
Supercomputers are large-scale computers that use many processors and a fast interconnect to solve parallel problems.
- JUQUEEN at the Juelich Supercomputing Centre in Germany (no. 8 worldwide): 458,752 cores, 5,872.0 TFlop/s
- National Supercomputer Centre of China (no. 1 worldwide): 3,120,000 cores, 54,902.4 TFlop/s
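A quick back-of-the-envelope comparison, using only the figures quoted on this slide and treating both as peak numbers:

```python
# Per-core performance implied by the slide's figures.
juqueen_tflops, juqueen_cores = 5872.0, 458752
china_tflops, china_cores = 54902.4, 3120000

print(f"JUQUEEN:            {juqueen_tflops * 1000 / juqueen_cores:.1f} GFlop/s per core")
print(f"China no. 1 system: {china_tflops * 1000 / china_cores:.1f} GFlop/s per core")
```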
11
HPC in the Eastern Mediterranean
The LinkSCEEM-1 project ran from 2008 to 2009: an assessment of HPC resources and users in the Eastern Mediterranean.
[Chart: countries by HPC resources (status Nov 2013, source www.top500.org); e.g. Saudi Arabia at 0.6%]
Need for additional computational resources (regional survey, LinkSCEEM-1)
12
HPC in the Eastern Mediterranean
The LinkSCEEM-1 project ran from 2008 to 2009: an assessment of HPC resources and users in the Eastern Mediterranean.
Conclusions: the main factors that have hindered the development of computational science in the region are
- limited access to HPC resources
- limited knowledge of HPC
- inadequate regional connectivity
- limited awareness of simulation as a scientific tool
13
LinkSCEEM-2: Establishing an HPC ecosystem in the Eastern Mediterranean
FP7, under Grant Agreement Number 261600
- Provide access to HPC resources
- Provide training in HPC
- Raise awareness of HPC research tools
- Bring international expertise into the region
14
LinkSCEEM-2: Development of an HPC ecosystem in Cyprus and the Eastern Mediterranean
- HPC user community training and support
- HPC e-infrastructure (Cy-Tera)
Both Cy-Tera and LinkSCEEM are under CaSToRC (Computation-based Science and Technology Research Center).
15
Project consortium: regional research institutions and international centres of excellence in high performance computing (HPC)
16
Content
- Introduction
- Available computational resources
- Access to resources
- High Performance Computing training opportunities
17
Cy-Tera HPC System at CyI
- Hybrid CPU/GPU Linux cluster
- Theoretical Peak Performance (TPP) = 30.5 TFlop/s
Computational power:
- 98 x 2 x Intel Westmere 6-core compute nodes (1,176 CPU cores)
- 18 x 2 x Intel Westmere 6-core + 4 x NVIDIA M2070 GPU nodes (216 CPU cores and 72 GPUs)
- 48 GB memory per node (5.56 TB total)
MPI messaging & storage access:
- 40 Gbps QDR InfiniBand
- Storage: 360 TB raw disk
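The per-node figures are consistent with the totals in parentheses; a small sketch, using only the numbers on this slide, that reproduces them:

```python
# Sanity-check of the Cy-Tera totals quoted on the slide.
cpu_nodes, gpu_nodes = 98, 18
cores_per_node = 2 * 6              # two 6-core Westmere sockets per node
gpus_per_gpu_node = 4
mem_per_node_gb = 48

print(cpu_nodes * cores_per_node)                                  # 1176 CPU cores
print(gpu_nodes * cores_per_node, gpu_nodes * gpus_per_gpu_node)   # 216 cores, 72 GPUs
print((cpu_nodes + gpu_nodes) * mem_per_node_gb / 1000)            # ~5.57 TB total memory
```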
18
EUCLID Cluster System at CyI
- 1 TFlop/s
- 12 eight-core compute nodes (4 x x3550, 8 x x3650 M2)
- 2 quad-core sockets per node, each an Intel Xeon E5440 @ 2.83 GHz
- 16 GB memory per node; total memory 0.192 TB
- 10 TB shared scratch (not backed up)
EUCLID is the training cluster for the LinkSCEEM project.
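The quoted 1 TFlop/s agrees with the node specification if the Xeon E5440 is assumed to retire 4 double-precision flops per core per cycle (an assumption of this sketch, not stated on the slide):

```python
# Rough theoretical-peak estimate for EUCLID from the slide's node specification.
nodes = 12
cores_per_node = 2 * 4        # two quad-core E5440 sockets
clock_ghz = 2.83
flops_per_cycle = 4           # assumed DP flops per core per cycle for this CPU generation

peak_tflops = nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000
print(f"~{peak_tflops:.2f} TFlop/s theoretical peak")   # ~1.09 TFlop/s
```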
19
HPC System at Bibliotheca Alexandrina
- SUN cluster with a peak performance of 12 TFlop/s
- 130 eight-core compute nodes
- 2 quad-core sockets per node, each an Intel Xeon E5440 @ 2.83 GHz
- 8 GB memory per node; total memory 1.05 TB
- 36 TB shared scratch
- Node-to-node interconnect: Ethernet & 4x SDR InfiniBand network for MPI
20
Content
- Introduction
- Available computational resources
- Access to resources
- High Performance Computing training opportunities
21
Access to HPC facilities
The regionally distributed LinkSCEEM e-infrastructure is accessed free of charge through a joint call.
- CaSToRC (Computation-based Science and Technology Research Center):
  - Cy-Tera (CPU/GPU cluster, 30 TFlop/s)
  - EUCLID (LinkSCEEM training cluster)
- BA: Sun cluster (CPU, 12 TFlop/s)
Regional collaboration currently: HP-SEE (High-Performance Computing Infrastructure for South East Europe's Research Communities) pilot access for LinkSCEEM users
22
What are the different types of access?
- Preparatory access: up to 6 months; call always open (apply here)
- Production access: up to 12 months (apply here)
23
Access to resources - the process
Proposal submission → Technical review → Scientific review → Right to reply → Resource allocation through the RAC → Notification of applicant
- Production access: process duration 2 months
- Preparatory access: process duration 2 days!
24
Success stories
- The project is now in its 4th year (duration of 4 + x years)
- The LinkSCEEM e-infrastructure is established
- 4 production access calls allocated, 2 already finished
[Charts: projects by country, 3rd call and 4th call]
25
Content
- Introduction
- Available computational resources
- Access to resources
- High Performance Computing training opportunities
26
HPC training opportunities
LinkSCEEM aims to offer training opportunities to new and experienced users.
Training efforts focus on:
- Online training opportunities
- Educational Access Program
- LinkSCEEM training events
27
Online training opportunities
The portal website:
- aggregates and filters material into distinct categories
- recommends content within categories
- provides an environment to browse content from categories while remaining within the website
28
Educational Access Program
- Access to HPC resources for educational purposes
- LinkSCEEM offers resources dedicated to this type of access
- EUCLID is used as the training cluster of the LinkSCEEM project
- Apply via hpc-support@linksceem.eu
29
LinkSCEEM training events: a total of 32 to date
30
Last years' events
Training material from past events is available on the LinkSCEEM-2 website: http://linksceem.eu/ls2/user-resources/lectures.html
31
What we can offer our community:
- Easy access to available HPC resources for scientific computing
- Dissemination and outreach activities to regional/Jordanian universities about the available HPC resources (roadshows and open days); we are ready!
- Training program on parallel programming and HPC subjects (4th summer school, 15-17 June 2014)
- Training opportunities in external HPC facilities (LinkSCEEM)
- Technical consultations and community support
32
Thanks
Presented by Salman Matalgah, on behalf of the SESAME Computing Group, SESAME
salman.matalgah@sesame.org.jo