Introduction to LinkSCEEM and SESAME
15 June 2014, ibis Hotel, Amman, Jordan
Presented by Salman Matalgah, Computing Group Leader, SESAME

Outline:
 Who we are: SESAME
 IMAN1: the national supercomputing center
 The LinkSCEEM project
 What we can offer our community

Who we are!
SESAME Scientific Director: Giorgio Paolucci
SESAME Computing Group:
 Salman Matalgah: System & Network Admin, Computing Group Leader
 Mostafa Zoubi: Computing System Engineer
 Zubair Nawaz: LinkSCEEM
SESAME Beamlines:
 Messaoud Harfouche: Beamline Scientist (XAFS)

SESAME
 IT infrastructure and services:
 Systems and services (heterogeneous Windows and Linux setup)
 Networks (office, machine and beamlines)
 Scientific computing:
 LinkSCEEM project resources
 SESAME resources: CPU/GPU (Tesla K20)
 IMAN1 resources: the national supercomputing center
 Strong collaborations with ESRF and other synchrotron radiation facilities
 ASREN and JUNET

 IMAN1 is the national supercomputing center, operated by SESAME and JAEC
 An open door for any scientific computing needs
 Allows external, flexible access for all Jordanian researchers
 Multiple HPC platform resources are available
 Offers a continuous training program on parallel programming and HPC subjects
 Offers strong local and community support

LinkSCEEM Project
SESAME is a strategic partner and the WP5 leader.
 LinkSCEEM project idea
 LinkSCEEM research fields
 LinkSCEEM resources
 SCORE (Synchrotron Computing-Oriented Regional Education)
 MEWEMS (Middle East Women Engaged in Mathematics and Science)
 SESAME has held many large open events, with 120+ participants yearly
 Successful 1st, 2nd and 3rd SESAME-LinkSCEEM summer schools (SR + HPC)
 Regional impact (infrastructure, community, defining challenges)

Content  Introduction - HPC  Available computational resources  Access to Resources  High Performance Computing training opportunities

High Performance Computing
Supercomputers are large-scale computers that use many processors and a fast interconnect to solve problems in parallel. [Diagram: serial computation vs. parallel computation]
 Scalable systems
 Does your laptop / PC support parallel CPU programming?
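To make the serial-versus-parallel distinction concrete, here is a minimal OpenMP sketch in C (an illustration added to this writeup, not taken from the slides; it assumes a compiler with OpenMP support, e.g. GCC). Any multi-core laptop can run it, which answers the question above:

/* serial_vs_parallel.c: sum a series using all available cores.
   Compile (assuming GCC): gcc -fopenmp -O2 serial_vs_parallel.c -o sum */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const long N = 100000000L;
    double sum = 0.0;

    /* OpenMP splits the loop iterations across the available CPU
       cores; reduction(+:sum) combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= N; i++) {
        sum += 1.0 / (double)i;
    }

    printf("max threads: %d, harmonic sum: %f\n",
           omp_get_max_threads(), sum);
    return 0;
}

Removing the #pragma line gives the serial version; on a typical quad-core machine the parallel loop finishes roughly four times faster, which is the scalability idea behind much larger HPC systems.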

High Performance Computing
Supercomputers are large-scale computers that use many processors and a fast interconnect to solve problems in parallel.
See how Pelegant, a parallel beam-dynamics simulation code, can be distributed across a small cluster of PS3 devices.

High Performance Computing
Supercomputers are large-scale computers that use many processors and a fast interconnect to solve problems in parallel.
JUQUEEN at the Jülich Supercomputing Centre in Germany (no. 8 worldwide): 458,752 cores, 5,872.0 TFlop/s.
Tianhe-2 at the National Supercomputer Centre of China (no. 1 worldwide): 3,120,000 cores, 54,902.4 TFlop/s.

HPC in the Eastern Mediterranean
 The LinkSCEEM-1 project carried out an assessment of HPC resources and users in the Eastern Mediterranean countries.
[Charts: HPC resources by country (status Nov 2013); regional survey (LinkSCEEM-1) on the need for additional computational resources; e.g. Saudi Arabia 0.6%]

HPC in the Eastern Mediterranean
 The LinkSCEEM-1 project carried out an assessment of HPC resources and users in the Eastern Mediterranean.
Conclusions: the main factors that have hindered the development of computational science in the region are:
 limited access to HPC resources
 limited knowledge of HPC
 inadequate regional connectivity
 limited awareness of simulation as a scientific tool

LinkSCEEM-2: Establish an HPC ecosystem in the Eastern Mediterranean
FP7, under Grant Agreement Number:
 Provide access to HPC resources
 Provide training in HPC
 Raise awareness of HPC research tools
 Bring international expertise into the region

LinkSCEEM-2: Development of an HPC ecosystem in Cyprus and the Eastern Mediterranean
[Diagram: the ecosystem links the HPC user community, training and support, and the Cy-Tera HPC e-infrastructure]
Both Cy-Tera and LinkSCEEM are run under CaSToRC (the Computation-based Science and Technology Research Center).

Project consortium  Regional research institutions and international centres of excellence in high performance computing (HPC)

Content  Introduction  Available computational resources  Access to Resources  High Performance Computing training opportunities

Cy-Tera HPC System at CyI
 Hybrid CPU/GPU Linux cluster
 Theoretical Peak Performance (TPP) = 30.5 TFlop/s
 Computational power:
 98 x 2 x Intel Westmere 6-core compute nodes (1,176 CPU cores)
 18 x 2 x Intel Westmere 6-core + 4 x NVIDIA M2070 GPU nodes (216 CPU cores and 72 GPUs)
 48 GB memory per node (5.56 TB total)
 MPI messaging & storage access:
 40 Gbps QDR InfiniBand
 Storage: 360 TB raw disk
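As a sketch of the MPI-style programming such a cluster is built for (an illustrative example, not from the slides; it assumes an MPI implementation such as Open MPI or MPICH is installed), each process below reports the node it landed on, which makes the multi-node layout visible:

/* mpi_hello.c: each MPI rank reports the node it runs on.
   Compile: mpicc mpi_hello.c -o mpi_hello
   Run across 24 processes: mpirun -np 24 ./mpi_hello */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
    MPI_Get_processor_name(node, &len);     /* host node name      */

    printf("rank %d of %d running on %s\n", rank, size, node);

    MPI_Finalize();
    return 0;
}

On an InfiniBand cluster such as Cy-Tera, the same MPI calls are carried over the QDR fabric with no change to the source code; the interconnect only affects how fast the messages move.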

EUCLID Cluster System at CyI
 1 TFlop/s
 12 eight-core compute nodes (4 x x3550, 8 x x3650 M2)
 2 quad-core sockets per node, each an Intel Quad Xeon 2.83 GHz
 16 GB memory per node; 0.192 TB total memory
 10 TB shared scratch (not backed up)
EUCLID is the training cluster for the LinkSCEEM project.

HPC System at Bibliotheca Alexandrina
 Sun cluster with a peak performance of 12 TFlop/s
 130 eight-core compute nodes
 2 quad-core sockets per node, each an Intel Quad Xeon 2.83 GHz
 8 GB memory per node; 1.05 TB total memory
 36 TB shared scratch
 Node-to-node interconnect: Ethernet & 4x SDR InfiniBand network for MPI

Content  Introduction  Available computational resources  Access to Resources  High Performance Computing training opportunities

Access to HPC facilities
The regionally distributed LinkSCEEM e-infrastructure can be accessed free of charge through a joint call.
 CaSToRC (Computation-based Science and Technology Research Center):
 Cy-Tera (CPU/GPU cluster, 30 TFlop/s)
 EUCLID (LinkSCEEM training cluster)
 BA:
 Sun cluster (CPU, 12 TFlop/s)
 Regional collaboration:
 currently HP-SEE (High-Performance Computing Infrastructure for South East Europe's Research Communities) pilot access for LinkSCEEM users

What are the different types of access?
 Preparatory access: up to 6 months; the call is always open (apply here)
 Production access: up to 12 months (apply here)

Access to resources: the process
Proposal submission → technical review → scientific review → right to reply → resource allocation through the RAC → notification of the applicant
 Production access: process duration 2 months
 Preparatory access: process duration 2 days!

Success stories
 The project is now in its 4th year (duration of 4 + x years)
 The LinkSCEEM e-infrastructure is established
 4 production access calls allocated; 2 already finished
[Charts: projects by country, 3rd and 4th calls]

Content  Introduction  Available computational resources  Access to Resources  High Performance Computing training opportunities

HPC training opportunities  LinkSCEEM aims to offer training opportunities to new and experienced users  Training efforts focus on  Online training opportunities  Educational Access Program  LinkSCEEM training events

Online training opportunities: the portal website
 Attempts to aggregate and filter material into distinct categories
 Recommends content within categories
 Provides an environment to browse content from categories while remaining within the website

Educational Access Program
 Access to HPC resources for educational purposes
 LinkSCEEM offers resources dedicated to this type of access
 Euclid is used as the training cluster of the LinkSCEEM project
 Apply via

LinkSCEEM training events: a total of 32 to date.

Last years' events
Training material from past events is available on the LinkSCEEM-2 website: resources/lectures.html

What we can offer our community:
 Easy access to available HPC resources for scientific computing
 Dissemination and outreach activities to regional/Jordanian universities about the available HPC resources (roadshows and open days); we are ready!
 A training program on parallel programming and HPC subjects (4th summer school, June 2014)
 Training opportunities at external HPC facilities (LinkSCEEM)
 Technical consultations and community support

Thanks
Presented by Salman Matalgah on behalf of the SESAME Computing Group, SESAME.