ISS-AliEn and ISS-gLite Adrian Sevcenco RO-LCG 2011 WORKSHOP Applications of Grid Technology and High Performance Computing in Advanced Research

Main directions for R&D activity
Space physics:
– cosmic rays
– high energy physics
– nuclear astrophysics and particles
– space plasma and magnetometry
Cosmology
General theoretical and mathematical physics
Gravitation and microgravitation
Space technology:
– engineering for space research and applications

ISS Computing Facilities
AliEn cluster – ALICE dedicated cluster
gLite cluster – gLite middleware cluster; VO ALICE so far
RoSpaceGrid cluster – Space Plasma and Magnetometry Group (GPSM), ISS
PlanckGrid cluster –

ISS Computing Facilities
392 CPU cores
740 GB memory
140 TB storage space
~30% of the total Romanian Tier 2 Federation
~2% of ALICE computing (over the last year)

ISS-AliEn – 1-year cluster activity

ISS-AliEn

ISS-gLite

Storage status

RoSpaceGRID – PECS-ESA
Romanian GRID middleware repository for Space Science Applications
The RoSpaceGRID framework is a GRID implementation consisting of clusters and laboratories with distributed computing resources around the world, to which computing jobs can be submitted for scheduled execution. RoSpaceGRID is deployed at the Institute of Space Science (ISS) for the PLANCK, CLUSTER II and VENUS EXPRESS space missions. The project is established for the benefit of, and in cooperation with, the scientific communities of these three missions.
R&D GRID

Future developments
Main data traffic: worker nodes (WNs) – local storage
Upgrade the local network infrastructure to 10 GigE (Dec 2011 – Jan 2012)

CUDA parallel computing
Compute Unified Device Architecture – NVIDIA
General-purpose parallel computing for GPUs
Requirements:
ISA (Instruction Set Architecture)
NVIDIA driver
CUDA Toolkit (CUDA SDK)
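
To make the requirements concrete, here is a minimal, illustrative CUDA program (a sketch, not taken from the slides): a kernel that doubles every element of a device array. It assumes only a CUDA-capable GPU, the NVIDIA driver, and the CUDA Toolkit; compile with nvcc, e.g. nvcc double.cu -o double.

    // Minimal CUDA sketch (illustrative; names are assumptions):
    // a kernel that doubles each element of a device array.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void doubleElements(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            data[i] *= 2.0f;                            // guard against overrun
    }

    int main() {
        const int n = 1 << 20;                          // 1M elements
        float *d_data = nullptr;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));       // placeholder input

        int threads = 256;                              // threads per block
        int blocks  = (n + threads - 1) / threads;      // enough blocks to cover n
        doubleElements<<<blocks, threads>>>(d_data, n);
        cudaDeviceSynchronize();                        // wait for the kernel

        printf("kernel status: %s\n",
               cudaGetErrorString(cudaGetLastError())); // basic error check
        cudaFree(d_data);
        return 0;
    }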

CUDA parallel computing – Execution Model
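
In the execution model, a kernel launch creates a grid of thread blocks, each holding threads that can cooperate through per-block shared memory. The sketch below (with hypothetical matrix dimensions) shows how the built-in variables blockIdx, blockDim and threadIdx map threads onto a 2D problem:

    // Sketch of the execution model: a grid of blocks, each a 2D tile
    // of threads; every thread computes one matrix element.
    __global__ void addMatrices(const float *a, const float *b, float *c,
                                int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;  // column index
        int y = blockIdx.y * blockDim.y + threadIdx.y;  // row index
        if (x < width && y < height)
            c[y * width + x] = a[y * width + x] + b[y * width + x];
    }

    // Launch: 16x16 threads per block, enough blocks to tile the matrix.
    // dim3 block(16, 16);
    // dim3 grid((width  + block.x - 1) / block.x,
    //           (height + block.y - 1) / block.y);
    // addMatrices<<<grid, block>>>(d_a, d_b, d_c, width, height);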

CUDA parallel computing – Performance optimizations
Expose as much parallelism as possible
Optimize memory usage to achieve maximum memory throughput
Optimize instruction usage to achieve maximum instruction throughput
Maximize occupancy to hide latency
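
As one concrete illustration of the memory-throughput point (an assumed example, not from the slides): with an array-of-structures layout, threads in a warp read strided addresses, while a structure-of-arrays layout lets consecutive threads read consecutive floats, so the accesses coalesce into far fewer memory transactions.

    // Coalesced vs. uncoalesced access (illustrative sketch).

    struct Particle { float x, y, z; };       // array-of-structures layout

    __global__ void scaleAoS(Particle *p, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            p[i].x *= 2.0f;                   // 12-byte stride: poor coalescing
    }

    __global__ void scaleSoA(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] *= 2.0f;                     // consecutive floats: coalesced
    }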

Promising Future
New building: National Center for Space Science and Technology
Description:
Basement – Data Center, clean room, utilities
1st floor – electronics lab
2nd, 3rd floors – offices
Technological roof – telescopes, satellite antennas, space science experiments
Status: under construction

Promising Future (up to 2012)
Basement-level Data Center
Infrastructure:
Redundant 2× UPS, 250 kVA
Redundant cooling system
650 kVA generator
Computing facilities:
2000 cores – ALICE (AliEn + WLCG)
1000 cores – ESA missions
500 cores – IAF (ISS Analysis Facility)
250 cores – new VOs (ANTARES, etc.)
100 cores – R&D GRID (testing new GRID technologies, spin-offs, etc.)
Storage:
600 TB – ALICE
250 TB – ESA missions
100 TB – IAF
150 TB – new VOs

Thank you for your attention!
