Cybersar
Gianluigi Zanetti

Cybersar: a cooperative effort for a shared HPC facility
Dual purpose:
– Computational support to RTD
– RTD on computational infrastructure
Members of the co-op: CRS4, INAF, INFN, UNICA, UNISS, Nice, Tiscali
Partially funded by MUR (March 2006 – December 2008)
Supported by RAS (Sardinian Regional Authority): access to dark fibers & regional network

Cybersar from afar 1/5
Aggregate resources: 1,224 CPU cores, 2.5 TB RAM, 211 TB disk.
Per-site resources (map callouts):
– 488 cores, 1 TB RAM, 100 TB disk, GbE + IB interconnect
– 408 cores, 0.8 TB RAM, 66 TB disk, GbE + IB interconnect
– 256 cores, 0.5 TB RAM, 32 TB disk, GbE interconnect
– 72 cores, 0.14 TB RAM, 13 TB disk, IB interconnect
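The per-site callouts can be cross-checked against the quoted totals. A minimal Python sketch follows; the site labels are placeholders, since the slide does not say which partner each callout belongs to:

```python
# Illustrative sanity check: per-site figures from the slide should sum to the totals.
# Site labels ("site_a" ... "site_d") are placeholders, not the real partner names.
sites = {
    "site_a": {"cores": 488, "ram_tb": 1.0,  "disk_tb": 100},
    "site_b": {"cores": 408, "ram_tb": 0.8,  "disk_tb": 66},
    "site_c": {"cores": 256, "ram_tb": 0.5,  "disk_tb": 32},
    "site_d": {"cores": 72,  "ram_tb": 0.14, "disk_tb": 13},
}

totals = {k: sum(s[k] for s in sites.values()) for k in ("cores", "ram_tb", "disk_tb")}
print(totals)
# cores: 1224, disk_tb: 211, ram_tb: ~2.44 (the slide rounds RAM up to 2.5 TB)
```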

Cybersar from afar 2/5 (map): aggregate resources of 1,224 CPU cores, 2.5 TB RAM and 211 TB disk.

Cybersar from afar 3/5 (map slide).

Cybersar from afar 4/5: RTR DWDM network, 2.5 Gb/s lambdas, dark-fiber core.

Cybersar from afar 5/5: DWDM network, 2.5 Gb/s lambdas, Janna dark fibers.

Geographical latencies, or: have bandwidth, can wait (site resource map as above).

Have bandwidth, can wait 1/7: latencies
– InfiniBand: ~2 μs
– GbE: ~10 μs
– Disk (10K rpm): ~8 ms
– Inter-site links (map): ~0.5–1 ms
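A rough feel for the slide's claim comes from the bandwidth-delay product: with the 2.5 Gb/s lambdas quoted later in the deck and the ~0.5–1 ms one-way latencies quoted here, only a modest amount of data needs to be in flight to keep a link full. A minimal sketch, assuming those figures:

```python
# Back-of-the-envelope bandwidth-delay product (illustrative only).
# Assumes the 2.5 Gb/s lambdas and the ~0.5-1 ms one-way latencies from the slides.
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that must be 'in flight' to keep the link busy for one round trip."""
    return bandwidth_bps * rtt_s / 8

for rtt_ms in (1.0, 2.0):  # ~0.5-1 ms one way -> roughly 1-2 ms round trip
    mb = bdp_bytes(2.5e9, rtt_ms / 1e3) / 1e6
    print(f"RTT {rtt_ms} ms -> {mb:.2f} MB in flight")
# Roughly 0.3-0.6 MB: a transfer window of that size keeps the lambda saturated,
# which is the sense in which one can "have bandwidth and wait" out the latency.
```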

Have bandwidth, can wait 2/7 (figure slide).

Have bandwidth, can wait 3/7 – driving application I: large-scale virtual observatory for deep-space exploration.

Have bandwidth, can wait 4/7 – driving application II: ensemble forecast system for "extreme" meteorological events.

Have bandwidth, can wait 5/7 – driving application III: wavefield migration and Monte Carlo imaging of deep underground structures.

Have bandwidth, can wait 6/7 – driving application IV: immersive exploration of large-scale scientific and engineering data.

Have bandwidth, can wait 7/7 – driving application V: driving a (quasi-)holographic display.

Dynamic resource allocation (site resource map as above).

Collaborating computing centers, T = T0 (layer diagram): a shared network infrastructure, computing HW at each site, and per-site software stacks (OS, middleware, application).

Collaborating computing centers, T = T0 + some months (layer diagram): the same network infrastructure and computing HW, with middleware now deployed on a further site's OS as well.

Layer diagram: physical layer; VO computing facility layer (worker nodes, portal and scheduler for each VO); VO apps layer.

Layer diagram with the control plane added: the physical layer hosts VO computing facilities VOCF-1 … VOCF-n (each with worker nodes, a portal and a scheduler); the Control Plane keeps VOCF blueprints and VOCF running images and operates them through an Activator, a Hibernator and a VOCF Manager.
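The slides name the control-plane components but not their interfaces, so the following sketch of a VOCF blueprint and a toy Activator is purely hypothetical: the data layout, image names and function bodies are illustrative assumptions, not the project's actual format.

```python
# Hypothetical sketch of a VOCF blueprint and Activator; not the Cybersar code.
from dataclasses import dataclass

@dataclass
class NodeSpec:
    role: str    # "portal", "scheduler" or "wn" (worker node)
    count: int
    image: str   # image the role boots from (names below are made up)

# A "VOCF blueprint": the template a VO computing facility is instantiated from.
vocf_blueprint = [
    NodeSpec(role="portal",    count=1,  image="vocf-portal.img"),
    NodeSpec(role="scheduler", count=1,  image="vocf-scheduler.img"),
    NodeSpec(role="wn",        count=16, image="vocf-wn.img"),
]

def activate(blueprint):
    """Toy Activator: walk the blueprint and boot each role on the physical layer."""
    for spec in blueprint:
        for i in range(spec.count):
            # In the real system this step would be a web-service call to a
            # per-host VM wrapper; here we only log the intent.
            print(f"boot {spec.role}-{i} from {spec.image}")

activate(vocf_blueprint)
```

A Hibernator would do the inverse walk, snapshotting each node back into a "running image" so the VOCF can be resumed later.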

Host deployment vs. VM deployment (diagram): the same worker node / portal / scheduler set, deployed either directly on physical hosts or as virtual machines on a dedicated VLAN.

Cybersar control plane
Clear separation between state information and control logic.
Implemented as a SOA:
– All components wrapped in a web service (WS)
– State information -> WSRF
– Control logic -> BPEL workflows
– VMs (Xen) encapsulated by a WSRF service (VMH) on each compute host
– Independent communication network to the components
– Low-level component control via IPMI/SNMP, wrapped by WS
Status:
– Basic deployment system, Activator and VOCF manager
– A testbed of 16 dual-Opteron nodes
– Xen 3 (and KVM) support
– Started testing gLite WN and CE
See also:
– Virtual Clusters for Grid Communities, I. Foster et al., CCGrid.
– An Edge Services Framework (ESF) for EGEE, LCG, and OSG, A. Rana et al., CHEP (Computing in High Energy and Nuclear Physics) 2006, Mumbai, February 13–17.
– SOA based control plane for virtual clusters, P. Anedda et al., IEEE XHPC'07, 2007.
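To give a feel for the kind of low-level operations that end up wrapped behind web services, here is a minimal Python sketch using standard tools (ipmitool for IPMI power control, xm for Xen 3 domains). Host names and configuration paths are made up, and this is not the project's code, only an illustration of the calls the slide alludes to.

```python
# Illustrative only: low-level host/VM control of the sort the control plane wraps in WS.
import subprocess

def ipmi_power_on(bmc_host: str, user: str, password: str) -> None:
    """Power on a compute host through its BMC (requires ipmitool on the path)."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", user, "-P", password, "chassis", "power", "on"],
        check=True,
    )

def xen_create(config_path: str) -> None:
    """Start a Xen 3 guest from its domain configuration file."""
    subprocess.run(["xm", "create", config_path], check=True)

# Hypothetical usage (host and config names are placeholders):
# ipmi_power_on("node01-bmc.example", "admin", "secret")
# xen_create("/etc/xen/vocf-wn01.cfg")
```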

Cyber-infrastructure for research in Sardinia
– Dedicated to the support of scientific & technological research
– Based on high-speed networks
– Network core on dark fibers
– An excellent platform to support research on optical switching, Bandwidth Unlimited Computing and the computing control plane

Questions?