The cooperation of Forschungszentrum Karlsruhe GmbH and Universität Karlsruhe (TH) – StorNextFS, a fast global file system in a heterogeneous cluster environment.

Presentation transcript:

KIT – The cooperation of Forschungszentrum Karlsruhe GmbH and Universität Karlsruhe (TH)
StorNextFS, a fast global file system in a heterogeneous cluster environment
Frank Schmitz, Steinbuch Centre for Computing

Slide 2 – Karlsruhe Institute of Technology (KIT)
- Universität Karlsruhe (TH): 11 faculties, 120 institutes, 4,000 employees, 18,500 students, budget of 250 million
- Forschungszentrum Karlsruhe: 10 programs, 21 large institutes, 4,000 employees, budget of 310 million
- The two sites are about 10 km (15 minutes) apart
- KIT joins research, education, management structures, institutes, services and infrastructure

Slide 3 – Motivation for a global file system and virtualisation
- Existing Linux/Unix clusters, Windows environments and HPC systems such as vector computers and an InfiniBand cluster
- New processors include virtualisation technology (Vanderpool, Pacifica)
- Some tasks cannot be solved with Linux (e.g. large Excel sheets and other Microsoft-based applications), so Windows is needed
- Data in a global file system must be accessible from various operating systems and hardware platforms (IBM, Intel, AMD, Sun, NEC); a toy illustration follows this slide
- After six months of testing (starting in early 2006) in a heterogeneous SAN environment, we found StorNextFS (SNFS) from Quantum/ADIC to be the best solution for KIT
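To make the "accessible from various operating systems" requirement concrete, here is a minimal, purely illustrative sketch. It assumes an SNFS volume is already mounted at /stornext/snfs1 on two Linux clients and that a node named rx220-node01 is reachable via ssh; the mount point and node name are hypothetical, not taken from the slides. It only demonstrates the shared-namespace idea: a file written on one node is immediately visible on another node without copying.

```python
# Hypothetical illustration of a shared SNFS namespace across two client nodes.
# Assumptions: the file system is mounted at /stornext/snfs1 on both nodes and
# the second node is reachable via ssh; names and paths are illustrative only.

import subprocess
from pathlib import Path

MOUNT_POINT = Path("/stornext/snfs1")   # assumed SNFS mount point
REMOTE_NODE = "rx220-node01"            # hypothetical Linux cluster node

def check_shared_namespace() -> None:
    probe = MOUNT_POINT / "snfs_probe.txt"
    probe.write_text("written on the local node\n")  # write through the local SNFS client
    # Read the same path on another node: with a global file system no copy is needed.
    remote = subprocess.run(
        ["ssh", REMOTE_NODE, f"cat {probe}"],
        capture_output=True, text=True, check=True,
    )
    assert remote.stdout == "written on the local node\n"
    print("Both nodes see the same file through the global file system.")

if __name__ == "__main__":
    check_shared_namespace()
```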

Slide 4 – Ideas in 2006
- One global and fast file system to meet all needs (StorNextFS, SAN-FS, SAM-QFS, NEC GFS, PVFS, Sistina GFS, CXFS, Celerra HighRoad, …)
- Integration into a low-performance Grid file system or something like AFS
- InfiniBand, iSCSI, FC-SAN, gateway solutions
- First steps in 2004: gLite as the middleware layer, but …
- OGSA-compliant Grid services, Globus Toolkit 4
- Resource brokerage (TORQUE, LSF, Condor, LoadLeveler, …)
- Security: a Kerberos 5 based solution

Slide 5 – Since early summer 2006
- The Grid middleware GT4, gLite and UNICORE are running in the CampusGrid environment (gLite and GT4 on Linux only)
- GT4 will soon be available for the AIX and SUPER-UX operating systems
- Integration into a low-performance Grid file system or something like AFS
- InfiniBand, iSCSI, FC-SAN, gateway solutions
- First steps in 2004: gLite as the middleware layer, but …
- OGSA-compliant Grid services, Globus Toolkit 4
- Resource brokerage (TORQUE, LSF, Condor, LoadLeveler, …)
- Security: a Kerberos 5 based solution

Slide 6 – Hardware structure for the new StorNextFS version 3.0 (redundant) in the CampusGrid environment
Diagram contents: two FSC Celsius metadata servers (4 Gbit/s each); clients: 96 FSC RX 220, IBM eServer e325, IBM p630 (POWER4), IBM xSeries, Sun Fire V20z, Intel whitebox, and IBM JS20/HS20/HS21 blades behind a Brocade switch; fabric: Brocade SilkWorm FC switches and a QLogic/SilverStorm InfiniBand director with IB-FC gateway; InfiniBand at 10/20 Gbit/s; storage: EMC CLARiiON CX700 with S-ATA and FC disks. Further link speeds shown: 2 x 8 Gbit/s and 32 Gbit/s. Performance upgrade as soon as possible (see the rough numbers below).
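To see why this slide flags a performance upgrade, the nominal link figures can be converted into rough usable throughput. The following is a minimal sketch, not from the slides: it assumes the common FibreChannel rule of thumb of about 100 MByte/s of payload per nominal Gbit/s (8b/10b encoding) and my reading that the two 8 Gbit/s figures belong to the IB-FC gateway paths that slide 9 shows doubled to 16 Gbit/s.

```python
# Rough conversion of the nominal link speeds in the diagram into usable
# throughput. Rule of thumb (assumption): ~100 MByte/s payload per 1 Gbit/s
# of nominal FibreChannel rate, due to 8b/10b encoding.

MBYTE_PER_GBIT = 100  # approximate usable MByte/s per nominal Gbit/s

links_gbit = {
    "gateway paths, slide 6 (2 x 8 Gbit/s)":   2 * 8,
    "gateway paths, slide 9 (2 x 16 Gbit/s)":  2 * 16,
    "storage front end (32 Gbit/s aggregate)": 32,
}

for name, gbit in links_gbit.items():
    print(f"{name}: ~{gbit * MBYTE_PER_GBIT / 1000:.1f} GByte/s usable")

# With 2 x 8 Gbit/s the paths to the SAN top out around 1.6 GByte/s, i.e. at
# roughly the CLARiiON controller limit quoted on the Internals slide
# (1.5 GByte/s); doubling them to 16 Gbit/s leaves the disk array as the
# remaining bottleneck, which matches the conclusions on the Results slide.
```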

Slide 7 – Performance of a small InfiniBand cluster using an FC gateway from QLogic (benchmark chart)

Slide 8 – Performance of a small InfiniBand cluster using an FC gateway from QLogic (benchmark chart)

Slide 9 – Hardware structure for the new StorNextFS version 3.0 (redundant) in the CampusGrid environment
Diagram contents: same components as on slide 6 (two FSC Celsius metadata servers at 4 Gbit/s each, 96 FSC RX 220, IBM eServer e325, IBM p630 (POWER4), IBM xSeries, Sun Fire V20z, Intel whitebox, IBM JS20/HS20/HS21 blades, Brocade SilkWorm FC switches, QLogic/SilverStorm InfiniBand director with IB-FC gateway, InfiniBand at 10/20 Gbit/s, EMC CLARiiON CX700 with S-ATA and FC disks); the two 8 Gbit/s links are now 16 Gbit/s, the 32 Gbit/s aggregate is unchanged.

Slide 10 – Performance of a small InfiniBand cluster using an FC gateway from QLogic (benchmark chart)

Slide 11 – Internals
Limitations (worked through in the sketch below):
- A redundant CLARiiON controller running in secure mode is limited to 1.5 GByte/s for I/O
- Because we use two metadata LUNs and four data LUNs, the write limit on RAID-5 is 1.4 GByte/s (350 MByte/s per data LUN)
- InfiniBand DDR runs at 20 Gbit/s; the effective data rate is limited to 2 GByte/s (full duplex) because of the 8B/10B encoding scheme
- PCIe x8 is likewise limited to 2 GByte/s
- A single node using one HCA can achieve up to 250 MByte/s
StorNextFS 3.0:
- One metadata server can handle thousands of clients
- Metadata is sent via Ethernet
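Since the slide quotes several limits in different units, a short worked calculation may help. This is a minimal sketch that only re-derives the numbers already stated above (8B/10B payload fraction, LUN aggregation, controller cap); it is not a measurement.

```python
# Minimal sketch of the bandwidth arithmetic on the "Internals" slide.
# The figures are taken from the slide; the formulas are standard.

# InfiniBand DDR 4x: 20 Gbit/s signalling, 8B/10B encoding -> 80% payload.
ib_ddr_signalling_gbit = 20
ib_payload_gbyte = ib_ddr_signalling_gbit * 8 / 10 / 8   # = 2.0 GByte/s per direction

# Four data LUNs at ~350 MByte/s each limit RAID-5 writes.
data_luns = 4
per_lun_mbyte = 350
raid5_write_gbyte = data_luns * per_lun_mbyte / 1000     # = 1.4 GByte/s

# The redundant CLARiiON controller in secure mode caps total I/O.
clariion_cap_gbyte = 1.5

print(f"InfiniBand DDR effective:  {ib_payload_gbyte:.1f} GByte/s (same as the PCIe x8 limit)")
print(f"RAID-5 write limit:        {raid5_write_gbyte:.1f} GByte/s")
print(f"CLARiiON controller limit: {clariion_cap_gbyte:.1f} GByte/s")
bottleneck = min(ib_payload_gbyte, raid5_write_gbyte, clariion_cap_gbyte)
print(f"Expected bottleneck:       {bottleneck:.1f} GByte/s (the data LUNs, closely followed by the controller)")
```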

Slide 12 – Results
Advantages of VMware virtualisation:
- No need to change existing environments
- Offers more, different services
- Support for Windows and Unix programs, including a unified file system solution
- ADS can be used as the user administration solution for KIT
Disadvantages of virtualisation:
- Possibly reduced performance compared to a native installation
- Little overhead if the CCN have to run all the time
StorNextFS:
- StorNextFS performs very well; the bottlenecks are the EMC CLARiiON controller (only one system was used for the benchmark) and the four FC data LUNs
- The Windows ADS integration is very well done
- The Windows performance of version 2.7 was as expected
- Working in a VMware ESX environment is no problem
- The EMC CLARiiON and the IB-FC gateway are the limiting factors
- Installation of StorNextFS is easy