EGEE is a project funded by the European Union under contract IST-2003-508833. Test di GPFS a Catania (GPFS Tests at Catania) – IV Workshop INFN Grid, Bari, 25-27 October 2004. www.eu-egee.org


Test di GPFS a Catania (GPFS Tests at Catania) – IV Workshop INFN Grid, Bari, 25-27 October 2004. Rosanna Catania, INFN Catania

Introducing GPFS
The General Parallel File System (GPFS) for Linux on xSeries® is a high-performance shared-disk file system that can provide data access from all nodes in a Linux cluster environment. Parallel and serial applications can readily access shared files using standard UNIX® file system interfaces, and the same file can be accessed concurrently from multiple nodes. GPFS provides high availability through logging and replication, and can be configured for failover from both disk and server malfunctions.

What does GPFS do? Why use GPFS?
GPFS presents one file system to many nodes – it appears to the user as a standard Unix filesystem – and allows the nodes concurrent access to the same data.
GPFS offers:
- scalability
- high availability and recoverability
- high performance
GPFS highlights:
- Improved system performance!
- Assured file consistency
- High recoverability and increased data availability
- Enhanced system flexibility
- Simplified administration

System requirements
- Upgrade the kernel to the required level.
- Ensure the glibc level is at the required level or greater.
- Proper authorization must be granted to all nodes in the GPFS cluster to use the alternative remote shell and remote copy commands (at Catania we use SSH everywhere).
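As an illustration, the prerequisite check and the SSH trust setup could look like the sketch below; the host names gpfs-srv1 and gpfs-wn1 are hypothetical examples, not the nodes used at Catania.

  # check the kernel and glibc levels on each node
  uname -r
  rpm -q glibc
  # on the originator node: create an SSH key and push it to every other node,
  # so that ssh and scp work without a password prompt
  ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
  cat ~/.ssh/id_rsa.pub | ssh root@gpfs-srv1 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
  cat ~/.ssh/id_rsa.pub | ssh root@gpfs-wn1 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
  # verify non-interactive remote shell and remote copy
  ssh gpfs-srv1 hostname
  scp /etc/hosts gpfs-srv1:/tmp/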

RPMs:
- src i386.rpm
- rsct.basic i386.rpm
- rsct.core i386.rpm
- rsct.core.utils i386.rpm
- gpfs.base i386.rpm
- gpfs.docs noarch.rpm
- gpfs.gpl noarch.rpm
- gpfs.msg.en_US noarch.rpm

RSCT: Reliable Scalable Cluster Technology
RSCT is a set of software components that together provide a comprehensive clustering environment for Linux. It is the infrastructure used to provide clusters with improved system availability, scalability, and ease of use.

RSCT peer domain: configuration
- Ensure IP connectivity between all nodes of the peer domain.
- Prepare the initial security environment on each node that will be in the peer domain using the preprpnode command: preprpnode -k originator_node ip_server1
- Create a new peer domain definition by issuing the mkrpdomain command: mkrpdomain -f allnodes.txt domain_name
- Bring the peer domain online using the startrpdomain command: startrpdomain domain_name
- Verify your configuration: lsrpdomain domain_name and lsrpnode -a
(A consolidated example of this sequence is sketched below.)
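Put together, the sequence might look like the following sketch; the node names (gpfs-srv1, gpfs-srv2, gpfs-wn1) and the domain name gpfs_domain are placeholders, not the names used at Catania.

  # allnodes.txt lists one hostname per line: gpfs-srv1, gpfs-srv2, gpfs-wn1
  # on every node that will join the domain, authorize the originator node
  preprpnode gpfs-srv1
  # on the originator node: define the peer domain from the node list
  mkrpdomain -f allnodes.txt gpfs_domain
  # bring the domain online and verify it
  startrpdomain gpfs_domain
  lsrpdomain
  lsrpnode -a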

GPFS: Installation
On each node copy the self-extracting image from the CDROM, invoke it and accept the license agreement:
  ./gpfs_install _i386 --silent
  rpm -ivh gpfs.base i386.rpm gpfs.docs noarch.rpm gpfs.gpl noarch.rpm gpfs.msg.en_US noarch.rpm
Build your GPFS portability module:
  vi /usr/lpp/mmfs/src/config/site.mcr
  export SHARKCLONEROOT=/usr/lpp/mmfs/src
  cd /usr/lpp/mmfs/src/
  make World
To install the Linux portability interface for GPFS:
  make InstallImages
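For reference, a possible end-to-end installation on one node is sketched below; the version string 2.2.1-0 in the file names is a placeholder, not the exact package level used in these tests.

  # extract the GPFS packages and accept the license agreement
  ./gpfs_install-2.2.1-0_i386 --silent
  # install the GPFS RPMs
  rpm -ivh gpfs.base-2.2.1-0.i386.rpm gpfs.docs-2.2.1-0.noarch.rpm \
           gpfs.gpl-2.2.1-0.noarch.rpm gpfs.msg.en_US-2.2.1-0.noarch.rpm
  # build and install the portability layer against the running kernel
  export SHARKCLONEROOT=/usr/lpp/mmfs/src
  cd /usr/lpp/mmfs/src
  vi config/site.mcr        # set the distribution and kernel level here
  make World
  make InstallImages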

GPFS: Configuration
CREATING the CLUSTER:
  mmcrcluster -t lc -n allnodes.txt -p primary_server -s secondary_server -r /usr/bin/ssh -R /usr/bin/scp
  mmlscluster
CREATING the NODESET ON THE ORIGINATOR NODE:
  mmconfig -n allnodes.txt -A -C cluster_name
  mmlsconfig -C cluster_name
START the GPFS SERVICES ON EACH NODE:
  mmstartup -C cluster_name (or mmstartup -a)
Verification:
  less /var/adm/ras/mmfs.log.latest
(A worked example with hypothetical host names is sketched below.)
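As a worked example, with hypothetical names (gpfs-srv1 and gpfs-srv2 as primary and secondary configuration servers, nodeset gpfs_cluster):

  # create the GPFS cluster, using ssh/scp as remote shell and copy commands
  mmcrcluster -t lc -n allnodes.txt -p gpfs-srv1 -s gpfs-srv2 -r /usr/bin/ssh -R /usr/bin/scp
  mmlscluster
  # create the nodeset on the originator node (-A starts GPFS automatically at boot)
  mmconfig -n allnodes.txt -A -C gpfs_cluster
  mmlsconfig -C gpfs_cluster
  # start the GPFS daemons and check the log
  mmstartup -C gpfs_cluster
  less /var/adm/ras/mmfs.log.latest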

“Direct attached” configuration (Tested on Grid!)
- GPFS software installed on each node.
- Dedicated GPFS network, with at least a 10/100 Ethernet switch connecting all nodes and the storage.
- Each logical disk becomes a logical volume, from which the GPFS filesystem is created.
[Diagram: nodes connected through the GPFS switch to the storage]

GPFS: Configuration
CREATE NSDs (Network Shared Disks):
  mmcrnsd -F Descfile -v yes
CREATING A FILE SYSTEM:
  mkdir /gpfs
  mmcrfs /gpfs gpfs0 -F Descfile -C cluster_name -A yes
MOUNT THE FILE SYSTEM:
  mount /gpfs
VERIFICATION:
  mmlscluster
  mmlsconfig -C cluster_name
(A sketch of the disk descriptor file is given below.)
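A sketch of what the disk descriptor file (Descfile) could contain; the DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup layout is the format documented for GPFS 2.x disk descriptors, the device names are hypothetical, and for directly attached disks the server fields may be left empty.

  # Descfile: one descriptor per disk (direct attached, no NSD server)
  #   /dev/sdb:::dataAndMetadata:1
  #   /dev/sdc:::dataAndMetadata:2
  # turn the disks into NSDs (mmcrnsd rewrites Descfile with the generated NSD names)
  mmcrnsd -F Descfile -v yes
  # create the GPFS file system and mount it on /gpfs
  mkdir /gpfs
  mmcrfs /gpfs gpfs0 -F Descfile -C gpfs_cluster -A yes
  mount /gpfs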

Distributions and kernel levels tested on Grid: GPFS 2.2 for Linux on xSeries

GPFS Version | Linux Distribution | Kernel Version | Role
2.2 | Red Hat | smp | Disk Server
2.2 | Red Hat | smp | Disk Server
2.2 | Red Hat | smp | Client
2.2 | Red Hat | legacy.smp | Client

Test A – 1 WN: configuration
- 2 servers (RH); WN (RH smp)
- 2 storage disks of 480 GB = 960 GB
- 2 storage disks of 720 GB = 1.44 TB
- Total: 4 disks / 2 servers = 2.4 TB
- /GPFS-DATA: 2.2 TB GPFS file system on the servers, mounted by GPFS on the WN as /gpfs-data
- /NFS-DATA: exported by the NFS server, mounted via NFS on the WN
- /local: 1 GB local ext3 file system

TEST A: READING… (average of 5 samples) [chart: read time in seconds vs. file size in MB]

TEST A: WRITING… (average of 5 samples) [chart: write time in seconds vs. file size in MB]

Test B: configuration
- 1 server (RH smp); 1 server (RH smp); WN (RH legacy.smp)
- 2 storage disks of 720 GB = 1.44 TB
- 2 storage disks of 1000 GB = 2 TB
- Total: 4 disks / 2 servers = 3.4 TB
- /GPFS-DATA: 3.4 TB GPFS file system on the servers, mounted by GPFS on the WN as /gpfs-data (2.2 TB)
- /NFS-DATA: exported by the NFS server, mounted via NFS on the WN
- /local: 1 GB local ext3 file system

TEST B – 1 WN: READING… (average of 5 samples) [chart: read time in seconds vs. file size in MB]

TEST B – 2 WN: READING… (average of 5 samples) [chart: read time in seconds vs. file size in MB]

TEST B – 3 WN: READING… (average of 5 samples) [chart: read time in seconds vs. file size in MB]

TEST B – 1 WN: WRITING… (average of 5 samples) [chart: write time in seconds vs. file size in MB]

TEST B – 2 WN: WRITING… (average of 5 samples) [chart: write time in seconds vs. file size in MB]

TEST B – 3 WN: WRITING… (average of 5 samples) [chart: write time in seconds vs. file size in MB]

Analysis of results
- Reading from GPFS takes more or less the same time as reading from NFS.
- Writing on GPFS is faster than on NFS, and the advantage increases with the number of WNs.

Conclusions and outlook
- Preliminary I/O performance tests in the “NFS” configuration show worse behaviour w.r.t. native NFS (about 4:1); the “direct attached” configuration is strongly suggested to improve performance.
- Network bandwidth of the individual servers is VERY important (GPFS settles down to the speed of the “slowest” node).
- The proper configuration, with GPFS installed both on the WNs and on the servers, has been tested.
- Short term (next weeks): reliability tests.
- Medium term (by the end of the year): use GPFS to manage all the disk storage at Catania.