NL Service Challenge Plans
Kors Bos, Sander Klous (NIKHEF); Pieter de Boer, Mark van de Sanden, Ron Trompert, Jules Wolfrat (SARA); Peter Michielse (NCF)
Service Challenge Meeting, RAL, January 27, 2005
Planning
- January: setting up
- February: disk-disk tests with test cluster
- March: sustained 100 MB/s disk-disk
- April: disk-disk tests with new hardware
- May: first disk-tape tests
- May 30 – June: 10 Gbit/s tests
- July: sustained 50 MB/s disk-tape
- August: vacation
December 2004
January: setting up UvA cluster
- Replace TERAS P1 by a cluster of 10 CPU nodes:
  - 64-bit dual Opteron at 2 GHz and 4 GB RAM
  - 2 TB RAID disk per node
  - 1 Gb/s UTP managed NIC
  - 1 Gb/s fiber NIC
  - 10 Gb/s fiber NIC on 6 of the nodes
- Setting up: accounts, software, network connections
January: network
We plan to use “only” 1 GE between CERN and SARA, except for a test in June.
[Network diagram: at CERN, 10 dual-CPU Itanium nodes (1.5 GHz, 2-4 GB RAM) on 1G links behind ENTERASYS N7 10 GE switches and the CERNH7 router; a 10 Gb/s transparent lambda over SURFnet ONS 15454 / HDXc to SARA; there a Force10 switch (10 GE WAN PHY) connects 10 dual-CPU Opteron nodes (2 GHz, 4 GB RAM) on 1G links.]
February: tests with UvA cluster
- Disk-to-disk performance tests: 7-11/2
  - Using “only” the 1 GE network connection
  - Need the Radiant cluster
  - Goal transfer rate >800 Mbit/s (rough arithmetic in the sketch below)
- Need better communication: MSN, VRVS, other tool?
- Send Sander Klous to CERN
- Decide on “new hardware” and order: 18/2
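As a rough sanity check on the goal (my own arithmetic, not from the slide), this small Python sketch converts the 800 Mbit/s target to Mbyte/s and estimates how long it would take to fill one 2 TB cluster node at that rate; the 1000 Mbit/s figure is simply the nominal GE link capacity.

```python
# Rough back-of-the-envelope check for the February disk-to-disk goal.
# Assumptions (mine, not from the slides): 1 GE ~ 1000 Mbit/s raw, and the
# test counts as a success at >800 Mbit/s sustained.

GOAL_MBIT_S = 800          # target sustained rate on the 1 GE link
LINK_MBIT_S = 1000         # nominal Gigabit Ethernet capacity

def mbit_to_mbyte(mbit_per_s: float) -> float:
    """Convert a rate in Mbit/s to Mbyte/s (1 byte = 8 bits)."""
    return mbit_per_s / 8.0

def hours_to_move(terabytes: float, rate_mbit_s: float) -> float:
    """Hours needed to move `terabytes` of data at `rate_mbit_s`."""
    mbytes = terabytes * 1e6            # 1 TB = 1e6 MB (decimal units)
    return mbytes / mbit_to_mbyte(rate_mbit_s) / 3600.0

if __name__ == "__main__":
    print(f"Goal corresponds to {mbit_to_mbyte(GOAL_MBIT_S):.0f} Mbyte/s "
          f"({GOAL_MBIT_S / LINK_MBIT_S:.0%} of the 1 GE link)")
    # Example: filling one 2 TB RAID node of the UvA cluster at the goal rate
    print(f"Filling one 2 TB node takes ~{hours_to_move(2, GOAL_MBIT_S):.1f} h")
```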
March: Service Challenge
- Disk-to-disk performance tests: 14-25/3
  - Sustained file transfer tests
  - Goal transfer rate 100 Mbyte/s (data-volume estimate below)
  - 5 T1’s: Lyon, Fermi, A’dam, FZK, RAL
- Install new hardware: 28/3-1/4
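For scale (my own arithmetic, not from the slide): the sketch below estimates the data volume a sustained 100 Mbyte/s transfer accumulates over the 14-25/3 window, assuming the rate is held around the clock for roughly 12 days.

```python
# Data volume implied by a sustained 100 Mbyte/s disk-to-disk transfer.
# Assumption (mine): the rate is held 24h/day for the whole 14-25/3 window,
# i.e. roughly 12 days.

RATE_MB_S = 100            # sustained transfer rate in Mbyte/s
DAYS = 12                  # approximate length of the 14-25/3 window

per_day_tb = RATE_MB_S * 86400 / 1e6      # MB/day -> TB/day (decimal units)
total_tb = per_day_tb * DAYS

print(f"~{per_day_tb:.1f} TB per day, ~{total_tb:.0f} TB over {DAYS} days")
# -> roughly 8.6 TB/day and ~104 TB for the full window
```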
April: new hardware
New hardware:
- 4 dual-CPU servers
  - 32-bit dual Intel at 2.8 GHz and 4 GB RAM
  - 200 GB RAID1 disk per node
  - 2 x 1 Gb/s Ethernet (1 for data, 1 for management)
  - 1 x 2 Gb/s Fiber Channel interface
- 1 GigE switch
- 2 9940B tape drives
April: new hardware set-up
[Set-up diagram: the new servers behind the Force10 switch on Gb/s links, an internal Gigabit network, and a Fiber Channel SAN connecting to the HSM and the shared file system.]
April: tests
- New set-up tests: 11-15/4
  - Data from CERN into the HSM
  - Goal transfer rate 100 Mbyte/s
  - Different architectures to be tried
  - As little data copying as possible
  - Integrated storage environment
- Try new software (illustrated in the sketch below):
  - SRM on DMF (preferred)
  - SRM plus dCache on DMF
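Purely to illustrate the intended data flow with an SRM interface in front of DMF (this is not the agreed tooling; the srmcp client, host names and paths below are assumptions for illustration only):

```python
"""Minimal sketch of pulling a file from CERN into the SARA HSM via SRM.

Assumptions (not from the slides): the dCache `srmcp` command-line client is
available, CERN exposes an SRM endpoint, and SARA's SRM front-end maps onto
the DMF-managed file system.  Host names and paths are hypothetical.
"""
import subprocess

# Hypothetical endpoints -- replace with the real SRM URLs of the challenge.
SOURCE = "srm://castorsrm.cern.ch:8443/castor/cern.ch/sc/testfile-0001"
DEST   = "srm://srm.sara.nl:8443/sc/data/testfile-0001"

def srm_copy(src: str, dst: str) -> None:
    """Run a single SRM copy; srmcp negotiates the actual transfer protocol."""
    subprocess.run(["srmcp", src, dst], check=True)

if __name__ == "__main__":
    srm_copy(SOURCE, DEST)
```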
May: tests with tape
- First tape tests: 9-14/5
  - Data from CERN onto tape at SARA
  - 2 new 9940B tape drives
  - Expected rate 50 Mbyte/s (feasibility check below)
- Software: SRM
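A quick feasibility check (my own numbers, not from the slide): the 9940B is commonly quoted at roughly 30 Mbyte/s native per drive, which is an assumption here; the sketch shows how much of the raw two-drive bandwidth the 50 Mbyte/s target would consume.

```python
# Feasibility of 50 Mbyte/s to tape with two 9940B drives.
# Assumption (mine): ~30 Mbyte/s native streaming rate per 9940B drive.

DRIVES = 2
NATIVE_MB_S = 30           # assumed native rate per drive
TARGET_MB_S = 50           # expected rate from the slide

ceiling = DRIVES * NATIVE_MB_S
utilisation = TARGET_MB_S / ceiling
print(f"Raw tape bandwidth: {ceiling} Mbyte/s; the target uses "
      f"{utilisation:.0%} of it, so both drives must stream almost "
      f"continuously, leaving little room for mounts and repositioning.")
```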
June: 10 Gbit/s tests
- Set up network: 30/5 – 3/6
  - Use the full SURFnet lambda, no other users
  - 10 GE hardware at CERN needed
- 10 Gb/s tests: 6/6 – 10/6
  - Disk-to-HSM (not to tape)
  - Expected rate >500 Mbyte/s (see the arithmetic below)
- After 10/6 we switch back to 1 Gb/s for the July challenge
- Vancouver 10 GE tests: 13/6 – 24/6, no others
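To put the >500 Mbyte/s target in context (my own arithmetic, not from the slide), the sketch below converts it to Gbit/s, relates it to the 10 Gb/s lambda, and assumes each disk server sustains about the February goal of 800 Mbit/s.

```python
# Why the June test needs the full 10 Gb/s lambda and several servers.
# Assumption (mine): each disk server sustains roughly the February target
# of 800 Mbit/s towards the network.

import math

TARGET_MB_S = 500                 # expected disk-to-HSM rate
LAMBDA_GBIT_S = 10                # SURFnet transparent lambda
PER_SERVER_MBIT_S = 800           # assumed sustainable rate per server

target_gbit_s = TARGET_MB_S * 8 / 1000
servers_needed = math.ceil(target_gbit_s * 1000 / PER_SERVER_MBIT_S)

print(f"{TARGET_MB_S} Mbyte/s = {target_gbit_s:.0f} Gbit/s "
      f"({target_gbit_s / LAMBDA_GBIT_S:.0%} of the lambda); needs at least "
      f"{servers_needed} servers at {PER_SERVER_MBIT_S} Mbit/s each")
```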
July: Service Challenge
- Preparations: 27/6 – 1/7
  - Network: back to 1 Gb/s operation
  - Hardware: servers/disks/tapes
  - Manpower: operators/manpower/vacation
  - Operation mode: monitoring/communication
- Service Challenge: 4-29/7
  - 50 Mbyte/s from CERN to SARA tape (100 tapes; see the tape-budget check below)
  - Using SRM
  - Together with: Lyon, Fermi, A’dam, FZK, RAL
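A rough consistency check on the tape budget (my own arithmetic; the 200 GB native capacity per 9940B cartridge is an assumption, not from the slide):

```python
# How the "100 tapes" figure relates to 50 Mbyte/s sustained to tape.
# Assumption (mine): ~200 GB native capacity per 9940B cartridge.

TAPES = 100
GB_PER_TAPE = 200          # assumed native cartridge capacity
RATE_MB_S = 50             # challenge target rate

total_tb = TAPES * GB_PER_TAPE / 1000
days_of_streaming = total_tb * 1e6 / RATE_MB_S / 86400

print(f"{TAPES} cartridges hold ~{total_tb:.0f} TB; at {RATE_MB_S} Mbyte/s "
      f"that is ~{days_of_streaming:.1f} days of continuous writing "
      f"within the 4-29/7 window.")
```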
August: vacation
Back-up slides (not to be shown)
High-level set-up at SARA
[Diagram: IRIX and Linux64 nodes (P1, A7, P4, Jumbo) on the SAN, with SURFnet links to NL and to CERN; Amsterdam side: 6x 16-port and 3x 32-port FC switches; extension to Almere: disk storage and mass storage, including SL and tape drives (4x 9840C, 3x 9940B, STK 9310), 14 TB RAID5; also an IA32 cluster and other nodes (AIX, Linux32).]
SARA SAN
Storage details
- P1 is an SGI Origin3800:
  - 32p (MIPS), 32 GB; part of the 1024p TERAS
  - IRIX (SGI Unix)
  - CXFS since 2001 (SGI’s shared file system)
  - Interactive node for users to test and submit their jobs
  - Has been used so far as the Grid Storage Element
- Jumbo is an SGI Origin350:
  - 4p (MIPS), 4 GB
  - CXFS MetaDataServer
  - DMF/TMF (SGI’s hierarchical storage manager)
DMF
DMF/TMF is SGI’s hierarchical storage manager.
[Diagram: the file system holds regular files, dual-state files and free space (0-100%); when free space falls below the minimum, threshold-driven migration events move data to tape until the free-space target is reached.]
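To make the threshold mechanism concrete, here is a small, purely illustrative Python model (thresholds, file sizes and the migration order are made up; real DMF policies are configured in DMF itself): when free space drops below the minimum, files are migrated to tape and become dual-state until free space reaches the target.

```python
"""Toy model of DMF-style threshold-driven migration (illustration only).

Assumptions (mine): a single file system with a "free space minimum" that
triggers migration and a "free space target" that stops it; migrating a file
copies it to tape and turns it into a dual-state file whose disk blocks can
then be released.  Real DMF policies are far richer than this.
"""
from dataclasses import dataclass

@dataclass
class File:
    name: str
    size_gb: float
    dual_state: bool = False   # True once a tape copy exists

class Filesystem:
    def __init__(self, capacity_gb, free_min_pct, free_target_pct):
        self.capacity = capacity_gb
        self.free_min = free_min_pct / 100 * capacity_gb
        self.free_target = free_target_pct / 100 * capac_gb if False else free_target_pct / 100 * capacity_gb
        self.files: list[File] = []

    @property
    def free_gb(self):
        # Dual-state files no longer occupy disk blocks in this toy model.
        return self.capacity - sum(f.size_gb for f in self.files if not f.dual_state)

    def write(self, f: File):
        self.files.append(f)
        if self.free_gb < self.free_min:          # threshold event
            self.migrate()

    def migrate(self):
        # Migrate the oldest regular files until the free-space target is met.
        for f in self.files:
            if self.free_gb >= self.free_target:
                break
            if not f.dual_state:
                f.dual_state = True               # tape copy made, blocks freed
                print(f"migrated {f.name} ({f.size_gb} GB) to tape")

fs = Filesystem(capacity_gb=100, free_min_pct=10, free_target_pct=30)
for i in range(12):
    fs.write(File(f"run{i:03d}.dat", size_gb=10))
print(f"free space now {fs.free_gb:.0f} GB")
```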
December 2004 situation for Service Challenge
[Diagram: arriving in Amsterdam, T1 SURFnet with 4x 1 GE from CERN; P1 (IRIX) with 14 TB RAID5; 4x 9840 and 3x 9940 drives in the STK 9310; 6x 16-port and 3x 32-port FC switches; 4 Gbit/s available for the test.]
This was the situation so far (disk-to-disk):
- P1 (the Grid SE) is part of a general-purpose production facility, so difficult to experiment with
- Uses a proprietary OS (SGI IRIX), while the tools are based on RH 7.3
- Limited by disk storage
- Relatively old Gbit cards in P1
- Planning not optimal
- DMF HSM seems to be an advantage
Service Challenge tests: alternative 1
Pro:
- Simple hardware
- Some scalability
Con:
- No separation of incoming (T0) and outgoing (T2) data
- No integrated storage environment
- Tape drives fixed to host/file system
Service Challenge tests: alternative 2
Pro:
- Simple hardware
- More scalability
- Separation of incoming (T0) and outgoing (T2) data
Con:
- More complex setup, so more expensive
- No integrated storage environment
- Tape drives fixed to host/file system
Service Challenge tests: alternative 3
Pro:
- Simple hardware
- More scalability
- Separation of incoming (T0) and outgoing (T2) data
- HSM involved, tape drives not fixed to host/file system
- Integrated storage environment
Con:
- More complex setup, so more expensive
- Possible dependence on HSM