1
Future Plans at RAL Tier 1
Shaun de Witt
2
Introduction
Current Set-Up
Short term plans
Final Configuration
How we get there…
How we plan/hope/pray to use CEPH
3
Current Infrastructure
[Diagram: shared common services (nsd, Cupv, vmgr, vdqm) in front of the per-experiment instances (CMS, ATLAS, Gen, LHCb); each instance has a disk layer (tape and disk ‘pools’) and a tape layer with at least one dedicated drive.]
4
ATLAS Instance Exploded
[Diagram: name servers nsd01 and nsd02; SRM01–SRM04; HeadNode01 (RH, Stager, TapeGateway), HeadNode02 (TransferMgr), HeadNode03 (TransferMgr, NSD, Xroot Mgr); service classes atlasTape, atlasDataDisk and atlasScratchDisk; Xroot proxy; multiplicities x12 and x1 as shown on the original diagram.]
5
Database Configuration
[Diagram: primary and standby Oracle databases kept in sync with DataGuard; each side hosts the Repack/Nameserver/Cupv schemas plus the SRM and stager (STGR) schemas for CMS, LHCb, ATLAS and Gen.]
6
Short Term Plans…
Improve tape cache performance for ATLAS (see the sketch below)
–Tape rates limited by disk
–Currently heavy IO (read/write from grid/tape)
–Currently configured with 10(7) ‘small’ servers in RAID6
–Would RAID-1(0) help?
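One way to frame the RAID6 vs RAID-1(0) question is the usual trade-off between usable capacity and the small-write penalty. The Python sketch below is purely illustrative: the disk count, disk size and per-spindle IOPS are assumptions, not measurements of the ATLAS tape cache servers.

    # Illustrative RAID6 vs RAID10 comparison for one tape cache server.
    # ASSUMPTIONS: 10 spindles, 4 TB disks, 150 random-write IOPS per spindle.
    DISKS = 10
    DISK_TB = 4.0
    IOPS_PER_DISK = 150

    def raid6(disks, disk_tb, iops):
        usable = (disks - 2) * disk_tb        # two disks' worth of parity (P + Q)
        write_penalty = 6                     # read-modify-write of data, P and Q
        return usable, disks * iops / write_penalty

    def raid10(disks, disk_tb, iops):
        usable = disks / 2 * disk_tb          # mirrored pairs halve the capacity
        write_penalty = 2                     # every write lands on both mirrors
        return usable, disks * iops / write_penalty

    for name, layout in (("RAID6", raid6), ("RAID10", raid10)):
        capacity, write_iops = layout(DISKS, DISK_TB, IOPS_PER_DISK)
        print(f"{name}: ~{capacity:.0f} TB usable, ~{write_iops:.0f} random-write IOPS")

On these assumed numbers RAID-1(0) roughly triples random-write throughput at the cost of about a third of the usable space, which is the shape of the trade-off behind the question above.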
7
The Future
[Diagram: Swift/S3 and XROOT/gridFTP interfaces shown alongside CASTOR.]
8
What is Erasure-coded CEPH?
High-throughput object store
EC uses 16+3
ALL user data planned to use erasure coding (no replication)
S3/SWIFT are the recommended interfaces (see the sketch below)
–Xroot and gridFTP for legacy support
… more later
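As a concrete illustration of the S3 route, here is a minimal sketch using boto3 to write and read one object through an S3-compatible gateway (such as a Ceph RGW). The endpoint URL, bucket name and credentials are placeholders, not the real echo configuration.

    # Minimal S3 put/get against an S3-compatible gateway (e.g. Ceph RGW).
    # ASSUMPTIONS: endpoint, bucket and credentials below are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3-gateway.example.ac.uk",   # hypothetical gateway
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.put_object(Bucket="atlas-scratch", Key="test/hello.dat", Body=b"hello echo")
    obj = s3.get_object(Bucket="atlas-scratch", Key="test/hello.dat")
    print(obj["Body"].read())   # b'hello echo'

With the 16+3 erasure-code profile mentioned above, each object is stored as 16 data plus 3 coding chunks, a raw-space overhead of 19/16 ≈ 1.19x compared with 3x for three-way replication.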
9
The Plan…
Move
–The data
Modify
Merge
10
The ‘Plan’ – Phase 1
Current disk purchases are usable for both CEPH and classic CASTOR
Start moving atlasScratchDisk over to echo
–Lifetime of files should be ~2 weeks
–Allows us to monitor production use of echo
Work with VOs to migrate relevant data using FTS (see the sketch below)
Maintain classic CASTOR
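For the FTS-driven migration, transfers can be submitted programmatically through the FTS3 REST ‘easy’ Python bindings; a minimal sketch follows, in which the FTS endpoint and the source/destination URLs are made-up placeholders rather than the actual RAL service names.

    # Sketch: submit one CASTOR -> echo transfer via the FTS3 easy bindings.
    # ASSUMPTIONS: the endpoint and URLs below are placeholders.
    import fts3.rest.client.easy as fts3

    context = fts3.Context("https://fts3.example.ac.uk:8446")   # hypothetical FTS server

    transfer = fts3.new_transfer(
        "srm://castor.example.ac.uk/castor/atlas/scratch/file1",  # source (classic CASTOR)
        "gsiftp://gateway.example.ac.uk/atlas-scratch/file1",     # destination (echo gateway)
    )
    job = fts3.new_job([transfer], verify_checksum=True, retry=3)
    job_id = fts3.submit(context, job)
    print("Submitted FTS job", job_id)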
11
The Plan – Phase 2
Once all disk-only data is on echo(?)
–Consolidate to a single CASTOR instance
–With a single shared disk pool
–Tape drive dedication… (Tim knows)
–Clear all disk copies
Single 2/3-node RAC + standby
–Common headnodes supporting all services
–Maintain 3-4 SRMs
Will probably be phased in
12
Accessing ECHO
VOs use gridFTP and xroot ATM
–Write using 1 protocol, read using either
But not S3/SWIFT
–Proposed gridFTP URL (writing?)
gsiftp://gateway.domain.name/ /
Steers transfer to pool
Certificate (VO and Role) for AA
–Xroot URL, as suggested by Seb… (see the sketch below)
–But what about access?
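To make the two access paths concrete, here is a sketch using the gfal2 Python bindings to write a file through a gridFTP gateway and read it back over xroot. The host names, pool path and local files are hypothetical, and the gsiftp URL form on this slide is only a proposal.

    # Sketch: write via gridFTP, read back via xroot, using the gfal2 bindings.
    # ASSUMPTIONS: host names, paths and the pool name are placeholders only.
    import gfal2

    ctx = gfal2.creat_context()
    params = ctx.transfer_parameters()
    params.overwrite = True
    params.checksum_check = True

    # Write through the gridFTP gateway (the X.509 proxy with VO/Role does the AA).
    ctx.filecopy(params,
                 "file:///tmp/local-test.dat",
                 "gsiftp://gateway.example.ac.uk/atlas-scratch/test.dat")

    # Read the same object back through the xroot proxy.
    ctx.filecopy(params,
                 "root://xroot-proxy.example.ac.uk//atlas-scratch/test.dat",
                 "file:///tmp/readback.dat")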
13
The Known Unknowns
S3/SWIFT interoperability?
Will CASTOR/CEPH support EC pools?
Partial writes verboten? (see the sketch below)
Do users need to supply support for CEPH plugins?
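The partial-write question can be probed directly with the librados Python bindings: erasure-coded pools of this era accept whole-object writes but are expected to refuse overwrites at an offset. A minimal sketch, assuming a hypothetical EC pool name and the default ceph.conf location:

    # Sketch: whole-object write vs partial overwrite against an EC pool.
    # ASSUMPTIONS: ceph.conf path and the pool name 'atlas-ec' are placeholders.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("atlas-ec")           # an erasure-coded pool
    try:
        ioctx.write_full("testobj", b"A" * 4096)     # whole-object write: fine on EC pools
        try:
            ioctx.write("testobj", b"B" * 16, 100)   # overwrite at an offset...
        except rados.Error as err:                   # ...expected to be refused by an EC pool
            print("partial overwrite rejected:", err)
    finally:
        ioctx.close()
        cluster.shutdown()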