Slide 1: RAL Status and Plans
Carmine Cioffi, Database Administrator and Developer
3D Workshop, CERN, 26-27 November 2009
Slide 2: Outline
3D:
– Database configuration and HW spec
– Storage configuration and HW spec
– Future plans
CASTOR:
– Database configuration and HW spec
– Storage configuration and HW spec
– Schema sizes and versions
– Future plans
Backup configuration
Slide 3: 3D: Database Configuration and HW Spec
3-node RAC for ATLAS (Ogma):
– Red Hat 4.8, 64-bit
– 2 quad-core Intel Xeon E5410 @ 2.33 GHz
– 16 GB RAM
2-node RAC for LHCb (Lugh):
– Red Hat 4.8, 64-bit
– 2 dual-core AMD Opteron 2216
– 16 GB RAM
Slide 4: 3D: Database Configuration and HW Spec
For both RACs (Ogma and Lugh):
– Oracle 10.2.0.4
– Single OCR
– Single voting disk (a verification sketch follows)
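A quick way to confirm the single-OCR, single-voting-disk layout above is with the ocrcheck and crsctl utilities that ship with Oracle 10.2 clusterware; a minimal sketch, with output depending on the local installation:

    # Check OCR integrity and report its configured location(s)
    ocrcheck

    # List the configured voting disk(s)
    crsctl query css votedisk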
Slide 5: 3D: Storage Configuration and HW Spec
Single disk array shared by both databases (Ogma, Lugh):
– Storage (SAN, 2 Gb/s FC): Ogma ~1/2 TB, Lugh ~100 GB
– Single switch: SANBOX 5200, 2 Gb/s
– 16 SATA disks, 260 GB each
– Configured as RAID 10
Slide 6: 3D: Storage Configuration and HW Spec
ASM:
– Ogma (ATLAS): normal redundancy, single disk group, two failure groups, one disk (512 GB) per failure group
– Lugh (LHCb): normal redundancy, single disk group, two failure groups, one disk (512 GB) per failure group
(A disk-group creation sketch follows.)
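For reference, an ASM disk group with normal redundancy and two failure groups, as on Ogma and Lugh, can be created with a statement along these lines; this is a minimal sketch, and the disk group name and device paths are hypothetical:

    -- Run in the ASM instance (e.g. via SQL*Plus as SYSDBA in 10.2).
    -- Normal redundancy mirrors each extent across failure groups,
    -- so the loss of one failure group leaves a full copy of the data.
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP fg1 DISK '/dev/raw/raw1'    -- hypothetical device path
      FAILGROUP fg2 DISK '/dev/raw/raw2';   -- hypothetical device path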
Slide 7: 3D: Database Diagram
[Diagram: the Ogma and Lugh nodes connect through an FC switch (SANBOX 5200, 2 Gb/s) to the SAN; Ogma uses ~1/2 TB of 1 TB, Lugh ~100 GB of 1/2 TB.]
Slide 8: 3D Future Plans: DB Configuration and HW Spec
There will be no changes to:
– Number of nodes per RAC
– Hardware specs
– Oracle version
Deploy on both RACs (Ogma and Lugh):
– Two OCRs
– Three voting disks (a sketch of the commands follows)
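In Oracle 10.2, mirroring the OCR and adding voting disks is done with ocrconfig and crsctl rather than SQL. A minimal sketch, with hypothetical paths, assuming the voting disks are added while the clusterware stack is down (which 10.2 requires, hence -force):

    # Add a second (mirror) OCR location -- path is hypothetical
    ocrconfig -replace ocrmirror /u02/oradata/ocr_mirror

    # Add two more voting disks for a total of three; in 10.2 this
    # must be run with CRS stopped, using -force -- paths hypothetical
    crsctl add css votedisk /u02/oradata/votedisk2 -force
    crsctl add css votedisk /u03/oradata/votedisk3 -force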
Slide 9: 3D Future Plans: Storage Configuration and HW Spec
Two disk arrays shared by both databases (Ogma, Lugh):
– Storage: SAN, 4 Gb/s FC
– Physical disks available:
  Array 1: 16 SATA disks, 260 GB each
  Array 2: 6 SATA disks, 550 GB each
– Both arrays in RAID 5 configuration
Two switches:
– SANBOX 5200, 2 Gb/s
– SANBOX 5602, 4 Gb/s
Slide 10: 3D Future Plans: Storage Configuration and Spec
ASM:
– Ogma (ATLAS): normal redundancy, single disk group, two failure groups, two or more disks per failure group
– Lugh (LHCb): normal redundancy, single disk group, two failure groups, one or more disks per failure group
(A sketch for growing the failure groups follows.)
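Moving from one disk to several disks per failure group can be done online with ALTER DISKGROUP, after which ASM rebalances automatically; a minimal sketch against the hypothetical disk group from the earlier example, with hypothetical device paths:

    -- Add one new disk to each failure group of the "data" disk group
    ALTER DISKGROUP data
      ADD FAILGROUP fg1 DISK '/dev/raw/raw3'
          FAILGROUP fg2 DISK '/dev/raw/raw4'
      REBALANCE POWER 4;   -- optional: raise rebalance speed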
Slide 11: 3D: Database Diagram
[Diagram: the Ogma and Lugh nodes connect through FC switch 1 (SANBOX 5200, 2 Gb/s) to disk array 1 and through FC switch 2 (SANBOX 5602, 4 Gb/s) to disk array 2; the two arrays hold mirrored copies of the Ogma and Lugh data via ASM mirroring.]
Slide 12: Castor: Database Configuration and HW Spec
Two 5-node RACs (Pluto, Neptune) plus one single instance (Uranus):
– Red Hat 4.8, 32-bit
– 2 quad-core Intel Xeon @ 3 GHz
– 4 GB RAM
Oracle 10.2.0.4
Single OCR
Single voting disk
Slide 13: Castor: Storage Configuration and HW Spec
Single disk array used by the two RACs. Storage:
– Pluto: ~200 GB
– Neptune: ~220 GB
– Single instance (Uranus): 624 GB
Overland 1200 disk array:
– Twin controllers
– Twin Fibre Channel ports to each controller
– 10 SAS disks (300 GB each, 3 TB gross)
– RAID 1 (1.5 TB net)
Two Brocade 200E 4 Gb/s switches
Slide 14: Castor: Storage Configuration and HW Spec
ASM (Pluto, Neptune):
– Normal redundancy
– Single disk group
– Two failure groups
– One disk (512 GB) per failure group
Slide 15: Database Overview
[Diagram: the Neptune and Pluto RACs connect through Brocade 200E switches to the Overland 1200 array (Pluto uses ~200 GB of 1/2 TB, Neptune ~220 GB of 1/2 TB); Uranus uses a SCSI-attached disk array (624 GB of 1.8 TB).]
Slide 16: Castor Future Plans: DB Configuration and HW Spec
There will be no changes to the number of nodes per RAC, the hardware, or the Oracle version.
Deploy on both RACs (Pluto and Neptune):
– Two OCRs
– Three voting disks
Slide 17: Castor Future Plans: Storage Configuration and HW Spec
Two disk arrays shared by both databases (Neptune, Pluto):
– Storage: EMC CLARiiON
– Physical disks available: SAS 300 GB drives, 2 TB gross
– RAID 5 configuration
Two Brocade 200E 4 Gb/s switches
Slide 18: Castor Future Plans: Storage Configuration and Spec
ASM (Pluto, Neptune):
– Normal redundancy
– Single disk group
– Two failure groups
– One or more disks per failure group
Slide 19: Castor: Schema Sizes and Versions
Pluto:
  Schema        Version       Size
  Name Server   n/a           1.8 GB
  VMGR          n/a           1.7 MB
  CUPV          n/a           0.2 MB
  CMS Stager    2_1_7_27_1    1.9 GB
  Gen Stager    2_1_7_27_1    3.8 GB
  Repack_219    2_1_9_1       17 MB
  Repack        2_1_7_27      62 MB
  Gen SRM       2_8_2         540 MB
  SRM CMS       2_8_2         1.1 GB
  VDQM2         2_1_8_3_1     5 MB

Neptune:
  Schema        Version       Size
  Atlas Stager  2_1_7_27_1    18 GB
  LHCb Stager   2_1_7_27_1    1.8 GB
  SRM Atlas     2_8_2         5.1 GB
  SRM LHCb      2_8_2         1.2 GB
Slide 20: Backup Configuration
– Incremental level 0 once a week
– Incremental level 1 the other days of the week
– All backups are followed by logical validation
– Archived log backups are taken during the day (for now)
– Once we move to the new hardware, the archived logs will be multiplexed on a shared disk outside ASM (a sketch follows)
– Backups are stored on local disk
– Backups are copied from local disk to tape and kept for three months
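Multiplexing the archived logs to a shared disk outside ASM would typically be done by adding a second archive destination; a minimal sketch, assuming an spfile and a hypothetical destination path:

    -- Run as SYSDBA; keeps the existing destination and writes a
    -- second copy of every archived log to a shared filesystem
    -- outside ASM. The LOCATION path is hypothetical.
    ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/shared_arch/pluto'
      SCOPE=BOTH SID='*';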
Slide 21: Backup Configuration
RMAN configuration parameters are (a note on displaying them follows):
– CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 8 DAYS;
– CONFIGURE BACKUP OPTIMIZATION ON;
– CONFIGURE DEFAULT DEVICE TYPE TO DISK;
– CONFIGURE CONTROLFILE AUTOBACKUP ON;
– CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oracle_backup/pluto/%F.bak';
– CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
– CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
– CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
– CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle_backup/pluto/pluto_%U.bak';
– CONFIGURE MAXSETSIZE TO UNLIMITED;
– CONFIGURE ENCRYPTION FOR DATABASE OFF;
– CONFIGURE ENCRYPTION ALGORITHM 'AES128';
– CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
– CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/opt/oracle/app/oracle/product/10/db_1/dbs/snapcf_pluto1.f'; # default
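For reference, a parameter listing like the one above is what RMAN prints when connected to the target database and asked for its configuration:

    RMAN> SHOW ALL;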
Slide 22: Backup Configuration
Incremental 0:
– backup incremental level 0 duration 12:00 database;
– backup archivelog all delete all input;
– report obsolete;
– delete noprompt obsolete;
Incremental 1:
– backup incremental level 1 duration 12:00 minimize time database;
– backup archivelog all delete all input;
Validation:
– restore validate check logical database archivelog all;
(A scheduling sketch follows.)
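Command sequences like these would typically live in script files driven by a scheduler; a minimal sketch of running one non-interactively, where the script and log file names are hypothetical:

    # Run the weekly level 0 script against the local target database;
    # cmdfile and log names are hypothetical
    rman target / cmdfile=inc0_weekly.rman log=inc0_weekly.log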
Slide 23: ANY QUESTIONS?