

1 CERN IT Department CH-1211 Geneva 23 Switzerland www.cern.ch/it Experience with NetApp at CERN IT/DB – Giacomo Tenaglia, on behalf of Eric Grancher and Ruben Gaspar Aparicio

2 CERN IT Department CH-1211 Geneva 23 Switzerland www.cern.ch/it Outline
– NAS-based usage at CERN
– Key features
– Future plans
Experience with NetApp at CERN IT/DB - 2

3 Storage for Oracle at CERN
– 1982: Oracle at CERN; PDP-11, mainframe, VAX VMS, Solaris SPARC 32- and 64-bit
– 1996: Solaris SPARC with OPS, then RAC
– 2000: Linux x86 on a single node, DAS
– 2005: Linux x86_64 / RAC / SAN (experiment and part of WLCG databases on SAN until 2012)
– 2006: Linux x86_64 / RAC / NFS (IBM/NetApp)
– 2012: all production primary Oracle databases (*) on NFS
(*) apart from ALICE and LHCb online
Experience with NetApp at CERN IT/DB - 3

4 Network topology
All 10 Gb/s Ethernet; the same network carries the storage traffic and the cluster interconnect.
[Diagram: filer1–filer4 and serverA–serverE, connected through two private Ethernet switches (Private 1 and Private 2) carrying both CRS and storage traffic, plus a public Ethernet switch for the "public network"; an internal interconnect links each HA pair of filers.]

5 Domains: space/filers
Domain      Total size (TB)   Used for backup (TB)   # of filers
des-nas     47.4              62.6                   10
shosts      204               –                      4
gen         397               –                      4
rac10       59                –                      6
rac11       59                –                      6
castor      154               –                      18
acc         28                –                      8
db disk     –                 1000                   2
TOTAL       901.4             1062.6                 58
Experience with NetApp at CERN IT/DB - 5

6 Typical setup

7 Impact of storage architecture on Oracle stability at CERN Experience with NetApp at CERN IT/DB - 7

8 Key features: Flash Cache, RAID-DP, Snapshots, Compression. Experience with NetApp at CERN IT/DB - 8

9 Flash cache
– Helps increase the random IOPS the disks can deliver – very good for OLTP-like workloads
– Cache contents don't get wiped when database servers reboot (the cache lives in the storage controller)
– For databases, decide which volumes to cache:
fas3240> priority on
fas3240> priority set volume volname cache=[reuse|keep]
– 512 GB modules, 1 per controller
Experience with NetApp at CERN IT/DB - 9
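A minimal usage sketch of the FlexShare commands above, assuming Data ONTAP 7-mode syntax as shown on the slide; the volume name oradata01 is hypothetical:

fas3240> priority on                                # enable FlexShare priority scheduling on the filer
fas3240> priority set volume oradata01 cache=keep   # prefer keeping this volume's blocks in Flash Cache
fas3240> priority show volume oradata01             # display the policy applied to the volume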

10 IOPS and Flash cache Experience with NetApp at CERN IT/DB - 10

11 IOPS and Flash cache Experience with NetApp at CERN IT/DB - 11

12 Key features: Flash Cache, RAID-DP, Snapshots, Compression. Experience with NetApp at CERN IT/DB - 12

13 Disk and redundancy (1/2)
Disks get larger and larger:
– speed stays roughly constant → issue with performance
– bit error rate stays constant (10^-14 to 10^-16) → increasing issue with availability
With x the disk size and α the bit error rate, the probability of data loss can be estimated (see the sketch below).
Experience with NetApp at CERN IT/DB - 13
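The formula itself did not survive the transcript; a minimal reconstruction, assuming the standard unrecoverable-read-error model (it reproduces the figures on the next slide):

\[ P(\text{data loss during a rebuild}) \;\approx\; 1 - (1-\alpha)^{8nx} \;\approx\; 1 - e^{-8 n x \alpha} \]

where x is the disk size in bytes, α the bit error rate, and n the number of disks that must be read in full to rebuild the failed one (n = 1 for RAID 1, n data disks for RAID 5 n+1). For example, a 1 TB desktop SATA disk (α = 10^-14) in a 5+1 RAID 5 group gives

\[ 1 - e^{-8 \cdot 5 \cdot 10^{12} \cdot 10^{-14}} = 1 - e^{-0.4} \approx 0.33, \]

matching the 3.29E-01 entry in the table.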

14 Disks, redundancy comparison (2/2) – data loss probability for different disk types and RAID group sizes (n = 5 / 14 / 28)

1 TB SATA desktop, bit error rate 10^-14:
  RAID 1            7.68E-02
  RAID 5 (n+1)      3.29E-01 / 6.73E-01 / 8.93E-01
  ~RAID 6 (n+2)     1.60E-14 / 1.46E-13 / 6.05E-13
  ~triple mirror    8.00E-16

1 TB SATA enterprise, bit error rate 10^-15:
  RAID 1            7.96E-03
  RAID 5 (n+1)      3.92E-02 / 1.06E-01 / 2.01E-01
  ~RAID 6 (n+2)     1.60E-16 / 1.46E-15 / 6.05E-15
  ~triple mirror    8.00E-18

450 GB FC, bit error rate 10^-16:
  RAID 1            4.00E-04
  RAID 5 (n+1)      2.00E-03 / 5.58E-03 / 1.11E-02
  ~RAID 6 (n+2)     7.20E-19 / 6.55E-18 / 2.72E-17
  ~triple mirror    3.60E-20

10 TB SATA enterprise, bit error rate 10^-15:
  RAID 1            7.68E-02
  RAID 5 (n+1)      3.29E-01 / 6.73E-01 / 8.93E-01
  ~RAID 6 (n+2)     1.60E-15 / 1.46E-14 / 6.05E-14
  ~triple mirror    8E-17

Experience with NetApp at CERN IT/DB - 14

15 Key features: Flash Cache, RAID-DP, Snapshots, Compression. Experience with NetApp at CERN IT/DB - 15

16 Snapshots Experience with NetApp at CERN IT/DB - 16 T0: take snapshot 1

17 Snapshots Experience with NetApp at CERN IT/DB - 17 T0: take snapshot 1 T1: file changed

18 Snapshots Experience with NetApp at CERN IT/DB - 18 T0: take snapshot 1 T1: file changed T2: take snapshot 2
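Purely as an illustration of the timeline above (not taken from the slides), the two snapshots could be created from a 7-mode filer console; the volume and snapshot names are hypothetical:

fas3240> snap create oradata01 snap1    # T0: take snapshot 1 (only block pointers are frozen, no data is copied)
# T1: the file is changed; new data goes to fresh blocks, the snapshot keeps pointing at the old ones
fas3240> snap create oradata01 snap2    # T2: take snapshot 2
fas3240> snap list oradata01            # list both snapshots and the space they retain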

19 Snapshots for backups
With data growth, restoring databases in a reasonable amount of time becomes impossible with "traditional" backup/restore techniques. Example: 100 TB database, 10 GbE network, 4 tape drives with ~120 MB/s restore performance each → restore takes ~58 hours (and it can be much longer).
Experience with NetApp at CERN IT/DB - 19
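A quick check of the ~58-hour figure, assuming all four tape drives stream in parallel at the quoted rate:

\[ \frac{100\,\mathrm{TB}}{4 \times 120\,\mathrm{MB/s}} = \frac{10^{14}\,\mathrm{B}}{4.8\times 10^{8}\,\mathrm{B/s}} \approx 2.1\times 10^{5}\,\mathrm{s} \approx 58\ \mathrm{hours} \]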

20 Snapshots and Real Application Testing
[Diagram: the production workload (insert, PL/SQL, update, delete) is captured on the original 10.2 database; a clone is taken, upgraded to 11.2, and the captured workload is replayed against it.]
Experience with NetApp at CERN IT/DB - 20

21 Snapshots and Real Application Testing
[Diagram: with SnapRestore®, the clone is reverted to its snapshot after each run, so the captured workload can be replayed repeatedly against the upgraded 11.2 database.]
Experience with NetApp at CERN IT/DB - 20

22 Key features: Flash Cache, RAID-DP, Snapshots, Compression. Experience with NetApp at CERN IT/DB - 21

23 NetApp compression factor Experience with NetApp at CERN IT/DB - 22

24 Compression: backup on disk
RMAN file backups: 1 tape copy plus a disk buffer. Raw capacity: ~1700 TiB (576 × 3 TB disks); usable: 1000 TiB (to hold ~2 PiB of uncompressed data).
Experience with NetApp at CERN IT/DB - 23
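Taking the stated figures at face value, the disk buffer is sized for an average compression factor of roughly two:

\[ \frac{\sim 2\,\mathrm{PiB}}{1000\,\mathrm{TiB}} = \frac{2048\,\mathrm{TiB}}{1000\,\mathrm{TiB}} \approx 2 \]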

25 Future: ONTAP Cluster-Mode
Non-disruptive upgrades and operations: "the immortal cluster". Interesting new features:
– internal DNS load balancing
– export policies: fine-grained access control for NFS exports
– encryption and compression at the storage level
– NFS 4.1 implementation, parallel NFS (pNFS)
Scale-out architecture: up to 24 nodes (512 theoretical). Seamless data moves for capacity or performance rebalancing, or for hardware replacement.
Experience with NetApp at CERN IT/DB - 24

26 Architecture view – ONTAP Cluster-Mode Experience with NetApp at CERN IT/DB - 25

27 Possible implementation Experience with NetApp at CERN IT/DB - 26

28 Logical components Experience with NetApp at CERN IT/DB - 27

29 pNFS
– Part of the NFS 4.1 standard (client caching, Kerberos, ACLs)
– Coming with ONTAP 8.1RC2
– Not natively supported by Oracle yet; client support in RHEL 6.2
– Control protocol: provides synchronization between the data servers and the metadata server (MDS)
– pNFS between client and MDS: the client learns where the data is stored
– Storage access protocols: file-based, block-based and object-based
Experience with NetApp at CERN IT/DB - 28
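As an illustration only (not from the slides), mounting a filer export with NFS 4.1/pNFS from a RHEL 6.2 client might look as follows; the filer name, export path and mount point are hypothetical, and the exact options should be checked against Oracle and NetApp recommendations:

# request NFS version 4, minor version 1 (pNFS is used when the server supports it)
mount -t nfs4 -o minorversion=1,rw,hard filer1:/vol/oradata01 /ORA/dbs03/oradata
# verify the negotiated NFS version and mount options
nfsstat -m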

30 CERN IT Department CH-1211 Geneva 23 Switzerland www.cern.ch/it Summary
– Good reliability: six years of operations with minimal downtime
– Good flexibility: same setup for different uses/workloads
– Scales to our needs
Experience with NetApp at CERN IT/DB - 29

31 CERN IT Department CH-1211 Geneva 23 Switzerland www.cern.ch/it Q&A Thanks! Eric.Grancher@cern.ch Ruben.Gaspar.Aparicio@cern.ch Experience with NetApp at CERN IT/DB - 30

