1 SPARC Supercluster T4-4 Developers
Performance and Applications Engineering, Hardware Systems

2 The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

3 Agenda
Introduction to T4-4
Introduction to Supercluster
Use Case Configurations
MWMe Setup

4 T4-4 Block Diagram
[Block diagram of the T4-4: two front processor modules (PM0, PM1), each carrying two T4 (YF) CMPs with their DDR3 DIMMs (population order: blue x8, then white x8, then black x16), connected through the Express Backplane to the rear I/O module (4x 10/100/1000 Ethernet, 2x QSFP 10GbE XAUI, USB, serial and network management), 16 hot-plug x8 PCI Express module slots (EM0-EM15), SAS-2 disk backplanes for the 8 internal drives, an AST2200-based service processor, four power supplies, and system fans. IO Link cards to an external I/O box must use EM8 first and EM2 second.]

5 Supercluster Inventory
Built using T4-4 systems: 4 YF (T4) sockets with on-chip PCIe; each socket has 8 cores and 8 threads per core (64 threads per socket, 256 threads per system); 1 TB of memory; 6 x 600 GB SAS disks and 2 x 300 GB SSDs; 4 x CX2 IB HCAs and 4 x dual-port 10GbE cards; optional Fibre Channel cards.
PCIe slots EM0-EM15: the IB links are connected at the factory, the 10GbE links are not. (See the short arithmetic sketch below.)
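The per-socket and per-system totals above follow directly from the socket topology; a trivial sketch of the arithmetic (node figures from this slide, rack node counts from the half/full-rack slides that follow):

```python
# Thread and memory totals derived from the T4-4 inventory above.
SOCKETS_PER_NODE = 4
CORES_PER_SOCKET = 8
THREADS_PER_CORE = 8
MEM_TB_PER_NODE = 1

threads_per_socket = CORES_PER_SOCKET * THREADS_PER_CORE   # 64
threads_per_node = SOCKETS_PER_NODE * threads_per_socket   # 256

# Rack-level totals (2 nodes in a half rack, 4 in a full rack).
for label, nodes in (("half rack", 2), ("full rack", 4)):
    print(f"{label}: {nodes * threads_per_node} threads, "
          f"{nodes * MEM_TB_PER_NODE} TB memory")
```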

6 SPARC Supercluster Hardware Stack – Half Rack
Compute: 2 * T4-4 nodes, each with 4 * T4 3.0 GHz, 1 TB memory, 6 * 600 GB internal SAS disks, 2 * 300 GB SSDs, 4 * InfiniBand HCAs and 4 * 10GbE NICs
Network: 3 * NanoMagnum2 36-port InfiniBand switches, GbE management switch
Storage: 3 Exadata Storage Servers, optional Exadata Storage Server Expansion Rack
Shared Storage: ZFS Storage Appliance 7320 with 40 TB of disk capacity
Data Migration: optional FCAL HBA

7 SPARC Supercluster Hardware Stack – Full Rack
Compute: 4 * T4-4 nodes, each with 4 * T4 3.0 GHz, 1 TB memory, 6 * 600 GB internal SAS disks, 2 * 300 GB SSDs, 4 * InfiniBand HCAs and 4 * 10GbE NICs
Network: 3 * NanoMagnum2 36-port InfiniBand switches, GbE management switch
Storage: 6 Exadata Storage Servers, optional Exadata Storage Server Expansion Rack
Shared Storage: ZFS Storage Appliance 7320 with 40 TB of disk capacity
Data Migration: optional FCAL HBA

8 SPARC Supercluster Exadata Storage Servers
Storage Server Hardware: X4270 M2 with 2 sockets Xeon L GHz, 12 MB L3; 24 GB memory (6 * 4 GB LV 1333 MHz DDR3 DIMMs); SAS-2 RAID HBA; 12 disks (Hi Perf or Hi Capacity); 4 * F20 96 GB, 384 GB total; 1 QDR IB HCA (2 ports)
Half Rack: 3 Exadata Storage Servers (Hi Perf or Hi Capacity)
Full Rack: 6 Exadata Storage Servers (Hi Perf or Hi Capacity)
Expansion Rack (optional): up to 18 Exadata Storage Servers, 3 * NanoMagnum2 36-port InfiniBand switches

9 SPARC Supercluster Shared Storage
ZFS Storage Appliance, provided with both half- and full-rack configurations: ZFS Storage Appliance 7320HA (2 controllers)
Each 7320 controller includes: 2 * quad-core Xeon 2.4 GHz, 24 GB memory, 4 * 512 GB Readzillas (read-optimized SSDs), InfiniBand HBA (2 ports), GbE management port
Disk Shelf: 20 * 2 TB 7200 rpm disks, 4 * 18 GB Logzillas (write-optimized SSDs)

10 SPARC Supercluster Software Stack
Operating System: Solaris 11 for Exadata and Exalogic nodes (physical or LDoms); Solaris 11 or Solaris 10 nodes for applications (physical or LDoms)
Virtualization: LDoms and Zones (including Branded Zones)
Management: Ops Center, and optionally Enterprise Manager Grid Control (for DB)
Clustering: Oracle Solaris Cluster (optional), Oracle Clusterware (for DB)
Database: 11gR2 to leverage the Storage Cells; other databases with external storage
Middleware: WebLogic Server with optional Exalogic Elastic Cloud Software
Applications: Oracle, ISV, and customer applications qualified on Solaris 10 or Solaris 11

11 Deployment Flexibility
SPARC Supercluster allows deployment of: multiple tiers of an application; multiple applications (e.g. Payroll, Supply Chain, SAP Customer Service); multiple databases.
Applications can run on a mix of Solaris releases: within LDoms on Solaris 10 or 11; natively under Solaris 11; in native Zones on either Solaris 10 or Solaris 11; in Zone Clusters managed by Solaris Cluster; in Branded Zones (S10BZs on S11; S8/S9BZs on S10). A zone-creation sketch follows below.
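As one concrete illustration of the zone options above, here is a minimal, hypothetical sketch that prints the Solaris commands to define, install, and boot a simple native zone for an application tier. The zone name and zonepath are assumptions; a Solaris 10 branded zone on Solaris 11 would use the solaris10 brand instead, and zone clusters are created through Solaris Cluster rather than plain zonecfg.

```python
# Hypothetical sketch: commands to create a native Solaris zone for an
# application tier. They are printed for review rather than executed;
# the zone name and zonepath are illustrative assumptions.

ZONE = "appzone1"
ZONEPATH = "/zones/appzone1"   # assumed location, e.g. on a local ZFS dataset

commands = [
    f'zonecfg -z {ZONE} "create; set zonepath={ZONEPATH}"',
    f"zoneadm -z {ZONE} install",
    f"zoneadm -z {ZONE} boot",
    f"zlogin -C {ZONE}",   # attach to the console for first-boot configuration
]

for cmd in commands:
    print(cmd)
```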

12 Overall Requirements, Why Virtualization?
Need to support multiple stacks: Exadata (always needed because of the storage cells), Exalogic (optional), and Solaris 10 and/or 11 for applications.
Available virtualization technologies: Logical Domains (LDoms) on the T4 processor; Containers/Zones in the Solaris OS.

13 SPARC Supercluster Virtualization
LDoms for the major application stacks: Exadata, Exalogic, Solaris 10, Solaris 11. Maximum of 4 LDoms per node, each with its own PCI root complex (split-PCIe domains).
Zones: deploy on Solaris 10 and Solaris 11 nodes/domains; full resource management available.

14 Configuring LDoms for each stack
LDoms are created based on customer requirements, with a maximum of 4 per T4-4 node: a single LDom per node for Exadata; a single LDom per node for Solaris 10 applications; a single LDom per node for Solaris 11 applications (if required); one LDom per socket on each node for Exalogic.
The number of LDoms is fixed once created during the initial install.
Assignment of CPUs and memory: fixed for Exalogic; for the Exadata and application LDoms, the customer can choose the proportion of CPU/memory assigned to each stack (see the sketch below).
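To make the layout concrete, below is a minimal, hypothetical dry-run sketch (not the Supercluster install tooling) that prints the Oracle VM Server for SPARC ldm commands one might review before creating an Exadata domain and a Solaris 10 application domain on one T4-4 node. The domain names, core counts, and memory sizes are illustrative assumptions; the real proportions are chosen by the customer as described above.

```python
# Hypothetical dry-run generator for one T4-4 node's LDom layout.
# It only prints candidate ldm commands for review on the control domain;
# domain names and resource sizes are examples, not the install procedure.

# (domain name, whole cores, memory) -- an illustrative even split of
# 32 cores and 1 TB between the Exadata and Solaris 10 application domains
DOMAINS = [
    ("exadata-dom", 16, "512G"),
    ("s10app-dom",  16, "512G"),
]

def domain_commands(name, cores, memory):
    """Return the ldm commands that would define, bind, and start one domain."""
    return [
        f"ldm add-domain {name}",
        f"ldm set-core {cores} {name}",
        f"ldm set-memory {memory} {name}",
        f"ldm bind-domain {name}",
        f"ldm start-domain {name}",
    ]

for name, cores, memory in DOMAINS:
    for cmd in domain_commands(name, cores, memory):
        print(cmd)
```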

15 Supercluster Strategy – Dedicated Software Stacks with LDoms
Use dedicated nodes: for Exadata, for Solaris 10/11 applications, for Exalogic (as required). It is simpler to manage and patch each stack independently.
LDoms can increase the number of available nodes, which is especially useful in the 2 * T4-4 (half-rack) case; typically only 2 LDoms are created per physical node.
Configure LDoms during system bring-up and lock down the LDom configuration after initial setup. CPUs and memory can still be migrated between LDoms, though (as sketched below).
Use Containers/Zones, not LDoms, for dynamic virtualization.
Ops Center can manage zones running in LDoms.
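As a rough illustration of that post-setup flexibility, the following sketch prints the kind of ldm resource changes that move cores and memory from one domain to another. Domain names and amounts are assumptions; the step sizes match the granularity described on the next slide, and the deck notes that affected domains need a restart after such changes.

```python
# Hypothetical sketch: move CPU and memory between two existing LDoms.
# Commands are printed for review, not executed; domain names and the
# amounts moved are illustrative assumptions.

MOVE_CORES = 4        # one core from each of the 4 sockets (see next slide)
MOVE_MEMORY = "32G"   # matches the 32 GB memory granularity (see next slide)
SRC, DST = "s10app-dom", "exadata-dom"

commands = [
    f"ldm remove-core {MOVE_CORES} {SRC}",
    f"ldm remove-memory {MOVE_MEMORY} {SRC}",
    f"ldm add-core {MOVE_CORES} {DST}",
    f"ldm add-memory {MOVE_MEMORY} {DST}",
]

for cmd in commands:
    print(cmd)
```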

16 LDom Configuration Options
CPU and memory placement alternatives:
Optimal performance (the Supercluster default): fixed configuration with no ability to migrate CPU or memory; NUMA effects minimized.
Greatest flexibility: cores and memory are balanced across all sockets. Memory can be migrated with 32 GB granularity, which ensures an equal amount of memory from each lgroup. CPUs can be migrated with 4-core granularity, so as to include one core from each socket; this granularity preserves balanced performance. A restart of affected domains is required after changes. (A worked example of the granularity follows below.)
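A small worked example of the granularity rules above, assuming 32 cores and 1024 GB per T4-4 node: a requested Exadata share is rounded to whole 4-core and 32 GB units, and the remainder goes to the application LDom.

```python
# Worked example of the 4-core / 32 GB migration granularity on one node.
# Assumes 4 sockets x 8 cores = 32 cores and 1024 GB of memory per T4-4 node.

NODE_CORES = 32
NODE_MEM_GB = 1024
CORE_STEP = 4      # one core from each of the 4 sockets
MEM_STEP_GB = 32   # memory moves in 32 GB units

def granular_split(exadata_fraction):
    """Round a requested Exadata share to whole 4-core and 32 GB units."""
    cores = round(exadata_fraction * NODE_CORES / CORE_STEP) * CORE_STEP
    mem = round(exadata_fraction * NODE_MEM_GB / MEM_STEP_GB) * MEM_STEP_GB
    return (cores, mem), (NODE_CORES - cores, NODE_MEM_GB - mem)

# Ask for roughly 60% of the node for the Exadata LDom.
(exa_c, exa_m), (app_c, app_m) = granular_split(0.60)
print(f"Exadata LDom: {exa_c} cores, {exa_m} GB")   # 20 cores, 608 GB
print(f"App LDom:     {app_c} cores, {app_m} GB")   # 12 cores, 416 GB
```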

17 SPARC Supercluster Potential Use Cases
More common: Exadata + Exalogic/Apps on Solaris 11; Exadata + Apps on Solaris 10; Exadata only.
Less common: Exadata + Exalogic/Apps on Solaris 11 + Apps on Solaris 10.
Exalogic => any WebLogic Suite/FMW-based application.

18 4 Physical Nodes with HA: Exadata + Apps on Solaris 10 use case
Entire nodes are dedicated to each stack in this example, so there is no flexibility to migrate CPUs and memory between Exadata and the applications.
[Diagram: T4-4 nodes 1 and 2 run Solaris 11 with DB 11gR2 RAC; nodes 3 and 4 run Solaris 10 with Siebel, App, and SAP zone clusters spanning both nodes.]

19 2 Physical Nodes with HA: Exadata + Apps on Solaris 10 use case
Resources do not need to be split equally between the Exadata and Apps LDoms; CPU and memory can be dynamically migrated between LDoms. Software HA is provided via Solaris Cluster for the applications and Clusterware for the database.
[Diagram: each of the two T4-4 nodes hosts an Exadata LDom (Solaris 11, DB 11gR2) and an Apps LDom (Solaris 10) running Siebel, App, and ISV app workloads.]

20 2 Physical Nodes with HA: Exadata + SAP on Solaris 11 use case
More resources are given to SAP in this example; CPU and memory can be dynamically migrated between LDoms. Software HA is provided via Solaris Cluster for the applications and Clusterware for the database.
[Diagram: each of the two T4-4 nodes hosts a SAP LDom (Solaris 11, SAP) and an Exadata LDom (Solaris 11, DB 11gR2).]

21 Root and Swap with Exadata and S10 Apps
The internal hard disks provide direct root/swap for both LDoms. There is no Exalogic in this configuration, so the SSDs can be used for additional swap (as sketched below).
[Diagram: T4-4 node with an Exadata LDom (Solaris 11, DB 11gR2) and an Application LDom (Solaris 10, Apps); solid red lines indicate mirrors; two SSDs.]
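A minimal sketch of the "SSDs for additional swap" point: create a ZFS volume on a pool built from the SSDs and add it as a swap device. The pool name, volume name, and size are assumptions, not the documented Supercluster layout.

```python
# Hypothetical sketch: carve additional swap out of an SSD-backed ZFS pool.
# Commands are printed for review; the pool/volume names and the 64 GB size
# are illustrative assumptions.

POOL = "ssdpool"              # assumed mirrored pool on the two internal SSDs
SWAP_VOL = f"{POOL}/swap1"
SWAP_SIZE = "64G"

commands = [
    f"zfs create -V {SWAP_SIZE} {SWAP_VOL}",   # create the swap volume
    f"swap -a /dev/zvol/dsk/{SWAP_VOL}",       # add the zvol as swap
    "swap -l",                                 # verify the swap devices
]

for cmd in commands:
    print(cmd)
```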

22 Root and Swap with Exadata and Exalogic
The internal hard disks provide direct root/swap for 2 of the LDoms, and vdisks provide root/swap for the other 2. The mirrored vdisks are not dependent on a single LDom. Exalogic is configured as 1 LDom per socket.
[Diagram: T4-4 node with an Exadata LDom (Solaris 11, DB 11gR2) and three Exalogic LDoms (Solaris 11, Apps); solid red lines indicate mirrors, dashed black lines indicate vdisks backed by a mirrored vdisk; two SSDs.]

23 Root and Swap with LDoms
The internal hard disks provide direct root/swap for 2 of the LDoms, and vdisks provide root/swap for the other 2. The mirrored vdisks are not dependent on a single LDom. If there is no Exalogic, the SSDs can be used for additional swap. (A vdisk sketch follows below.)
[Diagram: T4-4 node with an Exadata LDom (Solaris 11, DB 11gR2), two App LDoms (Solaris 10 with App1, Solaris 11 with App2), and an Exalogic LDom (Solaris 11, Fusion Middleware); solid red lines indicate mirrors, dashed black lines indicate vdisks backed by a mirrored vdisk; two SSDs.]
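A rough sketch of the vdisk part: the control domain exports a backing device (here assumed to be a ZFS volume on a mirrored pool) through its virtual disk service, and the guest LDom sees it as a root disk. The volume path, service name, and domain name are illustrative assumptions.

```python
# Hypothetical sketch: export a backing volume from the control domain as a
# virtual disk and attach it to a guest LDom as its root disk.
# Commands are printed for review; names and paths are assumptions.

GUEST = "exalogic-dom1"
BACKEND = "/dev/zvol/dsk/rpool/ldomroot1"   # assumed zvol on a mirrored pool
VDS = "primary-vds0"                        # virtual disk service in the control domain

commands = [
    f"ldm add-vdsdev {BACKEND} {GUEST}-root@{VDS}",        # export the backend
    f"ldm add-vdisk rootdisk {GUEST}-root@{VDS} {GUEST}",  # attach to the guest
]

for cmd in commands:
    print(cmd)
```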

24 Supercluster Install
The factory install places the Exadata/Exalogic Solaris 11 image on all nodes. At the customer site: the required LDoms are created; the required OS is loaded on each node/LDom; Exadata onecommand is run on the Exadata nodes/LDoms only; Exalogic/Supercluster onecommand is run on the other nodes/LDoms.

25 Patching and Updates
Exadata/Storage Cell updates and patching follow the same process and cadence as for x86 Exadata, and are carried out only on the Exadata nodes/LDoms.
Exalogic updates and patching follow the same process and cadence as for x86 Exalogic, and are carried out only on the Exalogic nodes/LDoms.
Application updates and patching, for applications running on generic Solaris 10 and 11 nodes/LDoms, follow the same process and cadence as for the same application in other environments.

26 EL+ED On Half-Rack Supercluster
[Diagram: half-rack Exalogic + Exadata (EL+ED) layout. 3 x Exadata storage nodes connect over the IB switch to two T4-4 compute nodes, each split into LDoms (split-PCIe) with 1 x IB and 1 x 10GbE interface per domain: AppDomain1-3 plus a RAC DB domain (RAC DB1, RAC DB2) per node. An optional IB link and a 10GbE switch connect to clients/emulator.]

