SPARC Supercluster T4-4 Developers
Performance and Applications Engineering, Hardware Systems

The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.

Agenda
- Introduction to T4-4
- Introduction to Supercluster
- Use Case Configurations
- MWMe Setup

T4-4 Block Diagram (figure). The diagram shows the two front-mounted processor modules (PM0, PM1), each carrying two T4 ("YF") sockets with their DVRMs and DDR3 DIMMs (population order: blue x8, then white x8, then black x16); the rear I/O module with 4 x 10/100/1000BASE-T ports, two QSFP 10G links and the management Ethernet/serial ports; the main module with the AST2200 service processor, dual Intel 82576 (Kawela) MAC/PHYs, SAS-2 disk backplanes, REM0/REM1 RAID expansion modules and FMOD flash modules; four A239 power supplies and the system fans; and the Express Backplane [EB] with 16 hot-plug x8 PCI Express module slots (EM0-EM15). IO Link cards to an external I/O box must use EM8 first and EM2 second.

Supercluster Inventory
- Built using T4-4 systems
- 4 YF (T4) sockets with on-chip PCIe
- Each socket has 8 cores and 8 threads per core: 64 threads per socket, 256 threads per system
- 1 TB of memory
- 6 x 600 GB SAS disks, 2 x 300 GB SSDs
- 4 x CX2 InfiniBand HCAs, 4 x dual-port 10GbE cards
- Optional Fibre Channel cards
- PCIe module slots EM0-EM15: IB links connected at the factory, 10GbE links not connected at the factory

SPARC Supercluster Hardware Stack – Half Rack
Compute: 2 x T4-4 nodes, each with:
- 4 x T4 processors @ 3.0 GHz
- 1 TB memory
- 6 x 600 GB internal SAS disks
- 2 x 300 GB SSDs
- 4 x InfiniBand HCAs and 4 x 10GbE NICs
Network:
- 3 x NanoMagnum2 36-port InfiniBand switches
- GbE management switch
Storage:
- 3 Exadata Storage Servers
- Optional Exadata Storage Server Expansion Rack
Shared storage:
- ZFS Storage Appliance 7320 with 40 TB of disk capacity
Data migration:
- Optional FCAL HBA

SPARC Supercluster Hardware Stack – Full Rack
Compute: 4 x T4-4 nodes, each with:
- 4 x T4 processors @ 3.0 GHz
- 1 TB memory
- 6 x 600 GB internal SAS disks
- 2 x 300 GB SSDs
- 4 x InfiniBand HCAs and 4 x 10GbE NICs
Network:
- 3 x NanoMagnum2 36-port InfiniBand switches
- GbE management switch
Storage:
- 6 Exadata Storage Servers
- Optional Exadata Storage Server Expansion Rack
Shared storage:
- ZFS Storage Appliance 7320 with 40 TB of disk capacity
Data migration:
- Optional FCAL HBA

SPARC Supercluster Exadata Storage Servers
Storage server hardware: X4270 M2 with:
- 2 sockets of Xeon L5640 (6 cores @ 2.26 GHz, 12 MB L3)
- 24 GB memory (6 x 4 GB LV 1333 MHz DDR3 DIMMs)
- SAS-2 RAID HBA
- 12 disks: 600 GB @ 15K rpm (high performance) or 2 TB @ 7200 rpm (high capacity)
- 4 x F20 (Aura) flash cards @ 96 GB, 384 GB total
- 1 QDR IB HCA (2 ports)
Half rack: 3 Exadata Storage Servers (high performance or high capacity)
Full rack: 6 Exadata Storage Servers (high performance or high capacity)
Expansion rack (optional):
- Up to 18 Exadata Storage Servers
- 3 x NanoMagnum2 36-port InfiniBand switches

SPARC Supercluster Shared Storage
- A ZFS Storage Appliance is provided with both half- and full-rack configurations
- ZFS Storage Appliance 7320HA (2 controllers)
Each 7320 controller includes:
- 2 x quad-core Xeon processors @ 2.4 GHz
- 24 GB memory
- 4 x 512 GB Readzillas (read-optimized SSDs)
- InfiniBand HBA (2 ports)
- GbE management port
Disk shelf:
- 20 x 2 TB disks @ 7200 rpm
- 4 x 18 GB Logzillas (write-optimized SSDs)

SPARC Supercluster Software Stack
Operating system:
- Solaris 11 for Exadata and Exalogic nodes (physical or LDoms)
- Solaris 11 or Solaris 10 nodes for applications (physical or LDoms)
Virtualization: LDoms and Zones (including Branded Zones)
Management: Ops Center, and optionally Enterprise Manager Grid Control (for DB)
Clustering: Oracle Solaris Cluster (optional), Oracle Clusterware (for DB)
Database: 11gR2 to leverage the storage cells; other databases with external storage
Middleware: WebLogic Server with optional Exalogic Elastic Cloud Software
Applications: Oracle, ISV and customer applications qualified on Solaris 10 or Solaris 11

SPARC Supercluster Deployment Flexibility
SPARC Supercluster allows deployment of:
- Multiple tiers of an application
- Multiple applications, e.g. Payroll, Supply Chain, SAP Customer Service
- Multiple databases
Applications can run on a mix of Solaris releases:
- Within LDoms on Solaris 10 or 11
- Natively under Solaris 11
- In native Zones on either Solaris 10 or Solaris 11
- In Zone Clusters managed by Solaris Cluster
- In Branded Zones (S10BZs on S11; S8/S9BZs on S10) - see the sketch below
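As a rough illustration of the branded-zone option (this is not part of the Supercluster install flow; the zone name, zonepath and flash-archive path are invented, and the exact install options should be verified against the solaris10 brand documentation), a Solaris 10 branded zone on a Solaris 11 node might be created along these lines:

    # Solaris 10 branded zone hosted on a Solaris 11 node/domain (illustrative only)
    zonecfg -z s10zone 'create -t SYSsolaris10; set zonepath=/zones/s10zone'
    zoneadm -z s10zone install -u -a /export/archives/s10-system.flar   # -a: archive of an existing S10 system, -u: sys-unconfig it
    zoneadm -z s10zone boot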

Overall Requirements - Why Virtualization?
Need to support multiple stacks:
- Exadata (always needed because of the storage cells)
- Exalogic (optional)
- Solaris 10 and/or 11 for applications
Available virtualization technologies:
- Logical Domains (LDoms) on the T4 processor
- Containers/Zones in the Solaris OS

SPARC Supercluster Virtualization
LDoms for the major application stacks:
- Exadata, Exalogic, Solaris 10, Solaris 11
- Maximum of 4 LDoms per node
- Each with its own PCI root complex (split-PCIe domains)
Zones:
- Deploy on Solaris 10 and Solaris 11 nodes/domains
- Full resource management available (see the sketch below)
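To make the "full resource management" point concrete, here is a minimal sketch (the zone name, zonepath and the CPU/memory limits are placeholders, not a recommended sizing) using the standard zonecfg/zoneadm commands inside an application domain:

    # define a zone with CPU and memory limits (example names and values)
    zonecfg -z appzone1 'create; set zonepath=/zones/appzone1'
    zonecfg -z appzone1 'add dedicated-cpu; set ncpus=8; end'        # reserve 8 vCPUs for this zone
    zonecfg -z appzone1 'add capped-memory; set physical=32g; end'   # cap physical memory at 32 GB
    zoneadm -z appzone1 install
    zoneadm -z appzone1 boot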

Configuring LDoms for Each Stack
LDoms are created based on customer requirements, with a maximum of 4 per T4-4 node:
- A single LDom per node for Exadata
- A single LDom per node for Solaris 10 applications
- A single LDom per node for Solaris 11 applications (if required)
- One LDom per socket on each node for Exalogic
The number of LDoms is fixed once they are created during the initial install (an illustrative ldm sketch follows).
Assignment of CPUs and memory:
- Fixed for Exalogic
- For the Exadata and application LDoms, the customer can choose the proportion of CPU/memory assigned to each stack
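The sketch below shows, in rough outline, how a dedicated application domain could be defined from the control domain with the standard Oracle VM Server for SPARC (ldm) commands; it is not the Supercluster installer's actual procedure, and the domain name, core/memory amounts and PCIe bus name are placeholders:

    # create a guest/I/O domain for the Solaris 10 application stack (example names and sizes)
    ldm add-domain s10-apps
    ldm set-core 16 s10-apps          # assign whole cores
    ldm set-memory 256G s10-apps
    ldm add-io pci_1 s10-apps         # give the domain its own PCIe root complex (split-PCIe)
    ldm bind-domain s10-apps
    ldm start-domain s10-apps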

Supercluster Strategy – Dedicated Software Stacks with LDoms
Use dedicated nodes:
- For Exadata
- For Solaris 10/11 applications
- For Exalogic (as required)
- Simpler to manage and patch each stack independently
LDoms can increase the number of available nodes:
- Especially useful in the 2 x T4-4 (half-rack) case
- Typically only create 2 LDoms per physical node
Configure LDoms during system bringup:
- Lock down the LDom configuration after initial setup
- CPUs and memory can still be migrated between LDoms, though
Use Containers/Zones, not LDoms, for dynamic virtualization:
- Ops Center can manage zones running in LDoms

LDom Configuration Options
CPU and memory placement alternatives:
Optimal performance (the Supercluster default):
- Fixed configuration with no ability to migrate CPU or memory
- NUMA effects minimized
Greatest flexibility (see the sketch below):
- Balance cores and memory across all sockets
- Memory can be migrated with 32 GB granularity, ensuring an equal amount of memory from each lgroup
- CPUs can be migrated with 4-core granularity, to include one core from each socket
- This granularity preserves balanced performance
- A restart of the affected domains is required after changes
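A minimal sketch of what the "greatest flexibility" option permits, assuming two domains named apps-ldom and exadata-ldom (the names and amounts are examples; moves must respect the 4-core / 32 GB granularity above, and the affected domains must be restarted afterwards):

    # move 4 cores and 32 GB of memory from the application domain to the Exadata domain
    ldm remove-core 4 apps-ldom
    ldm add-core 4 exadata-ldom
    ldm remove-memory 32G apps-ldom
    ldm add-memory 32G exadata-ldom
    ldm list -o core,memory           # verify the resulting core/memory layout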

SPARC Supercluster Potential Use Cases
More common:
- Exadata + Exalogic/Apps on Solaris 11
- Exadata + Apps on Solaris 10
- Exadata only
Less common:
- Exadata + Exalogic/Apps on Solaris 11 + Apps on Solaris 10
Exalogic here means any WebLogic Suite/FMW-based application.

4 Physical Nodes with HA
Exadata + Apps on Solaris 10 use case:
- Entire nodes are dedicated to each stack in this example
- No flexibility to migrate CPUs and memory between Exadata and the apps
Diagram: T4-4 Nodes 1 and 2 run Solaris 11 with DB 11gR2 RAC; T4-4 Nodes 3 and 4 run Solaris 10 with Siebel, App and SAP zone clusters.

2 Physical Nodes with HA
Exadata + Apps on Solaris 10 use case:
- Resources do not need to be equally split between the Exadata and Apps LDoms
- CPU and memory can be dynamically migrated between LDoms
- Software HA via Solaris Cluster for apps and Clusterware for the DB
Diagram: each T4-4 node runs an Exadata LDom (Solaris 11, DB 11gR2) alongside application LDoms (Solaris 10) hosting Siebel, App and ISV app workloads.

2 Physical Nodes with HA
Exadata + SAP on Solaris 11 use case:
- More resources for SAP in this example
- CPU and memory can be dynamically migrated between LDoms
- Software HA via Solaris Cluster for apps and Clusterware for the DB
Diagram: each T4-4 node runs an Exadata LDom and a larger SAP LDom, all on Solaris 11 (SAP plus DB 11gR2).

Root and Swap with Exadata and S10 Apps
- Internal hard disks provide direct root/swap for both LDoms
- No Exalogic in this configuration, so the SSDs can be used for additional swap (see the sketch below)
Diagram: one T4-4 node with an Exadata LDom (Solaris 11, DB 11gR2) and an application LDom (Solaris 10, Apps); solid red lines indicate mirrored internal disks; two SSDs are available.
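For the additional-swap point, a minimal sketch under the assumption that the two SSDs are visible to the domain as ordinary disks (the pool, volume and device names plus the 64 GB size are invented for the example):

    # build a mirrored pool on the SSDs and add a ZFS volume as extra swap
    zpool create ssdpool mirror c0t4d0 c0t5d0
    zfs create -V 64g ssdpool/swap1
    swap -a /dev/zvol/dsk/ssdpool/swap1
    swap -l                            # confirm the new swap device is active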

Root and Swap with Exadata and Exalogic
- Internal hard disks provide direct root/swap for 2 of the LDoms
- Vdisks provide root/swap for the other 2 LDoms
- The mirrored vdisks are not dependent on a single LDom
- Exalogic is configured as 1 LDom per socket
Diagram: one T4-4 node with an Exadata LDom (DB 11gR2) and three Exalogic LDoms (Apps), all on Solaris 11; solid red lines indicate disk mirrors, dashed black lines indicate mirrored vdisks; two SSDs are present.

Root and Swap with LDoms
- Internal hard disks provide direct root/swap for 2 of the LDoms
- Vdisks provide root/swap for the other 2 LDoms (see the sketch below)
- The mirrored vdisks are not dependent on a single LDom
- If there is no Exalogic, the SSDs can be used for additional swap
Diagram: one T4-4 node with an Exadata LDom (Solaris 11, DB 11gR2), two application LDoms (Solaris 10 with App1, Solaris 11 with App2) and an Exalogic LDom (Solaris 11, Fusion Middleware); solid red lines indicate disk mirrors, dashed black lines indicate mirrored vdisks; two SSDs are present.
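As a rough sketch of how vdisk-backed root disks could be provided to the two LDoms without direct disk access (all service, volume, device and domain names are placeholders; the point is that each half of the mirror is served by a different I/O domain, so the guest does not depend on a single LDom; all ldm commands are run in the control domain):

    # export one backend device from the primary domain and one from a second I/O domain
    ldm add-vds primary-vds0 primary
    ldm add-vdsdev /dev/dsk/c0t2d0s2 app1-root@primary-vds0
    ldm add-vdisk rootdisk0 app1-root@primary-vds0 app-ldom1

    ldm add-vds secondary-vds0 io-ldom2
    ldm add-vdsdev /dev/dsk/c0t3d0s2 app1-rootm@secondary-vds0
    ldm add-vdisk rootdisk1 app1-rootm@secondary-vds0 app-ldom1

    # inside the guest, mirror the two vdisks with ZFS (guest device names are examples)
    zpool attach rpool c1d0s0 c1d1s0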

Supercluster Install
Factory install: the Exadata/Exalogic Solaris 11 image is placed on all nodes.
At the customer site:
- The required LDoms are created
- The required OS is loaded on each node/LDom
- Exadata onecommand is run on the Exadata nodes/LDoms only
- Exalogic/Supercluster onecommand is run on the other nodes/LDoms

Patching and Updates
Exadata/storage cell updates and patching:
- Follow the same process and cadence as for x86 Exadata
- Only carried out on the Exadata nodes/LDoms
Exalogic updates and patching:
- Follow the same process and cadence as for x86 Exalogic
- Only carried out on the Exalogic nodes/LDoms
Application updates and patching:
- Applications run on generic Solaris 10 and 11 nodes/LDoms
- Follow the same process and cadence as for the same application in other environments

EL+ED (Exalogic + Exadata) on a Half-Rack Supercluster
Diagram: 3 x Exadata storage nodes; each T4-4 node is split into LDoms via split-PCIe, each domain with 1 x IB and 1 x 10GbE link; per node, RAC DB domains (RAC DB1, RAC DB2) run alongside AppDomain1-3; the IB switch connects the nodes and storage cells, with an optional IB link; clients/emulators attach through the 10G switch.