Appendix B: Configuring Warehouse Builder in RAC Environments
Copyright © 2009, Oracle. All rights reserved.
B - 2 Objectives
After reading this appendix, you should be familiar with:
- Devising a plan for installing and configuring OWB in your RAC environment
- Using Oracle Universal Installer and the OWB Repository Assistant to install the OWB repository and register it on all cluster nodes
- Replicating files among nodes as necessary
- Changing a database configuration parameter to enable nodes to synchronize more quickly
- Using the Repository Browser to monitor node activity
- Locating log files and collecting other information to troubleshoot node failures and monitor recovery
B - 3 Scope of RAC Discussed in This Appendix
- It is assumed that the RAC architecture has already been designed, installed, and configured in your workplace.
- Warehouse Builder will be installed and configured to fit the given RAC architecture.
- Warehouse Builder will be installed by staff with some degree of RAC experience and expertise.
- The Oracle Warehouse Builder 11g Release 2 Installation Guide has a detailed section on RAC.
- There is detailed Oracle documentation on configuring Oracle for RAC.
B - 4 Clusters
- An Oracle RAC system is composed of a group of independent servers, or nodes.
- Interconnected nodes act as a single server.
- Cluster software hides the structure.
- Disks are available for read and write by all nodes.
- The operating system is the same on each machine.
(Diagram: nodes running clusterware, joined by an interconnect to shared disks)
B - 5 Oracle Real Application Clusters (RAC)
- Multiple instances accessing the same database
- Instances spread across nodes
- Physical or logical access to each database file
- Software-controlled data access
(Diagram: instances on each node sharing a cache over the interconnect, all accessing the same database files)
B - 6 Benefits of RAC
- High availability: Survive node and instance failures.
- No scalability limits: Add more nodes as you need them tomorrow.
- Pay as you grow: Pay for just what you need today.
- Key grid computing features:
  – Growth and shrinkage on demand
  – Single-button addition and removal of servers
  – Automatic workload management for services
- Goals of RAC: high availability through load balancing and failover handling
B - 7 OWB Certification on RAC
- OWB versions certified for RAC:
  – OWB 10.1.0.4 is certified (with both Database 10.1 and 10.2).
  – OWB 10g R2 is certified.
  – OWB 11g R1 is certified.
  – OWB 11g R2 will be certified.
- OWB for RAC is certified on a listed set of database platforms.
- For the latest certification and porting information, check OTN and MetaLink.
B - 8 Typical Connect Failover Scenarios
- Database instance dies (crashes or is brought down for maintenance).
  – The Control Center Service dies if it is on the node where the database instance dies (the rest of the RAC is unaffected).
  – Connections may fail if the target instance for a request is down.
- Listener dies.
  – Control Center reconnections may fail if the Control Center resides on a node where the listener is down.
  – Connections may fail if the target listener for a request is down.
- Node dies (crashes or is brought down for maintenance).
  – The Control Center Service dies if it is on the dead node.
  – Connections may fail if the target node for a request is down.
- Session death must be handled by the client.
B - 9 Control Center Service Failover on RAC
1. The Control Center Service (CCS) node fails.
2. The CCS goes down.
3. The CCS comes up on a different node of the RAC.
4. The CCS resumes its activities.
(Diagram: the control center moving between nodes of a single logical instance)
B - 10 Supported and Unsupported RAC Features
- Load balancing
  – Server-side connection load balancing (supported)
  – Client-side load balancing (not supported)
- Failover types
  – Connect-time failover when a node, listener, or instance is down (supported, with a properly configured tnsnames.ora)
  – Transparent Application Failover (not supported)
  – Multiple concurrent Control Center Services (not supported)
B - 11 Lesson Progress Report
Phases: Install OWB. Configure database and OWB. Monitor nodes. Troubleshoot.
1. Decide whether to use OWB with shared storage or with nonshared local disks on each RAC node.
2. Select Cluster or Local Installation; run root.sh on each node.
3. Install the OWB repository only once, to one node. If using a shared disk, install the repository there.
4. Use the Repository Assistant on the other nodes to register the repository.
5. On the database, set MAX_COMMIT_PROPAGATION_DELAY to 0.
6. Replicate rtrepos.properties and tnsnames.ora on each node.
7. Define OWB locations with TNS names, not host:port:service.
8. Use the OWB Browser to monitor, enable, or disable the Control Center Service on a node.
9. Search logs on nodes, run helpful utilities, and avoid common mistakes.
B - 12 Single Shared Disk Versus Local Disk Per Node
- This decision is usually made before OWB is considered; it is not an OWB decision.
- Usually, most of the RAC architecture has already been decided; OWB only needs to fit in.
- Advantages of a shared disk:
  – Centralized product installation and maintenance
  – No need to replicate files among nodes
  – No need to find and collect log files among nodes
(Diagram: a single shared disk versus a local disk on each node)
B - 13 Extra Tasks Required in Nonshared Local Disk Environments
- Warehouse Builder must be installed on every node.
- rtrepos.properties and tnsnames.ora must be copied to each node (tnsnames.ora must be placed in two locations per node, in addition to the Oracle home).
- In a nonshared disk environment, there is a log directory on every nonshared disk.
  – The log directory primarily contains Control Center Service log files, written to whichever node was active at the time.
  – You must look across all nodes to find the most recent log file, possibly by using time stamps.
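One way to compare time stamps across the per-node log directories is to list all log files sorted by modification time. A minimal sketch, using hypothetical /tmp directories to stand in for each node's real OWB_HOME/owb/log directory:

```shell
# Hypothetical stand-ins for each node's local OWB log directory
# (on a real cluster these would be OWB_HOME/owb/log on each nonshared disk).
mkdir -p /tmp/node1_log /tmp/node2_log
touch -t 202401010101 /tmp/node1_log/ccs_node1.log
touch -t 202401020202 /tmp/node2_log/ccs_node2.log

# List all Control Center Service logs newest-first; keep the first entry.
newest=$(ls -t /tmp/node1_log/* /tmp/node2_log/* | head -n 1)
echo "$newest"
```

In practice the directories live on different hosts, so you would first gather the listings (for example over SSH) before comparing; the sorting idea is the same.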
B - 14 Lesson Progress Report (repeats the nine-step plan from slide B - 11)
B - 15 Installing OWB on Real Application Clusters
- OWB server software must be present on every node of the RAC cluster.
  – The OWB Control Center Service requires this.
- The Control Center browser enables you to nominate one node and register all other nodes.
- You need to install OWB only on the nodes that you want OWB to run on.
- Only one Control Center Service runs at a time.
B - 16 OWB Installation Decisions for RAC
Shared disk or local disk on each node?
- Shared disk: select “Local Installation” (install to the shared disk, not to the nodes).
- Local disks: choose Cluster or Local Installation.
  – “Cluster Installation” installs OWB locally to all chosen nodes simultaneously.
  – “Local Installation” installs OWB one node at a time.
This lesson shows a cluster installation to two nodes.
B - 17 Installing OWB with Oracle Universal Installer
Note: These slides show OWB being installed to an Oracle 10g R2 database.
B - 18 Specifying Oracle Home for All Nodes
B - 19 Selecting Cluster or Local Installation
When installing to a local disk on each node, rather than to a shared disk, there are two choices:
- Cluster Installation: installs OWB locally to all chosen nodes simultaneously (install once).
- Local Installation: installs OWB one node at a time.
B - 20 Selecting Cluster Installation
- OWB will be installed to both nodes.
- Secure Shell (SSH) is required for Cluster Installation; it allows connecting to each node without a password.
B - 21 Installing to the Same Path on All Nodes
OWB will be installed to this home on both cluster nodes.
B - 22 Installation Includes Copying to Remote Nodes
In this example, OWB is first installed locally to the OWB home of node stbde03. Because Cluster Installation was selected, that OWB home is then copied to node stbde04.
B - 23 Executing the root.sh Configuration Script on Each Cluster Node
This dialog box is merely a reminder to run the root.sh script on each node as “root.” You cannot use it to execute the script.
B - 24 Lesson Progress Report (repeats the nine-step plan from slide B - 11)
B - 25 Installing the Repository to One Node and Registering It to Other Nodes
(Diagram: a shared disk on a network server serving two nodes. Node 1, host STBDE03, DB instance OWBRAC1, net service name SN_OWBRAC1, holds the installed OWB repository. Node 2, host STBDE04, DB instance OWBRAC2, net service name SN_OWBRAC2, has the repository registered. On either node, the database, the listener, or the node itself might die.)
B - 26 Installing the OWB Repository with the Repository Assistant
- Install the OWB repository only once, to one node.
- Then register the repository on all other nodes.
B - 27 Connection Information for the Node
Installing the OWB repository to node 1, STBDE03:
- Physical machine name for the STBDE03 node
- Database instance name on the STBDE03 node
- A net service name must be used for RAC; it is stored in tnsnames.ora.
OWB on RAC requires multiple service names: one for the cluster as a whole, plus a service name for each node of the cluster. The OWB 10g R2 Installation Guide has a RAC section detailing this.
B - 28 Finishing Installation of the Repository to a Node
- The screenshot shows the net service name of node 1.
- Use the Repository Assistant as you normally would in a non-RAC setup.
B - 29 Rerunning the Repository Assistant to Register the Repository on All Other Nodes
B - 30 Finishing OWB Repository Registration
B - 31 Only One Database Parameter Specifically Recommended for RAC
- MAX_COMMIT_PROPAGATION_DELAY
  – Change the default of 99 to 0.
  – A value of 0 aligns the nodes for faster synchronization.
- This change is recommended, not required.
- For details, see MetaLink note 341963.1, Part 1, Page 51, item 13.
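A sketch of making the change in SQL*Plus, assuming SYSDBA access on the RAC database; this is a configuration change that requires a live instance, and the SID='*' clause and SPFILE scope are standard ALTER SYSTEM syntax for applying a setting to all instances:

```shell
# Hypothetical session: set MAX_COMMIT_PROPAGATION_DELAY to 0 for every
# RAC instance. A static parameter set in the spfile takes effect after
# the instances are restarted.
sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET max_commit_propagation_delay = 0 SCOPE=SPFILE SID='*';
EOF
```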
B - 32 rtrepos.properties Must Be Replicated to All Nodes
The file resides in the owb/bin/admin path.
B - 33 Moving a Copy of rtrepos.properties to Each Node
- Connecting from node 1 to node 2 to copy the file
- Copying the file to node 2
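With nonshared local disks, the copy from node 1 to node 2 can be scripted. A sketch assuming passwordless SSH between the nodes (as Cluster Installation already requires); the OWB home path is illustrative and the host name matches the example cluster:

```shell
# Hypothetical example: push rtrepos.properties from node 1 to node 2.
# OWB_HOME is an illustrative path; use the actual OWB home on your nodes.
OWB_HOME=/u01/app/oracle/product/owb
scp "$OWB_HOME/owb/bin/admin/rtrepos.properties" \
    oracle@stbde04:"$OWB_HOME/owb/bin/admin/rtrepos.properties"
```

Repeat for each additional node, and rerun the copy whenever the Repository Assistant regenerates the file.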
B - 34 OWB RAC Locations Use Net Service Names
- Define each OWB location using a net service name.
- Log in using a net service name for RAC protection when running jobs.
B - 35 Sample TNSNAMES.ORA File

SN_OWBRAC2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde04-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde03-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = owbrac.us.oracle.com)
      (INSTANCE_NAME = owbrac2)
    )
  )

SN_OWBRAC1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde03-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = stbde04-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = owbrac.us.oracle.com)
      (INSTANCE_NAME = owbrac1)
    )
  )

Place the file in these three folders:
- ORACLE_HOME/network/admin
- OWB_INSTALLED_HOME/network/admin (for OWB Import)
- OWB_INSTALLED_HOME/owb/network/admin
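The replication into all three folders can be scripted so the copies never drift apart. A minimal sketch using throwaway /tmp directories to stand in for ORACLE_HOME and OWB_INSTALLED_HOME (the real homes vary by site, and the file content here is a placeholder):

```shell
# Stand-in homes; on a real node these are the actual Oracle and OWB homes.
ORACLE_HOME=/tmp/demo_oracle_home
OWB_HOME=/tmp/demo_owb_home

# The master copy lives in ORACLE_HOME/network/admin.
mkdir -p "$ORACLE_HOME/network/admin"
printf 'SN_OWBRAC1 = (DESCRIPTION = ...)\n' \
  > "$ORACLE_HOME/network/admin/tnsnames.ora"

# Replicate it to the two OWB locations that also read it.
for dir in "$OWB_HOME/network/admin" "$OWB_HOME/owb/network/admin"; do
  mkdir -p "$dir"
  cp "$ORACLE_HOME/network/admin/tnsnames.ora" "$dir/"
done
```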
B - 36 RAC Deployment
Locations are defined using net service names, not host:port:service.
(Diagram: the design repository and control center within a single logical instance)
B - 37 RAC Deployment: Alternate Node
(Diagram: the control center running on an alternate node of the single logical instance)
B - 38 Lesson Progress Report (repeats the nine-step plan from slide B - 11)
B - 39 Logging In to OWB Browser
B - 40 Select the Service Node Report
B - 41 Service Node Report Shows the Status of Nodes
B - 42 Disabling a Node
Disable node 1 by clearing the Enabled check box and clicking the Update Node Details button.
B - 43 Enabling a Node
Click Refresh. Node 2 is now the active node.
B - 44 Lesson Progress Report (repeats the nine-step plan from slide B - 11)
B - 45 Useful Diagnostics for OWB RAC Problems
- The main diagnostic is service_doctor.sql.
- show_service.sql shows whether the Control Center Service is available (running) or not available.
  – You can determine the same by using the Service Node Report; if the node is marked as enabled, you know that the service is available.
- In SQL*Plus, select from user_jobs.
  – DBMS_JOB checks every six minutes on RAC.
  – It identifies the node on which each job is running.
- owb_collect.sql gathers system information on the database and repository.
- Check for errors in defining and replicating tnsnames.ora.
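A sketch of the user_jobs check from SQL*Plus, assuming a connection as the repository owner through one of the node net service names (the user name and password here are illustrative, not part of the original material):

```shell
# Hypothetical session: list the repository's scheduled jobs, the instance
# each job last ran on, and when it last ran. Requires a live repository.
sqlplus -s owb_owner/password@SN_OWBRAC1 <<'EOF'
SELECT job, what, instance, last_date
FROM   user_jobs;
EOF
```

The INSTANCE column is what ties each job to a RAC node, which is the detail this diagnostic is after.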
B - 46 Using SQL*Plus Scripts to Test Availability of the Control Center Service
- Stopping the service > not available
- Starting the service > available
(Screenshots show the host name of node 2.)
B - 47 Result of Not Replicating rtrepos.properties to a Node
Hint: Remember to replicate the rtrepos.properties file to all RAC nodes every time you create a repository by using the Repository Assistant.
B - 48 Using OWB With or Without a Control Center Service
- You can execute OWB mappings with or without a Control Center Service (CCS).
- An RDBMS-only installation on RAC, without a CCS, is possible.
- With a CCS, you can:
  – Use Control Center Manager
  – Use process flows
  – Run PL/SQL or SQL*Loader mappings
  – Have transparent failover and auditing support
- Without a CCS, you can:
  – Run only PL/SQL mappings
  – Only save generated mapping code to a file and execute it later
  – Still have transparent failover and auditing support
B - 49 Further Study of RAC
For in-depth study of RAC, consider these Oracle University courses:
- Oracle Database 10g: Real Application Clusters (D17276GC10)
- Oracle 10g Database: RAC Deployment Workshop (D44424GC10)
- Oracle Database 10g: RAC Basic Concepts and Architecture Seminar (D44422GC10), an in-class one-day course
- Oracle Database 11g: RAC Administration (D50311GC11)
- Oracle Database 11g: RAC Overview and Architecture Seminar (D53954GC10)
B - 50 Summary
In this appendix, you should have become familiar with:
- Devising a plan for installing and configuring OWB in your RAC environment
- Using Oracle Universal Installer and the OWB Repository Assistant to install the OWB repository and register it on all cluster nodes
- Replicating files among nodes as necessary
- Changing a database configuration parameter to enable nodes to synchronize more quickly
- Using the Repository Browser to monitor node activity and enable or disable nodes
- Locating log files and collecting other information to troubleshoot node failures and monitor recovery