Presentation on theme: "ITM (FP2) Overview Presented by: Fran Martinez Tivoli L2 Support"— Presentation transcript:

1 ITM 6.2.2 (FP2) Overview Presented by: Fran Martinez Tivoli L2 Support

2 New Features for IBM Tivoli Storage Manager 5.3
Agenda
- ITM Architecture
- How ITM works – quick summary
- Real Time Data Collection
- Historical Data Collection
- What's new?
  - TEMS Self-Monitoring
  - Proxy Agent Services (watchdogging)
  - Autonomous Mode
  - Granular Warehousing
  - TEP Enhancements: Situation Overrides, dynamic thresholds enhancements, Dynamic Logical Views, minor TEP enhancements
*Troubleshooting tips available by topic

3 ITM Architecture
ITM Architecture
[Architecture diagram: Hub TEMS and Remote TEMS, Tivoli Data Warehouse (TDW) with Warehouse Proxy, TEPS and TEP console, out-of-the-box monitors, agentless monitoring, Universal Agent data providers (file, socket, API, SNMP, post, ODBC, script, HTTP), SQL interface, SOAP, CLI, 3rd-party reporting (TCR), and event integration with TEC, TBSM, Omnibus, and TSLA.]
Tivoli Enterprise Monitoring Server (TEMS): central repository for monitoring configuration and for data and alerts collected from the agents.
Tivoli Enterprise Portal Server (TEPS): user interface layer.
Tivoli Enterprise Monitoring Agents: the data collectors, including ITCAM agents.
Autonomously configured agent: local configuration data consists of a private situation and history configuration XML file, plus an SNMP trap configuration XML file for forwarding private situations.

4 High Level Use Cases for ITM v6.2.2+
High Level Use Cases for ITM v6.2.2+
The ITM solution set is expanded with this release; it more completely covers the following two solution sets:
Fault Management Solutions:
- Exception reporting
- Eventing to an event consolidation tool
Performance Management Solutions:
- Performance trending
- Root cause analysis
- Performance/resource consumption reporting
[Diagram: autonomous, connected, and agentless agent modes mapped against the Fault Management and Performance Management solution sets.]

5 How it works – quick summary

6 Real Time Data Collection
Real Time Data Collection
1. TEP: the user refreshes a workspace, which issues the query.
2. TEPS: receives the request and sends it to the TEMS.
3. TEMS: receives the request, prepares the report request, and connects to the agent to issue the data request.
4. TEMA: data collection is started at the agent and the data are sent back to the TEMS.
5. TEMS: processes the agent data and sends it to the TEPS.
6. TEPS: receives the query output data and sends it to the requesting TEP clients.
7. TEP: shows the requested data.
Real-time and situation data collection are very similar. Real-time data collection is triggered by user actions on the TEP (e.g. refreshing or opening a workspace), whereas situation data collection is triggered by the Situation Interval in the situation definition.

7 Historical Data Collection
Historical Data Collection
Basically the same process as real-time data collection, but the collection of historical data is driven by internal situations named UADVISOR_xxx, one per attribute group. Short-term data (the last 24 hours) are gathered from binary files on the TEMA (or TEMS), whereas older data are gathered from the Tivoli Data Warehouse.
1. TEP: the user configures and starts historical data collection, which sends a START request for the UADVISOR internal situations.
2. TEPS: receives the request and sends it to the TEMS.
3. TEMS: receives the request, prepares the report request, and connects to the agent to issue the data request.
4. TEMA: data collection is started and the data are sent back to the TEMS; short-term historical data are read from the historical binary files.
5. TEMS: processes the agent data and sends it to the TEPS; long-term historical data are read from the data warehouse.
6. TEPS: receives the query output data and sends it to the requesting TEP clients.
7. TEP: shows the requested data in the historical workspace.

8 RealTime and Historical Data Collection Troubleshooting
RealTime and Historical Data Collection Troubleshooting
Agent: KBB_RAS1=ERROR (UNIT:kraaadspt ALL) (UNIT:k<pc> ALL) (UNIT:kra<pc> ALL) in <pc>.ini (UNIX/Linux) or K<pc>ENV (Windows). Note: <pc> is the product code of the agent; e.g. the MQ agent has mq.ini or KMQENV.
TEMS: KBB_RAS1=ERROR (UNIT:kpx ALL) (UNIT:kdssqprs INP) (UNIT:kdsrqc1 INP) and IRA_DUMP_DATA=Y in <hostname>_ms_<nnnn>.config (UNIX/Linux) or KBBENV (Windows).
TEPS: KBB_RAS1=ERROR (UNIT:ctdatabus IN,ER) (UNIT:ctsql IN,ER) in cq.ini (UNIX/Linux) or KFWENV (Windows).
- Add (UNIT:cthistory ALL) for historical data collection analysis.
- Add (UNIT:kv4 ALL) for situation event analysis (tracks situation events from TEMS to TEPS).
For real-time data collection, check the log files for the proper table (attribute group). For historical data collection, check the log files for the proper UADVISOR situations and READHIST. For situation data collection, check the log files for the proper situation name.
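As a concrete sketch, the settings above could be laid out per component as follows. The MQ agent is used as an illustrative product code (the slide's own example); apply only the stanza for the component being debugged, each in its own file:

```ini
# Agent -- mq.ini (UNIX/Linux) or KMQENV (Windows); "mq" is the illustrative product code
KBB_RAS1=ERROR (UNIT:kraaadspt ALL) (UNIT:kmq ALL) (UNIT:kramq ALL)

# TEMS -- <hostname>_ms_<nnnn>.config (UNIX/Linux) or KBBENV (Windows)
KBB_RAS1=ERROR (UNIT:kpx ALL) (UNIT:kdssqprs INP) (UNIT:kdsrqc1 INP)
IRA_DUMP_DATA=Y

# TEPS -- cq.ini (UNIX/Linux) or KFWENV (Windows), with the optional units enabled
KBB_RAS1=ERROR (UNIT:ctdatabus IN,ER) (UNIT:ctsql IN,ER) (UNIT:cthistory ALL) (UNIT:kv4 ALL)
```

Recycle the component after editing so the new trace settings take effect.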

9 What's new?

10 New Features – Overview TEMS Self-Monitoring

11 TEMS Self-Monitoring

12 TEMS Self-Monitoring: Workspaces Overview
TEMS Self-Monitoring: Workspaces Overview

Workspace: Manage Tivoli Enterprise Monitoring Servers
Description: Topological view of all TEMS servers in the Enterprise.
Notes/Restrictions: Navigation to the other TEMS self-monitoring workspaces MUST start from this workspace; it is the navigation point for all other TEMS management workspaces.

Workspace: Manage Tivoli Enterprise Monitoring Servers - Situation Status
Description: Lists running situations on each TEMS. Provides summarized and detailed views of the number of active, cleared, and error situations.

Workspace: Manage Tivoli Enterprise Monitoring Servers - System Information
Description: Detailed information about the TEMS server process and TEMS configuration (environment variables used).
Notes/Restrictions: Workspace not supported for TEMS versions prior to

13 TEMS Self-Monitoring: Workspaces Overview
TEMS Self-Monitoring: Workspaces Overview

Workspace: Manage Tivoli Enterprise Monitoring Servers - Protocols
Description: Configuration of the Global and Local Location Brokers.
Notes/Restrictions: Workspace not supported for TEMS versions prior to
Global Location Broker information is only available for the Hub.

Workspace: Manage Tivoli Enterprise Monitoring Servers - Installed Catalogs
Description: One-to-one comparison between application catalogs at the Hub and a Remote TEMS.
Notes/Restrictions: Supports comparison with the mirror TEMS. Reports catalog out-of-sync conditions.

Workspace: Manage Tivoli Enterprise Monitoring Servers - Enterprise Catalogs
Description: Compares the Hub application catalogs to every RTEMS in the Enterprise.
Notes/Restrictions: Only selectable from a Hub link.

Protocols: the Communication Protocols workspaces show simplified views of location brokers. Location brokers are directories of RPC services. There are two types of brokers:
- Local location broker (LLB): contains information about the local host. There is one LLB at each TEMS.
- Global location broker (GLB): contains information for the overall environment. There is one GLB at each hub; if FTO is enabled, there is a GLB on the primary as well as the secondary hub.
There are 5 types of service entries within ITM:
- GLB: appears only in the local location broker; points to the global location broker.
- TEMS: in the LLB there are entries only for the local TEMS; in the GLB, all TEMS should have at least one entry.
- HUB: appears only in the GLB and the hub's LLB; points to the current hub.
- EIB: similar to type HUB, but these are no longer used and can be safely ignored.
- WAREHOUSE: appears in both the GLB and LLB; points to the Warehouse Proxy agents.
These two workspaces are useful for debugging connection issues. For instance, when the hub has problems reaching an RTEMS for distributed requests, inspect the global location broker to see whether the RTEMS has registered any TEMS entries and whether those entries have correct addresses.
Warning: the format of the information stored inside the location brokers can change at any time, so it should be used only as a troubleshooting aid.
Catalogs: the Hub should have the NEWEST catalogs for ALL supported applications in the environment (unless you are in the middle of an upgrade). The Mirror MUST match the Hub. RTEMS configuration can vary and does NOT need to match the Hub.

14 TEMS Self-Monitoring Troubleshooting
TEMS Self-Monitoring Troubleshooting
Sometimes users run in Workspace Administration mode, which enables direct access to link-only workspaces. Link-only workspaces fail when directly accessed from the Enterprise level.
Failure reason: the TEMS name is not specified. The TEMS name is normally passed as part of the context on the link from the preceding workspace; since the workspace was accessed directly, no TEMS name exists in the context.
The TEMS log will show a remote system access failure:
(4A54FE C:kdspmop1.c,1052,"BuildSocketList") Remote CMS: node "$_TEMS$" not LB-registered
["$_TEMS$" is the variable name in the query]
Solution: ALWAYS use the "Manage Tivoli Enterprise Monitoring Servers" workspace to access the other TEMS self-monitoring workspaces.

15 New Features – Overview Proxy Agent Services

16 Functions overview
Functions overview
Use Agent Management Services to monitor the availability of agents and respond automatically (for example with a restart) if an agent becomes unhealthy or exits unexpectedly.
- Embeds watchdogging and auto-restart functions into the OS Monitoring Agents and provides a separate process (physical watchdog) specifically for monitoring the OS agent itself.
- Provides an Agent Management Services workspace under Windows, Linux, and UNIX managed systems.
- Provides Take Actions that stop/start/recycle an agent, reset the daily restart count, and enable/disable watchdogging for a given agent.
- Provides an event-driven situation for critical availability status problems.
- Availability policies in the form of XML files provided by each framework or application agent install package (instance-based availability policies for multi-instance agents).
NOTE: features new in 6.2.2 FP2 were shown in bold on the original slide.

17 Logical Functional Flow Diagram
Logical Functional Flow Diagram
[Diagram: the PAS component inside the OS agent initializes from CAP files in the CAP directory, persists state in ./CAP/pas.dat (version, agent's managed state, running state, daily restart count, process ID), and performs agent status/health monitoring, alert processing, command processing, Take Action handling, and message distribution, writing to the operations log.]

CAP files: the monitoring behavior of AMS towards a particular agent is governed by settings in an XML-based policy file, referred to as a common agent package (CAP) file.

OS agent configuration (allows PAS to locate CAP files in the context of the OS agent's environment):
- KCA_CMD_TIMEOUT=30 — default maximum time, in seconds, that PAS waits for stop/start/health-check commands to complete. Start commands are retried 3 times, health checks once; stop commands are not retried. If they do not complete, events are issued.
- KCA_DISCOVERY_INTERVAL=30 — how frequently to check for newly started managed instances.
- KCA_DISCOVERY_ITM_INTERVAL=10 (minutes) — how frequently to check for newly configured but not yet started managed instances.
- KCA_DISCOVERY_CAP_INTERVAL=30 — how frequently to check for added/removed CAP files in the KCA_CAP_DIR directories.
- KCA_CACHE_LIMIT=24 — how long, in hours, to keep alerts in the alert workspace.

CAP file elements (the order of the elements is important; review kwgcap.xsd for a formal definition of the CAP file schema):
- <checkFrequency> — the length of time between availability checks of the managed agent by Agent Management Services. If system load is heavy, consider increasing the checkFrequency interval along with the KCA_CMD_TIMEOUT agent environment variable. Enter the frequency value in multiples of 5 seconds, up to a maximum of 3600 seconds (1 hour). Default: 30.
- <cpuThreshold> (health check) — the maximum average percentage of CPU time that the agent process can consume over a time interval equal to checkFrequency seconds before being deemed unhealthy and restarted by Agent Management Services. Enter the threshold as a positive integer from 1 to 100.
- <memoryThreshold> (health check) — the maximum average amount of working-set memory that the agent process can consume over a time interval equal to checkFrequency seconds before being deemed unhealthy and restarted. Enter the threshold value followed by the unit of measurement: KB, MB, or GB. Example: 50 MB.
- <managerType> — the entity that performs availability monitoring of the agent. Enter an enumerated value: NotManaged or ProxyAgentServices. Default: NotManaged.
- <maxRestarts> — the number of times per day an abnormally stopped or unhealthy agent should be restarted. Agents that do not need to be kept running can have a value of 0. Enter a positive integer. Default: 4.
- <subagent id> — edit this value only if you are creating an instance-specific CAP file for a particular agent. For example, to create a CAP file specifically for a set of DB2 agent instances where kud_default.xml has subagent id="kudagent", set it to something like <subagent id="kud_instance">. The <agentName> value for the agent's original CAP file and its instance-specific CAP files should match. Enter a string value for the ID.
- <instance> — use this element to provide the specific instance names that the target CAP file policies apply to. It must follow the <agentName> element in the CAP file. For example, to specify that an instance-specific CAP file applies to two instances of the Tivoli Monitoring DB2 agent, named test1 and test2, enter:
  <subagent id="kud_instance"> <agentName>ITCAM Agent for DB2</agentName> <instance> <name>test1</name> <name>test2</name> </instance>
  Enter a string value for each instance name within a <name></name> tag pair.
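Putting the elements above together, an instance-specific CAP fragment might look like the sketch below. This is assembled only from the elements the slide describes: the exact element order, nesting, and any required elements not mentioned here must be verified against kwgcap.xsd before use, and the threshold values are made-up illustrations.

```xml
<!-- Hypothetical instance-specific CAP fragment for the DB2 agent.
     Element order/nesting is a sketch; validate against kwgcap.xsd. -->
<subagent id="kud_instance">
  <agentName>ITCAM Agent for DB2</agentName>
  <instance>
    <name>test1</name>
    <name>test2</name>
  </instance>
  <managerType>ProxyAgentServices</managerType>  <!-- let PAS manage these instances -->
  <checkFrequency>30</checkFrequency>            <!-- seconds between availability checks (default) -->
  <cpuThreshold>95</cpuThreshold>                <!-- % CPU before deemed unhealthy (assumed value) -->
  <memoryThreshold>50 MB</memoryThreshold>       <!-- working-set limit, from the slide's example -->
  <maxRestarts>4</maxRestarts>                   <!-- restarts per day (default) -->
</subagent>
```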

18 Proxy Agent Services Troubleshooting (1)
Proxy Agent Services Troubleshooting (1)
The PAS component running inside the OS agent logs messages to the standard OS agent RAS1 log file:
- Linux: <hostname>_lz_klzagent_<timestamp>-<counter>.log
- UNIX: <hostname>_ux_kuxagent_<timestamp>-<counter>.log
- Windows: <hostname>_nt_kntagent_<timestamp>-<counter>.log
The PAS component running inside the watchdog process kcawd logs messages to its own RAS1 log file:
- Linux: <hostname>_lz_kcawd_<timestamp>-<counter>.log
- UNIX: <hostname>_ux_kcawd_<timestamp>-<counter>.log
- Windows: <hostname>_nt_kcawd_<timestamp>-<counter>.log
Add (UNIT:KCA ALL) to the OS agent's or the watchdog's KBB_RAS1 parameter:
- Windows: KNTENV and/or KCAENV, located in TMAITM6
- UNIX/Linux: only lz.ini and ux.ini, for both the OS agent and watchdog tracing
PAS writes operational messages to the Agent Operations Log. All PAS message IDs have the format KNTAMSxxx, KUXAMSxxx, or KLZAMSxxx.

19 Proxy Agent Services Troubleshooting (2)
Proxy Agent Services Troubleshooting (2)
CAP file parsing problems:
- Symptom: agent data does not appear in the workspace.
- Look for: errors in kcaxml.cpp. Message examples:
  "Error 10 parsing xml file C:\ibm\itm\tmaitm6\CAP\kca.xml"
  "Error parsing buffer at byte 200, line 5 and column 10"
- Possible problem: a syntax error in the CAP file at the given location.
Trouble restarting an agent:
- Symptom: the managed agent shows stopped status.
- Look for: errors in kcacmdw.cpp or kcacmdlnx.cpp. Message examples:
  "StartService failed. Error code 5"
  "Command did not finish within timeout - errno = 5"
- Possible problem: you may need to increase the timeout parameter in the OS agent configuration file (KCA_CMD_TIMEOUT=30), the maximum number of seconds to wait to confirm that an agent has started properly.

20 New Features – Overview Agent Autonomy

21 Agent Autonomy in 6.2.2 streamline

22 Levels of Agent Autonomy
Levels of Agent Autonomy
Connected: the default behavior for agents at 6.2 FP1 and above.
- The agent starts without requiring a connection to the monitoring server, collects and stores data while it is disconnected, and then sends the data to the server after connecting.
- Eventing is configurable: send through the ITM infrastructure or through direct SNMP eventing.
- Configuration is cached at the agent and updated by the TEMS when connected.
Full Autonomy: optional behavior for agents and above.
- The agent starts without requiring a connection to the monitoring server, collects and stores short-term historical data while it is disconnected, and locally evaluates situations.
- Eventing: sends event data directly from the agent.
- Configuration is stored in a local XML file at the agent; the agent can periodically poll an SNMP source for a new configuration file.

23 Levels of Agent Autonomy - details
Levels of Agent Autonomy - details
ITM: event caching in memory when TEMS connectivity is lost.
ITM: full autonomous operation without a TEMS:
- Start monitoring situations immediately
- Event data persist over agent restart
- Private situations
- SNMPv1 and SNMPv2 traps, and SNMPv3 Inform (USM)
- Agent Service Interface
- Situation statistics data collection
ITM FP2: Centralized Configuration

24 Autonomous mode capabilities
Autonomous mode capabilities
In addition to the built-in autonomous capability of Tivoli Enterprise Monitoring Agents and Tivoli System Monitor Agents, you can configure special XML files that require no connection to a Tivoli Enterprise Monitoring Server. With these XML files you can define and run situations locally, emit situation events as SNMP alerts or EIF events to a receiver, collect and save historical data locally, and use Centralized Configuration to distribute XML file updates to selected monitoring agents.

Tivoli System Monitor Agent: an OS agent that never connects to a monitoring server. The autonomous version of the agent installation is faster and has a smaller installed footprint. Local XML configuration files that define functions such as private situations and SNMP alerts are processed during agent startup.

Private situations: enterprise monitoring agents and system monitor agents can use locally defined situations to operate fully autonomously. These private situations are created in a private situation definition XML file. Private situation events come directly from the monitoring agent.

SNMP alerts and EIF events: ITM V6.2.2 enables you to configure SNMP alerts to be sent for situation events to an SNMP receiver directly from the agent, without first passing the event through the monitoring server. Likewise, with IBM Tivoli Monitoring V Fix Pack 1 (and later), you can create an EIF event configuration file for emitting private situation events to an EIF receiver.
- Enterprise situations: you can create a trap configuration XML file that enables an agent to emit SNMP alerts directly to the event receiver with no routing through the monitoring server. The agent must connect to the monitoring server at least once to receive the enterprise situation definitions. Place an SNMP trap configuration file in the agent installation and restart the agent to enable this function.
- Private situations: agents can also send SNMP alerts for private situations directly to a receiver such as the Netcool/OMNIbus SNMP Probe, or emit EIF events for private situations to an EIF receiver such as the Tivoli Enterprise Console event server or the Netcool/OMNIbus Probe for Tivoli EIF.
- Important: if you forward enterprise situation events to the Netcool/OMNIbus Probe for Tivoli EIF and also emit SNMP alerts for the same enterprise situation events to the Netcool/OMNIbus SNMP Probe, the EIF event and the SNMP alert differ in format and in the data they contain. An event for a situation sent to both probes connected to the same Netcool/OMNIbus ObjectServer will not be detected as the same event by OMNIbus deduplication, resulting in duplicate entries for the same event that are treated individually. Normally this is not desirable and might be difficult to manage.

Private history: just as you can create private situations for the locally installed agents, you can configure private history for collecting short-term historical data in the same private situation configuration file using the HISTORY element. The resulting private history binary files can be viewed through the Agent Service Interface (default retention 24 hours, configurable; use krarloff to roll the data off into text files).

Enterprise situation overrides: you can configure situation overrides for a locally installed enterprise monitoring agent by using a pc_thresholds.xml configuration file (where pc is the two-character product code). You can manage the overrides at the agent manually or with Centralized Configuration.

Agent Service Interface: the IBM Tivoli Monitoring Service Index utility provides links to the Agent Service Interface for each locally installed monitoring agent. After logging in with an operating system user, you can select one of these reports: agent information, situation, history, or queries. You can also perform configuration downloads or recycle situations.

Centralized Configuration: use Centralized Configuration to maintain monitoring agent configuration XML files at a central location; agents pull them from the central configuration server at intervals (default: every 60 minutes) or on demand. Each participating agent has its own configuration load list XML file that tells it where to connect to get the latest updates of the specified configuration files.

Log monitoring and eventing: build one or more log file agents with the Agent Builder; they have essentially all the capabilities of other autonomous agents.

25 Autonomous mode - details
Autonomous mode - details
Use the agent environment file (pc.ini or k<pc>env) to control the autonomous behavior of agents when disconnected from the Tivoli Enterprise Monitoring Server.
Enable autonomous mode:
- IRA_AUTONOMOUS_MODE=Y (enabled by default when CT_CMSLIST is unspecified)
Important parameters and files that determine the level of autonomy:
- IRA_EVENT_EXPORT_EIF=Y
- IRA_EVENT_EXPORT_SNMP_TRAP=Y/N
- IRA_LOCALCONFIG_DIR — the default local configuration directory path containing locally customized configuration files such as private situation, EIF event configuration, and SNMP trap configuration files.
- IRA_PRIVATE_SITUATION_CONFIG — the fully qualified private situation configuration file name. During agent initialization, the agent checks for the private situation configuration file <ITMHome>/localconfig/pc/pc_situations.xml.
- CTIRA_THRESHOLDS — the fully qualified name of the XML-based adaptive (dynamic) threshold override file. By default, the agent checks whether a <ITMHome>/localconfig/pc/pc_thresholds.xml file exists (where pc is the agent product code).
Determine the agent use cases so you can decide on the level of autonomy to configure:
- Connected to a TEMS
- Fully autonomous
- A mixture of agents (some connected, some fully autonomous)
If only fully autonomous agents are used, they can run without the rest of the ITM infrastructure.
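Gathering the parameters above, the environment file of a fully autonomous UNIX/Linux agent might contain something like the following sketch. The parameter names and default file locations come from the slide; the ITM home path is illustrative, and the last three lines simply spell out the defaults explicitly:

```ini
# pc.ini / k<pc>env -- illustrative autonomy settings (pc = agent product code).
# CT_CMSLIST is left unset, so the agent does not try to connect to a TEMS.
IRA_AUTONOMOUS_MODE=Y
IRA_EVENT_EXPORT_EIF=Y
IRA_EVENT_EXPORT_SNMP_TRAP=N
# Default locations, shown explicitly for clarity (ITM home path assumed):
IRA_LOCALCONFIG_DIR=/opt/IBM/ITM/localconfig
IRA_PRIVATE_SITUATION_CONFIG=/opt/IBM/ITM/localconfig/pc/pc_situations.xml
CTIRA_THRESHOLDS=/opt/IBM/ITM/localconfig/pc/pc_thresholds.xml
```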

26 Agent Autonomy Troubleshooting (1)
Agent Autonomy Troubleshooting (1)
One of the most common problems is that the agent is not sending events. Possible causes:
- No situations are defined (or not the desired ones).
- The situations specify criteria that have not been met.
- The situations use functions that are not supported by private situations.
- The DISTRIBUTION tag is not correct.
- No trapcnfg/eifcnfg file is provided.
- The destination(s) specified in the trapcnfg/eifcnfg are wrong.

27 Agent Autonomy Troubleshooting (2)
Agent Autonomy Troubleshooting (2)
Agent install logs (latest attempt to install the agent):
- Windows: %CANDLE_HOME%\TMAITM6\logs\Kxxinstall.log
- UNIX/Linux: $CANDLEHOME/logs/Kxxinstall.log
Agent runtime logs (rotating log files):
- Windows: %CANDLE_HOME%\TMAITM6\logs\<hostname>_<pc>[_instance_name]_k<pc>agent_<hex timestamp>*.log
- UNIX/Linux: $CANDLEHOME/logs/<hostname>_<pc>[_instance_name]_k<pc>agent_<hex timestamp>*.log
Debug options:
- IRA_DEBUG_AUTONOMOUS=Y
- IRA_DEBUG_EVENTEXPORT=Y
- IRA_DEBUG_PRIVATE_SITUATION=Y
- IRA_DEBUG_SERVICEAPI=Y

28 New Features – Overview Centralized Configuration

29 Centralized Configuration Implementation
Centralized Configuration Implementation
Centralized Configuration provides the ability to update local configuration files on many monitoring agents, without a connection to a Tivoli Enterprise Monitoring Server, to instruct them how to behave and what level of autonomy to use.
Characteristics:
- The agent performs a one-time configuration download at startup, or continuous periodic update inquiries (default interval: every hour).
- The agent load list defines the configuration characteristics and disposition.
- Well-known agent artifacts such as the Authorization Group Profile, private situation XML, threshold XML, and the load list itself can optionally be activated upon successful download.
Benefits:
- Ensures consistent agent installations.
- Reduces installation support and configuration complexity.
How to activate it (two ways to make an agent use Centralized Configuration):
- Supply a configuration load list.
- If there is no configuration load list, the agent tries to use agent environment parameters at startup to collect a configuration load list from a Centralized Configuration server.
- No configuration load list and no configuration parameters means no Centralized Configuration.
Agent environment parameters used when no local configuration load list is found:
- IRA_CONFIG_SERVER_URL
- IRA_CONFIG_SERVER_USERID
- IRA_CONFIG_SERVER_PASSWORD
- IRA_CONFIG_SERVER_FILE_NAME
- IRA_CONFIG_SERVER_FILE_PATH
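For instance, the fallback parameters might be set like this in the agent environment file. Only the parameter names come from the slide; every value below (host, port, user, file names and path) is a placeholder to show the shape of the configuration:

```ini
# Agent environment file (pc.ini / k<pc>env) -- used only when no local
# configuration load list is found. All values are illustrative placeholders.
IRA_CONFIG_SERVER_URL=http://ccserver.example.com:1920
IRA_CONFIG_SERVER_USERID=itmconfig
IRA_CONFIG_SERVER_PASSWORD=********
IRA_CONFIG_SERVER_FILE_NAME=cnfglist.xml
IRA_CONFIG_SERVER_FILE_PATH=/config
```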

30 Centralized Configuration - details
Centralized Configuration - details
The configuration load list is an XML file that resides in the agent's localconfig directory and is the property of each unique running agent instance.
File dispositions:
- CNFGLIST — configuration load list file
- PVTSIT — private situation configuration XML file
- TRAPCNFG — agent SNMP trap configuration XML file
- THRESHOLD — situation threshold override configuration file
- EIFCNFG — EIF configuration file
- EIFMAP — EIF event mapping file
- UAMETA — Universal Agent application meta files
- PASCAP — PAS (Agent Management Services) CAP file

31 New Features – Overview Granular Historical Data Collection and Warehousing

32 Product Overview – Old GUI

33 Product Overview – New S&P GUI

34 Product Overview – New Collection panel
Product Overview – New Collection panel
- Groups of historical data collections can be created and distributed to single managed systems or to managed system groups.
- The historical collection situation is based on the attribute group shown.
- CLI tacmd commands have been added and/or updated to reflect the new GUI features, e.g. histcreatecollection (new), histconfiguregroups and histstartcollection (modified), etc.

35 Product Overview – New Distribution & Filter tabs
Product Overview – New Distribution & Filter tabs
- When the historical situation is distributed, it is automatically started on the selected managed systems.
- The Filter tab was introduced in ITM 6.2.2 FP2. When filtering is used, only the data matching the condition are stored for historical analysis.

36 Results of filtering in Historical Data Collection
Results of filtering in Historical Data Collection
Benefits:
- Filter out unwanted rows of data
- Less network traffic
- Less wasted disk space
- Improved performance during summarization and pruning
Limits:
- Filtered data is not summarized.
- If the collection has a filter, the Available Managed System list will only contain managed systems that have a version greater than or equal to that version.
- If there is a distribution, only the attributes of a version equal to or lesser than the version of the lowest distributed managed system should be displayed.

37 Granular Warehouse Troubleshooting
Granular Warehouse Troubleshooting
- Compare the values in the GUI to those in the TEMS database:
  kfwsqlclient -e "SELECT SITNAME, PDT FROM O4SRV.TSITDESC"
  Filter values are at the end of the PDT, between three sets of parentheses, e.g. (((ALMEMORY= )))
- Check the TEPS error logs *_cq_*.log and kfwjras*.log; check the TEP error logs kcjras*.log.
- Increase the trace level for CTHistory on the TEPS, e.g. ERROR (UNIT:CTHistory ALL).
- Increase the trace level for HistConfigDialog, HistConfigDistributionPanelHandler, and HistConfigFilterPanelHandler on the TEP, e.g. ERROR (UNIT:HistConfigDialog ALL) (UNIT:HistConfigDistributionPanelHandler ALL) (UNIT:HistConfigFilterPanelHandler ALL).

38 New Features – Overview CLI Enhancements

39 File Transfer Enablement
File Transfer Enablement
- Exploits the KT1 agent-to-agent file transfer facility. It uses the existing RPC protocol, so it can tunnel through ports already open for ITM services.
- Before ITM 6.2.2 it was used only by ITCAM for Transactions agents.
- KT1 code is installed automatically as part of the TEMS/TEMA, and T1 shows up as a component in cinfo output:
  t1  File Transfer Enablement  aix536  Version:
- New ITM CLI commands:
  getfile – get a file from an MSN
  putfile – put a file to an MSN
  executecommand – supports bigger command strings, more robust execution, output collection
- Modified ITM CLI commands: executeaction

40 Troubleshooting
Troubleshooting
TACMD/CLI trace level: ERROR (UNIT:KT1 ALL) (UNIT:KUI ALL)
HUB/RTEMS trace level:
- Endpoint validation issues for executecommand/executeaction: ERROR (UNIT:KT1 ALL) (UNIT:KSHREQ ALL) (UNIT:KDSSQRPS INP)
- getfile/putfile execution problems: ERROR (UNIT:KT1 ALL)
- Try not to set (UNIT:KSH ALL); this usually causes indiscriminate tracing with wrapping logs.
TEMA trace level (getfile/putfile, executecommand/executeaction): ERROR (UNIT:KRACT ALL) (UNIT:KGLEX ALL)

41 New Features – Overview TEP Enhancements

42 Situation Overriding
Situation Overriding
Introduced in ITM 6.2.1, a situation override is created to change the formula thresholds for a subset of the managed systems that the situation is distributed to. An override can be applied immediately or scheduled for specific times. To create a situation override, a situation must be eligible for overriding.
ITM provides a correction to allow situation overrides to work properly when distributed to ITM subnode Managed System Names (MSN). A new XML option originnode="MSN" was added to the <situation> XML tag to identify a unique MSN for each situation entry; the originnode keyword applies to each agent or subnode MSN.
Non-subnode agent (NT agent):
<situation name="test_sit_08" originnode="Primary:host:NT" priority="100" lastupdate=" " objname="test_sit_084BBE698E462A4976" >
<threshold column="PCFREE" value="20" operator="LT"> </threshold>
</situation>
Subnodes (e.g. originnode="SUB1:HOST:MYAPP", originnode="SUB2:HOST:MYAPP"):
<situation name="test_sit_09" originnode="SUB1:HOST:MYAPP" priority="200" lastupdate=" " objname="tematest_sit_094BBE698E462A4976" >
<threshold column="TLONG1" value="8000" operator="GT" > </threshold>
</situation>

43 Dynamic Thresholding Enhancements (1)
Dynamic Thresholding Enhancements (1)
Since ITM 6.2.2 FP2 this is called Visual Baselining / Adaptive Monitoring Modeling. The roadmap up to 6.2.2 FP2:
ITM 6.2.1 introduces the baseline as a situation override, or a situation with dynamic (calendar-based) thresholds. The CLI can be used to calculate baseline (situation override) values from detailed historical data using statistical functions.
ITM 6.2.2 introduces visual baselines: lines or series in a chart used to visually determine when data is outside its historical range or statistical norm, or approaching a situation threshold (visual baselines do not support situation overrides).
ITM 6.2.2 FP2 introduces baseline functions in the TEP that support creation and display of situation overrides.
Now we are able to:
Build adaptive monitoring definitions using Visual Baselining
Create situations from the workspace using Visual Baselining
Apply dynamic thresholding to subnode agents
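To illustrate the idea of deriving a threshold from detailed historical data, the sketch below applies one common statistical function (mean plus k standard deviations). This is not ITM's actual baselining code, and the sample data is hypothetical; it only shows the kind of calculation the CLI's statistical functions perform.

```python
# Illustrative only: derive a situation-override threshold from historical
# samples, flagging values outside their statistical norm. ITM's real
# baselining functions are not reproduced here.
from statistics import mean, stdev

def baseline_threshold(samples, k=2.0):
    """Return mean + k sample standard deviations of the historical data."""
    return mean(samples) + k * stdev(samples)

cpu_history = [31, 28, 35, 30, 29, 33, 32, 27, 34, 31]  # hypothetical % busy
threshold = baseline_threshold(cpu_history)
```

The resulting value would then be fed into a situation override rather than hard-coding a one-size-fits-all threshold.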

44 Dynamic Thresholding Enhancements (2)
Dynamic Thresholding Enhancements (2)
Building adaptive monitoring definitions using Visual Baselining:
Add situation overrides (from the Situation Editor)
Visualize situation overrides
Visualize the threshold, overridden thresholds, and baselines
A calendar makes the monitoring adaptive
Workspace-based situation creation using Visual Baselining:
Create a situation based on values visualized and/or elaborated by functions
Functions to determine thresholds based on statistics
Override and visualize the threshold
Inline calendar added

45 Dynamic Thresholding Enhancements Troubleshooting
Dynamic Thresholding Enhancements Troubleshooting
TEP client:
For problems with the 'Add Historical Baseline' option, use ERROR (UNIT:JCChart ALL) (UNIT:JCLine ALL)
For problems with the 'Model Situation' dialog, use ERROR (UNIT:JCChart ALL) (UNIT:EstablishBaseline ALL)
TEP Server: use ERROR (UNIT:Analytics detail), logged in the eWAS SystemOut.log:
Windows: <ITMHOME>\CNPSJ\profiles\ITMProfiles\logs\ITMServer\SystemOut.log
UNIX: <ITMHOME>/<platform>/iw/profiles/ITMProfile/logs/ITMServer/SystemOut.log
Agent: if a dynamic threshold appears in the agent's override XML file but does not appear to be working properly, set the following trace for the agent: ERROR (UNIT:krathagt ALL) (UNIT:kracaagt ALL) (UNIT:kraacth ALL)
If calendar entries or situation overrides do not appear to be reaching the agent (e.g. are not showing up in the agent's override XML file), check the agent's operations log. Calendar and threshold adds/deletes/updates appear there; entries are made by default, so no tracing levels are necessary.
Ops log file location:
Windows: %CANDLE_HOME%\TMAITM6\logs\<pseudo_MSN>.LG0, e.g. C:\IBM\ITM\TMAITM6\logs\Primary_LEVER_NT.LG0
UNIX: $CANDLEHOME/logs/<MSN>.lg0, e.g. /opt/IBM/ITM/logs/lever:lz.lg0
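When scripting log collection across a mixed environment, the ops-log locations quoted above can be computed rather than hard-coded. The helper below reproduces exactly the two path patterns from the slide (uppercase .LG0 under TMAITM6\logs on Windows, lowercase .lg0 under logs on UNIX); nothing beyond those examples is assumed.

```python
# Sketch: compute the agent operations-log path from the ITM home directory
# and the (pseudo-)MSN, following the locations quoted on the slide.
import ntpath
import posixpath

def ops_log_path(itm_home, msn, windows=False):
    if windows:
        # %CANDLE_HOME%\TMAITM6\logs\<pseudo_MSN>.LG0
        return ntpath.join(itm_home, "TMAITM6", "logs", msn + ".LG0")
    # $CANDLEHOME/logs/<MSN>.lg0
    return posixpath.join(itm_home, "logs", msn + ".lg0")
```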

46
Dynamic Logical Views
Customers using TEP logical views have expressed interest in composing these views from managed system list (MSL) definitions. This feature allows one or more MSLs to be assigned to any user-defined Logical View item (parent). The TEPS then uses these assignments to automatically attach sub-level (child) items to the parent: if one or more MSLs are assigned, their members become the child items of the parent.
Filtering options are supported that specify:
whether the managed system list name is included as a branch-level, aggregate item in the constructed view;
whether system-level items are included in the resulting Logical View (the "host-level" items that normally appear above managed-system-level items in the Physical View);
whether existing structure below the child managed systems is included (typically the "report"-level items that exist below managed systems in the Physical View).
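The construction logic described above can be sketched as follows. The data shapes and MSL names here are hypothetical; the sketch only models the first filtering option (MSL name as an aggregate branch level), not the system-level or report-level filters.

```python
# Sketch of dynamic logical-view construction: attach MSL members as child
# items of a logical-view parent, optionally inserting the MSL name as an
# aggregate branch level. Names and structures are hypothetical.

def build_children(msls, include_msl_branch=True):
    """msls: dict mapping MSL name -> list of managed system names.
    Returns the child subtree for one user-defined Logical View parent."""
    if include_msl_branch:
        # MSL name becomes a branch-level, aggregate item above its members
        return {name: list(members) for name, members in msls.items()}
    # Otherwise the members attach directly under the parent item
    flat = []
    for members in msls.values():
        flat.extend(members)
    return flat

msls = {"WindowsServers": ["Primary:HOSTA:NT", "Primary:HOSTB:NT"]}
tree = build_children(msls)         # MSL name kept as a branch level
flat = build_children(msls, False)  # members attached directly
```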

47 Minor TEP Enhancements
Minor TEP Enhancements
UI look and feel: Tivoli is focused on providing a common user experience across its portfolio. This overall effort is designed to promote a portfolio-wide "Tivoli look and feel". Since ITM 6.2.2, the TEP has adopted a new set of icons, colors, borders, and tab styles that adhere to this evolving set of Tivoli UI guidelines.
Workspace Gallery: makes it easier to determine what data (in the form of workspaces) is available from any selected navigator item.

48
Questions & Answers

49
Thank You
Merci (French), Grazie (Italian), Gracias (Spanish), Obrigado (Brazilian Portuguese), Danke (German), Dank U (Dutch), Tak (Danish), Cześć (Polish)
(The original slide also showed thanks in Japanese, Russian, Arabic, Traditional and Simplified Chinese, Hindi, Tamil, and Thai.)

50
BACKUP SLIDES

51
ITM Architecture

