
Keeping Up with z/OS’ Alphabet Soup
Darrell Faulkner, Development Manager, NeuMICS
Computer Associates

Objectives
- Integrated Coupling Facility (ICF) and Integrated Facility for LINUX (IFL)
- PR/SM and LPs
- Intelligent Resource Director (IRD)
- IBM License Manager (ILM)
- Capacity Upgrade on Demand (CUoD)
- Conclusions

Acronyms
- CF: Coupling Facility
- CP: Central Processor
- CPC: Central Processor Complex
- ICF: Integrated Coupling Facility
- IFL: Integrated Facility for LINUX
- PR/SM: Processor Resource/Systems Manager
- HMC: Hardware Management Console
- LLIC: LPAR Licensed Internal Code
- Logical CP: Logical Processor
- LP: Logical Partition
- LPAR: Logical Partitioning (LPAR mode)
- PU: Processor Unit

ICF and IFL
- Beginning with some IBM G5 processor models, PUs (Processing Units) can be configured as non-general-purpose processors
- Benefit: does not change the model number, hence no software licensing cost increase
ICF = Integrated Coupling Facility  IFL = Integrated Facility for LINUX

z900 Models 2064-( )
All contain a 12-PU MultiChip Module (MCM). Each PU can be configured as a CP (Central/General Processor), an ICF (Integrated Coupling Facility), an IFL (Integrated Facility for LINUX), or a SAP (System Assist Processor).
[Diagram: CPC memory and the 12 PUs]

z900 Model 105: 5 PUs configured as CPs (Central/General Processors). CPs defined = model number.
[Diagram: CPC memory and a 12-PU MCM with 5 CPs, a SAP, and unconfigured PUs]

z900 Model 105: 5 PUs configured as CPs; CPs defined = model number. ICFs, IFLs, and SAPs do not incur software charges. One PU is always left unconfigured for a "spare".
[Diagram: CPC memory and a 12-PU MCM with CPs, an ICF, an IFL, a SAP, and a spare PU]

ICFs and IFLs (IBM SMF Type 70 subtype 1 record, CPU Identification Section)
There is one section per EBCDIC name that identifies a CPU type. 'CP' and 'ICF', with appropriate trailing blanks, are examples of EBCDIC names describing a General Purpose CPU and an Internal Coupling Facility CPU, respectively.
Field: Name = SMF70CIN, Offsets = 0 0, Length = 16, Format = EBCDIC, Description = CPU-identification Name
As of z/OS Version 1 Release 2, both IFLs and ICFs are represented by 'ICF' in the SMF type 70 CPU ID Section.
CP = Central Processor  ICF = Integrated Coupling Facility  IFL = Integrated Facility for LINUX
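
Because both ICFs and IFLs report as 'ICF' in SMF70CIN, post-processing code cannot tell them apart from the type 70 record alone. A minimal sketch of interpreting the field, assuming the 16-byte blank-padded EBCDIC value has already been extracted from the record (the helper name and codepage choice are illustrative, not from any product):

```python
# Illustrative only: not an official SMF parser.
CPU_TYPE_BY_NAME = {
    "CP": "General purpose processor (incurs software charges)",
    "ICF": "ICF or IFL (as of z/OS 1.2 both report as 'ICF'; no software charges)",
}

def classify_cpu_section(smf70cin_bytes: bytes) -> str:
    """smf70cin_bytes: the 16-byte, blank-padded EBCDIC SMF70CIN field."""
    name = smf70cin_bytes.decode("cp500").rstrip()  # cp500 is one common EBCDIC codepage
    return CPU_TYPE_BY_NAME.get(name, f"Unknown CPU type '{name}'")

# EBCDIC 'ICF' padded with blanks (0x40)
print(classify_cpu_section(b"\xc9\xc3\xc6" + b"\x40" * 13))
```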

PR/SM LPAR
- Allows up to 15 images (LPs) per CPC
- Different control programs on images (z/OS, z/VM, Linux, CFCC, etc.)
- Each LP (image) is assigned CPC resources: processors (CPs, referred to as "logical CPs"), memory, channels
- Each LP is either DEDICATED or SHARED
LP = Logical Partition  CPC = Central Processor Complex  Logical CP = Logical Processor

PR/SM Benefits
- Protection/isolation of business-critical applications from non-critical workloads
- Isolation of test operating systems
- Workload balancing
- Different operating systems on the same CPs
- Ability to guarantee a minimum percent of shared CP resource to each partition
- More "white space": the ability to handle spikes and unpredictable demand

LP Configuration Decisions
- LP definitions entered on the HMC:
  - Dedicated or not-dedicated (shared)
  - Logical processors (initial, reserved)
  - Weight (initial, min, max)
  - Capped or not-capped
  - CPC memory allocation
  - I/O channel distribution/configuration
  - More
CPC = Central Processor Complex  LP = Logical Partition  HMC = Hardware Management Console

Dedicated LPs
- A dedicated LP's logical CPs are permanently assigned to specific CPC physical CPs
- Less LPAR overhead (than shared LPs)
- Dedicated LPs waste physical (CPC) processor cycles unless 100% busy: when less than 100% busy, the physical CPs assigned to dedicated LPs are IDLE
[Screenshot: HMC Image Profile for ZOS1]
LP = Logical Partition  Logical CP = Logical Processor  CPC = Central Processor Complex

LPAR Mode - Dedicated
ZOS1 image: 3 dedicated logical processors. ZOS2 image: 2 dedicated logical processors. Same problem as basic mode: unused cycles are wasted.
[Diagram: CPC memory, PR/SM LPAR LIC, physical CPs dedicated to ZOS1 and ZOS2]
LCP = Logical CP = Logical Processor

Shared LPs
[Screenshot: HMC Image Profile for ZOS1]

Shared LPs
[Screenshot: HMC Image Profile for ZOS2]

LPAR Mode - Shared
ZOS1 image: 5 logical CPs, weight 400. ZOS2 image: 3 logical CPs, weight 100.
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]
LCP = Logical CP = Logical Processor

LPAR Dispatching: What does LLIC (LPAR Licensed Internal Code) do?
- LCPs are considered dispatchable units of work
- LCPs are placed on a ready queue
- LLIC executes on a physical CP: it selects a ready LCP and dispatches it onto a real CP
- z/OS executes on the physical CP until the timeslice expires ( milliseconds) or until z/OS enters a wait state
- The environment is saved and LLIC executes on the freed CP
- If the LCP is still ready (used its timeslice), it is placed back on the ready queue
LCP = Logical CP = Logical Processor  LLIC = LPAR Licensed Internal Code  CP = Central Processor
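
A conceptual sketch of the dispatching cycle described above. This is not PR/SM's actual implementation, only the ready-queue/timeslice idea in Python; the queue discipline and timeslice value are placeholders (the real priority rule is covered on the next slide):

```python
from collections import deque

TIMESLICE_MS = 12.5  # placeholder value; the real timeslice is set by LPAR LIC

class LogicalCP:
    def __init__(self, name):
        self.name = name
        self.ready = True          # has dispatchable z/OS work
        self.entered_wait = False  # z/OS loaded a wait state before the slice ended

def llic_dispatch(ready_queue: deque, physical_cp: str):
    """One pass of the LLIC loop on a freed physical CP."""
    if not ready_queue:
        return
    lcp = ready_queue.popleft()  # select a ready logical CP
    print(f"dispatching {lcp.name} on {physical_cp} for up to {TIMESLICE_MS} ms")
    # ... z/OS runs here until the timeslice expires or it enters a wait ...
    # the environment is saved and LLIC regains control of the physical CP
    if lcp.ready and not lcp.entered_wait:  # used its slice and still has work
        ready_queue.append(lcp)             # goes back on the ready queue

queue = deque([LogicalCP("ZOS1-LCP0"), LogicalCP("ZOS2-LCP0")])
llic_dispatch(queue, "CP0")
```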

Selecting Logical CPs
- Priority on the "ready" queue is determined by PR/SM LIC, based on the LP logical CP "actual" utilization versus "targeted" utilization
- Targeted utilization is determined as a function of #LCPs and LP weight
  - LP weight is a user-specified number between 1 and 999 (recommended 3 digits)
LCP = Logical CP = Logical Processor  CP = Central Processor  LP = Logical Partition  LLIC = LPAR Licensed Internal Code

LP Weights - Shared Pool %
Total of LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool; ZOS1 image (weight 400) and ZOS2 image (weight 100)]
LCP = Logical CP = Logical Processor  LP = Logical Partition

LP Weights Guarantee "Pool" CP % Share
- A weight is assigned to each LP defined as shared
- All active LP weights are summed to a Total
- Each LP is guaranteed a number of the pooled physical CPs based on its weight % of the Total
- Based on the number of shared logical CPs defined for each LP and the LP weight %, LLIC determines the "ready queue" priority of each logical CP
- Weight priority is enforced only when there is contention!
LP = Logical Partition  CP = Central Processor  LLIC = LPAR Licensed Internal Code  LCP = Logical CP = Logical Processor

LP Target CPs
ZOS1 LP weight % = 80%; target CPs = 0.8 * 5 = 4.0 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 5 = 1.0 CPs
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool; ZOS1 and ZOS2 images]
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

LP Logical CP Share
- ZOS1 LP is guaranteed 4 physical CPs
  - ZOS1 can dispatch work to 5 logical CPs
  - Each ZOS1 logical CP gets 4/5 or 0.8 CP
  - ZOS1 effective speed = 0.8 of potential speed
- ZOS2 LP is guaranteed 1 physical CP
  - ZOS2 can dispatch work to 3 logical CPs
  - Each ZOS2 logical CP gets 1/3 or 0.33 CP
  - ZOS2 effective speed = 0.33 of potential speed
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor
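
The arithmetic on the last three slides can be captured in a few lines. A small illustrative sketch that reproduces the 80%/20% weight shares, the 4.0/1.0 target CPs, and the 0.8/0.33 effective speeds for the ZOS1/ZOS2 example:

```python
def lp_share(weights: dict, lcps: dict, pool_cps: int) -> dict:
    """Per-LP weight %, guaranteed (target) physical CPs, and effective logical CP speed."""
    total_weight = sum(weights.values())
    result = {}
    for lp, weight in weights.items():
        pct = 100.0 * weight / total_weight
        target_cps = (pct / 100.0) * pool_cps   # guaranteed share of the shared pool
        eff_speed = target_cps / lcps[lp]       # fraction of a physical CP per logical CP
        result[lp] = (round(pct, 2), round(target_cps, 3), round(eff_speed, 3))
    return result

# z900 Model 105: 5 shared physical CPs in the pool
print(lp_share(weights={"ZOS1": 400, "ZOS2": 100},
               lcps={"ZOS1": 5, "ZOS2": 3},
               pool_cps=5))
# {'ZOS1': (80.0, 4.0, 0.8), 'ZOS2': (20.0, 1.0, 0.333)}
```

The same helper reproduces the later what-if slides (weights 400/200, adding or removing logical CPs, or a 6-CP pool after an upgrade) simply by changing the arguments.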

Impact of Changing Weights
- An active LP's weight can be changed non-disruptively using the system console
  - Increasing an LP's weight by "x", without any other configuration changes, increases its pooled CP share at the expense of all other shared LPs
  - This is because the TOTAL shared LP weight increased, while all other sharing LPs' weights remained constant: for every other LP n, LPn weight / TOTAL > LPn weight / (TOTAL + x)
LP = Logical Partition  CP = Central Processor

Changing LPAR Weights
Total of LP weights = 400 + 200 = 600
ZOS1 LP weight % = 100 * 400/600 = 66.67%
ZOS2 LP weight % = 100 * 200/600 = 33.33%
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool; ZOS1 image (weight 400) and ZOS2 image (weight 200)]
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

LP Target CPs
ZOS1 weight % = 66.67%; target CPs = 0.667 * 5 = 3.335 CPs
ZOS2 LP weight % = 33.33%; target CPs = 0.333 * 5 = 1.665 CPs
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool; ZOS1 and ZOS2 images]
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

LP Logical CP Share
- ZOS1 LP is guaranteed 3.335 physical CPs
  - ZOS1 can dispatch work to 5 logical CPs
  - Each ZOS1 logical CP gets 3.335/5 or 0.667 CP
  - ZOS1 effective speed = 0.667 of potential speed
- ZOS2 LP is guaranteed 1.665 physical CPs
  - ZOS2 can dispatch work to 3 logical CPs
  - Each ZOS2 logical CP gets 1.665/3 or 0.555 CP
  - ZOS2 effective speed = 0.555 of potential speed
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

Changing Logical CP Count
- An active LP's logical CPs can be increased or reduced non-disruptively
- Changing the number of logical CPs for a shared LP increases or decreases the LP's work "potential"
  - Changes z/OS and PR/SM overhead
  - Does not change the % CPC pool share
  - Changes the LP logical CP "effective speed"
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor  CPC = Central Processor Complex

Adding Logical CPs
Total of LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
WEIGHT % UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool; ZOS2 gains a logical CP]
LCP = Logical CP = Logical Processor  LP = Logical Partition

Adding Logical CPs
ZOS1 weight % = 80%; target CPs = 0.8 * 5 = 4.0 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 5 = 1.0 CPs
TARGET CPs UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

Adding Logical CPs
ZOS2 effective logical CP speed DECREASED!!
- ZOS1 LP is guaranteed 4 physical CPs
  - ZOS1 can dispatch work to 5 logical CPs
  - Each ZOS1 logical CP gets 4/5 or 0.8 CP
  - ZOS1 effective speed = 0.8 of potential speed
- ZOS2 LP is guaranteed 1 physical CP
  - ZOS2 can dispatch work to 4 logical CPs
  - Each ZOS2 logical CP gets 1/4 or 0.25 CP
  - ZOS2 effective speed = 0.25 of potential speed
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

Subtracting Logical CPs
Total of LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
WEIGHT % UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool; ZOS2 loses a logical CP]
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

Subtracting Logical CPs
ZOS1 weight % = 80%; target CPs = 0.8 * 5 = 4.0 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 5 = 1.0 CPs
TARGET CPs UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

Subtracting Logical CPs
ZOS2 effective logical CP speed INCREASED!!
- ZOS1 LP is guaranteed 4 physical CPs
  - ZOS1 can dispatch work to 5 logical CPs
  - Each ZOS1 logical CP gets 4/5 or 0.8 CP
  - ZOS1 effective speed = 0.8 of potential speed
- ZOS2 LP is guaranteed 1 physical CP
  - ZOS2 can dispatch work to 2 logical CPs
  - Each ZOS2 logical CP gets 1/2 or 0.5 CP
  - ZOS2 effective speed = 0.5 of potential speed
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

Logical CPs - How Many?
- Both z/OS and PR/SM overhead are minimized when the LCP count equals the physical CP requirements of the executing workload
- The number of LCPs online to an LP is correct ... sometimes:
  - When the LP is CPU constrained, too few
  - When the LP is idling, too many
  - When the LP is about 100% busy, just right!
- Ideally, effective LCP speed = 1.0
LP = Logical Partition  LCP = Logical CP = Logical Processor  CP = Central Processor

LP Configuration Decisions
- LP definitions entered on the HMC:
  - Dedicated or not-dedicated (shared)
  - Logical processors (initial, reserved)
  - Weight (initial, min, max)
  - Capped or not-capped
  - CPC memory allocation
  - I/O channel distribution/configuration
  - etc.
HMC = Hardware Management Console  CPC = Central Processor Complex

LP "Hard" Capping
- Initial weight enforced: LLIC will not allow the LP to use more than its guaranteed shared pool %, even when other LPs are idle
- Dynamic change to capping status
  - Capped or not capped
  - Capped weight value
- In general, not recommended
[Screenshot: HMC Image Profile]
LLIC = LPAR Licensed Internal Code  LP = Logical Partition

Intelligent Resource Director
IRD brings four new functions to the Parallel Sysplex that help ensure important workloads meet their goals:
- WLM LPAR Weight Management
- WLM Vary CPU Management
- Dynamic Channel-Path Management
- Channel Subsystem I/O Priority Queueing

IRD = PR/SM + WLM
- IRD WLM CPU management allows WLM to dynamically change the weights and the number of online logical CPs of all z/OS shared LPs in a CPC LPAR cluster
- IRD WLM Weight Management: allows WLM to instruct PR/SM to adjust shared LP weights
- IRD WLM Vary CPU Management: allows WLM to instruct PR/SM to adjust the logical CPs online to LPs
Logical CP = Logical Processor  LP = Logical Partition

Pre-IRD LP Management
- Dedicate CPs to important LPs: wasteful
- Over-share CPs: incurs LPAR overhead
- Capping
- "Super Operator": dynamically change LP weights/caps, dynamically vary LCPs offline/online
LP = Logical Partition  CP = Central Processor

IRD Prerequisites
- Running z/OS in 64-bit mode
- Running the z900 in LPAR mode
- Using shared (not dedicated) CPs
- No hard LP caps
- Running WLM goal mode
- LPs must select "WLM Managed"
- Access to the sysplex coupling facility
[Screenshot: HMC Image Profile for ZOS1]
LP = Logical Partition

What is an LPAR Cluster?
An LPAR cluster is the set of all z/OS shared LPs in the same z/OS Parallel Sysplex on the same CPC.
[Diagram: two z900 CPCs with dedicated and shared images spread across SYSPLEX1 and SYSPLEX2, plus z/VM and Linux images]
CPC = Central Processor Complex

What is an LPAR Cluster?
There are 4 LPAR clusters in this configuration (color coded in the diagram).
[Diagram: the same two z900s, with the shared z/OS LPs grouped by CPC and sysplex]
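
One way to see what counts as an LPAR cluster is to group images by (CPC, sysplex), keeping only shared z/OS LPs. The image data below is hypothetical and only loosely follows the diagram; the point is the grouping key, not the names.

```python
from collections import defaultdict

# (lp_name, cpc, sysplex, os, dedicated) -- hypothetical configuration data
images = [
    ("ZOS1", "CPC1", "SYSPLEX1", "z/OS", False),
    ("ZOS2", "CPC1", "SYSPLEX1", "z/OS", False),
    ("ZOS3", "CPC1", "SYSPLEX2", "z/OS", False),
    ("ZOSD", "CPC1", "SYSPLEX1", "z/OS", True),    # dedicated: not in any cluster
    ("LINUX1", "CPC1", None, "Linux", False),      # non-z/OS: not in any cluster
    ("ZOSA", "CPC2", "SYSPLEX1", "z/OS", False),
    ("ZOSB", "CPC2", "SYSPLEX2", "z/OS", False),
]

clusters = defaultdict(list)
for lp, cpc, sysplex, os_name, dedicated in images:
    if os_name == "z/OS" and not dedicated and sysplex is not None:
        clusters[(cpc, sysplex)].append(lp)   # cluster = shared z/OS LPs, same sysplex, same CPC

for key, members in clusters.items():
    print(key, members)
# ('CPC1', 'SYSPLEX1') ['ZOS1', 'ZOS2']
# ('CPC1', 'SYSPLEX2') ['ZOS3']
# ('CPC2', 'SYSPLEX1') ['ZOSA']
# ('CPC2', 'SYSPLEX2') ['ZOSB']
```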

WLM LPAR Weight Management
- Dynamically changes LP weights
- Donor/receiver strategy
- WLM evaluates all sysplex workloads
- Suffering Service Class Periods (SSCPs):
  - High (>1) sysplex Performance Index (PI)
  - High importance
  - CPU delays
LP = Logical Partition  WLM = Workload Manager

WLM Policy Adjustment Cycle
IF the SSCP is missing its goal due to CPU delay and WLM cannot help the SSCP by adjusting dispatch priorities within an LP, THEN WLM and PR/SM start talking:
1. Estimate the impact of increasing the SSCP's LP weight
2. Find a donor LP if there will be SSCP PI improvement
3. The donor LP must contain a heavy CPU-using SCP
4. Evaluate the impact of reducing the donor LP's weight (cannot hurt donor SCPs with >= importance)
5. WLM changes the weights via the new LPAR interface
SSCP = Suffering Service Class Period  PI = Performance Index  SCP = Service Class Period  WLM = Workload Manager

Rules and Guidelines
- 5% from the donor, 5% to the receiver
- No "recent" LP cluster weight adjustments: must allow time for the impact of recent adjustments (avoid a see-saw effect)
- Receiver and donor LPs will always obey specified min/max weight assignments
- Non-z/OS images are unaffected because the total shared LP weight remains constant!
LP = Logical Partition
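
A simplified sketch of the donor/receiver weight adjustment under the rules above: a small fixed transfer (here taken as 5% of the donor's weight, though the slide's "5%" rule may be defined differently), min/max bounds respected, and total cluster weight unchanged. Real WLM also projects the PI impact and checks the donor's service classes first.

```python
def adjust_weights(weights, receiver, donor, bounds, step_pct=5.0):
    """Move ~step_pct of the donor's weight to the receiver, within min/max bounds.

    weights: {lp: current weight}; bounds: {lp: (min_weight, max_weight)}.
    The total cluster weight is unchanged, so non-z/OS LPs are unaffected.
    """
    delta = weights[donor] * step_pct / 100.0
    # clamp so neither LP leaves its allowed range
    delta = min(delta,
                weights[donor] - bounds[donor][0],        # donor cannot go below its min
                bounds[receiver][1] - weights[receiver])  # receiver cannot exceed its max
    if delta <= 0:
        return weights  # no legal adjustment; leave the weights alone
    new = dict(weights)
    new[donor] -= delta
    new[receiver] += delta
    return new

weights = {"PROD": 400, "TEST": 100}
bounds = {"PROD": (300, 600), "TEST": (50, 200)}
print(adjust_weights(weights, receiver="PROD", donor="TEST", bounds=bounds))
# {'PROD': 405.0, 'TEST': 95.0}  -- the total stays 500
```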

Goals Should Reflect Reality
MAKE SURE YOUR SCP GOALS AND IMPORTANCE REFLECT REALITY AT THE LPAR CLUSTER LEVEL! Because WLM thinks you knew what you were doing!
SCP = Service Class Period  WLM = Workload Manager

Goals / Reality Continued
- In the past, the WLM goal mode SCPs on your "test" or "development" LPs had no impact on "production" LPs
  - If they are part of the same LPAR cluster, IRD will take resource away from (decrease the weight of) a "production" LP and add resource to (increase the weight of) a "test" LP to meet the goal set for an SCP of higher importance on the "test" LP
- Develop the service policy as though all SCPs are running on a single system image
SCP = Service Class Period  WLM = Workload Manager  LP = Logical Partition

Workload Manager Level of Importance
WLM uses the level of importance YOU assign to make resource allocation decisions!
[Diagram: example workloads (CICS, WEBSITE, DAVE'S STUFF, BOB'S STUFF, GUTTER WORK) ranked Importance 1 through Importance 5]

WLM Vary CPU Management
- Varies logical CPs online/offline to LPs
- Goals: higher effective logical CP speed; less LPAR overhead and switching
- Characteristics: aggressive when varying a logical CP online, conservative when varying a logical CP offline
- Influenced by IRD LP weight adjustments
Logical CP = Logical Processor  LP = Logical Partition

Vary CPU Algorithm Parameters
- Only initially online logical CPs are eligible (operator-varied-offline CPs are not available)
- If a z/OS LP is switched to compatibility mode, all IRD weight and vary-logical-CP adjustments are "undone": the LP reverts to its initial CP and weight settings
LP = Logical Partition  CP = Central Processor  Logical CP = Logical Processor

What is Online Time?
Example: ZOS2, RMF interval of 30 minutes, with LCP 2 varied offline during the interval.
- PAST (pre-IRD): the length of the interval was the MAX time that each LCP could be dispatched; RMF reported the actual dispatch time, and only indicated that LCP 2 was not online at the end of the interval.
- PRESENT (IRD): interval time = online time; RMF now reports the online time for each LCP for each partition.
[Chart: dispatch time and online time for LCP 0, LCP 1, and LCP 2 over the 30-minute interval]

Vary CPU - CPU Percent Busy
- Before IRD CPU Vary Management:
  CPU Time = Interval Time - Wait Time
  CPU % Busy = (CPU Time * 100) / (Interval Time * No. of Processors)
- After IRD CPU Vary Management:
  CPU Time = Online Time - Wait Time
  CPU % Busy = (CPU Time * 100) / Total Online Time
- New SMF70ONT field: the total time the processor was online to the LP during the RMF interval
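
The two busy calculations side by side, assuming interval, wait, and online times in seconds have already been pulled from the RMF/SMF type 70 fields (SMF70ONT supplies the online time; the function names and sample numbers are illustrative):

```python
def cpu_busy_pre_ird(interval_sec, wait_sec_per_lcp):
    """Pre-IRD formula: assumes every LCP could be dispatched for the whole interval."""
    cpu_time = sum(interval_sec - w for w in wait_sec_per_lcp)
    return 100.0 * cpu_time / (interval_sec * len(wait_sec_per_lcp))

def cpu_busy_with_ird(online_sec_per_lcp, wait_sec_per_lcp):
    """IRD formula: CPU time and the denominator are based on online time (SMF70ONT)."""
    cpu_time = sum(o - w for o, w in zip(online_sec_per_lcp, wait_sec_per_lcp))
    return 100.0 * cpu_time / sum(online_sec_per_lcp)

interval = 30 * 60              # 30-minute RMF interval, in seconds
online = [1800, 1800, 600]      # LCP 2 was varied offline after 10 minutes
wait = [600, 900, 300]          # wait time observed while each LCP was online
print(round(cpu_busy_pre_ird(interval, wait), 1))   # 66.7 -- misleading: treats LCP 2 as online all interval
print(round(cpu_busy_with_ird(online, wait), 1))    # 57.1 -- based on actual online time
```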

Dynamic Channel-Path Management (DCM)
- Goals: better I/O response (less pend) time, I/O configuration definition simplification, reduced need for more than 256 channels, enhanced availability, reduced management
- Operates in both goal and compatibility modes

DCM Prerequisites
- Systems running on a z900 or later CPC
- Running z/OS 1.1 in 64-bit mode
- Both LPAR and basic mode supported
- To share managed channels on the same CPC, systems must be in an LPAR cluster
- Balance mode: either compatibility or goal mode
- Goal mode requires WLM goal mode and that global I/O priority queuing be selected
DCM = Dynamic Channel-Path Management  CPC = Central Processor Complex

IRD DCM Balance Mode
Stated simply, the goal of IRD DCM is to evenly distribute I/O activity across all channel paths attached to the CPC.
[Charts: channel path % busy before and after IRD DCM]
DCM = Dynamic Channel-Path Management  CPC = Central Processor Complex

DCM Balance Mode
- Moves channel bandwidth where needed
- Simplified configuration definition ("2 + n"): for managed CUs, define 2 non-managed paths plus n managed paths to meet the peak workload
- DCM balance mode removes paths from non-busy CUs and adds paths to busy CUs
- Currently manages paths to DASD CUs
- New metric: I/O Velocity
DCM = Dynamic Channel-Path Management  CU = Control Unit

I/O Velocity
I/O Velocity = LCU productive time / (LCU productive time + channel contention delays)
- Calculated by WLM and the IOS
- Uses the same CF structure as CPU management
- Calculated for every LCU in the LPAR cluster to compute a weighted average
- DCM attempts to ensure all managed LCUs have similar I/O velocities
LCU = Logical Control Unit  CF = Coupling Facility  DCM = Dynamic Channel-Path Management
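
The I/O velocity metric as defined above, sketched as a function (a value of 1.0 means no channel contention delay at all; the names and sample values are illustrative):

```python
def io_velocity(productive_sec: float, contention_delay_sec: float) -> float:
    """I/O velocity = LCU productive time / (productive time + channel contention delays)."""
    return productive_sec / (productive_sec + contention_delay_sec)

# Two LCUs: DCM balance mode would try to even these out by moving managed paths.
print(round(io_velocity(productive_sec=90.0, contention_delay_sec=10.0), 2))  # 0.9
print(round(io_velocity(productive_sec=60.0, contention_delay_sec=40.0), 2))  # 0.6
```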

DCM Goal Mode
- All LPs in the I/O cluster are in WLM goal mode
- During the policy adjustment routine, WLM selects an SSCP:
  - IF I/O delays are the problem, and increasing I/O priority does not help, and adding an alias for PAV volumes will not help, and I/O requests are suffering channel contention ...
  - THEN WLM estimates the impact of increasing the LCU I/O velocity and, if it benefits the SCP PI, sets an explicit velocity for the LCU
- Explicit velocity overrides balance mode
LCU = Logical Control Unit  SSCP = Suffering Service Class Period  SCP = Service Class Period  PI = Performance Index

Channel Subsystem (CSS) I/O Priority Queueing
- Dynamically manages channel subsystem I/O priority against WLM policy goals
- Only meaningful when I/Os are queued
- Supports prioritization of I/Os waiting for a SAP and I/Os waiting for a channel
- Previously, these delays were handled FIFO
SAP = System Assist Processor

Channel Subsystem I/O Priority Queueing (cont'd)
- Uses a donor/receiver strategy
- CSS I/O priority setting (waiting for a SAP):
  - System SCPs assigned the highest priority
  - I/O-delayed SCPs missing their goal next
  - When meeting goals: light I/O users higher; discretionary work has the lowest priority
- UCB and CU I/O priority setting (waiting for a channel):
  - System SCPs assigned the highest priority
  - I/O-delayed SCPs missing their goal next
  - The less important SCP is the donor
SAP = System Assist Processor  SCP = Service Class Period
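
A sketch of the relative ordering described for I/Os waiting on a SAP or a channel. The numeric priority values and function name are invented for illustration; only the ordering follows the slide.

```python
def css_io_priority(is_system_work: bool, missing_goal_with_io_delay: bool,
                    is_discretionary: bool, light_io_user: bool) -> int:
    """Higher number = dispatched first. Ordering per the slide; values are made up."""
    if is_system_work:
        return 4        # system service classes always highest
    if missing_goal_with_io_delay:
        return 3        # I/O-delayed work missing its goal next
    if is_discretionary:
        return 0        # discretionary work last
    return 2 if light_io_user else 1   # among goal-meeting work, light I/O users sort higher

queue = [("BATCH_DISC", css_io_priority(False, False, True, False)),
         ("CICS_MISSING", css_io_priority(False, True, False, False)),
         ("SYSTEM", css_io_priority(True, False, False, False))]
print(sorted(queue, key=lambda x: -x[1]))  # SYSTEM, then CICS_MISSING, then BATCH_DISC
```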

IBM License Manager (ILM)
- PROBLEMS:
  - Software charges based on CPC size
  - CPCs getting bigger
  - Workloads more erratic (spikes), e.g., eBusiness
- SOLUTION:
  - New LP setup option: Defined Capacity
  - Software priced at the LP Defined Capacity
CPC = Central Processor Complex  LP = Logical Partition

IBM License Manager (ILM)
- z900 servers running z/OS V1R1 or later
- New external: "Defined Capacity" for shared LPs
- Defined capacity is expressed in MSUs (Millions of Service Units per hour)
- WLM compares the rolling 4-hour average MSU use with the defined capacity
- When the rolling 4-hour MSU usage exceeds the defined capacity, WLM tells PR/SM to "soft cap" the LP (really a temporary hard cap)
MSU = Millions of Service Units  LP = Logical Partition
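
A sketch of the soft-cap check: compute the rolling 4-hour average of MSU consumption and compare it with the LP's defined capacity. The 5-minute sampling, sample data, and function name are all illustrative assumptions, not WLM internals.

```python
from collections import deque

def rolling_4hr_soft_cap(msu_samples, defined_capacity_msu, samples_per_hour=12):
    """Yield (rolling 4-hour average MSU, soft_cap_on) for each 5-minute sample."""
    window = deque(maxlen=4 * samples_per_hour)          # the last 4 hours of samples
    for msu in msu_samples:
        window.append(msu)
        avg = sum(window) / len(window)
        yield round(avg, 1), avg > defined_capacity_msu  # soft cap while the average exceeds defined capacity

samples = [80] * 36 + [160] * 24      # 3 quiet hours, then a 2-hour spike
results = list(rolling_4hr_soft_cap(samples, defined_capacity_msu=100))
print(results[-1])   # (120.0, True) -- the spike pushed the 4-hour average over the 100 MSU defined capacity
```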

Rolling Four Hour Average
[Chart: actual MSUs consumed vs. the rolling 4-hour average over time]

IBM License Manager (ILM)
For software pricing, IBM uses the following:
- Dedicated LPs: logical CPs * engine MSU
- PR/SM hard cap: shared pool % * engine MSU
- Defined Capacity: the defined capacity
- Basic mode: the model's MSU rating
MSU = Millions of Service Units  Logical CP = Logical Processor
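
The four pricing bases above, expressed as a small lookup. This is purely illustrative: the formulas are mirrored literally from the slide, the engine and model MSU ratings are made-up numbers, and the LP attribute names are assumptions.

```python
def ilm_pricing_msus(lp, engine_msu, model_msu):
    """MSUs used for software pricing, mirroring the slide's four cases."""
    if lp.get("basic_mode"):
        return model_msu                                # basic mode: the model's MSU rating
    if lp.get("dedicated"):
        return lp["logical_cps"] * engine_msu           # dedicated LP: logical CPs * engine MSU
    if lp.get("hard_capped"):
        return lp["pool_share_pct"] / 100 * engine_msu  # hard cap: shared pool % * engine MSU (as stated)
    return lp["defined_capacity"]                       # otherwise: the LP's defined capacity

zos1 = {"defined_capacity": 100}                        # hypothetical shared, soft-capped LP
print(ilm_pricing_msus(zos1, engine_msu=45, model_msu=250))   # 100
```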

IBM License Manager (ILM)
If ZOS1 is set with a defined capacity of 100 MSU …
Defined Capacity / White Space: the ability to handle spikes and unpredictable demand
[Chart: ZOS1 usage against its 100 MSU defined capacity, showing the white space]

Capacity Upgrade on Demand
- CUoD supports non-disruptive activation of PUs as CPs or ICFs in a CPC
- Specify "Reserved Processors"
- When an upgrade is made (105 to 106), LPs with reserved processors can begin using them immediately after an operator command
- IBM recommends specifying as many reserved processors as the model supports
CUoD = Capacity Upgrade on Demand  PU = Processor Unit  CP = Central Processor  ICF = Integrated Coupling Facility  LP = Logical Partition  CPC = Central Processor Complex

z900 Model 105: 5 PUs configured as CPs (Central/General Processors); CPs defined = model number.
[Diagram: CPC memory and a 12-PU MCM with 5 CPs, a SAP, and unconfigured PUs]

Shared LPs
[Screenshot: HMC Image Profile for ZOS1]

z900 Model 106: 6 PUs configured as CPs (Central/General Processors); CPs defined = model number.
[Diagram: CPC memory and a 12-PU MCM with 6 CPs, a SAP, and unconfigured PUs]

LP Weights - LP Weight %
Total of LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool now containing 6 CPs; ZOS1 image (weight 400) and ZOS2 image (weight 100)]

LP Target CPs
ZOS1 weight % = 80%; target CPs = 0.8 * 6 = 4.8 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 6 = 1.2 CPs
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool of 6 CPs; ZOS1 and ZOS2 images]

LP Logical CP Share
- ZOS1 LP is guaranteed 4.8 physical CPs
  - ZOS1 can dispatch work to 5 logical CPs
  - Each ZOS1 logical CP gets 4.8/5 or 0.96 CP
  - ZOS1 effective speed = 0.96 of potential speed
- ZOS2 LP is guaranteed 1.2 physical CPs
  - ZOS2 can dispatch work to 3 logical CPs
  - Each ZOS2 logical CP gets 1.2/3 or 0.40 CP
  - ZOS2 effective speed = 0.40 of potential speed
Logical CP = Logical Processor  CP = Central Processor

Conclusions
- Large-system resource, configuration, and workload management tasks are shifting towards intelligent, automated, dynamic WLM functions
- The responsibility of capacity planners and performance analysts is shifting towards a better understanding of business workloads' relative importance and performance requirements

NeuMICS Support
- April 2002 PSP: CAP - new IRD and ILM planning applications; PER - new MSU and soft-capping analysis
- October 2001 PSP: RMF IRD, ILM, and CUoD
- April 2001 PSP: RMF z/OS (64-bit), multisystem enclaves, USS kernel, and 2105 cache controllers
- October 2000 PSP: RMF ICF, IFL, and PAV support

References
- Parallel Sysplex Overview: Introducing Data Sharing and Parallelism in a Sysplex (SA )
- Redbook: z/OS Intelligent Resource Director (SG )
- IBM e-server zSeries 900 and z/OS Reference Guide (G )
- zSeries 900 Processor Resource/Systems Manager Planning Guide (SB )

Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, or other countries, or both: CICS, DB2, ESCON, FICON, IBM, IMS, MVS, MVS/ESA, OS/390, Parallel Sysplex, PR/SM, Processor Resource/Systems Manager, RMF, SMF, Sysplex Timer, S/390, VSE/ESA, zSeries, z/OS

Thanks! Questions???
Darrell Faulkner, NeuMICS Development Manager, Computer Associates