
1 Keeping Up with z/OS’ Alphabet Soup Darrell Faulkner Computer Associates Development Manager NeuMICS

2 Objectives
 Integrated Coupling Facility (ICF) and Integrated Facility for LINUX (IFL)
 PR/SM and LPs
 Intelligent Resource Director (IRD)
 IBM License Manager (ILM)
 Capacity Upgrade on Demand (CUoD)
 Conclusions

3 Acronyms
CF - Coupling Facility
CP - Central Processor
CPC - Central Processor Complex
ICF - Integrated Coupling Facility
IFL - Integrated Facility for LINUX
PR/SM - Processor Resource/Systems Manager
HMC - Hardware Management Console
LLIC - LPAR Licensed Internal Code
Logical CP - Logical Processor
LP - Logical Partition
LPAR - Logical Partitioning (LPAR mode)
PU - Processor Unit

4 ICF and IFL (ICF - Integrated Coupling Facility; IFL - Integrated Facility for LINUX)
 Beginning with some IBM G5 processor models, the ability to configure PUs (Processing Units) as non-general-purpose processors
 Benefit - does not change the model number, hence no software licensing cost increase

5 z900 Models 2064-(101-109)
All models contain a 12-PU MultiChip Module (MCM); each PU (Processor Unit) can be configured as a CP (Central/General Processor), ICF (Integrated Coupling Facility), IFL (Integrated Facility for LINUX), or SAP (System Assist Processor).
[Diagram: CPC with memory and the 12-PU MCM]

6 CPs Defined = Model Number
z900 Model 2064-105: 5 PUs configured as CPs (Central/General Processors) = Model 105.
[Diagram: 12-PU MCM with 5 CPs, a SAP, and unconfigured PUs]

7 CPs Defined = Model Number (continued)
z900 Model 2064-105: 5 PUs configured as CPs = Model 105. ICFs, IFLs, and SAPs do not incur software charges. One PU is always left unconfigured as a "spare".
[Diagram: 12-PU MCM with CPs, an ICF, an IFL, a SAP, and a spare PU]

8 ICFs and IFLs in SMF (CP - Central Processor; ICF - Integrated Coupling Facility; IFL - Integrated Facility for LINUX)
IBM SMF type 70 subtype 1 record, CPU Identification Section: Name = SMF70CIN, Offsets = 0, Length = 16, Format = EBCDIC, Description = CPU-identification Name. There is one section per EBCDIC name that identifies a CPU type; 'CP' and 'ICF', with appropriate trailing blanks, are examples of EBCDIC names describing a General Purpose CPU and an Internal Coupling Facility CPU, respectively. As of z/OS Version 1 Release 2, both IFLs and ICFs are represented by 'ICF' in the SMF type 70 CPU ID Section.
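For anyone post-processing type 70 records, here is a minimal decode sketch (a hypothetical helper, not a full SMF parser; only the SMF70CIN layout above comes from the slide):

    def cpu_id_name(cpu_id_section):
        # SMF70CIN: offset 0, length 16, EBCDIC, blank-padded (slide 8).
        return cpu_id_section[0:16].decode("cp037").rstrip()

    print(cpu_id_name(b"\xc3\xd7" + b"\x40" * 14))  # -> 'CP'
    # On z/OS 1.2 and later, ICFs and IFLs both decode to 'ICF' here.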

9 PR/SM LPAR (LP - Logical Partition; CPC - Central Processor Complex; Logical CP - Logical Processor)
 Allows up to 15 images (LPs) per CPC
 Different control programs on images (z/OS, z/VM, Linux, CFCC, etc.)
 Each LP (image) is assigned CPC resources: processors (CPs, referred to as "logical CPs"), memory, and channels
 Each LP is either DEDICATED or SHARED

10 PR/SM Benefits
 Protection/isolation of business-critical applications from non-critical workloads
 Isolation of test operating systems
 Workload balancing
 Different operating systems on the same CPs
 Ability to guarantee a minimum percent of the shared CP resource to each partition
 More "white space": the ability to handle spikes and unpredictable demand

11 LP Configuration Decisions (LP - Logical Partition; CPC - Central Processor Complex; HMC - Hardware Management Console)
 LP definitions entered on the HMC:
– Dedicated or not dedicated (shared)
– Logical processors (initial, reserved)
– Weight (initial, min, max)
– Capped or not capped
– CPC memory allocation
– I/O channel distribution/configuration
– More

12 Dedicated LPs (LP - Logical Partition; Logical CP - Logical Processor; CPC - Central Processor Complex)
 An LP's logical CPs are permanently assigned to specific CPC physical CPs
 Less LPAR overhead (than shared LPs)
 Dedicated LPs waste physical (CPC) processor cycles unless 100% busy: when less than 100% busy, the physical CPs assigned to dedicated LPs are IDLE
[Screenshot: HMC Image Profile for ZOS1]

13 LPAR Mode - Dedicated (LCP = Logical CP = Logical Processor)
ZOS1 image: 3 dedicated logical processors. ZOS2 image: 2 dedicated logical processors. Same problem as basic mode: unused cycles are wasted.
[Diagram: CPC memory, PR/SM LPAR LIC, and the dedicated CPs of each image]

14 Shared LPs
[Screenshot: HMC Image Profile for ZOS1]

15 Shared LPs
[Screenshot: HMC Image Profile for ZOS2]

16 LPAR Mode - Shared (LCP = Logical CP = Logical Processor)
ZOS1 image: 5 logical CPs, weight 400. ZOS2 image: 3 logical CPs, weight 100. Both dispatch onto a shared pool of physical CPs.
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

17 LPAR Dispatching: What does LLIC (LPAR Licensed Internal Code) do? (LCP = Logical CP = Logical Processor; CP - Central Processor)
 LCPs are treated as dispatchable units of work
 LCPs are placed on a ready queue
 LLIC executes on a physical CP: it selects a ready LCP and dispatches it onto a real CP
 z/OS executes on the physical CP until its timeslice expires (12.5-25 milliseconds) or until z/OS enters a wait state
 The environment is saved and LLIC executes on the freed CP
 If the LCP is still ready (it used its timeslice), it is placed back on the ready queue
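A toy model of the dispatch loop just described (purely illustrative: real LLIC is Licensed Internal Code, not Python, and it also orders the ready queue by actual versus targeted utilization, covered on slide 18, which this round-robin sketch omits):

    from collections import deque

    TIMESLICE_MS = 12.5  # slide 17: timeslice of 12.5-25 milliseconds

    def dispatch_loop(ready):
        """ready: deque of [lcp_name, remaining_work_ms] entries."""
        while ready:
            lcp = ready.popleft()        # LLIC selects a ready logical CP
            lcp[1] -= TIMESLICE_MS       # z/OS executes on the physical CP
            if lcp[1] > 0:               # used its whole timeslice:
                ready.append(lcp)        # back on the ready queue
            # otherwise it entered a wait (or finished) and is not requeued

    dispatch_loop(deque([["ZOS1-LCP0", 30.0], ["ZOS2-LCP0", 10.0]]))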

18 Selecting Logical CPs (LCP = Logical CP = Logical Processor; CP - Central Processor; LP - Logical Partition; LLIC = LPAR Licensed Internal Code)
 Priority on the "ready" queue is determined by PR/SM LIC, based on LP logical CP "actual" utilization versus "targeted" utilization
 Targeted utilization is determined as a function of the number of LCPs and the LP weight
 The LP weight is a user-specified number between 1 and 999 (3 digits recommended)

19 LP Weights - Shared Pool % (LCP = Logical CP = Logical Processor; LP - Logical Partition)
ZOS1 image weight 400; ZOS2 image weight 100.
Total of LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

20 LP Weights Guarantee "Pool" CP % Share (LP - Logical Partition; CP - Central Processor; LLIC = LPAR Licensed Internal Code; LCP = Logical CP = Logical Processor)
 A weight is assigned to each LP defined as shared
 All active LP weights are summed to a total
 Each LP is guaranteed a share of the pooled physical CPs based on its weight % of the total
 Based on the number of shared logical CPs defined for each LP and the LP weight %, LLIC determines the "ready queue" priority of each logical CP
 Weight priority is enforced only when there is contention!

21 LP Target CPs (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
ZOS1 weight 400; ZOS2 weight 100.
ZOS1 LP weight % = 80%; target CPs = 0.8 * 5 = 4.0 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 5 = 1.0 CPs
[Diagram: CPC memory, PR/SM LPAR LIC, shared pool of 5 CPs]

22 LP Logical CP Share (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
 ZOS1 LP is guaranteed 4 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4/5 or 0.8 CP
– ZOS1 effective speed = 0.8 of potential speed
 ZOS2 LP is guaranteed 1 physical CP
– ZOS2 can dispatch work to 3 logical CPs
– Each ZOS2 logical CP gets 1/3 or 0.333 CP
– ZOS2 effective speed = 0.333 of potential speed
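To make the arithmetic on slides 19-22 concrete, here is a minimal Python sketch (a hypothetical helper, not an IBM interface) that derives each LP's weight %, target CPs, and effective logical CP speed:

    def lp_share(weights, pool_cps, lcps):
        """weights/lcps: dicts keyed by LP name; pool_cps: physical CPs
        in the shared pool. Returns (weight %, target CPs, effective speed)."""
        total = sum(weights.values())
        out = {}
        for lp, w in weights.items():
            pct = 100.0 * w / total           # guaranteed pool share
            target = pool_cps * w / total     # guaranteed physical CPs
            out[lp] = (pct, target, target / lcps[lp])
        return out

    lp_share({"ZOS1": 400, "ZOS2": 100}, pool_cps=5,
             lcps={"ZOS1": 5, "ZOS2": 3})
    # ZOS1: (80.0, 4.0, 0.8)   ZOS2: (20.0, 1.0, 0.333...)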

23 Impact of Changing Weights (LP - Logical Partition; CP - Central Processor)
 An active LP's weight can be changed non-disruptively using the system console
– Increasing an LP's weight by "x", without any other configuration changes, increases its pooled CP share at the expense of all other shared LPs
– This is because the TOTAL shared LP weight increased while all other sharing LPs' weights remained constant; for every other LP n:
weight(n) / TOTAL > weight(n) / (TOTAL + x)

24 Changing LPAR Weights (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
ZOS1 weight 400; ZOS2 weight raised from 100 to 200.
Total of LP weights = 400 + 200 = 600
ZOS1 LP weight % = 100 * 400/600 = 66.67%
ZOS2 LP weight % = 100 * 200/600 = 33.33%
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

25 LP Target CPs (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
ZOS1 weight % = 66.67%; target CPs = 0.667 * 5 = 3.335 CPs
ZOS2 LP weight % = 33.33%; target CPs = 0.333 * 5 = 1.665 CPs
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

26 LP Logical CP Share (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
 ZOS1 LP is guaranteed 3.335 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 3.335/5 or 0.667 CP
– ZOS1 effective speed = 0.667 of potential speed
 ZOS2 LP is guaranteed 1.665 physical CPs
– ZOS2 can dispatch work to 3 logical CPs
– Each ZOS2 logical CP gets 1.665/3 or 0.555 CP
– ZOS2 effective speed = 0.555 of potential speed
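Re-running the lp_share sketch from slide 22 with ZOS2's weight raised to 200 reproduces these figures (the slides round the weight % before multiplying, hence 3.335 rather than 3.333):

    lp_share({"ZOS1": 400, "ZOS2": 200}, pool_cps=5,
             lcps={"ZOS1": 5, "ZOS2": 3})
    # ZOS1: (66.67, 3.33, 0.667)   ZOS2: (33.33, 1.67, 0.556)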

27 Changing Logical CP Count (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor; CPC - Central Processor Complex)
 An active LP's logical CPs can be increased or reduced non-disruptively
 Changing the number of logical CPs for a shared LP increases or decreases the LP's work "potential"
– Changes z/OS and PR/SM overhead
– Does not change the % CPC pool share
– Changes the LP's logical CP "effective speed"

28 Adding Logical CPs
ZOS1 weight 400 (5 logical CPs); ZOS2 weight 100, one logical CP added.
Total LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
WEIGHT % UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

29 Adding Logical CPs (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
ZOS1 weight % = 80%; target CPs = 0.8 * 5 = 4.0 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 5 = 1.0 CPs
TARGET CPs UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

30 Adding Logical CPs (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
 ZOS1 LP is guaranteed 4 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4/5 or 0.8 CP
– ZOS1 effective speed = 0.8 of potential speed
 ZOS2 LP is guaranteed 1 physical CP
– ZOS2 can dispatch work to 4 logical CPs
– Each ZOS2 logical CP gets 1/4 or 0.25 CP
– ZOS2 effective speed = 0.25 of potential speed
ZOS2 effective logical CP speed DECREASED!!

31 Subtracting Logical CPs (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
ZOS1 weight 400 (5 logical CPs); ZOS2 weight 100, one logical CP removed.
Total LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
WEIGHT % UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

32 Subtracting Logical CPs (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
ZOS1 weight % = 80%; target CPs = 0.8 * 5 = 4.0 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 5 = 1.0 CPs
TARGET CPs UNCHANGED!!
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool]

33 Subtracting Logical CPs (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
 ZOS1 LP is guaranteed 4 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4/5 or 0.8 CP
– ZOS1 effective speed = 0.8 of potential speed
 ZOS2 LP is guaranteed 1 physical CP
– ZOS2 can dispatch work to 2 logical CPs
– Each ZOS2 logical CP gets 1/2 or 0.5 CP
– ZOS2 effective speed = 0.5 of potential speed
ZOS2 effective logical CP speed INCREASED!!
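The same lp_share sketch captures the whole add/subtract story of slides 28-33 in two calls: the pool share and target CPs never move, only the per-LCP effective speed does:

    lp_share({"ZOS1": 400, "ZOS2": 100}, 5, {"ZOS1": 5, "ZOS2": 4})
    # ZOS2: (20.0, 1.0, 0.25)  - an added LCP dilutes effective speed
    lp_share({"ZOS1": 400, "ZOS2": 100}, 5, {"ZOS1": 5, "ZOS2": 2})
    # ZOS2: (20.0, 1.0, 0.5)   - a removed LCP raises effective speed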

34 Logical CPs - How Many? (LP - Logical Partition; LCP = Logical CP = Logical Processor; CP - Central Processor)
 Both z/OS and PR/SM overhead are minimized when the LCP count equals the physical CP requirements of the executing workload
 The number of LCPs online to an LP is correct... sometimes:
– When the LP is CPU constrained, too few
– When the LP is idling, too many
– When the LP is about 100% busy, just right!
 Ideally, effective LCP speed = 1.0
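One way to read the "just right" rule, offered as a heuristic of ours rather than anything IBM publishes: with target CPs T, an LP needs at least ceil(T) logical CPs to absorb its guaranteed share, and every LCP beyond that dilutes effective speed below 1.0:

    import math

    def ideal_lcp_count(target_cps):
        # Fewest LCPs that can still absorb the LP's guaranteed share.
        return max(1, math.ceil(target_cps))

    ideal_lcp_count(4.0)    # 4 -> effective speed 4.0/4 = 1.0
    ideal_lcp_count(1.665)  # 2 -> effective speed ~0.83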

35 LP Configuration Decisions (HMC - Hardware Management Console; CPC - Central Processor Complex)
 LP definitions entered on the HMC:
– Dedicated or not dedicated (shared)
– Logical processors (initial, reserved)
– Weight (initial, min, max)
– Capped or not capped
– CPC memory allocation
– I/O channel distribution/configuration
– etc.

36 LP "Hard" Capping (LLIC = LPAR Licensed Internal Code; LP - Logical Partition)
 Initial weight enforced
 LLIC will not allow the LP to use more than its guaranteed shared pool %, even when other LPs are idle
 Capping status can be changed dynamically: capped or not capped, and the capped weight value
 In general, not recommended
[Screenshot: HMC Image Profile]

37 Intelligent Resource Director
 IRD brings four new functions to the parallel SYSPLEX that help ensure important workloads meet their goals:
– WLM LPAR Weight Management
– WLM Vary CPU Management
– Dynamic Channel-Path Management
– Channel Subsystem I/O Priority Queueing

38 IRD = PR/SM + WLM (Logical CP - Logical Processor; LP - Logical Partition)
 IRD WLM CPU management allows WLM to dynamically change the weights and the number of online logical CPs of all z/OS shared LPs in a CPC LPAR cluster
 IRD WLM Weight Management allows WLM to instruct PR/SM to adjust shared LP weights
 IRD WLM Vary CPU Management allows WLM to instruct PR/SM to adjust the logical CPs online to LPs

39 Pre-IRD LP Management (LP - Logical Partition; CP - Central Processor)
 Dedicate CPs to important LPs: wasteful
 Over-share CPs: incurs LPAR overhead
 Capping
 "Super Operator": dynamically change LP weights/caps; dynamically vary LCPs offline/online

40 IRD Prerequisites (LP - Logical Partition)
 Running z/OS in 64-bit mode
 Running the z900 in LPAR mode
 Using shared (not dedicated) CPs
 No hard LP caps
 Running WLM goal mode
 LPs must select "WLM Managed"
 Access to the SYSPLEX coupling facility
[Screenshot: HMC Image Profile for ZOS1]

41 What is an LPAR Cluster? (CPC - Central Processor Complex)
An LPAR cluster is the set of all z/OS shared LPs in the same z/OS parallel SYSPLEX on the same CPC.
[Diagram: two z900 CPCs with dedicated and shared LPs (ZOS1-ZOS5, ZOSA-ZOSE, ZOSX-ZOSZ, ZOSD) spread across SYSPLEX1 and SYSPLEX2, plus z/VM and Linux images]

42 What is an LPAR Cluster? (continued)
There are 4 LPAR clusters in this configuration (color coded in the original diagram).
[Diagram: the same two z900 CPCs with the LPAR clusters highlighted]

43 WLM LPAR Weight Management (LP - Logical Partition; WLM = Workload Manager)
 Dynamically changes LP weights
 Donor/receiver strategy
 WLM evaluates all SYSPLEX workloads
 Suffering Service Class Periods (SSCPs):
– High (>1) SYSPLEX Performance Index (PI)
– High importance
– CPU delays

44 WLM Policy Adjustment Cycle (SSCP = Suffering Service Class Period; PI = Performance Index; SCP = Service Class Period; WLM = Workload Manager)
IF the SSCP is missing its goal due to CPU delay and WLM cannot help the SSCP by adjusting dispatch priorities within an LP, THEN WLM and PR/SM start talking:
1. Estimate the impact of increasing the SSCP's LP weight
2. Find a donor LP if there will be SSCP PI improvement
3. The donor LP must contain a heavy CPU-using SCP
4. Evaluate the impact of reducing the donor LP's weight; it cannot hurt donor SCPs with >= importance
5. WLM changes weights via the new LPAR interface

45 Rules and Guidelines (LP - Logical Partition)
 5% from the donor, 5% to the receiver (see the sketch below)
 No "recent" LP cluster weight adjustments: time must be allowed for the impact of recent adjustments, to avoid a see-saw effect
 Receiver and donor LPs always obey their specified min/max weight assignments
 Non-z/OS images are unaffected because the total shared LP weight remains constant!
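A simplified sketch of one donor/receiver weight move under these rules (names are illustrative; we assume the 5% is 5% of the total cluster weight, and the real WLM decision logic of slide 44 is far richer):

    def move_weight(weights, receiver, donor, limits, step_pct=5):
        """weights: dict LP -> weight; limits: dict LP -> (min, max).
        Total weight is unchanged, so LPs outside the cluster keep their share."""
        total = sum(weights.values())
        step = total * step_pct / 100.0
        step = min(step,
                   weights[donor] - limits[donor][0],        # donor's min
                   limits[receiver][1] - weights[receiver])  # receiver's max
        if step <= 0:
            return False          # a min/max bound would be violated
        weights[donor] -= step
        weights[receiver] += step
        return True

    w = {"ZOS1": 400, "ZOS2": 100}
    move_weight(w, receiver="ZOS2", donor="ZOS1",
                limits={"ZOS1": (200, 600), "ZOS2": (50, 300)})
    # w == {"ZOS1": 375.0, "ZOS2": 125.0}; total still 500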

46 Goals Should Reflect Reality (SCP = Service Class Period; WLM = Workload Manager)
MAKE SURE YOUR SCP GOALS AND IMPORTANCE REFLECT REALITY AT THE LPAR CLUSTER LEVEL! Because WLM thinks you knew what you were doing!

47 Goals / Reality Continued (SCP = Service Class Period; WLM = Workload Manager; LP - Logical Partition)
 In the past, WLM goal mode SCPs on your "test" or "development" LPs had no impact on "production" LPs. Now, if they are part of the same LPAR cluster, IRD will take resource away from (decrease the weight of) a "production" LP and add resource to (increase the weight of) a "test" LP to meet the goal set for a higher-importance SCP on the "test" LP
 Develop the service policy as though all SCPs were running on a single system image

48 Workload Manager Level of Importance
 WLM uses the level of importance YOU assign to make resource allocation decisions!
[Diagram: example workloads (CICS, WEBSITE, DAVE'S STUFF, BOB'S STUFF, GUTTER WORK) ranked from Importance 1 through Importance 5]

49 WLM Vary CPU Management (Logical CP - Logical Processor; LP - Logical Partition)
 Varies logical CPs online/offline to LPs
 Goals: higher effective logical CP speed; less LPAR overhead and switching
 Characteristics: aggressive when varying a logical CP online, conservative when varying a logical CP offline
 Influenced by IRD LP weight adjustments

50 Vary CPU Algorithm Parameters (LP - Logical Partition; CP - Central Processor; Logical CP - Logical Processor)
 Only initially online logical CPs are eligible; those varied offline by an operator are not available
 If a z/OS LP is switched to compatibility mode, all IRD weight and vary-logical-CP adjustments are "undone" and the LP reverts to its initial CP and weight settings

51 What is Online Time? (LCP = Logical CP = Logical Processor)
Example: a ZOS2 partition with LCPs 0-2 over a 30-minute RMF interval, with LCP 2 varied offline during the interval.
PAST (pre-IRD): the length of the interval was the MAX time that each LCP could be dispatched (interval time = online time). RMF reported the actual dispatch time, but only indicated that LCP 2 was not online at the end of the interval.
PRESENT (IRD): RMF reports the online time for each LCP of each partition.
[Diagram: per-LCP dispatch-time and online-time bars across the 30-minute interval]

52 Vary CPU - CPU Percent Busy
 Before IRD CPU Vary Management:
CPU Time = Interval Time - Wait Time
CPU % Busy = (CPU Time * 100) / (Interval Time * No. of Processors)
 After IRD CPU Vary Management:
CPU Time = Online Time - Wait Time
CPU % Busy = (CPU Time * 100) / Total Online Time
 New SMF70ONT field: the total time a processor was online to the LP during the RMF interval
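A small sketch of the two calculations, aggregated across LCPs (only SMF70ONT is a real field name from the slide; the sample numbers are invented for illustration):

    def pct_busy_pre_ird(interval_s, waits_s, n_lcps):
        # Pre-IRD: every defined LCP counts for the whole interval.
        cpu = interval_s * n_lcps - sum(waits_s)
        return 100.0 * cpu / (interval_s * n_lcps)

    def pct_busy_ird(online_s, waits_s):
        # IRD: the denominator is total online time (sum of per-LCP SMF70ONT),
        # so an LCP varied offline mid-interval no longer deflates the figure.
        cpu = sum(online_s) - sum(waits_s)
        return 100.0 * cpu / sum(online_s)

    # LCP 2 online for only 900 s of an 1800 s interval:
    pct_busy_pre_ird(1800, [200, 300, 1200], 3)       # ~68.5%
    pct_busy_ird([1800, 1800, 900], [200, 300, 100])  # ~86.7%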

53 Dynamic Channel-Path Management (DCM)
 Goals:
– Better I/O response (less pend) time
– I/O configuration definition simplification
– Reduces the need for more than 256 channels
– Enhanced availability
– Reduced management
 Operates in both goal and compatibility modes

54 DCM Prerequisites (DCM = Dynamic Channel-Path Management; CPC = Central Processor Complex)
 Systems running on a z900 or later CPC
 Running z/OS 1.1 in 64-bit mode
 Both LPAR and basic mode supported
 To share managed channels on the same CPC, systems must be in an LPAR cluster
 Balance mode works in either compatibility or goal mode
 Goal mode requires WLM goal mode and that global I/O priority queuing be selected

55 IRD DCM Balance Mode (DCM = Dynamic Channel-Path Management; CPC = Central Processor Complex)
Stated simply, the goal of IRD DCM is to evenly distribute I/O activity across all channel paths attached to the CPC.
[Charts: channel-path percent-busy distribution before and after IRD DCM]

56 DCM Balance Mode (DCM = Dynamic Channel-Path Management; CU = Control Unit)
 Moves channel bandwidth where needed
 Simplified configuration definition: for managed CUs, define 2 non-managed paths plus n managed paths to meet peak workload
 DCM balance mode removes paths from non-busy CUs and adds paths to busy CUs
 Currently manages paths to DASD CUs
 New metric: I/O Velocity

57 I/O Velocity (LCU = Logical Control Unit; CF = Coupling Facility; DCM = Dynamic Channel-Path Management)
I/O Velocity = LCU productive time / (LCU productive time + channel contention delays)
 Calculated by WLM and the IOS
 Uses the same CF structure as CPU management
 Calculated for every LCU in the LPAR cluster to compute a weighted average
 DCM attempts to ensure all managed LCUs have similar I/O velocities
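The ratio itself, as a one-liner (inputs in any consistent time unit):

    def io_velocity(productive, contention_delay):
        # 1.0 = no channel contention; lower values mean more path delay.
        return productive / (productive + contention_delay)

    io_velocity(90.0, 10.0)  # 0.9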

58 DCM Goal Mode (LCU = Logical Control Unit; SSCP = Suffering Service Class Period; SCP = Service Class Period; PI = Performance Index)
 All LPs in the I/O cluster must be in WLM goal mode
 During the policy adjustment routine, WLM selects an SSCP:
– IF I/O delays are the problem, and increasing I/O priority does not help, and adding an alias for PAV volumes will not help, and I/O requests are suffering channel contention...
– THEN WLM estimates the impact of increasing the LCU I/O velocity and, if it benefits the SCP PI, sets an explicit velocity for the LCU
 An explicit velocity overrides balance mode

59 Channel Subsystem (CSS) I/O Priority Queueing (SAP = System Assist Processor)
 Dynamically manages channel subsystem I/O priority against WLM policy goals
 Only meaningful when I/Os are queued
 Supports prioritization of: I/Os waiting for a SAP; I/Os waiting for a channel
 Previously, these delays were handled FIFO

60 Channel Subsystem I/O Priority Queueing (cont'd) (SAP = System Assist Processor; SCP = Service Class Period)
 Uses a donor/receiver strategy
 CSS I/O priority setting (waiting for a SAP):
– System SCPs are assigned the highest priority
– I/O-delayed SCPs missing goals are next
– When meeting goals, light I/O users rank higher; discretionary work has the lowest priority
 UCB and CU I/O priority setting (waiting for a channel):
– System SCPs are assigned the highest priority
– I/O-delayed SCPs missing goals are next
– The less important SCP is the donor

61 IBM License Manager (ILM) (CPC = Central Processor Complex; LP = Logical Partition)
 PROBLEMS: software charges are based on CPC size; CPCs are getting bigger; workloads are more erratic (eBusiness spikes)
 SOLUTION: a new LP setup option, Defined Capacity; software is priced at the LP's defined capacity

62 IBM License Manager (ILM) (MSU = Millions of Service Units; LP = Logical Partition)
 z900 servers running z/OS V1R1+
 New external: "Defined Capacity" for shared LPs
 Defined capacity is expressed in MSUs (millions of service units per hour)
 WLM compares the rolling 4-hour average MSU use with the defined capacity
 When the rolling 4-hour MSU usage exceeds the defined capacity, WLM tells PR/SM to "soft cap" the LP (really a temporary hard cap)

63 Rolling Four-Hour Average
[Chart: actual MSUs consumed per interval versus the rolling 4-hour average]
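A minimal sketch of the capping check described on slide 62 (the sampling cadence and class name are assumptions of this sketch; WLM's actual bookkeeping is continuous):

    from collections import deque

    class SoftCapCheck:
        """Rolling 4-hour average of MSU samples vs. defined capacity."""
        def __init__(self, defined_capacity_msu, samples_per_hour=12):
            self.cap = defined_capacity_msu
            self.window = deque(maxlen=4 * samples_per_hour)  # 4-hour window

        def observe(self, msu_sample):
            self.window.append(msu_sample)
            rolling_avg = sum(self.window) / len(self.window)
            # True -> WLM would tell PR/SM to "soft cap" the LP
            # (really a temporary hard cap, per slide 62).
            return rolling_avg > self.cap

    check = SoftCapCheck(defined_capacity_msu=100)
    check.observe(175)  # True once the 4-hour average exceeds 100 MSU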

64 IBM License Manager (ILM) (MSU = Millions of Service Units; Logical CP = Logical Processor)
 For software pricing, IBM uses the following:
– Dedicated LPs: logical CPs * engine MSU
– PR/SM hard cap: shared pool % * engine MSU
– Defined Capacity: the defined capacity itself
– Basic mode: the model's MSU rating
 www.ibm.com/servers/eserver/zseries/srm/

65 IBM License Manager (ILM)
If ZOS1 is set with a defined capacity of 100 MSU, the gap up to the machine's capacity is "white space": the ability to handle spikes and unpredictable demand.
[Chart: defined capacity marked at 100 MSU, white space above it up to 175 MSU]

66 Capacity Upgrade on Demand (CUoD = Capacity Upgrade on Demand; PU = Processor Unit; CP = Central Processor; ICF = Integrated Coupling Facility; LP = Logical Partition; CPC = Central Processor Complex)
 CUoD supports non-disruptive activation of PUs as CPs or ICFs in a CPC
 Specify "Reserved Processors"
 When an upgrade is made (e.g., 105 to 106), LPs with reserved processors can begin using them immediately after an operator command
 IBM recommends specifying as many reserved processors as the model supports

67 CPs Defined = Model Number
z900 Model 2064-105: 5 PUs configured as CPs = Model 105.
[Diagram: 12-PU MCM with 5 CPs, a SAP, and unconfigured PUs]

68 Shared LPs
[Screenshot: HMC Image Profile for ZOS1]

69 CPs Defined = Model Number
z900 Model 2064-106: 5 + 1 PUs configured as CPs = Model 106.
[Diagram: 12-PU MCM with 6 CPs, a SAP, and unconfigured PUs]

70 LP Weights - LP Weight %
ZOS1 image weight 400; ZOS2 image weight 100.
Total of LP weights = 400 + 100 = 500
ZOS1 LP weight % = 100 * 400/500 = 80%
ZOS2 LP weight % = 100 * 100/500 = 20%
[Diagram: CPC memory, PR/SM LPAR LIC, shared CP pool (now 6 CPs)]

71 LP Target CPs
ZOS1 weight % = 80%; target CPs = 0.8 * 6 = 4.8 CPs
ZOS2 LP weight % = 20%; target CPs = 0.2 * 6 = 1.2 CPs
[Diagram: CPC memory, PR/SM LPAR LIC, shared pool of 6 CPs]

72 LP Logical CP Share (Logical CP - Logical Processor; CP - Central Processor)
 ZOS1 LP is guaranteed 4.8 physical CPs
– ZOS1 can dispatch work to 5 logical CPs
– Each ZOS1 logical CP gets 4.8/5 or 0.96 CP
– ZOS1 effective speed = 0.96 of potential speed
 ZOS2 LP is guaranteed 1.2 physical CPs
– ZOS2 can dispatch work to 3 logical CPs
– Each ZOS2 logical CP gets 1.2/3 or 0.40 CP
– ZOS2 effective speed = 0.40 of potential speed
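Re-running the lp_share sketch from slide 22 with the sixth CP in the pool reproduces these numbers:

    lp_share({"ZOS1": 400, "ZOS2": 100}, pool_cps=6,
             lcps={"ZOS1": 5, "ZOS2": 3})
    # ZOS1: (80.0, 4.8, 0.96)   ZOS2: (20.0, 1.2, 0.4)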

73 Conclusions
 Large-system resource, configuration, and workload management tasks are shifting towards intelligent, automated, dynamic WLM functions
 The responsibility of capacity planners and performance analysts is shifting towards a better understanding of business workloads' relative importance and performance requirements

74 NeuMICS Support
 April 2002 PSP: CAP - new IRD and ILM planning applications; PER - new MSU and soft capping analysis
 October 2001 PSP: RMF6580 - IRD, ILM, and CUoD
 April 2001 PSP: RMF6560 - z/OS (64-bit) multisystem enclaves, USS Kernel, and 2105 cache controllers
 October 2000 PSP: RMF6540 - ICF, IFL, and PAV support

75 References
 Parallel Sysplex Overview: Introducing Data Sharing and Parallelism in a Sysplex (SA22-7661-00)
 Redbook: z/OS Intelligent Resource Director (SG24-5952-00)
 IBM e-server zSeries 900 and z/OS Reference Guide (G326-3092-00)
 zSeries 900 Processor Resource/Systems Manager Planning Guide (SB10-7033-00)

76 Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, or other countries, or both: CICS, DB2, ESCON, FICON, IBM, IMS, MVS, MVS/ESA, OS/390, Parallel Sysplex, Processor Resource/Systems Manager, PR/SM, RMF, SMF, Sysplex Timer, S/390, VSE/ESA, zSeries, z/OS

77 Thanks! Questions ??? Darrell Faulkner NeuMICS Development Manager Computer Associates Email: darrell.faulkner@ca.com

