LPAR Capacity Planning Update
Al Sherkow, I/S Management Strategies, Ltd. (414)
Copyright © I/S Management Strategies, Ltd., all rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written consent of the copyright owner. The OS/390 Expo and Performance Conference is granted a non-exclusive license to copy, reproduce or republish this presentation in whole or in part for conference handouts, the conference CD, the conference web site, and other activities only. I/S Management Strategies, Ltd. retains the right to distribute copies of this presentation to whomever it chooses.
Session P12. Updated presentation available at
©I/S Management Strategies, Ltd., Trademarks
These trademarks may be used throughout this presentation:
–Parallel Sysplex, PR/SM, Processor Resource/System Manager, OS/390*, S/390* are trademarks of IBM Corporation
–Other trademarks that may be used are the property of their respective owners
*Registered trademarks
©I/S Management Strategies, Ltd., Goal and Objective
Today's use of your resources
–Visualization of LPARs
–Visualization of Parallel Sysplexes
–My experience: this is the most difficult part and often misrepresents the true use of resources
What will change
Preparing a plan
–Updating the visualization
Oct Announcements! Dropped some background slides!
©I/S Management Strategies, Ltd., Consumption of Resources
Consumers of Resources
Growth of Business and Workload
–same problems and issues we've always had: magnitude of requirement, time of requirement
Limits
–usually hardware, but could be software, database
–batch windows
–time for planned outages
What is Capacity Planning?
"Predictions are always tricky, especially about the future." Yogi Berra
©I/S Management Strategies, Ltd., Important?? 15Feb1999
Why You May Need Capacity Planning:
–It reduces the constant need for upgrades
–It lets you better use idle capacity
–It allows better management of hardware
©I/S Management Strategies, Ltd., Important, or Not? 21Feb2000
Choose an architecture that can scale to at least 20 times what you really think you'll need in six months?
When was the last time anyone chewed you out for having too much disk-drive capacity?
©I/S Management Strategies, Ltd., Resources Working Together
Processors
–Number of CPs
–Memory
–Channels and Links
Coupling Facilities
–Also have CPs, memory and links
I/O
Keep Them Balanced
©I/S Management Strategies, Ltd., First, Is The Performance OK?
What are the desired goals for the workloads?
Were the goals met?
What response time and throughput were achieved?
If a goal was not met, determine why not.
©I/S Management Strategies, Ltd., LPAR Overview
©I/S Management Strategies, Ltd., Available Time on Physical System
©I/S Management Strategies, Ltd., LPAR Definitions
The examples use a 5-way system with 3 partitions.
Whether it is IBM, Amdahl or Hitachi does not matter for this discussion.
©I/S Management Strategies, Ltd., Workload's Current Use
©I/S Management Strategies, Ltd., Current Use of the Physical System
©I/S Management Strategies, Ltd., Current Use of the Physical System
©I/S Management Strategies, Ltd., Trend Important Partition
©I/S Management Strategies, Ltd., Represent Engines as Columns
All the physical engines support all the logical engines
As the utilization of the physical box approaches 100%, the processing weights are used
LPAR capacity is limited by the number of logical engines
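To make the weight-versus-logical-engine bound concrete, here is a minimal Python sketch. Only the 5-way, 3-partition shape comes from the examples in these slides; the partition names, weights and logical CP counts are assumptions for illustration.

# Minimal sketch: in a busy CEC an LPAR's usable capacity is bounded both by
# its processing weight (its share of the physical engines) and by its number
# of logical CPs. Partition names and numbers below are illustrative only.

def lpar_capacity(physical_cps, partitions):
    """Return each partition's weight-guaranteed share and logical-CP cap,
    both expressed in physical-engine equivalents."""
    total_weight = sum(p["weight"] for p in partitions)
    results = {}
    for p in partitions:
        weight_share = physical_cps * p["weight"] / total_weight
        # At 100% box utilization the partition cannot exceed either bound.
        results[p["name"]] = {
            "weight_share": round(weight_share, 2),
            "logical_cap": p["logical_cps"],
            "effective_cap": round(min(weight_share, p["logical_cps"]), 2),
        }
    return results

# Example: the 5-way, 3-partition configuration used in these slides
# (the weights and logical CP counts here are assumed).
parts = [
    {"name": "PART_A", "weight": 50, "logical_cps": 3},
    {"name": "PART_B", "weight": 30, "logical_cps": 2},
    {"name": "PART_C", "weight": 20, "logical_cps": 2},
]
for name, caps in lpar_capacity(5, parts).items():
    print(name, caps)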
©I/S Management Strategies, Ltd., Make It Easier to Understand
Arrows highlight the logical push point between Part A and Part C
©I/S Management Strategies, Ltd., Average Growth Over Time
©I/S Management Strategies, Ltd., Add Percentiles
©I/S Management Strategies, Ltd., Add Max to Percentiles
©I/S Management Strategies, Ltd., Consider: Averages or Percentiles?
Averages are not representative of your workload
In a week of 8-hour shifts with 15-minute intervals there are 160 samples; 10%, or 16 samples (4 hours), are busier than the P90 value
Many would argue that in today's e-world you should use P95, P99 or even MAX
Percentiles represent the peaks better
But percentiles are very hard to explain to anyone, technical or management
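A minimal sketch of the averages-versus-percentiles point, assuming 160 fifteen-minute utilization samples; the sample data is generated, not real measurements.

# Minimal sketch: from a set of 15-minute utilization samples, the average
# hides the peaks that P90/P95/P99 (and MAX) expose. Data is made up.
import random

random.seed(1)
# 160 fifteen-minute samples, e.g. a week of 8-hour prime shifts.
samples = [random.betavariate(6, 3) * 100 for _ in range(160)]

def percentile(values, pct):
    """Return the pct-th percentile (nearest-rank) of values."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

print(f"average: {sum(samples) / len(samples):5.1f}%")
for p in (90, 95, 99):
    print(f"P{p}:     {percentile(samples, p):5.1f}%")
print(f"max:     {max(samples):5.1f}%")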
©I/S Management Strategies, Ltd., Do Peaks Matter?
©I/S Management Strategies, Ltd., Waiting for CPU? Top Line is Partition Busy Bottom Line is PCTRDYWT
©I/S Management Strategies, Ltd., Waiting for CPU?
Top line: whole box busy; spiky line: PctRdyWt; bottom line: LPAR busy
©I/S Management Strategies, Ltd., Change Across an Upgrade
©I/S Management Strategies, Ltd., LPAR Review
Views of available time
One workload
One LPAR, one LPAR of many
Trending
Representing logicals on physicals
Averages, percentiles and peaks
Latent demand: Pct Ready Wait
©I/S Management Strategies, Ltd., Why You Want Parallel Sysplex
Up to 32 OS/390 images managed as one
Single image to applications
Price/performance
Granularity
Scalability
Availability
©I/S Management Strategies, Ltd., Sizing Coupling Facilities
Data sharing CFs should have MIPS that are 8% of the total in the Sysplex, or 10% of the data sharing workload. Try for %Busy < 50%
Memory: try out the CF Structure Sizer on IBM's website
Links: for redundancy, two from each image; watch power boundaries and SAPs. Monitor RMF to determine if more are needed
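A minimal sketch of the CF sizing rule of thumb on this slide, in Python. The MIPS figures are assumptions for illustration, and taking the larger of the two estimates is an assumption about how to combine the "8% or 10%" rule.

# Minimal sketch: data sharing CFs should have roughly 8% of the total MIPS in
# the sysplex, or 10% of the data sharing workload, and stay under ~50% busy.
# All numbers below are illustrative assumptions.

def cf_mips_needed(total_sysplex_mips, data_sharing_mips):
    """Rule-of-thumb CF MIPS, taken here as the larger of the two estimates."""
    return max(0.08 * total_sysplex_mips, 0.10 * data_sharing_mips)

def cf_busy_pct(cf_load_mips, cf_capacity_mips):
    """Percent busy of the configured CF capacity (target: < 50%)."""
    return 100.0 * cf_load_mips / cf_capacity_mips

needed = cf_mips_needed(total_sysplex_mips=2000, data_sharing_mips=1200)
print(f"CF MIPS needed (rule of thumb): {needed:.0f}")
print(f"CF busy at that load on a 300 MIPS CF: {cf_busy_pct(needed, 300):.0f}%")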
©I/S Management Strategies, Ltd., Recovery Issues
Avoid single points of failure
–Two CFs, two CPs in each CF, two Sysplex Timers, multiple links, couple data sets on separate DASD subsystems
Build failure-independent configurations
–Cannot rebuild ISGLOCK if the left system is lost (ISGLOCK is for GRS Star)
©I/S Management Strategies, Ltd., What Does It Cost? [1]
CPU effect varies based on
–data sharing workloads: how much of the system accesses shared data
–type of hardware for links, CFs and CPUs
–number of images: each adds about 1/2%
System level
–resource sharing: 3% more
–data sharing: stress testing 15% to 20%, typical production 5% to 11%
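A minimal sketch applying the cost figures on this slide. The slide does not say exactly how the per-image cost combines with the base cost, so this sketch simply adds about 0.5% per image to the production data sharing range; image counts are illustrative.

# Minimal sketch: typical production data sharing costs 5% to 11%, plus about
# 0.5% per image. How the per-image cost combines is an assumption here.

def sysplex_overhead_pct(base_pct, n_images):
    """Estimated CPU overhead: base data sharing cost plus ~0.5% per image."""
    return base_pct + 0.5 * n_images

for images in (2, 4, 8):
    low = sysplex_overhead_pct(5.0, images)
    high = sysplex_overhead_pct(11.0, images)
    print(f"{images} images: roughly {low:.1f}% to {high:.1f}% CPU overhead")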
©I/S Management Strategies, Ltd., What Is Making Decisions?
Two Sysplexes via their 7 WLMs
Three partitioners
Who sets the capacity?
–The site, through the # of LPs and weights
–The site, through Goals
–The partitioner does not know your goals
–The WLM tries to satisfy your goals, but may be limited by the # of LPs
©I/S Management Strategies, Ltd., What Can Push? 3 physical CPs, 2 LPs assigned to TEST 1 with weight of 33%, 3 LPs assigned to PROD 1 with weight of 66% Can the Parallel Sysplexes move the line, or only the partitioner?
©I/S Management Strategies, Ltd., What Can Push (IRD)?
IRD provides LPAR Clusters
WLM talks to PR/SM
z900, z/OS in z/Architecture mode
Optimizes CPU and channels across LPARs
©I/S Management Strategies, Ltd., IRD-Channels
Channels
–Dynamic Channel-path Management
Monitors I/O to LCUs
Can add or remove paths to an LCU
Monitored with I/O Velocity: 100 * (device connect) / (device connect + channel pend time)
Managed channels must go through a switch
Managed channels are available to only one LPAR Cluster
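A minimal sketch of the I/O Velocity measure quoted on this slide; the connect and pend times below are made-up values in whatever consistent unit RMF reports.

# Minimal sketch: I/O Velocity = 100 * device_connect / (device_connect +
# channel_pend). A value of 100 means paths never made I/O wait.

def io_velocity(device_connect, channel_pend):
    """I/O velocity for an LCU, per the formula on this slide."""
    return 100.0 * device_connect / (device_connect + channel_pend)

print(io_velocity(device_connect=42.0, channel_pend=8.0))   # 84.0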
©I/S Management Strategies, Ltd., IRD-Channels
Channels
–Channel Subsystem Priority Queuing
z900, Basic or LPAR mode
z/OS sets this based on Goal Mode policies
–different calculation than WLM's I/O priorities
–user sets up to 8 different values
If 2 or more I/O requests are queued in the channel subsystem, the CSS microcode honors priority order
©I/S Management Strategies, Ltd., IRD-CPU Management
Manages processor weighting and the number of LPs in an LPAR Cluster by Goal policies
Sum of the partitions' weights is viewed as a pool, controlled by the site
Value:
–Engines run with less interference because of fewer time slices
–Reduced overhead with fewer LPs
–Lets PR/SM understand the Goals
New data: partition min, max and avg weight, time at min and time at max
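A minimal sketch of the weight-pool idea: within an LPAR Cluster the sum of the partitions' weights stays fixed while weight shifts between members, subject to the site-set minimum and maximum. The partition names, weights and limits below are illustrative assumptions, not IRD internals.

# Minimal sketch: shift weight between cluster members without changing the
# pool total, respecting each member's min/max. All values are assumed.

def shift_weight(cluster, donor, receiver, amount):
    """Move weight between two cluster members; the pool sum is unchanged."""
    new_donor = cluster[donor]["weight"] - amount
    new_receiver = cluster[receiver]["weight"] + amount
    if new_donor < cluster[donor]["min"] or new_receiver > cluster[receiver]["max"]:
        return cluster  # change would violate the site-set limits; leave as is
    cluster[donor]["weight"] = new_donor
    cluster[receiver]["weight"] = new_receiver
    return cluster

cluster = {
    "PRODA": {"weight": 60, "min": 40, "max": 80},
    "PRODB": {"weight": 40, "min": 20, "max": 60},
}
pool_before = sum(m["weight"] for m in cluster.values())
shift_weight(cluster, donor="PRODA", receiver="PRODB", amount=10)
assert pool_before == sum(m["weight"] for m in cluster.values())
print(cluster)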
©I/S Management Strategies, Ltd., IRD-CPU Management
Clustering does not communicate between different Parallel Sysplexes
A single Parallel Sysplex can have LPAR Clusters on multiple CECs
A single CEC can have multiple LPAR Clusters belonging to separate Parallel Sysplexes
©I/S Management Strategies, Ltd., IRD-CPU Management Controls
WLM CPU management functions can be enabled/disabled on an LPAR basis
Minimum and maximum partition weight
Partition weight is renamed to initial partition weight
©I/S Management Strategies, Ltd., What Can Push?
©I/S Management Strategies, Ltd., What Can Push?
©I/S Management Strategies, Ltd., Engine Allocation
©I/S Management Strategies, Ltd., Average Utilization/Available
©I/S Management Strategies, Ltd., Capacity for Handling Peaks
©I/S Management Strategies, Ltd., Trend Similar to LPAR
©I/S Management Strategies, Ltd., Software Pricing
z/OS, z900
–Charges based on LPAR capacity
–New external: Defined Capacity
The rolling 4-hour average is limited by the Defined Capacity
Too much demand leads to a soft cap
–Can run in exception mode! Records are generated
–White space: can have engines without LPARs, available for spikes, handled through the 4-hour rolling average
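A minimal sketch of the Defined Capacity mechanism described on this slide: the rolling 4-hour average of MSU consumption is compared with the defined capacity, and sustained demand above it leads to a soft cap. The interval length, MSU history and defined capacity below are assumptions for illustration.

# Minimal sketch: compare the rolling 4-hour average MSU consumption with the
# LPAR's defined capacity. All numbers are illustrative assumptions.

def rolling_4hr_average(msu_samples, interval_minutes=15):
    """Average over the most recent 4 hours of interval samples."""
    window = int(4 * 60 / interval_minutes)
    recent = msu_samples[-window:]
    return sum(recent) / len(recent)

defined_capacity = 150           # MSUs, set by the site
history = [120] * 8 + [190] * 8  # 2 hours at 120 MSUs, then 2 hours at 190
avg = rolling_4hr_average(history)
print(f"4-hour rolling average: {avg:.0f} MSUs")
print("soft cap" if avg > defined_capacity else "within defined capacity")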
©I/S Management Strategies, Ltd., Software Pricing: Why White Space (figure)
A 100% busy zSeries CEC of 280 MSUs, engines of 40 MSUs each, running a CICS workload and a DB2 workload
LPAR1 limit: 6*40 = 240 MSUs
LPAR2 limit: 3*40 = 120 MSUs
The limit of capacity is the # of LPs or the weight
©I/S Management Strategies, Ltd., Software Pricing: White Space (figure)
On the same zSeries CEC of 280 MSUs: LPAR1 defined 150 MSUs, LPAR2 defined 75 MSUs, white space 55 MSUs
Certificates: CICS 225 MSUs, z/OS 225 MSUs, DB2 75 MSUs
Sum of the LPARs must be less than the physical box
White space is not defined; it is left over by your configuration
Must use LPARs
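A minimal sketch of the white-space arithmetic on this slide: white space is the MSU capacity of the physical box left over after the LPARs' defined capacities, and the sum of the defined capacities must fit in the box. The figures are taken from the slide.

# Minimal sketch: white space = box MSUs minus the sum of defined capacities.

def white_space(box_msus, defined_capacities):
    total_defined = sum(defined_capacities.values())
    if total_defined > box_msus:
        raise ValueError("defined capacities exceed the physical box")
    return box_msus - total_defined

# From the slide: a 280 MSU zSeries box, LPAR1 at 150 MSUs and LPAR2 at
# 75 MSUs, leaving 55 MSUs of white space.
print(white_space(280, {"LPAR1": 150, "LPAR2": 75}))   # 55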
©I/S Management Strategies, Ltd., Summary
LPAR
–Capacity controlled by the # of CPs
–Flexible to 100% busy
–WLMs do not talk to the partitioners
IRD
–Capacity on Demand may be the writing on the wall
Parallel Sysplex
–WLMs in separate Sysplexes do not talk to each other
–Your Sysplexes have goals that must be managed by you
–Handling peaks is more important than ever!
©I/S Management Strategies, Ltd., Al Sherkow (414) Questions?
References:
1. King, Gary. OS/390 Conference Session P15. Oct
2. Kelley, Joan. Many Coupling Facility presentations.
3. IBM's Parallel Sysplex website:
4. IBM eServer zSeries 900 Technical Guide. Oct 2000. SG
5. Workload License Charges & IBM License Manager (Expo Oct 2000).
"Statistics are merely numbers and have no control over actual events." PBS, Savage Planet, Storms of the Century, 06/17/2000