1 Software System Performance 2
CS 560 Lecture 12

2 Analyzing system performance
Five common issues when a performance problem exists:
The product wasn't developed using performance testing requirements.
When a problem develops or is found, no one takes responsibility.
Developers don't use (or don't know about) tools that are available to solve the problem.
After developing a list of possible causes, the major problems are not eliminated, and developers can't determine their priority.
Developers often don't have the patience to examine the large amounts of data generated by performance testing.

3 Bob Wescott's Rules of Performance (author of "The Every Computer Performance Book")
The less a development team knows about the work their system did in the last five minutes, the worse off they are.
If you can't measure it, you can't manage it.
What you fail to plan for, you are condemned to endure. Bad things WILL happen.
Always preserve and protect the raw performance data. If you massage it too much, the data will lose its meaning.
You'll do this again. Always take time to make things easier for your future self. Write down everything that you do.
Never offer more than two possible solutions, or discuss more than three.
KISS (keep it simple).

4 Performance Engineering Motivation
Lots of questions related to performance:
Can you get performance for free? Does it naturally come from a "good" design?
Can you add performance at the end of the project?
Are performance problems {easy | hard} to fix?
Is it easier to design for quality or performance?
What are the politics of performance estimation? What if the project doesn't meet the performance goal?
How is outside project management involved? Do they emphasize performance, quality, or project scheduling?

5 Performance Engineering Motivation
Comments/excuses on performance engineering:
It leads to more development time.
There will be maintenance problems (due to "tricky code").
Performance problems are rare.
It doesn't matter on this project since few people will use it.
It can be solved with (relatively) inexpensive hardware.
We can tune it later.

6 Performance Engineering Motivation
Performance engineering reality:
MANY systems initially perform TERRIBLY because they weren't well designed.
Problems are often due to fundamental architectural or design factors rather than inefficient code.
Performance problems are visible and memorable.
It's possible to avoid being surprised by the performance of the finished product.

7 Performance Engineering Motivation
Costs of performance engineering:
Time is required by the development team to analyze system/component performance metrics.
Time is required for performance modifications to systems/components.
Time cost of learning needed skills to increase performance.

8 Characterization of Performance Metrics
Reliability: A system X always outperforms a system Y if the performance metric(s) indicates that X always outperforms Y.
Repeatability: The same value of the metric is measured each time the same experiment is performed.
Consistency: Units of the metrics are the same across different systems and different configurations of the same system.
Ease of measurement: If a metric is hard to measure, it is unlikely anyone will actually use it.
Independence: Metrics should not be defined to favor particular systems.

9 Performance as Time Metrics
Time between the start and the end of an operation.
Also called running time, elapsed time, wall-clock time, response time, latency, execution time, ...
The most straightforward measure: "Component X takes 2.5 s (on average) using a Cortex-A53 CPU running at 1.2 GHz."
Must be measured on a "dedicated" machine (Raspberry Pi 3).
Time performance metrics will differ on other machines: component X time metrics on a Pi 3 vs. Pi 2 vs. Pi 1, ...
Research the Linux "time" command; a minimal timing sketch follows below.
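As a concrete illustration, a wall-clock measurement can be taken directly in code; the minimal Python sketch below times a hypothetical stand-in function, component_x, and the numbers will of course differ between a Pi 3, a Pi 2, and a desktop machine. On the command line, the Linux time command (e.g., time ./my_program) reports real (wall-clock), user, and sys time for a whole process.

    import time

    def component_x():
        # Hypothetical stand-in for the component being measured.
        return sum(i * i for i in range(1_000_000))

    start = time.perf_counter()   # monotonic, high-resolution wall-clock timer
    component_x()
    elapsed = time.perf_counter() - start
    print(f"Component X elapsed time: {elapsed:.3f} s")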

10 Performance as Rate Metrics
Often used so that performance is independent of the "size" of the application.
Ex: compressing a 1 MB file takes 1 minute and compressing a 2 MB file takes 2 minutes; the rate, and therefore the performance, is the same.
MFLOPS: millions of floating-point operations per second. A very popular assessment of hardware, but often misleading: an algorithm with a high MFLOPS rate can still deliver poor application performance if it is poorly designed.
Application-specific rates:
Number of frames rendered per second.
Number of database transactions per second.
Number of HTTP requests served per second.
Application-specific metrics are often preferred over other metrics.
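To make the rate idea concrete, the minimal sketch below (hypothetical numbers) turns a measured size and elapsed time into a rate metric; both files from the compression example come out to the same MB/min, so their performance is judged the same.

    def rate_mb_per_min(megabytes: float, minutes: float) -> float:
        # Rate metric: amount of work per unit time, independent of input size.
        return megabytes / minutes

    print(rate_mb_per_min(1.0, 1.0))   # 1.0 MB/min for the 1 MB file
    print(rate_mb_per_min(2.0, 2.0))   # 1.0 MB/min for the 2 MB file -> same performance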

11 Performance Monitor (Documentation)
A monitor is a tool used to observe the activities on a system. In general, a monitor:
Makes measurements on the system (observation)
Collects performance statistics (collection)
Analyzes the data (analysis)
Displays results (presentation)
Why do you need a monitor?
A programmer wants to find frequently used segments of a program in order to optimize them.
A system administrator wants to measure resource utilization in order to find performance bottlenecks.

12 Performance Monitor (Documentation)
Each group should implement a (web-based) admin interface. This interface should provide real-time results from two components developed for the project. Examples:
Average number of HTTP requests per hour
Number of page visits for each page
Number of user logins for a given time span
Current number of active users
Current CPU/RAM utilization
Reliability metrics (number of faults per transaction or request)
Docker container(s) status (active, unresponsive, etc.)
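One possible shape for such an interface is a small HTTP endpoint that samples the host and reports application counters. The sketch below assumes Flask and psutil purely for illustration (neither is required by the project), and the in-memory request counter is a hypothetical stand-in for real per-component metrics such as page visits or logins.

    # Minimal sketch of a web-based admin metrics endpoint (assumes Flask and psutil).
    import time
    import psutil
    from flask import Flask, jsonify

    app = Flask(__name__)
    START = time.time()
    REQUEST_COUNT = 0  # hypothetical stand-in for real per-component metrics

    @app.before_request
    def count_request():
        global REQUEST_COUNT
        REQUEST_COUNT += 1

    @app.route("/admin/metrics")
    def metrics():
        hours_up = (time.time() - START) / 3600
        return jsonify({
            "cpu_percent": psutil.cpu_percent(interval=0.1),  # current CPU utilization
            "ram_percent": psutil.virtual_memory().percent,   # current RAM utilization
            "requests_total": REQUEST_COUNT,
            "avg_requests_per_hour": REQUEST_COUNT / hours_up if hours_up > 0 else 0.0,
        })

    if __name__ == "__main__":
        app.run(port=8080)

A real admin page would typically poll an endpoint like this from the browser and render the numbers as a dashboard.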

13 Performance Engineering key points (Documentation)
Set measurable performance objectives:
Unambiguous: specify objectives in terms of CPU time, I/Os, etc., and specify the environment that will be used.
Measurable: every performance goal must have an associated, precisely defined measurement.
Ex: database handles 1,200 trans/sec with local storage, 950 trans/sec with remote storage.
Ex: Wi-Fi throughput of 48.2 Mbps at 45 ft with no obstacles.
Ex: port scanner completes a scan in 200 ms, including updating data structures with port information.
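One way to keep an objective both unambiguous and measurable is to encode it as an automated check. The sketch below is purely illustrative, with hypothetical numbers taken from the database example above.

    # Hypothetical objective from the requirements: >= 1200 trans/sec with local storage.
    OBJECTIVE_TPS = 1200

    def check_throughput(measured_tps: float) -> None:
        # Fail loudly when a measured value misses the stated objective.
        if measured_tps < OBJECTIVE_TPS:
            raise AssertionError(
                f"Objective not met: {measured_tps} trans/sec < {OBJECTIVE_TPS} trans/sec")
        print(f"Objective met: {measured_tps} trans/sec >= {OBJECTIVE_TPS} trans/sec")

    check_throughput(1215.0)   # this value would come from an actual load-test run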

14 Common Performance Metrics
Function/system throughput: measures the amount of data successfully processed or output. Maximum throughput depends on software/hardware efficiency.
Common throughput measurements:
Maximum: the theoretical largest quantity of data that can be processed in a short period of time under ideal circumstances.
Maximum sustained: the theoretical largest quantity of data that can be processed over a long period of time under ideal circumstances.
Peak: sometimes called instantaneous throughput; the directly measured maximum throughput over a short period of time.
Example: users browsing an e-commerce website.
(Maximum sustained throughput) The web server can sustain 1,500 HTTP requests/sec.
(Peak throughput) The web server can provide peak bursts of up to 2,500 HTTP requests/sec.
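The sketch below shows how sustained and peak throughput could be derived from per-second request counts; the sample data is hypothetical and only loosely mirrors the web-server example above.

    # Hypothetical per-second request counts from a 10-second load test.
    requests_per_second = [1480, 1520, 1490, 2500, 2450, 1510, 1470, 1500, 1460, 1490]

    sustained = sum(requests_per_second) / len(requests_per_second)  # average over the whole run
    peak = max(requests_per_second)                                  # best single-second burst

    print(f"Sustained throughput: {sustained:.0f} req/s")
    print(f"Peak throughput:      {peak} req/s")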

15 Common Performance Metrics
Latency: the time measured from when a request is made until the request is processed. Latency metrics are used to measure the delays of:
Messages
Transactions
Tasks
Example: a user sends a URL request over the network; (latency) the packet takes 25 ms to be received by the web server.
Example: a robot uses an ultrasonic range finder to measure distance to objects; (latency) the sound has to propagate from the sensor to the object and back to the sensor.
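A minimal way to measure request/response latency in code is shown below; the URL is hypothetical, and note that this captures the full round trip rather than the one-way packet delay in the example above, which requires dedicated network tools to measure.

    import time
    import urllib.request

    URL = "http://example.com/"   # hypothetical endpoint used only for illustration

    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=5) as response:
        response.read()
    latency_ms = (time.perf_counter() - start) * 1000
    print(f"Request/response latency: {latency_ms:.1f} ms")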

16 Common Performance Metrics
Component/system reliability: metrics used to determine the probability of failure-free operation for a specific amount of time in a specified environment. Keep in mind that software does not wear out.
Measuring component/system reliability: number of operational failures / total number of operations. A long-term measurement is required to assess reliability.
Reliability metrics:
Probability of failure (ex: 2%): two in every 100 requests to the service fail.
Rate of fault occurrence (ex: 5 per 100 time units): five failures for each 100 operational time units (ms, s, etc.).
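Both metrics above are simple ratios; the sketch below computes them from hypothetical counts.

    # Probability of failure: failed operations / total operations.
    failed_requests = 2
    total_requests = 100
    probability_of_failure = failed_requests / total_requests
    print(f"Probability of failure: {probability_of_failure:.1%}")   # 2.0%

    # Rate of fault occurrence: failures / operational time units observed.
    failures_observed = 5
    operational_time_units = 100   # e.g., 100 seconds of operation
    rate_of_fault_occurrence = failures_observed / operational_time_units
    print(f"Rate of fault occurrence: {rate_of_fault_occurrence} failures per time unit")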

17 Common Performance Metrics
Component/system availability: a metric used to measure the performance of autonomously repairable systems.
Availability classifications:
Instantaneous availability: the probability that a system or component will be operational at a specific time.
Average uptime availability: the proportion of time that a system or component is operational over a specified time period.
Achieved availability: computed from the mean time between maintenance actions and the mean maintenance downtime.
Operational availability: a measure of the "real" availability of a component or system over a period of time, including all sources of downtime.
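As a rough illustration, the sketch below computes operational availability (uptime over total time, with all downtime sources counted) and achieved availability (from the mean time between maintenance actions and the mean maintenance downtime); all figures are hypothetical.

    # Operational availability over a one-week observation window.
    total_hours = 7 * 24
    downtime_hours = 3.5   # hypothetical: crashes + deployments + maintenance
    operational_availability = (total_hours - downtime_hours) / total_hours
    print(f"Operational availability: {operational_availability:.3%}")

    # Achieved availability from maintenance statistics (hypothetical figures).
    mtbm_hours = 120.0                      # mean time between maintenance actions
    mean_maintenance_downtime_hours = 1.5   # mean downtime per maintenance action
    achieved_availability = mtbm_hours / (mtbm_hours + mean_maintenance_downtime_hours)
    print(f"Achieved availability:    {achieved_availability:.3%}")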

18 Performance Engineering Inspections
Performance inspections are used to gather the information needed to complete the performance documentation.
Conducting the performance inspection:
It should be conducted formally, with all members of the development team.
Questions brought up during the meeting will require thorough research.
Do not get stuck on minor details. Remember that the idea is to increase the performance of the product as a whole; you can't spend 40 hours increasing the performance of a single component by 1%.

19 Performance Engineering Requirements (documentation)
Overall goals for the performance engineering requirements:
State the overall performance goals based on the resources available to the project.
Determine the best- and worst-case expectations for the project components (functions, network, CPU time, disk I/O, logins, connections, etc.).
State the performance needed to meet client/user needs.
State the performance metrics in decreasing order of importance.
State current competitors' performance, if known.

20 Performance Engineering Requirements (documentation)
Questions to use in the performance inspection documentation:
What is the current performance of your product?
In order to meet updated performance requirements, is it acceptable to use all of the available resources?
To be competitive in the market, what performance do you need?
How will customers use the product? What are the customer expectations?

21 Performance Engineering functional specification (documentation)
Goal: define the minimum performance requirements for the product.
Set performance goals for CPU, I/O, memory, network, and developed software components.
Describe decisions made by the development team:
Why was one algorithm used vs. another?
Why was a particular programming language used?
Why is data stored in a particular way? Where is data stored?

22 Performance Engineering design specification (documentation)
Questions to document for the performance engineering design:
What is the frequency of calling the high-level (team-developed) functions?
What fraction of the total system resources (utilization) is used by high-level function calls?
What is the frequency of calling the low-level library routines?
What fraction of the total system resources (utilization) is used by low-level library calls?
What kinds of performance tests were used? These tests should be specific to individual components, interfaces, and combinations of components and interfaces.

23 Performance Engineering conclusion
The previous slides detailed a methodology for addressing performance engineering requirements.
Project groups should focus on performance engineering during milestone 3.
The project documentation should accommodate the performance engineering tasks discussed in this lecture and the previous one.
There should be many performance-related questions in the upcoming team/client meetings.

24 MS3 Requirements
This sprint should be used for focusing on:
Requirements gathering, refining, and documenting
Development of models that drive the implementation of a prototype
Non-functional issues such as performance and security (security threat model(s), performance data)
Developing the second project prototype

25 MS3 Modeling
Models (UML diagrams with supporting documentation) for this sprint should focus on what will be developed for the second prototype.
During the prototype demonstration, you should be able to supply models for all implemented features.
Use UML diagrams that best fit the components that you've built. These diagrams must map back to specific requirements.
Bold the requirements that are the focus of this sprint.

26 Project Prototype Demo 2 Grading Rubric
Each component below is graded Pass/Fail, with notes:
Compiled document of the components given in this prototype demonstration grading rubric (can be added to your project documentation)
List of requirements used to guide development of this prototype
Ability to explain requirements in sufficient detail that allows for generation of models/diagrams
List of models/diagrams used to guide development of this prototype
Ability to explain models/diagrams and show how they map back to requirements
List of software/hardware components included in the prototype
Demonstration of independent software/hardware modules and/or integrated software modules
List and explain test procedures of independent software/hardware modules and/or integrated software modules
List and explain data generated and/or used by the prototype
List areas of focus for the next prototype demonstration

27 Milestone 3 Document Grading Rubric
Format Requirements (from course website): 5 pts
Length Requirements: -5 pts per missing page (45 pages minimum, 20% max for appendix/source code)
Content Requirements:
Project Requirements: 2.5 pts
Functional
Non-Functional
Project scope (boundaries)
Defined high-level tasks
Hardware architecture details
Software architecture details
Front-end frameworks/GUI
Back-end frameworks/OS/Docker
Project group's organizational approach
How/where did the group meet
Project Manager is defined
Schedule Organization
Activity Graph w/ time estimates and documentation
Gantt Chart with documentation
Progress Visibility
Defined Software Process Model Used: 1 pt
Defined Quality Control Steps
Risk Management: identification, analysis, planning, monitoring
System Boundaries: physical, logical
Developed Scenarios
Use Case Diagram(s) with outline and documentation
Class Diagram(s) with documentation
Sequence Diagram(s) with documentation
State Diagram(s) with documentation
Pseudocode with documentation
Prototyping approach: plan, documentation, evaluation

28 Milestone 3 Document Grading Rubric
Data Dictionary: 2.5 pts
Component diagram(s) with interfaces and documentation
Deployment diagram(s) with documentation
Security Issues Addressed in Requirements, Design, Implementation, Testing
Security Threat Model(s): 5 pts
User/Admin Authentication/Authorization
Software System Performance:
Defined Performance Requirements
Measurable Performance Objectives
Test Case Definitions and Results (graphs)
System/Application Workload Data and Descriptions
Hardware/Software Bottleneck Descriptions
Sysbench data (in container and on base OS): 9.5 pts (file I/O, CPU, database)
Time-Based Performance Data and Descriptions
Rate-Based Performance Data and Descriptions
Development of a Performance Monitor (admin)
Total: 100

29 MS 3 Project Prototype Demo 2
Breakout: Friday, April 6, in COHH 2101
Pacman: Monday, April 9, in COHH 2101
Plan for a 30-minute meeting/demo.
Bring your prototype documentation to the meeting.

