
1 Software Architecture in Practice
Performance Testing

2 Introduction
Performance testing is a type of testing intended to determine the responsiveness, throughput, reliability, and/or scalability of a system under a given workload. Performance testing is commonly conducted to accomplish the following:
- Assess production readiness
- Evaluate the system against performance criteria
- Compare performance characteristics of multiple systems or system configurations
- Find the source of performance problems
- Support system tuning
- Find throughput levels

3 Performance Test Types
Performance Test: Determines the speed, scalability, and/or stability characteristics of the system under test. Performance testing is the superset of all of the other subcategories of performance-related testing.
Single-Thread Test: Determines the best-case response times with a single user (i.e. no load on the system). Also used to determine the cost of transactions in terms of CPU cycles, network bandwidth, etc.
Stress Test: Focused on determining performance characteristics of the system under test when subjected to conditions beyond those anticipated during production operations.
Spike Test: Focused on determining performance characteristics of the system under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.
Load Test: Focused on determining performance characteristics of the system under test when subjected to workloads and load volumes anticipated during production operations.
Endurance Test: Focused on determining performance characteristics of the system under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.
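To make the differences concrete, here is a minimal sketch (not from the slides) of how these test types differ purely in the load profile they apply over time; the numbers are illustrative assumptions.

```python
# Illustrative load profiles: each function returns the target load (e.g. virtual
# users or transactions per second) at time t. Numbers are assumptions.

def load_profile(t, duration=3600, target=100):
    """Load test: ramp up to the anticipated production load, then hold it."""
    ramp = duration * 0.1
    return min(target, target * t / ramp)

def stress_profile(t, duration=3600, target=100):
    """Stress test: keep increasing load beyond the anticipated production level."""
    return target * (1 + t / duration)            # ends at 2x the target load

def spike_profile(t, baseline=100, spike=400, period=600, spike_len=60):
    """Spike test: short, repeated bursts far above the baseline load."""
    return spike if (t % period) < spike_len else baseline

def endurance_profile(t, target=100):
    """Endurance (soak) test: the production-level load, held for many hours."""
    return target
```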

4 Transaction volume or load
(Chart: response time plotted against transaction volume or load, marking the target transactional workload, the target response time, the response time at which the system is "unusable", and the point at which the system fails. The single-thread test runs at minimal volume; the load and endurance tests run at the target workload, the endurance test for a long time; the stress and spike tests run beyond the target workload, the spike test for short periods.)

5 Generic Test Environment
The test environment needs to be instrumented, and to have a simulated workload injected. A typical setup includes:
- A test controller driving one or more load injectors
- Test stubs emulating the backend systems the system under test interacts with, operating under load and responding with 'realistic' data
- Monitors for CPU, memory, disk and network, system logs, and response times
You also need a means of collating, correlating and processing this data during analysis, and a way of investigating issues which may occur during the course of the test.
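As an illustration of the injector role, here is a minimal, self-contained Python sketch that spawns virtual users against a hypothetical target URL and records response times; real tools such as JMeter or LoadRunner add scheduling, parameterization, distributed injection and far richer monitoring.

```python
# A minimal load-injector sketch. TARGET is a hypothetical system under test.
import threading, time, urllib.request

TARGET = "http://localhost:8080/catalogue"    # hypothetical system under test
results = []                                  # (timestamp, elapsed_s, status)
lock = threading.Lock()

def virtual_user(iterations=50, think_time=2.0):
    for _ in range(iterations):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET, timeout=10) as resp:
                status = resp.status
        except Exception:
            status = None                     # count errors, don't crash the test
        elapsed = time.perf_counter() - start
        with lock:
            results.append((time.time(), elapsed, status))
        time.sleep(think_time)                # simulated user think time

threads = [threading.Thread(target=virtual_user) for _ in range(20)]
for t in threads: t.start()
for t in threads: t.join()
```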

6 Sample tools: JMeter and LoadRunner

7 The “Observer Effect” Since any form of observation is also an interaction, the act of testing itself can also affect that which is being tested. For example: When log files are used in testing to record progress or events, the application under test may slow down drastically Observing the performance of a CPU by running both the observed and observing programs on the same machine will lead to inaccurate results because the observer program itself affects the CPU performance Observing (profiling) a running program will cause the observed program to slow down and use excessive system resources (CPU, memory, I/O etc.) Be aware of this in your performance testing!

8 Core Activities of Performance Testing
Performance Testing is usually an iterative process which interlocks with tuning activities

9 1. Identify Test Environment
Where will performance testing be performed? The ideal situation is an exact replica of the production environment, with the addition of load-generation and resource-monitoring tools. This is not very common! You need to understand the differences between the production and test environments, e.g. hardware capacity, load balancing and clustering, network architecture, end-user location, components shared between environments, logging levels, etc.

10 2. Identify Performance Acceptance Criteria
You should identify the desired performance characteristics of your application early in the development cycle, typically:
- Response time, e.g. "the product catalogue must display in less than 3 seconds"
- Throughput, e.g. "the system must support 25 book orders per second"
- Resource utilization, e.g. "processor utilization is not more than 70%"
If your client will not state the performance requirements specifically ("it just has to be fast enough"), you should state your assumptions and let the client review them.
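Once criteria like these are agreed, they can be checked mechanically against measured results. The sketch below is illustrative: the metric names are assumptions, and the thresholds are the example figures from this slide.

```python
# A minimal criteria check against measured results (thresholds from the slide).
import statistics

def check_criteria(response_times_s, orders_per_s, cpu_utilisation_pct):
    failures = []
    p95 = statistics.quantiles(response_times_s, n=20)[18]   # ~95th percentile
    if p95 > 3.0:
        failures.append(f"catalogue p95 response time {p95:.2f}s exceeds 3s")
    if orders_per_s < 25:
        failures.append(f"throughput {orders_per_s:.1f} orders/s below 25/s")
    if cpu_utilisation_pct > 70:
        failures.append(f"CPU utilisation {cpu_utilisation_pct:.0f}% above 70%")
    return failures

print(check_criteria([1.2, 2.8, 0.9, 3.4, 1.1], orders_per_s=27, cpu_utilisation_pct=65))
```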

11 3. Plan and Design Tests
The design of the test is typically specified in a workload model, which includes elements such as:
- Key usage scenarios
- Navigation paths for key usage scenarios
- Individual user data and variances
- Relative distribution of scenarios
- Metrics to be collected during the test
- Target load levels
Realistic test designs include:
- Realistic simulations of user delays and think times
- User abandonment, if users are likely to abandon a task for any reason
- Common user errors
You also need to identify and create test data which is representative of the production situation; you typically need a large set of data.
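A workload model can often be captured directly as data that the test scripts consume. The sketch below is illustrative; the scenario names, weights and think times are assumptions, not from the slides.

```python
# An illustrative workload model as data: scenarios, relative distribution,
# think times (mean, stddev in seconds) and a target load level.
import random

WORKLOAD_MODEL = {
    "target_load_tps": 25,
    "scenarios": [
        {"name": "browse_catalogue", "weight": 0.60,
         "steps": ["home", "search", "view_product"], "think_time_s": (3, 1)},
        {"name": "place_order",      "weight": 0.30,
         "steps": ["view_product", "add_to_cart", "checkout"], "think_time_s": (5, 2)},
        {"name": "admin_report",     "weight": 0.10,
         "steps": ["login", "run_report"], "think_time_s": (10, 3)},
    ],
}

def pick_scenario(model=WORKLOAD_MODEL):
    """Choose the next scenario according to its relative distribution."""
    scenarios = model["scenarios"]
    weights = [s["weight"] for s in scenarios]
    return random.choices(scenarios, weights=weights, k=1)[0]
```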

12 Ways to represent your workload model
Be careful with the terms "concurrent users", "simultaneous users", etc. They typically do not specify what the users do in the system and do not take think times and delays into account. Through the workload model you translate this into a certain amount and mix of transactions per time unit, which is a much more precise measure.
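One common back-of-the-envelope conversion (an application of Little's Law) shows why a bare "concurrent users" figure is ambiguous: the resulting transaction rate depends heavily on think time. The numbers below are illustrative.

```python
# Transactions per second implied by a given number of concurrent users.
def transactions_per_second(concurrent_users, avg_response_time_s, avg_think_time_s):
    return concurrent_users / (avg_response_time_s + avg_think_time_s)

# 500 "concurrent users" can mean very different loads:
print(transactions_per_second(500, avg_response_time_s=2, avg_think_time_s=0))    # 250 tps
print(transactions_per_second(500, avg_response_time_s=2, avg_think_time_s=30))   # ~15.6 tps
```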

13 Modeling user loads
BAD: Not realistic! Changes between no load and a stress test of the application.
GOOD: Realistic. Normally distributed user delays.
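A minimal sketch of the "GOOD" pattern: draw each user delay from a normal distribution, truncated at a sensible minimum, instead of pausing not at all or by a fixed amount. The mean and spread are illustrative assumptions.

```python
# Normally distributed think time, truncated at a minimum delay.
import random, time

def think(mean_s=5.0, stddev_s=1.5, minimum_s=0.5):
    delay = max(minimum_s, random.gauss(mean_s, stddev_s))
    time.sleep(delay)
    return delay
```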

14 4. Configure Test Environment
Load-generation and application-monitoring tools are almost never as easy to get up and running as one expects. Start early, to ensure that issues are resolved before you begin testing.
Monitor resource utilization (CPU, network, memory and disk) and throughput across servers in a load-balanced configuration during a load test, to validate that the load is distributed.
And don't forget to synchronize system clocks!
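Resource monitoring during the test can be as simple as a periodic sampler on each server; the sketch below assumes the third-party psutil package is available and writes samples to a CSV file for later correlation (with synchronized clocks).

```python
# Periodic resource sampler per server, assuming the third-party psutil package.
import csv, time
import psutil

def monitor(path="resources.csv", interval_s=5, samples=120):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "cpu_pct", "mem_pct", "disk_read_mb", "net_sent_mb"])
        for _ in range(samples):
            cpu = psutil.cpu_percent(interval=interval_s)   # averaged over the interval
            mem = psutil.virtual_memory().percent
            disk = psutil.disk_io_counters().read_bytes / 1e6
            net = psutil.net_io_counters().bytes_sent / 1e6
            writer.writerow([time.time(), cpu, mem, disk, net])
            f.flush()
```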

15 5. Implement Test Design
The details of creating an executable performance test are extremely tool-specific. Most tools work either with a "designer tool" or with a "recorder tool":
- With a "designer tool" you visually program your test through selection of test elements, loops, validators, etc.
- With a "recorder tool" you perform the scenario as an end user while the tool records your actions and the system's responses
Regardless of the tool you use, creating a performance test typically involves scripting a single usage scenario and then enhancing that scenario and combining it with other scenarios to ultimately represent a complete workload model.
Ensure that validation of transactions is implemented correctly: checking for an HTTP 200 status code is not enough!
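As an example of validating beyond the status code, the sketch below also checks that the response body contains the expected content; the URL and the markers checked are assumptions.

```python
# Validate a transaction beyond the HTTP status code: many applications return
# 200 together with an error page. URL and expected markers are assumptions.
import urllib.request

def validate_catalogue_page(url="http://localhost:8080/catalogue"):
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return (resp.status == 200
            and "<title>Product catalogue</title>" in body   # expected content present
            and "error" not in body.lower())                 # no error page in disguise
```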

16 6. Execute Tests
Before launching the test you need to ensure that the environment is "reset": records generated during a previous test must not impact the new test.
Observe your test during execution and pay close attention to any behavior you feel is unusual, e.g.:
- Unexpected errors generated in server log files
- Unreasonably high response times
- Unreasonably low throughput
- Unreasonably many end-user errors
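For the "unexpected errors in server log files" point, a small watcher like the sketch below can count ERROR lines appearing while the test runs; the log path and format are assumptions.

```python
# Tail a server log during the test run and count lines containing "ERROR".
import time

def watch_log(path="/var/log/app/server.log", duration_s=600):
    errors = 0
    with open(path) as f:
        f.seek(0, 2)                         # start at the current end of the file
        deadline = time.time() + duration_s
        while time.time() < deadline:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            if "ERROR" in line:
                errors += 1
                print("unexpected error:", line.rstrip())
    return errors
```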

17 7. Analyze, Report, and Retest
Managers and stakeholders need more than simply the results from various tests; they need conclusions based on those results, and consolidated data that supports those conclusions. Technical team members also need more than just results; they need analysis, comparisons, and details of how the results were obtained.
The key to effective reporting is to present information of interest to the intended audience in a quick, simple, and intuitive manner. Frequently reported performance data includes:
- End-user response times
- Resource utilization
- Volumes, capacities and rates
- Component response times
- Trends
You need a basic understanding of statistics to analyze performance test results.
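For the statistics point, the basic figures usually reported for end-user response times (mean, median, high percentiles, spread) can be computed directly from the raw samples; the sketch below assumes the samples are available as a list of seconds.

```python
# Summary statistics for a list of response times in seconds.
import statistics

def summarise(response_times_s):
    q = statistics.quantiles(response_times_s, n=100)    # percentile cut points
    return {
        "samples": len(response_times_s),
        "mean_s": statistics.fmean(response_times_s),
        "median_s": statistics.median(response_times_s),
        "p90_s": q[89],
        "p95_s": q[94],
        "stdev_s": statistics.stdev(response_times_s),
    }

print(summarise([0.8, 1.1, 0.9, 2.4, 1.0, 1.3, 0.7, 3.9, 1.2, 1.1]))
```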

18 Sample reported test data
(Sample charts: end-user response times, response time degradation, component response times, and processor utilization.)

19 DEMO: JMeter + Server Agent, BlazeMeter, JVisualVM

