Software System Performance
CS 360 Lecture 13
IMPORTANT NOTICE TO STUDENTS: These slides are NOT to be used as a replacement for student notes. These slides are sometimes vague and incomplete on purpose to spark class discussions.
Software System Performance Outline
- Software system performance definition
- When performance matters
- Software performance requirements
- Performance challenges
- Performance metrics and testing
- Sysbench (synthetic benchmarking)
- Software timers (application-specific benchmarking)
- Performance monitor
Software System Performance
The amount/type of work accomplished by a computing system, involving one or more of the following:
- Data processing efficiency
- Response time
- Throughput
- Resource utilization
- Resource availability
- Scalability
- Portability
- Reliability
When Performance Matters
Real-time systems
- Computation must be fast enough to support the service provided.
  - Ex: Internet routers examining packet headers
  - Ex: Drive-by-wire, where the vehicle throttle is controlled by software
Real-time performance measurements
- Context switching: the time required to save the current process/thread, find the highest-priority ready process/thread, and restore its context
- Interrupt latency: the time between an internal/external interrupt event and the start of interrupt processing
- Immediate response time: the time required to process a request immediately, without context switching
When Performance Matters
User interfaces
- Where humans have high expectations: mouse tracking must appear instantaneous.
Performance considerations
- Start-up time factors for applications: GPU initialization, storage initialization, GUI process load time, GUI rendering time
- Web interfaces must process requests in near real time
- Use client-side and server-side programs to increase performance
Software Performance Requirements (documentation)
Tasks needed for performance documentation:
- Gather performance requirements
- Develop a plan to test for performance
- Decide whether to use internal or external resources to perform tests
  - Ex: Sysbench (internal) runs on the system under test and consumes its resources
  - Ex: A web browser (external) drives the system from outside and consumes none of its resources
- Specify testing tools
- Define the test data
- Configure the testing environment
Software Performance Requirements (documentation)
Tasks needed for performance documentation (continued):
- Develop proof-of-concept test scripts for each component/interface
- Initial test run, to check the correctness of the testing script
- Execute tests (repeatedly, to get an average)
- Record and analyze the results (pass/fail)
- Investigate corrective action if a test fails
- Re-tune components/interfaces/system and repeat the process
Performance challenges for all software systems (Documentation)
Predict performance problems before a system is created.
- Hardware bottlenecks? Research/document Raspberry Pi hardware component performance (CPU, memory type/interface, SD card type/interface, network interface).
- Software bottlenecks? Research/document performance for the software languages/frameworks used, component interfaces, and process algorithms.
Look for bottlenecks
- Usually, CPU performance is not the limiting factor.
Hardware bottlenecks
- Moving data: disk to main memory, main memory to CPU
- Shortage of memory: paging
- Network capacity: bandwidth
Inefficient software
- Poorly written algorithms that do not scale well
- Sequential processing where a parallel approach should be used
Performance as Time Metrics
Time between the start and the end of an operation
- Also called running time, elapsed time, wall-clock time, response time, latency, execution time, ...
- Most straightforward measure: "Component X takes 2.5 s (on average) using a Cortex-A53 CPU running at 1.2 GHz."
- Must be measured on a "dedicated" machine, e.g., a Raspberry Pi 3 Model B.
- Time performance metrics will be different on other machines: component X time metrics on a Pi 3 vs. Pi 2 vs. Pi 1, ...
- Research the Linux "time" command (a minimal example follows).
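A minimal example of the "time" command, assuming a hypothetical executable ./component_x as the component under test:

    # Report wall-clock (real), user-mode (user), and kernel-mode (sys)
    # time for one run of the component under test
    time ./component_x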
Performance as Rate Metrics
Measurement of the throughput of a software/hardware component.
- A performance rate can be independent of the "size" of the application: compressing a 1 MB file takes 1 minute; compressing a 2 MB file takes 2 minutes. The performance is the same in both cases: 1 MB/sec.
- Mflops: millions of floating-point operations per second
  - A very popular assessment of hardware (CPU), but often misleading. Ex: 450 Mflops doesn't take into account data movement between hardware components. Research "Linpack".
- Application-specific rates:
  - Number of frames rendered per second
  - Number of database transactions per second
  - Number of HTTP requests served per second
Predicting system performance
Direct measurement on a subsystem (benchmark):
- File I/O
- CPU
- Network
All require a detailed understanding of the interaction between software and hardware systems.
Direct measurement on subsystem – SysBench (Documentation)
Sysbench is a modular, cross-platform, multi-threaded benchmark tool used to quickly gain information about system performance.
Direct measurement on subsystem – SysBench (Documentation)
Test the following (on the base OS, and in a Docker container):
CPU performance
- cpu-max-prime test: 10,000 (calculates all prime numbers between 1 and 10,000)
- Graph (line graph) the results: number of threads (x-axis) vs. execution time (y-axis)
- Threads should increase from 1 to 256, by powers of 2
- Generate documentation for the graph, why it curves, etc. (an example command follows)
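A possible invocation, assuming the same legacy sysbench syntax as the cleanup commands on the following slides; N is a placeholder for the thread count (1, 2, 4, ..., 256):

    # CPU test: compute all primes up to 10,000 using N threads
    sysbench --test=cpu --cpu-max-prime=10000 --num-threads=N run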
Direct measurement on subsystem – SysBench (Documentation)
Test the following (on the base OS, and in a Docker container):
File I/O performance
- fileio test: perform random reads/writes to a dataset of a specified size
- Graph (line graph) the results: file size (x-axis) vs. throughput (y-axis)
- File size should increase from 1 MB to 2 GB, by powers of 2
- Generate documentation for the graph, why it curves, etc. (an example sequence follows)
Execute the cleanup command after each test run:
sysbench --test=fileio --file-total-size=X cleanup
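A possible per-size sequence, assuming the same legacy syntax; X is a placeholder for the file size (1M, 2M, ..., 2G), and rndrw is one choice of random read/write mode:

    # Create the test files, run a combined random read/write test,
    # then remove the files (as on the slide)
    sysbench --test=fileio --file-total-size=X prepare
    sysbench --test=fileio --file-total-size=X --file-test-mode=rndrw run
    sysbench --test=fileio --file-total-size=X cleanup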
Direct measurement on subsystem – SysBench (Documentation)
Test the following (in the DB container):
Database service performance
- oltp test: performs random read, write, select, update, delete, and insert queries on a database table of a specified size
- Graph (line graph) the results: num-threads (simulated users/requests) (x-axis) vs. throughput (y-axis)
- Threads should increase from 1 to 256, by powers of 2; max_connections will be reached before 256
- Generate documentation for the graph, why it curves, etc. (an example sequence follows)
Execute the cleanup command after each test run:
sysbench --test=oltp --mysql-db=test --mysql-user=root --mysql-password=yourrootsqlpassword cleanup
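A possible prepare/run sequence, again assuming the legacy syntax; the table size is an illustrative value and N is a placeholder for the thread count:

    # Create and populate the test table, then run the OLTP test with N threads
    sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test --mysql-user=root --mysql-password=yourrootsqlpassword prepare
    sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test --mysql-user=root --mysql-password=yourrootsqlpassword --num-threads=N run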
Measurements on Software modules (documentation)
Creating timers to measure function performance:

    DatabaseCallMethod(String[] d) {
        for 1 to d.length
            // format values in the String array
            // connect and update the database with the String data
    } // end method
Measurements on Software modules (documentation)
Creating timers to measure function performance:

    DatabaseCallMethod(String[] d) {
        for 1 to d.length
            // format values in the String array
            // connect and update the database with the String data
    } // end method

    DatabaseCallMethod(data);
Measurements on Software modules (documentation)
Creating timers to measure function performance:

    DatabaseCallMethod(String[] d) {
        for 1 to d.length
            // format values in the String array
            // connect and update the database with the String data
    } // end method

    start = getSystemTime();
    DatabaseCallMethod(data);
    end = getSystemTime();
    MethodRunTime = end - start;

Invoke the calling method multiple times to calculate the average execution time (a runnable Java sketch follows).
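A minimal runnable Java sketch of this pattern, assuming a hypothetical databaseCall method as the work being measured; System.nanoTime() plays the role of getSystemTime():

    public class TimerDemo {
        // Hypothetical stand-in for the database call being measured
        static void databaseCall(String[] d) {
            for (int i = 0; i < d.length; i++) {
                // format values / connect and update the database here
            }
        }

        public static void main(String[] args) {
            String[] data = {"a", "b", "c"};
            int runs = 100;                     // repeat to average out noise
            long total = 0;
            for (int r = 0; r < runs; r++) {
                long start = System.nanoTime(); // timestamp before the call
                databaseCall(data);
                long end = System.nanoTime();   // timestamp after the call
                total += end - start;
            }
            System.out.println("Average run time: " + (total / runs) + " ns");
        }
    }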
Fixing bad performance
If a system performs badly, begin by identifying the cause:
- Instrumentation: add timers to the code; this reveals delays in specific parts of the system.
- Test loads: run the system with varying loads (high transaction rates, large input files, many users, etc.)
- Design and code reviews: team review of the system and program design, and of suspect sections of code; may reveal algorithms/components that are running very slowly.
Find the underlying cause and fix it, or the problem will return!
Analyzing system performance
Five common issues when a performance problem exists:
- The product wasn't developed using performance-testing requirements.
- When a problem develops or is found, no one takes responsibility.
- Developers don't use (or don't know about) tools that are available to solve the problem.
- After developing a list of possible causes, the major ones are never eliminated; developers can't determine the priority of the problems.
- Developers often don't have the patience to examine the large amounts of data generated by performance testing.
Performance Monitor (Documentation)
Each group should implement a (web-based) admin interface. This interface should provide real-time results about multiple components developed for the project. Examples:
- Average number of HTTP requests per hour
- Number of page visits for each page
- Number of user logins for a given time span
- Current number of active users
- Current CPU/RAM utilization
- Reliability metrics: number of faults per transaction or request
- Docker container(s) status: active, unresponsive, etc.
(A data-gathering sketch follows the list.)
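A minimal sketch of host commands such an interface could poll, assuming a Linux host with Docker installed; the admin page would parse and display the output:

    # Current CPU load averages (1, 5, and 15 minutes)
    cat /proc/loadavg

    # Current RAM utilization, in megabytes
    free -m

    # Name and status of each Docker container (running, exited, etc.)
    docker ps --all --format "{{.Names}}: {{.Status}}"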