1 CSE403 Software Engineering Autumn 2000 Performance Issues
CSE 403 Software Engineering, Autumn 2000: Performance Issues
Gary Kimura
Lecture #20, November 8, 2000

2 Today
Misattributed quote from Monday: "Vote early and vote often." - Al Capone (1899–1947)
10 days and counting
Quiz #3 next week
Finish with tracking bugs
Performance: What is it? How do we measure it? How do we improve it?

3 Tracking bugs (from previous notes)
Commonly tracked in a bug database (e.g., Microsoft uses what they call a "Raid" database)
Need a reproduction scenario
Need a history of fixes or resolutions
Other information: who found the bug, how it was found, severity, in which build
Priority of the bug (somewhat based on the severity)
Categorize bugs by module

4 More on tracking bugs (from previous notes)
The database can give an indication of the quality of the system: the number of total bugs, the incoming bug rate, and the resolution rate. But beware of extrapolating too much...
Daily bug reports keep the developers focused and keep managers informed.
There comes a time in a project when only bug fixes should be allowed.

5 What do we mean by performance?
The requirements document might specify certain performance goals, usually some qualitative or quantitative metric based on time or space. Does the system run in a given amount of time on a given amount of hardware?
Competing software companies use unique feature sets to differentiate themselves, but they also use various performance benchmarks to help sell their products.

6 How do we measure performance?
The most common performance metric is the time needed to complete a given task. But even then, time can be measured in different ways:
Wall clock time. What was the overall time needed to run a particular program or solve a problem?
CPU usage. How much user and/or kernel CPU time was spent running the program?
Average response. For example, in a database application, what is the average response time for a query?
Longest response. Again, for a database application, what is the longest response time for a query?
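The wall-clock versus CPU-time distinction above can be shown with a small sketch using Python's standard timers (a minimal illustration added here, not part of the original lecture; the `busy_work` helper is a made-up stand-in for real work):

```python
import time

def busy_work(n):
    """CPU-bound stand-in workload: sums squares to burn CPU time."""
    return sum(i * i for i in range(n))

# Wall-clock time: total elapsed time, including any sleeping or waiting.
wall_start = time.perf_counter()
# CPU time: user+system time actually spent executing this process.
cpu_start = time.process_time()

busy_work(200_000)
time.sleep(0.1)  # waiting counts toward wall time but not CPU time

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start
```

Because the sleep contributes to wall-clock time but not CPU time, the two numbers diverge; a heavily I/O-bound program shows the same pattern without an explicit sleep.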

7 Measuring beyond time
Time is not the only performance metric. Other metrics include:
Memory usage or utilization. What is the application's working set size?
Disk space utilization. How much disk space does the application need for its program and data?
I/O usage. How much I/O is the application doing, and is it distributed over various devices?
Network throughput. How many packets are being sent between machines?
Client capacity. What is the maximum number of clients or users that an application can support?
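As one concrete way to look at the memory metric, Python's standard `tracemalloc` module can report current and peak allocation for a run (a sketch added for illustration; real working-set measurement would use OS tools, and the workload here is arbitrary):

```python
import tracemalloc

tracemalloc.start()

# Arbitrary workload: allocate 100 lists of 100 ints each.
data = [list(range(100)) for _ in range(100)]

# current: bytes still allocated; peak: high-water mark since start().
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
```

The peak figure is the interesting one for capacity planning, since a program that briefly spikes can fail on a machine whose memory matches only its steady-state usage.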

8 Benchmarks
Commercial benchmark suites, often used by magazines, compare competing systems. Ziff-Davis Benchmark Operation (ZDBOp), for example, is a large developer of benchmark software and also runs tests.
Benchmarks try to simulate actual loads, not always successfully. For applications such as word processors, spreadsheets, and databases, they typically run scripts and measure the performance. For network applications, they set up entire labs of servers and clients on a private network.
Software developers often focus on these benchmarks, sometimes to the detriment of the real product. Major software vendors are even invited to help ZDBOp tune the system to run the benchmark properly and fix bugs.
There is a funny story about Windows CopyFile performance concerning benchmarks.

9 Academic versus Commercial
The academic and research community also has its own genre of benchmarks. For example, some old-timers favored solving the Towers of Hanoi problem, the 8 Queens chess problem, or compiling and running a set of Unix sources as a measure of performance.
Side note: many benchmarks miss the fact that performance can degrade over an extended period of time. Some try to account for degradation, but not always successfully. A good example is disk fragmentation, where we can allocate sectors really fast if we ignore long-term fragmentation issues. This is sort of like "pay me now or pay me later."

10 Both hardware and software affect performance
In hardware there are CPU speed, memory size, network speed, cache size, disk size and speed, etc. In software it's the basic algorithms, the data layout, and the various things we can do in software to exploit the hardware. Think of the software's goal as helping the hardware run more efficiently.

11 What can we do to improve performance?
Buying faster and bigger machines should improve performance, but this is not always practical.
From a software viewpoint we always need to examine the algorithmic complexity of the system. Some examples are:
When sorting data, it is important to pick an algorithm appropriate for the data items and keys. (Sometimes insertion sort is the right choice.)
Avoid redoing complicated calculations multiple times. This is where you weigh the cost of storing an answer versus redoing the calculation.
Avoid unnecessary overhead, such as maintaining extra pointers in linked data structures. Note that this item can go against making a product extensible, maintainable, and easily debuggable.
Are there other examples?
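The "store the answer versus redo the calculation" trade-off above is exactly what memoization buys. A minimal sketch using Python's standard `functools.lru_cache` (an illustration added here; the `expensive` function and its call counter are hypothetical):

```python
from functools import lru_cache

calls = 0  # counts how many times the real computation runs

@lru_cache(maxsize=None)
def expensive(n):
    """Stand-in for a complicated calculation we'd rather not repeat."""
    global calls
    calls += 1
    return sum(i * i for i in range(n))

a = expensive(1000)
b = expensive(1000)  # second call is served from the cache
```

The cost is the memory holding the cached answers, which is the other side of the trade-off: a cache that grows without bound can itself become the performance problem.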

12 More improvements
Picking the right implementation language, and the right constructs within that language, also has an impact on performance.
But beyond algorithmic complexity, the single most important thing you can do to increase performance is to properly lay out your data structures. Grouping commonly accessed data items close together typically increases both hardware and software cache efficiency (i.e., a smaller working set). Note, however, that this might go against the logical design of the system. Also, on an MP system we have to worry about cache line tearing.
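The lecture's layout point is really about C-style struct and cache-line placement, which Python does not expose directly; as a loose analogue of "compact, fixed layout beats scattered storage", here is a sketch comparing a plain class (per-instance dict) with a `__slots__` class (the `Point` classes are made up for illustration):

```python
import sys

class PointDict:
    """Plain class: each instance carries a separate per-instance __dict__."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointSlots:
    """Slotted class: attributes live in a fixed, compact layout."""
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x, self.y = x, y

pd = PointDict(1, 2)
ps = PointSlots(1, 2)

# Footprint of the plain instance includes its attribute dictionary.
dict_size = sys.getsizeof(pd) + sys.getsizeof(pd.__dict__)
slots_size = sys.getsizeof(ps)
```

Millions of small objects make this difference material to the working set; in C the analogous moves are ordering hot struct fields together and padding shared data to separate cache lines.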

13 Still more areas for improvement
Other software things to consider include:
Function call overhead
Process and thread switching overhead
Kernel call overhead
Disk and other I/O bottlenecks
Lock contention, especially on MP systems
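Function call overhead, the first item above, is easy to see for yourself with Python's standard `timeit` module (a sketch added for illustration; the `square` helper is made up, and absolute numbers vary by machine):

```python
import timeit

def square(v):
    """Trivial helper: the work is one multiply; the rest is call overhead."""
    return v * v

n = 200_000
# The multiply written inline in the timed statement.
inline_t = timeit.timeit("x = a * a", setup="a = 3", number=n)
# The same multiply wrapped in a function call.
call_t = timeit.timeit("x = square(a)", setup="a = 3",
                       globals=globals(), number=n)
```

When the body is this small, the call machinery dominates, which is why hot inner loops are a common place to inline by hand (or rely on a compiler to inline for you in languages like C).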

14 A sideways look at performance
Percent of CPU usage and idle time is sometimes a measure of system efficiency. This metric pretty much ignores speed and is more concerned with why the processor is idling. Resolving page faults and lock contention are two of the big bugaboos here.

15 Heap performance enhancements
What are groups doing?
Have you looked at lock contention?
Have you looked at how often you're going to need to call down to the OS?
Have you looked at memory footprint and data structure layout?

16 Cross platform development
By nature, does cross platform development go against achieving performance goals?
