
Profiling Tools 1 Profiling tools By Vitaly Kroivets for Software Design Seminar

Profiling Tools2 Contents Introduction; the software optimization process, optimization traps and pitfalls; benchmarks; performance tools overview; optimizing compilers; system performance monitors; profiling tools: GNU gprof, Intel VTune, Valgrind; what it means to use the system efficiently

Profiling Tools3 The Problem PC speed has increased 500-fold since 1981, but today's software is more complex and still hungry for more resources. How do we run faster on the same hardware and OS architecture? Highly optimized applications run tens of times faster than poorly written ones; using efficient algorithms and well-designed implementations leads to high-performance applications.

Profiling Tools4 The Software Optimization Process Create a benchmark, find the hotspots, investigate their causes, modify the application, then retest using the benchmark (and repeat). Hotspots are areas in your code that take a long time to execute.

Profiling Tools5 Extreme Optimization Pitfalls "A large application's performance cannot be improved before it runs"; "build the application, then see what machine it runs on"; "runs great on my computer..."; confusing debug versus release builds; "performance requires assembly-language programming"; "code features first, then optimize if there is time left over".

Profiling Tools6 Key Point: Software optimization doesn't begin where coding ends. It is an ongoing process that starts at the design stage and continues all the way through development.

Profiling Tools7 The Benchmark A benchmark is a program used to objectively evaluate the performance of an application and to provide repeatable application behavior for use with performance analysis tools. Industry-standard benchmarks: TPC-C, 3D-Winbench, Enterprise Services, Graphics/Applications, HPC/OMP, Java Client/Server, Mail Servers, Network File System, Web Servers.

Profiling Tools8 Attributes of a good benchmark Repeatable (consistent measurements): remember system tasks and caching issues; for the "incoming fax" problem, use the minimum performance number. Representative: execute a typical code path and mimic how the customer uses the application. Poor benchmarks: reusing QA tests.

Profiling Tools9 Benchmark attributes (cont.) Easy to run. Verifiable: the benchmark itself needs QA! Measure elapsed time rather than some other number. Use the benchmark to test functionality as well, since algorithmic tricks used to gain performance may break the application... A minimal harness is sketched below.
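A minimal sketch of an elapsed-time benchmark harness in C++ (the workload() body is a stand-in; in practice it would drive a typical customer scenario of the application):

    // bench.cpp: time a fixed workload several times to check repeatability
    #include <chrono>
    #include <cstdio>

    static void workload() {                       // stand-in for the real scenario
        volatile double x = 0;
        for (long i = 0; i < 50000000; i++) x += i * 0.5;
    }

    int main() {
        using clock = std::chrono::steady_clock;
        for (int run = 1; run <= 5; run++) {       // repeat to check consistency
            auto start = clock::now();
            workload();
            auto stop = clock::now();
            double ms = std::chrono::duration<double, std::milli>(stop - start).count();
            std::printf("run %d: %.1f ms\n", run, ms);
        }
        return 0;
    }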

Profiling Tools10 How to find performance bottlenecks Determine how your system resources, such as memory and processor, are being utilized, to identify system-level bottlenecks. Measure the execution time of each module and function in your application. Determine how the various modules running on your system affect each other's performance. Identify the most time-consuming function calls and call sequences within your application. Determine how your application executes at the processor level to identify microarchitecture-level performance problems.

Profiling Tools11 Performance Tools Overview Timing mechanisms and stopwatches: the UNIX time tool. Optimizing compiler (the easy way). System load monitors: vmstat, iostat, perfmon.exe, VTune Counter Monitor. Software profilers: gprof, VTune, Visual C++ Profiler, IBM Quantify. Memory debuggers/profilers: Valgrind, IBM Purify, Parasoft Insure++.

Profiling Tools12 Using Optimizing Compilers Always build the application with compiler optimization settings enabled when using it with performance tools. Understanding and using all the features of an optimizing compiler is required to get maximum performance with the least effort.

Profiling Tools13 Optimizing Compiler: choosing a combination of optimization flags

Profiling Tools14 Optimizing Compiler’s effect

Profiling Tools15 Optimizing Compilers: Conclusions Some processor-specific options still do not appear to be a major factor in producing fast code. More optimizations do not guarantee faster code. Different algorithms are most effective with different optimizations. Idea: use statistics gathered by a profiler as input to the compiler/linker (profile-guided optimization; a command sketch follows).
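One concrete form of this idea is GCC's profile-guided optimization, shown here as an assumed workflow (myprog.cpp and typical_input are placeholder names):

    g++ -O2 -fprofile-generate myprog.cpp -o myprog   # instrumented build
    ./myprog < typical_input                          # run the benchmark to gather statistics
    g++ -O2 -fprofile-use myprog.cpp -o myprog        # rebuild, letting GCC use the gathered profile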

Profiling Tools16 Windows Performance Monitor A sampling "profiler": it uses the OS timer interrupt to wake up and record the values of software counters (disk reads, free memory, and so on). Maximum resolution: 1 sec. It cannot identify the piece of code that caused an event to occur, but it is good for finding system issues. Unix tools: vmstat, iostat, xos, top, oprofile, etc.

Profiling Tools17 Performance Monitor Counters

Profiling Tools18 Profilers A profiler may show the time elapsed in each function and its descendants, the number of calls, and (for some profilers) the call graph. Profilers use either instrumentation or sampling to identify performance issues.

Profiling Tools19 Sampling vs. Instrumentation Overhead: sampling is typically about 1%; instrumentation is high, possibly 500%! System-wide profiling: sampling profiles the whole application, drivers and OS functions; instrumentation covers just the application and instrumented DLLs. Detecting unexpected events: sampling can detect other programs using OS resources; instrumentation cannot. Setup: sampling needs none; instrumentation requires automatic insertion of data-collection stubs. Data collected: sampling gathers counters and processor/OS state; instrumentation gathers the call graph, call times and the critical path. Data granularity: sampling works at the assembly-instruction level, with source lines; instrumentation works at the function level, sometimes statements. Detecting algorithmic issues: sampling cannot (it is limited to processes and threads); instrumentation can show which algorithm or call path is expensive.

Profiling Tools20 Profiling Tools Gprof: old, buggy and inaccurate. Intel VTune: $700, unstable. Valgrind: not really a profiler...

Profiling Tools 21 GNU gprof An instrumenting profiler available on virtually every UNIX-like system

Profiling Tools22 Using gprof, the GNU profiler Compile and link your program with profiling enabled: cc -g -c myprog.c utils.c -pg and cc -o myprog myprog.o utils.o -pg. Execute your program to generate a profile data file: the program runs normally (but slower) and writes the profile data into a file called gmon.out just before exiting; the program should exit normally, via the exit() function or by returning from main. Run gprof to analyze the profile data: gprof myprog

Profiling Tools23 Example Program
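The example program itself is not in the transcript; the following is a plausible reconstruction consistent with the later slides, where main() calls g() and g()'s descendant is doit() (it compiles as either C or C++, so the cc -pg commands above apply):

    #include <stdio.h>

    #define ITERATIONS 20000000L

    void doit(void) {
        static volatile long sink;
        for (long i = 0; i < ITERATIONS; i++) sink += i;   /* burn CPU time */
    }

    void g(void) {
        for (int i = 0; i < 3; i++) doit();                /* g() spends its time in doit() */
    }

    int main(void) {
        g();
        doit();
        printf("done\n");
        return 0;              /* normal exit so gmon.out gets written */
    }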

Profiling Tools24 Understanding the Flat Profile The flat profile shows the total amount of time your program spent executing each function. If a function was not compiled for profiling and didn't run long enough to show up in the program-counter histogram, it will be indistinguishable from a function that was never called.

Profiling Tools25 Flat profile : %time Percentage of the total execution time your program spent in this function. These should all add up to 100%.

Profiling Tools26 Flat profile: Cumulative seconds The cumulative total number of seconds the program spent in this function, plus the time spent in all the functions above it in the table.

Profiling Tools27 Flat profile: Self seconds The number of seconds accounted for by this function alone.

Profiling Tools28 Flat profile: Calls The number of times the function was invoked.

Profiling Tools29 Flat profile: Self seconds per call The average number of seconds per call spent in this function alone.

Profiling Tools30 Flat profile: Total seconds per call The average number of seconds per call spent in this function and its descendants.
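Pulling the columns together, a flat profile for the reconstructed example program would look roughly like this (the numbers are purely illustrative, not measured):

      %   cumulative   self              self     total
     time   seconds   seconds    calls  ms/call  ms/call  name
     95.0      1.90      1.90        4   475.00   475.00  doit
      5.0      2.00      0.10        1   100.00  1525.00  g
      0.0      2.00      0.00        1     0.00  2000.00  main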

Profiling Tools31 Call Graph: the call tree of the program For the current function g(): called by main(), with descendant doit().

Profiling Tools32 Call Graph: understanding each line For the current function g(), the primary line shows: the unique index of this function; the percentage of the total time spent in this function and its children; the total time propagated into this function by its children; the total amount of time spent in this function itself; and the number of times it was called.

Profiling Tools33 Call Graph: the parents' numbers For the current function g(), each parent line shows: the time that was propagated from the function's children into this parent; the time that was propagated directly from the function into this parent; and the number of times this parent called the function `/' the total number of times the function was called.

Profiling Tools34 Call Graph: the children's numbers For the current function g(), each child line shows: the amount of time that was propagated from the child's children to the function; the amount of time that was propagated directly from the child into the function; and the number of times this function called the child `/' the total number of times the child was called.

Profiling Tools35 How gprof works gprof instruments the program to count calls, and watches the program run, sampling the PC every 0.01 sec. Statistical inaccuracy: a fast function may take 0 or 1 samples, so the run should be long compared with the sampling period. Several gmon.out files can be combined into a single report (see the commands below). The output from gprof gives no indication of parts of your program that are limited by I/O or swapping bandwidth, because samples of the program counter are taken at fixed intervals of run time. Number-of-calls figures are derived by counting, not sampling; they are completely accurate and will not vary from run to run if your program is deterministic. Profiling with inlining and other optimizations needs care.
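An assumed workflow for merging data from several runs (each run overwrites gmon.out, so the files are renamed in between; the names gmon.1 and gmon.2 are arbitrary):

    ./myprog; mv gmon.out gmon.1
    ./myprog; mv gmon.out gmon.2
    gprof -s myprog gmon.1 gmon.2     # merges the runs into gmon.sum
    gprof myprog gmon.sum             # one report covering both runs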

Profiling Tools 36 VTune performance analyzer To squeeze every bit of power out of the Intel architecture!

Profiling Tools37 VTune Modes/Features Time- and event-based, system-wide sampling provides developers with the most accurate representation of their software's actual performance, with negligible overhead. Call-graph profiling provides developers with a pictorial view of program flow to quickly identify critical functions and call sequences. Counter Monitor allows developers to readily track system activity during runtime, which helps them identify system-level performance issues.

Profiling Tools38 Sampling mode Monitors all active software on your system, including your application, the OS, JIT-compiled Java* class files, Microsoft* .NET files, 16-bit applications, 32-bit applications and device drivers. Application performance is not impacted during data collection.

Profiling Tools39 Sampling Mode Benefits Low-overhead, system-wide profiling helps you identify which modules and functions are consuming the most time, giving you a detailed look at your operating system and application. Benefits of sampling: profiling to find hotspots, down to the module, function, line of source code and assembly instruction consuming the most time; low overhead, typically about one percent; and no need to instrument code, since you do not have to make any changes to your code to profile with sampling.

Profiling Tools40 How does sampling work? Sampling interrupts the processor after a certain number of events and records the execution information in a buffer area. When the buffer is full, the information is copied to a file and the program resumes operation. In this way the VTune™ analyzer maintains very low overhead (about one percent) while sampling. Time-based sampling collects samples of active instruction addresses at regular time intervals (1 ms by default); event-based sampling collects samples of active instruction addresses after a specified number of processor events. After the program finishes, the samples are mapped to modules and stored in a database within the analyzer.
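Not VTune's implementation, but a conceptual sketch of time-based sampling on Unix: a profiling timer fires periodically and the handler attributes each sample to whatever the program happens to be doing at that moment (here a coarse "phase" variable stands in for the sampled instruction address):

    #include <csignal>
    #include <cstdio>
    #include <sys/time.h>

    volatile sig_atomic_t current_phase = 0;    // set by the program as it runs
    volatile sig_atomic_t samples[2] = {0, 0};  // samples attributed to each phase

    static void on_sample(int) { samples[current_phase]++; }

    static long work(long n) { long s = 0; for (long i = 0; i < n; i++) s += i % 7; return s; }

    int main() {
        std::signal(SIGPROF, on_sample);
        struct itimerval tv = {{0, 1000}, {0, 1000}};   // fire every 1 ms of CPU time
        setitimer(ITIMER_PROF, &tv, nullptr);

        current_phase = 0; long a = work(50000000);     // phase 0
        current_phase = 1; long b = work(150000000);    // phase 1, about 3x the work

        std::printf("phase 0: %d samples, phase 1: %d samples (%ld %ld)\n",
                    (int)samples[0], (int)samples[1], a, b);
        return 0;
    }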

Profiling Tools41 Starting the Sampling Wizard

Profiling Tools42 Starting the Sampling Wizard The hardware prevents sampling many counters simultaneously

Profiling Tools43 Starting the Sampling Wizard

Profiling Tools44 Starting the Sampling Wizard Unsupported CPU ? Ha-ha-ha…

Profiling Tools45 EBS (event-based sampling): choosing events

Profiling Tools46 Events counted by VTune Basic events: clock cycles, retired instructions. Instruction execution: instruction decode, issue and execution, data and control speculation, and memory operations. Cycle accounting events: stall cycle breakdowns. Branch events: branch prediction. Memory hierarchy: instruction prefetch, instruction and data caches. System events: operating system monitors, instruction and data TLBs. About 130 different events in the Pentium 4 architecture!

Profiling Tools47 Sampling …

Profiling Tools48 Viewing Sampling Results Process view: all the processes that ran on the system during data collection. Thread view: the threads that ran within the processes you select in Process view. Module view: the modules that ran within the selected processes and threads. Hotspot view: the functions within the modules you select in Module view.

Profiling Tools49 Different events collected (modules view) A system-wide look at the software running on the system, including our program. CPI (clocks per instruction) is a good average indicator.

Profiling Tools50 Hotspot Graph Each bar represents one of the functions of our program. Click on a hotspot bar and VTune displays the source code view.

Profiling Tools51 Source View Test_if function

Profiling Tools52 Annotated Source View (% of module) Shows how much time is spent on each single line. Check this "for" loop: 10% of the CPU is spent in a few statements!

Profiling Tools53 VTune Tuning Assistant In a few clicks we reached the performance problem; now, how do we solve it? The Tuning Assistant highlights performance problems and gives the approximate time lost to each one. Its database contains performance metrics based on Intel's experience of tuning hundreds of applications. It analyzes the data gathered from our application, generates tuning recommendations for each hotspot, and gives the user an idea of what might be done to fix the problem.

Profiling Tools54 Tuning Assistance Report

Profiling Tools55 Hotspot Assistant Report : Penalties

Profiling Tools56 Hotspot Assistant Report

Profiling Tools57 Call Graph Mode Provides a pictorial view of program flow to quickly identify critical functions and call sequences. Call-graph profiling reveals: the structure of your program at the function level; the number of times a function is called from a particular location; the time spent in each function; and the functions on the critical path.

Profiling Tools58 Call Graph Screenshot The critical path is displayed as red lines: the call sequence in the application that took the most time to execute. The function summary pane lets you switch to the call-list view.

Profiling Tools59 Call Graph (cont.) Wait time: how much time was spent waiting for an event to occur. Additional information is available by hovering the mouse over the functions.

Profiling Tools60 Jump to Source view

Profiling Tools61 Call Graph: Call List View Caller functions are the functions that called the focus function; callee functions are the functions that are called by the focus function.

Profiling Tools62 Counter Monitor Use the Counter Monitor feature of the VTune™ analyzer to collect and display performance counter data. Counter Monitor selectively polls performance counters, which are grouped categorically into performance objects. With the VTune analyzer you can: monitor selected counters in performance objects; correlate performance counter data with data collected by other features of the VTune analyzer, such as sampling; and trigger the collection of counter data on events other than a periodic timer.

Profiling Tools63 Counter Monitor

Profiling Tools64 Getting Help Context-sensitive help and an online help repository

Profiling Tools65 VTune Summary Pros: lets you get the best possible performance out of the Intel architecture. Cons: extreme tuning requires a deep understanding of processor and OS internals.

Profiling Tools 66 Valgrind Multi-purpose Linux x86 profiling tool

Profiling Tools67 Valgrind Toolkit Memcheck is a memory debugger: it detects memory-management problems. Cachegrind is a cache profiler: it performs a detailed simulation of the I1, D1 and L2 caches in your CPU. Massif is a heap profiler: it performs detailed heap profiling by taking regular snapshots of a program's heap. Helgrind is a thread debugger: it finds data races in multithreaded programs.

Profiling Tools68 Memcheck Features When a program is run under Memcheck's supervision, all reads and writes of memory are checked, and calls to malloc/new/free/delete are intercepted. Memcheck can detect: use of uninitialised memory; reading/writing memory after it has been freed; reading/writing off the end of malloc'd blocks; reading/writing inappropriate areas on the stack; memory leaks, where pointers to malloc'd blocks are lost forever; passing of uninitialised and/or unaddressable memory to system calls; mismatched use of malloc/new/new[] vs free/delete/delete[]; overlapping src and dst pointers in memcpy() and related functions; and some misuses of the POSIX pthreads API.

Profiling Tools69 Memcheck Example The example triggers: use of a non-initialised value; free() of memory allocated by new; access of unallocated memory; and a memory leak. A sketch that reproduces these four bugs follows.
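The original example source is not in the transcript; the following small C++ program (a reconstruction, not the original) triggers each of the four problems listed above:

    #include <cstdlib>
    #include <cstdio>

    int main() {
        int uninit;                                 // 1. a non-initialised value ...
        if (uninit > 0) std::printf("positive\n");  // ... used in a conditional jump

        int *p = new int[10];
        std::free(p);                               // 2. free() of memory allocated with new[]

        int *q = (int *)std::malloc(10 * sizeof(int));
        q[10] = 42;                                 // 3. write just past the end of the block

        int *leak = new int[100];                   // 4. leak: the pointer is lost at return
        (void)leak;
        std::free(q);
        return 0;
    }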

Profiling Tools70 Memcheck Example (cont.) Compile the program with the -g flag: g++ -g a.cc -o a.out Execute valgrind (a.out is the executable name; Memcheck writes its report to stderr): valgrind --tool=memcheck --leak-check=yes ./a.out > log 2>&1 View the log and debug the leaks

Profiling Tools71 Memcheck report

Profiling Tools72 Memcheck report (cont.) Leaks detected, each with the call stack that made the allocation

Profiling Tools73 Cachegrind Detailed cache profiling can be very useful for improving the performance of your program: on a modern x86 machine, an L1 miss costs around 10 cycles and an L2 miss can cost as much as 200 cycles. Cachegrind performs a detailed simulation of the I1, D1 and L2 caches in your CPU and can accurately pinpoint the sources of cache misses in your code. It identifies the number of cache misses, memory references and instructions executed for each line of source code, with per-function, per-module and whole-program summaries. Cachegrind runs programs many times slower than normal.

Profiling Tools74 How to run Run valgrind --tool=cachegrind in front of the normal command-line invocation, for example: valgrind --tool=cachegrind ls -l When the program finishes, Cachegrind prints summary cache statistics and collects line-by-line information in a file cachegrind.out.<pid>. Run cg_annotate on that output file, naming the source files of interest, to get an annotated source listing (with recent Valgrind versions): cg_annotate cachegrind.out.<pid> a.cc > a.cc.annotated
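The kind of problem this workflow exposes, as a hedged example (not from the original slides): the two loops below compute the same sum, but the column-order traversal strides through memory and collects far more D1/L2 data-read misses on its sum line, which cg_annotate makes obvious.

    #include <cstdio>
    #define N 1024
    static double m[N][N];

    int main() {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j];        // row order: consecutive addresses, few misses
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += m[i][j];        // column order: stride of N doubles, many misses
        std::printf("%f\n", sum);
        return 0;
    }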

Profiling Tools75 Cachegrind Summary output Instruction cache performance: I-cache reads (instructions executed), I1 cache read misses, L2-cache instruction read misses

Profiling Tools76 Cachegrind Summary output Data cache READ performance: D-cache reads (memory reads), D1 cache read misses, L2-cache data read misses

Profiling Tools77 Cachegrind Summary output Data cache WRITE performance: D-cache writes (memory writes), D1 cache write misses, L2-cache data write misses

Profiling Tools78 Cachegrind Accuracy Valgrind's cache profiling has a number of shortcomings: it doesn't account for kernel activity (the effect of system calls on the cache contents is ignored); it doesn't account for other processes' activity (although this is probably desirable when considering a single program); and it doesn't account for virtual-to-physical address mappings, so the simulation is not a true representation of what's happening in the cache.

Profiling Tools79 The Massif tool Massif is a heap profiler: it measures how much heap memory programs use. It can give information about heap blocks, heap administration blocks and stack sizes. It helps reduce the amount of memory a program uses (smaller programs interact better with caches and avoid paging) and detects "leaks" that aren't found by traditional leak-checkers such as Memcheck, because the memory isn't ever actually lost (a pointer to it remains) but it is no longer in use. A sketch of that situation follows.
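A hedged illustration of that last point: every buffer below stays reachable through the global vector, so Memcheck reports no "definitely lost" leak, yet the heap keeps growing and only a heap profiler like Massif makes that visible.

    #include <vector>

    std::vector<int*> cache;            // global: everything stays reachable

    static void handle_request(int i) {
        int *buf = new int[4096];       // per-request scratch buffer
        buf[0] = i;
        cache.push_back(buf);           // kept "just in case", never used again
    }

    int main() {
        for (int i = 0; i < 10000; i++) handle_request(i);
        return 0;                       // ~160 MB still reachable at exit
    }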

Profiling Tools80 Executing Massif Run valgrind --tool=massif prog It produces a summary, a graph picture and a report. The summary looks like this: Total spacetime: 2,258,106 ms.B Heap: 24.0% Heap admin: 2.2% Stack(s): 73.7% Heap is the number of words allocated on the heap via malloc(), new and new[]; spacetime is space (in bytes) multiplied by time (in milliseconds).

Profiling Tools81 Spacetime Graphs

Profiling Tools82 Spacetime Graph (cont.) Each band represents a single line of source code; it is the height of a band that matters. Triangles on the x-axis show each point at which a memory census was taken. These are not necessarily evenly spread: Massif only takes a census when memory is allocated or de-allocated. The time on the x-axis is wall-clock time, which is not ideal because you can get different graphs for different executions of the same program due to random OS delays.

Profiling Tools83 Text/HTML Report example The report contains a lot of extra information about heap allocations that you don't see in the graph, and shows the places in the program where most memory was allocated.

Profiling Tools84 Valgrind: how it works Valgrind is compiled into a shared object, valgrind.so. The valgrind shell script sets the LD_PRELOAD environment variable to point to valgrind.so, which causes the .so to be loaded as an extra library into any subsequently executed dynamically-linked ELF binary. The dynamic linker allows each .so in the process image to have an initialization function that runs before main(), and a finalization function that runs after main() exits. When valgrind.so's initialization function is called by the dynamic linker, the synthetic CPU starts up; the real CPU remains locked inside valgrind.so until the end of the run. System calls are intercepted and signal handlers are monitored. (A sketch of the same preload mechanism follows.)
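Not Valgrind itself, but a tiny sketch of the init/fini hook mechanism it relies on: a preloaded shared object whose constructor runs before the host program's main() and whose destructor runs after it exits (file and library names are made up):

    // hook.cpp
    #include <cstdio>
    __attribute__((constructor)) static void before_main() {
        std::fprintf(stderr, "hook.so: loaded before main()\n");
    }
    __attribute__((destructor)) static void after_main() {
        std::fprintf(stderr, "hook.so: tearing down after main() exited\n");
    }

    Build and use:
    g++ -shared -fPIC hook.cpp -o hook.so
    LD_PRELOAD=./hook.so ls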

Profiling Tools85 Valgrind Summary Valgrind will save hours of debugging time and can help speed up your programs. It runs on x86-Linux and works with programs written in any language. It is actively maintained and can be used together with other tools (gdb). Valgrind is easy to use: it uses dynamic binary translation, so there is no need to modify, recompile or re-link applications; just prefix the command line with valgrind and everything works. Valgrind is not a toy: it is used by large projects (25 million lines of code). Valgrind is free.

Profiling Tools86 Other Tools Tools not covered in this presentation: IBM Purify, Parasoft Insure++, KCachegrind, OProfile, and GCC's and GLIBC's debugging hooks.

Profiling Tools87 Writing Fast Programs Select the right algorithm and implement it efficiently. Detect hotspots using a profiler and fix them. Understanding the target system architecture, such as the cache structure, is often required. Use platform-specific compiler extensions: memory prefetching, cache-control instructions, branch prediction, SIMD instructions. Write multithreaded applications ("Hyper-Threading Technology").

Profiling Tools88 CPU Architecture (Pentium 4) Instruction fetch, instruction decode, branch prediction, instruction pool, execution units, retirement, memory. Out-of-order execution!

Profiling Tools89 Instruction Execution The dispatch unit feeds instructions from the instruction pool to the execution units: integer, memory save, memory load, floating point.

Profiling Tools90 Keeping the CPU Busy Processors are limited by data dependencies and the speed of instructions, so keep data dependencies low. A good blend of instructions keeps all execution units busy at the same time. Waiting for memory with nothing else to execute is the most common reason for slow applications. Goals: ready instructions, a good mix of instructions and predictable branches. Remove branches if possible (see the sketch below), reduce the randomness of branches, and avoid function pointers and jump tables.
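A small hedged example of the "remove branches" advice: counting values above a threshold with a data-dependent if is at the mercy of the branch predictor on random input, while folding the comparison into arithmetic removes the branch entirely.

    // branchy: mispredicts heavily on random data
    long count_if(const int *x, long n, int t) {
        long c = 0;
        for (long i = 0; i < n; i++)
            if (x[i] > t) c++;
        return c;
    }

    // branchless: the comparison result (0 or 1) is simply added
    long count_arith(const int *x, long n, int t) {
        long c = 0;
        for (long i = 0; i < n; i++)
            c += (x[i] > t);
        return c;
    }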

Profiling Tools91 Memory Overview (Pentium 4) L1 cache (data only): 8 KB, plus an Execution Trace Cache that stores up to 12K decoded micro-ops. L2 Advanced Transfer Cache (data + instructions): 256 KB, 3 times slower than L1. L3: 4 MB cache (optional). Main RAM (usually 64 MB to 4 GB), 10 times slower than L1.

Profiling Tools92 Fixing memory problems Use less memory to reduce compulsory cache misses. Increase cache efficiency (place items used at the same time near each other). Read sooner with prefetch (see the sketch below). Write memory faster without using the cache. Avoid conflicts. Avoid capacity issues. Add more work for the CPU (execute non-dependent instructions while waiting).
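One way to "read sooner with prefetch" is GCC's __builtin_prefetch intrinsic, sketched here; the look-ahead distance of 16 elements is a guess that would need tuning on real hardware:

    // Scale one array into another, asking the CPU to start loading data
    // it will need a little later.
    void scale(double *a, const double *b, long n) {
        for (long i = 0; i < n; i++) {
            if (i + 16 < n) __builtin_prefetch(&b[i + 16]);   // fetch ~16 elements ahead
            a[i] = b[i] * 2.0;
        }
    }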

Profiling Tools93 References SPEC website. The Software Optimization Cookbook: High-Performance Recipes for the Intel® Architecture, by Richard Gerber. GCC optimization flags (GCC manual, "Optimize Options"). Valgrind homepage. An Evolutionary Analysis of GNU C Optimizations: Using Natural Selection to Investigate Software Complexities, by Scott Robert Ladd. Intel VTune Performance Analyzer webpage. gprof man page.

Profiling Tools94 Questions?