Performance analysis

In software engineering, performance analysis, more commonly known as profiling, is the investigation of a program's behavior using information gathered as the program runs (i.e., it is a form of dynamic program analysis, as opposed to static code analysis). The usual goal of performance analysis is to determine which parts of a program to optimize for speed or memory usage.

Use of profilers

A profiler is a performance analysis tool that measures the behavior of a program as it runs, particularly the frequency and duration of function calls. The output is a stream of recorded events (a trace) or a statistical summary of the events observed (a profile). Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, operating system hooks, and performance counters. The use of profilers is an established part of the performance engineering process.

Because a profile typically aggregates events by the source code position at which they occur, the amount of measurement data is proportional to the code size of the program. In contrast, the size of a trace is proportional to the program's execution time, which can make tracing impractical for long runs. For sequential programs a profile is usually sufficient, but performance problems in parallel programs (waiting for messages, synchronization issues) often depend on the time relationship of events, so the full trace is required to understand the problem.

Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical pieces of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing... (ATOM, PLDI, '94)

History

Profiler-driven program analysis on Unix dates back to at least 1979, when Unix systems included a basic tool, prof, that listed each function and how much of the program's execution time it used. In 1982, gprof extended the concept to a complete call graph analysis ("gprof: a Call Graph Execution Profiler" [1]).

In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM [2]. ATOM is a platform for converting a program into its own profiler. That is, at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data. This technique, modifying a program to analyze itself, is known as "instrumentation".

In 2004, both the Gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers of all time. [3]

Profiler types based on output

Flat profiler

Flat profilers compute average call times from the calls, and do not break the call times down by callee or by calling context.

Call-graph profiler

Call-graph profilers show the call times and frequencies of the functions, as well as the call chains involved, based on the callee. However, the full calling context is not preserved.

Methods of data gathering

Event-based profilers

The programming languages and runtimes listed here have event-based profilers:

  1. .NET: Can attach a profiling agent as a COM server to the CLR. Like Java, the runtime then provides various callbacks into the agent, for trapping events like method JIT / enter / leave, object creation, etc. Particularly powerful in that the profiling agent can rewrite the target application's bytecode in arbitrary ways.
  2. Java: The JVM Tool Interface (JVMTI), which superseded the JVM Profiling Interface (JVMPI), provides hooks for profilers to trap events such as method calls, class loading and unloading, and thread entry and exit.
  3. Python: Python profiling includes the profile module, hotshot (which is call-graph based), and the sys.setprofile() function, which traps events such as call, return, and exception, as well as their C counterparts c_call, c_return, and c_exception (see the sketch after this list).
  4. Ruby: Ruby uses an interface similar to Python's for profiling. A flat profiler is implemented in the profile.rb module, and ruby-prof, a C extension, is also available.
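
As an illustration of the event-based approach, the following minimal Python sketch registers a hook with sys.setprofile() and counts call events per function. It is a sketch of the mechanism, not a full profiler; what gets recorded is entirely up to the author of the hook.

    import sys
    from collections import Counter

    call_counts = Counter()

    def hook(frame, event, arg):
        # sys.setprofile() invokes this for 'call', 'return', 'c_call',
        # 'c_return' and 'c_exception' events.
        if event == "call":
            call_counts[frame.f_code.co_name] += 1

    def busy_work(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    sys.setprofile(hook)      # install the event hook
    busy_work(10000)
    sys.setprofile(None)      # remove the hook before reporting

    for name, count in call_counts.most_common(5):
        print(name, count)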

Statistical profilers

Some profilers operate by sampling. A sampling profiler probes the target program's program counter at regular intervals using operating system interrupts. Sampling profiles are typically less accurate and less detailed, but they allow the target program to run at near full speed.
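
As a rough sketch of the sampling idea (assuming a Unix-like system, since it relies on the signal module's interval timers), the following Python fragment interrupts the running program at regular intervals of CPU time and records which function the interrupted frame belongs to:

    import signal
    from collections import Counter

    samples = Counter()

    def sample(signum, frame):
        # On each timer interrupt, note where the program counter was.
        samples[frame.f_code.co_name] += 1

    signal.signal(signal.SIGPROF, sample)
    signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)  # sample every 10 ms of CPU time

    def workload():
        total = 0
        for i in range(5_000_000):
            total += i * i
        return total

    workload()
    signal.setitimer(signal.ITIMER_PROF, 0)           # stop sampling

    for name, count in samples.most_common():
        print(name, count)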

Other profilers instrument the target program with additional instructions to collect the required information. Instrumenting the program can change its performance, causing inaccurate results and heisenbugs. Instrumentation can capture very detailed information, but it slows the target program down further as more detail is collected.

The resulting data are not exact, but a statistical approximation. The actual amount of error is usually more than one sampling period. In fact, if a value is n times the sampling period, the expected error in it is the square root of n sampling periods.[4]
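
Under this estimate, for example, a routine whose measured cost is n = 100 sampling periods has an expected error of about sqrt(100) = 10 sampling periods, or roughly 10% of the measured value; a routine measured at 10,000 periods has an expected error of about 100 periods, or 1%. Longer runs (more samples) therefore tighten the relative error.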

Some of the most commonly used statistical profilers are GNU's gprof, OProfile, and SGI's Pixie.

Instrumentation

  • Manual: Done by the programmer, e.g., by adding instructions that explicitly record and report run times (see the sketch after this list).
  • Compiler assisted: Example: "gcc -pg ..." for gprof, "quantify g++ ..." for Quantify
  • Binary translation: The tool adds instrumentation to a compiled binary. Example: ATOM
  • Runtime instrumentation: Directly before execution the code is instrumented. The program run is fully supervised and controlled by the tool. Examples: PIN, Valgrind
  • Runtime injection: More lightweight than runtime instrumentation. Code is modified at runtime to have jumps to helper functions. Example: DynInst
  • Hypervisor: Data are collected by running the (usually) unmodified program under a hypervisor. Example: SIMMON
  • Simulator: Data are collected by running under an Instruction Set Simulator. Example: SIMMON
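
A minimal sketch of the manual approach from the first bullet above: timestamps are recorded around the region of interest and the elapsed time is reported. The function and workload are illustrative only.

    import time

    def process_records(records):
        # Manual instrumentation: wrap the region of interest in timestamps.
        start = time.perf_counter()
        result = [r.strip().lower() for r in records]
        elapsed = time.perf_counter() - start
        print(f"process_records: {elapsed * 1000:.3f} ms for {len(records)} records")
        return result

    process_records(["  Alpha", "Beta  ", " Gamma "])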

Simple manual technique

When a sequential program has an infinite loop, the simplest way to find the problem is to run it under a debugger, halt it with a "pause" button (not a breakpoint), and examine the call stack. Each statement (or instruction) on the call stack is a function call, except for the one at the "bottom" of the stack, which is the instruction currently executing. One of those statements is in the infinite loop, and it can be found by single-stepping and examining the context of each statement.

The method works even if the running time is finite. First the program is modified, if necessary, to make it take more than a few seconds, perhaps by adding a temporary outer loop. Then, while the program is doing whatever seems to take too long, it is randomly halted, and a record is made of the call stack. The process is repeated to get additional samples of the call stack. At the same time, the call stacks are compared, so as to find any statements that appear on more than one. Any such statement, if a way can be found to invoke it much less frequently or eliminate it, reduces execution time by the fraction of time it resided on the call stack. Once that is done, the entire process can be repeated, up to several times, usually resulting in significant cumulative speedups. This method is called "random halting" or "deep sampling".
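
The same idea can be approximated in code. The following Python sketch (an illustration, not a production tool, and relying on sys._current_frames(), a CPython-specific facility) runs the workload in a separate thread, "halts" it a few times at random moments by reading its current call stack, and prints the stacks so that call sites appearing in more than one sample can be spotted:

    import random
    import sys
    import threading
    import time
    import traceback

    def slow_helper():
        time.sleep(0.05)              # stands in for an expensive call

    def workload():
        for _ in range(200):
            slow_helper()

    worker = threading.Thread(target=workload)
    worker.start()

    stacks = []
    for _ in range(5):
        time.sleep(random.uniform(0.1, 0.3))         # halt at a random moment
        frame = sys._current_frames()[worker.ident]  # the worker's current frame
        stacks.append(traceback.format_stack(frame))

    worker.join()

    for i, stack in enumerate(stacks, 1):
        print(f"--- sample {i} ---")
        print("".join(stack))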

In making these performance-enhancing modifications, one gets the sense that one is fixing bugs of a type that only make a program slow, not wrong. A name for this type of bug is "slug" (slowness bug). Programs as first written generally contain both bugs and slugs. Bugs are usually removed during program testing, while slugs are usually not, unless performance analysis is employed during development.

There are different kinds of slugs. Generally, things that could be done intentionally to make a program run longer can also occur unintentionally. One commonly accepted kind of slug is a "hot spot", which is a tight inner loop where the program counter spends much of its time. For example, if one often finds at the bottom of the call stack a linear search algorithm instead of binary search, this would be a true hot spot slug. However, if another function is called in the search loop, such as string compare, that function would be found at the bottom of the stack, and the call to it in the loop would be at the next level up. In this case, the loop would not be a hot spot, but it would still be a slug. In all but the smallest programs, hot spot slugs are rare, but slugs are quite common.

Data structures that are too general for the problem at hand might also slow software down. For example, if a collection of objects remains small, a simple array with linear search could be much faster than something like a "dictionary" class, complete with hash coding. With this kind of slug, the program counter is most often found in system memory allocation and freeing routines as the collections are being constructed and destructed.
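
A small, hedged illustration of this point in Python: when a collection never grows beyond a handful of entries and is built over and over, a plain list of pairs with linear search avoids the per-collection hashing and construction work of a dictionary. Actual timings depend on the interpreter and workload.

    import timeit

    keys = ["red", "green", "blue", "alpha"]

    def build_and_probe_dict():
        # General-purpose approach: build a fresh dict, then look up one key.
        table = {k: i for i, k in enumerate(keys)}
        return table["blue"]

    def build_and_probe_list():
        # Specialized approach: a small list of pairs searched linearly.
        pairs = [(k, i) for i, k in enumerate(keys)]
        for k, v in pairs:
            if k == "blue":
                return v

    print("dict:", timeit.timeit(build_and_probe_dict, number=100_000))
    print("list:", timeit.timeit(build_and_probe_list, number=100_000))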

Another common motif is that a powerful function is written to collect a set of useful information (from a database, for example). Then that function is called multiple times, rather than taking the trouble to save the results from a prior call. A possible explanation for this could be that it is beyond a programmer's comprehension that a function call might take a million times as long to execute as an adjacent assignment statement. A contributing factor could be "information hiding", in which external users of a module can be ignorant of what goes on inside it.
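
One common remedy for this motif is to cache (memoize) the result of the expensive call. A minimal Python sketch, with a hypothetical fetch_report() standing in for the costly query and a sleep simulating its latency:

    import functools
    import time

    @functools.lru_cache(maxsize=None)
    def fetch_report(region):
        # Hypothetical stand-in for an expensive database query.
        time.sleep(0.5)
        return {"region": region, "total": 42}

    start = time.perf_counter()
    for _ in range(10):
        fetch_report("north")         # only the first call pays the 0.5 s cost
    print(f"elapsed: {time.perf_counter() - start:.2f} s")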

[Figure: Slug removal process]

At this time, there are certain misconceptions in performance analysis. One is that precise timing is important. Knowing how much time is spent in functions is good for reporting improvements, but it provides only vague help in finding problems. The information that matters is the fraction of time that individual statements reside on the call stack.

Another misconception is that statistical precision matters. Typical slugs sit on the call stack between 5 and 95 percent of the time. The larger they are, the fewer samples are needed to find them. As in sport fishing, the object is to catch them first, and measure them later, if ever.

As an example, the iteration of slug removal tends to go something like this: Slug X1 could be taking 50% of the time, and X2 could be taking 25% of the time. If X1 is removed, execution time is cut in half, at which point X2 takes 50% of the time. If on the first pass X2 is removed instead, the time is only reduced by 1/4, but then X1 is seen as taking 67% of the time, so it is even more obvious, and can be removed. Either way, removing both X1 and X2 reduces execution time by 75%, so the remaining slugs are four times larger. This "magnification effect" allows the process to continue through X3, X4, and so on until all the easily-removed slugs have been fixed.
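
To put a number on the magnification effect: suppose a third slug X3 originally accounts for 5% of the run time. After X1 and X2 are removed, the total time falls to 25% of the original, so X3's share of what remains grows to 0.05 / 0.25 = 20%, making it correspondingly easier to catch in subsequent stack samples.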

References

  • Dunlavey, “Performance tuning with instruction-level cost derived from call-stack sampling”, ACM SIGPLAN Notices 42, 8 (August 2007), pp. 4–8.
  • Dunlavey, “Performance Tuning: Slugging It Out!”, Dr. Dobb's Journal, Vol. 18, No. 12, November 1993, pp. 18–26.
