Performance tuning


Performance tuning is the improvement of system performance. Typically the system in question is a computer application, but the same methods can be applied to economic markets, bureaucracies or other complex systems. The motivation for such activity is called a performance problem, which can be real or anticipated. Most systems will respond to increased load with some degree of decreasing performance. A system's ability to accept higher load is called scalability, and modifying a system to handle a higher load is synonymous with performance tuning.

Systematic tuning follows these steps:

  1. Assess the problem and establish numeric values that characterize acceptable behaviour.
  2. Measure the performance of the system before modification.
  3. Identify the part of the system that is critical for improving the performance. This is called the bottleneck.
  4. Modify that part of the system to remove the bottleneck.
  5. Measure the performance of the system after modification.

This is an instance of the measure-evaluate-improve-learn cycle from quality assurance.
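
As a minimal illustration of steps 2 and 5, the sketch below times a hypothetical operation before and after a modification; operation_before and operation_after are placeholders for the unmodified and tuned versions of the code being measured, not part of any real tool.

    import timeit

    def operation_before():
        return sum(range(100_000))              # baseline implementation

    def operation_after():
        return (99_999 * 100_000) // 2          # tuned: same result in closed form

    baseline = timeit.timeit(operation_before, number=100)
    tuned = timeit.timeit(operation_after, number=100)
    print(f"before: {baseline:.4f}s  after: {tuned:.4f}s")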

A performance problem may be identified by slow or unresponsive systems. This usually occurs because of high system load, which causes some part of the system to reach a limit in its ability to respond. This limit within the system is referred to as a bottleneck.

A handful of techniques are used to improve performance. Among them are code optimization, load balancing, caching strategy, and distributed computing.


Performance analysis

Main article: Performance analysis

Performance engineering

Main article: Performance engineering

Code optimization

Main article: Optimization (computer science)

Enhancing performance by rewriting specific portions of a program to run faster is one form of code optimization. The term can refer to improving the implementation of a particular algorithm for performing a task (code tuning), or to utilizing a better algorithm altogether. Examples of code optimization include moving work out of a loop so that it is done once before the loop rather than on every iteration, and replacing a call to a simple selection sort with a call to the more complicated but faster quicksort.
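
The sketch below illustrates both forms under illustrative assumptions: the function and variable names are hypothetical, and Python's built-in sorted (Timsort rather than quicksort, but the same idea of swapping in a faster algorithm) stands in for the replacement sort.

    # A minimal sketch of the two optimizations described above; names and data
    # are hypothetical, not taken from any particular codebase.

    def scale_slow(values, get_factor):
        result = []
        for v in values:
            factor = get_factor()              # recomputed on every iteration
            result.append(v * factor)
        return result

    def scale_fast(values, get_factor):
        factor = get_factor()                  # code tuning: hoisted out of the loop
        return [v * factor for v in values]

    def selection_sort(items):
        # Simple but O(n^2): scans the remaining items for each position.
        items = list(items)
        for i in range(len(items)):
            j = min(range(i, len(items)), key=items.__getitem__)
            items[i], items[j] = items[j], items[i]
        return items

    def sort_fast(items):
        # Better algorithm: the built-in O(n log n) sort replaces the
        # selection sort call.
        return sorted(items)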

Caching strategy

Main article: Cache

Caching is a fundamental method of removing performance bottlenecks that result from slow access to data. Caching improves performance by retaining frequently used information in high-speed memory, which reduces access time. It is most effective in situations where the principle of locality of reference applies.

The methods used to determine which data is stored in progressively faster storage are collectively called caching strategies.
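
As one illustration, the following sketch implements a least-recently-used (LRU) caching strategy in front of a deliberately slow lookup; fetch_from_disk and the capacity of 2 are hypothetical placeholders for a real data source and cache size.

    from collections import OrderedDict
    import time

    def fetch_from_disk(key):
        time.sleep(0.05)                       # stand-in for slow data access
        return key.upper()

    class LRUCache:
        def __init__(self, capacity=128):
            self.capacity = capacity
            self._data = OrderedDict()         # keeps keys in access order

        def get(self, key):
            if key in self._data:
                self._data.move_to_end(key)    # cache hit: mark as recently used
                return self._data[key]
            value = fetch_from_disk(key)       # cache miss: take the slow path
            self._data[key] = value
            if len(self._data) > self.capacity:
                self._data.popitem(last=False) # evict the least recently used key
            return value

    cache = LRUCache(capacity=2)
    cache.get("a")                             # miss: slow
    cache.get("a")                             # hit: served from fast memory

In Python, functools.lru_cache provides the same policy for pure functions; the point here is only that the eviction rule is what defines the caching strategy.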

Load balancing

A system can consist of independent components, each able to service requests. If all the requests are serviced by one of these components (or a small number of them) while the others remain idle, time is wasted waiting for the busy components to become available. Arranging for all components to be used equally is referred to as load balancing and can improve overall performance.

Load balancing is often used to achieve further gains from a distributed system by intelligently selecting which machine to run an operation on, based on how busy each candidate machine is and how well suited it is to the type of operation to be performed.
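
A minimal sketch of a least-busy selection policy follows, assuming hypothetical Worker objects that track their in-flight requests; a real load balancer would also account for health checks and per-machine capabilities.

    from dataclasses import dataclass

    @dataclass
    class Worker:
        name: str
        active_requests: int = 0               # current in-flight requests

    def pick_worker(workers):
        # Route the next request to the least busy worker.
        return min(workers, key=lambda w: w.active_requests)

    workers = [Worker("a", 3), Worker("b", 1), Worker("c", 2)]
    target = pick_worker(workers)
    target.active_requests += 1                # dispatch the request to target
    print(target.name)                         # -> b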

Distributed computing

Main article: Distributed computing

Distributed computing is used to increase the performance of operations that can be performed in parallel, by executing multiple operations concurrently. Operations may be distributed across multiple processes on a single CPU (taking advantage of multitasking), across multiple CPUs, or across multiple machines. Because operations execute concurrently, synchronization between processes is essential to ensure correct results.
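
As a minimal sketch, the following distributes a CPU-bound operation across multiple worker processes on one machine using Python's multiprocessing module; cpu_bound is a hypothetical placeholder for any operation that can be performed in parallel, and Pool.map handles the synchronization of collecting the results.

    from multiprocessing import Pool

    def cpu_bound(n):
        # Placeholder for an expensive, independent operation.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        inputs = [200_000] * 8
        with Pool() as pool:                        # one worker process per CPU core by default
            results = pool.map(cpu_bound, inputs)   # operations run concurrently
        print(len(results))                         # -> 8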

As modern CPU architectures continue to increase the potential for parallel execution, the use of distributed systems is essential to obtain performance benefits from the available parallelism. High-performance cluster computing is a well-known use of distributed systems for performance improvement.

Distributed computing and clustering can negatively impact latency while simultaneously increasing load on shared resources, such as database systems. To minimize latency and avoid bottlenecks, distributed computing can benefit significantly from distributed caches.

Performance tools