Worst-case execution time

The worst-case execution time (WCET) of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. Knowing the WCET of each task is of prime importance for the timing analysis of hard real-time systems.

Timing analysis is in general performed on two levels:

  • WCET analysis
  • Higher-level/system-level analysis

WCET analysis considers the execution time of a task in isolation: at this level, all activity not related to the task under consideration is ignored, and tasks are assumed never to block (blocking is dealt with by the schedulability analysis).

At the higher, system level, the effect on overall system performance is analyzed, given the results of the WCET analysis for each task or program in the system. Multiple tasks are usually assumed to execute on a single processor and to compete for resources, and thus may block while attempting to access them. The most common type of analysis at this level is schedulability analysis, for example fixed-priority or rate-monotonic scheduling analysis. The tightness of the schedulability analysis depends on the accuracy of the WCET analysis.
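
To make the connection concrete, the sketch below shows how per-task WCET values might feed a rate-monotonic schedulability check using the classic Liu and Layland utilization bound. The task set and all numbers are illustrative, not taken from any particular system.

    # Minimal sketch: per-task WCETs feeding a rate-monotonic
    # schedulability check (Liu & Layland utilization bound).
    # Task periods and WCETs are illustrative placeholders.

    def rm_utilization_bound(n: int) -> float:
        """Liu & Layland bound for n tasks: n * (2**(1/n) - 1)."""
        return n * (2 ** (1.0 / n) - 1)

    def is_schedulable(tasks) -> bool:
        """tasks: list of (wcet, period) pairs, both in the same time unit."""
        utilization = sum(wcet / period for wcet, period in tasks)
        return utilization <= rm_utilization_bound(len(tasks))

    if __name__ == "__main__":
        # (WCET, period) in milliseconds -- hypothetical values.
        task_set = [(1.0, 4.0), (2.0, 10.0), (3.0, 20.0)]
        print(is_schedulable(task_set))  # True: utilization 0.6 <= bound ~0.78

If the WCET estimates are too pessimistic, the computed utilization rises and a task set that would run fine may be rejected, which is why the tightness of the WCET bounds matters here.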

A static WCET analysis tool must work at a high level to determine the structure of a task's program, operating either on source code or on a disassembled binary executable. It must also work at a low level, taking into account the actual hardware the task will execute on, with all of its specific features. By combining these two kinds of analysis, the tool can give an upper bound on the time required to execute a given task on a given hardware platform.
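
As a rough illustration of how the two levels combine, the sketch below computes a bound bottom-up over a tiny program-structure tree: sequences add up, branches take the worse alternative, and loops multiply the body bound by a statically known iteration bound. The per-block cycle counts stand in for results of a hypothetical low-level hardware analysis; the structure and numbers are invented for illustration.

    # Rough sketch of a tree-based WCET calculation: a bound is computed
    # bottom-up over the program structure, using worst-case cycle counts
    # for basic blocks obtained from a (hypothetical) low-level analysis.

    def wcet(node):
        kind = node[0]
        if kind == "block":   # ("block", worst_case_cycles)
            return node[1]
        if kind == "seq":     # ("seq", [children]) -- times add up
            return sum(wcet(child) for child in node[1])
        if kind == "if":      # ("if", cond, then, else) -- take the worse branch
            return wcet(node[1]) + max(wcet(node[2]), wcet(node[3]))
        if kind == "loop":    # ("loop", bound, body) -- iteration bound known statically
            return node[1] * wcet(node[2])
        raise ValueError(f"unknown node kind: {kind}")

    # Hypothetical program: a loop of at most 10 iterations around a branch.
    program = ("seq", [
        ("block", 5),
        ("loop", 10, ("if", ("block", 2), ("block", 8), ("block", 3))),
    ])
    print(wcet(program))  # 5 + 10 * (2 + max(8, 3)) = 105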

At the low level, static WCET analysis is complicated by architectural features that improve the performance of the processor, such as instruction and data caches, branch prediction, and pipelines. Determining tight WCET bounds on a given architecture is nevertheless possible, provided these features are modeled accurately.
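
One way to see why such modeling affects tightness: if the analysis can prove that some memory accesses always hit the cache, the bound for a block shrinks accordingly; otherwise every access must be charged the full miss penalty. The toy sketch below illustrates only that idea; the classifications, penalties, and cycle counts are made up.

    # Toy illustration of cache-aware block timing: accesses classified as
    # guaranteed hits cost only the base latency, everything else is charged
    # the full miss penalty.  All numbers are illustrative.

    HIT_CYCLES = 1
    MISS_PENALTY = 30   # hypothetical main-memory penalty in cycles

    def block_wcet(base_cycles, access_classes):
        """access_classes: list of 'always_hit', 'always_miss' or 'unknown'."""
        cycles = base_cycles
        for cls in access_classes:
            if cls == "always_hit":
                cycles += HIT_CYCLES
            else:  # pessimistic: 'unknown' is treated like a miss
                cycles += HIT_CYCLES + MISS_PENALTY
        return cycles

    # Without a cache model every access must be assumed a miss:
    print(block_wcet(10, ["unknown"] * 4))                   # 134 cycles
    # With a cache model that proves three of the four accesses hit:
    print(block_wcet(10, ["always_hit"] * 3 + ["unknown"]))  # 44 cycles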

Besides static analysis approaches, which have dominated research in the area since the late 1980s, dynamic or measurement-based approaches have recently entered the research arena. The motivation cited by researchers in this branch is that hardware and CPUs have reached a complexity that is extremely hard to model, in particular because faults can occur in several layers: errors in chip design, lack of documentation, errors in documentation, and errors in model creation, all leading to cases where the model predicts behavior different from what is observed on the real hardware. On the other hand, measurement-based approaches are also considered potentially inaccurate. They usually measure execution times at a fine-grained level of the code and use static analysis methods to compute the worst-case behavior of the code as a whole. This is driven by the philosophy that the WCET of a basic block can be measured reliably, whereas provoking the worst-case path of the whole program by measurement alone is practically impossible.
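
A hedged sketch of the measurement side of this idea: time a small code fragment many times, keep the maximum observed value, and then combine such per-block maxima structurally, as in static analysis. Note that the observed maximum is not a guaranteed bound, and the timed fragment, repetition count, and function names below are placeholders.

    # Sketch of the measurement-based idea: observe execution times of a
    # small code fragment many times and keep the maximum.  The observed
    # maximum may still underestimate the true WCET; the timed snippet
    # and repetition count are placeholders.

    import time

    def measure_max_ns(fragment, runs=10_000):
        worst = 0
        for _ in range(runs):
            start = time.perf_counter_ns()
            fragment()
            elapsed = time.perf_counter_ns() - start
            worst = max(worst, elapsed)
        return worst

    def sample_block():
        # Stand-in for a basic block of the task under analysis.
        total = 0
        for i in range(100):
            total += i * i
        return total

    if __name__ == "__main__":
        print("max observed:", measure_max_ns(sample_block), "ns")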

Historically, industry has either used end-to-end measurements with an added safety margin for soft real-time systems, or manual static analysis on simple hardware for safety-critical systems. Recently, industry has shown more interest in this research, because manual analysis is becoming increasingly impractical as complexity grows, and safety margins have become a liability in soft real-time systems: a margin that is too generous increases the cost of the device, while one that is too tight causes device failure. A likely scenario for safety-critical systems is that they will be required to be analyzed with both approaches.
