Worst-case execution time


The worst-case execution time (WCET) of a computational task is the maximum length of time the task could take to execute on a specific hardware platform. Knowing worst-case execution times is of prime importance for the timing analysis of hard real-time systems.


Analysis structure

Timing analysis is in general performed on two levels:

  • WCET analysis
  • Higher-level/system-level analysis

WCET analysis considers the execution time of a single task in isolation. At this level, all activity in the system other than that of the task under consideration is ignored, and the task is assumed never to block or to be interrupted (blocking is dealt with by the schedulability analysis).

At the higher level, the overall system performance is analyzed, given the results of the WCET analysis for each task or program in the system. Multiple tasks are usually assumed to execute on a single processor and to compete for resources, and thus possibly to block while attempting to access them. The most common type of analysis at this level is schedulability analysis, for example fixed-priority or rate-monotonic scheduling analysis. The tightness, or precision, of the schedulability analysis depends on the accuracy of the WCET analysis: if the WCET values are pessimistic (greater than any execution time that can occur in the running system), the scheduler will be forced to allocate more time to those tasks than they actually require.
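
As a simplified illustration of how WCET values feed into schedulability analysis, the following Python sketch applies the classical Liu and Layland utilization bound for rate-monotonic scheduling. The task parameters are invented for the example, and the bound is a sufficient test only, not the method of any particular tool.

    def rm_schedulable(tasks):
        """tasks: list of (wcet, period) pairs in the same time unit.
        Returns True if the Liu-Layland utilization bound guarantees
        schedulability under rate-monotonic priorities (sufficient test)."""
        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        return utilization <= n * (2 ** (1.0 / n) - 1)

    # Three periodic tasks with (WCET, period) in milliseconds (invented values).
    print(rm_schedulable([(1, 8), (2, 16), (4, 32)]))  # True: U = 0.375 <= ~0.78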

A static WCET analysis tool should be able to work at a high level to determine the structure of a task's code, operating either on source code or on a disassembled binary executable. It should also work at a low level, using timing information about the real hardware on which the task will execute, with all its specific features. By combining these two kinds of analysis, the tool can give an upper bound on the time required to execute a given task on a given hardware platform.
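
The sketch below illustrates, with invented block times and a hypothetical loop bound, how the results of such a combined analysis can be composed: the low-level analysis supplies a cycle count per basic block, and the high-level analysis supplies the control structure and loop bounds from which an upper bound for the whole task is computed. Real tools use considerably more sophisticated techniques, such as implicit path enumeration with integer linear programming.

    def wcet_seq(parts):                  # sequence: times add up
        return sum(parts)

    def wcet_if(cond_t, then_t, else_t):  # branch: charge the slower alternative
        return cond_t + max(then_t, else_t)

    def wcet_loop(body_t, max_iter):      # loop: needs a loop bound from the analysis
        return body_t * max_iter

    # Hypothetical per-block cycle counts from the low-level analysis.
    t = {"B1": 12, "B2": 8, "B3": 20, "B4": 5, "B5": 3, "cond": 2}

    # WCET bound of: B1; loop 100 times { B2; if (cond) B3 else B4 }; B5
    bound = wcet_seq([
        t["B1"],
        wcet_loop(wcet_seq([t["B2"], wcet_if(t["cond"], t["B3"], t["B4"])]), 100),
        t["B5"],
    ])
    print(bound)  # 12 + 100 * (8 + 2 + 20) + 3 = 3015 cycles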

At the low level, static WCET analysis is complicated by the presence of architectural features that improve the average-case performance of the processor: instruction and data caches, branch prediction, and instruction pipelines, for example. It is possible to determine tight WCET bounds if these architectural features are taken into account in the timing model used by the analysis.
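
The toy example below, with invented base costs and miss penalty, shows why modelling a feature such as the cache tightens the bound: accesses that a static cache analysis can classify as always hitting need not be charged the worst-case miss penalty, whereas without a cache model every access must be.

    MISS_PENALTY = 30   # hypothetical cache-miss penalty in cycles
    BASE_CYCLES = 1     # hypothetical base cost of a memory access

    def access_time(classification):
        # classification produced by a static cache analysis:
        # "always_hit", "always_miss", or "unknown" (must be charged as a miss)
        return BASE_CYCLES + (0 if classification == "always_hit" else MISS_PENALTY)

    accesses = ["always_hit", "always_hit", "unknown", "always_miss"]
    with_cache_model = sum(access_time(c) for c in accesses)            # 64 cycles
    without_cache_model = len(accesses) * (BASE_CYCLES + MISS_PENALTY)  # 124 cycles
    print(with_cache_model, without_cache_model)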

Besides static analysis approaches, which have dominated research in the area since the late 1980s, dynamic or measurement-based approaches have recently entered the research arena. The motivation cited by researchers in this branch is that computing hardware (the CPU in particular) has reached a complexity that is extremely hard to model; in particular, the modelling process can introduce errors from several sources (errors in chip design, lack of documentation, errors in documentation, errors in model creation), all leading to cases where the model predicts a behavior different from that observed on real hardware. On the other hand, measurement-based approaches are also considered potentially inaccurate, because they rely on observing the worst-case effects during testing. Measurement-based approaches usually try to measure the execution times of short code segments (basic blocks) and then use static analysis methods to compute the worst-case behavior of the code as a whole. This is driven by the philosophy that the WCET of a basic block is easily measured, whereas creating a test case in which every block on the worst-case path is exercised is extremely difficult.
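
The hybrid scheme described above can be sketched as follows: per-block estimates are taken as the maxima of observed measurements (which may underestimate the true worst case), and the whole-program figure is then computed structurally, as in the static sketch earlier. The sample data are invented for illustration.

    # Observed execution times per basic block (cycles), invented sample data.
    measurements = {
        "B1": [10, 12, 11],
        "B2": [7, 8, 8],
        "B3": [18, 20, 19],
    }

    # Per-block estimate: the maximum observed time (may miss the true worst case).
    block = {name: max(samples) for name, samples in measurements.items()}

    # Combine along the structure: B1 followed by the slower of B2 and B3.
    estimate = block["B1"] + max(block["B2"], block["B3"])
    print(estimate)  # 12 + 20 = 32 cycles: an estimate, not a guaranteed bound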

Historically, industry has either used end-to-end measurements with an added safety margin for soft real-time systems, or manual static analysis on simple hardware for safety-critical systems. Recently, industry has shown more interest in research into automated methods of calculating the WCET. Complexity is increasingly becoming an issue in manual analysis, and safety margins have become a liability in soft real-time systems: they are either too generous, increasing the cost of the device, or too tight, causing device failure. In the future, it is likely that safety-critical systems will be required to be analyzed using both static and measurement-based approaches.

Research groups

Europe has traditionally held the leading scientific position in the fields of code-level timing analysis and real-time scheduling analysis. The most active research groups are in Sweden (Mälardalen, Linköping), Germany (Saarbrücken, Braunschweig), France (Toulouse, Rennes), Austria (Vienna), the UK (York), Italy (Bologna), Spain (Cantabria, Valencia), and Switzerland (Zurich).

Recently, the topic of code-level timing analysis has attracted more international attention, with research groups in the US (North Carolina, Florida), Australia, and Singapore. All of these activities except those in Singapore are led by researchers who came from Europe. These groups currently lack momentum, though: all recent advances and breakthroughs originate from European research, and all commercial tools for WCET determination currently come from Europe.

WCET Tool Challenge 2006

The first international WCET Tool Challenge took place during the autumn of 2006. It was organized by Mälardalen University and sponsored by the ARTIST2 Network of Excellence on Embedded Systems Design. The aim of the Challenge was to inspect and compare different approaches to analyzing the worst-case execution time. All available tools and prototypes able to determine safe upper bounds on the WCET of tasks participated. The final results were presented in November 2006 at the ISoLA 2006 International Symposium in Paphos, Cyprus.
