Dependability

Dependability, in everyday usage, describes the reliability of a person in the eyes of others: integrity, truthfulness and trustworthiness are traits that encourage someone to depend on him or her.

The wider use of the term, however, is in systems engineering.

Dependability as applied to a computer system is defined by the IFIP 10.4 Working Group on Dependable Computing and Fault Tolerance as:

"[..] the trustworthiness of a computing system which allows reliance to be justifiably placed on the service it delivers [..]" [1]

An alternative and broader definition is provided by IEC IEV 191-02-03:

"dependability (is) the collective term used to describe the availability performance and its influencing factors : reliability performance, maintainability performance and maintenance support performance"[2]

This definition was developed by Technical Committee 56 Dependability of the International Electrotechnical Commission (IEC). The committee also develops and maintains International Standards in the field of dependability. The standards provide systematic methods and tools for dependability assessment and management of equipment, services and systems throughout their life cycles.

This concept can be further extended to encompass mechanisms to increase and maintain the Dependability of a system.[3] Dependability can be thought of as being composed of three elements: Attributes, Threats and Means, each discussed below.

History

The field of Dependability grew out of earlier related fields such as fault tolerance and system reliability in the 1960s. As interest in these fields increased during the 1970s and early 1980s, the term reliability became overloaded: it was being used beyond its original definition, as a measure of failures in a system, to cover more diverse measures that would now fall under other classifications such as safety and integrity.[4] Jean-Claude Laprie therefore coined the term Dependability in the early 1980s to encompass these related disciplines.[5]

The field of Dependability has evolved from these beginnings to be an internationally active field of research. This research is fostered by a number of prominent international conferences, notably the International Conference on Dependable Systems and Networks, the International Symposium on Reliable Distributed Systems and the International Symposium on Software Reliability Engineering.

The original definition of dependability [5] for a computing system gathers a set of attributes, or non-functional requirements, and combines them with the concepts of Threats and Failures to create Dependability.

This definition was further enhanced [6] to incorporate Safety and Security.

Elements of dependability

Attributes

Attributes are qualities of a system. They can be assessed to determine its overall dependability using qualitative or quantitative measures. Avizienis et al. define the following Dependability Attributes:

  - Availability: readiness for correct service
  - Reliability: continuity of correct service
  - Safety: absence of catastrophic consequences on the user(s) and the environment
  - Integrity: absence of improper system alterations
  - Maintainability: ability to undergo modifications and repairs

As these definitions suggest, only Availability and Reliability are quantifiable by direct measurement, whilst the others are more subjective. For instance, Safety cannot be measured directly via metrics; it is a subjective assessment that requires judgmental information to be applied to give a level of confidence, whilst Reliability can be measured as failures over time.
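
To illustrate, here is a minimal sketch of how Availability and Reliability might be quantified from measured failure data; the steady-state availability formula and the constant-failure-rate (exponential) reliability model are common textbook assumptions rather than something prescribed by the cited sources, and the numbers are invented.

```python
import math

def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of time the service is ready,
    from mean time to failure (MTTF) and mean time to repair (MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def reliability(mission_time_hours: float, mttf_hours: float) -> float:
    """Probability of completing the mission without failure, assuming a
    constant failure rate: R(t) = exp(-t / MTTF)."""
    return math.exp(-mission_time_hours / mttf_hours)

# Hypothetical service: fails on average every 1,000 h, takes 2 h to repair.
print(f"Availability: {availability(1000, 2):.4f}")           # ~0.9980
print(f"Reliability over 24 h: {reliability(24, 1000):.4f}")   # ~0.9763
```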

Confidentiality, i.e. the absence of unauthorized disclosure of information, is also used when addressing security. Security is a composite of Confidentiality, Integrity and Availability. Security is sometimes classed as an attribute [7], but the current view is to aggregate it with dependability and treat Dependability as a composite term called Dependability and Security.[3]

In practice, applying security measures to the components of a system generally improves dependability by limiting the number of externally originated errors.

Threats

Threats are things that can affect a system and cause a drop in Dependability. There are three main terms that must be clearly understood:

  - Fault: a defect in a system. The presence of a fault may or may not lead to a failure; a fault can remain dormant until it is activated.
  - Error: a discrepancy between the intended behaviour of a system and its actual behaviour inside the system boundary, i.e. an invalid internal state resulting from the activation of a fault.
  - Failure: an instance in time when a system displays behaviour that is contrary to its specification, observable at the system boundary.

It is important to note that Failures are recorded at the system boundary; they are essentially Errors that have propagated to the system boundary and become observable. Faults, Errors and Failures operate according to a mechanism sometimes known as the Fault-Error-Failure chain.[8] As a general rule, a fault, when activated, can lead to an error (an invalid state), and the invalid state generated by an error may lead to another error or to a failure (an observable deviation from the specified behaviour at the system boundary).

Once a fault is activated, an error is created. An error may act in the same way as a fault, in that it can create further error conditions; an error may therefore propagate multiple times within the system boundary without causing an observable failure. If an error propagates outside the system boundary, a failure is said to occur: the point at which it can be said that a service is failing to meet its specification. Since the output of one service may be fed into another, a failure in one service may propagate into another service as a fault, so a chain can be formed of the form: Fault leading to Error leading to Failure leading to Error, and so on.
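
A purely illustrative sketch of this chain is given below; the conversion service, its values and the defect are all invented for the example and do not come from the cited sources.

```python
# Hypothetical speed-reporting service illustrating the Fault-Error-Failure chain.

KMH_PER_MS = 3.6  # correct conversion factor from m/s to km/h

def speed_kmh(metres: float, seconds: float) -> float:
    """Convert a measured distance/time pair into km/h.

    FAULT: the guard below should reject seconds <= 0 but only rejects
    negative values; the defect stays dormant until seconds == 0 arrives.
    """
    if seconds < 0:      # fault: should be `seconds <= 0`
        raise ValueError("invalid time")
    if seconds == 0:
        # ERROR: the fault is activated and an invalid internal state
        # (a sentinel of -1.0 posing as a speed) is created.
        return -1.0
    return metres / seconds * KMH_PER_MS

# FAILURE: the erroneous value crosses the system boundary when it is shown
# to the user, an observable deviation from the specified behaviour.
print(f"Current speed: {speed_kmh(100.0, 0.0):.1f} km/h")  # prints -1.0 km/h
```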

Means

Since the mechanism of the Fault-Error-Failure chain is understood, it is possible to construct means to break these chains and thereby increase the dependability of a system. Four means have been identified so far:

  1. Fault Prevention
  2. Fault Removal
  3. Fault Forecasting
  4. Fault Tolerance

Fault Prevention deals with preventing faults from being introduced into a system. This can be accomplished through the use of development methodologies and good implementation techniques.

Fault Removal can be sub-divided into two sub-categories: removal during development and removal during use.
Removal during development requires verification so that faults can be detected and removed before a system is put into production. Once a system is in production, a mechanism is needed to record failures and remove the underlying faults via a maintenance cycle.
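
As a minimal sketch of removal during development, the following unit test (written with Python's standard unittest module; the conversion routine and its fault are invented for the example) detects a fault through verification so it can be removed before the system is put into production.

```python
import unittest

def fahrenheit_to_celsius(f: float) -> float:
    """Hypothetical routine under verification; it contains a fault
    (the divisor should be 1.8, not 2.0)."""
    return (f - 32) / 2.0

class ConversionTest(unittest.TestCase):
    def test_boiling_point(self):
        # Fails while the fault is present, flagging it for removal.
        self.assertAlmostEqual(fahrenheit_to_celsius(212.0), 100.0)

if __name__ == "__main__":
    unittest.main()
```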

Fault Forecasting predicts likely faults so that they can be removed or their effects can be circumvented.

Fault Tolerance deals with putting mechanisms in place that will allow a system to still deliver the required service in the presence of faults, although that service may be at a degraded level.
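
A minimal sketch of one common fault-tolerance mechanism, replication with majority voting (the replicated function, its fault and the inputs are invented for the example), shows how a service can still deliver a correct result while one replica is faulty.

```python
from collections import Counter
from typing import Callable, Sequence

def majority_vote(replicas: Sequence[Callable[[float], float]], x: float) -> float:
    """Run every replica on the same input and return the majority value,
    masking a minority of faulty replicas."""
    results = [f(x) for f in replicas]
    value, count = Counter(results).most_common(1)[0]
    if count <= len(replicas) // 2:
        raise RuntimeError("no majority: too many replicas disagree")
    return value

# Hypothetical replicas: two correct implementations and one faulty one.
correct_a = lambda x: x * x
correct_b = lambda x: x ** 2
faulty = lambda x: x * x + 1  # contains a fault

print(majority_vote([correct_a, correct_b, faulty], 3.0))  # 9.0 -- the fault is masked
```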

Dependability means are intended to reduce the number of failures presented to the user of a system. Failures are traditionally recorded over time, and it is useful to understand how their frequency is measured so that the effectiveness of these means can be assessed.
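
For illustration, a small sketch of how failure frequency might be derived from a recorded failure log, using the common mean-time-between-failures measure; the timestamps are made up.

```python
def mean_time_between_failures(failure_times_hours: list[float]) -> float:
    """MTBF estimated as the average gap between consecutive recorded failures."""
    gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical log: operating hours at which failures were observed.
log = [120.0, 410.0, 695.0, 1010.0]
mtbf = mean_time_between_failures(log)
print(f"MTBF: {mtbf:.0f} h, failure intensity: {1 / mtbf:.4f} failures/h")
```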

Dependability of information systems and survivability

Recent work on dependability, such as [9], takes advantage of structured information systems (for example those built with SOA) to introduce a further property, survivability, which takes into account the degraded services that an information system sustains or resumes after a non-maskable failure.

The flexibility of current frameworks encourages system architects to provide reconfiguration mechanisms that refocus the available, safe resources on the most critical services, rather than over-provisioning to build a failure-proof system.
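
As a hedged illustration of such a reconfiguration mechanism (the service names, priorities and capacities below are invented for the example), one simple policy is to keep the highest-priority services within whatever safe capacity remains after a failure.

```python
# Hypothetical services: (name, priority, resource cost); a lower priority
# value means more critical.
SERVICES = [
    ("payment", 1, 4),
    ("ordering", 2, 3),
    ("reporting", 3, 5),
    ("analytics", 4, 6),
]

def reconfigure(capacity: int) -> list[str]:
    """Keep the most critical services that fit into the degraded capacity."""
    kept, used = [], 0
    for name, _priority, cost in sorted(SERVICES, key=lambda s: s[1]):
        if used + cost <= capacity:
            kept.append(name)
            used += cost
    return kept

print(reconfigure(capacity=8))   # ['payment', 'ordering'] under heavy degradation
print(reconfigure(capacity=18))  # all four services when fully provisioned
```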

With the generalisation of networked information systems, accessibility was introduced to give greater importance to users' experience.

To take the level of performance into account, performability is defined as "quantifying how well the object system performs in the presence of faults over a specified period of time".[10]
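
A simplified sketch of that idea (one possible reading of the definition, not the model from [10]; the levels and fractions are invented): weight each degraded operating level by the fraction of the period spent in it to obtain an expected level of service over the period.

```python
# Hypothetical operating levels after faults: (fraction of the period spent
# at this level, performance delivered at this level; 1.0 = full service).
levels = [
    (0.90, 1.0),  # fully operational
    (0.08, 0.5),  # degraded after a component failure
    (0.02, 0.0),  # down
]

# Expected performance over the period: a crude performability measure.
performability = sum(fraction * performance for fraction, performance in levels)
print(f"Expected performance over the period: {performability:.3f}")  # 0.940
```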

References

  1. IFIP WG10.4 on Dependable Computing and Fault Tolerance
  2. IEC, International Electrotechnical Vocabulary (IEV), ref. 191-02-03 (search for "dependability")
  3. A. Avizienis, J.-C. Laprie, B. Randell, and C. Landwehr, "Basic Concepts and Taxonomy of Dependable and Secure Computing," IEEE Transactions on Dependable and Secure Computing, vol. 1, pp. 11-33, 2004.
  4. Brian Randell, "Software Dependability: A Personal View," in Proc. of the 25th International Symposium on Fault-Tolerant Computing (FTCS-25), California, USA, pp. 35-41, June 1995.
  5. J.-C. Laprie, "Dependable Computing and Fault Tolerance: Concepts and Terminology," in Proc. 15th IEEE Int. Symp. on Fault-Tolerant Computing, 1985.
  6. A. Avizienis, J.-C. Laprie and B. Randell, Fundamental Concepts of Dependability, Research Report No. 1145, LAAS-CNRS, April 2001.
  7. I. Sommerville, Software Engineering, Addison-Wesley, 2004.
  8. A. Avizienis, V. Magnus U, J. C. Laprie, and B. Randell, "Fundamental Concepts of Dependability," presented at ISW-2000, Cambridge, MA, 2000.
  9. John C. Knight, Elisabeth A. Strunk, Kevin J. Sullivan, "Towards a Rigorous Definition of Information System Survivability."
  10. John F. Meyer, William H. Sanders, "Specification and Construction of Performability Models."
