Interval arithmetic
Interval arithmetic, interval mathematics, interval analysis, or interval computation, is a method developed by mathematicians since the 1950s and 1960s as an approach to putting bounds on rounding errors and measurement errors in mathematical computation, and thus developing numerical methods that yield reliable results. Very simply put, it represents each value as a range of possibilities. For example, instead of estimating someone's height as exactly 2.0 metres using standard arithmetic, using interval arithmetic we might state with certainty that the person's height lies somewhere between 1.97 and 2.03 metres.
This concept is suitable for a variety of purposes. The most common use is to keep track of and handle rounding errors directly during the calculation and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find reliable and guaranteed solutions to equations and optimization problems.
Mathematically, instead of working with an uncertain real number $x$, we work with the two ends of an interval $[a, b]$ that contains $x$: the variable $x$ lies between $a$ and $b$, or may equal one of them. A function $f$, when applied to $x$, is likewise uncertain. In interval arithmetic, $f$ produces an interval $[c, d]$ containing all the possible values of $f(x)$ for all $x \in [a, b]$.
Introduction
The main focus of interval arithmetic is on the simplest way to calculate upper and lower endpoints for the range of values of a function in one or more variables. These endpoints are not necessarily the true supremum or infimum, since the precise calculation of those values can be difficult or impossible.
Treatment is typically limited to real intervals, so quantities of the form
- $[a, b] = \{ x \in \mathbb{R} \mid a \le x \le b \}$,
where $a = -\infty$ and $b = \infty$ are allowed; with one of them infinite we would have an unbounded interval, while with both infinite we would have the extended real number line.
As with traditional calculations with real numbers, simple arithmetic operations and functions on elementary intervals must first be defined.[1] More complicated functions can be calculated from these basic elements.[1]
Example
Take as an example the calculation of body mass index (BMI). The BMI is the body weight in kilograms divided by the square of height in metres. Measuring the mass with bathroom scales may have an accuracy of one kilogram. The scale will not report intermediate values – about 79.6 kg or 80.3 kg – but only readings rounded to the nearest whole number. It is unlikely that when the scale reads 80 kg, someone really weighs exactly 80.0 kg. In normal rounding to the nearest value, the scale showing 80 kg indicates a weight between 79.5 kg and 80.5 kg. The relevant range is that of all real numbers that are greater than or equal to 79.5 and less than or equal to 80.5, or in other words the interval [79.5, 80.5].
For a man who weighs 80 kg and is 1.80 m tall, the BMI is about 24.7. With a weight of 79.5 kg and the same height the value is 24.5, while 80.5 kg gives almost 24.9. So the actual BMI is in the range [24.5, 24.9]. The error in this case does not affect the conclusion (normal weight), but this is not always the case. For example, weight fluctuates in the course of a day, so that the BMI can vary between 24 (normal weight) and 25 (overweight). Without detailed analysis it is not always possible to exclude the question of whether an error is ultimately large enough to have a significant influence.
Interval arithmetic states the range of possible outcomes explicitly. Simply put, results are no longer stated as single numbers, but as intervals which represent imprecise values. The widths of the intervals play a role similar to error bars in expressing the extent of uncertainty. Interval versions of simple arithmetic operations and of elementary functions, such as the trigonometric functions, enable the calculation of outer limits of intervals.
Simple arithmetic
Returning to the earlier BMI example, in determining the body mass index, height and body weight both affect the result. For height, measurements are usually given in whole centimetres: a recorded measurement of 1.80 metres actually means a height somewhere between 1.795 m and 1.805 m. This uncertainty must be combined with the fluctuation range in weight between 79.5 kg and 80.5 kg. The BMI is defined as the weight in kilograms divided by the square of the height in metres. Using either 79.5 kg and 1.795 m or 80.5 kg and 1.805 m gives approximately 24.7. But the person in question may only be 1.795 m tall, with a weight of 80.5 kg – or 1.805 m and 79.5 kg: all combinations of all possible intermediate values must be considered. Using the interval arithmetic methods described below, the BMI lies in the interval
- $[79.5, 80.5] / [1.795, 1.805]^2 \approx [24.4, 25.0]$.
A binary operation $\star$ on two intervals, such as addition or multiplication, is defined by
- $[x_1, x_2] \star [y_1, y_2] = \{ x \star y \mid x \in [x_1, x_2] \text{ and } y \in [y_1, y_2] \}$.
For the four basic arithmetic operations this can become
- $[x_1, x_2] \star [y_1, y_2] = \left[ \min(x_1 \star y_1, x_1 \star y_2, x_2 \star y_1, x_2 \star y_2),\ \max(x_1 \star y_1, x_1 \star y_2, x_2 \star y_1, x_2 \star y_2) \right]$,
provided that $x \star y$ is allowed for all $x \in [x_1, x_2]$ and $y \in [y_1, y_2]$.
For practical applications this can be simplified further:
- Addition: $[x_1, x_2] + [y_1, y_2] = [x_1 + y_1, x_2 + y_2]$
- Subtraction: $[x_1, x_2] - [y_1, y_2] = [x_1 - y_2, x_2 - y_1]$
- Multiplication: $[x_1, x_2] \cdot [y_1, y_2] = [\min(x_1 y_1, x_1 y_2, x_2 y_1, x_2 y_2), \max(x_1 y_1, x_1 y_2, x_2 y_1, x_2 y_2)]$
- Division: $[x_1, x_2] / [y_1, y_2] = [x_1, x_2] \cdot \left(1 / [y_1, y_2]\right)$, where $1 / [y_1, y_2] = [1/y_2, 1/y_1]$ if $0 \notin [y_1, y_2]$.
For division by an interval including zero, first define
- $1 / [y_1, 0] = [-\infty, 1/y_1]$ and $1 / [0, y_2] = [1/y_2, \infty]$.
For $y_1 < 0 < y_2$, we get $1 / [y_1, y_2] = [-\infty, 1/y_1] \cup [1/y_2, \infty]$, which as a single interval gives $[-\infty, \infty]$; this loses the useful information about the gap $(1/y_1, 1/y_2)$. So typically it is common to work with $[-\infty, 1/y_1]$ and $[1/y_2, \infty]$ as separate intervals.
Because several such divisions may occur in an interval arithmetic calculation, it is sometimes useful to do the calculation with so-called multi-intervals of the form $\bigcup_i [a_i, b_i]$. The corresponding multi-interval arithmetic maintains a disjoint set of intervals and also provides for overlapping intervals to unite.[2]
Since a real number $r$ can be interpreted as the (degenerate) interval $[r, r]$, intervals and real numbers can be freely and easily combined.
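These endpoint rules translate directly into code. The following minimal Python sketch is illustrative only (the `Interval` class and its methods are not taken from any particular library); it ignores outward rounding and assumes the divisor interval does not contain zero. The BMI example above appears at the end as a usage example.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [x1, x2] + [y1, y2] = [x1 + y1, x2 + y2]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [x1, x2] - [y1, y2] = [x1 - y2, x2 - y1]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # take the minimum and maximum over all four endpoint products
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        # only valid when the divisor interval does not contain zero
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

# Usage: the BMI example, weight [79.5, 80.5] kg and height [1.795, 1.805] m
weight = Interval(79.5, 80.5)
height = Interval(1.795, 1.805)
print(weight / (height * height))   # roughly Interval(lo=24.4, hi=25.0)
```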
With the help of these definitions, it is already possible to calculate the range of simple functions, such as $f(a, b, x) = a \cdot x + b$. If, for example, $a = [1, 2]$, $b = [5, 7]$ and $x = [2, 3]$, it is clear that
- $f(a, b, x) = [1, 2] \cdot [2, 3] + [5, 7] = [2, 6] + [5, 7] = [7, 13]$.
Interpreting this as a function $f(x) = a \cdot x + b$ of the variable $x$ with interval parameters $a$ and $b$, it is then possible to find the roots of this function. It is then
- $f(x) = a \cdot x + b = 0 \iff x = -b/a = -[5, 7] / [1, 2] = [-7, -2.5]$;
the possible zeros are in the interval $[-7, -2.5]$.
As in the above example, the multiplication of intervals often only requires two multiplications. It is in fact
- $[x_1, x_2] \cdot [y_1, y_2] = [x_1 y_1, x_2 y_2]$, if $x_1, y_1 \geq 0$.
The multiplication can be interpreted as the area of a rectangle with varying edges; the result interval then covers all possible areas, from the smallest to the largest.
The same applies when one of the two intervals is non-positive and the other non-negative. Generally, however, multiplication can produce an interval wider than the true range: for example, squaring $[-1, 1]$ by interval multiplication yields $[-1, 1] \cdot [-1, 1] = [-1, 1]$, whereas the true range of squares is $[0, 1]$. A similar widening occurs in a division if the numerator and denominator both contain zero.
Notation
To make the notation of intervals smaller in formulae, brackets can be used.
So we can use $[x] \equiv [x_1, x_2]$ to represent an interval. For the set of all finite intervals, we can use
- $[\mathbb{R}] := \{ [x_1, x_2] \mid x_1 \leq x_2 \text{ and } x_1, x_2 \in \mathbb{R} \cup \{-\infty, \infty\} \}$
as an abbreviation. For a vector of intervals $\left([x]_1, \ldots, [x]_n\right) \in [\mathbb{R}]^n$ we can also use a bold font: $[\mathbf{x}]$.
Note that in such a compact notation, $[x]$ should not be confused with a so-called improper or single-point interval $[x_1, x_1]$, nor with the lower and upper limits themselves.
Elementary functions
Interval methods can also apply to functions that are not built up from simple arithmetic alone. To use them, the other elementary functions must be redefined for intervals, using their already known monotonicity properties.
For monotonic functions in one variable, the range of values is also easy to obtain. If $f$ is monotonically increasing or decreasing on the interval $[x_1, x_2]$, then for all values $y_1, y_2 \in [x_1, x_2]$ such that $y_1 \leq y_2$, one of the following inequalities applies:
- $f(y_1) \leq f(y_2)$, or $f(y_1) \geq f(y_2)$.
The range corresponding to the interval $[y_1, y_2] \subseteq [x_1, x_2]$ can be calculated by applying the function to the endpoints $y_1$ and $y_2$:
- $f([y_1, y_2]) = \left[ \min\{f(y_1), f(y_2)\}, \max\{f(y_1), f(y_2)\} \right]$.
From this the following basic features for interval functions can easily be defined:
- Exponential function: $a^{[x_1, x_2]} = [a^{x_1}, a^{x_2}]$, for $a > 1$,
- Logarithm: $\log_a [x_1, x_2] = [\log_a x_1, \log_a x_2]$, for positive intervals $[x_1, x_2]$ and $a > 1$,
- Odd powers: $[x_1, x_2]^n = [x_1^n, x_2^n]$, for odd $n \in \mathbb{N}$.
For even powers, the range of values being considered is important, and needs to be dealt with before doing any multiplication. For example, $x^n$ for $x \in [-1, 1]$ should produce the interval $[0, 1]$ when $n = 2, 4, 6, \ldots$. But if $[-1, 1]^n$ is taken by applying interval multiplication of the form $[-1, 1] \cdot [-1, 1] \cdots [-1, 1]$, then the result will appear to be $[-1, 1]$, wider than necessary.
Instead consider the function $x^n$ as a monotonically decreasing function for $x < 0$ and a monotonically increasing function for $x > 0$. So for even $n$:
- $[x_1, x_2]^n = [x_1^n, x_2^n]$, if $x_1 \geq 0$,
- $[x_1, x_2]^n = [x_2^n, x_1^n]$, if $x_2 < 0$,
- $[x_1, x_2]^n = [0, \max\{x_1^n, x_2^n\}]$, otherwise.
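This case analysis is short enough to write out directly. The following small Python sketch is illustrative only (the helper name `interval_pow` is hypothetical, not from any library); it works on plain endpoint pairs and ignores rounding.
```python
def interval_pow(lo, hi, n):
    """Range of x**n over [lo, hi], exploiting monotonicity (illustrative)."""
    if n % 2 == 1:                       # odd power: monotonically increasing
        return (lo ** n, hi ** n)
    if lo >= 0:                          # even power, interval entirely >= 0
        return (lo ** n, hi ** n)
    if hi < 0:                           # even power, interval entirely < 0
        return (hi ** n, lo ** n)
    return (0.0, max(lo ** n, hi ** n))  # interval straddles zero

print(interval_pow(-1.0, 1.0, 2))   # (0.0, 1.0) rather than the loose (-1.0, 1.0)
```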
More generally, one can say that for piecewise monotonic functions it is sufficient to consider the endpoints of the interval , together with the so-called critical points within the interval being those points where the monotonicity of the function changes direction.
For the sine and cosine functions, the critical points are at $\left(\tfrac{1}{2} + n\right)\pi$ and $n\pi$ respectively, for all $n \in \mathbb{Z}$. Only up to five points matter, as the resulting interval will be $[-1, 1]$ if the input interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values – namely $-1$, $0$ and $+1$.
Interval extensions of general functions
In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If $f \colon \mathbb{R}^n \to \mathbb{R}$ is a function from a real vector to a real number, then $[f] \colon [\mathbb{R}]^n \to [\mathbb{R}]$ is called an interval extension of $f$ if
- $[f]([\mathbf{x}]) \supseteq \{ f(\mathbf{x}) \mid \mathbf{x} \in [\mathbf{x}] \}$.
This definition of the interval extension does not determine a unique result. For example, both $[f]([x_1, x_2]) = [e^{x_1}, e^{x_2}]$ and $[g]([x_1, x_2]) = [-\infty, \infty]$ are allowable extensions of the exponential function. Extensions as tight as possible are desirable, taking into account the relative costs of calculation and imprecision; in this case $[f]$ should be chosen, as it gives the tightest possible result.
The natural interval extension is achieved by combining the function rule with the equivalents of the basic arithmetic and elementary functions.
The Taylor interval extension (of degree $k$) is defined, for a $k + 1$ times differentiable function $f$, by
- $[f]([\mathbf{x}]) := f(\mathbf{m}) + \sum_{i=1}^{k} \frac{1}{i!} \, \mathrm{D}^i f(\mathbf{m}) \cdot ([\mathbf{x}] - \mathbf{m})^i + [r]([\mathbf{x}], [\mathbf{x}], \mathbf{m})$,
for some $\mathbf{m} \in [\mathbf{x}]$, where $\mathrm{D}^i f(\mathbf{m})$ is the $i$-th order differential of $f$ at the point $\mathbf{m}$ and $[r]$ is an interval extension of the Taylor remainder
- $r(\mathbf{x}, \boldsymbol{\xi}, \mathbf{m}) = \frac{1}{(k+1)!} \, \mathrm{D}^{k+1} f(\boldsymbol{\xi}) \cdot (\mathbf{x} - \mathbf{m})^{k+1}$.
The vector $\boldsymbol{\xi}$ lies between $\mathbf{x}$ and $\mathbf{m}$ with $\mathbf{x}, \mathbf{m} \in [\mathbf{x}]$, so $\boldsymbol{\xi}$ is also enclosed by $[\mathbf{x}]$. Usually one chooses $\mathbf{m}$ to be the midpoint of the interval and uses the natural interval extension to assess the remainder.
The special case of the Taylor interval extension of degree $k = 1$ is also referred to as the mean value form. For an interval extension $[J_f]([\mathbf{x}])$ of the Jacobian $J_f$ we get
- $[f]([\mathbf{x}]) := f(\mathbf{m}) + [J_f]([\mathbf{x}]) \cdot ([\mathbf{x}] - \mathbf{m})$.
In this way the nonlinear function $f$ is enclosed by linear forms.
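In one dimension the mean value form is easy to sketch. The Python fragment below is illustrative only: the function names are hypothetical, the derivative enclosure is supplied by the caller (in practice it might come from a natural interval extension or automatic differentiation), and rounding is ignored.
```python
def mean_value_form(f, df_range, lo, hi):
    """Enclosure of f over [lo, hi] via the mean value form (illustrative).

    f        -- the real-valued function
    df_range -- callable returning an enclosure (dlo, dhi) of f' over [lo, hi]
    """
    m = 0.5 * (lo + hi)                   # midpoint of the interval
    dlo, dhi = df_range(lo, hi)           # interval enclosure of the derivative
    r = 0.5 * (hi - lo)                   # [x] - m is the symmetric interval [-r, r]
    spread = max(abs(dlo), abs(dhi)) * r  # |[f']([x])| times the radius
    return (f(m) - spread, f(m) + spread)

# Example: f(x) = x**2 + x on [-1, 1]; f'(x) = 2x + 1 is enclosed by [-1, 3]
enc = mean_value_form(lambda x: x * x + x,
                      lambda lo, hi: (2.0 * lo + 1.0, 2.0 * hi + 1.0),
                      -1.0, 1.0)
print(enc)   # (-3.0, 3.0): a valid enclosure, wider than the exact range [-0.25, 2]
```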
Complex interval arithmetic
An interval can also be defined as a locus of points at a given distance from the centre, and this definition can be extended from real numbers to complex numbers.[3] As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers.[4] Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers.[4]
The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic.[4] It can be shown that, as is the case with real interval arithmetic, there is no distributivity between addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers.[4] Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties of ordinary complex conjugates do not hold for complex interval conjugates.[4]
Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, but with the expense that we have to sacrifice other useful properties of ordinary arithmetic.[4]
Interval methods
The methods of classical numerical analysis cannot be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.
Rounded interval arithmetic
In order to work effectively in a real-life implementation, intervals must be compatible with floating-point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available for it. For example, the range of values of the function $f(x, y) = x + y$ for $x \in [0.1, 0.8]$ and $y \in [0.06, 0.08]$ is $[0.16, 0.88]$. Where the same calculation is done with single-digit precision, the result would normally be $[0.2, 0.9]$. But $[0.2, 0.9] \not\supseteq [0.16, 0.88]$, so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of $f([0.1, 0.8], [0.06, 0.08])$ would be lost. Instead, it is the outward rounded solution $[0.1, 0.9]$ which is used.
The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e. up), or rounding towards negative infinity (i.e. down).
The required external rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval can be added.
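Where changing the processor rounding mode is inconvenient, the "add an appropriate small interval" alternative can be approximated by widening each computed endpoint to its neighbouring floating-point number. A minimal Python sketch follows (the function name is illustrative); it assumes Python 3.9+ for math.nextafter and is slightly more pessimistic than true directed rounding.
```python
import math

def add_outward(xlo, xhi, ylo, yhi):
    """Interval addition with outward-widened endpoints (illustrative).

    Python does not expose the FPU rounding mode, so each endpoint of the
    nearest-rounded sum is pushed to the adjacent floating-point number
    (math.nextafter, available from Python 3.9). This is slightly more
    pessimistic than true directed rounding, but never loses part of the
    exact result.
    """
    lo = math.nextafter(xlo + ylo, -math.inf)   # round the lower bound down
    hi = math.nextafter(xhi + yhi, math.inf)    # round the upper bound up
    return lo, hi

print(add_outward(0.1, 0.8, 0.06, 0.08))   # a slightly widened [0.16, 0.88]
```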
Dependency problem
The so-called dependency problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently then this can lead to an unwanted expansion of the resulting intervals.
As an illustration, take the function $f$ defined by $f(x) = x^2 + x$. The values of this function over the interval $[-1, 1]$ are really $[-1/4, 2]$. As the natural interval extension, it is calculated as $[-1, 1]^2 + [-1, 1] = [0, 1] + [-1, 1] = [-1, 2]$, which is slightly larger; we have instead calculated the infimum and supremum of the function $h(x, y) = x^2 + y$ over $x, y \in [-1, 1]$. There is a better expression of $f$ in which the variable $x$ only appears once, namely by rewriting $f(x) = x^2 + x$ as addition and squaring in the quadratic
- $f(x) = \left(x + \tfrac{1}{2}\right)^2 - \tfrac{1}{4}$.
So the suitable interval calculation is
- $f([-1, 1]) = \left([-1, 1] + \tfrac{1}{2}\right)^2 - \tfrac{1}{4} = \left[-\tfrac{1}{2}, \tfrac{3}{2}\right]^2 - \tfrac{1}{4} = \left[0, \tfrac{9}{4}\right] - \tfrac{1}{4} = \left[-\tfrac{1}{4}, 2\right]$
and gives the correct values.
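The effect is easy to reproduce numerically. The following Python sketch (plain endpoint arithmetic, rounding ignored, names purely illustrative) compares the natural extension with the single-occurrence rewriting:
```python
def isq(lo, hi):
    """Tight interval square of [lo, hi] (the even-power rule from above)."""
    if lo >= 0:
        return lo * lo, hi * hi
    if hi < 0:
        return hi * hi, lo * lo
    return 0.0, max(lo * lo, hi * hi)

def f_natural(lo, hi):
    """Natural extension of f(x) = x**2 + x: the variable appears twice."""
    s_lo, s_hi = isq(lo, hi)
    return s_lo + lo, s_hi + hi

def f_single(lo, hi):
    """Same function rewritten as (x + 1/2)**2 - 1/4: x appears only once."""
    s_lo, s_hi = isq(lo + 0.5, hi + 0.5)
    return s_lo - 0.25, s_hi - 0.25

print(f_natural(-1.0, 1.0))   # (-1.0, 2.0)  -- overestimate from the dependency
print(f_single(-1.0, 1.0))    # (-0.25, 2.0) -- the exact range
```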
In general, it can be shown that the exact range of values is achieved if each variable appears only once and if $f$ is continuous inside the box. However, not every function can be rewritten this way.
The dependency problem, with its over-estimation of the value range, can go so far that the result covers an enormous range, preventing more meaningful conclusions.
An additional increase in the range stems from solution sets that do not have the form of an interval vector. For example, the solution set of the linear system
- $x = p$, $y = p$ for $p \in [-1, 1]$
is precisely the line segment between the points $(-1, -1)$ and $(1, 1)$. Interval methods deliver at best the square $[-1, 1] \times [-1, 1]$. The real solution set is contained in this enclosing square (this is known as the wrapping effect).
Linear interval systems
A linear interval system consists of a matrix interval extension $[\mathbf{A}] \in [\mathbb{R}]^{n \times m}$ and an interval vector $[\mathbf{b}] \in [\mathbb{R}]^n$. We want the smallest cuboid $[\mathbf{x}] \in [\mathbb{R}]^m$ containing all vectors $\mathbf{x} \in \mathbb{R}^m$ for which there is a pair $(\mathbf{A}, \mathbf{b})$ with $\mathbf{A} \in [\mathbf{A}]$ and $\mathbf{b} \in [\mathbf{b}]$ satisfying
- $\mathbf{A} \cdot \mathbf{x} = \mathbf{b}$.
For square systems – in other words, for $n = m$ – such an interval vector $[\mathbf{x}]$, which covers all possible solutions, can be found simply with the interval Gauss method. This replaces the numerical operations of the linear algebra method known as Gaussian elimination by their interval versions. However, since this method uses the interval entities $[\mathbf{A}]$ and $[\mathbf{b}]$ repeatedly in the calculation, it can produce poor results for some problems. Hence the result of the interval-valued Gauss elimination only provides a first rough estimate, since although it contains the entire solution set, it also has a large area outside it.
A rough solution $[\mathbf{x}]$ can often be improved by an interval version of the Gauss–Seidel method. The motivation for this is that the $i$-th row of the interval extension of the linear equation
- $\sum_{j} [a_{ij}] \cdot x_j = [b_i]$
can be solved for the variable $x_i$ if the division $1/[a_{ii}]$ is allowed. It is therefore simultaneously
- $x_j \in [x_j]$ and $x_i \in \left([b_i] - \sum_{j \neq i} [a_{ij}] \cdot [x_j]\right) / [a_{ii}]$.
So we can now replace $[x_i]$ by
- $[x_i] \cap \left([b_i] - \sum_{j \neq i} [a_{ij}] \cdot [x_j]\right) / [a_{ii}]$,
and so tighten the vector $[\mathbf{x}]$ element by element. Since the procedure is more efficient for a diagonally dominant matrix, instead of the system $[\mathbf{A}] \cdot \mathbf{x} = [\mathbf{b}]$ one can often try multiplying it by an appropriate rational matrix $\mathbf{M}$, with the resulting matrix equation
- $(\mathbf{M} \cdot [\mathbf{A}]) \cdot \mathbf{x} = \mathbf{M} \cdot [\mathbf{b}]$
left to solve. If one chooses, for example, $\mathbf{M} = \mathbf{A}_c^{-1}$ for the central matrix $\mathbf{A}_c \in [\mathbf{A}]$, then $\mathbf{M} \cdot [\mathbf{A}]$ is an outer extension of the identity matrix.
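A minimal sketch of one interval Gauss–Seidel sweep is given below in Python, with intervals represented as plain (lo, hi) tuples; the function names and the 2 × 2 test system are made up for illustration, outward rounding is ignored, and the diagonal entries are assumed not to contain zero.
```python
def imul(x, y):
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def idiv(x, y):                           # assumes 0 is not contained in y
    return imul(x, (1.0 / y[1], 1.0 / y[0]))

def intersect(x, y):
    return (max(x[0], y[0]), min(x[1], y[1]))

def gauss_seidel_sweep(A, b, x):
    """One interval Gauss-Seidel sweep for the interval system A x = b.

    A is a matrix of (lo, hi) entries, b and x are vectors of (lo, hi)
    intervals; every diagonal entry must exclude zero. Each component of
    the current enclosure x is intersected with the update formula above.
    """
    x = list(x)
    for i in range(len(b)):
        acc = b[i]
        for j in range(len(b)):
            if j != i:
                acc = isub(acc, imul(A[i][j], x[j]))
        x[i] = intersect(x[i], idiv(acc, A[i][i]))
    return x

# A small 2 x 2 system with slightly uncertain coefficients
A = [[(3.9, 4.1), (0.9, 1.1)],
     [(0.9, 1.1), (2.9, 3.1)]]
b = [(4.9, 5.1), (3.9, 4.1)]
x = [(-10.0, 10.0), (-10.0, 10.0)]        # crude initial enclosure
for _ in range(5):                        # a few sweeps tighten the enclosure
    x = gauss_seidel_sweep(A, b, x)
print(x)                                  # a box around the point solution (1, 1)
```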
These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals it can be useful to reduce an interval-linear system to a finite (albeit large) number of real-valued linear systems. If all the matrices $\mathbf{A} \in [\mathbf{A}]$ are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors.
This is only suitable for systems of smaller dimension, since with a fully occupied $n \times n$ matrix, $2^{n^2}$ real matrices need to be inverted, with $2^n$ vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being developed.[5]
Interval Newton method
An interval variant of Newton's method for finding the zeros in an interval vector $[\mathbf{x}]$ can be derived from the mean value extension.[6] For an unknown vector $\mathbf{z} \in [\mathbf{x}]$ applied to $\mathbf{y} \in [\mathbf{x}]$, this gives
- $f(\mathbf{z}) \in f(\mathbf{y}) + [J_f]([\mathbf{x}]) \cdot (\mathbf{z} - \mathbf{y})$.
For a zero $\mathbf{z}$, that is $f(\mathbf{z}) = 0$, and thus $\mathbf{z}$ must satisfy
- $f(\mathbf{y}) + [J_f]([\mathbf{x}]) \cdot (\mathbf{z} - \mathbf{y}) \ni 0$.
This is equivalent to $\mathbf{z} \in \mathbf{y} - [J_f]([\mathbf{x}])^{-1} \cdot f(\mathbf{y})$. An outer estimate of $[J_f]([\mathbf{x}])^{-1} \cdot f(\mathbf{y})$ can be determined using linear methods.
In each step of the interval Newton method, an approximate starting value $[\mathbf{x}]$ is replaced by $[\mathbf{x}] \cap \left(\mathbf{y} - [J_f]([\mathbf{x}])^{-1} \cdot f(\mathbf{y})\right)$, and so the result can be improved iteratively. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result contains all the zeros in the initial range. Conversely, it proves that no zeros of $f$ were in the initial range if a Newton step produces the empty set.
The method converges on all zeros in the starting region. Division by zero can lead to separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method.
As an example, consider the function $f(x) = x^2 - 2$, the starting range $[x] = [-2, 2]$, and the point $y = 0$. We then have $[J_f]([x]) = 2 \cdot [-2, 2] = [-4, 4]$ and the first Newton step gives
- $[-2, 2] \cap \left(0 - \frac{1}{[-4, 4]} \cdot (0 - 2)\right) = [-2, 2] \cap \left([-\infty, -0.5] \cup [0.5, \infty]\right) = [-2, -0.5] \cup [0.5, 2]$.
More Newton steps are then applied separately to $[-2, -0.5]$ and $[0.5, 2]$. These converge to arbitrarily small intervals around $-\sqrt{2}$ and $+\sqrt{2}$.
The interval Newton method can also be used with thick functions such as $f(x) = x^2 - [2, 3]$, which would in any case have interval results. The result then produces intervals containing $\left[-\sqrt{3}, -\sqrt{2}\right] \cup \left[\sqrt{2}, \sqrt{3}\right]$.
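A single step of this process can be written down concretely for the example above. The Python sketch below uses plain (lo, hi) tuples, a hard-coded function and derivative enclosure for $f(x) = x^2 - 2$, and an extended-division helper; all names are illustrative and outward rounding is ignored.
```python
import math

F = lambda x: x * x - 2.0                  # the example function
DF = lambda lo, hi: (2.0 * lo, 2.0 * hi)   # enclosure of f'(x) = 2x over [lo, hi]

def extended_div(c, dlo, dhi):
    """c / [dlo, dhi] as a list of intervals, allowing zero in the divisor."""
    if dlo > 0.0 or dhi < 0.0:             # divisor excludes zero: one piece
        return [tuple(sorted((c / dlo, c / dhi)))]
    if c == 0.0:
        return [(-math.inf, math.inf)]
    pieces = []
    if dlo < 0.0:
        q = c / dlo
        pieces.append((-math.inf, q) if q <= 0.0 else (q, math.inf))
    if dhi > 0.0:
        q = c / dhi
        pieces.append((-math.inf, q) if q <= 0.0 else (q, math.inf))
    return pieces

def newton_step(lo, hi):
    """One interval Newton step; returns the sub-boxes still containing zeros."""
    m = 0.5 * (lo + hi)
    dlo, dhi = DF(lo, hi)
    out = []
    for qlo, qhi in extended_div(F(m), dlo, dhi):
        nlo, nhi = m - qhi, m - qlo        # N = m - F(m) / F'([x])
        ilo, ihi = max(lo, nlo), min(hi, nhi)
        if ilo <= ihi:                     # keep only non-empty intersections
            out.append((ilo, ihi))
    return out

print(newton_step(-2.0, 2.0))   # [(-2.0, -0.5), (0.5, 2.0)], as in the text
```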
Bisection and covers
The various interval methods deliver conservative results, as dependencies between the sizes of different interval extensions are not taken into account. However, the dependency problem becomes less significant for narrower intervals.
Covering an interval vector $[\mathbf{x}]$ by smaller boxes $[\mathbf{x}]^{(1)}, \ldots, [\mathbf{x}]^{(k)}$, so that $[\mathbf{x}] = \bigcup_{i=1}^{k} [\mathbf{x}]^{(i)}$, is then valid for the range of values: $f([\mathbf{x}]) = \bigcup_{i=1}^{k} f([\mathbf{x}]^{(i)})$. So for the interval extensions described above, $[f]([\mathbf{x}]) \supseteq \bigcup_{i=1}^{k} [f]([\mathbf{x}]^{(i)})$ is valid. Since $[f]([\mathbf{x}])$ is often a genuine superset of the right-hand side, this usually leads to an improved estimate.
Such a cover can be generated by the bisection method: thick elements $[x_{i1}, x_{i2}]$ of the interval vector $[\mathbf{x}]$ are split in the centre into the two intervals $\left[x_{i1}, \tfrac{1}{2}(x_{i1} + x_{i2})\right]$ and $\left[\tfrac{1}{2}(x_{i1} + x_{i2}), x_{i2}\right]$. If the result is still not suitable, then further gradual subdivision is possible. Note that a cover of $2^r$ intervals results from $r$ divisions of vector elements, substantially increasing the computation costs.
With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
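The gain from such subdivision can be seen on the dependency example from above. The Python sketch below (plain endpoint arithmetic, rounding ignored, names illustrative) splits $[-1, 1]$ into equal pieces and unites the natural-extension enclosures:
```python
def f_natural(lo, hi):
    """Natural interval extension of f(x) = x**2 + x (see the dependency
    problem above); it overestimates on intervals that straddle zero."""
    if lo >= 0:
        s = (lo * lo, hi * hi)
    elif hi < 0:
        s = (hi * hi, lo * lo)
    else:
        s = (0.0, max(lo * lo, hi * hi))
    return s[0] + lo, s[1] + hi

def minced_range(lo, hi, pieces):
    """Union of the natural extension over `pieces` equal sub-intervals."""
    w = (hi - lo) / pieces
    parts = [f_natural(lo + i * w, lo + (i + 1) * w) for i in range(pieces)]
    return min(p[0] for p in parts), max(p[1] for p in parts)

print(minced_range(-1.0, 1.0, 1))    # (-1.0, 2.0)  -- one box
print(minced_range(-1.0, 1.0, 8))    # (-0.5, 2.0)  -- already tighter
print(minced_range(-1.0, 1.0, 64))   # approaches the exact range (-0.25, 2.0)
```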
Application
Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation or stability analysis) in order to handle estimates for which no exact numerical values can be stated.[7]
Rounding error analysis
Interval arithmetic is used with error analysis to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval which reliably includes the true result. The distance between the interval boundaries gives the current size of the rounding error directly:
- Error $= \mathrm{abs}(a - b)$ for a given interval $[a, b]$.
Interval analysis adds to rather than substituting for traditional methods for error reduction, such as pivoting.
Tolerance analysis
Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely.[2]
If the behavior of such a system affected by tolerances satisfies, for example, $f(\mathbf{x}, \mathbf{p}) = 0$, for $\mathbf{x} \in [\mathbf{x}]$ and unknown $\mathbf{p} \in [\mathbf{p}]$, then the set of possible solutions
- $\{ \mathbf{x} \in [\mathbf{x}] \mid \exists \mathbf{p} \in [\mathbf{p}] : f(\mathbf{x}, \mathbf{p}) = 0 \}$,
can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst case analysis for the distribution of error, as other probability-based distributions are not considered.
Fuzzy interval arithmetic
Interval arithmetic can also be used with membership functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements $x \in [x]$ and $x \notin [x]$, intermediate values are also possible, to which real numbers $\mu \in [0, 1]$ are assigned. Here $\mu = 1$ corresponds to definite membership while $\mu = 0$ is non-membership. A distribution function assigns uncertainty, which can be understood as a further interval.
For fuzzy arithmetic[8] only a finite number of discrete membership stages $\mu_i \in [0, 1]$ are considered. The form of such a distribution for a fuzzy value can then be represented by a sequence of intervals
- $\left[x^{(1)}\right] \supset \left[x^{(2)}\right] \supset \cdots \supset \left[x^{(k)}\right]$. The interval $\left[x^{(i)}\right]$ corresponds exactly to the fluctuation range for the stage $\mu_i$.
The appropriate distribution for a function $f(x_1, \ldots, x_n)$ concerning fuzzy values $x_1, \ldots, x_n$ and the corresponding sequences $\left[x_1^{(1)}\right] \supset \cdots \supset \left[x_1^{(k)}\right], \ldots, \left[x_n^{(1)}\right] \supset \cdots \supset \left[x_n^{(k)}\right]$ can be approximated by the sequence $\left[y^{(1)}\right] \supset \cdots \supset \left[y^{(k)}\right]$. The values $\left[y^{(i)}\right]$ are given by $\left[y^{(i)}\right] = f\left(\left[x_1^{(i)}\right], \ldots, \left[x_n^{(i)}\right]\right)$ and can be calculated by interval methods. The value $\left[y^{(1)}\right]$ corresponds to the result of an interval calculation.
History
Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques, nor been completely forgotten.
Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young, a doctoral candidate at the University of Cambridge. Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul Dwyer (University of Michigan); intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Sunaga (1958).[9]
The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966.[10][11] He had the idea in Spring 1958, and a year later he published an article about computer interval arithmetic.[12] Its merit was that starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.
Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals,[13] though Moore found the first non-trivial applications.
In the following twenty years, German groups of researchers carried out pioneering work around Götz Alefeld[14] and Ulrich Kulisch[1] at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimisation, including what is now known as Hansen's method, perhaps the most widely used interval algorithm.[6] Classical methods in this area often have the problem of determining the largest (or smallest) global value, but can only find a local optimum without being able to find better values; Helmut Ratschek and Jon George Rokne developed branch-and-bound methods, which until then had only been applied to integer values, by using intervals to provide applications for continuous values.
In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions for initial value problems using ordinary differential equations.[15]
The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimisation, has contributed significantly to the unification of notation and terminology used in interval arithmetic (Web: Kearfott).
In recent years work has concentrated in particular on the estimation of preimages of parameterised functions and to robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France (Web: INRIA).
Implementations
There are many software packages that permit the development of numerical applications using interval arithmetic.[16] These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.
Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran and Pascal.[17] The first platform was a Zuse Z 23, for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC, a Pascal variant on a Zilog Z80 which made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture, which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library C-XSC supported many different computer systems. In 1997 all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal, in order to correspond to the improved C++ standard.
Another C++-class library was created in 1993 at the Hamburg University of Technology called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user friendly. It emphasized the efficient use of hardware, portability and independence of a particular presentation of intervals.
The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language.[18]
Gaol[19] is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming.
The Frink programming language has an implementation of interval arithmetic which can handle arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.
In addition, computer algebra systems such as Mathematica, Maple and MuPAD can handle intervals. There is a Matlab extension Intlab, which builds on BLAS routines, as well as the toolbox b4m, which provides a Profil/BIAS interface.[20] Moreover, the Euler Math Toolbox includes an interval arithmetic.
A library for the functional language OCaml was written in assembly language and C.[21]
IEEE Std 1788-2015 – IEEE standard for interval arithmetic
A standard for interval arithmetic was approved in June 2015.[22] Two reference implementations are freely available,[23] developed by members of the standard's working group: the libieeep1788[24] library for C++, and the interval package[25] for GNU Octave.
A minimalistic subset of the standard is currently under development, intended to be easier to implement and to speed up the production of implementations.[26]
Conferences and workshops
Several international conferences and workshops take place every year around the world. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics) and REC (International Workshop on Reliable Engineering Computing).
See also
- Affine arithmetic
- Automatic differentiation
- Multigrid method
- Monte-Carlo simulation
- Interval finite element
- Fuzzy number
- Significant figures
References
- Kulisch, Ulrich (1989). Wissenschaftliches Rechnen mit Ergebnisverifikation. Eine Einführung (in German). Wiesbaden: Vieweg-Verlag. ISBN 3-528-08943-1.
- Dreyer, Alexander (2003). Interval Analysis of Analog Circuits with Component Tolerances. Aachen, Germany: Shaker Verlag. ISBN 3-8322-4555-3.
- Petković, Miodrag; Petković, Ljiljana (1998). Complex Interval Arithmetic and Its Applications. Wiley-VCH. ISBN 978-3-527-40134-5.
- Dawood, Hend (2011). Theories of Interval Arithmetic: Mathematical Foundations and Applications. Saarbrücken: LAP LAMBERT Academic Publishing. ISBN 978-3-8465-0154-2.
- Jiri Rohn, List of publications.
- Walster, G. William; Hansen, Eldon Robert (2004). Global Optimization Using Interval Analysis (2nd ed.). New York: Marcel Dekker. ISBN 0-8247-4059-9.
- Jaulin, Luc; Kieffer, Michel; Didrit, Olivier; Walter, Eric (2001). Applied Interval Analysis. Berlin: Springer. ISBN 1-85233-219-0.
- Hanss, Michael. Application of Fuzzy Arithmetic to Quantifying the Effects of Uncertain Model Parameters. University of Stuttgart.
- Sunaga, T. (1958). "Theory of interval algebra and its application to numerical analysis". RAAG Memoirs 2: 29–46.
- Moore, R. E. (1966). Interval Analysis. Englewood Cliffs, New Jersey: Prentice-Hall. ISBN 0-13-476853-1.
- Cloud, Michael J.; Moore, Ramon E.; Kearfott, R. Baker (2009). Introduction to Interval Analysis. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 0-89871-669-1.
- Hansen, E. R. (August 13, 2001). "Publications Related to Early Interval Work of R. E. Moore". University of Louisiana. Retrieved 2015-06-29.
- Precursory papers on interval analysis by M. Warmus. Archived April 18, 2008 at the Wayback Machine.
- Alefeld, Götz; Herzberger, Jürgen. Einführung in die Intervallrechnung. Reihe Informatik (in German) 12. Mannheim, Wien, Zürich: B.I.-Wissenschaftsverlag. ISBN 3-411-01466-0.
- Bounds for ordinary differential equations of Rudolf Lohner (in German).
- Software for Interval Computations, collected by Vladik Kreinovich, University of Texas at El Paso.
- History of XSC-Languages.
- A Proposal to add Interval Arithmetic to the C++ Standard Library.
- Gaol is Not Just Another Interval Arithmetic Library.
- INTerval LABoratory and b4m.
- Alliot, Jean-Marc; Gotteland, Jean-Baptiste; Vanaret, Charlie; Durand, Nicolas; Gianazza, David (2012). Implementing an interval computation library for OCaml on x86/amd64 architectures. 17th ACM SIGPLAN International Conference on Functional Programming.
- IEEE Standard for Interval Arithmetic.
- Revol, Nathalie (2015). The (near-)future IEEE 1788 standard for interval arithmetic. 8th Small Workshop on Interval Methods. Slides (PDF).
- C++ implementation of the preliminary IEEE P1788 standard for interval arithmetic.
- GNU Octave interval package.
- IEEE Project P1788.1.
Further reading
- Hayes, Brian (November–December 2003). "A Lucid Interval" (PDF). American Scientist (Sigma Xi) 91 (6): 484–488. doi:10.1511/2003.6.484.
External links
- Introductory Film (mpeg) of the COPRIN teams of INRIA, Sophia Antipolis
- Bibliography of R. Baker Kearfott, University of Louisiana at Lafayette
- Interval Methods from Arnold Neumaier, University of Vienna
- INTLAB, Institute for Reliable Computing, Hamburg University of Technology