Nonlinear system

This article describes the use of the term nonlinearity in mathematics. For other meanings, see nonlinearity (disambiguation).

In mathematics, a nonlinear system is one that does not satisfy the superposition principle, or one whose output is not directly proportional to its input; a linear system fulfills these conditions. In other words, a nonlinear system is any problem where the variable(s) to be solved for cannot be written as a linear combination of independent components. A nonhomogeneous system, which is linear apart from the presence of a function of the independent variables, is nonlinear according to a strict definition, but such systems are usually studied alongside linear systems, because they can be transformed to a linear system of multiple variables.

Nonlinear problems are of interest to engineers, physicists and mathematicians because most physical systems are inherently nonlinear in nature. Nonlinear equations are difficult to solve and give rise to interesting phenomena such as chaos.[1] The weather is famously chaotic, where simple changes in one part of the system produce complex effects throughout.

Definition

In mathematics, a linear function (or map) f(x) is one which satisfies both of the following properties:

  • Additivity: f(x + y) = f(x) + f(y);
  • Homogeneity of degree 1: f(\alpha x) = \alpha f(x).

(Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity; for example, an antilinear map is additive but not homogeneous.) The conditions of additivity and homogeneity are often combined in the superposition principle

f(\alpha x + \beta y) = \alpha f(x) + \beta f(y) \,

An equation written as

f(x) = C\,

is called linear if f(x) is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0.

The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation of x, the result will be a differential equation.

Nonlinear algebraic equations

Nonlinear algebraic equations, which are also called polynomial equations, are defined by equating polynomials to zero. For example,

x^2 + x - 1 = 0\,.

For a single polynomial equation, root-finding algorithms can be used to find solutions to the equation (i.e., sets of values for the variables that satisfy the equation). However, systems of algebraic equations are more complicated; their study is one motivation for the field of algebraic geometry, a difficult branch of modern mathematics. It is even difficult to decide if a given algebraic system has complex solutions (see Hilbert's Nullstellensatz). Nevertheless, in the case of the systems with a finite number of complex solutions, these systems of polynomial equations are now well understood and efficient methods exist for solving them.
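
As a minimal sketch, the single equation above can be handed to a standard numerical root finder, for instance NumPy's np.roots (one convenient choice among many):

```python
# Minimal sketch: numerically find the roots of x^2 + x - 1 = 0.
# np.roots expects the coefficients ordered from highest degree to lowest.
import numpy as np

roots = np.roots([1, 1, -1])
print(roots)   # approximately 0.618 and -1.618, i.e. (-1 ± sqrt(5)) / 2
```

For systems of several polynomial equations such one-variable root finders no longer suffice, which is where the machinery of algebraic geometry mentioned above becomes relevant.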

Nonlinear recurrence relations

A nonlinear recurrence relation defines successive terms of a sequence as a nonlinear function of preceding terms. Examples of nonlinear recurrence relations are the logistic map and the relations that define the various Hofstadter sequences.
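
As a minimal sketch (the parameter r = 3.9 and seed x0 = 0.2 are arbitrary values chosen only for illustration), the logistic map can be iterated directly:

```python
# Minimal sketch of a nonlinear recurrence relation: the logistic map
# x_{n+1} = r * x_n * (1 - x_n).
def logistic_map(x0, r, n):
    """Return the first n terms of the logistic-map sequence."""
    terms = [x0]
    for _ in range(n - 1):
        terms.append(r * terms[-1] * (1 - terms[-1]))
    return terms

print(logistic_map(0.2, 3.9, 10))   # irregular-looking values in (0, 1)
```

Each new term depends on the square of the previous one, which is exactly the nonlinearity that produces the map's well-known chaotic behaviour for suitable values of r.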

Nonlinear differential equations

A system of differential equations is said to be nonlinear if it is not a linear system. Problems involving nonlinear differential equations are extremely diverse, and methods of solution or analysis are problem dependent. Examples of nonlinear differential equations are the Navier–Stokes equations in fluid dynamics, the Lotka–Volterra equations in biology, and the Black–Scholes PDE in finance.

One of the greatest difficulties of nonlinear problems is that it is not generally possible to combine known solutions into new solutions. In linear problems, for example, a family of linearly independent solutions can be used to construct general solutions through the superposition principle. A good example of this is one-dimensional heat transport with Dirichlet boundary conditions, the solution of which can be written as a time-dependent linear combination of sinusoids of differing frequencies; this makes solutions very flexible. It is often possible to find several very specific solutions to nonlinear equations, however the lack of a superposition principle prevents the construction of new solutions.
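
The heat-transport example can be made concrete. Taking unit length and unit diffusivity (a normalization assumed here purely for illustration), the linear problem reads

\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}, \qquad u(0,t) = u(1,t) = 0\,

and every superposition of the form

u(x,t) = \sum_{n=1}^{\infty} b_n e^{-n^2 \pi^2 t} \sin(n \pi x)\,

is again a solution, with the coefficients b_n fixed by the initial temperature profile. No comparable recipe is available once the equation is nonlinear.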

Ordinary differential equations

First order ordinary differential equations are often exactly solvable by separation of variables, especially for autonomous equations. For example, the nonlinear equation

\frac{\operatorname{d} u}{\operatorname{d} x} = -u^2\,

will easily yield u = 1/(x + C) as a general solution. The equation is nonlinear because it may be written as

\frac{\operatorname{d} u}{\operatorname{d} x} + u^2=0\,

and the left-hand side of the equation is not a linear function of u and its derivatives. Note that if the u2 term were replaced with u, the problem would be linear (the exponential decay problem).
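
The general solution quoted above follows from a one-line separation of variables:

\frac{\operatorname{d} u}{u^2} = -\operatorname{d} x \quad\Rightarrow\quad -\frac{1}{u} = -(x + C) \quad\Rightarrow\quad u = \frac{1}{x + C}\,

where C is an arbitrary constant of integration.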

Second and higher order ordinary differential equations (more generally, systems of nonlinear equations) rarely yield closed form solutions, though implicit solutions and solutions involving nonelementary integrals are encountered.

Common methods for the qualitative analysis of nonlinear ordinary differential equations include:

  • examination of any conserved quantities, especially in Hamiltonian systems;
  • examination of dissipative quantities (see Lyapunov function) analogous to conserved quantities;
  • linearization via Taylor expansion;
  • change of variables into something easier to study;
  • bifurcation theory;
  • perturbation methods (which can also be applied to algebraic equations).

Partial differential equations

The most common basic approach to studying nonlinear partial differential equations is to change the variables (or otherwise transform the problem) so that the resulting problem is simpler (possibly even linear). Sometimes, the equation may be transformed into one or more ordinary differential equations, as seen in the similarity transform or separation of variables, which is always useful whether or not the resulting ordinary differential equation(s) is solvable.

Another common (though less mathematical) tactic, often seen in fluid and heat mechanics, is to use scale analysis to simplify a general, natural equation in a certain specific boundary value problem. For example, the (very) nonlinear Navier–Stokes equations can be simplified into one linear partial differential equation in the case of transient, laminar, one dimensional flow in a circular pipe; the scale analysis provides conditions under which the flow is laminar and one dimensional and also yields the simplified equation.
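
One common form of the resulting linear equation, sketched here under the usual assumptions of constant density \rho, constant kinematic viscosity \nu, axial velocity u(r,t) and an imposed axial pressure gradient \partial p / \partial x, is

\frac{\partial u}{\partial t} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\left(\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r}\right)\,

which is linear in u because the convective term u \cdot \nabla u vanishes identically for fully developed one-dimensional flow.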

Other methods include examining the characteristics and using the methods outlined above for ordinary differential equations.

Pendula

A classic, extensively studied nonlinear problem is the dynamics of a pendulum under the influence of gravity. Using Lagrangian mechanics, it may be shown[2] that the motion of a pendulum can be described by the dimensionless nonlinear equation

\frac{d^2 \theta}{d t^2} + \sin(\theta) = 0\, (in this dimensionless form, g = L = 1)

where gravity points "downwards" and \theta is the angle the pendulum forms with its rest position. One approach to "solving" this equation is to use d\theta/dt as an integrating factor, which would eventually yield

\int \frac{d \theta}{\sqrt{C_0 + 2 \cos(\theta)}} = t + C_1\,

which is an implicit solution involving an elliptic integral. This "solution" generally does not have many uses because most of the nature of the solution is hidden in the nonelementary integral (nonelementary even if C_0 = 0).
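
The elliptic integral arises from one exact integration: multiplying the equation of motion by d\theta/dt gives

\frac{d}{d t}\left[\frac{1}{2}\left(\frac{d \theta}{d t}\right)^2 - \cos(\theta)\right] = 0 \quad\Rightarrow\quad \left(\frac{d \theta}{d t}\right)^2 = C_0 + 2 \cos(\theta)\,

after which separating variables produces the implicit solution above.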

Another way to approach the problem is to linearize any nonlinearities (the sine function term in this case) at the various points of interest through Taylor expansions. For example, the linearization at \theta = 0, called the small angle approximation, is

\frac{d^2 \theta}{d t^2} + \theta = 0\,

since \sin(\theta) \approx \theta for \theta \approx 0. This is a simple harmonic oscillator corresponding to oscillations of the pendulum near the bottom of its path. Another linearization would be at \theta = \pi, corresponding to the pendulum being straight up:

\frac{d^2 \theta}{d t^2} + \pi - \theta = 0\,

since \sin(\theta) \approx \pi - \theta for \theta \approx \pi. The solution to this problem involves hyperbolic sinusoids; unlike the small angle approximation, this approximation is unstable, meaning that |\theta| will usually grow without limit, though bounded solutions are possible. This corresponds to the difficulty of balancing a pendulum upright; it is literally an unstable state.
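
Explicitly, writing \phi = \theta - \pi turns this linearization into \frac{d^2 \phi}{d t^2} = \phi, whose general solution

\phi(t) = A \cosh(t) + B \sinh(t)\,

grows without bound unless the integration constants happen to cancel the exponentially growing part (A = -B).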

One more interesting linearization is possible around \theta = \pi/2, around which \sin(\theta) \approx 1:

\frac{d^2 \theta}{d t^2} %2B 1 = 0.

This corresponds to a free fall problem. A very useful qualitative picture of the pendulum's dynamics may be obtained by piecing together such linearizations. Other techniques may be used to find (exact) phase portraits and approximate periods.
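
A minimal numerical sketch, using SciPy's general-purpose integrator solve_ivp and an arbitrarily chosen initial angle of 1 radian, shows how the full equation and its small-angle linearization drift apart:

```python
# Minimal sketch: compare the full nonlinear pendulum
# d^2(theta)/dt^2 + sin(theta) = 0 (g = L = 1) with its small-angle
# linearization d^2(theta)/dt^2 + theta = 0.
import numpy as np
from scipy.integrate import solve_ivp

def full_pendulum(t, y):
    theta, omega = y
    return [omega, -np.sin(theta)]   # nonlinear restoring term

def linearized_pendulum(t, y):
    theta, omega = y
    return [omega, -theta]           # small-angle approximation

y0 = [1.0, 0.0]                      # start at 1 rad, at rest
t_eval = np.linspace(0.0, 10.0, 101)

full = solve_ivp(full_pendulum, (0.0, 10.0), y0, t_eval=t_eval)
linear = solve_ivp(linearized_pendulum, (0.0, 10.0), y0, t_eval=t_eval)

# Largest angular discrepancy between the two models over the interval.
print(np.max(np.abs(full.y[0] - linear.y[0])))
```

For genuinely small initial angles the two trajectories are nearly indistinguishable, which is the content of the small angle approximation.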

Types of nonlinear behaviors

Examples of nonlinear equations

See also the list of nonlinear partial differential equations

Software for solving nonlinear systems

See also

References

Further reading

  • Diederich Hinrichsen and Anthony J. Pritchard (2005). Mathematical Systems Theory I - Modelling, State Space Analysis, Stability and Robustness. Springer-Verlag. ISBN 978-3-540-44125-0. 
  • Jordan, D. W.; Smith, P. (2007). Nonlinear Ordinary Differential Equations (fourth ed.). Oxford University Press. ISBN 978-0-19-920824-1. 
  • Khalil, Hassan K. (2001). Nonlinear Systems. Prentice Hall. ISBN 0-13-067389-7. 
  • Kreyszig, Erwin (1998). Advanced Engineering Mathematics. Wiley. ISBN 0-471-15496-2. 
  • Sontag, Eduardo (1998). Mathematical Control Theory: Deterministic Finite Dimensional Systems (second ed.). Springer. ISBN 0-387-98489-5. 

External links
