Differential (calculus)

From Wikipedia, the free encyclopedia

[Figure: The differential dy]

In calculus, a differential is an infinitesimally small change in a variable. Like the familiar Δx, a differential represents a change in a variable; the difference is that a differential (dx) is taken to be infinitely small and thus has no definite numerical value.


Uses

A derivative (of a function of a single variable) is a ratio of two differentials, typically denoted \frac{dy}{dx}, which is the Leibniz-notation equivalent of y'(x) in Lagrange's notation or \dot y in Newton's notation for differentiation. This echoes the slope equation m = \frac{\Delta y}{\Delta x}, with the deltas replaced by differentials to reflect the fact that the x and y values are taken at a single point rather than across two. For a more in-depth explanation see derivative.
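As an illustrative numerical sketch (the function, point, and step sizes below are arbitrary choices, not from the article), the difference quotient \frac{\Delta y}{\Delta x} approaches the derivative \frac{dy}{dx} as Δx shrinks toward zero:

```python
# The difference quotient across two points approaches the derivative
# at one point as the step Δx shrinks. Example: y = x^2, so dy/dx = 2x.

def f(x):
    return x ** 2

a = 3.0  # evaluation point; the exact derivative here is 2 * a = 6
for dx in (1.0, 0.1, 0.01, 0.001):
    slope = (f(a + dx) - f(a)) / dx  # Δy/Δx across the two points a and a+dx
    print(dx, slope)                 # slope approaches 6 as dx shrinks
```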

Integrals often use differentials in their notation, where the differential commonly indicates the variable of integration. In the way the Riemann integral is defined, dx suggests the width of each approximating rectangle; this picture is suggestive but not strictly correct, for the differential represents a linear transformation.
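A minimal numerical sketch of that rectangle picture (the integrand, interval, and subdivision count are illustrative assumptions, not from the article): in a Riemann sum, the "dx" of ∫ f(x) dx plays the role of the rectangle width.

```python
# Left Riemann sum: the role the "dx" plays in integral notation.

def riemann_sum(f, a, b, n):
    dx = (b - a) / n                              # width of each subinterval
    return sum(f(a + i * dx) for i in range(n)) * dx

# Example: ∫_0^1 x^2 dx = 1/3
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000))
```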

The differential as a local linear transformation

Several authors have attempted to define the differential without reference to infinitesimals. This definition is based on the definition in Apostol's book:[1]

Consider a real-valued function f defined on an open subset S of \mathbb{R}^n. If \mathbf{a} \in S is a point in S, then we say that f has a differential at \mathbf{a} if there exists g_\mathbf{a} satisfying:

  1. g_\mathbf{a} is a real-valued function defined in the whole of \mathbb{R}^n
  2. g_\mathbf{a} is linear. That is given \mathbf{t}, \mathbf{t'} \in \mathbb{R}^n and \alpha, \beta \in \mathbb{R}:
    g_\mathbf{a}\left(\alpha \mathbf{t} + \beta \mathbf{t'}\right)  =  \alpha g_\mathbf{a}(\mathbf{t}) + \beta g_\mathbf{a}(\mathbf{t'})
  3. For every \epsilon > 0, there exists a neighborhood N(\mathbf{a}) of \mathbf{a} such that:
    \mathbf{t} \in N(\mathbf{a}) \Longrightarrow \left|f(\mathbf{t})-f(\mathbf{a})-g_\mathbf{a}(\mathbf{t}-\mathbf{a})\right|<\epsilon\left|\mathbf{t}-\mathbf{a}\right|

g_\mathbf{a}\left(\mathbf{t}\right) is often thought of as a function of two n-dimensional variables and written g\left(\mathbf{a}; \mathbf{t}\right). Note, however, that it may not be defined over the whole of S. It is common to write the variables \mathbf{t} = \left(t_1,t_2, \dots, t_n\right) as d\mathbf{x} = \left(dx_1,dx_2, \dots, dx_n\right) and the differential, if it exists, as df\left(\mathbf{x}; d\mathbf{x}\right). It is then possible to prove that the differential is unique and satisfies:

df\left(\mathbf{x}; d\mathbf{x}\right) = \sum_{k=1}^n D_k f\left(\mathbf{x}\right)dx_k,

where D_k f\left(\mathbf{x}\right) are the n partial derivatives of f at \mathbf{x}. This can be written more briefly using the following notation:

df = \frac{\partial f}{\partial x_{1}}dx_1 + \frac{\partial f}{\partial x_{2}}dx_2 + \dots + \frac{\partial f}{\partial x_{n}}dx_n.

In the one dimensional case this becomes:

df = \frac{df}{dx}dx.

This notation is very suggestive, but it should be realised that \frac{df}{dx} is a single, complete symbol, whereas dx is a linear transformation of a one-dimensional space. Thus there is no question of "cancelling" the dx.
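A numerical sketch of the formula df = \sum_{k=1}^n D_k f\left(\mathbf{x}\right)dx_k (the function f, the point, and the displacement below are illustrative assumptions; the partial derivatives are approximated by central differences):

```python
# Approximate the differential df(x; dx) = sum_k D_k f(x) * dx_k and
# compare it with the actual increment f(x + dx) - f(x).

def partial(f, x, k, h=1e-6):
    """Central-difference approximation of the k-th partial derivative at x."""
    xp, xm = list(x), list(x)
    xp[k] += h
    xm[k] -= h
    return (f(xp) - f(xm)) / (2 * h)

def differential(f, x, dx):
    """df(x; dx) = sum over k of D_k f(x) * dx_k, a linear map applied to dx."""
    return sum(partial(f, x, k) * dx[k] for k in range(len(x)))

f = lambda v: v[0] ** 2 + v[0] * v[1]   # example: f(x, y) = x^2 + x*y
x = [1.0, 2.0]
dx = [1e-3, -2e-3]
df = differential(f, x, dx)
increment = f([x[0] + dx[0], x[1] + dx[1]]) - f(x)
print(df, increment)   # the two agree to first order in dx
```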

The existence of all the partial derivatives of f\left(\mathbf{x}\right) at \mathbf{x} is a necessary condition for the existence of a differential at \mathbf{x}, but it is not sufficient. It is possible to prove that if f\left(\mathbf{x}\right) has a differential at \mathbf{x} then it is continuous at \mathbf{x}. However, the following function:

f(x,y)= \left\{\begin{matrix} \frac{xy^{2}}{x^{2}+y^{4}}, & \mbox{if }(x,y)\neq(0,0),\\ 0, & \mbox{if } (x,y)=(0,0) \end{matrix}\right.

has finite directional derivatives in every direction at \left(0,0\right), and therefore has all partial derivatives at the origin. However, it is not continuous at the origin, since it has value \frac{1}{2} at every point on the parabola x = y^2 except the origin itself, where it has value 0. It therefore does not possess a differential at the origin.
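The behaviour of this counterexample can be checked numerically (the sample points and direction below are arbitrary illustrative choices): along the parabola x = y^2 the function stays at \frac{1}{2} arbitrarily close to the origin, while f(0,0) = 0.

```python
import math

# The standard counterexample: directional derivatives exist at the
# origin in every direction, yet f is not continuous there.

def f(x, y):
    if (x, y) == (0.0, 0.0):   # exact zeros only; fine for this demo
        return 0.0
    return x * y ** 2 / (x ** 2 + y ** 4)

# Along the parabola x = y^2 the value is identically 1/2 ...
for y in (0.1, 0.01, 0.001):
    print(f(y ** 2, y))        # stays at 0.5 as we approach the origin

# ... while the directional derivative at the origin in any direction
# (cos θ, sin θ) exists as the limit of (f(t*u) - f(0,0)) / t:
theta = 0.7                    # arbitrary direction
u = (math.cos(theta), math.sin(theta))
t = 1e-6
print((f(t * u[0], t * u[1]) - f(0.0, 0.0)) / t)   # finite value
```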

Confusion

It is common for mathematics students (especially first-year calculus students) to confuse the differential with the derivative. Although they sound similar, their mathematical meanings are distinct.

Although certain algebraic operations, such as cancellation in fractions, can in some cases be applied to differentials, it is important not to carry this convenient property too far. Differentials are not numbers (or variables) and cannot always be treated as numbers. Differentials carry the same units as the variable they are associated with.

History

Differentials were essential to the development of calculus and arose in the same period. However, the innovation that made differentials more apparent and visible was Leibniz notation.

References

  1. ^ Tom M Apostol (1967). Calculus, 2nd Ed. Wiley. ISBN 0-471-00005-1 and ISBN 0-471-00007-8.
