Leibniz notation
In calculus, the Leibniz notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz, was originally the use of dx and dy and so forth to represent "infinitely small" increments of quantities x and y, just as Δx and Δy represent finite increments of x and y respectively. According to Leibniz, the derivative of y with respect to x, which mathematicians later came to view as
- \frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x},
was the quotient of an infinitely small (i.e., infinitesimal) increment of y by an infinitely small increment of x. Thus if
- y = f(x),
then
- \frac{dy}{dx} = f'(x),
where the right-hand side is Lagrange's notation for the derivative of f at x.
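For example, applying the limit definition above to f(x) = x² gives
- \frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{(x + \Delta x)^2 - x^2}{\Delta x} = \lim_{\Delta x \to 0} \left( 2x + \Delta x \right) = 2x,
in agreement with the familiar rule for differentiating x².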
In Leibniz's notation, the second derivative of y with respect to x is written
- \frac{d^2 y}{dx^2} = f''(x)
and has the units of y divided by the units of x squared. Note that \frac{d^2 y}{dx^2} stands for \frac{d(dy)}{(dx)^2}, the second differential of y over the square of the first differential of x. The denominator is not the differential of x², d(x²), nor is it the second differential of x, d²x.
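For instance, if y = x³, then
- \frac{dy}{dx} = 3x^2 \quad\text{and}\quad \frac{d^2 y}{dx^2} = \frac{d}{dx}\left( \frac{dy}{dx} \right) = 6x,
and if x is measured in seconds and y in meters, the second derivative carries units of meters per second squared, consistent with the dimensional reading above.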
Similarly, although mathematicians may now view an integral as a limit of finite sums,
- \int f(x)\,dx = \lim_{\Delta x \to 0} \sum_{i} f(x_i)\,\Delta x,
Leibniz viewed it as the sum of infinitely many infinitesimal quantities f(x) dx:
- \int f(x)\,dx.
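For example, evaluating \int_0^1 x\,dx as such a limit, with sample points x_i = i/n and interval width Δx = 1/n:
- \int_0^1 x\,dx = \lim_{n \to \infty} \sum_{i=1}^{n} \frac{i}{n} \cdot \frac{1}{n} = \lim_{n \to \infty} \frac{n(n+1)}{2n^2} = \frac{1}{2}.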
History
In the 19th century, mathematicians ceased to take Leibniz's notation for derivatives and integrals literally, because the concept of infinitesimals, as then developed, appeared to contain logical contradictions. A number of 19th-century mathematicians (Cauchy, Weierstrass and others) found logically rigorous ways to treat derivatives and integrals without infinitesimals, using limits as shown above. Nonetheless, Leibniz's notation is still in general use. Although the notation need not be taken literally, it is usually simpler than the alternatives when the technique of separation of variables is used to solve differential equations. In physical applications, one may for example regard f(x) as measured in meters per second and dx in seconds, so that f(x) dx, and hence the value of its definite integral, is in meters. In that way the Leibniz notation is in harmony with dimensional analysis.
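For example, in separation of variables the differentials dy and dx are manipulated as if they were separate quantities. For the equation \frac{dy}{dx} = ky, one writes
- \frac{dy}{y} = k\,dx, \qquad \int \frac{dy}{y} = \int k\,dx, \qquad \ln|y| = kx + C,
so that y = Ae^{kx} for a constant A. The intermediate steps need not be read as literal operations on infinitesimals, but the notation makes the bookkeeping transparent.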
However, in the 1950s and 1960s, Abraham Robinson introduced ways of treating infinitesimals both literally and with logical rigor, thereby rewriting calculus from that point of view. Robinson's methods are not, however, used by most mathematicians. (One mathematician, Jerome Keisler, has gone so far as to write a first-year calculus textbook from Robinson's point of view.)