Differential calculus
In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve.
The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point.
Differential calculus and integral calculus are connected by the fundamental theorem of calculus, which states that differentiation is the reverse process to integration.
Differentiation has applications to nearly all quantitative disciplines. For example, in physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous $F = ma$ equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories.
Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra.
Derivative
The derivative of $f(x)$ at the point $x = a$ is defined as the slope of the tangent to the graph of $f$ at the point $(a, f(a))$. In order to gain an intuition for this definition, one must first be familiar with finding the slope of a linear equation, written in the form $y = mx + b$. The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in $y$ by the change in $x$, meaning that $\text{slope} = \frac{\text{change in } y}{\text{change in } x}$. As an example, the graph of $y = 2x + 1$ has a slope of $2$: the points $(0, 1)$ and $(3, 7)$ both lie on the graph, so the slope equals $\frac{7 - 1}{3 - 0} = 2$. For brevity, the change in $y$ divided by the change in $x$ is often written as $\frac{\Delta y}{\Delta x}$, with $\Delta$ being the Greek letter Delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs, for instance $y = x^2$, vary in their steepness. This means that one can no longer pick two arbitrary points and compute the slope. Instead, the slope of the graph is defined using a tangent line, a line that 'just touches' a particular point. The slope of a curve at a particular point is defined as the slope of the tangent to that point. For example, $y = x^2$ has a slope of $4$ at $x = 2$ because the slope of the tangent line to that point is equal to $4$.
The derivative of a function is defined as the slope of this tangent line. Even though the tangent line only touches a single point, it can be approximated by a line that goes through two points. This is known as a secant line. If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar.
The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph $(x, f(x))$ and $(x + \Delta x, f(x + \Delta x))$, where $\Delta x$ is a small number. As before, the slope of the line passing through these two points can be calculated with the formula $\text{slope} = \frac{\Delta y}{\Delta x}$. This gives

$$\text{slope} = \frac{f(x + \Delta x) - f(x)}{\Delta x}.$$
As $\Delta x$ gets closer and closer to $0$, the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as

$$\lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}.$$
The above expression roughly translates to 'as $\Delta x$ gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of $f(x)$; this can be written as $f'(x)$. If $y = f(x)$, the derivative can also be written as $\frac{dy}{dx}$, with $d$ representing an infinitesimal change. For example, $dx$ represents an infinitesimal change in $x$. In summary, if $y = f(x)$, then the derivative of $f(x)$ is

$$f'(x) = \frac{dy}{dx} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x},$$
provided such a limit exists. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of $y = x^2$ is $2x$:

$$\begin{aligned} \frac{dy}{dx} &= \lim_{\Delta x \to 0} \frac{(x + \Delta x)^2 - x^2}{\Delta x} \\ &= \lim_{\Delta x \to 0} \frac{x^2 + 2x\,\Delta x + (\Delta x)^2 - x^2}{\Delta x} \\ &= \lim_{\Delta x \to 0} \frac{2x\,\Delta x + (\Delta x)^2}{\Delta x} \\ &= \lim_{\Delta x \to 0} \left( 2x + \Delta x \right) \end{aligned}$$
As $\Delta x$ approaches $0$, $2x + \Delta x$ approaches $2x$. Therefore, $\frac{dy}{dx} = 2x$. This proof can be generalised to show that $\frac{d(ax^n)}{dx} = anx^{n-1}$ if $a$ and $n$ are constants. For example, $\frac{d(5x^4)}{dx} = 20x^3$. However, many other functions cannot be differentiated as easily as polynomial functions, meaning that sometimes further techniques are needed to find the derivative of a function. These techniques include the chain rule and the product rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability.
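To make the limit definition concrete, the following sketch (an illustration added here, not part of the original text) evaluates the difference quotient of $f(x) = x^2$ at $x = 3$ for shrinking values of $\Delta x$ and compares it with the exact derivative $2x = 6$.

```python
# Numerical illustration of differentiation from first principles:
# the difference quotient (f(x + dx) - f(x)) / dx approaches f'(x) as dx -> 0.

def f(x):
    return x ** 2          # the example function y = x^2

def difference_quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

x = 3.0                    # point at which we differentiate
exact = 2 * x              # derivative of x^2 is 2x

for dx in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    approx = difference_quotient(f, x, dx)
    print(f"dx = {dx:<8} secant slope = {approx:.6f}  (exact tangent slope = {exact})")
```

The printed secant slopes approach $6$, in agreement with the formula $\frac{dy}{dx} = 2x$.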
A closely related concept to the derivative of a function is its differential. When $x$ and $y$ are real variables, the derivative of $f$ at $x$ is the slope of the tangent line to the graph of $f$ at $x$. Because the source and target of $f$ are one-dimensional, the derivative of $f$ is a real number. If $x$ and $y$ are vectors, then the best linear approximation to the graph of $f$ depends on how $f$ changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted $\frac{\partial y}{\partial x}$. The linearization of $f$ in all directions at once is called the total derivative.
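As an illustration of partial derivatives (a sketch; the function $f(x, y) = x^2 y + \sin y$ and the evaluation point are chosen here, not taken from the text), the snippet below approximates $\partial f/\partial x$ and $\partial f/\partial y$ by holding one variable fixed and applying the one-dimensional difference quotient to the other.

```python
import math

# Partial derivatives: differentiate in one direction while holding the
# other variables fixed.  Example function f(x, y) = x^2 * y + sin(y).

def f(x, y):
    return x ** 2 * y + math.sin(y)

def partial_x(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x, y)) / h      # vary x only

def partial_y(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y)) / h      # vary y only

x, y = 1.0, 2.0
print("df/dx ~", partial_x(f, x, y), "  exact:", 2 * x * y)              # 2xy
print("df/dy ~", partial_y(f, x, y), "  exact:", x ** 2 + math.cos(y))   # x^2 + cos(y)
```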
History of differentiation
The concept of a derivative in the sense of a tangent line is a very old one, familiar to Greek geometers such as Euclid, Archimedes and Apollonius of Perga. Archimedes also introduced the use of infinitesimals, although these were primarily used to study areas and volumes rather than derivatives and tangents; see Archimedes' use of infinitesimals.
The use of infinitesimals to study rates of change can be found in Indian mathematics, perhaps as early as 500 AD, when the astronomer and mathematician Aryabhata used infinitesimals to study the orbit of the Moon. The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II; indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem".
The Islamic mathematician Sharaf al-Dīn al-Tūsī, in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. He proved, for example, that the maximum of the cubic $ax^2 - x^3$ occurs when $x = \frac{2a}{3}$, and concluded therefrom that the equation $ax^2 - x^3 = c$ has exactly one positive solution when $c = \frac{4a^3}{27}$, and two positive solutions whenever $0 < c < \frac{4a^3}{27}$. The historian of science Roshdi Rashed has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known.
The modern development of calculus is usually credited to Isaac Newton and Gottfried Wilhelm Leibniz, who provided independent and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes, which had not been significantly extended since the time of Ibn al-Haytham. For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat, Isaac Barrow, René Descartes, Christiaan Huygens, Blaise Pascal and John Wallis. Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general." Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today.
Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy, Bernhard Riemann, and Karl Weierstrass. It was also during this period that differentiation was generalized to Euclidean space and the complex plane.
Applications of derivatives
Optimization
If $f$ is a differentiable function on $\mathbb{R}$ (or an open interval) and $x$ is a local maximum or a local minimum of $f$, then the derivative of $f$ at $x$ is zero. Points where $f'(x) = 0$ are called critical points or stationary points. If $f$ is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points.
If $f$ is twice differentiable, then conversely, a critical point $x$ of $f$ can be analysed by considering the second derivative of $f$ at $x$:
- if it is positive, $x$ is a local minimum;
- if it is negative, $x$ is a local maximum;
- if it is zero, then $x$ could be a local minimum, a local maximum, or neither.
Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints.
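The following sketch (an illustrative example added here; the cubic $f(x) = x^3 - 3x$ is chosen for the demonstration, not taken from the text) applies this recipe: it locates the critical points where $f'(x) = 0$ and classifies each one by the sign of the second derivative.

```python
# Locate and classify the critical points of f(x) = x^3 - 3x.
# f'(x) = 3x^2 - 3 = 0 at x = -1 and x = 1; f''(x) = 6x.

def f(x):
    return x ** 3 - 3 * x

def f_double_prime(x):
    return 6 * x

critical_points = [-1.0, 1.0]   # solutions of f'(x) = 0, found by hand here

for x in critical_points:
    second = f_double_prime(x)
    if second > 0:
        kind = "local minimum"
    elif second < 0:
        kind = "local maximum"
    else:
        kind = "inconclusive (second derivative is zero)"
    print(f"x = {x}: f(x) = {f(x)}, f''(x) = {second} -> {kind}")
```

The output identifies $x = -1$ as a local maximum and $x = 1$ as a local minimum, matching the second derivative test above.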
This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.
In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold then the test is considered to be inconclusive.
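In two variables this becomes an eigenvalue check on the Hessian. The sketch below (illustrative; the saddle function $f(x, y) = x^2 - y^2$ and the use of NumPy are assumptions made here) evaluates the Hessian at the critical point $(0, 0)$ and reads off the classification from the signs of its eigenvalues.

```python
import numpy as np

# Second derivative test in two variables for f(x, y) = x^2 - y^2.
# The gradient (2x, -2y) vanishes at (0, 0); the Hessian there is
# [[2, 0], [0, -2]], whose eigenvalues have mixed signs -> saddle point.

hessian = np.array([[2.0, 0.0],
                    [0.0, -2.0]])

eigenvalues = np.linalg.eigvalsh(hessian)   # Hessians are symmetric matrices

if np.all(eigenvalues > 0):
    verdict = "local minimum"
elif np.all(eigenvalues < 0):
    verdict = "local maximum"
elif np.any(eigenvalues > 0) and np.any(eigenvalues < 0):
    verdict = "saddle point"
else:
    verdict = "test inconclusive"

print("eigenvalues:", eigenvalues, "->", verdict)
```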
Calculus of variations
One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations.
Physics
Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative" (the rate of change over time) is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:
- velocity is the derivative of an object's displacement
- acceleration is the derivative of an object's velocity, that is, the second derivative of an object's position.
For example, if an object's position on a line is given by

$$x(t) = -16t^2 + 16t + 32,$$

then the object's velocity is

$$\dot{x}(t) = x'(t) = -32t + 16,$$

and the object's acceleration is

$$\ddot{x}(t) = x''(t) = -32,$$

which is constant.
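A quick numerical check of this example (a sketch using the illustrative position function above; the evaluation time and step size are arbitrary choices) recovers the velocity and acceleration from the position function alone with central difference quotients.

```python
# Recover velocity and acceleration numerically from the position
# x(t) = -16 t^2 + 16 t + 32 used in the example above.

def x(t):
    return -16 * t ** 2 + 16 * t + 32

def derivative(g, t, h=1e-4):
    # central difference approximation of g'(t)
    return (g(t + h) - g(t - h)) / (2 * h)

t = 0.25
velocity = derivative(x, t)                               # exact: -32*t + 16 = 8
acceleration = derivative(lambda s: derivative(x, s), t)  # exact: -32

print("velocity ~", velocity, "  acceleration ~", acceleration)
```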
Differential equations
A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation

$$F(t) = m \frac{d^2 x}{dt^2}.$$

The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}.$$
Here $u(x, t)$ is the temperature of the rod at position $x$ and time $t$, and $\alpha$ is a constant that depends on how fast heat diffuses through the rod.
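As a rough illustration of how such an equation can be handled numerically (a sketch, not a method described in the text; the rod length, the value of $\alpha$, the grid sizes, and the initial temperature profile are all arbitrary choices made here), an explicit finite-difference scheme replaces the partial derivatives with difference quotients and steps the temperature profile forward in time.

```python
import numpy as np

# Explicit finite-difference sketch for the one-dimensional heat equation
#   du/dt = alpha * d^2u/dx^2
# on a rod of length 1 whose ends are held at temperature 0.

alpha = 0.01                 # diffusion constant (illustrative value)
nx, dx = 51, 1.0 / 50        # spatial grid
dt = 0.4 * dx**2 / alpha     # time step within the stability bound dt <= 0.5*dx^2/alpha

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)        # initial temperature profile

for _ in range(500):
    # second spatial derivative approximated by a central difference
    d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * alpha * d2u      # forward step in time; the ends stay at 0

print("temperature at the midpoint after 500 steps:", u[nx // 2])
```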
Mean value theorem
The mean value theorem gives a relationship between values of the derivative and values of the original function. If $f$ is a real-valued function and $a$ and $b$ are numbers with $a < b$, then the mean value theorem says that under mild hypotheses, the slope between the two points $(a, f(a))$ and $(b, f(b))$ is equal to the slope of the tangent line to $f$ at some point $c$ between $a$ and $b$. In other words,

$$f'(c) = \frac{f(b) - f(a)}{b - a}.$$

In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that $f$ has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of $f$ must equal the slope of one of the tangent lines of $f$. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
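For a concrete check (an illustrative example; the function $f(x) = x^3$ and the interval $[0, 2]$ are chosen here), the snippet below computes the secant slope between the endpoints and exhibits a point $c$ in between at which $f'(c)$ equals it.

```python
# Mean value theorem check for f(x) = x^3 on the interval [a, b] = [0, 2].

def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
secant_slope = (f(b) - f(a)) / (b - a)     # (8 - 0) / 2 = 4

# Solve f'(c) = secant_slope, i.e. 3c^2 = 4, for a point c inside (a, b).
c = (secant_slope / 3) ** 0.5              # c = 2 / sqrt(3) ~ 1.1547

print("secant slope:", secant_slope)
print("c =", c, "with f'(c) =", f_prime(c))   # both slopes equal 4
```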
Taylor polynomials and Taylor series
The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function $f(x)$ at the point $x_0$ is a linear polynomial $a + b(x - x_0)$, and it may be possible to get a better approximation by considering a quadratic polynomial $a + b(x - x_0) + c(x - x_0)^2$. Still better might be a cubic polynomial $a + b(x - x_0) + c(x - x_0)^2 + d(x - x_0)^3$, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients $a$, $b$, $c$, and $d$ that makes the approximation as good as possible.
In the neighbourhood of $x_0$, for $a$ the best possible choice is always $f(x_0)$, and for $b$ the best possible choice is always $f'(x_0)$. For $c$, $d$, and higher-degree coefficients, these coefficients are determined by higher derivatives of $f$: $c$ should always be $\frac{f''(x_0)}{2}$, and $d$ should always be $\frac{f'''(x_0)}{3!}$. Using these coefficients gives the Taylor polynomial of $f$. The Taylor polynomial of degree $n$ is the polynomial of degree $n$ which best approximates $f$, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If $f$ is a polynomial of degree less than or equal to $n$, then the Taylor polynomial of degree $n$ equals $f$.
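To see the coefficient formulas in action (a sketch; the function $e^x$ and the expansion point $x_0 = 0$ are illustrative choices made here), the snippet below builds Taylor polynomials of increasing degree and compares them with the true value at a nearby point.

```python
import math

# Taylor polynomial of f(x) = e^x about x0 = 0, using the coefficient
# f^(k)(x0) / k! for the degree-k term.  Every derivative of e^x is e^x,
# so f^(k)(0) = 1 for all k.

def taylor_exp(x, degree, x0=0.0):
    value = 0.0
    for k in range(degree + 1):
        kth_derivative_at_x0 = math.exp(x0)   # e^x differentiates to itself
        value += kth_derivative_at_x0 * (x - x0) ** k / math.factorial(k)
    return value

x = 0.5
for degree in [1, 2, 3, 5, 8]:
    approx = taylor_exp(x, degree)
    print(f"degree {degree}: {approx:.8f}   error {abs(approx - math.exp(x)):.2e}")
```

The error shrinks rapidly as the degree grows, illustrating how the higher-degree terms refine the linear approximation.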
The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic, but there are smooth functions which are not analytic.
Implicit function theorem
Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if $f(x, y) = x^2 + y^2 - 1$, then the circle is the set of all pairs $(x, y)$ such that $f(x, y) = 0$. This set is called the zero set of $f$, and is not the same as the graph of $f$, which is a paraboloid. The implicit function theorem converts relations such as $f(x, y) = 0$ into functions. It states that if $f$ is continuously differentiable, then around most points, the zero set of $f$ looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of $f$. The circle, for instance, can be pasted together from the graphs of the two functions $y = \pm\sqrt{1 - x^2}$. In a neighborhood of every point on the circle except $(-1, 0)$ and $(1, 0)$, one of these two functions has a graph that looks like the circle.
The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.
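A small sketch (illustrative; the sample points are chosen here) makes the circle example concrete: away from $(\pm 1, 0)$ the circle is locally the graph of $y = \sqrt{1 - x^2}$ or $y = -\sqrt{1 - x^2}$, and the derivative condition that fails at those two points is $\partial f/\partial y = 2y \neq 0$.

```python
import math

# The circle f(x, y) = x^2 + y^2 - 1 = 0 as two pasted-together graphs.

def f(x, y):
    return x ** 2 + y ** 2 - 1

def upper(x):
    return math.sqrt(1 - x ** 2)      # y = +sqrt(1 - x^2), upper half

def lower(x):
    return -math.sqrt(1 - x ** 2)     # y = -sqrt(1 - x^2), lower half

for x in [-0.9, 0.0, 0.9]:
    for branch in (upper, lower):
        y = branch(x)
        # each branch point really lies on the zero set of f ...
        print(f"f({x}, {y:+.4f}) = {f(x, y):.1e};  df/dy = {2 * y:+.4f}")

# ... while at (1, 0) and (-1, 0) the condition df/dy = 2y != 0 fails, and
# neither branch gives the graph of a function of x near those two points.
```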