Calculus

This section provides the core curriculum for Calculus I, II, III, and IV. We’ll start with a short, funny video going over some of the basics of calculus.

Why is calculus important for modeling things?

Calculus is the study of change, in the same way that geometry is the study of shape and algebra is the study of operations and their application to solving equations. Any problem involving rates of change, or variables that do not remain constant, requires calculus to solve. Let’s begin with some of the history leading up to the development of calculus.

The great Greek philosopher Zeno of Elea (born sometime between 495 and 480 B.C.) proposed four paradoxes in an effort to challenge the accepted notions of space and time that he encountered in various philosophical circles. His paradoxes confounded mathematicians for centuries, and it wasn’t until Cantor’s development (in the 1860s and 1870s) of the theory of infinite sets that the paradoxes could be fully resolved. Zeno’s paradoxes focus on the relation of the discrete to the continuous, an issue that is at the very heart of mathematics.

“If everything when it occupies an equal space is at rest, and if that which is in locomotion is always occupying such a space at any moment, the flying arrow is therefore motionless.” (Aristotle, Physics VI:9, 239b5)

In the arrow paradox (also known as the fletcher’s paradox), Zeno states that for motion to occur, an object must change the position which it occupies. He gives an example of an arrow in flight. He states that in any one (durationless) instant of time, the arrow is neither moving to where it is, nor to where it is not. It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.

The continuity of space or time, considered by Zeno and others, is represented in mathematics by the continuity of points on a line. As late as the seventeenth century, mathematicians continued to believe, as the ancient Greeks had, that this continuity of points was a simple result of density, meaning that between any two points, no matter how close together, there is always another. This is true, for example, of the rational numbers. However, the rational numbers do not form a continuum, since irrational numbers like √2 are missing, leaving holes or discontinuities. The irrational numbers are required to complete the continuum. Together, the rational and irrational numbers do form a continuous set, the set of real numbers. Thus, the continuity of points on a line is ultimately linked to the continuity of the set of real numbers, by establishing a one-to-one correspondence between the two.

This approach to continuity was first made rigorous in the 1820s by Augustin-Louis Cauchy, who finally began to handle continuity logically. In Cauchy’s view, any line corresponding to the graph of a function is continuous at a point if the value of the function at x, denoted by f(x), gets arbitrarily close to f(p) when x gets close to a real number p. If f(x) is continuous for all real numbers x contained in a finite interval, then the function is continuous in that interval. If f(x) is continuous for every real number x, then the function is continuous everywhere.

Cauchy’s definition of continuity is essentially the one we use today, though somewhat more refined versions were developed in the 1850s, and later in the nineteenth century. For example, the concept of continuity is often described in relation to limits.

Limits

[Figure: whenever a point x is within δ units of c, f(x) is within ε units of L; and for limits at infinity, for all x > S, f(x) is within ε of L.]

Suppose f(x) is a real-valued function and c is a real number. The expression $\lim_{x \to c}f(x) = L$

means that f(x) can be made as close to L as desired by making x sufficiently close to c. In that case, it can be stated that “the limit of f of x, as x approaches c, is L”. Note that this statement can be true even if f(c) ≠ L. Indeed, the function f(x) need not even be defined at c.

For example, if $f(x) = \frac{x^2 - 1}{x - 1}$

then f(1) is not defined (see division by zero), yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2:

x      0.9     0.99    0.999   1.0     1.001   1.01    1.1
f(x)   1.900   1.990   1.999   undef   2.001   2.010   2.100

Thus, f(x) can be made arbitrarily close to the limit of 2 just by making x sufficiently close to 1.
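The convergence in the table above can be reproduced with a short numeric sketch in plain Python (the function f here mirrors the formula just given):

```python
# f(x) = (x^2 - 1)/(x - 1) is undefined at x = 1 (division by zero),
# but its values approach 2 as x approaches 1 from either side.

def f(x):
    return (x**2 - 1) / (x - 1)

for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(f"f({x}) = {f(x):.4f}")
```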

Augustin-Louis Cauchy in 1821, followed by Karl Weierstrass, formalized the definition of the limit of a function into what became known as the (ε, δ)-definition of limit in the 19th century. The definition uses ε (the lowercase Greek letter epsilon) to represent a small positive number, so that “f(x) becomes arbitrarily close to L” means that f(x) lies in the interval (L − ε, L + ε), which can also be written using absolute value as |f(x) − L| < ε. The statement “x approaches c” then indicates that there exists a positive number δ (the lowercase Greek letter delta) such that x lies within either (c − δ, c) or (c, c + δ), which can be expressed with 0 < |x − c| < δ. The first inequality means that the distance between x and c is greater than 0, so that x ≠ c, while the second indicates that x is within distance δ of c.
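The (ε, δ)-definition can be probed numerically. The sketch below (an illustration, not a proof) samples points in the punctured interval around c = 1 and checks that f(x) stays within ε of L = 2; since f(x) simplifies to x + 1 for x ≠ 1, the choice δ = ε works here:

```python
# For lim_{x -> 1} (x^2 - 1)/(x - 1) = 2, check that every sampled x
# with 0 < |x - c| < delta satisfies |f(x) - L| < epsilon.

def f(x):
    return (x**2 - 1) / (x - 1)

def check(c, L, epsilon, delta, samples=1000):
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)      # 0 < offset < delta
        for x in (c - offset, c + offset):
            if abs(f(x) - L) >= epsilon:
                return False
    return True

print(check(c=1.0, L=2.0, epsilon=0.01, delta=0.01))   # delta = epsilon works
print(check(c=1.0, L=2.0, epsilon=0.01, delta=0.5))    # too-large delta fails
```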

In addition to limits at finite values, functions can also have limits at infinity. For example, consider $f(x) = {2x-1 \over x}$
• f(100) = 1.9900
• f(1000) = 1.9990
• f(10000) = 1.9999

As x becomes extremely large, the value of f(x) approaches 2, and the value of f(x) can be made as close to 2 as one could wish just by picking x sufficiently large. In this case, the limit of f(x) as x approaches infinity is 2. In mathematical notation, $\lim_{x \to \infty} f(x) = 2.$
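The same behavior can be watched numerically; in this sketch the gap |f(x) − 2| = 1/x shrinks as x grows:

```python
# f(x) = (2x - 1)/x approaches 2 as x becomes large; the error 1/x
# can be made smaller than any tolerance by taking x large enough.

def f(x):
    return (2 * x - 1) / x

for x in [100, 1000, 10_000, 1_000_000]:
    print(x, f(x))
```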

Principles

Limits and infinitesimals

Calculus is usually developed by manipulating very small quantities. Historically, the first method of doing so was by infinitesimals. These are objects which can be treated like numbers but which are, in some sense, “infinitely small”. An infinitesimal number dx could be greater than 0, but less than any number in the sequence 1, 1/2, 1/3, … and less than any positive real number. Any integer multiple of an infinitesimal is still infinitely small, i.e., infinitesimals do not satisfy the Archimedean property. From this point of view, calculus is a collection of techniques for manipulating infinitesimals. This approach fell out of favor in the 19th century because it was difficult to make the notion of an infinitesimal precise. However, the concept was revived in the 20th century with the introduction of non-standard analysis and smooth infinitesimal analysis, which provided solid foundations for the manipulation of infinitesimals.

In the 19th century, infinitesimals were replaced by limits. Limits describe the value of a function at a certain input in terms of its values at nearby input. They capture small-scale behavior, just like infinitesimals, but use the ordinary real number system. In this treatment, calculus is a collection of techniques for manipulating certain limits. Infinitesimals get replaced by very small numbers, and the infinitely small behavior of the function is found by taking the limiting behavior for smaller and smaller numbers. Limits are the easiest way to provide rigorous foundations for calculus, and for this reason they are the standard approach.

Differential calculus

Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called differentiation. Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the derivative function or just the derivative of the original function. In mathematical jargon, the derivative is a linear operator which inputs a function and outputs a second function. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function, such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on, and uses this information to produce another function. (The function it produces turns out to be the doubling function.)

The most common symbol for a derivative is an apostrophe-like mark called prime. Thus, the derivative of the function f is f′, pronounced “f prime.” For instance, if f(x) = x² is the squaring function, then f′(x) = 2x is its derivative, the doubling function.

If the input of the function represents time, then the derivative represents change with respect to time. For example, if f is a function that takes a time as input and gives the position of a ball at that time as output, then the derivative of f is how the position is changing in time, that is, it is the velocity of the ball.

If a function is linear (that is, if the graph of the function is a straight line), then the function can be written as y = mx + b, where x is the independent variable, y is the dependent variable, b is the y-intercept, and: $m= \frac{\text{rise}}{\text{run}}= \frac{\text{change in } y}{\text{change in } x} = \frac{\Delta y}{\Delta x}.$

This gives an exact value for the slope of a straight line. If the graph of the function is not a straight line, however, then the change in y divided by the change in x varies. Derivatives give an exact meaning to the notion of change in output with respect to change in input. To be concrete, let f be a function, and fix a point a in the domain of f. (a, f(a)) is a point on the graph of the function. If h is a number close to zero, then a + h is a number close to a. Therefore (a + h, f(a + h)) is close to (a, f(a)). The slope between these two points is $m = \frac{f(a+h) - f(a)}{(a+h) - a} = \frac{f(a+h) - f(a)}{h}.$

This expression is called a difference quotient. A line through two points on a curve is called a secant line, so m is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). The secant line is only an approximation to the behavior of the function at the point a because it does not account for what happens between a and a + h. It is not possible to discover the behavior at a by setting h to zero because this would require dividing by zero, which is impossible. The derivative is defined by taking the limit as h tends to zero, meaning that it considers the behavior of f for all small values of h and extracts a consistent value for the case when h equals zero: $\lim_{h \to 0}{f(a+h) - f(a)\over{h}}.$

Geometrically, the derivative is the slope of the tangent line to the graph of f at a. The tangent line is a limit of secant lines just as the derivative is a limit of difference quotients. For this reason, the derivative is sometimes called the slope of the function f.

Here is a particular example, the derivative of the squaring function at the input 3. Let f(x) = x² be the squaring function. The derivative f′(x) of a curve at a point is the slope of the line tangent to that curve at that point; this slope is found by considering the limiting value of the slopes of secant lines. [Figure: secant lines approaching a tangent; the curve shown (in red) is f(x) = x³ − x, and the tangent line (in green) through the point (−3/2, −15/8) has slope 23/4. The vertical and horizontal scales in the image differ.] \begin{align}f'(3) &=\lim_{h \to 0}{(3+h)^2 - 3^2\over{h}} \\ &=\lim_{h \to 0}{9 + 6h + h^2 - 9\over{h}} \\ &=\lim_{h \to 0}{6h + h^2\over{h}} \\ &=\lim_{h \to 0} (6 + h) \\ &= 6. \end{align}

The slope of the tangent line to the squaring function at the point (3, 9) is 6; that is to say, the function value is going up six times as fast as the input is going to the right. The limit process just described can be performed for any point in the domain of the squaring function. This defines the derivative function of the squaring function, or just the derivative of the squaring function for short. A computation similar to the one above shows that the derivative of the squaring function is the doubling function.
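The limit computed above can be watched converge numerically: the difference quotient (f(3 + h) − f(3))/h approaches 6 as h shrinks. A minimal sketch:

```python
# Difference quotients of f(x) = x^2 at a = 3 for shrinking h;
# the printed values approach the derivative, 6.

def f(x):
    return x**2

def difference_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

for h in [0.1, 0.01, 0.001, 1e-6]:
    print(h, difference_quotient(f, 3, h))
```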

Leibniz notation

A common notation, introduced by Leibniz, for the derivative in the example above is \begin{align} y=x^2 \\ \frac{dy}{dx}=2x. \end{align}

In an approach based on limits, the symbol dy/dx is to be interpreted not as the quotient of two numbers but as a shorthand for the limit computed above. Leibniz, however, did intend it to represent the quotient of two infinitesimally small numbers, dy being the infinitesimally small change in y caused by an infinitesimally small change dx applied to x. We can also think of d/dx as a differentiation operator, which takes a function as an input and gives another function, the derivative, as the output. For example: $\frac{d}{dx}(x^2)=2x.$
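The operator view of d/dx can be sketched in code: a function goes in, and another function, its derivative, comes out. This sketch is an assumption-laden stand-in for symbolic differentiation; it approximates the derivative with a central difference quotient rather than computing it exactly:

```python
# d_dx acts as an operator: it takes a function f and returns a new
# function approximating f's derivative via a central difference.

def d_dx(f, h=1e-7):
    def derivative(x):
        return (f(x + h) - f(x - h)) / (2 * h)
    return derivative

square = lambda x: x**2
double = d_dx(square)        # d/dx (x^2) = 2x
print(double(3.0))           # approximately 6.0
```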

In this usage, the dx in the denominator is read as “with respect to x”. Even when calculus is developed using limits rather than infinitesimals, it is common to manipulate symbols like dx and dy as if they were real numbers; although it is possible to avoid such manipulations, they are sometimes notationally convenient in expressing operations such as the total derivative.

Integral calculus

Integral calculus is the study of the definitions, properties, and applications of two related concepts, the indefinite integral and the definite integral. The process of finding the value of an integral is called integration. In technical language, integral calculus studies two related linear operators.

The indefinite integral is the antiderivative, the inverse operation to the derivative. F is an indefinite integral of f when f is the derivative of F. (This use of upper- and lower-case letters for a function and its indefinite integral is common in calculus.)

The definite integral inputs a function and outputs a number, which gives the area between the graph of the input and the x-axis. The technical definition of the definite integral is the limit of a sum of areas of rectangles, called a Riemann sum.

A motivating example is the distance traveled in a given time. $\mathrm{Distance} = \mathrm{Speed} \cdot \mathrm{Time}$

If the speed is constant, only multiplication is needed, but if the speed changes, then we need a more powerful method of finding the distance. One such method is to approximate the distance traveled by breaking up the time into many short intervals of time, then multiplying the time elapsed in each interval by one of the speeds in that interval, and then taking the sum (a Riemann sum) of the approximate distance traveled in each interval. The basic idea is that if only a short time elapses, then the speed will stay more or less the same. However, a Riemann sum only gives an approximation of the distance traveled. We must take the limit of all such Riemann sums to find the exact distance traveled.

If f(x) in the diagram on the left represents speed as it varies over time, the distance traveled (between the times represented by a and b) is the area of the shaded region s.

To approximate that area, an intuitive method would be to divide up the interval between a and b into a number of equal segments, the length of each segment represented by the symbol Δx. For each small segment, we can choose one value of the function f(x). Call that value h. Then the area of the rectangle with base Δx and height h gives the distance (time Δx multiplied by speed h) traveled in that segment. The sum of all such rectangles gives an approximation of the area between the axis and the curve, which is an approximation of the total distance traveled. A smaller value for Δx will give more rectangles and in most cases a better approximation, but for an exact answer we need to take a limit as Δx approaches zero.
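The rectangle construction just described translates directly into code. This sketch uses left-endpoint sampling and a made-up speed function chosen for illustration; the approximation improves as the number of rectangles grows:

```python
# Approximate the area under f on [a, b] with n rectangles of width dx,
# sampling f at the left endpoint of each segment (a Riemann sum).

def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

speed = lambda t: 2 * t                        # example: speed 2t at time t
for n in [10, 100, 10_000]:
    print(n, riemann_sum(speed, 0.0, 3.0, n))  # approaches the exact area, 9
```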

The symbol of integration is $\int \,$, an elongated S (the S stands for “sum”). The definite integral is written as: $\int_a^b f(x)\, dx.$

and is read “the integral from a to b of f-of-x with respect to x.” The Leibniz notation dx is intended to suggest dividing the area under the curve into an infinite number of rectangles, so that their width Δx becomes the infinitesimally small dx. In a formulation of the calculus based on limits, the notation $\int_a^b \ldots\, dx$

is to be understood as an operator that takes a function as an input and gives a number, the area, as an output; dx is not a number, and is not being multiplied by f(x).

The indefinite integral, or antiderivative, is written: $\int f(x)\, dx.$

Functions differing by only a constant have the same derivative, and therefore the antiderivative of a given function is actually a family of functions differing only by a constant. Since the derivative of the function y = x² + C, where C is any constant, is y′ = 2x, the antiderivative of 2x is given by: $\int 2x\, dx = x^2 + C.$

An undetermined constant like C in the antiderivative is known as a constant of integration.

Fundamental theorem

The fundamental theorem of calculus states that differentiation and integration are inverse operations. More precisely, it relates the values of antiderivatives to definite integrals. Because it is usually easier to compute an antiderivative than to apply the definition of a definite integral, the Fundamental Theorem of Calculus provides a practical way of computing definite integrals. It can also be interpreted as a precise statement of the fact that differentiation is the inverse of integration.

The Fundamental Theorem of Calculus states: If a function f is continuous on the interval [a, b] and if F is a function whose derivative is f on the interval (a, b), then $\int_{a}^{b} f(x)\,dx = F(b) - F(a).$

Furthermore, for every x in the interval (a, b), $\frac{d}{dx}\int_a^x f(t)\, dt = f(x).$
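The theorem can be checked numerically for a concrete pair, f(x) = 2x with antiderivative F(x) = x² (an illustrative example, using a midpoint Riemann sum as a stand-in for the definite integral):

```python
# Check that the Riemann sum of f(x) = 2x over [1, 4] agrees with
# F(4) - F(1) for the antiderivative F(x) = x^2, as the Fundamental
# Theorem of Calculus predicts.

def riemann_sum(f, a, b, n=100_000):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))  # midpoint rule

f = lambda x: 2 * x
F = lambda x: x**2

print(riemann_sum(f, 1.0, 4.0))   # approximately 15
print(F(4.0) - F(1.0))            # exactly 15.0
```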

This realization, made by both Newton and Leibniz, who based their results on earlier work by Isaac Barrow, was key to the massive proliferation of analytic results after their work became known. The fundamental theorem provides an algebraic method of computing many definite integrals, without performing limit processes, by finding formulas for antiderivatives. It is also a prototype solution of a differential equation. Differential equations relate an unknown function to its derivatives, and are ubiquitous in the sciences.

Product Rule

In calculus, the product rule is a formula used to find the derivatives of products of two or more functions. It may be stated thus: $(f\cdot g)'=f'\cdot g+f\cdot g' \,\!$

or in the Leibniz notation thus: $\dfrac{d}{dx}(u\cdot v)=u\cdot \dfrac{dv}{dx}+v\cdot \dfrac{du}{dx}$.

The derivative of the product of three functions is: $\dfrac{d}{dx}(u\cdot v \cdot w)=\dfrac{du}{dx} \cdot v \cdot w + u \cdot \dfrac{dv}{dx} \cdot w + u\cdot v\cdot \dfrac{dw}{dx}$.
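As a quick numeric sanity check of the product rule, the sketch below (with sample functions u = x² and v = sin x, chosen for illustration) compares a central difference quotient of the product with u′v + uv′:

```python
import math

# Compare a numerical derivative of u(x)*v(x) at a sample point with
# the product-rule formula u'(x)v(x) + u(x)v'(x).

def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

u, du = lambda x: x**2, lambda x: 2 * x
v, dv = math.sin, math.cos

x0 = 1.3
lhs = central_diff(lambda x: u(x) * v(x), x0)
rhs = du(x0) * v(x0) + u(x0) * dv(x0)
print(lhs, rhs)    # the two values agree to many decimal places
```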

Quotient Rule

In calculus, the quotient rule is a method of finding the derivative of a function that is the quotient of two other functions for which derivatives exist.

If the function one wishes to differentiate, f(x), can be written as $f(x) = \frac{g(x)}{h(x)}$

and $h(x)\not=0$, then the rule states that the derivative of g(x) / h(x) is $f'(x) = \frac{h(x)g'(x) - h'(x)g(x)}{[h(x)]^2}.$

More precisely, if all x in some open set containing the number a satisfy $h(x)\not=0$, and g′(a) and h′(a) both exist, then f′(a) exists as well and $f'(a)=\frac{h(a)g'(a) - h'(a)g(a)}{[h(a)]^2}.$

This can be extended to the second derivative as well (one can prove it by differentiating $f(x) = g(x)[h(x)]^{-1}$ twice). The result is: $f''(x)=\frac{g''(x)[h(x)]^2-2g'(x)h(x)h'(x)+g(x)[2[h'(x)]^2-h(x)h''(x)]}{[h(x)]^3}.$
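Both the first- and second-derivative formulas can be spot-checked numerically with sample functions g(x) = x² + 1 and h(x) = x + 2 (chosen here purely for illustration; at x = 1 the exact values are f′ = 4/9 and f″ = 10/27):

```python
# Compare the quotient-rule formulas for f = g/h against central
# differences at x0 = 1.

g, dg, d2g = lambda x: x**2 + 1, lambda x: 2 * x, lambda x: 2.0
h, dh, d2h = lambda x: x + 2, lambda x: 1.0, lambda x: 0.0
f = lambda x: g(x) / h(x)

def diff1(f, x, e=1e-6):
    return (f(x + e) - f(x - e)) / (2 * e)

def diff2(f, x, e=1e-4):
    return (f(x + e) - 2 * f(x) + f(x - e)) / e**2

x0 = 1.0
first = (h(x0) * dg(x0) - dh(x0) * g(x0)) / h(x0)**2
second = (d2g(x0) * h(x0)**2 - 2 * dg(x0) * h(x0) * dh(x0)
          + g(x0) * (2 * dh(x0)**2 - h(x0) * d2h(x0))) / h(x0)**3

print(first, diff1(f, x0))     # both near 4/9
print(second, diff2(f, x0))    # both near 10/27
```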

The quotient rule formula can be derived from the product rule and chain rule.