Consider various applications of integration such as computing volume, arc length, surface area, work, hydrostatic force, centroids of planar regions, and applications to business, economics, and life sciences.

Let $f(x)$ and $g(x)$ be continuous functions such that $f(x) \geq g(x)$ over an interval $[a, b]$. Let $\text{R}$ denote the region bounded above by $f(x)$, below by $g(x)$, on the left by $x = a$, and on the right by $x = b$. The area of $\text{R}$ is given by

$A = \int_{a}^{b} [f(x) - g(x)]dx$

If we want to look at regions bounded by the graphs of functions that cross one another, we modify the formula by using the absolute value function.

$A = \int_{a}^{b} |f(x) - g(x)|dx$

In practice, find the points of intersection of the curves and split the integral at each one, subtracting the *lower* curve from the *upper* curve on each subinterval. For example, if the curves cross once at $x = z$, with $f(x) \geq g(x)$ on $[a, z]$ and $g(x) \geq f(x)$ on $[z, b]$:

$A = \int_{a}^{z} [f(x) - g(x)]dx + \int_{z}^{b} [g(x) - f(x)]dx$
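As a quick numerical sanity check (a sketch using the hypothetical curves $f(x) = x$ and $g(x) = x^3$, which cross at $x = -1, 0, 1$), the absolute-value form and the split form should give the same area:

```python
def integrate(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

# Hypothetical curves: f(x) = x and g(x) = x^3 cross at x = -1, 0, 1.
f = lambda x: x
g = lambda x: x**3

# Absolute-value form over the whole interval:
area = integrate(lambda x: abs(f(x) - g(x)), -1, 1)

# Split form: g is on top over [-1, 0], f is on top over [0, 1]:
split = (integrate(lambda x: g(x) - f(x), -1, 0)
         + integrate(lambda x: f(x) - g(x), 0, 1))

print(area, split)  # both ≈ 0.5
```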

Let $u(y)$ and $v(y)$ be continuous functions such that $u(y) \geq v(y)$ over an interval $[c, d]$ along the $y$-axis. A horizontal strip has a width of $u(y) - v(y)$. Thus, the integration formula for area is

$A = \int_{c}^{d} [u(y) - v(y)]dy$

Area methods can be modified to compute the volume of three-dimensional solids.

The **volume of a solid with a known cross-sectional area** $A(x)$ perpendicular to the $x$-axis is

$V = \int_{a}^{b} A(x) dx$

The **disk method** is used to find a volume generated when a region is revolved about an axis perpendicular to an approximating strip.

$V = \int_{a}^{b} \pi [f(x)]^2 dx$

The **washer method** is used to find a volume generated when a region between two curves is revolved about an axis perpendicular to an approximating strip.

$V = \int_{a}^{b} \pi ([f(x)]^2 - [g(x)]^2) dx$

The **cylindrical shells method** is used to find a volume generated when a region is revolved about an axis parallel to an approximating strip.

$V = \int_{a}^{b} 2 \pi x \ f(x) dx$
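Both methods can be checked numerically against a solid with a known volume. The sketch below (midpoint-rule integration on an assumed unit sphere) recovers $\frac{4}{3}\pi$ with both the disk and shell setups:

```python
from math import sqrt, pi

def integrate(h, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

# Disk method: revolve y = sqrt(1 - x^2), -1 <= x <= 1, about the x-axis.
disk = integrate(lambda x: pi * (1 - x**2), -1, 1)

# Shell method: revolve the region between y = ±sqrt(1 - x^2), 0 <= x <= 1,
# about the y-axis; each shell has radius x and height 2*sqrt(1 - x^2).
shell = integrate(lambda x: 2 * pi * x * 2 * sqrt(1 - x**2), 0, 1)

print(disk, shell, 4 * pi / 3)  # all ≈ 4.18879
```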

In the **polar coordinate system**, points are plotted in relation to a fixed point $O$, called the origin or pole and a fixed ray emanating from the origin called the **polar axis**. We associate with each point $P$ in the plane an ordered pair of numbers $P(r, \theta)$, where $r$ (**radial coordinate**) is the distance from $O$ to $P$ and $\theta$ (**polar angle**) is the angle measured from the polar axis.

**Changing coordinates**

To convert polar coordinates to Cartesian coordinates:

- $x = r\cos(\theta)$
- $y = r\sin(\theta)$

To convert Cartesian coordinates to polar coordinates:

- $r = \sqrt{x^2 + y^2}$
- $\theta = \arctan\left(\frac{y}{x}\right)$, adjusted for the quadrant in which the point lies (the arctangent alone only returns angles in quadrants I and IV)
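In code, the quadrant adjustment is usually handled by a two-argument arctangent. A small sketch using Python's `math.atan2`:

```python
from math import cos, sin, hypot, atan2, pi

def polar_to_cartesian(r, theta):
    return r * cos(theta), r * sin(theta)

def cartesian_to_polar(x, y):
    # atan2(y, x) returns the angle in the correct quadrant,
    # which arctan(y/x) alone cannot do (and it handles x = 0).
    return hypot(x, y), atan2(y, x)

x, y = polar_to_cartesian(2, 2 * pi / 3)   # a point in quadrant II
r, theta = cartesian_to_polar(x, y)
print(x, y, r, theta)  # round trip recovers r = 2, theta = 2*pi/3
```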

The graph of an equation in polar coordinates is the set of all points $P$ whose polar coordinates satisfy the given equation. Graph by constructing a table of values: choose values of $\theta$ and solve for $r$.

Polar area works like Riemann sums in rectangular coordinates, except that the basic units being summed are circular sectors rather than rectangles.

The area of a circular sector of radius $r$ subtending a central angle $\theta$ is given by: $A = \frac{1}{2}r^2\theta$

Using this formula, the area enclosed by the polar curve $r = f(\theta)$ between $\theta = a$ and $\theta = b$ is:

$A = \int_{a}^{b} \frac{1}{2}[f(\theta)]^2 d\theta$
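For example, $r = 2\cos\theta$, $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, traces the circle of radius $1$ centered at $(1, 0)$, so the polar area formula should return $\pi$. A numerical sketch:

```python
from math import cos, pi

def integrate(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dt = (b - a) / n
    return sum(h(a + (i + 0.5) * dt) for i in range(n)) * dt

# Area swept out by r = 2cos(theta) for -pi/2 <= theta <= pi/2.
area = integrate(lambda t: 0.5 * (2 * cos(t)) ** 2, -pi / 2, pi / 2)
print(area)  # ≈ pi
```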

To find the **points of intersection of polar curves**:

- Find all simultaneous solutions of the given system of equations.
- Determine whether the pole $r = 0$ lies on both graphs.
- Graph the curves to look for other points of intersection.

Arc length formula in integral form:

for $y =f(x)$

$s = \int_{a}^{b} \sqrt{1 + \left( \frac{dy}{dx} \right)^2} dx$

for $x =g(y)$

$s = \int_{c}^{d} \sqrt{1 + \left( \frac{dg}{dy} \right)^2} dy$

The **surface area of a solid of revolution** generated by revolving $y = f(x)$, $a \leq x \leq b$, about the $x$-axis:

$S = \int_{a}^{b} 2\pi y \sqrt{1 + \left( \frac{dy}{dx} \right)^2} dx$
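Both formulas are easy to sanity-check on the line $y = x$, $0 \leq x \leq 1$, where the arc length is $\sqrt{2}$ and revolving about the $x$-axis gives a cone with lateral surface area $\pi\sqrt{2}$:

```python
from math import sqrt, pi

def integrate(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

# y = x on [0, 1], so dy/dx = 1 everywhere.
arc = integrate(lambda x: sqrt(1 + 1**2), 0, 1)                   # exact: sqrt(2)
surface = integrate(lambda x: 2 * pi * x * sqrt(1 + 1**2), 0, 1)  # exact: pi*sqrt(2)
print(arc, surface)
```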

**Work done by a constant force**: if a body moves a distance $d$ in the direction of an applied constant force $F$, the work $W$ done is:

$W=Fd$

The **work done by a variable force** $F(x)$ in moving an object along the $x$-axis from $x=a$ to $x=b$ is given by:

$W=\int_{a}^{b} F(x) dx$

**Hooke’s law** states that the force $F$ on a *spring* is proportional to the displacement $x$:

$F(x) = kx$

where $k$ is the **spring constant**.

To model work using Hooke's law, (1) use the given force and displacement to solve for $k$, then (2) apply the *work done by a variable force* formula, using the displacements as the integral's lower and upper limits:

$W = \int_{a}^{b} kx \ dx$

If you are given the work done $W$ instead of a force $F$, integrate first to solve for $k$:

$\int_{a}^{b} kx \ dx = W$

where $a$ and $b$ are the given displacements, then plug $k$ into the *work done by a variable force* formula as above.
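A numerical sketch with hypothetical numbers: a $20\ \text{N}$ force stretches a spring $0.1\ \text{m}$, so $k = 200\ \text{N/m}$, and the work to stretch it from $0$ to $0.3\ \text{m}$ should be $\frac{1}{2}k(0.3)^2 = 9\ \text{J}$:

```python
k = 20 / 0.1  # hypothetical data: 20 N stretches the spring 0.1 m -> k = 200 N/m

def work(a, b, n=100_000):
    """Work stretching from x = a to x = b: midpoint approximation of the integral of kx dx."""
    dx = (b - a) / n
    return sum(k * (a + (i + 0.5) * dx) for i in range(n)) * dx

W = work(0, 0.3)
print(W)  # ≈ 9.0 J, matching (1/2) k (0.3^2 - 0^2)
```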

**Work done in emptying a tank**

$\text{Force} = \text{Density} \times \text{Volume}$

where *density* here means weight density (weight per unit volume), so that the product is a force.

*First*, set up a coordinate system; decide where to place $x = 0$.

To calculate the work done in emptying a tank, use the *work done by a variable force* equation, where the lower limit $a$ is the minimum $x$-value of the liquid and the upper limit $b$ is the maximum $x$-value of the liquid. The force on a "slice" of liquid at position $x$ is its weight: the slice volume $v_{x}$ (e.g. $\pi r^2 dx$) times the density $\rho$. The slice travels a distance $d(x)$, the distance between $x$ and its destination $(\text{larger} - \text{smaller})$, so

$W = \int_{a}^{b} d(x) \, v_{x} \, \rho$

Note that the volume of a “slice” depends on the shape of the tank but *always* uses $dx$ as a dimension. For example a “slice” in a cylindrical tank will have a volume of $\pi r^2 dx$.
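A sketch with hypothetical numbers: a full cylindrical tank of radius $1\ \text{m}$ and height $2\ \text{m}$, pumped out over its top rim, taking the weight density of water to be $9800\ \text{N/m}^3$:

```python
from math import pi

rho = 9800  # weight density of water, N/m^3 (assumed)
r = 1.0     # tank radius, m (hypothetical)

def work(n=100_000):
    """x measured downward from the rim; the liquid occupies 0 <= x <= 2.
    A slice at depth x weighs rho * pi * r^2 * dx and travels distance x."""
    a, b = 0.0, 2.0
    dx = (b - a) / n
    return sum((a + (i + 0.5) * dx) * rho * pi * r**2 for i in range(n)) * dx

W = work()
print(W)  # exact value is 19600*pi ≈ 61575 J
```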

**Hydrostatic force problems**

$\text{Force} = \text{Density} \times \text{Depth} \times \text{Area}$

First, set up a coordinate system, e.g. let $x = 0$ be the waterline. Then let $\text{Depth}$ be the depth of the strip below the surface (the distance from $x$ to the waterline). Remember that $\text{Area}$ is the area of the strip (a "slice") at $x$, *not* the total surface area. Integrate from $x_{min}$ to $x_{max}$ of the fluid.
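A sketch with hypothetical numbers: a vertical rectangular plate $4\ \text{m}$ wide whose top edge is at the waterline and whose bottom edge is $3\ \text{m}$ down, again taking the weight density of water as $9800\ \text{N/m}^3$:

```python
rho = 9800  # weight density of water, N/m^3 (assumed)
w = 4.0     # plate width, m (hypothetical)

def force(n=100_000):
    """x measured downward from the waterline; the plate spans 0 <= x <= 3.
    A strip at depth x has area w * dx, so dF = rho * x * w * dx."""
    a, b = 0.0, 3.0
    dx = (b - a) / n
    return sum(rho * (a + (i + 0.5) * dx) * w for i in range(n)) * dx

F = force()
print(F)  # exact value is 9800 * 4 * (3^2 / 2) = 176400 N
```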

**Moments and center of mass**

$\bar{x}$ is the **center of mass**: the point where the fulcrum should be placed to make the system balance.

**Moments** measure the tendency of a body to rotate about an axis.

Let $\rho$ denote the density of a lamina occupying the region between $f(x)$ and $g(x)$, where $f(x) \geq g(x)$ on $[a, b]$.

- The mass of the lamina is

$m = \rho \int_{a}^{b} [f(x) - g(x)] dx$

- The moment $M_{x}$ with respect to the $x$-axis is

$M_{x} = \rho \int_{a}^{b} \left( \frac{f(x) + g(x)}{2} \right)(f(x) - g(x))dx$

- The moment $M_{y}$ with respect to the $y$-axis is

$M_{y} = \rho \int_{a}^{b} x(f(x) - g(x))dx$

- The coordinates of the center of mass $(\bar{x}, \bar{y})$ of the system are

$\bar{x} = \frac{M_{y}}{m}$

$\bar{y} = \frac{M_{x}}{m}$

where $M_{x}$ is moment about the $x$-axis and $M_{y}$ is moment about the $y$-axis.

**Finding the centroid of a region bounded by two functions**

- Calculate the points of intersection to set the limits of integration

Solve $f(x) = g(x)$; the solutions give the limits of integration $a$ and $b$ in $\int_{a}^{b}$.

- Calculate the total mass

Assuming we are calculating for a lamina with uniform density $\rho = 1$, the mass equals the area (see *area between two curves*)

$\int_{a}^{b} [f(x) - g(x)]dx$

where $f(x) \geq g(x)$ on $[a, b]$.

- Compute the moments

The distance from a vertical strip to the $y$-axis is $x$; its distance to the $x$-axis is the average $\frac{f(x) + g(x)}{2}$, where $f(x) \geq g(x)$ on $[a, b]$. These distances give the moment formulas $M_{y}$ and $M_{x}$ above.

- Solve for the coordinates of the center of mass.
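The steps above can be sketched numerically for the hypothetical region between $f(x) = x$ and $g(x) = x^2$ (with $\rho = 1$), whose centroid is $\left(\frac{1}{2}, \frac{2}{5}\right)$:

```python
def integrate(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

# Hypothetical region between f(x) = x and g(x) = x^2 (intersections: x = 0, 1).
f = lambda x: x
g = lambda x: x**2

m = integrate(lambda x: f(x) - g(x), 0, 1)                          # mass, rho = 1
M_y = integrate(lambda x: x * (f(x) - g(x)), 0, 1)                  # moment about y-axis
M_x = integrate(lambda x: (f(x) + g(x)) / 2 * (f(x) - g(x)), 0, 1)  # moment about x-axis

print(M_y / m, M_x / m)  # centroid ≈ (0.5, 0.4)
```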

**Integration by parts** is the inverse of the product rule for differentiation: $d(uv) = u \ dv + v \ du$.

Integrate both sides to find the formula for integration by parts: $uv = \int u \ dv + \int v \ du$.

Rewritten as

$\int u \ dv = uv - \int v \ du$

If the derivative of $u$ is eventually $0$, you can use a table to obtain the answer.

Example: $\int x^3 \sin(x)dx$

Derivatives | Integrals |
---|---|
$x^3$ | $\sin(x)$ |
$3x^2$ | $-\cos(x)$ |
$6x$ | $-\sin(x)$ |
$6$ | $\cos(x)$ |
$0$ | $\sin(x)$ |

Use the table to form the answer: multiply each derivative by the integral one row below it (the *next* integral in the table), alternating signs $+, -, +, -, \dots$

Answer: $x^3(-\cos x) - 3x^2(-\sin x) + 6x(\cos x) - 6(\sin x) + C$
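One way to check the result without redoing the table: differentiate the answer numerically and compare it to the original integrand $x^3 \sin x$ at a few sample points:

```python
from math import sin, cos

def F(x):
    """Antiderivative produced by the table method (without + C)."""
    return x**3 * (-cos(x)) - 3 * x**2 * (-sin(x)) + 6 * x * cos(x) - 6 * sin(x)

def dF(x, h=1e-6):
    """Central-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

# F'(x) should reproduce the integrand x^3 * sin(x).
checks = [(dF(x), x**3 * sin(x)) for x in (0.5, 1.0, 2.0)]
print(checks)
```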

**Trigonometric integrals** of the form

$\int \sin^m x \cos^n x \ dx$

For even powers, use the half-angle identities $\sin^2 x = \frac{1 - \cos 2x}{2}$ and $\cos^2 x = \frac{1 + \cos 2x}{2}$.

For odd powers, peel off one trig function to become part of $du$ and convert the rest using $\sin^2 x + \cos^2 x = 1$.

**Partial fraction decomposition** can be thought of as the reverse of adding fractional algebraic expressions, and it allows us to break up rational expressions into simpler terms.

General rules:

- Linear factors get constant numerators $A$
- Irreducible quadratic factors get linear numerators $Bx + C$
- Repeated factors get one term for each power

**Improper integrals**: from finite to $\infty$

$\int_{1}^{\infty} f(x) dx = \lim\limits_{N \to \infty} \int_{1}^{N} f(x) dx$

If the limit is finite, we say that the improper integral **converges**, otherwise, the integral **diverges**.

Rule:

$\int_{1}^{\infty} \frac{1}{x^p} dx \hspace{1em} \text{converges} \Leftrightarrow p > 1$
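The rule can be illustrated numerically: partial integrals of $1/x^2$ level off near $1$, while partial integrals of $1/x$ keep growing like $\ln N$:

```python
def tail(p, N, n=200_000):
    """Midpoint approximation of the integral of 1/x^p from 1 to N."""
    dx = (N - 1) / n
    return sum((1 + (i + 0.5) * dx) ** (-p) * dx for i in range(n))

# p = 2: values settle toward 1 (converges).
# p = 1: values keep growing like ln(N) (diverges).
for N in (10, 100, 1000):
    print(N, tail(2, N), tail(1, N))
```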

From $-\infty$ to $\infty$

$\int_{-\infty}^{\infty} f(x) dx = \int_{-\infty}^{0} f(x) dx + \int_{0}^{\infty} f(x) dx$

Derivatives of the **hyperbolic functions**:

$\frac{d}{dx} \cosh x = \sinh x$

$\frac{d}{dx} \sinh x = \cosh x$

A **sequence** is a function whose domain is $\N$. The functional values $a_1, a_2, a_3, ...$ are called the **terms** of the sequence, and $a_n$ is called the **$n$th term**, or **general term**, of the sequence written as

$(a_n) \hspace{1em} \text{or} \hspace{1em} \{a_n\}$

Growth (quickest to slowest) |
---|
$n^n$ |
$n!$ |
$3^n$ |
$e^n$ |
$n^4$ |
$n^2$ |
$n$ |
$\sqrt{n}$ |
$\sqrt[4]{n}$ |
$\ln{n}$ |

If the **bottom** of a limit is more “powerful,” the fraction goes to $0$: $\lim\limits_{n \to \infty} \frac{n}{n^2} = 0$

If the **top** of a limit is more “powerful,” the fraction goes to $\infty$: $\lim\limits_{n \to \infty} \frac{n^2}{n} = \infty$

If the top and bottom are equally “powerful,” break out the coefficients: $\lim\limits_{n \to \infty} \frac{2n}{3n} = \frac{2}{3}$

$\lim\limits_{k \to \infty} \left(1 + \frac{1}{k} \right)^k = e$

$\lim\limits_{k \to \infty} k^{\frac{1}{k}} = 1$

$\lim\limits_{k \to \infty} \sqrt[k]{a} = 1 \hspace{1em} (a > 0)$

Name | Condition |
---|---|
Strictly increasing | $a_{n + 1} > a_n$ for all $n$ |
Increasing | $a_{n + 1} \geq a_n$ for all $n$ |
Strictly decreasing | $a_{n + 1} < a_n$ for all $n$ |
Decreasing | $a_{n + 1} \leq a_n$ for all $n$ |
Monotone | if either increasing or decreasing |
Bounded above by $M$ | $a_n \leq M$ for all $n$ |
Bounded below by $m$ | $a_n \geq m$ for all $n$ |
Bounded | if both bounded above and below |

A **series** is the sum of a sequence; an expression of the form

$a_1 + a_2 + a_3 + ... = \sum_{k=1}^\infty a_k$

and the **$n$th partial sum** of the series is

$S_n = a_1 + a_2 + ... + a_n = \sum_{k=1}^n a_k$

The series converges if the sequence of partial sums converges.

A **geometric series**

$ar^{n} + ar^{n+1} + ... = \sum_{k=n}^\infty ar^k
\left\{
\begin{array}{l}
\text{converges to} \hspace{1em} \frac{ar^n}{1 - r} \hspace{1em} \text{if} \hspace{1em} |r| < 1\\
\text{diverges if} \hspace{1em} |r| \geq 1
\end{array}
\right.$

where $ar^n$ is the first term, $r$ is the **common ratio**, and $n$ is the starting index.
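A quick check with hypothetical values $a = 3$, $r = \frac{1}{2}$, starting at $n = 2$: the closed form $\frac{ar^n}{1 - r}$ should match a long partial sum:

```python
a, r, n0 = 3.0, 0.5, 2  # hypothetical: sum of 3 * (1/2)^k for k >= 2

closed = a * r**n0 / (1 - r)                         # ar^n / (1 - r)
partial = sum(a * r**k for k in range(n0, n0 + 60))  # 60 terms is plenty for |r| < 1

print(closed, partial)  # both ≈ 1.5
```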

For **telescoping series**, in the form

$\sum_{k=1}^\infty \frac{1}{k(k + 1)}$

use partial fractions.

$\sum_{k=1}^\infty \left( \frac{1}{k} - \frac{1}{k + 1} \right)$

Notice how part of the terms cancel out.

$S_n =
\left(1 - \cancel{\frac{1}{2}}\right) +
\left( \cancel{\frac{1}{2}} - \cancel{\frac{1}{3}} \right) +
\left( \cancel{\frac{1}{3}} - \cancel{\frac{1}{4}} \right) +
... +
\left( \cancel{\frac{1}{n}} - \frac{1}{n + 1} \right)
= 1 - \frac{1}{n + 1}$

Therefore

$\lim\limits_{n \to \infty} S_n = \lim\limits_{n \to \infty} \left( 1 - \frac{1}{n + 1} \right) = 1$

The convergence or divergence of an infinite series is determined by the behavior of its $n$th partial sum, $S_n$, as $n \to \infty$. Unfortunately, it is often difficult or impossible to find a usable formula for the $n$th partial sum of a series, and other techniques must be used.

**Theorem: The divergence test**. If $\lim\limits_{k \to \infty} a_k \neq 0$ (or the limit does not exist), then the series $\Sigma a_k$ must diverge.

Warning: You can never say a series *converges* because of the *divergence test*.

**Theorem: Convergence criterion for series with non-negative terms**. A series $\Sigma a_k$ with $a_k \geq 0$ for all $k$ converges if its sequence of partial sums is bounded from above and diverges otherwise.

**Theorem: The integral test**. If $a_k = f(k)$ for $k = 1, 2, ...$, where $f$ is a positive, continuous, and decreasing function of $x$ for $x \geq 1$, then

$\sum_{k=1}^\infty a_k \hspace{1em} \text{and} \hspace{1em} \int_{1}^{\infty} f(x) dx$

either both converge or both diverge.

The **harmonic series**

$\sum_{k=1}^\infty \frac{1}{k}$

diverges by the integral test.

$\int_{1}^\infty \frac{1}{x} dx = \lim\limits_{b \to \infty} \int_{1}^{b} \frac{1}{x} dx = \lim\limits_{b \to \infty} [\ln b - \ln 1] = \infty$
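Numerically, the partial sums of the harmonic series track $\ln n$ (they differ by roughly Euler's constant, $\approx 0.577$) and never level off:

```python
from math import log

# Partial sums of the harmonic series grow like ln(n) and never level off.
S, report = 0.0, 10
for n in range(1, 100_001):
    S += 1 / n
    if n == report:
        print(n, round(S, 4), round(log(n), 4))
        report *= 10
```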

The harmonic series is a special case of a more general series called a p-series.

$\sum_{k=1}^\infty \frac{1}{k^p}$

where $p$ is a positive constant.

**Theorem: The p-series test**. The p-series $\sum_{k=1}^\infty \frac{1}{k^p}$ converges if $p > 1$ and diverges if $p \leq 1$.

**Theorem: Direct comparison test**. Suppose $0 \leq a_k \leq b_k$ for all $k$.

If $\Sigma b_k$ converges, then $\Sigma a_k$ converges.

If $\Sigma a_k$ diverges, then $\Sigma b_k$ diverges.

**Theorem: Limit comparison test**. Suppose $a_k > 0$ and $b_k > 0$ for all sufficiently large $k$, and

$\lim\limits_{k \to \infty} \frac{a_k}{b_k} = L$

where $L$ is finite and positive (neither zero nor infinity). Then $\Sigma a_k$ and $\Sigma b_k$ either both converge or both diverge.

**Theorem: Zero-infinity limit comparison test**. Suppose $a_k > 0$ and $b_k > 0$ for all sufficiently large $k$.

If $\lim\limits_{k \to \infty} \frac{a_k}{b_k} = 0$ and $\Sigma b_k$ converges, then the series $\Sigma a_k$ converges.

If $\lim\limits_{k \to \infty} \frac{a_k}{b_k} = \infty$ and $\Sigma b_k$ diverges, then the series $\Sigma a_k$ diverges.

**Theorem: The ratio test**. Given the series $\Sigma a_k$ with $a_k > 0$, and $\lim\limits_{k \to \infty} \frac{a_{k + 1}}{a_k} = L$

The **ratio test** states the following:

- If $L < 1$, then $\Sigma a_k$ converges.
- If $L > 1$, or if $L$ is infinite, then $\Sigma a_k$ diverges.
- If $L = 1$, then the test is inconclusive.

**Theorem: The root test**. Given the series $\Sigma a_k$ with $a_k > 0$, and $\lim\limits_{k \to \infty} \sqrt[k]{a_k} = L$

The **root test** states the following:

- If $L < 1$, then $\Sigma a_k$ converges.
- If $L > 1$, or if $L$ is infinite, then $\Sigma a_k$ diverges.
- If $L = 1$, then the test is inconclusive.

**Theorem: The alternating series test**. An alternating series

$\sum (-1)^k a_k \hspace{1em} \text{or} \hspace{1em} \sum (-1)^{k+1} a_k$

with $a_k > 0$ converges if both

- $\lim\limits_{k \to \infty} a_k = 0$
- $\{a_k\}$ is a decreasing sequence

**Theorem: The absolute convergence test**. A series of real numbers $\Sigma a_k$ must converge if the related absolute value series $\Sigma |a_k|$ converges.
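The alternating harmonic series $\sum (-1)^{k+1}/k$ illustrates the distinction: it converges (to $\ln 2$) by the alternating series test, but not absolutely, since $\sum 1/k$ diverges. A numerical sketch:

```python
from math import log

# 200,000 terms of the alternating harmonic series.
S = sum((-1) ** (k + 1) / k for k in range(1, 200_001))
print(S, log(2))  # S ≈ ln(2) ≈ 0.693147
```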