# Theory of Limits

The original formulation of the calculus, as employed by its originators Isaac Newton, Gottfried Leibniz, Jacob and Johann Bernoulli, Guillaume de l'Hôpital, and others, rested on the assertion that there exist infinitesimal quantities: quantities so vanishingly small that an infinite number of them added together can nevertheless yield a finite number. While this notion did permit the ancients to deduce formulas for some classical mathematical objects, for example the circumference and the area of a circle, it was poorly formulated, which led to paradoxes and philosophical disputes.

Infinitesimals were banished from the realm of analysis in the 19th century with the introduction of the concept of limit by Bernhard Bolzano in 1817, and its refinement into the modern ${\displaystyle \epsilon -\delta }$ method by Augustin-Louis Cauchy and, later, Karl Weierstrass.

## A sketch of the theory of limits

We are used to the notion of a function ${\displaystyle y=f(x)}$ as a rule that maps each real number represented by ${\displaystyle x}$ to one and only one real number ${\displaystyle y}$. In mathematical analysis we are concerned with the behavior of functions in the vicinity of a point, whether or not the function is actually defined at that point.

For example, it is easy to see that the functions ${\displaystyle y=x}$ and ${\displaystyle y=\sin x}$ are both equal to zero at ${\displaystyle x=0}$, but it is not so clear what the value of

${\displaystyle y={\frac {\sin(x)}{x}}}$

is at ${\displaystyle x=0}$. Both ${\displaystyle x}$ and ${\displaystyle \sin x}$ are zero at ${\displaystyle x=0}$, but we cannot simply perform the division, because division by zero is not defined in mathematics. Thus we need an approach in which we examine the function in a neighborhood of the point of interest. This approach is called the Theory of Limits.
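Although ${\displaystyle \sin(x)/x}$ is undefined at ${\displaystyle x=0}$, its values at nearby points suggest what the limit should be. A quick numerical sketch (not part of the formal theory, merely an illustration):

```python
import math

# Evaluate sin(x)/x at points approaching x = 0 from the right.
# The quotient is undefined at x = 0 itself, but its values
# approach 1 -- the fact the theory of limits makes precise.
for x in [0.1, 0.01, 0.001, 0.0001]:
    print(f"x = {x:>8}: sin(x)/x = {math.sin(x) / x:.10f}")
```

Each successive value lies closer to 1, consistent with the well-known limit ${\displaystyle \lim _{x\rightarrow 0}\sin(x)/x=1.}$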

## Formal definition of the limit of a function

We say that a real number ${\displaystyle L}$ is the limit of a real-valued function ${\displaystyle f(t)}$ at ${\displaystyle t=t_{0}}$ if for every ${\displaystyle \epsilon >0,}$ there exists ${\displaystyle \delta >0}$ such that whenever ${\displaystyle 0<|t-t_{0}|<\delta }$

${\displaystyle |f(t)-L|<\epsilon .}$

The fundamental notion is that ${\displaystyle \epsilon }$ constrains the range of the function ${\displaystyle f(t)}$ whereas ${\displaystyle \delta }$ constrains the domain of ${\displaystyle f(t).}$ These constraints are in the form of open sets. Because ${\displaystyle \epsilon }$ is arbitrary, the open sets defined by ${\displaystyle \epsilon }$ and ${\displaystyle \delta }$ may be made as small as we desire. In effect, this formal definition is like a microscope of infinite resolution, allowing us to examine the behavior of ${\displaystyle f(t)}$ at any length scale. As we reduce the value of ${\displaystyle \epsilon ,}$ a correspondingly smaller ${\displaystyle \delta }$ is generally required; the definition demands that such a ${\displaystyle \delta }$ can always be found, no matter how small ${\displaystyle \epsilon }$ is made.
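The definition can be exercised numerically. As a hedged sketch (the function ${\displaystyle f(t)=t^{2},}$ the point ${\displaystyle t_{0}=2,}$ and the helper `check_limit` are illustrative choices, not part of the formal theory), the following samples points satisfying ${\displaystyle 0<|t-t_{0}|<\delta }$ and checks that ${\displaystyle |f(t)-L|<\epsilon }$ holds at each:

```python
def check_limit(f, t0, L, eps, delta, samples=1000):
    """Sample points t with 0 < |t - t0| < delta and verify |f(t) - L| < eps.

    This only spot-checks the epsilon-delta condition at finitely many
    points; it illustrates the definition rather than proving the limit.
    """
    for k in range(1, samples + 1):
        for sign in (-1, 1):
            t = t0 + sign * delta * k / (samples + 1)  # 0 < |t - t0| < delta
            if abs(f(t) - L) >= eps:
                return False
    return True

# For f(t) = t^2 at t0 = 2 with L = 4: since |t^2 - 4| = |t - 2||t + 2|,
# choosing delta = min(1, eps / 5) works, because |t + 2| < 5 when |t - 2| < 1.
for eps in (1.0, 0.1, 0.001):
    delta = min(1.0, eps / 5)
    print(f"eps = {eps}: delta = {delta} succeeds: "
          f"{check_limit(lambda t: t * t, 2.0, 4.0, eps, delta)}")
```

The point of the exercise is the quantifier order: for each ${\displaystyle \epsilon }$ we must exhibit a ${\displaystyle \delta }$ that works, and here the algebraic choice ${\displaystyle \delta =\min(1,\epsilon /5)}$ does so.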

We use the notation ${\displaystyle \lim _{t\rightarrow t_{0}}f(t)=L}$ to indicate that ${\displaystyle L}$ is the limit of ${\displaystyle f(t)}$ as ${\displaystyle t}$ approaches ${\displaystyle t_{0}.}$