Corrections to

A Radical Approach to Real Analysis

2nd printing

Maple programs for the exercises are available from Tommy Ratliff at Wheaton College.

Line 6b refers to the 6th line from the bottom of the page.

Mathematical notation is written using LaTeX commands.

page 9, second line of exercise 5, last term should read

$ { (-1)^{n-1} \over 2n-1 } \cos{ (2n-1) \pi x \over 2}$
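As a quick numerical sanity check on the corrected coefficient (a sketch, not part of the errata): at $x = 0$ every cosine equals 1, so the series reduces to the Leibniz series $1 - 1/3 + 1/5 - \cdots = \pi/4$, and the partial sums should approach that value.

```python
from math import cos, pi

def partial_sum(x, n_terms):
    """Partial sum of sum_{n>=1} (-1)^(n-1)/(2n-1) * cos((2n-1)*pi*x/2),
    using the corrected coefficient 1/(2n-1)."""
    return sum((-1) ** (n - 1) / (2 * n - 1) * cos((2 * n - 1) * pi * x / 2)
               for n in range(1, n_terms + 1))

# At x = 0 the series is the Leibniz series, whose sum is pi/4.
print(partial_sum(0.0, 100000))  # approaches pi/4 ≈ 0.7853982
```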

page 19, lines 11 & 12 should read:

The following Mathematica program will alternately sum r terms from the alternating harmonic series with odd denominators and then subtract the next s terms with even ...
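A rough Python equivalent of the program described (the published program is in Mathematica; the function name and tolerances here are my own). The rearrangement that takes $r$ odd-denominator terms, then $s$ even-denominator terms, converges to $\ln 2 + \frac{1}{2}\ln(r/s)$, which the sketch below illustrates numerically.

```python
def rearranged_sum(r, s, blocks):
    """Alternately add r odd-denominator terms and subtract s even-denominator
    terms of the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ..."""
    total = 0.0
    odd, even = 1, 2  # next unused odd and even denominators
    for _ in range(blocks):
        for _ in range(r):
            total += 1.0 / odd
            odd += 2
        for _ in range(s):
            total -= 1.0 / even
            even += 2
    return total

# With r = 2, s = 1 the rearranged series converges to
# ln 2 + (1/2) ln 2 = (3/2) ln 2 ≈ 1.0397.
print(rearranged_sum(2, 1, 100000))
```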

page 35, line 9: $1 + 1/2 + 1/3 + \cdots + 1/n$ should read $1 + 1/2 + 1/3 + \cdots + 1/(n-1)$.

page 106, line 12, the Hint should read: "Prove that $\ln(1+x) < x$ when $x>0$ and therefore"

page 109: line 15b should end with

$(y_2 - x_2)/2^{k-2} = (y_1-x_1)/2^{k-1}$,

page 116: line 10: after $f(x) > f(x_2)$ insert: (find $x'$ for which $f(x_1) < f(x')$ and $x''$ for which $f(x'') > f(x_2)$ and choose as $x$ whichever of these two values gives the greater value)

page 117: lines 14-16: change to read:

$f'(x) > 0$ for all $x \in [a,b]$, then $f$ is strictly increasing over $[a,b]$, ($a \leq x_1 < x_2 \leq b$ implies that $f(x_1) < f(x_2)$). Hint: Let $\cal{S}$ be the set of all $x$ for which $a \leq x \leq x_2$ and $f(x) \geq f(x_2)$.

line 18: change to:

(a) Use the fact that $f'(B) > 0$ to prove that ...

line 21: change $f'(x) \geq 0$ to $f'(x) > 0$

page 118: line 4: "is largest" should read "is the largest".

page 177: equations 5.1 and 5.2: in both equations, the coefficient of the cosine should be ${(-1)^{k-1} \over 2k-1}$ rather than just $(-1)^{k-1}$.

page 178: line 4: $\lim_{x\to 0}$ in middle limit should read $\lim_{y \to 0}$

page 181: line 21: "section 4.3" should read "section 4.4".

page 182: line 3b: "(5.13)" should read "(5.11)".

page 196: line 4: "uniform continuity" should read "uniform convergence".

page 199: line 4b change "an infinite series for which" to "an infinite series that converges at $x=a$ and for which"

page 200: line 9: Change to:

"Of course, we do not yet know that $F(x) = \sum_{k=1}^{\infty}f_k(x)$ exists when $x$ is not $a$, but let us suspend that question for a moment. We know that we can control the size of $\cal{E}_n(x,a)$. We assume that $F(x)$ exists in a neighborhood of $a$ and rewrite the quantity to be bounded as:"

change lines 6b through line 1 on page 201 to read:

"Knowing what we need, we begin the proof by defining


\[ g_k(x) = \frac{ f_k(x) - f_k(a)}{x-a}, \quad x \neq a. \]


We shall show that $\sum_{k=1}^{\infty} g_k(x)$ converges uniformly over $I$. We know that the partial sums are

\[ \sum_{k=1}^n g_k(x) = \frac{F_n(x) - F_n(a)}{x-a}. \]

page 201: line 9b: Change to read:

"Since $\sum_{k=1}^{\infty} g_k(x) = \sum_{k=1}^{\infty} (f_k(x) - f_k(a))/(x-a)$ converges and so does $\sum_{k=1}^{\infty} f_k(a)$, it follows that $\sum_{k=1}^{\infty} f_k(x)$ must converge. We are now justified in using $F(x)$. Furthermore, since we have uniform convergence of $\sum g_k(x)$, we can choose an $n$ large enough so that each of the first two pieces ... "

page 201, line 4b: change "converges uniformly over ..." to "converges uniformly over a finite interval $I$ and $\sum f_k(a)$ converges for some $a \in I$, then ..."

page 202, line 1: should read "We choose a fixed value of $a \in I$ for which $\sum f_k(a)$ converges, use ..."

page 206: add exercises 11 & 12:

11. Prove that if $\sum_{k=1}^{\infty} f_k(a)$ and $\sum_{k=1}^{\infty} (f_k(x) - f_k(a))/(x-a)$ converge, then so does $\sum_{k=1}^{\infty} f_k(x)$.

12. Find an example of a sequence of functions $\{f_k(x)\}$ for which $\sum_{k=1}^{\infty} f_k'(x)$ converges uniformly over an interval $I$, but $\sum_{k=1}^{\infty} f_k(x)$ does not converge uniformly over $I$. Hint: By Corollary 5.1, $\sum_{k=1}^{\infty} f_k(x)$ cannot converge at any point in $I$.
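One way to see the new exercise 11 (a sketch, not part of the errata itself): for $x \neq a$ the partial sums satisfy

\[ \sum_{k=1}^{n} f_k(x) = \sum_{k=1}^{n} f_k(a) + (x-a) \sum_{k=1}^{n} \frac{f_k(x) - f_k(a)}{x-a}, \]

so if both series on the right converge, the partial sums on the left converge as well.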

page 214, line 5: "section 4.4" should read "section 4.5"

page 240: lines 2-5, change to read:


\[ \sum_{j=1}^{n} \frac{(x_j - x_{j-1})^2}{2} < \frac{\delta}{2} \sum_{j=1}^n (x_j - x_{j-1}) = \frac{\delta}{2}(x_n-x_0) = \frac{3\delta}{2}. \]


We can guarantee that this error is less than $\epsilon$ if we choose $\delta = 2\epsilon/3$.

page 241: line 4b: $(x_{j k-1} - x_{j k})$ should read $(x_{j k} - x_{j k-1})$.

page 251: line 14: should read "Riemann's definition is equivalent to Cauchy's when the function is bounded."

page 313: section 3.1, #9, third root at x = -0.979366938599298642

page 313: section 3.3, #17: should read "The Lagrange remainder gives a tighter bound for $k \geq 10$."