
Newton's Method and Its Extensions

There are many ways of introducing Newton's method. If we want only an algorithm, we can consider the technique graphically, as is often done in calculus. Another possibility is to derive Newton's method as a technique for obtaining faster convergence than is offered by other types of functional iteration, as is done in Section 2.4. A third means of introducing Newton's method, which is discussed next, is based on Taylor polynomials. We will see that this particular derivation produces not only the method but also a bound for the error of the approximation.

Suppose that $f \in C^2[a,b]$. Let $p_0 \in [a,b]$ be an approximation to $p$ such that $f'(p_0) \neq 0$ and $|p - p_0|$ is "small." Consider the first Taylor polynomial for $f(x)$ expanded about $p_0$ and evaluated at $x=p$:



\[f(p) = f(p_0) + (p-p_0) f'(p_0) + \frac{(p-p_0)^2}{2} f''(\xi(p)),\]

where $\xi(p)$ lies between $p$ and $p_0$. Since $f(p)=0$, this equation gives

\[0 = f(p_0) + (p-p_0) f'(p_0) + \frac{(p-p_0)^2}{2} f''(\xi(p)).\]

Newton’s method is derived by assuming that since $|p-p_0|$ is small, the term involving $(p-p_0)^2$ is much smaller, so

\[0 \approx f(p_0) + (p-p_0) f'(p_0).\]

Solving for $p$ gives

\[p \approx p_0 - \frac{f(p_0)}{f'(p_0)} = p_1.\]

This sets the stage for Newton’s method, which starts with an initial approximation $p_0$ and generates the sequence $\{p_n\}_{n=0}^\infty$ by

\[p_n = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}, \quad n \geq 1. \tag{2.7}\]

Newton’s Algorithm


To find a solution to $f(x)=0$ given an initial approximation $p_0$:

INPUT: initial approximation $p_0$; tolerance $TOL$; maximum number of iterations $N_0$.  
OUTPUT: approximate solution $p$ or message of failure.

1. Set $i=1$.
2. While $i \leq N_0$ do Steps 3–6.
3. Set \[p = p_0 - \frac{f(p_0)}{f'(p_0)}. \quad (\text{Compute } p_i.)\]
4. If $|p - p_0| < TOL$, then OUTPUT $(p)$; (The procedure was successful.) STOP.
5. Set $i = i+1$.
6. Set $p_0 = p$. (Update $p_0$.)
7. OUTPUT (The method failed after $N_0$ iterations); STOP. (The procedure was unsuccessful.)
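The algorithm translates directly into code. Below is a minimal Python sketch; the function name `newton`, its signature, and the exception choices are illustrative assumptions, not part of the pseudocode above.

```python
def newton(f, fprime, p0, tol=1e-8, n0=50):
    """Newton's method, following Steps 1-7 of the algorithm above.

    f, fprime : the function f and its derivative f'
    p0        : initial approximation
    tol       : tolerance TOL for the Step 4 stopping test
    n0        : maximum number of iterations N_0
    """
    for i in range(1, n0 + 1):            # Steps 1-2: i = 1, ..., N_0
        fp0 = fprime(p0)
        if fp0 == 0:                      # Eq. (2.7) requires f'(p_{n-1}) != 0
            raise ZeroDivisionError("f'(p) = 0: Newton's method cannot continue")
        p = p0 - f(p0) / fp0              # Step 3: compute p_i
        if abs(p - p0) < tol:             # Step 4: |p - p_0| < TOL
            return p                      # the procedure was successful
        p0 = p                            # Steps 5-6: update p_0
    raise RuntimeError(f"The method failed after {n0} iterations")
```

With `import math`, calling `newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1, math.pi / 4)` returns an approximation to the root $p \approx 0.7390851332$ of Example 1 below.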

The stopping-technique inequalities given with the Bisection method are applicable to Newton’s method. That is, select a tolerance $\varepsilon > 0$, and construct $p_1, p_2, \ldots, p_N$ until

\[|p_N - p_{N-1}| < \varepsilon, \tag{2.8}\]

\[\frac{|p_N - p_{N-1}|}{|p_N|} < \varepsilon, \quad p_N \neq 0, \tag{2.9}\]

or 

\[|f(p_N)| < \varepsilon. \tag{2.10}\]
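In code, each of these tests is a one-line comparison. A small helper sketch follows; the name `stopping_test` and the decision to accept any one of the three criteria are illustrative assumptions.

```python
def stopping_test(p, p_prev, f_of_p, eps):
    """Return True when one of the stopping tests (2.8)-(2.10) is met.

    p, p_prev : successive approximations p_N and p_{N-1}
    f_of_p    : f evaluated at p
    eps       : the tolerance epsilon > 0
    """
    absolute = abs(p - p_prev) < eps                      # (2.8)
    relative = p != 0 and abs(p - p_prev) / abs(p) < eps  # (2.9)
    residual = abs(f_of_p) < eps                          # (2.10)
    return absolute or relative or residual
```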



Newton’s method is a functional iteration technique with $p_n = g(p_{n-1})$, for which

\[g(p_{n-1}) = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}, \quad n \geq 1. \tag{2.11}\]

In fact, this is the functional iteration technique that we used to give the rapid convergence we saw in column (c) of Table 2.2 in Section 2.2.  

It is clear from Equation (2.7) that Newton’s method cannot be continued if $f'(p_{n-1})=0$ for some $n$. In fact, we will see that the method is most effective when $f'$ is bounded away from zero near $p$.
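One can see this failure concretely by reusing the hypothetical `newton` sketch above with a starting point at which the derivative vanishes, for example $f(x) = x^2 - 1$ with $p_0 = 0$:

```python
# f(x) = x**2 - 1 has f'(0) = 0, so the first Newton step from
# p0 = 0 is undefined; the guard in the sketch above raises.
try:
    newton(lambda x: x**2 - 1, lambda x: 2 * x, p0=0.0)
except ZeroDivisionError as err:
    print(err)  # f'(p) = 0: Newton's method cannot continue
```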

Example 1


Consider the root-finding problem $\cos x - x = 0$, that is, $f(x) = \cos x - x$. Approximate a root of $f$ using (a) a fixed-point method and (b) Newton's method.

Solution. (a) A solution to this root-finding problem is also a solution to the fixed-point problem $x = \cos x$, and the graph in Figure 2.9 implies that a single fixed point $p$ lies in $[0,\pi/2]$.

Figure 2.9: Graph of $y=\cos x$ and $y=x$, intersecting at the fixed point $p$.


Table 2.3 shows the results of fixed-point iteration with $p_0 = \pi/4$. The best we could conclude from these results is that $p \approx 0.74$.

(b) To apply Newton’s method to this problem we need $f'(x) = -\sin x - 1$. Starting again with $p_0 = \pi/4$, we generate the sequence defined, for $n \geq 1$, by

\[p_n = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})} = p_{n-1} - \frac{\cos(p_{n-1}) - p_{n-1}}{-\sin(p_{n-1}) - 1}.\]

This gives the approximations in Table 2.4. An excellent approximation is obtained with $n=3$. Because of the agreement of $p_3$ and $p_4$, we could reasonably expect this result to be accurate to the places listed.
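Both parts are easy to reproduce. Here is a short Python sketch; the iteration counts are chosen only to roughly match Tables 2.3 and 2.4, which are not reproduced here.

```python
import math

# (a) Fixed-point iteration for x = cos x with p0 = pi/4 (cf. Table 2.3).
p = math.pi / 4
for n in range(1, 8):
    p = math.cos(p)                        # p_n = g(p_{n-1}), g(x) = cos x
    print(f"fixed-point p_{n} = {p:.10f}")

# (b) Newton's method for f(x) = cos x - x with p0 = pi/4 (cf. Table 2.4).
p = math.pi / 4
for n in range(1, 5):
    p -= (math.cos(p) - p) / (-math.sin(p) - 1)
    print(f"Newton      p_{n} = {p:.10f}")
```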

Convergence using Newton’s Method


Example 1 shows that Newton's method can provide extremely accurate approximations with very few iterations. For that example, only one iteration of Newton's method was needed to give a result correct to three decimal places. We now examine Newton's method more carefully to discover why it is so effective.


Theorem 2.6


Let $f \in C^2[a,b]$. If $p \in (a,b)$ is such that $f(p)=0$ and $f'(p) \neq 0$, then there exists a $\delta > 0$ such that Newton’s method generates a sequence $\{p_n\}_{n=0}^\infty$ converging to $p$ for any initial approximation $p_0 \in [p-\delta, p+\delta]$.

Proof. The proof is based on analyzing Newton’s method as the functional iteration scheme

\[p_n = g(p_{n-1}), \quad n \geq 1, \quad \text{with } g(x) = x - \frac{f(x)}{f'(x)}.\]

Let $k$ be any number in $(0,1)$. We must find an interval $[p-\delta, p+\delta]$ that $g$ maps into itself and for which $|g'(x)| \leq k$ for all $x \in [p-\delta, p+\delta]$.

Since $f'$ is continuous and $f'(p) \neq 0$, part (a) of Exercise 29 in Section 1.1 implies that there exists a $\delta_1 > 0$ such that $f'(x) \neq 0$ for $x \in [p-\delta_1, p+\delta_1] \subseteq [a,b]$. Thus $g$ is defined and continuous on $[p-\delta_1, p+\delta_1]$.

Also, for $x \in [p-\delta_1, p+\delta_1]$,

\[g'(x) = 1 - \frac{f'(x)f'(x) - f(x)f''(x)}{(f'(x))^2} = \frac{f(x)f''(x)}{(f'(x))^2},\]

and, since $f \in C^2[a,b]$, we have $g \in C^1[p-\delta_1, p+\delta_1]$.

By assumption, $f(p)=0$, so 

\[g'(p) = \frac{f(p)f''(p)}{(f'(p))^2} = 0.\]

Since $g'$ is continuous and $0 < k < 1$, part (b) of Exercise 29 in Section 1.1 implies that there exists a $\delta$, with $0 < \delta < \delta_1$, for which $|g'(x)| \leq k$, for all $x \in [p-\delta, p+\delta]$.

It remains to show that $g$ maps $[p-\delta, p+\delta]$ into $[p-\delta, p+\delta]$. If $x \in [p-\delta, p+\delta]$, the Mean Value Theorem implies that for some number $\xi$ between $x$ and $p$, $|g(x) - g(p)| = |g'(\xi)||x-p|$. So, since $g(p) = p$,

\[|g(x)-p| = |g(x)-g(p)| = |g'(\xi)||x-p| \leq k|x-p| < |x-p|.\]

Since $|x-p| < \delta$, it follows that $|g(x)-p| < \delta$. Hence, $g$ maps $[p-\delta, p+\delta]$ into itself.

All hypotheses of the Fixed-Point Theorem 2.4 are now satisfied, so the sequence $\{p_n\}$ defined by

\[p_n = g(p_{n-1}) = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}, \quad n \geq 1,\]

converges to $p$ for any $p_0 \in [p-\delta, p+\delta]$.

Theorem 2.6 states that, under reasonable assumptions, Newton's method converges provided a sufficiently accurate initial approximation is chosen. It also implies that the constant $k$ that bounds the derivative of $g$, and consequently indicates the speed of convergence of the method, decreases to 0 as the procedure continues. This result is important for the theory of Newton's method, but it is seldom applied in practice because it does not tell us how to determine $\delta$.

In a practical application, an initial approximation is selected, and successive approximations are generated by Newton's method. These will generally either converge quickly to the root, or it will be clear that convergence is unlikely.
