\(\newcommand{\bx}{\textbf{x}} \newcommand{\bo}{\textbf{0}} \newcommand{\bv}{\textbf{v}} \newcommand{\bu}{\textbf{u}} \newcommand{\bq}{\textbf{q}} \newcommand{\by}{\textbf{y}} \newcommand{\bb}{\textbf{b}} \newcommand{\ba}{\textbf{a}} \newcommand{\grad}{\boldsymbol{\nabla}} \newcommand{\pd}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\pdd}[2]{\frac{\partial^2 #1}{\partial #2^2}} \newcommand{\pddm}[3]{\frac{\partial^2 #1}{\partial #2 \partial #3}} \newcommand{\deriv}[2]{\frac{d #1}{d #2}} \newcommand{\lt}{ < } \newcommand{\gt}{ > } \newcommand{\amp}{ & } \)

Section 3.4 Quasi-Newton Methods

What if we don't have derivative information for the function \(f(x)\), or it is hard to find? There are several families of root-finding techniques that use Newton-like iterations but avoid taking derivatives. These methods take the form \(x_{n+1} = x_n - \frac{f(x_n)}{g_n}\), where \(g_n \approx f'(x_n)\). The main advantage is that we don't need a by-hand or symbolic computation of the derivative. The possible downside is that we may lose the quadratic convergence that makes Newton's method so nice.
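As a concrete example, the secant method is a quasi-Newton iteration that takes \(g_n = \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}\), the slope of the line through the two most recent iterates. The Python sketch below shows one way to implement such an iteration; the function name, tolerance, and starting points are illustrative choices.

```python
def quasi_newton(f, x0, x1, tol=1e-10, max_iter=100):
    """Find a root of f via x_{n+1} = x_n - f(x_n)/g_n, where g_n is a
    secant-slope approximation to f'(x_n), so no derivative of f is
    ever evaluated."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        g = (f1 - f0) / (x1 - x0)  # g_n ≈ f'(x_n) from the last two iterates
        x_next = x1 - f1 / g       # Newton-like step with g_n in place of f'(x_n)
        if abs(x_next - x1) < tol:
            return x_next
        x0, x1 = x1, x_next
    return x1

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2) ≈ 1.41421356.
print(quasi_newton(lambda x: x**2 - 2, 1.0, 2.0))
```

Because \(g_n\) only approximates \(f'(x_n)\), this iteration converges superlinearly (for the secant method, with order roughly \(1.6\)) rather than quadratically, which illustrates the trade-off mentioned above.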