In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.
The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel,[1][2] who programmed it on the Z4,[3] and extensively researched it.[4][5]
The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems.
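As a sketch of the iterative algorithm described above, the following Python function (assuming NumPy; the function name, tolerance, and iteration cap are illustrative choices, not part of any standard API) solves Ax = b for a symmetric positive-definite A by building A-conjugate search directions from successive residuals:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A by conjugate gradients."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x              # initial residual
    p = r.copy()               # first search direction is the residual
    rs_old = r @ r
    if max_iter is None:
        max_iter = n           # in exact arithmetic, CG converges in at most n steps
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length minimizing along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # residual small enough: done
            break
        p = r + (rs_new / rs_old) * p  # next direction, A-conjugate to the previous ones
        rs_old = rs_new
    return x

# Example on a small symmetric positive-definite system
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Note that the loop touches A only through matrix-vector products, which is why the method suits large sparse systems where forming a factorization is impractical.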
[1] Hestenes, Magnus R.; Stiefel, Eduard (December 1952). "Methods of Conjugate Gradients for Solving Linear Systems" (PDF). Journal of Research of the National Bureau of Standards. 49 (6): 409. doi:10.6028/jres.049.044.
[2] Straeter, T. A. (1971). On the Extension of the Davidon–Broyden Class of Rank One, Quasi-Newton Minimization Methods to an Infinite Dimensional Hilbert Space with Applications to Optimal Control Problems (PhD thesis). North Carolina State University. hdl:2060/19710026200 – via NASA Technical Reports Server.
[3] Speiser, Ambros (2004). "Konrad Zuse und die ERMETH: Ein weltweiter Architektur-Vergleich" [Konrad Zuse and the ERMETH: A worldwide comparison of architectures]. In Hellige, Hans Dieter (ed.). Geschichten der Informatik. Visionen, Paradigmen, Leitmotive (in German). Berlin: Springer. p. 185. ISBN 3-540-00217-0.
[4] Polyak, Boris (1987). Introduction to Optimization.
[5] Greenbaum, Anne (1997). Iterative Methods for Solving Linear Systems. doi:10.1137/1.9781611970937. ISBN 978-0-89871-396-1.