Conjugate Gradients solves $Ax = b$ approximately for $x$, where $A$ is a symmetric, positive-definite linear operator and $b$ is the right-hand side vector. The method uses short recurrences and therefore has fixed memory and computational costs per iteration.

## Usage

```julia
cg(A, b; kwargs...) -> x, [history]
```

Same as `cg!`, but allocates a solution vector `x` initialized with zeros.

```julia
cg!(x, A, b; kwargs...) -> x, [history]
```

Arguments

• `x`: initial guess, updated in-place;
• `A`: linear operator;
• `b`: right-hand side.

Keywords

• `statevars::CGStateVariables`: holds three auxiliary arrays similar to `x` to store intermediate results;
• `initially_zero::Bool`: If `true` assumes that `iszero(x)` so that one matrix-vector product can be saved when computing the initial residual vector;
• `Pl = Identity()`: left preconditioner of the method. Should be symmetric, positive-definite like `A`;
• `tol::Real = sqrt(eps(real(eltype(b))))`: tolerance for stopping condition `|r_k| / |r_0| ≤ tol`;
• `maxiter::Int = size(A,2)`: maximum number of iterations;
• `verbose::Bool = false`: print method information;
• `log::Bool = false`: keep track of the residual norm in each iteration.

Output

if `log` is `false`

• `x`: approximated solution.

if `log` is `true`

• `x`: approximated solution.
• `ch`: convergence history.

ConvergenceHistory keys

• `:tol` => `::Real`: stopping tolerance.
• `:resnorm` => `::Vector`: residual norm at each iteration.
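As a quick illustration of the API above, the snippet below solves a small symmetric positive-definite system and inspects the convergence history via `log = true`. The tridiagonal test matrix (a 1-D Laplacian) is chosen here purely for illustration:

```julia
using LinearAlgebra, SparseArrays, IterativeSolvers

# Symmetric positive-definite tridiagonal test matrix (1-D Laplacian)
n = 100
A = spdiagm(-1 => fill(-1.0, n - 1), 0 => fill(2.0, n), 1 => fill(-1.0, n - 1))
b = rand(n)

# With log = true, cg returns the solution and a ConvergenceHistory
x, history = cg(A, b; log = true)

# Relative residual; at or below the default tolerance if converged
norm(A * x - b) / norm(b)
```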

## On the GPU

The method should work fine on the GPU. As a minimal working example, consider:

```julia
using LinearAlgebra, CuArrays, IterativeSolvers

n = 100
A = cu(rand(n, n))
A = A + A' + 2 * n * I   # symmetrize and shift to make A positive-definite
b = cu(rand(n))
x = cg(A, b)
```
**Note**

Make sure that all state vectors are stored on the GPU. For instance when calling `cg!(x, A, b)`, one might have an issue when `x` is stored on the GPU, while `b` is stored on the CPU – IterativeSolvers.jl does not copy the vectors to the same device.
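To avoid the pitfall just described, allocate the initial guess on the same device as `b` before calling `cg!`. A minimal sketch, assuming a CuArrays-capable GPU (this requires GPU hardware to run):

```julia
using LinearAlgebra, CuArrays, IterativeSolvers

n = 100
A = cu(rand(n, n))
A = A + A' + 2 * n * I   # symmetric positive-definite

b = cu(rand(n))          # right-hand side on the GPU
x = cu(zeros(n))         # initial guess allocated on the GPU, matching b

cg!(x, A, b)             # all state vectors now live on the same device
```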

## Implementation details

The current implementation follows a rather standard approach. Note that preconditioned CG (PCG) differs slightly from ordinary CG: the former must compute the residual norm explicitly, while in the latter it is available as a byproduct of the recurrence. Our implementation of CG uses the minimal number of vector operations.
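For reference, the unpreconditioned recurrence can be sketched as below. This is a textbook version written for exposition, not the library's actual implementation:

```julia
using LinearAlgebra

# Textbook conjugate gradients (no preconditioner), for exposition only.
function cg_sketch(A, b; tol = sqrt(eps(real(eltype(b)))), maxiter = size(A, 2))
    x = zeros(eltype(b), length(b))
    r = copy(b)               # residual r₀ = b - A*x₀ with x₀ = 0
    p = copy(r)               # first search direction
    ρ = dot(r, r)
    resnorm0 = sqrt(ρ)
    for _ in 1:maxiter
        q = A * p
        α = ρ / dot(p, q)     # step length along p
        x .+= α .* p
        r .-= α .* q          # residual updated as a byproduct
        ρ_new = dot(r, r)
        sqrt(ρ_new) ≤ tol * resnorm0 && break
        β = ρ_new / ρ
        p .= r .+ β .* p      # next A-conjugate search direction
        ρ = ρ_new
    end
    return x
end
```

In the preconditioned variant the inner product `dot(r, r)` is replaced by `dot(r, z)` with `z = Pl \ r`, so the residual norm is no longer a byproduct and must be computed explicitly, which is the distinction the paragraph above refers to.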

**Tip**

CG can be used as an iterator.
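The iterator form gives fine-grained control over stopping and lets you inspect progress after each step. The sketch below assumes the unexported `IterativeSolvers.cg_iterator!` constructor; its exact signature and the value yielded per iteration may differ between package versions, so check the source of your installed release:

```julia
using LinearAlgebra, IterativeSolvers

n = 100
A = Matrix(2.0 * n * I, n, n) + Symmetric(rand(n, n))  # SPD test matrix
b = rand(n)
x = zeros(n)

# cg_iterator! is internal API; each iteration performs one CG step.
it = IterativeSolvers.cg_iterator!(x, A, b)
for (iteration, residual) in enumerate(it)
    println("iteration $iteration: residual ≈ $residual")
end
```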