Chapter 10

Jacobi and Gauss-Seidel Methods

10.1 The Jacobi and Gauss-Seidel Methods: An Example
10.2 Splitting Methods
10.3 Some Examples of Splitting Methods
10.4 Jacobi's Algorithm and JOR
  10.4.1 The JOR in the Nonnegative-Definite Case
10.5 The Gauss-Seidel Algorithm and SOR
  10.5.1 The Nonnegative-Definite Case
  10.5.2 The GS Algorithm as ART
  10.5.3 Successive Overrelaxation
  10.5.4 The SOR for Nonnegative-Definite Q
10.6 Summary

In this chapter we examine two well-known iterative algorithms for solving square systems of linear equations, the Jacobi method and the Gauss-Seidel method, in terms of averaged and paracontractive operators. Both algorithms are easy to describe and to motivate. They require not only that the system be square, that is, have the same number of unknowns as equations, but also that it satisfy additional constraints needed for convergence.

Linear systems Ax = b need not be square but can be associated with two square systems, A^†Ax = A^†b, the so-called normal equations, and AA^†z = b, sometimes called the Björck-Elfving equations [107]. Both the Jacobi and the Gauss-Seidel algorithms can be modified to apply to any

square system of linear equations, Sz = h. The resulting algorithms, the

Jacobi overrelaxation (JOR) and successive overrelaxation (SOR) methods,

involve the choice of a parameter. The JOR and SOR will converge for more

general classes of matrices, provided that the parameter is appropriately

chosen. Particular cases of the Jacobi and the Gauss-Seidel methods are

equivalent to the Landweber algorithm and the ART, respectively.
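As a quick illustration of the first of these two square systems, the following sketch forms the normal equations A^†Ax = A^†b for a small nonsquare real system (so A^† is just the transpose) and solves them; the particular matrix and right side are illustrative, not from the text.

```python
import numpy as np

# A nonsquare system Ax = b with I = 4 equations and J = 2 unknowns;
# the entries are illustrative.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

# The normal equations A^T A x = A^T b form a square J-by-J system.
AtA = A.T @ A
Atb = A.T @ b
x = np.linalg.solve(AtA, Atb)
# x is the least-squares solution of the original system Ax = b.
```

Either square system can then be attacked with the Jacobi or Gauss-Seidel iterations discussed below.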

When we say that an iterative method is convergent, or converges, under

certain conditions, we mean that it converges for any consistent system of

the appropriate type, and for any starting vector; any iterative method

will converge if we begin at the right answer. We assume throughout this

chapter that A is an I by J matrix.



10.1 The Jacobi and Gauss-Seidel Methods: An Example

Suppose we wish to solve the 3 by 3 system

S_{11}z_1 + S_{12}z_2 + S_{13}z_3 = h_1
S_{21}z_1 + S_{22}z_2 + S_{23}z_3 = h_2
S_{31}z_1 + S_{32}z_2 + S_{33}z_3 = h_3,    (10.1)

which we can rewrite as

z_1 = S_{11}^{-1}[h_1 − S_{12}z_2 − S_{13}z_3]
z_2 = S_{22}^{-1}[h_2 − S_{21}z_1 − S_{23}z_3]
z_3 = S_{33}^{-1}[h_3 − S_{31}z_1 − S_{32}z_2],    (10.2)

assuming that the diagonal terms S_{mm} are not zero. Let z^0 = (z^0_1, z^0_2, z^0_3)^T be an initial guess for the solution. We then insert the entries of z^0 on the right sides and use the left sides to define the entries of the next guess z^1. This is one full cycle of Jacobi's method.
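One full cycle of Jacobi's method, as just described, can be sketched as follows; the matrix S and right side h are illustrative (chosen diagonally dominant so the iteration converges), not taken from the text.

```python
import numpy as np

# An illustrative 3-by-3 system Sz = h with nonzero (and dominant) diagonal.
S = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
h = np.array([6.0, 8.0, 9.0])

def jacobi_cycle(z):
    """One full cycle: every new entry is computed from the OLD guess only."""
    z_new = np.empty_like(z)
    for m in range(3):
        off_diag = sum(S[m, j] * z[j] for j in range(3) if j != m)
        z_new[m] = (h[m] - off_diag) / S[m, m]  # as in (10.2)
    return z_new

z = np.zeros(3)          # initial guess z^0
for _ in range(50):      # repeat full cycles
    z = jacobi_cycle(z)
# z now approximates the solution of Sz = h
```

Note that all three entries of the new guess use only the previous guess, which is what distinguishes Jacobi's method from Gauss-Seidel below.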

The Gauss-Seidel method is similar. Let z^0 = (z^0_1, z^0_2, z^0_3)^T be an initial guess for the solution. We then insert z^0_2 and z^0_3 on the right side of the first equation, obtaining a new value z^1_1 on the left side. We then insert z^0_3 and z^1_1 on the right side of the second equation, obtaining a new value z^1_2 on the left. Finally, we insert z^1_1 and z^1_2 into the right side of the third equation, obtaining a new z^1_3 on the left side. This is one full cycle of the Gauss-Seidel (GS) method.
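A Gauss-Seidel cycle can be sketched the same way; the only change from the Jacobi sketch is that each newly computed entry is used immediately in the equations that follow. Again, S and h are illustrative choices, not from the text.

```python
import numpy as np

# The same illustrative 3-by-3 system Sz = h.
S = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
h = np.array([6.0, 8.0, 9.0])

def gauss_seidel_cycle(z):
    """One full cycle: each new entry replaces the old one at once,
    so later equations already see the updated values."""
    z = z.copy()
    for m in range(3):
        off_diag = sum(S[m, j] * z[j] for j in range(3) if j != m)
        z[m] = (h[m] - off_diag) / S[m, m]  # overwrite in place
    return z

z = np.zeros(3)          # initial guess z^0
for _ in range(50):
    z = gauss_seidel_cycle(z)
# z now approximates the solution of Sz = h
```

For many matrices this immediate reuse makes GS converge in fewer cycles than Jacobi, though neither method converges for every square system.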

10.2 Splitting Methods

The Jacobi and the Gauss-Seidel methods are particular cases of a more

general approach known as splitting methods. Splitting methods apply to

square systems of linear equations. Let S be an arbitrary N by N square

matrix, written as S = M − K. Then the linear system of equations Sz = h is equivalent to Mz = Kz + h. If M is invertible, then we can also write z = M^{-1}Kz + M^{-1}h. This last equation suggests a class of iterative methods.
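A minimal sketch of the splitting iteration Mz^{k+1} = Kz^k + h, using the Jacobi splitting M = diag(S), K = M − S as the particular example; the system is the illustrative one used earlier, not from the text.

```python
import numpy as np

# An illustrative square system Sz = h.
S = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
h = np.array([6.0, 8.0, 9.0])

# A splitting S = M - K; taking M = diag(S) recovers Jacobi's method.
M = np.diag(np.diag(S))
K = M - S

z = np.zeros(3)
for _ in range(60):
    # Solve M z^{k+1} = K z^k + h; easy here because M is diagonal.
    z = np.linalg.solve(M, K @ z + h)
# z now approximates the solution of Sz = h
```

The point of a splitting is to choose M easy to invert (diagonal for Jacobi, lower triangular for Gauss-Seidel) while keeping the iteration z → M^{-1}(Kz + h) convergent.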
