dc.contributor.advisor Benson, Maurice
dc.contributor.author Krettmann, Jurgen
dc.date.accessioned 2017-06-05T13:35:06Z
dc.date.available 2017-06-05T13:35:06Z
dc.date.created 1982
dc.date.issued 1982
dc.identifier.uri http://knowledgecommons.lakeheadu.ca/handle/2453/862
dc.description.abstract In this thesis we are concerned with iterative parallel algorithms for solving finite difference equations arising from boundary value problems. We propose various new methods based on approximate inverses. The Jacobi iterative method for the solution of linear equations is highly parallel. Its slow convergence rate, however, has motivated the study of a variety of acceleration techniques. We introduce the "Diagonal-Block" and the "Least Squares" techniques for calculating approximate inverses, which lead naturally to generalized Jacobi methods that are well suited to parallel processing. In addition, the calculations required to establish the iterative process can be done in parallel. A convergence theorem for a one-dimensional model problem is given. Gauss-Seidel versions of these methods are also considered. These versions converge faster, but often at the cost of decreased parallelism. Preconditioning has been successfully used in conjunction with the conjugate gradient method. Many such techniques use an approximation M to the matrix A of the linear system under consideration and solve a system of the form Mz = d at each iteration. If, however, an approximate inverse M⁻¹ is used, the solution z = M⁻¹d involves only a matrix-vector multiplication, which is well suited to parallel computation. We examine the Diagonal-Block and Least-Squares techniques for preconditioning the conjugate gradient method. Numerical results are presented. The proposed Jacobi-like and Gauss-Seidel-like methods are compared with parallel Gauss-Seidel methods in a class of parallel algorithms proposed by Evans and Barlow (1982).
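The abstract's starting point, plain Jacobi iteration, can be sketched as follows. This is a minimal serial illustration, not the thesis's parallel implementation: each component of the update depends only on the previous iterate, which is why the method parallelizes so readily. The function name `jacobi` and the dense-matrix 1D model problem are illustrative choices, not taken from the thesis.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10000):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), where D is the
    diagonal of A and R = A - D. Every component of x_{k+1} can be
    computed independently, hence the method's natural parallelism."""
    D = np.diag(A)                  # diagonal entries of A
    R = A - np.diag(D)              # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D     # all components update at once
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# 1D model problem: central differences for -u'' = f on n interior points
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = jacobi(A, b)
```

On this small tridiagonal system the iteration converges, but slowly; this slow convergence is exactly what the abstract's approximate-inverse acceleration techniques address.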
The preconditioned conjugate gradient methods are compared with the plain conjugate gradient method. In all cases the speed of convergence increased substantially, but often at the expense of increased computational work. In some cases it was necessary to assume the number of processors to be on the order of N (the number of unknowns) in order to gain an advantage from parallel processing.
dc.language.iso en_US
dc.subject Algorithms
dc.subject Numerical analysis
dc.title Parallel algorithms for the iterative solution of large sparse linear systems
dc.type Thesis
etd.degree.name Master of Science
etd.degree.level Master
etd.degree.discipline Mathematical Sciences
etd.degree.grantor Lakehead University
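The abstract's point about approximate-inverse preconditioning, that applying z = M⁻¹d is a matrix-vector product rather than a triangular solve, can be sketched with a standard preconditioned conjugate gradient loop. This is an illustrative sketch only: the thesis's Diagonal-Block and Least-Squares approximate inverses are its own constructions, and here a simple diagonal scaling D⁻¹ stands in for them; the function name `pcg_approx_inverse` is likewise hypothetical.

```python
import numpy as np

def pcg_approx_inverse(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned CG where the preconditioner is an explicit
    approximate inverse M_inv. The preconditioning step z = M_inv @ r
    is a matrix-vector multiplication, which parallelizes readily,
    instead of a sequential solve of M z = r."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                   # initial residual
    z = M_inv @ r                   # preconditioning by matvec
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r               # again just a matvec
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Same 1D finite-difference model problem as before
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))   # stand-in approximate inverse: D^{-1}
x = pcg_approx_inverse(A, b, M_inv)
```

Swapping in a better approximate inverse only changes the matrix passed as `M_inv`; the loop structure, and its suitability for parallel execution, is unchanged.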