Approximate Generalized Inverse Preconditioning Methods for Least Squares Problems
https://ir.soken.ac.jp/records/1501
| 名前 / ファイル (Name / File) |
|---|
| 要旨・審査要旨 (Abstract and examination summary) (370.9 kB) |
| 本文 (Full text) (877.0 kB) |
| Item type | 学位論文 / Thesis or Dissertation |
|---|---|
| 公開日 (Date made public) | 2010-06-09 |
| タイトル (Title) | Approximate Generalized Inverse Preconditioning Methods for Least Squares Problems |
| 言語 (Language) | eng |
| 資源タイプ (Resource type) | thesis (http://purl.org/coar/resource_type/c_46ec) |
| 著者名 (Author) | 崔, 小可 |
| フリガナ (Name reading) | サイ, ショカ |
| 著者 (Author, romanized) | CUI, Xiaoke |
| 学位授与機関名 (Degree-granting institution) | 総合研究大学院大学 (The Graduate University for Advanced Studies, SOKENDAI) |
| 学位名 (Degree) | 博士(情報学) (Doctor of Philosophy in Informatics) |
| 学位記番号 (Degree number) | 総研大甲第1286号 |
| 研究科 (Graduate school) | 複合科学研究科 (School of Multidisciplinary Sciences) |
| 専攻 (Department) | 17 情報学専攻 (Department of Informatics) |
| 学位授与年月日 (Date of degree conferral) | 2009-09-30 |
| 学位授与年度 (Academic year of degree conferral) | 2009 |
要旨 (Abstract)

A basic problem in science is to fit a model to observations subject to errors. Clearly, the more observations that are available, the more accurately the parameters of the model can be determined. This gives rise to the problem of "solving" an overdetermined linear or nonlinear system of equations. When not enough observations are available, the system becomes underdetermined. Overdetermined and underdetermined systems together are called *least squares problems*. It can be shown that the solution which minimizes a weighted sum of the squares of the residual is optimal in a certain sense; such solutions are called *least squares solutions*.

Least squares problems are usually written in the form

$$\min_{x \in \mathbb{R}^{n}} \|b - Ax\|_{2}, \qquad A \in \mathbb{R}^{m \times n},\ b \in \mathbb{R}^{m}, \tag{0.1}$$

where $\|\cdot\|_{2}$ denotes the 2-norm. When $A$ is large and sparse, it is advantageous to apply iterative methods to the normal equations $A^{T}(Ax - b) = 0$ or $AA^{T}y = b$. Since the condition number of $A^{T}A$ or $AA^{T}$ is the square of that of $A$, preconditioning the iterative methods becomes necessary when $A$ is ill-conditioned.

In this thesis, we consider constructing preconditioners for Krylov subspace iterative methods so that least squares problems can be solved more efficiently. We focus in particular on preconditioners that are approximate generalized inverses of the coefficient matrices of the least squares problems. We propose two different approaches for constructing these approximate generalized inverses: one is based on the *Minimal Residual* method with the steepest descent direction, and the other is based on *Greville's method*, a classical method for computing the generalized inverse through rank-one updates. For both preconditioners we also discuss how they are applied to least squares problems, and both theoretical issues and practical implementation issues of the preconditioning are treated in this thesis. Our numerical tests showed that our methods perform competitively on rank-deficient, ill-conditioned problems. As an example of a problem from the real world, we apply our preconditioners to linear programming problems, where many large-scale sparse least squares problems with rank-deficient coefficient matrices arise. In these tests our methods proved more robust than the Cholesky decomposition method.
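As a concrete illustration of the second construction mentioned in the abstract, the following is a minimal Python sketch, not code from the thesis: it builds an approximate generalized inverse M ≈ A⁺ column by column with Greville's rank-one update formula (optionally dropping small entries, an assumption added here to mimic sparsification) and then uses M as a right preconditioner inside a plain CGLS iteration, which stands in for the Krylov subspace solvers discussed in the thesis. The test matrix, the tolerance `drop_tol`, and the helper `cgls` are hypothetical choices made only for this sketch.

```python
import numpy as np

def greville_pinv(A, drop_tol=0.0):
    """Build an (approximate) generalized inverse of A column by column
    using Greville's rank-one update formula. Entries smaller than
    drop_tol are dropped to keep the factor sparse (illustrative only)."""
    m, n = A.shape
    a = A[:, 0]
    nrm2 = a @ a
    Minv = (a / nrm2).reshape(1, m) if nrm2 > 0 else np.zeros((1, m))
    for k in range(1, n):
        a = A[:, k]
        d = Minv @ a                      # d_k = A_{k-1}^+ a_k
        c = a - A[:, :k] @ d              # c_k = a_k - A_{k-1} d_k
        if c @ c > 1e-14:                 # new column independent of previous ones
            b_row = c / (c @ c)
        else:                             # dependent column (rank-deficient case)
            b_row = Minv.T @ d / (1.0 + d @ d)
        Minv = np.vstack([Minv - np.outer(d, b_row), b_row])
        if drop_tol > 0.0:                # numerical dropping -> approximate inverse
            Minv[np.abs(Minv) < drop_tol] = 0.0
    return Minv                           # shape n x m, M ~ A^+

def cgls(B, b, iters=30, tol=1e-12):
    """Plain CGLS (CG applied to the normal equations) for min_z ||b - B z||_2."""
    z = np.zeros(B.shape[1])
    r = b.astype(float).copy()
    s = B.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        q = B @ p
        qq = q @ q
        if qq == 0.0 or gamma == 0.0:     # stagnation or exact convergence
            break
        alpha = gamma / qq
        z += alpha * p
        r -= alpha * q
        s = B.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) <= tol:     # ||B^T r|| small enough
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return z

# Hypothetical ill-conditioned test problem (not taken from the thesis).
rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n)) @ np.diag(np.logspace(0, -6, n))
b = A @ rng.standard_normal(n) + 1e-8 * rng.standard_normal(m)

M = greville_pinv(A, drop_tol=1e-8)       # approximate generalized inverse
z = cgls(A @ M, b, iters=10)              # right-preconditioned iteration
x_prec = M @ z                            # recover x from the preconditioned unknown
x_plain = cgls(A, b, iters=10)            # unpreconditioned CGLS for comparison

print("||A^T r|| with preconditioner:", np.linalg.norm(A.T @ (b - A @ x_prec)))
print("||A^T r|| without            :", np.linalg.norm(A.T @ (b - A @ x_plain)))
```

With `drop_tol = 0` this construction reproduces the exact Moore-Penrose inverse in exact arithmetic; a positive tolerance trades accuracy of M for sparsity, which is the usual motivation for using an approximate generalized inverse as a preconditioner.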
所蔵 (Holdings): 有 (Yes)