Dissertations, Theses, and Capstone Projects

Date of Degree

9-2020

Document Type

Dissertation

Degree Name

Ph.D.

Program

Mathematics

Advisor

Victor Y. Pan

Committee Members

Christina Zamfirescu

Christina Sormani

Subject Categories

Numerical Analysis and Computation

Keywords

Algebraic and Numerical Algorithms, Linear Algebra, Matrix Low Rank Approximation, Numerical Computation, Randomized Matrix Algorithms, Sublinear Cost Algorithms

Abstract

A matrix algorithm runs at sublinear cost if it performs far fewer arithmetic operations than the input matrix has entries. Such algorithms are especially crucial for applications in the field of Big Data, where input matrices are so immense that only a fraction of a matrix fits in the memory of a modern machine. Typically, such matrices admit Low Rank Approximation (LRA), which can be stored and processed at sublinear cost. Can we also compute an LRA at sublinear cost? Our counterexample presented in Appendix C shows that no sublinear cost algorithm can compute an accurate LRA of arbitrary input. However, for a decade researchers have observed that many sublinear cost algorithms, such as Cross-Approximation (C-A) iterations, routinely compute accurate LRA. We partly resolve this apparent contradiction by proving that: (i) sublinear cost variations of a popular subspace sampling algorithm compute accurate LRA of a large class of inputs with high probability; (ii) a single two-stage C-A loop computes an accurate LRA provided that the input is reasonably close to a low-rank matrix and the loop starts with a submatrix that shares its numerical rank with the input; (iii) for an arbitrary Symmetric Positive Semi-Definite (SPSD) input, there exists a deterministic sublinear cost algorithm that outputs a close-to-optimal LRA in the Chebyshev norm; (iv) for any input, an LRA built on given sets of its columns and rows can be computed at sublinear cost, and this approximation is near-optimal.
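To illustrate the kind of computation behind items (ii) and (iv), here is a minimal Python/NumPy sketch of a CUR-type LRA built from given sets of rows and columns. It is not the dissertation's code: the function name cur_from_cross, the random index sets, and the synthetic test matrix are illustrative assumptions, and the Moore-Penrose pseudoinverse is used as one common choice of the middle factor.

import numpy as np

def cur_from_cross(A, row_idx, col_idx):
    # Build a CUR LRA from fixed row/column index sets. Only the sampled
    # rows and columns of A are read: O((m + n) * k) entries rather than
    # all m * n of them, hence sublinear cost for small k.
    C = A[:, col_idx]                    # m x k panel of sampled columns
    R = A[row_idx, :]                    # k x n panel of sampled rows
    G = A[np.ix_(row_idx, col_idx)]      # k x k generator submatrix
    return C @ np.linalg.pinv(G) @ R     # rank <= k approximation of A

rng = np.random.default_rng(0)
m, n, r = 500, 400, 5
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r input
A += 1e-8 * rng.standard_normal((m, n))      # small perturbation
rows = rng.choice(m, size=r, replace=False)  # one random cross; a full C-A
cols = rng.choice(n, size=r, replace=False)  # loop would refine these sets
B = cur_from_cross(A, rows, cols)
print(np.linalg.norm(A - B) / np.linalg.norm(A))  # small relative error

A full C-A iteration would alternate between improving the row and column index sets (for instance toward a submatrix of maximal volume) before forming the final CUR product; the sketch above shows only the final construction step once the cross is fixed.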
