In mathematics, a block matrix pseudoinverse is a formula for the pseudoinverse of a partitioned matrix.
This is useful for decomposing or approximating many algorithms that update parameters in signal processing, which are based on the least squares method.
Consider a column-wise partitioned matrix:

$$[\mathbf{A} \quad \mathbf{B}], \qquad \mathbf{A} \in \mathbb{R}^{m \times n}, \quad \mathbf{B} \in \mathbb{R}^{m \times p}, \quad m \ge n + p.$$

If the above matrix is full column rank, the Moore–Penrose inverse matrices of it and its transpose are

$$[\mathbf{A} \quad \mathbf{B}]^{+} = \left([\mathbf{A} \quad \mathbf{B}]^{\mathsf T} [\mathbf{A} \quad \mathbf{B}]\right)^{-1} [\mathbf{A} \quad \mathbf{B}]^{\mathsf T}, \qquad \begin{bmatrix} \mathbf{A}^{\mathsf T} \\ \mathbf{B}^{\mathsf T} \end{bmatrix}^{+} = [\mathbf{A} \quad \mathbf{B}] \left([\mathbf{A} \quad \mathbf{B}]^{\mathsf T} [\mathbf{A} \quad \mathbf{B}]\right)^{-1}.$$

This computation of the pseudoinverse requires an (n + p)-square matrix inversion and does not take advantage of the block form.
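As a quick numerical illustration, the direct formula can be checked against a library routine. The sketch below is a minimal NumPy example; the dimensions, random test matrices, and variable names are arbitrary choices, not part of the article.

```python
# Direct pseudoinverse of the partitioned matrix [A B]: for full column rank,
# M^+ = (M^T M)^{-1} M^T, which requires an (n + p)-square inversion.
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 12, 4, 3                      # m >= n + p so [A B] can have full column rank
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
M = np.hstack([A, B])                   # column-wise partitioned matrix [A B]

M_pinv = np.linalg.inv(M.T @ M) @ M.T   # the (n + p)-square inversion
print(np.allclose(M_pinv, np.linalg.pinv(M)))   # True for this well-conditioned example
```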
To reduce computational costs to n- and p-square matrix inversions and to introduce parallelism, treating the blocks separately, one derives [1]

$$[\mathbf{A} \quad \mathbf{B}]^{+} = \begin{bmatrix} \left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1} \mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \\ \left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1} \mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \end{bmatrix}, \qquad \begin{bmatrix} \mathbf{A}^{\mathsf T} \\ \mathbf{B}^{\mathsf T} \end{bmatrix}^{+} = \left[\mathbf{P}_{B}^{\perp} \mathbf{A} \left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1} \quad \mathbf{P}_{A}^{\perp} \mathbf{B} \left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1}\right],$$

where the orthogonal projection matrices are defined by

$$\mathbf{P}_{A}^{\perp} = \mathbf{I} - \mathbf{A} \left(\mathbf{A}^{\mathsf T} \mathbf{A}\right)^{-1} \mathbf{A}^{\mathsf T}, \qquad \mathbf{P}_{B}^{\perp} = \mathbf{I} - \mathbf{B} \left(\mathbf{B}^{\mathsf T} \mathbf{B}\right)^{-1} \mathbf{B}^{\mathsf T}.$$

The above formulas are not necessarily valid if $[\mathbf{A} \quad \mathbf{B}]$ does not have full column rank. For example, if $\mathbf{A} \neq \mathbf{0}$, then

$$[\mathbf{A} \quad \mathbf{A}]^{+} = \frac{1}{2} \begin{bmatrix} \mathbf{A}^{+} \\ \mathbf{A}^{+} \end{bmatrix},$$

whereas the block formula is undefined in this case, since $\mathbf{P}_{A}^{\perp} \mathbf{A} = \mathbf{0}$ makes $\mathbf{A}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{A}$ singular.
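The block formula can be verified numerically. The following sketch (NumPy assumed; the test matrices and sizes are arbitrary, not from the article) builds the two orthogonal projections and stacks the two block rows, each of which needs only an n- or p-square inversion.

```python
# Block matrix pseudoinverse of [A B] via the orthogonal projections P_A^perp, P_B^perp.
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 12, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
M = np.hstack([A, B])

I = np.eye(m)
P_A = I - A @ np.linalg.inv(A.T @ A) @ A.T   # P_A^perp: projection onto the complement of range(A)
P_B = I - B @ np.linalg.inv(B.T @ B) @ B.T   # P_B^perp: projection onto the complement of range(B)

block_pinv = np.vstack([
    np.linalg.inv(A.T @ P_B @ A) @ A.T @ P_B,   # n-square inversion only
    np.linalg.inv(B.T @ P_A @ B) @ B.T @ P_A,   # p-square inversion only
])
print(np.allclose(block_pinv, np.linalg.pinv(M)))   # True: agrees with the direct pseudoinverse
```

The two block rows share no intermediate results beyond the projections, which is what makes the parallel treatment discussed below possible.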
Given the same matrices as above, we consider the following least squares problems, which appear as multiple-objective optimizations or constrained problems in signal processing.
Eventually, we can implement a parallel algorithm for least squares based on the following results.
Suppose a solution $\mathbf{x} = \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix}$ solves an over-determined system:

$$[\mathbf{A} \quad \mathbf{B}] \begin{bmatrix} \mathbf{x}_1 \\ \mathbf{x}_2 \end{bmatrix} = \mathbf{d}, \qquad \mathbf{d} \in \mathbb{R}^{m \times 1}.$$

Using the block matrix pseudoinverse, we have

$$\mathbf{x} = [\mathbf{A} \quad \mathbf{B}]^{+}\, \mathbf{d} = \begin{bmatrix} \left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1} \mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \\ \left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1} \mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \end{bmatrix} \mathbf{d}.$$

Therefore, we have a decomposed solution:

$$\mathbf{x}_1 = \left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1} \mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp}\, \mathbf{d}, \qquad \mathbf{x}_2 = \left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1} \mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp}\, \mathbf{d}.$$
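A minimal NumPy sketch of the decomposed solution (test data arbitrary, not from the article): $\mathbf{x}_1$ and $\mathbf{x}_2$ are formed independently and then compared against a standard least-squares routine.

```python
# Decomposed least-squares solution of [A B][x1; x2] = d.
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 12, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

I = np.eye(m)
P_A = I - A @ np.linalg.inv(A.T @ A) @ A.T
P_B = I - B @ np.linalg.inv(B.T @ B) @ B.T

x1 = np.linalg.solve(A.T @ P_B @ A, A.T @ P_B @ d)   # depends only on A, P_B, d
x2 = np.linalg.solve(B.T @ P_A @ B, B.T @ P_A @ d)   # depends only on B, P_A, d

x_ref, *_ = np.linalg.lstsq(np.hstack([A, B]), d, rcond=None)
print(np.allclose(np.concatenate([x1, x2]), x_ref))  # True: same least-squares solution
```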
Suppose a solution $\mathbf{x}$ solves an under-determined system:

$$\begin{bmatrix} \mathbf{A}^{\mathsf T} \\ \mathbf{B}^{\mathsf T} \end{bmatrix} \mathbf{x} = \begin{bmatrix} \mathbf{e} \\ \mathbf{f} \end{bmatrix}, \qquad \mathbf{e} \in \mathbb{R}^{n \times 1}, \quad \mathbf{f} \in \mathbb{R}^{p \times 1}.$$

The minimum-norm solution is given by

$$\mathbf{x} = \begin{bmatrix} \mathbf{A}^{\mathsf T} \\ \mathbf{B}^{\mathsf T} \end{bmatrix}^{+} \begin{bmatrix} \mathbf{e} \\ \mathbf{f} \end{bmatrix}.$$

Using the block matrix pseudoinverse, we have

$$\mathbf{x} = \left[\mathbf{P}_{B}^{\perp} \mathbf{A} \left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1} \quad \mathbf{P}_{A}^{\perp} \mathbf{B} \left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1}\right] \begin{bmatrix} \mathbf{e} \\ \mathbf{f} \end{bmatrix} = \mathbf{P}_{B}^{\perp} \mathbf{A} \left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1} \mathbf{e} + \mathbf{P}_{A}^{\perp} \mathbf{B} \left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1} \mathbf{f}.$$
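Analogously, a minimal NumPy sketch of the minimum-norm solution for the row-wise partitioned system (test data arbitrary):

```python
# Minimum-norm solution of the under-determined system [A^T; B^T] x = [e; f].
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 12, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
e = rng.standard_normal(n)
f = rng.standard_normal(p)

I = np.eye(m)
P_A = I - A @ np.linalg.inv(A.T @ A) @ A.T
P_B = I - B @ np.linalg.inv(B.T @ B) @ B.T

x = (P_B @ A @ np.linalg.solve(A.T @ P_B @ A, e)
     + P_A @ B @ np.linalg.solve(B.T @ P_A @ B, f))

x_ref = np.linalg.pinv(np.vstack([A.T, B.T])) @ np.concatenate([e, f])
print(np.allclose(x, x_ref))   # True: same minimum-norm solution
```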
Instead of $\left([\mathbf{A} \quad \mathbf{B}]^{\mathsf T} [\mathbf{A} \quad \mathbf{B}]\right)^{-1}$, we need to calculate, directly or indirectly,[citation needed][original research?]

$$\left(\mathbf{A}^{\mathsf T} \mathbf{A}\right)^{-1}, \quad \left(\mathbf{B}^{\mathsf T} \mathbf{B}\right)^{-1}, \quad \left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1}, \quad \left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1}.$$
In a dense and small system, we can use singular value decomposition, QR decomposition, or Cholesky decomposition to replace the matrix inversions with numerical routines.
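As one possible illustration (NumPy and SciPy assumed; the variable names and data are ours, not the article's), the explicit inverses can be avoided: because $\mathbf{P}_{B}^{\perp}$ is symmetric and idempotent, the system $\left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right) \mathbf{x}_1 = \mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{d}$ is exactly the normal equation of the smaller least squares problem $\min_{\mathbf{x}_1} \lVert \mathbf{P}_{B}^{\perp} \mathbf{A}\, \mathbf{x}_1 - \mathbf{P}_{B}^{\perp} \mathbf{d} \rVert$, which SVD or QR routines solve without forming an inverse; alternatively, the small symmetric positive definite matrix can be Cholesky-factorized.

```python
# Solving for x1 without explicit matrix inversion.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
m, n, p = 12, 4, 3
A, B, d = rng.standard_normal((m, n)), rng.standard_normal((m, p)), rng.standard_normal(m)
P_B = np.eye(m) - B @ np.linalg.solve(B.T @ B, B.T)   # P_B^perp built via a p-square solve

# Option 1: SVD-based least-squares solve of min ||P_B^perp A x1 - P_B^perp d||
x1_svd, *_ = np.linalg.lstsq(P_B @ A, P_B @ d, rcond=None)

# Option 2: Cholesky factorization of the small SPD matrix A^T P_B^perp A
c, low = cho_factor(A.T @ P_B @ A)
x1_chol = cho_solve((c, low), A.T @ P_B @ d)

print(np.allclose(x1_svd, x1_chol))   # True
```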
In a large system, we may employ iterative methods such as Krylov subspace methods.
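A sketch of this idea under our own assumptions (SciPy's conjugate gradient solver; the problem sizes are arbitrary): the same small symmetric positive definite system can be handed to a Krylov method, applying $\mathbf{P}_{B}^{\perp}$ matrix-free instead of forming it as a dense m-square matrix.

```python
# Krylov (conjugate gradient) solve of (A^T P_B^perp A) x1 = A^T P_B^perp d,
# with P_B^perp applied implicitly through a small p-square solve.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, n, p = 2000, 40, 30
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
d = rng.standard_normal(m)

def proj_B_perp(w):
    # P_B^perp w = w - B (B^T B)^{-1} B^T w, without forming P_B^perp
    return w - B @ np.linalg.solve(B.T @ B, B.T @ w)

op = LinearOperator((n, n), matvec=lambda v: A.T @ proj_B_perp(A @ v), dtype=float)
rhs = A.T @ proj_B_perp(d)
x1, info = cg(op, rhs)      # info == 0 on convergence
```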
Considering parallel algorithms, we can compute $\left(\mathbf{A}^{\mathsf T} \mathbf{A}\right)^{-1}$ and $\left(\mathbf{B}^{\mathsf T} \mathbf{B}\right)^{-1}$ in parallel, and then compute $\left(\mathbf{A}^{\mathsf T} \mathbf{P}_{B}^{\perp} \mathbf{A}\right)^{-1}$ and $\left(\mathbf{B}^{\mathsf T} \mathbf{P}_{A}^{\perp} \mathbf{B}\right)^{-1}$, also in parallel.
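A minimal sketch of this two-stage scheme, using Python's standard thread pool as one possible parallel backend (an illustration of the idea, not a prescribed implementation):

```python
# Two-stage parallel computation of the inversions needed by the block formulas.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
m, n, p = 12, 4, 3
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, p))
I = np.eye(m)

with ThreadPoolExecutor(max_workers=2) as pool:
    # Stage 1: (A^T A)^{-1} and (B^T B)^{-1}, independent of each other
    inv_AtA, inv_BtB = pool.map(np.linalg.inv, [A.T @ A, B.T @ B])
    P_A = I - A @ inv_AtA @ A.T
    P_B = I - B @ inv_BtB @ B.T
    # Stage 2: (A^T P_B^perp A)^{-1} and (B^T P_A^perp B)^{-1}, again independent
    inv1, inv2 = pool.map(np.linalg.inv, [A.T @ P_B @ A, B.T @ P_A @ B])
    # inv1 and inv2 feed directly into the block pseudoinverse and the
    # decomposed least-squares solutions given above.
```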