When a boundary integral operator \(B\) in BEM that is to be used as a preconditioner is not elliptic on its whole domain, such as the hypersingular operator \(D\), a generalized inverse operator \(\dot {B}^{-1}\) that is spectrally equivalent to the original operator \(A\) is needed (at least theoretically). In (Steinbach and Wendland), the preconditioning operator 1 is \(B: H^{s-2\alpha }(\Gamma ) \rightarrow H^s(\Gamma )\). Its generalized inverse is \begin{equation} \dot {B}^{-1}: V^{s,0}(\Gamma ,B) \rightarrow V^{s-2\alpha ,0}(\Gamma ,B). \end{equation}

Because the generalized inverse is an extension of the Moore-Penrose pseudoinverse, we’ll first introduce the latter concept. We’ve already met pseudoinverse matrices in linear algebra. For a matrix equation \(Ax=b\), when \(A\) has full column rank, its unique Moore-Penrose pseudoinverse takes the explicit form \begin{equation} A^{\dagger } = (A^{\ast }A)^{-1}A^{\ast }, \end{equation} where \(A^{\ast }\) is the Hermitian (conjugate) transpose of \(A\). \(A^{\dagger }\) satisfies the four Penrose conditions (Wang et al.):

  1. \(AA^{\dagger }A=A\)

  2. \(A^{\dagger }AA^{\dagger }=A^{\dagger }\)

  3. \((AA^{\dagger })^{\ast }=AA^{\dagger }\)

  4. \((A^{\dagger }A)^{\ast }=A^{\dagger }A\)
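As a quick numerical sanity check, the formula and the four conditions can be verified with NumPy; the matrix below is an arbitrary illustrative choice (any full-column-rank matrix works):

```python
import numpy as np

# Any full-column-rank matrix works; this one is an arbitrary example.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))                 # 5x3, full column rank

# Moore-Penrose pseudoinverse via the formula (A* A)^{-1} A*.
A_dag = np.linalg.inv(A.conj().T @ A) @ A.conj().T

# The four Penrose conditions.
assert np.allclose(A @ A_dag @ A, A)                 # 1. A A'A = A
assert np.allclose(A_dag @ A @ A_dag, A_dag)         # 2. A'A A' = A'
assert np.allclose((A @ A_dag).conj().T, A @ A_dag)  # 3. (A A')* = A A'
assert np.allclose((A_dag @ A).conj().T, A_dag @ A)  # 4. (A'A)* = A'A

# Agrees with NumPy's SVD-based pseudoinverse.
assert np.allclose(A_dag, np.linalg.pinv(A))
```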

From the formula, \(A^{\dagger }A = (A^{\ast }A)^{-1}A^{\ast }A = I\), so \(A^{\dagger }\) is a left inverse of \(A\). According to our previous knowledge about the kernel and range spaces of a matrix, when a matrix has full column rank, it is an injective map, which must have a left inverse.

The basic idea behind the Moore-Penrose pseudoinverse is simple. Assume \(A\) maps from \(V\) to \(W\). Let \(y\) belong to \(W\) and suppose we want to find its pre-image \(x\) in \(V\) in the sense of the pseudoinverse. When \(\mathrm {ker}(A) \neq \{ 0 \}\), \(A\) is not injective, and \(y\) may also lie outside \(\mathrm {Im}(A)\), so an exact pre-image need not exist or be unique. So we first apply \(A^{\ast }\) to \(y\), which maps it into \(\mathrm {Im}(A^{\ast }) = ( \mathrm {ker}(A) )^{\perp }\). On this smaller subspace of \(V\), \(A^{\ast }A\) is bijective, and the pre-image of \(A^{\ast }y\) can be found by applying its inverse.
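This mechanism can be illustrated numerically with a rank-deficient matrix; the construction below (a third column equal to the sum of the first two) is just an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 2))
A = np.hstack([B, B[:, :1] + B[:, 1:2]])   # 5x3: third column = col0 + col1
k = np.array([1.0, 1.0, -1.0])             # spans ker(A), since A @ k = 0
assert np.allclose(A @ k, 0.0)

y = rng.standard_normal(5)

# A* y always lands in Im(A*) = (ker A)^perp: it is orthogonal to ker(A).
assert abs((A.T @ y) @ k) < 1e-10

# The pseudoinverse solution also lies in (ker A)^perp
# and solves the normal equations A* A x = A* y there.
x = np.linalg.pinv(A) @ y
assert abs(x @ k) < 1e-10
assert np.allclose(A.T @ A @ x, A.T @ y)
```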

Before Penrose’s study of the matrix pseudoinverse, there had already been research on generalized inverses of integral and differential operators by Hilbert, Fredholm, et al. Let \(A\) be a bounded linear operator from a Hilbert space \(V\) to a Hilbert space \(W\), and consider the operator equation \(Ax=b\), where \(x\in V\) and \(b\in W\). If the range \(\mathrm {Im}(A)\) of \(A\) is closed in \(W\), the following characterizations are equivalent (Wang et al.), and any \(x\) satisfying them is called a least-squares solution:

  1. \(Ax=Pb\), where \(P\) is the orthogonal projection onto \(\mathrm {Im}(A)\).

  2. \(x \in \argmin _{v\in V} \lVert Av-b \rVert _{W}\).

  3. \(A^{\ast }Ax=A^{\ast }b\).
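For a concrete finite-dimensional check of this equivalence, one can compare the three characterizations with NumPy (the matrix and right-hand side below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))   # full column rank, so Im(A) is closed
b = rng.standard_normal(6)

# 2. Direct least-squares minimization of ||Ax - b||.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# 3. Normal equations A* A x = A* b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# 1. A x = P b, with P the orthogonal projection onto Im(A);
#    for full column rank, P = A (A* A)^{-1} A*.
P = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(x_ls, x_ne)          # characterizations 2 and 3 agree
assert np.allclose(A @ x_ls, P @ b)     # and the minimizer satisfies A x = P b
```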

If we loosen the conditions by assuming \(V\) and \(W\) are Banach spaces instead of Hilbert spaces, then according to the closed range theorem in (Steinbach, page 48), when \(A\) has a closed range, \(\mathrm {Im}(A)\) is the annihilator of the kernel of the adjoint operator \(A': W' \rightarrow V'\), i.e. \begin{equation} \mathrm {Im}(A)=(\mathrm {ker}(A'))^{\circ }, \end{equation} so for any \(y\in \mathrm {Im}(A)\) and \(x'\in \mathrm {ker}(A')\), the duality pairing \(\langle y,x' \rangle \) is zero.

Because there is no inner product structure on \(V\) or \(W\), we no longer have the concepts of orthogonal complement and Hilbert-space adjoint, so the Moore-Penrose pseudoinverse above cannot be used. Still, the domain \(V\) of \(A\) can be decomposed as \begin{equation} V = \mathrm {ker}(A) \oplus Z, \end{equation} where \(Z\) is a closed subspace of \(V\) with \(\mathrm {ker}(A) \cap Z = \{ 0 \}\). If we restrict the domain of \(A\) to \(Z\), the map \(A\big \vert _Z: Z \rightarrow \mathrm {Im}(A)\) is bijective and therefore has an inverse. If the codomain \(W\) of \(A\) is decomposed as \begin{equation} W = \mathrm {Im}(A) \oplus Y = (\mathrm {ker}(A'))^{\circ } \oplus Y, \end{equation} the generalized inverse \(A^+\) of \(A\) can be defined as \begin{equation} A^{+}(y) = \begin {cases} A\big \vert _Z^{-1}(y) & y\in \mathrm {Im}(A) = (\mathrm {ker}(A'))^{\circ } \\ 0 & y\in Y \end {cases}. \end{equation} It is easy to verify that such a generalized inverse only satisfies the first two Penrose conditions:

  1. \(AA^{+}A=A\)

  2. \(A^{+}AA^{+}=A^{+}\)
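A finite-dimensional sketch makes this concrete: with oblique choices of \(Z\) and \(Y\) (the specific matrices below are illustrative assumptions), conditions 1 and 2 hold while the symmetry conditions 3 and 4 fail:

```python
import numpy as np

# All matrices and subspace choices below are illustrative assumptions.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])     # rank 2, ker(A) spanned by (1, 1, -1)

# Columns of Zb span Z, a non-orthogonal complement of ker(A) in V.
Zb = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 1.0]])

# P projects onto Im(A) = {w : w_3 = 0} along Y = span{(1, 1, 1)}.
P = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0,  0.0]])

# A restricted to Z is bijective onto Im(A): invert it there, send Y to 0.
M = A @ Zb                               # A|_Z expressed in the basis of Z
A_plus = Zb @ np.linalg.pinv(M) @ P      # the generalized inverse A^+

# The first two Penrose conditions hold ...
assert np.allclose(A @ A_plus @ A, A)
assert np.allclose(A_plus @ A @ A_plus, A_plus)
# ... but the symmetry conditions 3 and 4 fail for these oblique choices.
assert not np.allclose((A @ A_plus).T, A @ A_plus)
assert not np.allclose((A_plus @ A).T, A_plus @ A)
```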

References

   Olaf Steinbach. Numerical Approximation Methods for Elliptic Boundary Value Problems: Finite and Boundary Elements. Springer Science & Business Media. ISBN 978-0-387-31312-2.

   Olaf Steinbach and Wolfgang L. Wendland. The construction of some efficient preconditioners in the boundary element method. 9(1-2):191–216. URL http://link.springer.com/article/10.1023/A:1018937506719.

   Guorong Wang, Yimin Wei, and Sanzheng Qiao. Generalized Inverses: Theory and Computations, volume 53 of Developments in Mathematics. Springer. ISBN 978-981-13-0145-2, 978-981-13-0146-9. doi: 10.1007/978-981-13-0146-9.

1Here we explicitly say “preconditioning operator” rather than simply “preconditioner”, because we want to distinguish it from “preconditioning matrix”. While a preconditioning operator such as \(B\) is an approximate inverse of the original operator \(A\), a preconditioning matrix is the discretized Galerkin matrix associated with \(\dot {B}^{-1}\), not \(B\). To apply a preconditioning matrix to a discretized linear system, we need to multiply both sides of the equation by its approximate inverse matrix. For simplicity, we will say “preconditioner” instead of “preconditioning operator” from now on.