\(
% Math symbol commands
\newcommand{\intd}{\,\mathrm{d}} % Symbol 'd' used in integration, such as 'dx'
\newcommand{\diff}{\mathrm{d}} % Symbol 'd' used in differentiation
\newcommand{\Diff}{\mathrm{D}} % Symbol 'D' used in differentiation
\newcommand{\pdiff}{\partial} % Partial derivative
\newcommand{\DD}[2]{\frac{\diff}{\diff #2}\left( #1 \right)}
\newcommand{\Dd}[2]{\frac{\diff #1}{\diff #2}}
\newcommand{\PD}[2]{\frac{\pdiff}{\pdiff #2}\left( #1 \right)}
\newcommand{\Pd}[2]{\frac{\pdiff #1}{\pdiff #2}}
\newcommand{\rme}{\mathrm{e}} % Exponential e
\newcommand{\rmi}{\mathrm{i}} % Imaginary unit i
\newcommand{\rmj}{\mathrm{j}} % Imaginary unit j
\newcommand{\vect}[1]{\boldsymbol{#1}} % Vector typeset in bold and italic
\newcommand{\normvect}{\vect{n}} % Normal vector: n
\newcommand{\dform}[1]{\overset{\rightharpoonup}{\boldsymbol{#1}}} % Vector for differential form
\newcommand{\cochain}[1]{\overset{\rightharpoonup}{#1}} % Vector for cochain
\newcommand{\Abs}[1]{\big\lvert#1\big\rvert} % Absolute value (single big vertical bar)
\newcommand{\abs}[1]{\lvert#1\rvert} % Absolute value (single vertical bar)
\newcommand{\Norm}[1]{\big\lVert#1\big\rVert} % Norm (double big vertical bar)
\newcommand{\norm}[1]{\lVert#1\rVert} % Norm (double vertical bar)
\newcommand{\ouset}[3]{\overset{#3}{\underset{#2}{#1}}} % Over and under set
% Super/subscript for column index of a matrix, which is used in tensor analysis.
\newcommand{\cscript}[1]{\;\; #1}
\newcommand{\suchthat}{\textit{S.T.\;}} % S.T., such that
% Star symbol used as prefix in front of a paragraph with no indent
\newcommand{\prefstar}{\noindent$\ast$ }
% Big vertical line restricting the function.
% Example: $u(x)\restrict_{\Omega_0}$
\newcommand{\restrict}{\big\vert}
% Math operators which are typeset in Roman font
\DeclareMathOperator{\sgn}{sgn} % Sign function
\DeclareMathOperator{\erf}{erf} % Error function
\DeclareMathOperator{\Bd}{Bd} % Boundary of a set or domain, used in topology
\DeclareMathOperator{\Int}{Int} % Interior of a set or domain, used in topology
\DeclareMathOperator{\rank}{rank} % Rank of a matrix
\DeclareMathOperator{\divergence}{div} % Divergence
\DeclareMathOperator{\curl}{curl} % Curl
\DeclareMathOperator{\grad}{grad} % Gradient
\DeclareMathOperator{\diag}{diag} % Diagonal
\DeclareMathOperator{\tr}{tr} % Trace
\DeclareMathOperator{\lhs}{LHS} % Left hand side
\DeclareMathOperator{\rhs}{RHS} % Right hand side
\DeclareMathOperator{\argmax}{argmax}
\DeclareMathOperator{\argmin}{argmin}
\DeclareMathOperator{\esssup}{ess\,sup}
\DeclareMathOperator{\essinf}{ess\,inf}
\DeclareMathOperator{\kernel}{ker} % The kernel set of a map
\DeclareMathOperator{\image}{Im} % The image set of a map
\DeclareMathOperator{\diam}{diam} % Diameter of a domain or a set
\DeclareMathOperator{\dist}{dist} % Distance between two sets
\DeclareMathOperator{\const}{const}
\DeclareMathOperator{\adj}{adj}
\DeclareMathOperator{\spann}{span}
\DeclareMathOperator{\real}{Re}
\DeclareMathOperator{\imag}{Imag}
\)

Chapter 1
Adjoint operators in functional analysis

1.1 Basic definitions of adjoint operators

There are two types of adjoint operators in functional analysis.

1. Adjoint operator in normed spaces

Let \(X\) and \(Y\) be two normed spaces, and let \(X'\) and \(Y'\) be their respective dual spaces. Let \(A: X \rightarrow Y\) be a bounded linear operator. Then for all \(x\in X\) and \(y\in Y'\), the adjoint operator \(A': Y' \rightarrow X'\) of \(A\) satisfies \begin{equation} \langle Ax,y \rangle _{Y,Y'} = \langle x,A'y \rangle _{X,X'}, \end{equation} where \(\langle \cdot ,\cdot \rangle _{Y,Y'}\) is the duality pairing between \(Y\) and \(Y'\), and \(\langle \cdot ,\cdot \rangle _{X,X'}\) is the duality pairing between \(X\) and \(X'\). Usually, the subscripts are omitted, \begin{equation} \label {eq:dual-adjoint} \langle Ax,y \rangle = \langle x,A'y \rangle , \end{equation} and the order of the two operands in \(\langle \cdot ,\cdot \rangle \) does not matter. In (Yosida, page 193), \(A'\) is also called the dual operator.

A duality pairing applies a linear functional to an element of the primal space and produces a scalar. Therefore, the above equation can be written as \begin{equation} (Ax)(y) = y(Ax) = x(A'y) = (A'y)(x). \end{equation} Whether we apply, for example, \(Ax\) to \(y\) or \(y\) to \(Ax\) does not matter. In analogy with C/C++ programming, we can regard an object in the primal space \(X\) or \(Y\) as the data to be manipulated, and an object in the dual space \(X'\) or \(Y'\) as a function. The behavior of the duality pairing depends on how such a function in the dual space is defined. A typical example is a boundary integral operator. Let \(V\) be the boundary integral operator associated with the single layer potential. We usually consider its input function \(u\) to be in the primal space and \(V\) itself in the dual space. The output \(Vu\) is another function: \begin{equation} (Vu)(x) = \int _{\Gamma } k(x,y) u(y) \intd s_y. \end{equation} Therefore, as a function or operation, \(V(\cdot ) = \int _{\Gamma } k(x,y) (\cdot ) \intd s_y\), which can be considered as a function object in C++. Its behavior is fully determined by the kernel function \(k(x,y)\) and the definite integration operation.
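The function-object analogy can be made concrete with a short sketch; in Python, a callable class plays the role the C++ function object plays in the text. The kernel \(k(x,y) = \rme^{-(x-y)^2}\), the choice \(\Gamma = [0,1]\), and the midpoint quadrature rule are purely illustrative assumptions here; the actual single layer kernel is singular and requires more careful quadrature.

```python
import math

class BoundaryIntegralOperator:
    """A callable mimicking V(.) = integral over Gamma of k(x, y)(.) ds_y.

    Like a C++ function object, its behavior is fully determined by the
    kernel k(x, y) and the quadrature rule used for the integration.
    Here Gamma = [0, 1] and a midpoint rule with n panels is assumed.
    """

    def __init__(self, kernel, n=200):
        self.kernel = kernel
        self.n = n

    def __call__(self, u):
        # Applying V to u returns another function of x, namely (Vu)(x).
        h = 1.0 / self.n
        nodes = [(j + 0.5) * h for j in range(self.n)]

        def Vu(x):
            return sum(self.kernel(x, y) * u(y) * h for y in nodes)

        return Vu

# A hypothetical smooth kernel (NOT the true single layer kernel).
V = BoundaryIntegralOperator(lambda x, y: math.exp(-(x - y) ** 2))
Vu = V(lambda y: 1.0)   # apply V to the constant function u = 1
value = Vu(0.5)         # evaluate the output function at x = 0.5
```

As in the C++ analogy, the object `V` is fully determined by the kernel passed to its constructor and by the integration rule; applying it to the data `u` produces another function `Vu`.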

2. Adjoint operator in Hilbert spaces (Kreyszig, page 196)

Let \(X\) and \(Y\) be two Hilbert spaces and \(A: X \rightarrow Y\) be a bounded linear operator. For all \(x\in X\) and \(y\in Y\), the Hilbert-adjoint operator \(A^{\ast }: Y \rightarrow X\) of \(A\) satisfies \begin{equation} \label {eq:hilbert-adjoint} \langle Ax,y \rangle _Y = \langle x,A^{\ast }y \rangle _X, \end{equation} where \(\langle \cdot ,\cdot \rangle _X\) is the inner product in \(X\) and \(\langle \cdot ,\cdot \rangle _Y\) is the inner product in \(Y\). Usually, the above condition is simply written as \begin{equation} \langle Ax,y \rangle = \langle x,A^{\ast }y \rangle . \end{equation} The Hilbert-adjoint operator \(A^{\ast }\) exists, is unique, and satisfies \(\lVert A^{\ast } \rVert = \lVert A \rVert \).

We should also note that even though \(A^{\ast }\) maps from \(Y\) to \(X\), it is in general not the inverse operator \(A^{-1}: Y \rightarrow X\) of \(A\).
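Anticipating Section 1.4, where the Hilbert-adjoint of a matrix turns out to be its Hermitian transpose, this distinction is easy to check on a small example; the \(2\times 2\) matrix below is an arbitrary choice.

```python
# For a square invertible A, the Hilbert-adjoint (the Hermitian
# transpose in the discrete case, see Section 1.4) generally differs
# from the inverse. A 2x2 real example:
A = [[1.0, 2.0],
     [3.0, 4.0]]

# Hermitian transpose (entries are real, so it is the plain transpose).
A_star = [[A[0][0], A[1][0]],
          [A[0][1], A[1][1]]]

# Inverse via the 2x2 cofactor formula: A^{-1} = adj(A) / det(A).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

# A* and A^{-1} do not coincide (they coincide only for unitary A).
print(A_star)  # [[1.0, 3.0], [2.0, 4.0]]
print(A_inv)   # [[-2.0, 1.0], [1.5, -0.5]]
```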

1.2 Relationship between adjoint and Hilbert-adjoint in Hilbert spaces

Because a Hilbert space is also a normed space, when \(X\) and \(Y\) are both Hilbert spaces, the operator \(A: X \rightarrow Y\) has both an adjoint operator \(A'\) and a Hilbert-adjoint operator \(A^{\ast }\). Their relationship can be visualized in the following diagram. According to the Riesz representation theorem, \(J_X\) is the Riesz map from \(X'\) to \(X\) and \(J_Y\) is the Riesz map from \(Y'\) to \(Y\). Then we have \begin{equation} A^{\ast } = J_X A' J_Y^{-1}. \end{equation}
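In finite dimensions this relation can be verified directly. The sketch below assumes \(X = Y = \mathbb{R}^2\) with arbitrarily chosen diagonal metric tensors \(G_X\) and \(G_Y\) defining the inner products; identifying functionals with coefficient vectors through the duality pairing gives \(A' = A^{\mathrm{T}}\) (as derived in Section 1.4), and the Riesz map becomes \(J_X f = G_X^{-1} f\), so that \(A^{\ast } = G_X^{-1} A^{\mathrm{T}} G_Y\).

```python
# Verify A* = J_X A' J_Y^{-1} on R^2 with weighted inner products.
# G_X, G_Y are arbitrary diagonal SPD metric choices for illustration.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(G, u, v):
    """Weighted inner product <u, v>_G = v^T G u (real case)."""
    return sum(G[i][j] * u[j] * v[i]
               for i in range(len(u)) for j in range(len(u)))

A   = [[1.0, 2.0], [3.0, 4.0]]
G_X = [[2.0, 0.0], [0.0, 5.0]]   # metric on X
G_Y = [[3.0, 0.0], [0.0, 7.0]]   # metric on Y

# A* = J_X A' J_Y^{-1} = G_X^{-1} A^T G_Y (diagonal metrics invert entrywise).
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
A_star = [[At[i][j] * G_Y[j][j] / G_X[i][i] for j in range(2)]
          for i in range(2)]

x, y = [1.0, -2.0], [0.5, 3.0]
lhs = inner(G_Y, matvec(A, x), y)        # <Ax, y>_Y
rhs = inner(G_X, x, matvec(A_star, y))   # <x, A*y>_X
```

The two sides agree for any choice of \(x\) and \(y\), which is exactly the Hilbert-adjoint condition with respect to the weighted inner products.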

1.3 Self-adjointness

In the context of normed spaces, the operator \(A: X \rightarrow Y\) is self-adjoint if \(A': Y' \rightarrow X'\) is equal to \(A\). This requires \(X = Y'\) and \(Y = X'\), which means \(X\) and \(Y\) are dual to each other.

In the context of Hilbert spaces, the operator \(A: X \rightarrow Y\) is self-adjoint if \(A^{\ast }: Y \rightarrow X\) is equal to \(A\). This requires \(X\) and \(Y\) to be the same Hilbert space.

Example. In (Steinbach) and (Steinbach and Wendland), the boundary integral operators \(V: H^{-1/2}(\Gamma ) \rightarrow H^{1/2}(\Gamma )\) and \(D: H^{1/2}(\Gamma ) \rightarrow H^{-1/2}(\Gamma )\) are self-adjoint. Because the Sobolev spaces \(H^{-1/2}(\Gamma )\) and \(H^{1/2}(\Gamma )\) are dual to each other, this self-adjointness is in the sense of normed spaces. Meanwhile, even though both \(H^{-1/2}(\Gamma )\) and \(H^{1/2}(\Gamma )\) are Hilbert spaces, they are not the same space. Therefore, this self-adjointness is not in the sense of Hilbert spaces.

1.4 Adjoint operators in discrete case

Let \(X\) be \(\mathbb {K}^{n}\) and \(Y\) be \(\mathbb {K}^m\), where \(\mathbb {K}\) can be \(\mathbb {R}\) or \(\mathbb {C}\). Let \(x\in X\), \(y\in Y\) and \(\tilde {y}\in Y'\), where \(\tilde {y}\) is the dual vector associated with \(y\). In the language of differential geometry, \(y\) is a tangent vector and \(\tilde {y}\) is the corresponding cotangent vector. When \(\mathbb {K}=\mathbb {R}\), \begin{equation} \tilde {y}_i = g_{ij} y^j, \end{equation} where \(g_{ij}\) is the metric tensor. Because \(X\) and \(Y\) are Cartesian spaces with orthonormal bases, \(g_{ij}=\delta _{ij}\) and hence \(\tilde {y}_i = y^i\).

When \(\mathbb {K}=\mathbb {C}\), there is an additional complex conjugation to get \(\tilde {y}_i\): \begin{equation} \tilde {y}_i = \overline {g_{ij} y^j} = \overline {y^i}. \end{equation} As a convention, in the discrete form, we use a column vector to represent a tangent vector and a row vector to represent a cotangent vector. Therefore, \begin{equation} \tilde {y} = y^{\mathrm {H}}, \end{equation} where \((\cdot )^{\mathrm {H}}\) is the Hermitian transpose.

If we treat \(X\) and \(Y\) as normed spaces, the left hand side of the adjoint condition in Equation 1.2 is a duality pairing between a tangent vector and a cotangent vector. This operation is just taking the sum of the coefficient-wise product of the two vectors, i.e. \begin{equation} \langle Ax,\tilde {y} \rangle = \sum _i \left ( \sum _j \overline {g_{ij}y^j} \right ) (Ax)^i = \sum _i \overline {y^i} (Ax)^i = y^{\mathrm {H}} A x. \end{equation} The right hand side is \begin{equation} \langle x,A'\tilde {y} \rangle = \sum _i \left ( A' \tilde {y}^{\mathrm {T}} \right )^i x^i = \sum _i \left ( A' \overline {y} \right )^i x^i = (A' \overline {y})^{\mathrm {T}} x = y^{\mathrm {H}} A'^{\mathrm {T}} x. \end{equation} Therefore, the adjoint operator is the transpose of the original operator, i.e. \(A' = A^{\mathrm {T}}\). Here the identifications between \(X\) and \(X'\) and between \(Y\) and \(Y'\) are implied.
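A quick numerical check of \(A' = A^{\mathrm{T}}\) for \(\mathbb{K} = \mathbb{C}\) (the matrix and vectors below are arbitrary test data): the complex conjugation lives in the cotangent vector \(\tilde{y}\), while the pairing itself is a plain sum of coefficient-wise products.

```python
# Check <Ax, y~> = <x, A^T y~> in the duality-pairing sense over C.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def pairing(f, u):
    """Duality pairing: sum of coefficient-wise products, no conjugation."""
    return sum(fi * ui for fi, ui in zip(f, u))

A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]
x = [1 - 1j, 2 + 3j]
y = [4 + 1j, -1 + 2j]
y_tilde = [yi.conjugate() for yi in y]   # cotangent vector of y

lhs = pairing(y_tilde, matvec(A, x))             # <Ax, y~> = y^H A x
rhs = pairing(matvec(transpose(A), y_tilde), x)  # <x, A' y~> with A' = A^T
```

Both sides reduce to \(y^{\mathrm{H}} A x\), as in the derivation above.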

If we treat \(X\) and \(Y\) as Hilbert spaces, the left hand side of the adjoint condition in Equation 1.5 is \begin{equation} \langle Ax,y \rangle = \sum _{i,j} g_{ij} ( Ax )^i \overline {y^j} = y^{\mathrm {H}} Ax. \end{equation} Recall that in differential geometry, the inner product of two vectors in the same space involves the metric tensor. When \(\mathbb {K}=\mathbb {C}\), there is also a complex conjugation on the second operand.

The right hand side of the adjoint condition is \begin{equation} \langle x,A^{\ast }y \rangle = \sum _{i,j} g_{ij} x^i \overline {( A^{\ast }y )^j} = \sum _i x^i \overline {(A^{\ast }y)^i} = y^{\mathrm {H}} (A^{\ast })^{\mathrm {H}} x. \end{equation} Therefore, the Hilbert-adjoint operator is the Hermitian transpose of the original operator, i.e. \(A^{\ast } = A^{\mathrm {H}}\).
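Similarly, the condition \(\langle Ax,y \rangle = \langle x,A^{\ast }y \rangle \) with \(A^{\ast } = A^{\mathrm{H}}\) can be checked with the standard (identity-metric) complex inner product; the matrix and vectors are arbitrary test data.

```python
# Check <Ax, y> = <x, A^H y> with <u, v> = sum_i u^i conj(v^i) over C.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def inner(u, v):
    """Standard complex inner product, conjugating the second operand."""
    return sum(ui * vi.conjugate() for ui, vi in zip(u, v))

def hermitian(M):
    """Hermitian transpose: conjugate each entry of the transpose."""
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

A = [[1 + 2j, 3 - 1j],
     [0 + 1j, 2 + 0j]]
x = [1 - 1j, 2 + 3j]
y = [4 + 1j, -1 + 2j]

lhs = inner(matvec(A, x), y)             # <Ax, y>
rhs = inner(x, matvec(hermitian(A), y))  # <x, A* y> with A* = A^H
```

Unlike the duality-pairing check, here the conjugation sits in the inner product itself, which is why the Hilbert-adjoint picks up the Hermitian transpose rather than the plain transpose.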

1.5 Clarification

The discussion above illustrates the abuse of terminology (e.g. adjoint, self-adjoint) and of notation (e.g. \(\langle \cdot ,\cdot \rangle \)) that occurs in mathematics. We therefore clarify and recapitulate the following points.