Notes for "Deep null space learning for inverse problems"

2020-02-05  jjx323

J. Schwab, S. Antholzer, M. Haltmeier, Deep null space learning for inverse problems: convergence analysis and rates, Inverse Problems, 35, 2019, 025008.

Page 8, proof of Theorem 2.8

We have P_{ran(A^{\dagger})}\mathcal{M}_{\mu,\rho,\Phi} = (A^{*}A)^{\mu}\left( \overline{B_{\rho}(0)} \right) and P_{ran(A^{\dagger})}R_{\alpha} = g_{\alpha}(A^{*}A)A^{*}

Notes:
In the following, we assume A is a compact, bounded linear operator. Hence, we immediately have
ran(A^{\dagger}) = ker(A)^{\perp} = \overline{ran(A^{*})} = \overline{ran((A^{*}A)^{1/2})}.
Since g_{\alpha} is a piecewise continuous function, according to formula (2.43) in

H. W. Engl, M. Hanke, A. Neubauer, Regularization of Inverse Problems, Mathematics and Its Applications, vol. 375, Kluwer, 1996,

we obtain
ran(g_{\alpha}(A^{*}A)A^{*}) = ran(A^{*}g_{\alpha}(AA^{*}))\subset ran(A^{*}) \subset ran(A^{\dagger}).
Hence P_{ran(A^{\dagger})}g_{\alpha}(A^{*}A)A^{*} = g_{\alpha}(A^{*}A)A^{*}, which implies P_{ran(A^{\dagger})}R_{\alpha} = g_{\alpha}(A^{*}A)A^{*}. Similarly, we need to show ran((A^* A)^{\mu})\subset ran(A^{\dagger}) = ker(A)^{\perp}. Assume A has a singular system \{ (\sigma_n, v_n, u_n)\}_{n=1}^{\infty}. For each y\in ran((A^* A)^{\mu}), there exists x such that y = (A^* A)^{\mu}x = \sum_{n=1}^{\infty}\sigma_{n}^{2\mu}\langle x, v_n \rangle v_n. For z\in ker(A), we obtain
\|Az\|^2 = \sum_{n=1}^{\infty}\sigma_n^2\langle z, v_n\rangle^2 = 0,
which implies \langle z, v_n \rangle = 0 for every n=1,2,\cdots. We can then compute
\langle z, y \rangle = \langle z, \sum_{n=1}^{\infty}\sigma_{n}^{2\mu}\langle x, v_n \rangle v_n \rangle = \sum_{n=1}^{\infty}\sigma_n^{2\mu}\langle x, v_n \rangle \langle z, v_n \rangle = 0.
The above equality implies y\in ker(A)^{\perp}, so we have proved ran((A^* A)^{\mu})\subset ker(A)^{\perp} = ran(A^{\dagger}).
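As a sanity check (not part of the paper), the two facts used above, g_{\alpha}(A^{*}A)A^{*} = A^{*}g_{\alpha}(AA^{*}) and ran(g_{\alpha}(A^{*}A)A^{*}) \subset ker(A)^{\perp}, can be verified numerically for a finite-dimensional stand-in for the compact operator A. The matrix sizes and the Tikhonov filter g_{\alpha}(\lambda) = 1/(\lambda + \alpha) below are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random rank-deficient matrix: a finite-dimensional stand-in for the
# compact operator A (hypothetical example, not from the paper).
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Tikhonov filter g_alpha(lam) = 1/(lam + alpha), applied spectrally.
alpha = 0.1
def g_alpha(M):
    lam, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / (lam + alpha)) @ V.T

B  = g_alpha(A.T @ A) @ A.T   # g_alpha(A*A) A*
B2 = A.T @ g_alpha(A @ A.T)   # A* g_alpha(AA*)

# Orthogonal projection onto ker(A)^perp = ran(A*), via the pseudoinverse.
P = np.linalg.pinv(A) @ A

print(np.allclose(B, B2))     # the two expressions for B_alpha agree
print(np.allclose(P @ B, B))  # ran(g_alpha(A*A) A*) lies in ker(A)^perp
```

Both checks pass because g_{\alpha}(A^{*}A) preserves the eigenspaces of A^{*}A, so applying it to a vector in ran(A^{*}) stays in ker(A)^{\perp}.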

Page 7, proof of Proposition 2.7

We have R_{\alpha}=(Id_X + \Phi) \circ P_{ran(A^{\dagger})} \circ R_{\alpha} and, \cdots

Notes:
In general, I cannot see why this equality holds. However, if R_{\alpha} := (Id_X + \Phi)\circ B_{\alpha} with B_{\alpha}:=g_{\alpha}(A^{*}A)A^{*}, then I can deduce the equality by the same argument as in the note "Page 8, proof of Theorem 2.8" above.
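With that definition of R_{\alpha}, the identity P_{ran(A^{\dagger})}R_{\alpha} = B_{\alpha} can also be checked numerically, assuming (as for the null space networks in the paper) that ran(\Phi) \subset ker(A). In this sketch \Phi is just a random linear map composed with the projection onto ker(A); the matrix sizes and Tikhonov filter are hypothetical choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite-dimensional stand-in for A (hypothetical example).
m, n, r = 6, 5, 3
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
alpha = 0.1

# B_alpha = g_alpha(A*A) A* with the Tikhonov filter:
# (A*A + alpha I)^{-1} A*.
B_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T)

P_ran = np.linalg.pinv(A) @ A              # projection onto ker(A)^perp
P_ker = np.eye(n) - P_ran                  # projection onto ker(A)
Phi = P_ker @ rng.standard_normal((n, n))  # any map with ran(Phi) in ker(A)

R_alpha = (np.eye(n) + Phi) @ B_alpha      # R_alpha = (Id + Phi) o B_alpha

# P_{ran(A^dagger)} R_alpha = B_alpha, since P_ran @ Phi = 0
# and ran(B_alpha) lies in ker(A)^perp.
print(np.allclose(P_ran @ R_alpha, B_alpha))
```

The key point the check isolates is that P_{ran(A^{\dagger})} annihilates the \Phi-part of R_{\alpha} while leaving B_{\alpha} untouched.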
