
and the synthesis operator, which is just the adjoint operator of $T$, by
$$T^{*}: c \in \ell^2(\mathbb{Z}) \mapsto T^{*}c = \sum_{j} c_j f_j. \qquad (5)$$
j

The frame operator $S$ is defined by $S = T^{*}T$, hence $Sf = \sum_j \langle f, f_j \rangle f_j$. $S$ is bounded by $AI \le S \le BI$ and hence invertible on $H$.
We will also make use of the operator $TT^{*}$ in form of its Gram matrix representation $R : \ell^2(\mathbb{Z}) \to \ell^2(\mathbb{Z})$ with entries $R_{j,l} = \langle f_j, f_l \rangle$. On $\mathcal{R}(T) = \mathcal{R}(R)$ the matrix $R$ is bounded by $AI \le R \le BI$ and invertible. On $\ell^2(\mathbb{Z})$ this inverse extends to the Moore–Penrose inverse or pseudo-inverse $R^{+}$ (cf. [12]).
Given a frame $\{f_j\}_{j\in\mathbb{Z}}$ for $H$, any $f \in H$ can be expressed as
$$f = \sum_{j\in\mathbb{Z}} \langle f, f_j \rangle \gamma_j = \sum_{j\in\mathbb{Z}} \langle f, \gamma_j \rangle f_j, \qquad (6)$$
where the elements $\gamma_j := S^{-1} f_j$ form the so-called dual frame, and the frame operator induced by the $\gamma_j$ coincides with $S^{-1}$. Hence if a set $\{f_j\}_{j\in\mathbb{Z}}$ establishes a frame for $H$, we can reconstruct any function $f \in H$ from its moments $\langle f, f_j \rangle$.
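To make the dual-frame reconstruction (6) concrete, here is a minimal finite-dimensional sketch in Python (our own illustration, not from the paper): the rows of $F$ are the frame vectors $f_j$, the frame operator is $S = F^{T}F$, the dual frame vectors are $S^{-1}f_j$, and $f$ is recovered from its moments $\langle f, f_j \rangle$.

```python
import numpy as np

# Frame: rows of F are the frame vectors f_j in R^2 (Mercedes-Benz frame,
# a tight frame with A = B = 3/2).
F = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3) / 2],
              [-0.5, -np.sqrt(3) / 2]])

S = F.T @ F                        # frame operator S = sum_j f_j f_j^T
duals = np.linalg.solve(S, F.T).T  # dual frame gamma_j = S^{-1} f_j

f = np.array([0.3, -1.2])
moments = F @ f                    # moments <f, f_j>
f_rec = duals.T @ moments          # f = sum_j <f, f_j> gamma_j, cf. (6)
assert np.allclose(f, f_rec)
```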
One possibility to connect sampling theory to frame theory is by means of the sinc function
$$\operatorname{sinc}(t) = \frac{\sin \pi t}{\pi t}. \qquad (7)$$
Its translates give rise to a reproducing kernel for $B$ via
$$f(t) = \langle f, \operatorname{sinc}(\cdot - t) \rangle \quad \forall t,\ f \in B. \qquad (8)$$
Combining (8) with formulas (3) and (6) we obtain the following well-known result [13,2].

Theorem 1.1. If the set $\{\operatorname{sinc}(\cdot - t_j)\}_{j\in\mathbb{Z}}$ is a frame for $B$, then the function $f \in B$ is uniquely defined by the sampling set $\{f(t_j)\}_{j\in\mathbb{Z}}$. In this case we can recover $f$ from its samples by
$$f(t) = \sum_{j\in\mathbb{Z}} f(t_j)\, \gamma_j(t), \quad \text{where } \gamma_j = S^{-1} \operatorname{sinc}(\cdot - t_j), \qquad (9)$$
or equivalently by
$$f(t) = \sum_{j\in\mathbb{Z}} c_j \operatorname{sinc}(t - t_j), \quad \text{where } Rc = b, \qquad (10)$$
with $R$ being the frame Gram matrix with entries $R_{j,l} = \operatorname{sinc}(t_j - t_l)$ and $b = \{b_j\} = \{f(t_j)\}$.

The challenge is now to find easy-to-verify conditions on the sampling points $t_j$ such that $\{\operatorname{sinc}(\cdot - t_j)\}_{j\in\mathbb{Z}}$ (or equivalently the exponential system $\{e^{2\pi i t_j \omega}\}_{j\in\mathbb{Z}}$) is a frame for $B$. This is a well-traversed area (at least for one-dimensional signals), and the reader should consult [1,15,24] for further details and references. Unless mentioned otherwise, from now on we will assume that $\{\operatorname{sinc}(\cdot - t_j)\}_{j\in\mathbb{Z}}$ is a frame for $B$.
Of course, neither of the formulas (9) and (10) can actually be implemented on a computer, because both involve the solution of an infinite-dimensional operator equation, whereas in practice we can only compute a finite-dimensional approximation. Although the design of a valid finite-dimensional model poses severe mathematical challenges, this step is often neglected in theoretical as well as numerical treatments of the nonuniform sampling problem. We will see in the sequel that the way we design our finite-dimensional model is crucial for the stability and efficiency of the resulting numerical reconstruction algorithms.
In the next two sections we describe two different approaches for obtaining finite-dimensional approximations to formulas (9) and (10). The first and more traditional approach, discussed in Section 2, applies a finite section method to Eq. (10). This approach leads to an ill-posed problem involving the solution of a large unstructured linear system of equations. The second approach, outlined in Section 3, constructs a finite model for the operator equation in (9) by means of trigonometric polynomials. This technique leads to a well-posed problem that is tied to efficient numerical algorithms.


2. Truncated frames lead to ill-posed problems

According to Eq. (10) we can reconstruct $f$ from its sampling values $f(t_j)$ via $f(t) = \sum_{j\in\mathbb{Z}} c_j \operatorname{sinc}(t - t_j)$, where $c = R^{+} b$ with $b_j = f(t_j),\ j \in \mathbb{Z}$. In order to compute a finite-dimensional approximation to $c = \{c_j\}_{j\in\mathbb{Z}}$ we use the finite section method [17]. For $x \in \ell^2(\mathbb{Z})$ and $n \in \mathbb{N}$ we define the orthogonal projection $P_n$ by
$$P_n x = (\dots, 0, 0, x_{-n}, x_{-n+1}, \dots, x_{n-1}, x_n, 0, 0, \dots) \qquad (11)$$
and identify the image of $P_n$ with the space $\mathbb{C}^{2n+1}$. Setting $R_n = P_n R P_n$ and $b^{(n)} = P_n b$, we obtain the $n$th approximation $c^{(n)}$ to $c$ by solving
$$R_n c^{(n)} = b^{(n)}. \qquad (12)$$
It is clear that using the truncated frame $\{\operatorname{sinc}(\cdot - t_j)\}_{j=-n}^{n}$ in (10) for an approximate reconstruction of $f$ leads to the same system of equations.
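Here is a minimal sketch of the finite section method (12) in Python (our own illustration with a hypothetical jittered sampling set; none of the names come from the paper): assemble the $(2n+1)\times(2n+1)$ Gram matrix with entries $\operatorname{sinc}(t_j - t_l)$ and solve for the coefficient vector $c^{(n)}$.

```python
import numpy as np

def finite_section_solve(t, b):
    """Solve R_n c = b, where R_n[j, l] = sinc(t_j - t_l) is the
    truncated frame Gram matrix and t holds 2n+1 sampling points."""
    R = np.sinc(t[:, None] - t[None, :])  # np.sinc(x) = sin(pi x)/(pi x)
    return np.linalg.solve(R, b)

# Hypothetical jittered sampling set (|t_j - j| < 1/4, hence a Riesz basis)
# and samples of a band-limited test signal f in B.
rng = np.random.default_rng(0)
n = 20
t = np.arange(-n, n + 1) + 0.2 * rng.uniform(-1, 1, 2 * n + 1)
f = lambda x: np.sinc(x - 0.5) + 0.7 * np.sinc(x + 2.0)
c = finite_section_solve(t, f(t))

# Approximate reconstruction f(x) ~ sum_j c_j sinc(x - t_j), cf. (10).
x = np.linspace(-5.0, 5.0, 201)
f_rec = np.sinc(x[:, None] - t[None, :]) @ c
```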
If $\{\operatorname{sinc}(\cdot - t_j)\}_{j\in\mathbb{Z}}$ is an exact frame (i.e., a Riesz basis) for $B$, then we have the following well-known result.

Lemma 2.1. Let $\{\operatorname{sinc}(\cdot - t_j)\}_{j\in\mathbb{Z}}$ be an exact frame for $B$ with frame bounds $A, B$, and let $Rc = b$ and $R_n c^{(n)} = b^{(n)}$ be as defined above. Then $R_n^{-1}$ converges strongly to $R^{-1}$, and hence $c^{(n)} \to c$ for $n \to \infty$.


Since the proof of this result given in [9] is somewhat lengthy, we include a rather short proof here.

Proof. Note that $R$ is invertible on $\ell^2(\mathbb{Z})$ and $A \le R \le B$. Let $x \in \mathbb{C}^{2n+1}$ with $\|x\| = 1$; then $\langle R_n x, x \rangle = \langle P_n R P_n x, x \rangle = \langle R x, x \rangle \ge A$. In the same way we get $\|R_n\| \le B$, hence the matrices $R_n$ are invertible and uniformly bounded by $A \le R_n \le B$ and
$$\frac{1}{B} \le R_n^{-1} \le \frac{1}{A} \quad \text{for all } n \in \mathbb{N}.$$
The lemma of Kantorovich [32] yields that $R_n^{-1} \to R^{-1}$ strongly.


If $\{\operatorname{sinc}(\cdot - t_j)\}_{j\in\mathbb{Z}}$ is a nonexact frame for $B$, the situation is more delicate. Let us consider the following situation.

Example 1. Let $f \in B$ and let the sampling points be given by $t_j = j/m,\ j \in \mathbb{Z},\ 1 < m \in \mathbb{N}$, i.e., the signal is regularly oversampled at $m$ times the Nyquist rate. In this case the reconstruction of $f$ is trivial, since the set $\{\operatorname{sinc}(\cdot - t_j)\}_{j\in\mathbb{Z}}$ is a tight frame with frame bounds $A = B = m$. Shannon's Sampling Theorem implies that $f$ can be expressed as $f(t) = \sum_{j\in\mathbb{Z}} c_j \operatorname{sinc}(t - t_j)$, where $c_j = f(t_j)/m$, and the numerical approximation is obtained by truncating the summation, i.e.,
$$f_n(t) = \sum_{j=-n}^{n} \frac{f(t_j)}{m} \operatorname{sinc}(t - t_j).$$

Using the truncated frame approach one finds that $R$ is a Toeplitz matrix with entries
$$R_{j,l} = \frac{\sin\!\big(\pi (j-l)/m\big)}{\pi (j-l)/m}, \qquad j, l \in \mathbb{Z};$$
in other words, $R_n$ coincides with the prolate matrix [36,39]. The unpleasant numerical properties of the prolate matrix are well documented. In particular, we know that the singular values $\sigma_k$ of $R_n$ cluster around 0 and 1, with $\log n$ singular values in the transition region. Since the singular values of $R_n$ decay exponentially to zero, the finite-dimensional reconstruction problem has become severely ill-posed [12], although the infinite-dimensional problem is "perfectly posed" since the frame operator satisfies $S = mI$, where $I$ is the identity operator.
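The ill-conditioning is easy to observe numerically. A short sketch (our own illustration, not from the paper) builds the Toeplitz matrix of Example 1 for oversampling factor $m = 2$ and inspects its singular values:

```python
import numpy as np

m, n = 2, 50                 # oversampling factor and truncation index
j = np.arange(-n, n + 1)
# Prolate-type Toeplitz matrix R_{j,l} = sinc((j - l)/m) for t_j = j/m
R = np.sinc((j[:, None] - j[None, :]) / m)

s = np.linalg.svd(R, compute_uv=False)
print(s[:3])    # largest singular values: cluster near the frame bound m
print(s[-3:])   # smallest: decay essentially exponentially, down to the
                # machine-precision floor, so R_n is numerically singular
```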
Of course, the situation does not improve when we consider nonuniformly spaced samples. In this case it follows from standard linear algebra that $\sigma(R) \subseteq \{0\} \cup [A, B]$, or expressed in words, the nonzero singular values of $R$ are bounded away from zero. However, for the truncated matrices $R_n$ we have
$$\sigma(R_n) \subseteq (0, B]$$
and the smallest of the singular values of $R_n$ will go to zero for $n \to \infty$; see [23].
Let $A = U \Sigma V^{*}$ be the singular value decomposition of a matrix $A$ with $\Sigma = \operatorname{diag}(\{\sigma_k\})$. Then the Moore–Penrose inverse of $A$ is $A^{+} = V \Sigma^{+} U^{*}$, where (see, e.g., [18])
$$\Sigma^{+} = \operatorname{diag}(\{\sigma_k^{+}\}), \qquad \sigma_k^{+} = \begin{cases} 1/\sigma_k & \text{if } \sigma_k \neq 0, \\ 0 & \text{otherwise}. \end{cases} \qquad (13)$$
For $R_n = U_n \Sigma_n V_n^{*}$ this means that the singular values close to zero will give rise to extremely large coefficients in $R_n^{+}$. In fact, $\|R_n^{+}\| \to \infty$ for $n \to \infty$, and consequently $c^{(n)}$ does not converge to $c$. In practice, $\|R_n^{+}\|$ is always bounded due to finite precision arithmetic, but it is clear that it will lead to meaningless results for large $n$. If the sampling values are perturbed due to round-off error or data error, then those error components which correspond to small singular values $\sigma_k$ are amplified by the (then large) factors $1/\sigma_k$. Although for a given $R_n$ these amplifications are theoretically bounded, they may be practically unacceptably large.
Such phenomena are well known in regularization theory [12]. A standard technique to compute a stable solution for an ill-conditioned system is the truncated singular value decomposition (TSVD) [12]. In our case this means we compute a regularized pseudo-inverse $R_n^{+,\tau} = V_n \Sigma_n^{+,\tau} U_n^{*}$, where
$$\Sigma_n^{+,\tau} = \operatorname{diag}(\{d_k^{+}\}), \qquad d_k^{+} = \begin{cases} 1/\sigma_k & \text{if } \sigma_k > \tau, \\ 0 & \text{otherwise}. \end{cases} \qquad (14)$$
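In code, (14) amounts to inverting only the singular values above the truncation level. A minimal Python sketch (our own illustration; `tau` plays the role of the truncation level $\tau$):

```python
import numpy as np

def tsvd_solve(R, b, tau):
    """Regularized solution c = R^{+,tau} b: invert only the singular
    values above the truncation level tau, cf. (14)."""
    U, s, Vt = np.linalg.svd(R)
    s_inv = np.where(s > tau, 1.0 / s, 0.0)  # d_k^+ from (14)
    return Vt.T @ (s_inv * (U.T @ b))
```

Combined with the estimate (21) derived below, `tau` can be chosen from the noise level and the upper frame bound alone.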
In [23] it is shown that for each $n$ we can choose an appropriate truncation level $\tau$ such that the regularized inverses $R_n^{+,\tau}$ converge strongly to $R^{+}$ for $n \to \infty$, and consequently $\lim_{n\to\infty} \|f - f^{(n)}\| = 0$, where
$$f^{(n)}(t) = \sum_{j=-n}^{n} c_j^{(n,\tau)} \operatorname{sinc}(t - t_j)$$
with
$$c^{(n,\tau)} = R_n^{+,\tau} b^{(n)}.$$

The optimal truncation level $\tau$ depends on the dimension $n$, the sampling geometry, and the noise level. Thus it is not known a priori and in principle has to be determined for each $n$ independently. Since $\tau$ is of vital importance for the quality of the reconstruction, but no theoretical explanations for the choice of $\tau$ are given in the sampling literature, we briefly discuss this issue. For this purpose we need some results from regularization theory.

2.1. Estimation of the regularization parameter

Let $Ax = y$ be given, where $A$ is ill-conditioned or singular and $y^{\delta}$ is a perturbed right-hand side with $\|y - y^{\delta}\| \le \delta \|y\|$. Since in our sampling problem the matrix under consideration is symmetric, we assume for convenience that $A$ is symmetric. From a numerical point of view, ill-conditioned systems behave like singular systems, and additional information is needed to obtain a satisfactory solution to $Ax = y$. This information is usually stated in terms of "smoothness" of the solution $x$. A standard approach to qualitatively describe smoothness of $x$ is to require that $x$ can be represented in the form $x = Sz$, with some vector $z$ of reasonable norm and a "smoothing" matrix $S$; cf. [12,29]. Often it is useful to construct $S$ directly from $A$ by setting
$$S = A^{p}, \quad p \in \mathbb{N}_0. \qquad (15)$$
Usually, $p$ is assumed to be fixed, typically at $p = 1$ or $2$.
We compute a regularized solution to $Ax = y^{\delta}$ via a truncated SVD and want to determine the optimal regularization parameter (i.e., truncation level) $\tau$.
Under the assumption that
$$x = Sz, \qquad \|Ax - y^{\delta}\| \le \Delta \|z\|, \qquad (16)$$
it follows from Theorem 4.1 in [29] that the optimal regularization parameter $\tau$ for the TSVD is
$$\hat{\tau} = \left( \frac{\beta_1 \Delta}{\beta_2\, p} \right)^{1/(p+1)}, \qquad (17)$$
where $\beta_1 = \beta_2 = 1$ (see [29, Section 6]).
However, $z$ and $\Delta$ are in general not known. Using $\|Ax - y^{\delta}\| \le \delta \|y\|$ and $\|y\| = \|Ax\| = \|ASz\| = \|A^{p+1} z\|$ we obtain $\|y\| \le \|A\|^{p+1} \|z\|$. Furthermore, setting $\delta \|y\| = \Delta \|z\|$ implies
$$\Delta \le \delta \|A\|^{p+1}. \qquad (18)$$
Hence, combining (17) and (18), we get
$$\hat{\tau} \le \left( \frac{\delta \|A\|^{p+1}}{p} \right)^{1/(p+1)} = \|A\| \left( \frac{\delta}{p} \right)^{1/(p+1)}. \qquad (19)$$
Applying these results to solving $R_n c^{(n)} = b^{(n)}$ via TSVD, as described in the previous section, we get
$$\hat{\tau} \le \|R_n\| \left( \frac{\delta}{p} \right)^{1/(p+1)} \le \|R\| \left( \frac{\delta}{p} \right)^{1/(p+1)} = B \left( \frac{\delta}{p} \right)^{1/(p+1)}, \qquad (20)$$
where $B$ is the upper frame bound. Fortunately, estimates for the upper frame bound are much easier to obtain than estimates for the lower frame bound.
Thus, using the standard setting $p = 1$ or $2$, a good choice for the regularization parameter $\tau$ is
$$\tau \in \left[ B \delta^{1/2},\ B (\delta/2)^{1/3} \right]. \qquad (21)$$
Extensive numerical simulations confirm this choice; see also Section 5.
For instance, for the reconstruction problem of Example 1 with noise-free data and machine precision $\Delta = \delta = 10^{-16}$, formula (21) implies $\tau \in [10^{-8}, 10^{-6}]$. This coincides very well with numerical experiments.
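As a quick numerical check (our own sketch; `B` and `delta` are stand-ins for the upper frame bound and the noise level), the two endpoints of (21) are trivial to evaluate:

```python
# Endpoints of the recommended range (21) for the truncation level tau,
# for p = 1 and p = 2; B is the upper frame bound, delta the noise level.
B, delta = 2.0, 1e-16                  # e.g. Example 1 with m = 2
tau_p1 = B * delta ** (1 / 2)          # p = 1: B * delta^{1/2} ~ 2e-08
tau_p2 = B * (delta / 2) ** (1 / 3)    # p = 2: B * (delta/2)^{1/3} ~ 7e-06
print(tau_p1, tau_p2)
```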
If the noise level $\delta$ is not known, it has to be estimated. This difficult problem will not be discussed here; the reader is referred to [29] for more details.
Although we have now arrived at an implementable algorithm for the nonuniform sampling problem, the disadvantages of the approach described in the previous section are obvious. In general, the matrix $R_n$ does not have any particular structure, so the computational cost of the singular value decomposition is $O(n^3)$, which is prohibitively large in many applications. It is definitely not a good approach to transform a well-posed infinite-dimensional problem into an ill-posed finite-dimensional problem for which a stable solution can only be computed by using "heavy regularization machinery".
The methods in [40-42,33,2] coincide with, or are essentially equivalent to, the truncated frame approach; therefore they suffer from the same instability problems and the same numerical inefficiency.



2.2. CG and regularization of the truncated frame method

As mentioned above, one way to stabilize the solution of $R_n c^{(n)} = b^{(n)}$ is a truncated singular value decomposition, where the truncation level serves as the regularization parameter. For large $n$ the cost of the singular value decomposition becomes prohibitive for practical purposes.
We propose the conjugate gradient (CG) method [18] to solve $R_n c^{(n)} = b^{(n)}$. It is in general much more efficient than a TSVD (or Tikhonov regularization, as suggested in [40]), and at the same time it can also be used as a regularization method.
The standard error analysis for CG cannot be used in our case, since the matrix is ill-conditioned. Rather, we have to resort to the error analysis developed in [28,22].
When solving a linear system $Ax = y$ by CG for noisy data $y^{\delta}$, the following happens. The iterates $x_k$ of CG may diverge for $k \to \infty$; however, the error propagation remains limited in the beginning of the iteration. The quality of the approximation therefore depends on how many iterative steps can be performed until the iterates begin to diverge. The idea is now to stop the iteration at about the point where divergence sets in. In other words, the iteration count is the regularization parameter.
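A minimal sketch of CG with early stopping (our own illustration; the discrepancy-principle stopping rule used here is a common simple choice and only a stand-in for the refined analysis of [28,22], not the paper's exact criterion):

```python
import numpy as np

def cg_discrepancy(A, b, delta, max_iter=200):
    """CG on the symmetric system Ax = b, stopped early by the
    discrepancy principle: iterate only while the residual exceeds
    the noise level delta * ||b||, so that the iteration count acts
    as the regularization parameter."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    tol = delta * np.linalg.norm(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:   # stop: fitting the data any
            break                      # further would amplify noise
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x
```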
