exactly as a polynomial. The residual function R(x; a_0, ..., a_N) is a polynomial of degree
N, too, but the initial condition provides one constraint, leaving only N free coefficients
in u_N(x) to force the (N + 1) coefficients of R(x) to vanish. Lanczos observed that the
modified problem

v_x + v = τ T_N(x)  &  v(-1) = 1        (21.14)

does have a unique solution. As before, if we take the inner product of (21.14) with each
of the first N Chebyshev polynomials and reserve the first row to impose the boundary
condition, we obtain an (N + 1) × (N + 1) system to determine the coefficients of the
Chebyshev series for v(x), independent of τ. We shall not bother to write down this
matrix equation because it is identical with “Galerkin's method with boundary bordering”
as defined in Chapter 3.
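That coefficient-space system is easy to assemble directly. The sketch below is our own minimal illustration (the degree N and the NumPy calls are our choices, not the text's): row 0 enforces v(-1) = 1 via T_k(-1) = (-1)^k, rows 1..N equate the coefficients of T_0, ..., T_{N-1} of v_x + v to zero, and the τ T_N forcing never appears because it only perturbs the discarded T_N row.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 12  # degree of the trial polynomial (an arbitrary illustrative choice)
A = np.zeros((N + 1, N + 1))
A[0, :] = [(-1.0) ** k for k in range(N + 1)]   # row 0: v(-1) = sum (-1)^k a_k = 1
for k in range(N + 1):
    e = np.zeros(N + 1); e[k] = 1.0
    col = C.chebder(e)                          # Chebyshev coefficients of T_k'
    col = np.concatenate([col, np.zeros(N + 1 - len(col))])
    col[k] += 1.0                               # add T_k from the undifferentiated term
    A[1:, k] = col[:N]                          # rows 1..N: coefficients of T_0..T_{N-1}
rhs = np.zeros(N + 1); rhs[0] = 1.0             # tau*T_N only perturbs the dropped T_N row
a = np.linalg.solve(A, rhs)

x = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(C.chebval(x, a) - np.exp(-(x + 1.0))))
print(f"N = {N}, max error = {err:.2e}")
```

Even at modest N the error against the exact solution exp(-x-1) is near machine precision, which is the spectral accuracy the text promises.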
This example is solved in Fox & Parker by expanding T_N(x) as an ordinary polynomial
and matching powers of x. This is more efficient for hand calculation even though it
ignores the orthogonality of the Chebyshev polynomials and makes it necessary to determine
τ simultaneously with the coefficients of the power series representation of v(x). We
obtain the same answer either way; the important point is that the perturbation on the R.
H. S. in (21.14) is a Chebyshev polynomial, which guarantees spectral accuracy. In contrast,
perturbing (21.13) by a forcing proportional to x^N would give a solution which would be
the first (N + 1) terms of the power series of exp(-x - 1), the exact solution.
Just as for the rational function, the error can be bounded by a function proportional to
τ. Just as for the rational function, this error analysis is usually not worth the bother.
For more complicated differential equations, the same principle applies except that it
may be necessary to use many τ terms or even an infinite series.

21.4 Canonical Polynomials
Lanczos observed that one strategy for bypassing the cost of inverting an (N + 1) by (N +
1) matrix is to use what he dubbed the “canonical” polynomials. Note that for (21.13),

Table 21.1: A Selected Bibliography of Tau Methods

References Comments
Lanczos(1938) Invention of the tau method
Wragg(1966) Lanczos-tau method for Stefan problem
Namasivayam&Ortiz(1981) Best approximation and PDEs via tau method
Ortiz&Samara(1981,1983) Operational approach: ODEs, eigenvalue eqs.
Liu&Ortiz&Pun(1984) Steklov's PDE eigenvalue problem
Ortiz&Pun(1985) Nonlinear PDEs
Liu&Ortiz(1986) PDEs through the tau-lines method
Ito&Teglas(1986,1987) Legendre-tau for functional differential equations
and linear quadratic optimal control
Ortiz(1987) Review: singular PDEs and tau method
da Silva(1987) Review
Ortiz&Pham Ngoc Dinh(1987) Recursions for nonlinear PDEs
Liu&Ortiz(1987a) High order ODE eigenvalue problems; Orr-Sommerfeld Eq.
Liu&Ortiz(1987b) Nonlinear dependence of linear ODE on eigenparameter
Liu&Ortiz(1989) Functional-differential eigenvalue problems
Hosseini Ali Abadi Non-uniform space-time elements for tau
& Ortiz(1991b) simulation of solitons
Khajah&Ortiz(1992) Functional equations
Namasivayam&Ortiz(1993) Error analysis for tau method
Khajah&Ortiz(1993) Rational approximations via tau method
El-Daou & Ortiz Unified approach to tau-method and
& Samara(1993) Chebyshev expansions
El-Daou & Ortiz(1993) Error analysis
El-Daou & Ortiz(1994a) Recursive collocation with canonical polynomials
El-Daou & Ortiz(1994b) Collocation/tau; weighting subspaces
Ortiz(1994) Review
Dongarra&Straughan Eigenvalue calculations
Straughan&Walker(1996) Compound matrix and tau for eigenproblems
in porous convection
Siyyam&Syam(1997) Poisson equation
Froes Bunchaft(1997) Extensions to theory of canonical polynomials
El-Daou&Ortiz(1997) Existence & stability proofs for singular perturbation
problems, independent of perturbation parameter

it is trivial to obtain the solution when the R. H. S. is a power of x. These “canonical
polynomials” p_j(x) are defined as the solutions of

p_{j,x} + p_j = x^j ;  p_j(-1) = 1        (21.15)

Since each is the exact, power series solution to a simple problem, they can be computed
cheaply by recursion.
Now T_N(x) is a sum of powers of x, so it follows that the τ-solution of (21.14) is a sum
of the canonical polynomials. If we write

T_N(x) = Σ_j t_{N,j} x^j        (21.16)

then

v(x) = τ Σ_j t_{N,j} p_j(x)

We then choose τ so that the boundary condition is satisfied (in this case, this requires
τ = 1 / Σ_j t_{N,j}) and v(x) is completely determined. Since each p_j(x) may be computed in
O(N) operations via recursion and since there are N polynomials, the total cost is O(N^2),
a huge savings over the O(N^3) cost of matrix inversion.
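The text does not write the recursion out, but one candidate (a sketch under our own assumptions) follows from differentiating x^j: if p_{j-1} satisfies p' + p = x^{j-1}, then p_j = x^j - j p_{j-1} satisfies p' + p = x^j, so each new polynomial costs O(N) coefficient operations. The snippet below generates the particular polynomial solutions this way and verifies the differential equation; how the boundary condition at x = -1 is absorbed into the choice of τ is glossed over here.

```python
import numpy as np

def canonical_polys(N):
    """Particular polynomial solutions of p_j' + p_j = x**j, built by the
    O(N^2) recursion p_j = x**j - j*p_{j-1}, starting from p_0 = 1.
    Coefficients are stored in ascending powers of x."""
    polys = [np.array([1.0])]                      # p_0 = 1
    for j in range(1, N + 1):
        xj = np.zeros(j + 1); xj[j] = 1.0          # the monomial x**j
        prev = np.concatenate([polys[-1], np.zeros(j + 1 - len(polys[-1]))])
        polys.append(xj - j * prev)
    return polys

ps = canonical_polys(8)
for j, p in enumerate(ps):                         # check p_j' + p_j == x**j
    dp = np.polyder(p[::-1])[::-1]                 # derivative, back to ascending
    dp = np.concatenate([dp, np.zeros(len(p) - len(dp))])
    target = np.zeros(len(p)); target[j] = 1.0
    assert np.allclose(dp + p, target)
print("recursion verified for j = 0..8")
```

For instance p_1 = x - 1 and p_2 = x^2 - 2x + 2, each of which indeed maps to the corresponding monomial under d/dx + 1.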
The method of canonical polynomials is less efficient when one needs several τ terms
instead of a single Chebyshev polynomial as sufficed for (21.13). Furthermore, the
canonical polynomials are problem-dependent, and thus must be recomputed from scratch
for each differential equation.
Nevertheless, E. L. Ortiz and his collaborators have solved many problems including
nonlinear partial differential equations by combining the τ-method with the technique of
“canonical polynomials”. He and his group have made many refinements to extend the
range of Lanczos' ideas. The relevant articles are described in the bibliography. However,
the canonical polynomials method has never been popular outside of his group.

21.5 Nomenclature
We have not used this nomenclature of “tau-method” earlier in the book because from a
programmer's viewpoint, the distinction is between “basis recombination” (= Galerkin's)
and “boundary bordering” (= tau-method). The accuracy difference between (21.1) and
(21.3) is usually negligible. Furthermore, these same options of “basis recombination” and
“boundary bordering” also exist for collocation methods (although in this case, the
accuracy difference is not merely small but zero). Lastly, when the test functions are
different from the basis functions, the label preferred by the finite element community is
“Petrov-Galerkin” rather than “tau-method”.
Chapter 22

Domain Decomposition Methods

“We stress that spectral domain decomposition methods are a recent and rapidly evolving
subject. The interested reader is advised to keep abreast of the literature.”
” Canuto, Hussaini, Quarteroni, and Zang (1988)

22.1 Introduction
The historical trend in both finite difference and finite element methods has been the
replacement of second order schemes by higher order formulas. Indeed, finite elements have
fissioned into “h-type” and “p-type” strategies. The former improve accuracy by reducing
the grid spacing while p, the order of the method, is kept fixed. In contrast, “p-type” codes
partition the domain into a few large pieces (fixed h) and refine the solution by increasing
the degree p of the polynomial within each element.
In the last few years, the trend in spectral methods has been in the opposite direction:
to replace a global approximation by local polynomials defined only in part of the domain.
Such piecewise spectral methods are almost indistinguishable from p-type finite elements.
These techniques are variously called “global elements”, “spectral elements”, “spectral
substructuring”, and a variety of other names. Since “domain decomposition pseudospectral
and Chebyshev-Galerkin methods” is rather a jawbreaker, we shall use “spectral elements”
as a shorthand for all the various algorithms in this family.
There are several motives for “domain decomposition” or “substructuring”. One is
that spectral elements convert differential equations into sparse rather than dense matrices
which are cheaper to invert. A second is that in complicated geometry, it may be difficult
or impossible to map the domain into a rectangle or a disk without introducing artificial
singularities or boundary layers into the transformed solution. Mapping the domain into
multiple rectangles and/or disks is much easier. A third reason is that mapping into
sectorial elements can eliminate “corner singularities”.
It seems improbable that spectral elements will ever completely chase global expan-
sions from the ¬eld, any more than higher order methods have eliminated second order
calculations. Nevertheless, they have become a major part of spectral algorithms and, as
noted in the quote above, an important research frontier.


22.2 Notation
Ω will denote the total domain, ∂Ω its boundary, and Ω_j, j = 1, ..., M, will denote its M
subdomains. N_j will denote the degree of the polynomial used to approximate u on the
j-th subdomain, and N ≡ N_1 + N_2 + · · · + N_M is the total number of degrees of freedom on
the global domain. The numerical solution will be denoted by u_N while u_j will indicate
the restriction of the numerical solution to the j-th subdomain. In spectral elements, we
approximate u(x) by a collection of separate approximations which are each valid only
on a particular subdomain and are undefined elsewhere. Thus, u_N(x) is the collection of
polynomials while u_j(x) is a single polynomial defined only on Ω_j.
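In code, this notation maps naturally onto a container of per-element polynomials. The sketch below is one possible representation (the class name and layout are our own invention, not the book's): each element stores its subdomain and Chebyshev coefficients on the standard interval, and evaluating u_N at a point x means locating the Ω_j that contains x.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

class PiecewiseSpectral:
    """u_N as a collection of per-element polynomials u_j, each defined
    only on its own subdomain Omega_j = [a_j, b_j]."""
    def __init__(self, elements):
        # elements: list of ((a_j, b_j), Chebyshev coefficients on [-1, 1])
        self.elements = elements

    def __call__(self, x):
        for (a, b), coef in self.elements:
            if a <= x <= b:
                s = 2.0 * (x - a) / (b - a) - 1.0   # map [a_j, b_j] -> [-1, 1]
                return C.chebval(s, coef)
        raise ValueError("x lies outside the global domain")

# two elements chosen so the pieces agree at the interface x = 0 (both give 1)
uN = PiecewiseSpectral([((-1.0, 0.0), [0.0, 1.0]),    # u_1: T_1(s) = s
                        (( 0.0, 1.0), [2.0, 1.0])])   # u_2: 2 + s
print(uN(-0.5), uN(0.5))
```

Evaluation dispatches to exactly one u_j, mirroring the statement that each piece is undefined outside its own Ω_j.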

22.3 Connecting the Subdomains: Patching
Definition 44 (PATCHING) Subdomains are connected by “PATCHING” if the solutions in
different elements are matched along their common boundary by requiring that u(x) and a finite
number of its derivatives are equal along the interface.
The number of matching conditions is the number that is necessary to give a unique solution in
each subdomain. As a rule-of-thumb, two matching conditions are equivalent to a single numerical
boundary condition. Thus, one must match both u(x) and du/dx at an interface for a second order
differential equation.

For ordinary differential equations and for elliptic partial differential equations,
patching is usually straightforward. For example, suppose the goal is to solve a second order
ordinary differential equation on the interval x ∈ [-1, 1] by splitting the segment into two
subintervals: [-1, d] & [d, 1]. If one knew u(d) [in addition to the usual Dirichlet boundary
conditions, u(-1) = α and u(1) = β], then the original problem would be equivalent to
two completely separate and distinct boundary value problems, one on each subinterval,
which could be solved independently of one another.
Unfortunately, we are rarely given three boundary conditions for a second order problem!
However, by demanding that both u and du/dx are continuous at x = d, we obtain
two interface conditions which, together with the Dirichlet conditions at x = ±1, give a
total of four constraints. This is precisely what is needed to uniquely determine the solution
of two otherwise independent boundary value problems of second order.
To be specific, let Ω denote the interval x ∈ [a, b] and

L u ≡ q_2(x) u_xx + q_1(x) u_x + q_0(x) u = f(x)   in Ω        (22.1)

u(a) = α  &  u(b) = β        (22.2)

The two-element subdomain solution may be mathematically expressed as

L u_1 = f   in Ω_1  &  u_1(a) = α        (22.3)

L u_2 = f   in Ω_2  &  u_2(b) = β        (22.4)

with the interface conditions

u_1(d) = u_2(d)
u_{1,x}(d) = u_{2,x}(d)

where the subscript “x” denotes differentiation with respect to x.
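Assembling (22.3)-(22.4) with the two interface rows gives one coupled linear system. The sketch below illustrates this for the special case L u = u_xx (an assumption we make for brevity), with second-order finite differences standing in for the spectral discretization; the last two rows are exactly the continuity conditions on u and u_x at x = d.

```python
import numpy as np

A_end, B_end, d = 0.0, 1.0, 0.4        # interval [A, B] split at x = d
alpha, beta = 0.0, 0.0
f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)    # so exact u = sin(pi x)
m = 40                                           # grid intervals per element
x1 = np.linspace(A_end, d, m + 1); h1 = x1[1] - x1[0]
x2 = np.linspace(d, B_end, m + 1); h2 = x2[1] - x2[0]
n = 2 * (m + 1)
M = np.zeros((n, n)); rhs = np.zeros(n)
i1 = lambda i: i                       # unknowns for u_1
i2 = lambda i: m + 1 + i               # unknowns for u_2

M[0, i1(0)] = 1.0; rhs[0] = alpha      # u_1(A) = alpha
M[1, i2(m)] = 1.0; rhs[1] = beta       # u_2(B) = beta
row = 2
for idx, x, h in [(i1, x1, h1), (i2, x2, h2)]:
    for i in range(1, m):              # u_xx = f at interior nodes
        M[row, idx(i - 1)] = M[row, idx(i + 1)] = 1.0 / h ** 2
        M[row, idx(i)] = -2.0 / h ** 2
        rhs[row] = f(x[i]); row += 1
M[row, i1(m)] = 1.0; M[row, i2(0)] = -1.0; row += 1      # u_1(d) = u_2(d)
# u_{1,x}(d) = u_{2,x}(d), via second-order one-sided differences
M[row, i1(m)] = 3 / (2 * h1); M[row, i1(m - 1)] = -4 / (2 * h1); M[row, i1(m - 2)] = 1 / (2 * h1)
M[row, i2(0)] = 3 / (2 * h2); M[row, i2(1)] = -4 / (2 * h2); M[row, i2(2)] = 1 / (2 * h2)

u = np.linalg.solve(M, rhs)
err = max(np.max(np.abs(u[: m + 1] - np.sin(np.pi * x1))),
          np.max(np.abs(u[m + 1 :] - np.sin(np.pi * x2))))
print(f"max error = {err:.2e}")
```

The constraint count works out exactly as the text says: 2(m-1) interior equations, two Dirichlet rows, and two interface rows for 2(m+1) unknowns.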
Patching for hyperbolic equations is more complex. The simplest such equation is

u_t + c u_x = 0        [“One-Dimensional Advection Eq.”]        (22.6)

on x ∈ [a, b] where (c > 0)

u(a, t) = u_L(t)
u(x, 0) = u_0(x)

which has the exact solution

u(x, t) =  u_0(x - c t)           if  x > (a + c t)
           u_L(t - (x - a)/c)     if  x ≤ (a + c t)
Thus, u(x, t) at a given point x_0 is influenced only by events “upstream”, that is, by the
forcing and initial condition for x < x_0. The solution at x = x_0 is unaffected by downstream
values of the initial condition.
The numerical strategy which conforms to the mathematics of the exact solution is to
solve the wave equation independently on each element, one at a time, beginning at the left
boundary. The computed solution at the right of the j-th element provides the left-hand
boundary forcing for the (j+1)-st element.
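The sweep can be sketched in a few lines. First-order upwind differences replace the spectral discretization here purely to keep the example short (an assumption of ours); the essential point is the patching step, in which element 2's left boundary value is copied from element 1's right edge at every time level.

```python
import numpy as np

c, a, d, b = 1.0, 0.0, 0.5, 1.0           # speed and the two elements [a,d], [d,b]
n = 50                                     # points per element (arbitrary)
x1 = np.linspace(a, d, n); x2 = np.linspace(d, b, n)
h = x1[1] - x1[0]
dt = 0.4 * h / c                           # CFL-stable time step
uL = lambda t: np.sin(2 * np.pi * t)       # left-boundary forcing u(a, t)
u1 = np.zeros(n); u2 = np.zeros(n)         # initial condition u(x, 0) = 0
t = 0.0
for _ in range(int(1.0 / dt)):
    u1[1:] -= dt * c * (u1[1:] - u1[:-1]) / h   # advance element 1 first...
    t += dt
    u1[0] = uL(t)                               # ...with the physical BC,
    u2[1:] -= dt * c * (u2[1:] - u2[:-1]) / h   # then element 2,
    u2[0] = u1[-1]                              # forced by element 1's right edge
print(f"interface value u(d) ~= {u2[0]:.3f} at t ~= {t:.2f}")
```

Because information only flows rightward for c > 0, the sweep never needs data from element 2 to update element 1, which is exactly the upstream structure of the exact solution above.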
If instead one uses an averaged interface condition such as

∂u_N/∂t + (c/2) ∂u_j/∂x + (c/2) ∂u_{j+1}/∂x = 0   at x = d_j        (22.10)
the calculation may be unstable, at least if the number of grid points in element j is greater
than that in (j+1). Eq. (22.10) would seem to be stupid almost beyond belief since it both
increases the computational work and misrepresents the mathematics of the original
problem. One would never want to apply (22.10) to a single scalar equation like (22.6).
However, when we solve a system of equations with waves travelling both to the left and
the right, it may be difficult to apply anything better than (22.10).
The best approach, as reviewed by Canuto et al. (1988) and Kopriva (1986, 1987, 1989a,
1998, 1999), is to diagonalize the operator of the system of differential equations, either
exactly or approximately, so as to separate right-running and left-running waves. One
may then separately apply “upwind” conditions on each wave component. This is still
very much a research frontier.
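The diagonalization step itself is elementary linear algebra. As a hedged illustration (the matrix is an assumed example, the one-dimensional wave equation written as a first-order system, not a case from the references), the sketch below splits u_t + A u_x = 0 into characteristic variables and reads off which side of an interface each one should be upwinded from.

```python
import numpy as np

c = 2.0
A = np.array([[0.0, c],
              [c, 0.0]])                 # wave equation as a 2x2 system
lam, V = np.linalg.eig(A)                # characteristic speeds +-c
W = np.linalg.inv(V)                     # rows of W map u to characteristic variables w = W u
for k in range(2):
    side = "left" if lam[k] > 0 else "right"
    print(f"characteristic {k}: speed {lam[k]:+.1f}, upwind from the {side} element")
```

Each scalar characteristic equation w_t + λ w_x = 0 then behaves exactly like (22.6), so the single-wave patching strategy above can be applied to each component separately.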

22.4 The Weak Coupling of Elemental Solutions: the Key to

