
to b, the corkscrew would move in the direction of c.
The vector product of a vector a with itself yields the zero vector since in that
case φ = 0:
a × a = 0. (1.10)
The vector product is not commutative, since the vector product of b and a yields
a vector that has the opposite direction of the vector product of a and b:
a × b = −b × a. (1.11)
The triple product of three vectors a, b and c is a scalar, defined by
a × b · c = ( a × b) · c. (1.12)
So, first the vector product of a and b is determined and subsequently the inner
product of the resulting vector with the third vector c is taken. If all three vectors
a, b and c are non-zero vectors while the triple product equals zero, then the
vector c lies in the plane spanned by the vectors a and b. This can be explained
by the fact that the vector product of a and b yields a vector perpendicular to the
plane spanned by a and b. Conversely, if the triple product is non-zero, then the
three vectors a, b and c do not lie in the same plane. In that case the
absolute value of the triple product of the vectors a, b and c equals the volume of
the parallelepiped spanned by a, b and c.
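The coplanarity and volume properties of the triple product are easy to check numerically. The sketch below uses plain Python tuples and the Cartesian component formulas; the helper names are illustrative, not from the text.

```python
# Triple product a x b . c via the Cartesian component formulas.
# All function names here are illustrative helpers, not from the text.

def cross(a, b):
    # Vector product in components (cf. Eq. (1.34)).
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def triple(a, b, c):
    # a x b . c = (a x b) . c, Eq. (1.12).
    return dot(cross(a, b), c)

a = (1.0, 0.0, 0.0)
b = (0.0, 1.0, 0.0)

# c lies in the plane spanned by a and b: the triple product vanishes.
print(triple(a, b, (2.0, -3.0, 0.0)))      # 0.0

# Edges of the unit cube: |triple product| equals the volume 1.
print(abs(triple(a, b, (0.0, 0.0, 1.0))))  # 1.0
```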
The dyadic or tensor product of two vectors a and b defines a linear transfor-
mation operator called a dyad ab. Application of the dyad ab to a vector p yields
a vector in the direction of a, where a is multiplied by the inner product of b
and p:

ab · p = a ( b · p) . (1.13)

So, application of a dyad to a vector transforms this vector into another vector.
This transformation is linear, as can be seen from

ab · ( αp + βq) = ab · αp + ab · βq = α ab · p + β ab · q. (1.14)

The transpose of a dyad, ( ab)T , is defined by

( ab)T · p = ba · p, (1.15)

or simply

( ab)T = ba. (1.16)

An operator A that transforms a vector a into another vector b according to

b = A · a, (1.17)

is called a second-order tensor A. This implies that the dyadic product of two
vectors is a second-order tensor.
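The action of a dyad is straightforward to emulate numerically. The following plain-Python sketch applies Eq. (1.13) and also checks the transpose rule of Eqs. (1.15)–(1.16); the helper names are mine, not the book's.

```python
# A dyad ab applied to p: ab . p = a (b . p), Eq. (1.13).
# Plain-Python sketch; the helper names are illustrative.

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def dyad_apply(a, b, p):
    # The result points in the direction of a, scaled by b . p.
    s = dot(b, p)
    return tuple(s*ai for ai in a)

a = (1.0, 2.0, 0.0)
b = (0.0, 1.0, 1.0)
p = (3.0, 4.0, 5.0)

print(dyad_apply(a, b, p))  # (9.0, 18.0, 0.0), since b . p = 9

# Transpose rule (1.15)/(1.16): (ab)^T . p = ba . p
print(dyad_apply(b, a, p))  # (0.0, 11.0, 11.0), since a . p = 11
```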
In the three-dimensional space a set of three vectors c1 , c2 and c3 is called a basis
if the triple product of the three vectors is non-zero, hence if all three vectors are
non-zero vectors and if they do not lie in the same plane:

c1 × c2 · c3 ≠ 0. (1.18)

The three vectors c1 , c2 and c3 , composing the basis, are called basis vectors.
If the basis vectors are mutually perpendicular vectors the basis is called an
orthogonal basis. If such basis vectors have unit length, then the basis is called
orthonormal. A Cartesian basis is an orthonormal, right-handed basis with
basis vectors independent of the location in the three-dimensional space. In the
following we will indicate the Cartesian basis vectors with ex , ey and ez .

1.4 Decomposition of a vector with respect to a basis

As stated above, a Cartesian vector basis is an orthonormal basis. Any vector can
be decomposed into the sum of, at most, three vectors parallel to the three basis
vectors ex , ey and ez :
a = ax ex + ay ey + az ez . (1.19)
The components ax , ay and az can be found by taking the inner product of the
vector a with respect to each of the basis vectors:
ax = a · ex
ay = a · ey (1.20)
az = a · ez ,
where use is made of the fact that the basis vectors have unit length and are
mutually orthogonal, for example:
a · ex = ax ex · ex + ay ey · ex + az ez · ex = ax . (1.21)
The components, say ax , ay and az , of a vector a with respect to the Cartesian
vector basis may be collected in a column, denoted by a∼ (the tilde marks the column representation):

     ⎡ ax ⎤
a∼ = ⎢ ay ⎥ .                                                  (1.22)
     ⎣ az ⎦
So, with respect to a Cartesian vector basis any vector a may be decomposed in
components that can be collected in a column:
a ←→ a∼ . (1.23)


This 'transformation' is only possible and meaningful if the vector basis with
which the components of the column a∼ are defined has been specified. The choice
of a different vector basis leads to a different column representation of the vector
a; this is illustrated in Fig. 1.4. The vector a has two different column representa-
tions, a∼ and a∼*, depending on which vector basis is used. If, in a two-dimensional
context, {ex , ey } is used as a vector basis, then

          ⎡ ax ⎤
a −→ a∼ = ⎣ ay ⎦ ,                                             (1.24)

while, if {ex*, ey*} is used as vector basis:

           ⎡ ax* ⎤
a −→ a∼* = ⎣ ay* ⎦ .                                           (1.25)
Vector calculus
6

Figure 1.4
Vector a with respect to vector basis {ex , ey } and {ex*, ey*}.


Consequently, with respect to a Cartesian vector basis, vector operations such as
multiplication, addition, inner product and dyadic product may be rewritten as
'column' (actually matrix) operations.
Multiplication of a vector a = ax ex + ay ey + az ez by a scalar α yields a new
vector, say b:

b = αa = α( ax ex + ay ey + az ez )
  = αax ex + αay ey + αaz ez . (1.26)

So

b = αa −→ b∼ = α a∼ . (1.27)


The sum of two vectors a and b leads to

c = a + b −→ c∼ = a∼ + b∼ . (1.28)


Using the fact that the Cartesian basis vectors have unit length and are mutually
orthogonal, the inner product of two vectors a and b yields a scalar c according to

c = a · b = ( ax ex + ay ey + az ez ) · ( bx ex + by ey + bz ez )
= ax bx + ay by + az bz . (1.29)

In column notation this result is obtained via

c = a∼T b∼ , (1.30)


where a∼T denotes the transpose of the column a∼, defined as


a∼T = [ ax  ay  az ] , (1.31)



such that:
                        ⎡ bx ⎤
a∼T b∼ = [ ax  ay  az ] ⎢ by ⎥ = ax bx + ay by + az bz .       (1.32)
                        ⎣ bz ⎦
7 1.4 Decomposition of a vector with respect to a basis


Using the properties of the basis vectors of the Cartesian vector basis:

ex × ex = 0
ex × ey = ez
ex × ez = −ey

ey × ex = −ez
ey × ey = 0 (1.33)
ey × ez = ex

ez × ex = ey
ez × ey = −ex
ez × ez = 0,

the vector product of a vector a and a vector b is directly computed by means of

a × b = ( ax ex + ay ey + az ez ) × ( bx ex + by ey + bz ez )
      = ( ay bz − az by ) ex + ( az bx − ax bz ) ey + ( ax by − ay bx ) ez .
(1.34)

If by definition c = a × b, then the associated column c∼ can be written as:

     ⎡ ay bz − az by ⎤
c∼ = ⎢ az bx − ax bz ⎥ .                                       (1.35)
     ⎣ ax by − ay bx ⎦
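The component formula for the vector product can be verified directly, together with the anticommutativity of Eq. (1.11) and the perpendicularity of the result. Again a plain-Python sketch with illustrative helper names.

```python
# Column of c = a x b from the components of a and b, Eqs. (1.34)/(1.35).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)
c = cross(a, b)
print(c)            # (-3.0, 6.0, -3.0)

# Anticommutativity, Eq. (1.11): b x a = -(a x b)
print(cross(b, a))  # (3.0, -6.0, 3.0)

# c is perpendicular to both a and b:
print(dot(c, a), dot(c, b))  # 0.0 0.0
```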

The dyadic product ab transforms another vector c into a vector d, according to
the definition

d = ab · c = A · c , (1.36)

with A the second-order tensor equal to the dyadic product ab. In column notation
this is equivalent to

d∼ = a∼ ( b∼T c∼ ) = ( a∼ b∼T ) c∼ ,                           (1.37)



with a∼ b∼T a (3 × 3) matrix given by
             ⎡ ax ⎤                  ⎡ ax bx  ax by  ax bz ⎤
A = a∼ b∼T = ⎢ ay ⎥ [ bx  by  bz ] = ⎢ ay bx  ay by  ay bz ⎥ ,  (1.38)
             ⎣ az ⎦                  ⎣ az bx  az by  az bz ⎦
or

d∼ = A c∼ . (1.39)



In this case A is called the matrix representation of the second-order tensor A, as
the comparison of Eqs. (1.36) and (1.39) reveals.
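The matrix representation of a dyad is simply the outer product of the two columns. The sketch below builds the matrix of Eq. (1.38) and checks that both routes of Eq. (1.37) give the same result; the helper names are illustrative.

```python
# Matrix of the dyad ab (Eq. (1.38)) and the check d = (a b^T) c = a (b^T c).

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def outer(a, b):
    # The (3 x 3) matrix a b^T with entries a_i b_j.
    return [[ai*bj for bj in b] for ai in a]

def matvec(A, c):
    return [dot(row, c) for row in A]

a = [1.0, 2.0, 3.0]
b = [0.0, 1.0, -1.0]
c = [2.0, 1.0, 2.0]

A = outer(a, b)
print(A)  # [[0.0, 1.0, -1.0], [0.0, 2.0, -2.0], [0.0, 3.0, -3.0]]

# Both routes of Eq. (1.37) agree:
d1 = matvec(A, c)                # (a b^T) c
d2 = [ai*dot(b, c) for ai in a]  # a (b^T c), with b . c = -1
print(d1)        # [-1.0, -2.0, -3.0]
print(d1 == d2)  # True
```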



Exercises

1.1  The basis {ex , ey , ez } has a right-handed orientation and is orthonormal.
     (a) Determine |ei | for i = x, y, z.
     (b) Determine ei · ej for i, j = x, y, z.
     (c) Determine ex · ey × ez .
     (d) Why is ex × ey = ez ?
1.2  Let {ex , ey , ez } be an orthonormal vector basis. The force vectors F x =
     3ex + 2ey + ez and F y = −4ex + ey + 4ez act on point P. Calculate a
     vector F z acting on P in such a way that the sum of all force vectors is the
     zero vector.
1.3  Let {ex , ey , ez } be a right-handed and orthonormal vector basis. The
     following vectors are given: a = 4ez , b = −3ey + 4ez and c = ex + 2 ez .
     (a) Write the vectors in column notation.
     (b) Determine a + b and 3( a + b + c).
