\[
\lambda_0(\varepsilon)\Bigl[\frac{\partial f}{\partial x_j}(x(\varepsilon)) + 2x_j(\varepsilon)\Bigr]
+ \sum_{i=1}^{m} \lambda_i(\varepsilon)\,\frac{\partial g_i}{\partial x_j}(x(\varepsilon))
+ \sum_{i=1}^{k} \mu_i(\varepsilon)\,\frac{\partial h_i}{\partial x_j}(x(\varepsilon)) = 0,
\qquad j = 1, \ldots, n. \tag{14}
\]
Now let $\varepsilon \to 0$ through a sequence of values $\varepsilon_k$. Then, since $\|x(\varepsilon)\| \to 0$, we have that
\[
x(\varepsilon_k) \to 0. \tag{15}
\]
Since the vectors $(\lambda_0(\varepsilon), \lambda(\varepsilon), \mu(\varepsilon))$ are all unit vectors [see (13)], there exists a subsequence, that we again denote as $\{\varepsilon_k\}$, and a unit vector $(\lambda_0, \lambda, \mu)$ such that
\[
(\lambda_0(\varepsilon_k), \lambda(\varepsilon_k), \mu(\varepsilon_k)) \to (\lambda_0, \lambda, \mu). \tag{16}
\]
Since $(\lambda_0, \lambda, \mu)$ is a unit vector, it is different from zero.



From (14), (15), (16), and the continuity of the partials of $f$, $g$, and $h$, we get

\[
\lambda_0 \frac{\partial f}{\partial x_j}(0)
+ \sum_{i=1}^{r} \lambda_i \frac{\partial g_i}{\partial x_j}(0)
+ \sum_{i=1}^{k} \mu_i \frac{\partial h_i}{\partial x_j}(0) = 0. \tag{17}
\]
From (13) and (16) we see that $\lambda_i \ge 0$ for $i = 0, 1, \ldots, r$ and $\lambda_i = 0$ for $i = r + 1, \ldots, m$. Thus, $\lambda_0 \ge 0$ and $\lambda \ge 0$. Since $g_i(0) = 0$ for $i = 1, \ldots, r$ and $\lambda_i = 0$ for $i = r + 1, \ldots, m$, we have that $\lambda_i g_i(0) = 0$ for $i = 1, \ldots, m$. Also, we can take the upper limit in the second term in (17) to be $m$ and write

\[
\lambda_0 \frac{\partial f}{\partial x_j}(0)
+ \sum_{i=1}^{m} \lambda_i \frac{\partial g_i}{\partial x_j}(0)
+ \sum_{i=1}^{k} \mu_i \frac{\partial h_i}{\partial x_j}(0) = 0.
\]
This completes the proof of the theorem.

We now present conditions that ensure that $\lambda_0 = 0$ cannot occur in Theorem 5.1. Such conditions are called constraint qualifications and are statements about the geometry of the feasible set in a neighborhood of $x_*$. There are many constraint qualifications in the literature. We have selected two that are relatively simple and easy to apply. The second is easier to apply, but the first, given in Definition 5.2 below, will capture more points at which $\lambda_0 = 0$ cannot occur. All of the constraint qualifications that are appropriate to problem PI have the defect that, in general, they require knowledge of the point $x_*$ rather than requiring knowledge of global properties of the functions involved. The reader who is interested in other constraint qualifications and the relationships among them is referred to Bazaraa, Sherali, and Shetty [1993] and Mangasarian [1969].

Definition 5.2. The functions g and h satisfy the constraint qualification CQ at a feasible point $x_*$ if

(i) the vectors $\nabla h_1(x_*), \ldots, \nabla h_k(x_*)$ are linearly independent and
(ii) the system
\[
\nabla g_E(x_*)z < 0, \qquad \nabla h(x_*)z = 0, \tag{18}
\]
has a solution $z$ in $\mathbb{R}^n$. Here, $E = \{i : g_i(x_*) = 0\}$.

If the equality constraints are absent, then the statements involving $\nabla h(x_*)$ are to be deleted.
Condition (ii) of the constraint qualification says that "to first order" we can move from $x_*$ into the feasible set by moving along the surface defined by the equality constraints $h(x) = 0$ and into the interior of the set defined by $g(x) \le 0$. It turns out that this implies that we can actually move from $x_*$ into the
feasible set in this fashion. We shall prove this in the corollary to Lemma 6.1
below.
Note that CQ does not impose any condition on the point $x_*$ other than that it be feasible. Also note that $z = 0$ cannot be a solution of (18).

The constraint qualification CQ of Definition 5.2 was introduced by Mangasarian and Fromovitz [1967].
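
To see how condition (ii) is checked in practice, here is a minimal worked illustration of our own (it is not one of the text's numbered examples): take $g(x) = (-x_1, -x_2)$ with no equality constraints and the feasible point $x_* = (0, 0)$, so that $E = \{1, 2\}$. Then
\[
\nabla g_E(x_*) = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},
\qquad
\nabla g_E(x_*)z = \begin{pmatrix} -z_1 \\ -z_2 \end{pmatrix},
\]
and $z = (1, 1)$ satisfies $\nabla g_E(x_*)z < 0$, so CQ holds at $x_*$.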

T 5.2. L et x be a solution of problem PI and let CQ hold at x . T hen
* *
9 0 and there exists a vector . 0 in RK and a vector in RI such that

(i) 1 , g(x )2 : 0 and
*
(ii)
f (x ) ; R
g(x ) ; R
h(x ) : 0.
* * *
We shall refer to the necessary conditions of Theorem 5.2 as the Karush-Kuhn-Tucker (KKT) conditions and to the multipliers of Theorem 5.2 as the KKT multipliers.
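
As a quick check of what the KKT conditions assert, consider a small illustrative problem of our own (not one of the text's examples): minimize $f(x) = x_1^2 + x_2^2$ subject to the single inequality constraint $g(x) = 1 - x_1 - x_2 \le 0$, so $m = 1$ and there are no equality constraints. At the point $x_* = (\tfrac{1}{2}, \tfrac{1}{2})$ we have $g(x_*) = 0$, $\nabla f(x_*) = (1, 1)$, and $\nabla g(x_*) = (-1, -1)$, so the multiplier $\lambda = 1 \ge 0$ satisfies
\[
\langle \lambda, g(x_*) \rangle = 0, \qquad
\nabla f(x_*) + \lambda^T \nabla g(x_*) = (1, 1) + (-1, -1) = 0,
\]
which are exactly conditions (i) and (ii) of Theorem 5.2.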
We now prove the theorem. First we will show that if CQ holds at $x_*$, then $\lambda_0$ in Theorem 5.1 cannot be zero. Hence if $(\lambda_0, \lambda, \mu)$ is as in Theorem 5.1, so is $(1, \lambda/\lambda_0, \mu/\lambda_0)$, and Theorem 5.2 follows. Note that once we have $\lambda_0 > 0$, then $(\lambda_0, \lambda, \mu) \ne (0, 0_m, 0_k)$.
We now show that $\lambda_0 = 0$ cannot occur. If $\lambda_0 = 0$ did occur, then by (iii) of Theorem 5.1
\[
\lambda^T \nabla g(x_*) + \mu^T \nabla h(x_*) = 0. \tag{19}
\]
If $\lambda = 0$, then $\mu \ne 0$, since $(\lambda_0, \lambda, \mu) = (0, 0, \mu) \ne 0$. But then $\mu^T \nabla h(x_*) = 0$, contradicting the linear independence of $\nabla h_1(x_*), \ldots, \nabla h_k(x_*)$. Thus, $\lambda \ne 0$. Since $\lambda = (\lambda_E, \lambda_{E'})$ and $\lambda_{E'} = 0$, we have that $\lambda_E \ne 0$. We may therefore rewrite (19) as
\[
\lambda_E^T \nabla g_E(x_*) + \mu^T \nabla h(x_*) = 0. \tag{20}
\]
Multiplying on the right by a vector $z$ that is a solution of (18) gives
\[
\lambda_E^T \nabla g_E(x_*)z = 0.
\]
Since $\lambda_E \ne 0$, $\lambda_E \ge 0$, and $\nabla g_E(x_*)z < 0$, we have a contradiction. Thus $\lambda_0 \ne 0$.
C¬¬ 1. L et x be a solution of problem PI such that the set E de¬ned in
*
(6) is +1, . . . , r, and such that the vectors


g (x ), . . . ,
g (x ),
h (x ), . . . ,
h (x ) (21)
* P* * I*
are linearly independent. T hen the conclusion of T heorem 5.2 holds.


Proof. If $\lambda_0 = 0$ in Theorem 5.1, then (20) holds. Since the vectors in (21) are linearly independent, this implies that $(\lambda_E, \mu) = 0$. Hence $(\lambda_0, \lambda, \mu) = 0$, which cannot occur, and so $\lambda_0 > 0$.

We leave to the reader the modifications of the arguments in the proofs of Theorem 5.2 and Corollary 1 if the equality constraints are absent.
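
For instance, with $g(x) = (-x_1, -x_2)$, no equality constraints, and $x_* = (0, 0)$ (a small illustration of our own), we have $E = \{1, 2\}$, and the vectors $\nabla g_1(x_*) = (-1, 0)$ and $\nabla g_2(x_*) = (0, -1)$ appearing in (21) are linearly independent, so Corollary 1 applies at that point as well.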

Remark 5.6. We noted in Remark 5.3 that the necessary condition (i) of Theorem 5.2 is equivalent to the $m$ equations
\[
\lambda_i g_i(x_*) = 0, \qquad i = 1, \ldots, m. \tag{i}
\]
Condition (ii) in component form gives the system of $n$ equations
\[
\frac{\partial f}{\partial x_j}(x_*)
+ \sum_{i=1}^{m} \lambda_i \frac{\partial g_i}{\partial x_j}(x_*)
+ \sum_{i=1}^{k} \mu_i \frac{\partial h_i}{\partial x_j}(x_*) = 0,
\qquad j = 1, \ldots, n. \tag{ii}
\]
Since $x_*$ is a solution, it is feasible, and so we obtain an additional $k$ equations
\[
h_i(x_*) = 0, \qquad i = 1, \ldots, k.
\]
Thus we have a system of $m + n + k$ equations in the $m + n + k$ unknowns
\[
\lambda = (\lambda_1, \ldots, \lambda_m), \qquad x_* = (x_{*1}, \ldots, x_{*n}), \qquad \mu = (\mu_1, \ldots, \mu_k).
\]
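
As a concrete and entirely illustrative instance of this square system, take the small problem of minimizing $f(x) = x_1^2 + x_2^2$ subject to the single inequality $g(x) = 1 - x_1 - x_2 \le 0$, so that $m = 1$, $n = 2$, $k = 0$. The three equations in the three unknowns $(\lambda_1, x_1, x_2)$ are $\lambda_1(1 - x_1 - x_2) = 0$, $2x_1 - \lambda_1 = 0$, and $2x_2 - \lambda_1 = 0$. A minimal numerical sketch follows; the problem, the variable names, and the use of a general-purpose root finder are our choices, not the text's.

    import numpy as np
    from scipy.optimize import fsolve

    # Unknowns packed as u = (lam1, x1, x2); m + n + k = 1 + 2 + 0 = 3 equations.
    def kkt_system(u):
        lam1, x1, x2 = u
        return [
            lam1 * (1.0 - x1 - x2),   # (i)  complementary slackness
            2.0 * x1 - lam1,          # (ii) stationarity, component j = 1
            2.0 * x2 - lam1,          # (ii) stationarity, component j = 2
        ]

    # fsolve is a local method, so the starting guess matters; we start near
    # the known solution lam1 = 1, x = (1/2, 1/2).
    u0 = np.array([1.0, 0.6, 0.6])
    lam1, x1, x2 = fsolve(kkt_system, u0)
    print(lam1, x1, x2)   # approximately 1.0, 0.5, 0.5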
Remark 5.7. If the inequality constraints are absent and we only have the equality constraints $h(x) = 0$, then CQ becomes "$\nabla h_1(x_*), \ldots, \nabla h_k(x_*)$ are linearly independent." The constraint in Corollary 1 reduces to the same statement. Under these circumstances, the multiplier $(1, \mu)$ is unique. This follows from the observation that if $(1, \mu')$ with $\mu' \ne \mu$ is another multiplier, then $(0, \mu - \mu')$ will be a multiplier. This, however, is impossible since $\lambda_0 = 0$ cannot occur. A similar argument shows that if all the inequality constraints $g_i(x) \le 0$ are inactive at $x_*$, then the multiplier $(1, 0, \mu)$ is unique.
In Remark 5.7 we have established the form of the Lagrange multiplier rule
that is usually given in multivariate calculus courses. We state it as a corollary
to Theorems 5.1 and 5.2.

C¬¬ 2. L et x be a solution to the problem of minimizing f (x) subject to
*
the equality constraints h(x) : 0 and such that the vectors
h (x ), . . . ,
h (x )
* I*
are linearly independent. T hen there exists a unique vector in RI such that


f (x ) ; R
h(x ) : 0
* *
or, in component form,

\[
\frac{\partial f}{\partial x_j}(x_*)
+ \sum_{i=1}^{k} \mu_i \frac{\partial h_i}{\partial x_j}(x_*) = 0,
\qquad j = 1, \ldots, n.
\]
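
As a quick worked instance of this rule (our own illustration, not one of the text's numbered examples), minimize $f(x) = x_1^2 + x_2^2$ subject to the single equality constraint $h(x) = x_1 + x_2 - 1 = 0$. Here $\nabla h(x) = (1, 1)$ is nonzero, so the linear independence hypothesis holds, and the system of the corollary is
\[
2x_1 + \mu = 0, \qquad 2x_2 + \mu = 0, \qquad x_1 + x_2 - 1 = 0,
\]
whose unique solution is $x_* = (\tfrac{1}{2}, \tfrac{1}{2})$ with the unique multiplier $\mu = -1$; indeed $\nabla f(x_*) + \mu \nabla h(x_*) = (1, 1) + (-1)(1, 1) = 0$.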
The following example illustrates CQ by showing what happens when it is not fulfilled. Also, the vectors in (21) are linearly dependent in this example.

Example 5.2. Let $X = \mathbb{R}^2$, let $f(x) = -x_1$, and let
\[
g(x) = \begin{pmatrix} -x_1 \\ -x_2 \\ x_2 + (x_1 - 1)^3 \end{pmatrix} \le 0.
\]
The feasible set X is sketched in Figure 4.2.

[Figure 4.2]


The minimum is attained at $x_* = (1, 0)$, and so
\[
g_E(x) = \begin{pmatrix} -x_2 \\ x_2 + (x_1 - 1)^3 \end{pmatrix}.
\]
Hence
\[
\nabla g_E(x_*)z = \begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} z_1 \\ z_2 \end{pmatrix}
= \begin{pmatrix} -z_2 \\ z_2 \end{pmatrix}.
\]