
$$
\begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} < \begin{pmatrix} 0 \\ 0 \end{pmatrix}
$$

has no solution in $\mathbb{R}^2$. Thus neither CQ nor the linear independence of the vectors in (21) holds at $x_* = (1, 0)$.
The necessary condition with $\lambda_0 = 1$ would imply

$$
\nabla f(x_*) + (\lambda_1, \lambda_2)\begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix} = (0, 0)
$$

or

$$
(-1, 0) + (0, -\lambda_1 + \lambda_2) = (0, 0),
$$

which is impossible. On the other hand, if we took $\lambda_0 = 0$, then any $(\lambda_1, \lambda_2)$ with $\lambda_1 = \lambda_2 > 0$ would satisfy the conditions of Theorem 5.1.
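To spell out the last assertion (a short check using the two gradient rows in the display above): with $\lambda_0 = 0$ the multiplier condition of Theorem 5.1 reduces to

$$
\lambda_1(0, -1) + \lambda_2(0, 1) = (0, \lambda_2 - \lambda_1) = (0, 0), \qquad \lambda_1 \ge 0, \ \lambda_2 \ge 0,
$$

which holds precisely when $\lambda_1 = \lambda_2$; any common value $\lambda_1 = \lambda_2 > 0$ therefore gives a nonzero multiplier vector.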
 
Example 5.3. In this example we consider a problem with equality constraints only in which CQ does not hold at the minimum point. Let $X_0 = \mathbb{R}^2$, let $f(x) = 2x_1^2 - x_2^2$, and let $h(x) = x_1^2 x_2 - x_2^3$. The constraint set $X$ consists of the lines $x_2 = 0$ and $x_2 = \pm x_1$. The level curves of $f$ are the two branches of the hyperbolas $2x_1^2 - x_2^2 = c$ and the asymptotes of this family, the lines $x_2 = \pm\sqrt{2}\,x_1$. See Figure 4.3.

From the figure we see that the point $x_* = (0, 0)$ furnishes a minimum. Since $\nabla f(x) = (4x_1, -2x_2)$ and $\nabla h(x) = (2x_1 x_2,\ x_1^2 - 3x_2^2)$, we get that

$$
\nabla f(x_*) = \nabla f(0, 0) = (0, 0), \qquad \nabla h(x_*) = (0, 0).
$$

Since $\nabla h(x_*) = 0$, the constraint qualification CQ does not hold at $x_*$. Any $(\lambda_0, \lambda)$ of the form $(0, \lambda)$, $\lambda$ arbitrary, will serve as a Lagrange multiplier, as will any $(1, \lambda)$, $\lambda$ arbitrary.
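The assertion that $x_* = (0, 0)$ furnishes a minimum can also be checked without the figure: on the line $x_2 = 0$ we have $f(x_1, 0) = 2x_1^2 \ge 0$, while on the lines $x_2 = \pm x_1$ we have

$$
f(x_1, \pm x_1) = 2x_1^2 - x_1^2 = x_1^2 \ge 0,
$$

with equality in either case only at $x_1 = 0$. Hence $f(x) \ge 0 = f(x_*)$ for every $x$ in the constraint set $X$.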

Example 5.1 (Continued). In this example CQ holds at a point $x_0$ if $g(x_0) = 0$ and there exists a solution $z$ to the inequality

$$
\langle \nabla g(x_0), z \rangle = \langle (2k, -2x_{02}),\ (z_1, z_2) \rangle < 0.
$$

Clearly, $z = (-k, 0)$ is a solution. Thus CQ holds at all boundary points of the constraint set. This is consistent with our being able to rule out $\lambda_0 = 0$ at the outset in our first treatment of this example.
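Indeed, with $z = (-k, 0)$ the inner product above becomes

$$
\langle (2k, -2x_{02}),\ (-k, 0) \rangle = -2k^2,
$$

which is negative whenever $k \ne 0$, as the assertion requires.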




Figure 4.3.



Our next theorem is a sufficiency theorem which states that if $X_0$, $f$, and $g$ are convex and if the KKT necessary conditions hold, then $x_*$ is a solution.

T 5.3. L et f and g be as in the statement of problem PI and let X , f ,

and g be convex. L et x + RL and + RK be such that
*

(i) g(x ) - 0,
*
(ii) . 0,
160 OPTIMIZATION PROBLEMS


(iii) 1 , g(x )2 : 0, and
*
(iv)
f (x ) ; R
g(x ) : 0.
* *
T hen x is a solution of problem PI.
*
Remark 5.8. The sufficiency theorem can also be applied to problems with equality constraints $h(x) = 0$ provided the function $h$ is affine, that is, has the form $h(x) = Ax + b$. For then we can write the equality constraint $h(x) = 0$ as a pair of inequality constraints $h(x) \le 0$ and $-h(x) \le 0$, with both $h$ and $-h$ convex. The function $g$ in the theorem is to be interpreted as incorporating this pair of inequality constraints.
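To make the bookkeeping explicit (the multipliers $\mu^+$ and $\mu^-$ are introduced here only for illustration): writing the affine constraint $h(x) = Ax + b = 0$ as the pair $Ax + b \le 0$ and $-(Ax + b) \le 0$ with associated multipliers $\mu^+ \ge 0$ and $\mu^- \ge 0$, condition (iv) takes the form

$$
\nabla f(x_*) + \lambda^T \nabla g(x_*) + (\mu^+ - \mu^-)^T A = 0,
$$

so the equality constraint effectively receives a multiplier $\mu = \mu^+ - \mu^-$ that is not restricted in sign; compare Exercise 5.1.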

Remark 5.9. Under the assumptions of the theorem the feasible set $X$ is convex. To see this, let $X_i = \{x : x \in X_0,\ g_i(x) \le 0\}$. Since each $g_i$ is convex, each set $X_i$ is convex. Since $X = \bigcap_{i=1}^m X_i$, we get that $X$ is convex.
Proof. Condition (i) states that $x_* \in X$. Since $f$ is convex,

$$
f(x) - f(x_*) \ge \langle \nabla f(x_*),\ x - x_* \rangle, \qquad x \in X_0.
$$

Substituting (iv) into this inequality gives

$$
f(x) - f(x_*) \ge \langle -\lambda^T \nabla g(x_*),\ x - x_* \rangle = -\langle \lambda,\ \nabla g(x_*)(x - x_*) \rangle, \qquad x \in X_0. \tag{22}
$$

Since $g$ is convex,

$$
g(x) - g(x_*) \ge \nabla g(x_*)(x - x_*).
$$

Therefore, since $\lambda \ge 0$,

$$
\langle \lambda,\ g(x) - g(x_*) \rangle \ge \langle \lambda,\ \nabla g(x_*)(x - x_*) \rangle.
$$

Substituting this inequality into (22) and then using (iii) give

$$
f(x) - f(x_*) \ge \langle \lambda,\ g(x_*) - g(x) \rangle = -\langle \lambda, g(x) \rangle.
$$

Since $\lambda \ge 0$ and since for $x$ in $X$ we have $g(x) \le 0$, it follows that $-\langle \lambda, g(x) \rangle \ge 0$. Hence $f(x) \ge f(x_*)$ for all $x$ in $X$, and so $x_*$ is a solution to problem PI.
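As a simple illustration of how the theorem is used (the example here is chosen for illustration and is not one of the problems treated above): take $X_0 = \mathbb{R}^2$, $f(x) = x_1^2 + x_2^2$, and $g(x) = 2 - x_1 - x_2$, all of which are convex. With $x_* = (1, 1)$ and $\lambda = 2$ we have

$$
g(x_*) = 0 \le 0, \qquad \lambda \ge 0, \qquad \langle \lambda, g(x_*) \rangle = 0, \qquad \nabla f(x_*) + \lambda \nabla g(x_*) = (2, 2) + 2(-1, -1) = (0, 0),
$$

so conditions (i)-(iv) hold and Theorem 5.3 guarantees that $x_* = (1, 1)$ minimizes $f$ over $\{x : x_1 + x_2 \ge 2\}$.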

The function $g$ in Example 5.1 is not convex, as a calculation of the Hessian matrix of $g$ shows. Therefore Theorem 5.3 cannot be used to analyze Example 5.1.
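Concretely, the gradient recorded in the continuation of Example 5.1 above, $\nabla g(x) = (2k, -2x_2)$, has Hessian

$$
\nabla^2 g(x) = \begin{pmatrix} 0 & 0 \\ 0 & -2 \end{pmatrix},
$$

which is not positive semidefinite; by the second-order characterization of convex functions, $g$ is therefore not convex.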


Exercise 5.1. State and prove the form of Theorem 5.3 that results when affine equality constraints $h(x) = 0$ are also present.

Exercise 5.2. In each of the following problems:

(a) Sketch the feasible set.
(b) Use an appropriate theorem to show that a solution exists.
(c) Find all critical points and associated multipliers.
(d) Without using the second-order criteria of the next section determine, if
possible, whether a given critical point furnishes a solution.

(i) Minimize $f(x) = x_1^2 + (x_2 + 1)^2$ subject to

$$
x_1^2 + x_2^2 \le 4, \qquad x_2 \le x_1 - 1, \qquad -2x_1 \le x_2 - 1.
$$
(ii) Minimize $f(x) = x_1 + x_2$ subject to

$$
x_1^2 + x_2^2 \le 4, \qquad x_2 \ge x_1, \qquad -2x_1 \le x_2 - 1, \qquad -2x_2 \ge x_1 - 2.
$$
(iii) Minimize $(x_1 - 5)^2 + x_2^2$ subject to

$$
x_1^2 + x_2^2 \le 4, \qquad x_1^2 - 2x_1 + x_2^2 \le 0, \qquad x_1 + x_2 \le 2.
$$
(iv) Minimize $f(x) = \|x\|$, where $x \in \mathbb{R}^2$, subject to

$$
x_1 x_2 \ge 1, \qquad x_1 \ge 0, \qquad x_2 \ge 0.
$$
(v) Maximize $f(x) = x_1^2 + x_2^2 + a x_1 x_2$, $a > 0$, subject to $x_1^2 + x_2^2 = 1$. What can you say about minimizing $f(x)$ subject to $x_1^2 + x_2^2 = 1$? [In the minimization problem you need not answer questions (c) and (d).]

Exercise 5.3. Minimize $f(x) = (x_1 - 2)^2 + (x_2 - 1)^2$ subject to

$$
x_1^2 - x_2 \le 0, \qquad x_1 + x_2 - 4 \le 0, \qquad -x_1 \le 0, \qquad -x_2 \le 0.
$$
Give a full justification of your answer. You may use geometric considerations to assist you in finding critical points.

Exercise 5.4. Maximize $f(x) = x_1^2 + x_1 x_2 + x_2^2$ subject to

$$
-3x_1 - 2x_2 + 6 \le 0, \qquad -x_1 + x_2 - 3 \le 0, \qquad 4(x_1 - 2)^2 + 4(x_2 - 3)^2 \le 25.
$$

Hint: Compare with Exercise 3.2.


Exercise 5.5. Minimize $f(x) = e^{-(x_1 + x_2)}$ subject to $e^{x_1} + e^{x_2} \le 20$.

Exercise 5.6. Minimize $f(x) = \sum_{i=1}^{n} c_i / x_i$ subject to

$$
\sum_{i=1}^{n} a_i x_i = b, \qquad x_i > 0, \quad i = 1, \ldots, n,
$$

where $c_i$, $a_i$, $i = 1, \ldots, n$, and $b$ are positive constants.

Exercise 5.7. Let $A$ be a positive-definite $n \times n$ symmetric matrix and let $y$ be a fixed vector in $\mathbb{R}^n$. Show that the maximum value of $f(x) = \langle y, x \rangle$ subject to the constraint $\langle x, Ax \rangle \le 1$ is $\langle y, A^{-1}y \rangle^{1/2}$. Use this result to establish the generalization of the Cauchy–Schwarz inequality

$$
|\langle x, y \rangle|^2 \le \langle x, Ax \rangle \langle y, A^{-1}y \rangle.
$$

Exercise 5.8. Use the Lagrange multiplier rule to show that the distance $\delta(x_0)$ from the point $x_0$ to the hyperplane $H_a^{\alpha} = \{x : \langle a, x \rangle = \alpha\}$ is given by

$$
\delta(x_0) = \frac{|\langle a, x_0 \rangle - \alpha|}{\|a\|}.
$$

(Hint: In problems involving the minimization of a norm or of a distance it is often simpler to minimize the square of the norm or distance.)
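The reason the hint is legitimate (a one-line justification): since $t \mapsto t^2$ is strictly increasing on $[0, \infty)$, a point minimizes $\|x - x_0\|$ over the hyperplane if and only if it minimizes $\|x - x_0\|^2$ there, and the minimum values are related by

$$
\delta(x_0)^2 = \min_{\langle a, x \rangle = \alpha} \|x - x_0\|^2.
$$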

Exercise 5.9. In this problem we will use the Lagrange multiplier rule to deduce important properties of eigenvectors and eigenvalues of a real $n \times n$ symmetric matrix $A$.
(a) Show that the maximum of $f(x) = \langle x, Ax \rangle$ subject to $\|x\| = 1$ is attained at a unit eigenvector $x_1$ and that $f(x_1)$ is the largest eigenvalue of $A$.
(b) Suppose $m < n$ mutually orthogonal unit eigenvectors $x_1, \ldots, x_m$ of $A$ have been determined. Show that the maximum of $f(x) = \langle x, Ax \rangle$ subject to $\|x\| = 1$ and $\langle x, x_j \rangle = 0$, $j = 1, \ldots, m$, is attained at a point $x_{m+1}$ that is an eigenvector of $A$ and that $f(x_{m+1})$ is the corresponding eigenvalue.
(c) Use (a) and (b) to show that a real $n \times n$ symmetric matrix has a set of $n$ orthonormal eigenvectors $x_1, \ldots, x_n$ with corresponding eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$.

Exercise 5.10. In this exercise we will use Farkas's lemma and a constraint qualification introduced by Kuhn and Tucker [1951] that is different from CQ to establish the conclusion of Theorem 5.2 without using Theorem 5.1. The Kuhn–Tucker constraint qualification KTQ is said to hold at a point $x_0$ if there exists a vector $z$ such that

$$
\nabla g_E(x_0) z \le 0, \qquad \nabla h(x_0) z = 0
$$

and if for each such $z$ there exists a $C^1$ function $(\,\cdot\,)$ defined on an interval
