
Duality and Saddle Points

Please see the attached file for the fully formatted problem.

I am working on finding the minimum of a function J over the constraint set
C = {x ∈ R^N : g_i(x) ≤ 0 for all i ∈ {1, …, m}}.

Let L(x, μ) = J(x) + Σ_{i=1}^{m} μ_i g_i(x), with μ_i ≥ 0, be the Lagrangian of the problem.
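For concreteness, here is a small numerical check on a toy problem of my own (not from the attached file): minimise J(x) = x² subject to g(x) = 1 − x ≤ 0. The Lagrangian is L(x, μ) = x² + μ(1 − x), and (X, μ) = (1, 2) is a saddle point.

```python
# Toy problem (illustrative assumption, not from the attached file):
#   minimise J(x) = x^2  subject to  g(x) = 1 - x <= 0
# Lagrangian: L(x, mu) = x^2 + mu * (1 - x); saddle point at (X, mu) = (1, 2).

def L(x, mu):
    return x**2 + mu * (1 - x)

x_star, mu_star = 1.0, 2.0

# Saddle inequalities: L(X, mu') <= L(X, mu) <= L(x, mu)
# for every mu' >= 0 and every x.
mus = [0.0, 0.5, 1.0, 5.0, 10.0]
xs = [-2.0, 0.0, 0.5, 1.0, 1.5, 3.0]

assert all(L(x_star, mu) <= L(x_star, mu_star) for mu in mus)
assert all(L(x_star, mu_star) <= L(x, mu_star) for x in xs)

# The saddle value equals the constrained minimum J(X) = 1.
print(L(x_star, mu_star))  # 1.0
```

Note that L(X, μ') = 1 for every μ' ≥ 0 here because g(X) = 0, which is exactly the complementary slackness one expects at a saddle point.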

I am having trouble proving the following theorem:

If (X, μ), with μ ≥ 0, is a saddle point of L, then X ∈ C and X minimises J on C.
If J and the g_i are convex and differentiable (and, I assume, a constraint qualification such as Slater's condition holds), then for every solution X of the constrained problem there exists μ ≥ 0 such that (X, μ) is a saddle point of the Lagrangian L.
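To fix notation, the saddle-point condition I am using is the standard one, and the first part follows from it in a few lines (my own sketch, using μ' for a generic multiplier vector):

```latex
% Saddle-point condition for (X, \mu) with \mu \ge 0:
\forall \mu' \ge 0,\ \forall x \in \mathbb{R}^N:\quad
L(X, \mu') \;\le\; L(X, \mu) \;\le\; L(x, \mu).
%
% Sketch of part 1: the left inequality reads
%   \sum_{i=1}^{m} (\mu'_i - \mu_i)\, g_i(X) \le 0 \quad \forall \mu' \ge 0.
% Letting \mu'_i \to \infty forces g_i(X) \le 0 (so X \in C), and taking
% \mu' = 0 gives \sum_i \mu_i g_i(X) \ge 0; combined with \mu \ge 0 and
% g_i(X) \le 0 this yields \sum_i \mu_i g_i(X) = 0 (complementary slackness).
% Then for any x \in C:
%   J(X) = L(X, \mu) \le L(x, \mu) = J(x) + \sum_i \mu_i g_i(x) \le J(x),
% so X minimises J on C.
```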

I would be grateful for a full proof of both parts of this theorem, in particular of the converse (the second part).


Solution Summary

A theorem relating the Lagrangian of a constrained minimisation problem to its saddle points is proven. The solution is detailed and well presented.