Linear Programming Lecture Notes - Penn State Personal Web Server
We have formulated the general maximization problem in Problem 1.5. Suppose that we are interested in finding a value that minimizes an objective function z(x1, . . . , xn) subject to certain constraints. Then we can write Problem 1.5 replacing max with min.
Exercise 3. Write the problem from Exercise 1 as a general minimization problem. Add any appropriate non-negativity constraints. [Hint: You must change max to min.]
An alternative way of dealing with minimization is to transform a minimization problem into a maximization problem. If we want to minimize z(x1, . . . , xn), we can maximize −z(x1, . . . , xn). In maximizing the negation of the objective function, we are actually finding a value that minimizes z(x1, . . . , xn).
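This equivalence is easy to check numerically. The sketch below (an illustration not taken from the notes; the function z(x) = (x − 2)² and the grid are chosen purely as an example) minimizes z directly over a grid and then maximizes −z over the same grid; both searches land on the same point.

```python
# Hypothetical example: z(x) = (x - 2)^2 has its minimum at x = 2.
def z(x):
    return (x - 2) ** 2

# A simple grid over [-5, 5] stands in for the feasible region.
grid = [i / 100 for i in range(-500, 501)]

x_min = min(grid, key=z)                # minimize z directly
x_max = max(grid, key=lambda x: -z(x))  # maximize the negation -z

print(x_min, x_max)  # both give 2.0
```

The maximizer of −z is exactly the minimizer of z, which is the content of Exercise 4.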
Exercise 4. Prove the following statement: Consider Problem 1.5 with the objective function z(x1, . . . , xn) replaced by −z(x1, . . . , xn). Then the solution to this new problem minimizes z(x1, . . . , xn) subject to the constraints of Problem 1.5. [Hint: Use the definition of global maximum and a multiplication by −1. Be careful with the direction of the inequality when you multiply by −1.]
2. Some Geometry for Optimization
A critical part of optimization theory is understanding the geometry of Euclidean space. To that end, we’re going to review some critical concepts from Vector Calculus (Math 230/231). I’ll assume that you remember some basic definitions like partial derivative and Euclidean space. If you need a refresher, you might want to consult [MT03, Ste07].
We’ll denote vectors in R^n in boldface. So x ∈ R^n is an n-dimensional vector and we have x = (x1, . . . , xn).
Definition 1.5 (Dot Product). Recall that if x, y ∈ R^n are two n-dimensional vectors, then the dot product (scalar product) is:

(1.7) x · y = ∑_{i=1}^{n} xiyi

where xi is the i-th component of the vector x.
An alternative and useful definition for the dot product is given by the following formula. Let θ be the angle between the vectors x and y. Then the dot product of x and y may be alternatively written as:

(1.8) x · y = ||x|| ||y|| cos θ
This fact can be proved using the law of cosines from trigonometry. As a result, we have the following small lemma (which is proved as Theorem 1 of [MT03]):
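Before stating the lemma, it may help to see that the two definitions (1.7) and (1.8) agree on a concrete pair of vectors. The sketch below (the vectors are an arbitrary example, not from the notes) computes the dot product componentwise, recovers θ from it, and confirms that ||x|| ||y|| cos θ gives the same number back.

```python
import math

# Example vectors (chosen so the norms are nice: ||x|| = 3, ||y|| = 5).
x = (1.0, 2.0, 2.0)
y = (3.0, 0.0, 4.0)

# Componentwise definition (1.7).
dot = sum(xi * yi for xi, yi in zip(x, y))

# Angle-based definition (1.8): recover theta, then recompute the product.
norm_x = math.sqrt(sum(xi ** 2 for xi in x))
norm_y = math.sqrt(sum(yi ** 2 for yi in y))
theta = math.acos(dot / (norm_x * norm_y))

print(dot)                                # 11.0
print(norm_x * norm_y * math.cos(theta))  # 11.0 (up to rounding)
```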
Lemma 1.6. Let x, y ∈ R^n. Then the following hold:
(1) The angle between x and y is less than π/2 (i.e., acute) iff x · y > 0.
(2) The angle between x and y is exactly π/2 (i.e., the vectors are orthogonal) iff x · y = 0.
(3) The angle between x and y is greater than π/2 (i.e., obtuse) iff x · y < 0.
Exercise 5. Use the values of the cosine function and the fact that x · y = ||x|| ||y|| cos θ to prove the lemma. [Hint: For what values of θ is cos θ > 0?]