
IE 426

Optimization models and applications

Lecture 7 — September 16, 2008

◮ Troubles with the homework?

◮ MinMax

Reading: Fourer’s online chapter 3 (pages A97-A98).

Minimizing the maximum of a set of linear functions

Consider an optimization problem of the form

min max_{k=1,2,...,H} ( ∑_{j=1}^n akj xj + bk )
  = min max { ∑_{j=1}^n a1j xj + b1,
              ∑_{j=1}^n a2j xj + b2,
              ...,
              ∑_{j=1}^n aHj xj + bH }.

◮ This is nonlinear (there’s a max term in the objective).

◮ However, the model easily becomes linear:

◮ Create a new variable y

◮ y is the maximum of all quantities ∑_{j=1}^n a1j xj + b1, ∑_{j=1}^n a2j xj + b2, ..., ∑_{j=1}^n aHj xj + bH.

Minimizing the maximum of a set of linear functions

◮ Easy. . . if y is the maximum of all those quantities, then it must be at least as large as each of them:

y ≥ ∑_{j=1}^n a1j xj + b1,
y ≥ ∑_{j=1}^n a2j xj + b2,
...
y ≥ ∑_{j=1}^n aHj xj + bH.

◮ Each of these constraints is linear!
  (rewrite as ∑_{j=1}^n a1j xj − y ≤ −b1, . . . )

◮ The objective function is y. The linear model is:

min y
y ≥ ∑_{j=1}^n akj xj + bk   ∀k = 1, 2, ..., H

Caveat

◮ The constraints above only say that y is at least the maximum of all those linear functions.

⇒ They don’t guarantee that y is exactly the maximum of all those linear functions.

◮ That is,

y ≥ ∑_{j=1}^n akj xj + bk   ∀k = 1, 2, ..., H

only ensures that y ≥ max_{k=1,2,...,H} ∑_{j=1}^n akj xj + bk, not that y = max_{k=1,2,...,H} ∑_{j=1}^n akj xj + bk.


Minimizing the maximum of a set of functions

However, this model works as we are minimizing y:

◮ Although for all feasible solutions y ≥ max_{k=1,2,...,H} ∑_{j=1}^n akj xj + bk,

◮ a solution (x̄1, x̄2, ..., x̄n, ȳ) with ȳ > max_{k=1,2,...,H} ∑_{j=1}^n akj x̄j + bk (strictly >) is feasible, but it’s worse than the solution (x̄1, x̄2, ..., x̄n, y̌) with y̌ = max_{k=1,2,...,H} ∑_{j=1}^n akj x̄j + bk

◮ (because the objective is y, and ȳ > y̌ = max . . . )
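A small sketch of how this linearization can be handed to an off-the-shelf LP solver (here scipy.optimize.linprog, which is not the AMPL workflow used in the course; the helper name solve_min_max and the tiny instance at the bottom are illustrative assumptions):

```python
# Generic builder for the linearized problem
#   min y   s.t.   y >= sum_j akj*xj + bk   for k = 1,...,H,
# written as  sum_j akj*xj - y <= -bk  so linprog can consume it.
import numpy as np
from scipy.optimize import linprog

def solve_min_max(A, b, x_bounds):
    """Minimize max_k (A[k] @ x + b[k]) over x within the given bounds."""
    A = np.asarray(A, dtype=float)               # H x n matrix of the akj
    b = np.asarray(b, dtype=float)               # length-H vector of the bk
    H, n = A.shape
    c = np.zeros(n + 1)                          # variables are (x_1,...,x_n, y)
    c[-1] = 1.0                                  # objective: minimize y
    A_ub = np.hstack([A, -np.ones((H, 1))])      # akj*xj - y <= -bk
    b_ub = -b
    bounds = list(x_bounds) + [(None, None)]     # y is a free variable
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[-1]

# Illustrative instance: min max{x1, x2, 1 - x1 - x2} over 0 <= x1, x2 <= 1.
x_opt, y_opt = solve_min_max([[1, 0], [0, 1], [-1, -1]], [0, 0, 1],
                             [(0, 1), (0, 1)])
print(x_opt, y_opt)   # expected: x1 = x2 = 1/3, y = 1/3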

Another example

min max{ 2x − 1, −(1/2)x + 1 }
0 ≤ x ≤ 2

Linearization:

min y
y ≥ 2x − 1
y ≥ −(1/2)x + 1
0 ≤ x ≤ 2

[Figure: the two lines y = 2x − 1 and y = −(1/2)x + 1 plotted over 0 ≤ x ≤ 2.]
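As a quick check of this example, the linearized model can be given directly to an LP solver (a sketch using scipy.optimize.linprog rather than AMPL, which is an assumption on my part):

```python
# min y  s.t.  y >= 2x - 1,  y >= -(1/2)x + 1,  0 <= x <= 2,
# with the epigraph constraints rewritten as "<=" rows:
#   2x - y <= 1      and      -0.5x - y <= -1
from scipy.optimize import linprog

c      = [0, 1]                      # variables are (x, y); minimize y
A_ub   = [[2, -1], [-0.5, -1]]
b_ub   = [1, -1]
bounds = [(0, 2), (None, None)]      # 0 <= x <= 2, y free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)   # expected: x = 0.8, y = 0.6, where 2x - 1 = -(1/2)x + 1
```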

Example

min ( max{ x − 3, −x + 2 } )    Nonlinear (a)!

min y
y ≥ x − 3
y ≥ −x + 2

(a) AMPL won’t complain, but CPLEX will refuse to solve the problem.

Example: bank loan

Our company wants to borrow 300k$ from two banks.
The interest paid to a bank depends on the amount borrowed:

Bank 1:
◮ 5% on the amount up to 100k$
◮ 8% on the amount above 100k$

Bank 2:
◮ 3% on the amount up to 140k$
◮ 12% on the amount above 140k$

[Figure: interest paid (k$) versus amount borrowed (k$) for each bank; both curves are piecewise linear, with breakpoints at 100k$ (Bank 1) and 140k$ (Bank 2).]

Example: bank loan

Determine how much to borrow from each bank in order to minimize the total interest paid.

◮ Variables: x1 and x2, the amounts borrowed from Bank 1 and Bank 2 (in k$)

◮ Constraints: x1 ≥ 0, x2 ≥ 0, and x1 + x2 = 300

◮ Objective function: sum of the interest paid to the two banks. What are f1(x1) and f2(x2)?

min f1(x1) + f2(x2)

f1(x1) = 0.05 ∗ x1 for 0 ≤ x1 ≤ 100,
f1(x1) = 5 + 0.08 ∗ (x1 − 100) for x1 ≥ 100
f2(x2) = 0.03 ∗ x2 for 0 ≤ x2 ≤ 140,
f2(x2) = 4.2 + 0.12 ∗ (x2 − 140) for x2 ≥ 140

Example: bank loan

Linearization:

min y1 + y2
y1 ≥ 0.05 ∗ x1
y1 ≥ 5 + 0.08 ∗ (x1 − 100)
y2 ≥ 0.03 ∗ x2
y2 ≥ 4.2 + 0.12 ∗ (x2 − 140)
x1 + x2 = 300
x1 ≥ 0
x2 ≥ 0

What does AMPL return?
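Here is a sketch of this LP in scipy.optimize.linprog (the lecture solves it with AMPL/CPLEX; scipy is used here only for illustration). The LP optimum, which any solver should report, is x1 = 160, x2 = 140 with total interest y1 + y2 = 14 k$: take the cheap 3% money from Bank 2 up to its 140k$ breakpoint and the rest from Bank 1.

```python
# Variables: (x1, x2, y1, y2).  Epigraph constraints rewritten as "<=" rows:
#   0.05*x1 - y1 <= 0
#   0.08*x1 - y1 <= 3        (from y1 >= 5 + 0.08*(x1 - 100))
#   0.03*x2 - y2 <= 0
#   0.12*x2 - y2 <= 12.6     (from y2 >= 4.2 + 0.12*(x2 - 140))
from scipy.optimize import linprog

c    = [0, 0, 1, 1]                         # minimize y1 + y2
A_ub = [[0.05, 0,    -1,  0],
        [0.08, 0,    -1,  0],
        [0,    0.03,  0, -1],
        [0,    0.12,  0, -1]]
b_ub = [0, 3, 0, 12.6]
A_eq = [[1, 1, 0, 0]]                       # x1 + x2 = 300
b_eq = [300]
bounds = [(0, None), (0, None), (None, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x, res.fun)   # expected: x1 = 160, x2 = 140, y1 = 9.8, y2 = 4.2, total 14
```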

Example: bank loan

For this specific case 1, both f1(x1) and f2(x2) can be written as

f1(x1) = max{ 0.05 ∗ x1, 5 + 0.08 ∗ (x1 − 100) }
f2(x2) = max{ 0.03 ∗ x2, 4.2 + 0.12 ∗ (x2 − 140) }

So the model is:

min max{ 0.05 ∗ x1, 5 + 0.08 ∗ (x1 − 100) } + max{ 0.03 ∗ x2, 4.2 + 0.12 ∗ (x2 − 140) }
x1 + x2 = 300
x1 ≥ 0
x2 ≥ 0

Nonlinear. . .

1 both f1 and f2 are convex!
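A small numerical sanity check (a sketch assuming numpy is available, not part of the lecture) that the piecewise definitions of f1 and f2 really coincide with these max expressions; this is exactly what convexity of f1 and f2 buys us:

```python
import numpy as np

def f1_piecewise(x1):
    return 0.05 * x1 if x1 <= 100 else 5 + 0.08 * (x1 - 100)

def f2_piecewise(x2):
    return 0.03 * x2 if x2 <= 140 else 4.2 + 0.12 * (x2 - 140)

def f1_max(x1):
    return max(0.05 * x1, 5 + 0.08 * (x1 - 100))

def f2_max(x2):
    return max(0.03 * x2, 4.2 + 0.12 * (x2 - 140))

# Compare the two forms on a grid of borrowing amounts in [0, 300] k$.
for x in np.linspace(0, 300, 301):
    assert abs(f1_piecewise(x) - f1_max(x)) < 1e-9
    assert abs(f2_piecewise(x) - f2_max(x)) < 1e-9
print("piecewise and max forms agree on [0, 300]")
```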

Caveat (2)

This works in “min-max” models, that is, when we’re

minimizing the maximum of a set of quantities:

◮ the variable “wants to”, “tends to”, “will eventually” be at

its lowest allowed value (depending on the functions)

◮ it should appear with a positive coefficient in a

minimization problem or with a negative one in a max.

problem

Symmetrically, it also works in “max-min” problems, that is,

when we’re maximizing the minimum of a set of quantities.

Caution! It does not work in general in other contexts, for

instance

max max_{k=1,2,...,H} ( ∑_{j=1}^n akj xj + bk )
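To see the caution concretely, here is a hedged sketch (scipy assumed) of what happens if the same epigraph trick is applied to a max–max version of the earlier small example: the objective now pushes y upward and the constraints y ≥ . . . never cap it, so the LP is unbounded.

```python
# Naive "max-max" attempt:  max y  s.t.  y >= 2x - 1,  y >= -(1/2)x + 1,
# 0 <= x <= 2.  Nothing bounds y from above, so the LP is unbounded.
from scipy.optimize import linprog

c      = [0, -1]                     # maximize y  ->  minimize -y
A_ub   = [[2, -1], [-0.5, -1]]       # 2x - y <= 1,   -0.5x - y <= -1
b_ub   = [1, -1]
bounds = [(0, 2), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.status)   # expected: 3, i.e. linprog reports the problem as unbounded
```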


Example: job assignment

Problem:

◮ We have to assign m workers to m jobs. Everyone must be

assigned to exactly one job, and all jobs have to be done.

◮ The degree of preference of worker i for job j is given by cij, for i = 1, 2, ..., m, j = 1, 2, ..., m.

◮ Maximize the total preference, i.e. the sum of the preferences cij over all worker–job assignments (i, j) that are made.

Bad objective?

◮ the total preference ∑_{i=1}^m ∑_{j=1}^m cij xij does not provide a fair balance in assigning jobs: some workers may end up very dissatisfied with their assignment.

[Figure: a small assignment example with preference values c11 = 4, 1, 2, 0.]

◮ For a fair assignment, we may instead maximize the minimum satisfaction over the workers:

◮ How? The satisfaction of worker i is equal to ∑_{j=1}^m cij xij

◮ New objective function (still to be maximized):

min_{i=1,2,...,m} ∑_{j=1}^m cij xij

◮ Look at the least satisfied worker(s) (as determined by the variables xij) and limit their dissatisfaction as much as possible

Job assignment: model

Variables: xij for worker i and job j (xij = 1 if worker i is assigned to job j, 0 otherwise).

Constraints:

◮ Every worker is assigned to exactly one job:

∑_{j=1}^m xij = 1   ∀i = 1, 2, ..., m

◮ Every job is done by exactly one worker:

∑_{i=1}^m xij = 1   ∀j = 1, 2, ..., m

◮ Variables xij are binary (a yes/no decision), but since we’re

doing LP let’s just use a relaxation: 0 ≤ xij ≤ 1

Objective function: total preference

∑_{i=1}^m ∑_{j=1}^m cij xij
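For comparison, here is a sketch of this total-preference model with scipy.optimize.linprog on a hypothetical 2 × 2 preference matrix (both the matrix and the use of scipy are illustrative assumptions, not the lecture’s data): maximizing the sum assigns worker 1 to job 1 and worker 2 to job 2, leaving worker 2 with preference 0.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([[4.0, 1.0],            # hypothetical preference matrix cij
              [2.0, 0.0]])
m = c.shape[0]

obj = -c.flatten()                   # xij flattened row by row; max -> min(-)

# Assignment constraints: each worker gets one job, each job gets one worker.
A_eq, b_eq = [], []
for i in range(m):                   # sum_j xij = 1
    row = np.zeros(m * m)
    row[i * m:(i + 1) * m] = 1.0
    A_eq.append(row); b_eq.append(1.0)
for j in range(m):                   # sum_i xij = 1
    row = np.zeros(m * m)
    row[j::m] = 1.0
    A_eq.append(row); b_eq.append(1.0)

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * (m * m))
x = res.x.reshape(m, m)
print(x, -res.fun)                   # expected: x = identity, total preference 4
print((c * x).sum(axis=1))           # per-worker satisfaction: [4, 0]
```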

Job assignment: new model

max min_{i=1,2,...,m} ∑_{j=1}^m cij xij

∑_{j=1}^m xij = 1   ∀i = 1, 2, ..., m
∑_{i=1}^m xij = 1   ∀j = 1, 2, ..., m
0 ≤ xij ≤ 1   ∀i, j = 1, 2, ..., m

It’s nonlinear! Let’s use the same trick, with different signs.

◮ New variable y (will be our objective function)

◮ y is the minimum of all quantities ∑_{j=1}^m cij xij, i = 1, 2, ..., m.

⇒ y is no larger than each of these quantities:

y ≤ ∑_{j=1}^m c1j x1j
y ≤ ∑_{j=1}^m c2j x2j
...
y ≤ ∑_{j=1}^m cmj xmj

◮ or, more compactly:  y ≤ ∑_{j=1}^m cij xij   ∀i = 1, 2, ..., m


Job assignment: final linear model

max y

y ≤ ∑_{j=1}^m cij xij   ∀i = 1, 2, ..., m
∑_{j=1}^m xij = 1   ∀i = 1, 2, ..., m
∑_{i=1}^m xij = 1   ∀j = 1, 2, ..., m
0 ≤ xij ≤ 1   ∀i, j = 1, 2, ..., m
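And a matching sketch of this fair max-min model on the same hypothetical 2 × 2 preference matrix (again, illustrative data and scipy, not the lecture’s setup). Note that with the 0 ≤ xij ≤ 1 relaxation the optimum may be fractional; here it splits the jobs (y = 1.6), whereas among 0/1 assignments the fair choice would be worker 1 → job 2 and worker 2 → job 1, with minimum preference 1 instead of 0.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([[4.0, 1.0],            # same hypothetical preferences cij
              [2.0, 0.0]])
m = c.shape[0]
nvar = m * m + 1                     # xij flattened row by row, then y

obj = np.zeros(nvar)
obj[-1] = -1.0                       # maximize y -> minimize -y

# y <= sum_j cij xij   <=>   -sum_j cij xij + y <= 0   (one row per worker i)
A_ub = []
for i in range(m):
    row = np.zeros(nvar)
    row[i * m:(i + 1) * m] = -c[i]
    row[-1] = 1.0
    A_ub.append(row)
b_ub = np.zeros(m)

# Assignment constraints.
A_eq, b_eq = [], []
for i in range(m):                   # sum_j xij = 1
    row = np.zeros(nvar)
    row[i * m:(i + 1) * m] = 1.0
    A_eq.append(row); b_eq.append(1.0)
for j in range(m):                   # sum_i xij = 1
    row = np.zeros(nvar)
    row[j:m * m:m] = 1.0
    A_eq.append(row); b_eq.append(1.0)

bounds = [(0, 1)] * (m * m) + [(None, None)]
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x[:-1].reshape(m, m), res.x[-1])
# expected: x = [[0.2, 0.8], [0.8, 0.2]], y = 1.6 (a fractional LP optimum)
```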

Job assignment: alternative model

Let’s reduce it to a minimization problem. The objective function changes sign, and the problem becomes a minimization one:

max min_{i=1,2,...,m} ∑_{j=1}^m cij xij
= − min ( − min_{i=1,2,...,m} ∑_{j=1}^m cij xij )

[apply the inverse rule inside the brackets. . . ]

= − min max_{i=1,2,...,m} ( − ∑_{j=1}^m cij xij )
= − min max_{i=1,2,...,m} ∑_{j=1}^m (−cij) xij
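A tiny numerical check (a sketch; numpy assumed) of the “inverse rule” used in the chain above: for any finite set of values, the minimum equals minus the maximum of the negated values, which is exactly the step taken inside the brackets.

```python
import numpy as np

# min_i v_i = -max_i(-v_i)  for any vector v, so
# max min_i sum_j cij xij  =  -min max_i sum_j (-cij) xij.
rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.normal(size=4)
    assert np.isclose(v.min(), -(-v).max())
print("min(v) == -max(-v) holds on the sampled vectors")
```

Solving the resulting min-max LP and negating its optimal value therefore gives the same answer as the final linear model above.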