Gaussian elimination
Under our algebraic definition, merely writing down the coordinates of a vertex involves solving a system of linear equations. How is this done?

We are given a system of n linear equations in n unknowns, say n = 4 and

    x₁ − 2x₃ = 2
    x₂ + x₃ = 3
    x₁ + x₂ − x₄ = 4
    x₂ + 3x₃ + x₄ = 5
The high school method for solving such systems is to repeatedly apply the following rule: if we add a multiple of one equation to another equation, the overall system of equations remains equivalent. For example, adding −1 times the first equation to the third one, we get the equivalent system

    x₁ − 2x₃ = 2
    x₂ + x₃ = 3
    x₂ + 2x₃ − x₄ = 2
    x₂ + 3x₃ + x₄ = 5
This transformation is clever in the following sense: it eliminates the variable x₁ from the third equation, leaving just one equation with x₁. In other words, ignoring the first equation, we have a system of three equations in three unknowns: we decreased n by 1! We can solve this smaller system to get x₂, x₃, x₄, and then plug these into the first equation to get x₁.
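This single elimination step can be sketched in a few lines of Python, with each equation stored as a row of coefficients followed by its right-hand side (a minimal illustration, not the full algorithm):

```python
# Each row is [a1, a2, a3, a4, b], i.e. a1*x1 + a2*x2 + a3*x3 + a4*x4 = b.
rows = [
    [1, 0, -2,  0, 2],   # x1           - 2*x3        = 2
    [0, 1,  1,  0, 3],   #        x2 +    x3          = 3
    [1, 1,  0, -1, 4],   # x1 +   x2           -  x4  = 4
    [0, 1,  3,  1, 5],   #        x2 + 3*x3  +   x4   = 5
]

# Subtract the right multiple of equation 1 from equation 3
# so that x1 is eliminated from equation 3.
factor = rows[2][0] / rows[0][0]
rows[2] = [r3 - factor * r1 for r3, r1 in zip(rows[2], rows[0])]

print(rows[2])  # [0.0, 1.0, 2.0, -1.0, 2.0], i.e. x2 + 2*x3 - x4 = 2
```

The transformed third row matches the third equation of the equivalent system above.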
This suggests an algorithm—once more due to Gauss.
    procedure gauss(E, X)
    Input:  A system E = {e₁, . . . , eₙ} of equations in n unknowns X = {x₁, . . . , xₙ}:
            e₁: a₁₁x₁ + a₁₂x₂ + · · · + a₁ₙxₙ = b₁; · · · ; eₙ: aₙ₁x₁ + aₙ₂x₂ + · · · + aₙₙxₙ = bₙ
    Output: A solution of the system, if one exists

    if all coefficients aᵢ₁ are zero:
        halt with message ‘‘either infeasible or not linearly independent’’
    if n = 1: return b₁/a₁₁
    choose the coefficient aₚ₁ of largest magnitude, and swap equations e₁, eₚ
    for i = 2 to n:
        eᵢ = eᵢ − (aᵢ₁/a₁₁) · e₁
    (x₂, . . . , xₙ) = gauss(E − {e₁}, X − {x₁})
    x₁ = (b₁ − ∑_{j>1} a₁ⱼxⱼ)/a₁₁
    return (x₁, . . . , xₙ)
(When choosing the equation to swap into first place, we pick the one with largest |aₚ₁| for reasons of numerical accuracy; after all, we will be dividing by aₚ₁.)
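The pseudocode above translates almost line for line into Python. In this sketch the unknowns X are tracked implicitly by position rather than by name, the arithmetic is floating point, and a small tolerance stands in for the "all coefficients aᵢ₁ are zero" test:

```python
def gauss(E):
    """Solve a linear system given as rows [a_1, ..., a_n, b].

    Returns the solution [x_1, ..., x_n]. Raises ValueError when
    every leading coefficient is (numerically) zero.
    """
    n = len(E)
    # if all coefficients a_i1 are zero: infeasible or not independent
    if all(abs(row[0]) < 1e-12 for row in E):
        raise ValueError("either infeasible or not linearly independent")
    if n == 1:
        return [E[0][1] / E[0][0]]
    # choose the coefficient a_p1 of largest magnitude; swap e_1, e_p
    p = max(range(n), key=lambda i: abs(E[i][0]))
    E[0], E[p] = E[p], E[0]
    # eliminate x_1 from e_2, ..., e_n, dropping the first column
    reduced = []
    for row in E[1:]:
        f = row[0] / E[0][0]
        reduced.append([r - f * e1 for r, e1 in zip(row[1:], E[0][1:])])
    rest = gauss(reduced)               # solve the (n-1)-unknown system
    # back-substitute to recover x_1
    x1 = (E[0][-1] - sum(a * x for a, x in zip(E[0][1:-1], rest))) / E[0][0]
    return [x1] + rest

system = [[1, 0, -2, 0, 2],
          [0, 1, 1, 0, 3],
          [1, 1, 0, -1, 4],
          [0, 1, 3, 1, 5]]
solution = gauss(system)
print(solution)   # approximately [8/3, 8/3, 1/3, 4/3]
```

Running it on the four-equation example above yields x₁ = x₂ = 8/3, x₃ = 1/3, x₄ = 4/3, which can be checked against the original equations.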
Gaussian elimination uses O(n²) arithmetic operations to reduce the problem size from n to n − 1, and thus uses O(n³) operations overall. To show that this is also a good estimate of the total running time, we need to argue that the numbers involved remain polynomially bounded—for instance, that the solution (x₁, . . . , xₙ) does not require too much more precision to write down than the original coefficients aᵢⱼ and bᵢ. Do you see why this is true?
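One way to make the boundedness concrete is to rerun the elimination in exact rational arithmetic. The sketch below (using Python's `fractions.Fraction`, and assuming no pivoting is needed on this particular example) shows that the solution to the running example is a vector of small fractions; every denominator divides the determinant of the coefficient matrix, which for this system is 3:

```python
from fractions import Fraction

# The example system, with exact rational entries.
A = [[1, 0, -2, 0], [0, 1, 1, 0], [1, 1, 0, -1], [0, 1, 3, 1]]
b = [2, 3, 4, 5]
rows = [[Fraction(a) for a in row] + [Fraction(c)] for row, c in zip(A, b)]

n = len(rows)
for k in range(n):                 # forward elimination
    for i in range(k + 1, n):
        f = rows[i][k] / rows[k][k]
        rows[i] = [r - f * s for r, s in zip(rows[i], rows[k])]

x = [Fraction(0)] * n
for k in reversed(range(n)):       # back-substitution
    x[k] = (rows[k][-1]
            - sum(rows[k][j] * x[j] for j in range(k + 1, n))) / rows[k][k]

print([str(v) for v in x])  # ['8/3', '8/3', '1/3', '4/3']
```

This is the heart of the precision argument: by Cramer's rule each xⱼ is a ratio of two determinants of matrices built from the original aᵢⱼ and bᵢ, and such determinants have polynomially many bits.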