MATHEMATICAL

JOURNEYS

Saint Ann’s School

VOLUME 1

2010 - 2011


Table of Contents

EXPLORATIONS INTO EUCLIDEAN AND TAXICAB MINIMUM DISTANCES,

Hudson Cooke, Justin Lanier (mentor) ..........................................................................3

GENERATORS IN Z_p^n,

Xingchen Ma, Paul Lockhart (mentor) ........................................................................11

ITERATED COMPLEX EXPONENTIAL,

Nicholas Watters, Ted Theodosopoulos (mentor) .......................................................13

VICISSITUDES OF THE MARKET: AN EXPLORATION,

Jonah Kaner, Alex Shilen, Simon Hedges, Jared Cross (mentor)................................23

THE REALITY GAME,

Joshua Glasser, Ted Theodosopoulos (mentor) ...........................................................26

WALMART: SAVE MONEY? LIVE BETTER?,

Rachel Landman, Jordana Gluckow, Jared Cross (mentor).........................................30

TETRAHEDRA ON SQUARE LATTICES,

Lukas Burger, Nicholas Fiori (mentor)........................................................................36


LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS,

Theo McKenzie, Ted Theodosopoulos (mentor) .........................................................39

COMPLEX GEOMETRY,

Luci Cooke, Pleasant Garner, Ted Theodosopoulos (mentor).....................................41

3D DICE AND 1D SCALES,

Akash Mehta, Justin Lanier (mentor)...........................................................................45

COLLATZ CONJECTURE,

Lucas Neville, Paul Salomon (mentor) ........................................................................47

THE MOD-K GAME,

Harry Boyer, Seamus Whoriskey, Paul Salomon (mentor) .........................................51

EXPLORATIONS INTO DISCRETE MATH,

Gautama Mehta, Ted Theodosopoulos (mentor) .........................................................56


EXPLORATIONS INTO EUCLIDEAN AND TAXICAB

MINIMUM DISTANCES

HUDSON COOKE

Abstract. This paper discusses minimum distance problems in both Euclidean and Taxicab settings. These problems explore algebraic formulas as well as constructions and graphs, all with the aim of finding the minimum possible length traveled.

Contents

1. Introduction 1

2. Basic Notions 1

3. School Districts Problem 3

4. Problems with Obstructions 4

5. The Four Cities Problem 7

1. Introduction

What is the shortest distance between two points? The answer is

a straight line. What if you want a line equally distant from two or more points in a city which uses street grids? You use Taxicab geometry. What if you’re using Euclidean geometry and there is an

obstacle between two points? All these problems have the same theme.

Whether two points or four, arranged in a square or a line, the theme

is the shortest distance. While each and every one of these problems

has its own details and formula, they each follow the same basic theme,

which is getting points connected in the shortest way possible.

2. Basic Notions

In order to explore the more complex aspects of both Euclidean

and Taxicab geometry, one must have a grasp of the basic ideas of

Date: May 31, 2011.

1991 Mathematics Subject Classification. 51K05.

With thanks to Justin Lanier.


both types of geometry—for example, finding the distance between

two points. In Euclidean geometry, the two points (in this case, starting

point A and destination B) are situated on an open field with no

obstructions. For the purpose of finding the distance between these two points, we assign coordinates to both A and B: A will be (x1, y1) and B will be (x2, y2). Finding the distance between the two points requires the Pythagorean theorem. The distances along the base and height of the triangle are x2 − x1 and y2 − y1, so the slanted distance is √((x2 − x1)² + (y2 − y1)²).

Figure 1. Taxicab distances are the sum of x and y distances.

In Taxicab geometry, it is easier to find the distance between two

points. Taxicab geometry is geometry on a Cartesian plane where a

line may only travel at angles which are multiples of 90°. Taxicab geometry receives its name from the grid of a city, where taxicabs travel (and can't smash through the buildings to get to their destinations). To find the distance in Taxicab geometry using the same points A and B, the method is very different from the Euclidean one. As before, the horizontal and vertical distances are |x2 − x1| and |y2 − y1|,

but the total distance—regardless of how choppy the path is—is the

sum |x 2 −x 1 |+|y 2 −y 1 |. The interesting thing about Taxicab geometry

is that the line is able to go any which way to reach its destination,

with the possibility of an infinite number of turns, and the end distance

of the line will remain exactly the same length.

For example, if point A is at the origin of a Cartesian plane, and

point B is at coordinates (10, 10), then the line can take ten

units to the right and then turn left, and then continue till it reaches


Figure 2. Creating stair-steps keeps the total Taxicab

distance constant.

point B. Or it could take a right and then a left and so on, or it could

take an infinite number of alternating rights and lefts until it becomes

a straight line between the two points. So therefore a² + b² ≠ c², but

a + b = c. This theorem, although false in Euclidean Geometry, is very

interesting. It is called the Lythagorean theorem. The idea behind the

theorem, not the theorem itself, is essential to solving more complex

problems, with the idea that a variety of paths in Taxicab geometry

will have the same distance as long as they reach the same end point.
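The two distance formulas and the stair-step invariance can be checked numerically; here is a minimal sketch (ours, not part of the original paper; the function names are invented):

```python
def euclidean_distance(a, b):
    """Straight-line distance: sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    return ((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5


def taxicab_distance(a, b):
    """Grid distance: |x2 - x1| + |y2 - y1|."""
    return abs(b[0] - a[0]) + abs(b[1] - a[1])


def stair_step_length(a, b, turns):
    """Length of a stair-step path from a to b made of `turns` equal steps.
    However many turns we take, the total equals the taxicab distance."""
    dx = (b[0] - a[0]) / turns
    dy = (b[1] - a[1]) / turns
    return turns * (abs(dx) + abs(dy))
```

No matter how choppy the path, `stair_step_length` returns the same value as `taxicab_distance`, while the Euclidean distance between (0, 0) and (10, 10) is the shorter 10√2.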

3. School Districts Problem

A first distance-minimization problem is one that comes up in many cities: determining which school a child should attend based on where they live. If one has some number of dots (i.e., schools)

placed upon a plane, then the mathematical problem is this: if you

place another dot (i.e., a home) upon this plane, which school dot is

it closest to? For a small number of homes, you can just do all of the

relevant measurements. But if there are a lot of homes, then we need

boundaries to clarify the different regions, or school districts. In Euclidean

geometry, this is a matter of drawing a line equally in between

the three or more points. First, find a point that is the same distance

from the three points. Then separate the line into three separate lines,

each line equally in between the two points it separates.

You must connect pairs of points, then draw in a perpendicular bisector

for each of these lines. Then you find the point where all the

perpendicular bisectors meet. If there are more than three points and more than one point where the perpendicular bisectors meet, then you


Figure 3. Four schools; lines and perpendicular bisectors;

the boundaries defined.

just connect the two meeting places. In order to get the finished result

you get rid of everything except the bisectors.
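The finished districts are exactly the regions of points closest to each school, so the construction can be cross-checked by a direct nearest-school test; a minimal sketch (ours; the names are invented):

```python
def nearest_school(home, schools):
    """Index of the school closest to `home` in Euclidean distance.
    The perpendicular-bisector boundaries in the construction are the
    borders between the regions where this index changes."""
    def dist2(p, q):  # squared distance; the square root is not needed
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(range(len(schools)), key=lambda i: dist2(home, schools[i]))
```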

4. Problems with Obstructions

Another interesting problem in this theme is finding the shortest distance

between two points with obstructions in the way of the otherwise

shortest distance. This type of problem is actually more practical than

it appears. It is not just for robots trying to find the way through a

maze, but what we do every day. Not just in Taxicab geometry, but

also in Euclidean as well. We, as humans, almost always find the shortest path, whether we walk home using Taxicab geometry or get up to get a glass of water using Euclidean. We don’t walk out

of our way by three feet or by three blocks; students even crawl under

and over desks to avoid walking around a classroom.

One simple case is to make the obstacle a circle; in particular, make

its radius 1 and place A and B at distance 1 from the circle on opposite

sides. Below are examples of non-minimal paths from A to B in

Taxicab (dotted) and Euclidean (bold) geometry.

Figure 4. These paths are clearly non-optimal.


In Taxicab geometry it is easy just to go around the obstacle, as

going right to the edge and following the perimeter around until it

reaches the goal is the same distance as making a big boxy path around

the whole thing.

Figure 5. Creating stair-steps keeps the total Taxicab

distance constant.

The same problem in Euclidean geometry is similar, but slightly

more complex. The points where the line is tangent to the circle are labeled x and y, and the problem is finding the angle xzy.

Figure 6. The diagram to find the minimal Euclidean path.

To find the angle we must find the angles around it, mainly the

angles of the triangle axz. If you take that triangle and project a

mirror image next to itself, the result is an equilateral triangle. The

result is that all of the angles are 60°, meaning both the angles azx and bzy are also 60°, so that the angle xzy equals 60°. Having

this in mind, we can easily find the distances. Using the Pythagorean

theorem, ax equals √3, and the circular arc xy equals (1/3)π. Thus the total Euclidean distance is 2√3 + (1/3)π. Using this method, one can find the length for any circle and any distance. This means that the farther

points A and B move away from the circle, the closer points x and y

will move towards each other, and the smaller the angle xzy will be.

This problem is most easily solved physically by using something with


Figure 7. The mirrored triangle is equilateral.

elastic, such as a rubber band. It naturally desires to find the shortest

distance between two objects, and it does just that when an object is

placed in its way. The closest to a formula that works for this type of

problem is this: a line must follow the path of least resistance. However, if there is something in the way of that path, it must go to the corner (or curve, if it's a circle) which is closest to that path and its current position, and follow it until it reaches its

goal.

Figure 8. By “hugging” the obstacles, the distance is minimized.
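The tangent-plus-arc computation generalizes to any obstacle radius r and gap d; a small sketch (ours, not the author's), where the two tangent segments each have length √((r + d)² − r²) and the arc subtends the angle xzy = π − 2·arccos(r/(r + d)):

```python
import math


def shortest_path_around_circle(d, r=1.0):
    """Shortest Euclidean path between two points at distance d from a
    circular obstacle of radius r, on opposite sides: two tangent
    segments plus the arc between the tangent points."""
    R = r + d                                   # distance from A (or B) to the center z
    tangent = math.sqrt(R * R - r * r)          # length of segment ax (or by)
    arc_angle = math.pi - 2 * math.acos(r / R)  # the angle xzy
    return 2 * tangent + r * arc_angle
```

For d = r = 1 this reproduces the value 2√3 + (1/3)π ≈ 4.51, and as d grows the arc angle shrinks, matching the observation that x and y move toward each other as A and B move away.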


5. The Four Cities Problem

Another distance minimization problem is the four cities problem.

The way this differs from the previous problems is that it has no obstacles,

but rather has multiple points to connect. The problem is this:

if you have four cities arranged in a square, what is the shortest way

to have a line connect all the cities? That is, a line that touches each

city and allows transportation from each city, to each city, and the

line may branch or curve in any way it desires. There are many possible

answers, ranging from a circle around the whole thing, to a simple

square around the perimeter. The best candidates for a shortest path,

however, looked like the following.

Figure 9. A family of paths for the four cities problems

whose length varies with x.

This problem is less about manipulating x to find the shortest possible path and more about creating a simple formula for the total length of the path. The idea behind this is to find a formula where you can plug in x and get out y (y being the total length), even if the square is stretched into a rectangle. This variation of the problem is not the shortest route possible; it was just the one with the most potential for a formula. To find the formula mentioned above,

potential for a formula the most. To find the formula mentioned above,

it is best to have a purely algebraic approach to the problem, assigning

letter values to each length that matters. Simply put, y = x + 4c, and finding c isn't all that difficult. We plug in a and b and x and all the letters that matter. The way to do this is adding up x and the four legs, the leg lengths being found from the right triangles surrounding them and

using the Pythagorean theorem, then multiplying the leg length by 4 and adding it to x. The end result is y = x + 4√(a² + ((z − x)/2)²), which simplifies down to y = x + 2√(x² − 2zx + (z² + 4a²)). Now while you can

just use that for solving any and every rectangle, there are simpler formulas that one can derive from the general formula. For example, for a one by one square the formula (simplified from the general formula) is y = x + 2√(x² − 2x + 2). There is a pattern, because the formula for a two by two square is y = x + 2√(x² − 4x + 8). The second term within the square root, −4x, can be derived from the second term within the general formula, −2zx. The third term, 8, can also be found within the original formula, (z² + 4a²).
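Both forms of the formula can be checked numerically; the sketch below (ours, not the paper's) also minimizes over x for the one by one square (z = 1, a = 1/2, half the side), recovering the well-known Steiner minimum 1 + √3 ≈ 2.73:

```python
import math


def four_cities_length(x, z, a):
    """General formula: y = x + 4*sqrt(a^2 + ((z - x)/2)^2) -- the middle
    bar of length x plus four equal legs out to the corners."""
    return x + 4 * math.sqrt(a * a + ((z - x) / 2) ** 2)


def simplified_length(x, z, a):
    """The simplified form: y = x + 2*sqrt(x^2 - 2*z*x + (z^2 + 4*a^2))."""
    return x + 2 * math.sqrt(x * x - 2 * z * x + (z * z + 4 * a * a))
```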

One thing that must be taken into consideration is how to express

x as a negative number, which is a problem in constructions, but not

for hypothetical situations. The way one gets around this problem is that as x gets bigger and bigger, the two sets of two legs point more and more outward, so if the only variable is changed to a negative number, it only makes sense for the two sets of legs to point more and more inward. This is a concrete set of formulas with which you can find what y equals when you plug in x, for any x and any form of rectangle.

Figure 10. An example of when x is a negative number.

These types of problems are what we have been working on this last

year, mainly with the theme of shortest distances. We have also used

applications such as GeoGebra and Grapher to see what we have been discussing actually looks like. The practical applications of our work

are few but are ones which we use every day, such as figuring out the

quickest way to get home. I have thoroughly enjoyed my time working

with Mr. Lanier and am looking forward to math next year.

Saint Ann’s School, Brooklyn, NY 11201

E-mail address: hcooke@saintannsny.org


GENERATORS IN Z_p^n

XINGCHEN MA

In the integers with respect to a specific modulus p (we call this system Z_p), all the numbers in the system (taken as remainders upon division by the modulus) form a cycle as long as the modulus p we choose is a prime or a power of a prime (except 2). In Z_p^n (where p is a prime other than 2 and n is a positive integer; here we are only talking about the numbers that are relatively prime to p^n), there will always be at least one number, the generator, such that every such number can be written as a power of it, forming the "modular circle." We wonder: is there anything special about those generators?

Let's first take a look at Z_3^n. As we can find out, 2 is a generator in Z_3 (1 → 2 → 1, since 4 = 1 in Z_3). In the same way, we get that 2 and 5 are generators in Z_9 (= Z_3^2). Notice that 2 and 5 both equal 2 mod 3. In Z_27 (= Z_3^3), the generators are 2, 11, 20 (all equal to 2 mod 9) and 5, 14, 23 (all equal to 5 mod 9). You are probably wondering now: are the numbers that equal a generator of Z_p^(n-1) when taken mod p^(n-1) always generators in Z_p^n too? Maybe there is an exception in Z_p^2 (notice that 8 also equals 2 mod 3, but is not a generator in Z_9)?

Let p be a prime (other than 2), and suppose a is a generator in Z_p. Then, according to the definition of a generator, a^(p-1) = 1 mod p. If a + mp is not a generator in Z_p^2, then we should get (a + mp)^(p-1) = 1 in Z_p^2. Expanding mod p^2,

1 = (a + mp)^(p-1) = a^(p-1) + (p - 1)a^(p-2)·mp,

so

a^(p-1) - 1 = mp·a^(p-2) (mod p^2),
a(a^(p-1) - 1) = mp (mod p^2),

and we get m = a(a^(p-1) - 1)/p (mod p).

So there is only one m (mod p) that will make (a + mp)^(p-1) = 1 hold, which means that among the numbers equal to a mod p there is only one exception that will not be a generator of Z_p^2.

Then what will happen in Z_p^3, Z_p^4, …, Z_p^n, …? In Z_p^n (n > 2), can a^(p^(n-2)·(p-1)) = 1 happen? Is it the same case? If

a^(p^(n-2)·(p-1)) = 1 mod p^n,

then

(1 + rp)^(p^(n-2)) = 1 mod p^n (letting a^(p-1) = 1 + rp, with r ≠ 0 mod p since a is a generator of Z_p^2),

and expanding by the binomial theorem,

1 + p^(n-2)·(rp) + [p^(n-2)(p^(n-2) - 1)/2]·(rp)^2 + [p^(n-2)(p^(n-2) - 1)(p^(n-2) - 2)/6]·(rp)^3 + … = 1 mod p^n.

Every term after the second is divisible by p^n, so this reduces to

r·p^(n-1) = 0 mod p^n,
r = 0 mod p.

So we get r = 0 mod p, which is opposite to what we said before, r ≠ 0 mod p. As a result, there will not be exceptions in Z_p^n for n > 2. That is to say, as long as a is a generator of Z_p^(n-1), then all the numbers that are equal to a mod p^(n-1) will be generators of Z_p^n too (for all n > 2).
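The claims above are easy to verify by brute force for small moduli; here is a sketch (ours, not the author's) that lists generators straight from the definition:

```python
from math import gcd


def generators(m):
    """All units g mod m whose powers run through every unit mod m."""
    units = {a for a in range(1, m) if gcd(a, m) == 1}
    gens = []
    for g in sorted(units):
        x, seen = 1, set()
        for _ in range(len(units)):
            x = x * g % m
            seen.add(x)
        if seen == units:  # the powers of g form the whole "modular circle"
            gens.append(g)
    return gens
```

This confirms that the generators of Z_27 are 2, 11, 20 and 5, 14, 23, while 8, the third lift of 2 from Z_3 into Z_9, is the lone one that fails to generate Z_9.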


ITERATED COMPLEX EXPONENTIAL

NICHOLAS WATTERS

[The body of this paper consisted of Mathematica plots of the iterated complex exponential for a range of parameter values; the figures are not recoverable from this copy. The one legible passage reads:]

By graphing this function in Mathematica, it is apparent that there always exists one root regardless of the value of z. This implies that for the values of z when our series does not converge, the fact that it doesn[…]


VICISSITUDES OF THE MARKET: AN EXPLORATION

Introduction

JONAH KANER, ALEX SHILEN, SIMON HEDGES

This semester in our independent math research group we explored the

possibility of quantitative stock analysis. The research took the form of a competition

between colleagues. Our advisor, Jared Cross, selected forty-four random stocks from

which we would each refine our portfolios. Our data comprised monthly closing

prices over a ten-year time span, from 1990 to 2000. The research spanned three

different competitions, each intended to reveal any trends in the data. One had two

diametric portfolios—aimed at maximizing and minimizing gains. The next was to

create five bundles with the most consistent growth, the worst performing of which

we were judged on. The last was to reduce volatility. We each approached our

portfolios individually, with a variety of strategies.

Choosing our Portfolios

Simon:

I favored stocks that had seemingly healthy and sustainable growth throughout the

90s, preferring mid and large caps that were less volatile in the ups and downs of the

market. In search of the stocks that I thought would perform poorly I chose stocks that

seemed volatile, inconsistent, and more affected by bubbles. In choosing my stock

bundle with the lowest standard deviation I chose the 5 stocks with the lowest

volatility in the 90s without calculating any correlations.

Alex:

I looked at the slope of each stock’s price changes in an attempt to predict its future

value. I postulated that in order to maximize the return of my first bundle I should

select stocks with positive slopes. I envisioned consistent growth. For my minimum

return I took stocks with stagnant growth, with a similar prediction—that they would

adhere to their previous tendencies. My approach turned out to be wildly bipolar and

my results unexpected. For my five least volatile bundles I looked for stocks with low

standard deviations, relying on the same conjecture.

Jonah:

I went with a very simple method of buy low, sell high. When the stock was priced

higher than the ten-year average, I would choose that stock as a loser, and when the

stock was priced lower than the ten-year average, I would choose it to be a winner.

Based on the winners and losers that I picked, I then organized them in bundles

depending on their standard deviation over the 10 years of data we had. The aim was

to come up with 5 bundles that all had low standard deviation over the previous

decade.
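The two statistics used in these strategies, a price's position relative to its ten-year average and the standard deviation of monthly returns, can be sketched as follows (our own illustration, not the group's actual code):

```python
def mean(xs):
    return sum(xs) / len(xs)


def monthly_returns(prices):
    """Fractional change from each monthly close to the next."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]


def volatility(prices):
    """Standard deviation of the monthly returns."""
    r = monthly_returns(prices)
    mu = mean(r)
    return (sum((x - mu) ** 2 for x in r) / len(r)) ** 0.5


def jonah_pick(price_now, prices):
    """Buy low, sell high: below the ten-year average -> expected winner."""
    return "winner" if price_now < mean(prices) else "loser"
```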

Results


The average return of all 44 stocks from 2000-2010 was 44%. Simon’s winners

returned 74%, while his losers returned 73%. Both of his bundles of 10 outperformed

the market, but he was unable to successfully pick 10 stocks that had a negative return

during this time period. Alex’s winners returned -44% and his losers returned

109%. Alex’s results provided the most information for later analysis. Jonah’s

winners went up 29%, underperforming the market, and his losers went down 8%.

While Jonah underperformed the market with his winners, he was able to choose a

bundle of 10 stocks that had a negative performance during this time period. Ted’s

winners returned 40%, just underperforming the market. Simon’s average monthly

percentage return was .19%, while his average monthly standard deviation was

3.17%. Jonah’s average monthly return was .38%, while his average monthly standard

deviation was 6.58%. Alex’s average monthly return was 1.03%, while his average

monthly standard deviation was 7.11%. Ted’s average monthly return was .45%,

while his average monthly standard deviation was 4.5%.

Winners and losers competition:

Analyst Bundle of 10 “winners” Bundle of 10 “losers”

Jonah 29% -8%

Alex -44% 109%

Simon 74% 73%

Dr. Theo 40%

Minimizing monthly volatility competition:

Analyst Average Monthly Gain Standard Deviation

Jonah 0.38% 6.58%

Alex 1.03% 7.11%

Simon 0.19% 3.17%

Dr. Theo 0.45% 4.50%

Composite of all 44 stocks 0.48% 5.84%

Conclusions

Our results yield, undoubtedly, confusing conclusions, as they were not at all what we

predicted. After further analysis of our data we believe our results can be largely

accredited to standard deviations as each of our winners and losers seem to have

random performances. We found that, to correctly do this sort of analysis we would

need to broaden our stock range to that of an entire market while simultaneously

greatly increasing the number of stock pickers. The most interesting piece of

information that we saw in our small pool of data was the result of Alex’s picks. His

method, already described, seemed to provide an obvious inverse correlation between a stock's previous movement and its future movement, as the stocks that he found to have the most negative average growth from 1990 to 2000 performed

extraordinarily while the stocks that had the highest average positive growth from

1990-2000 performed abysmally in the 2000-2010 period. Following up, in an attempt

to substantiate the inverse correlation that we found in Alex’s prediction we widened

our scope to look at the relation between past and present movements of all of the


stocks in the New York Stock Exchange. This inverse correlation between stock

movement in the 90s and the stock movement in the 00s was not evident. The lack of

correlation, however, may not refute our original hypothesis as Alex’s data consisted

of the extremes of the market, the absolute best performers and the absolute worst

while our correlation analysis included all of the stocks in between. Alex’s picks

included only stocks that performed abnormally and we believe that there could still

be a correlation present in the relationship of the movements of these abnormal

stocks.

Further Research to Pursue

Our aspirations are tremendous. Possibilities on the horizon include the application of

Sharpe Ratio standards to our portfolios, an enormously increased data pool, MATLAB-based simulations of teenage options trading, correlative analysis of extremes in stock

returns, and, summarily, the development of our own trading model.


THE REALITY GAME

JOSHUA GLASSER

ABSTRACT. Focusing on the intricacies of variants of the Reality game, this paper

explores the different mathematical constructs used to interpret such a game. Through

MATLAB simulations and algebraic equations, this paper attempts to explain some of the

phenomena that occur as a result of the Reality game.

The Reality Game, although less than a decade old, has already been the focal point of a few major studies, including Farmer’s ubiquitous paper describing it. Though there have been many variations as to what in particular constitutes the Reality Game, many agree that it is a game in which any number of players bet money on a coin whose probabilities of landing on heads or tails are determined as follows:

1. N players bet X amount of money each on heads or tails. (We varied this eventually, but it was true in the beginning.)

2. The probability of the coin landing on heads is X+ / X_total, the fraction of the total money bet that was placed on heads (and likewise for tails).

3. The winners receive an equal take of the pot.

We then proceeded to have the players employ any of the following 8 strategies:

S1. Bet always on the same outcome.


S2. Bet on the outcome that came up in the last round.

S3. Bet on the outcome that didn't come up in the last round.

S4. Bet according to the monetary majority in the last round.

S5. Bet according to the monetary minority in the last round.

S6. Bet according to the numerical majority in the last round.

S7. Bet according to the numerical minority in the last round.

S8. Bet randomly, H or T with probability 1/2 each. This is closest to the original paper (Farmer et al.), except that they allowed probabilities other than 1/2.

To see the pure results of these strategies, we simplified the number bet to a constant 1

unit throughout runs. While some advantages seemed to appear, they were contained in

limited runs of 100 or 1000 tosses. More even distributions of bets often led to more

even results as well, with much more varied results regarding cumulative P&L’s.

After building some experience with the behavior of these strategies, we decided

to embark on an analytical investigation of the expected winnings of players following

different strategies in a setting where everyone bets the same amount, in order to

disaggregate the effects of the amount and the direction of each bet. To begin with, we

studied the "pure strategies", i.e. the situations where all N players follow the same

strategy. The only pure strategies that make sense are 1 and 8. For the case of pure

strategy 1, let there be N+ players betting +1 and N − N+ players betting −1. Then the expected winnings of a +1 player is

E[P+] = (N+/N)(N/N+ − 1) − (1 − N+/N) = 0.

Similarly, the expected winnings of a -1 player is 0 (of course the sum of the expected

winnings across all the players must be 0 because no money is created or destroyed in

each round). For the same reasons, and with an analogous computation, the case of pure

strategy 8 also leads to expected winnings equal to 0, assuming we know that there are N+ players betting +1. What about the case when we don't know a priori how many

players bet +1 under a pure strategy 8? Then the expectation must also cover the randomness of the number of players betting +1. By conditioning on N+, one can compute the expected profit. In particular,

E[P+] = E[P+ | N+ = 1] Pr(N+ = 1) + E[P+ | N+ = 2] Pr(N+ = 2) + ... + E[P+ | N+ = N] Pr(N+ = N)
= {(1/N)(N − 1) − (1 − 1/N)} N·2^(−N) + {(2/N)(N/2 − 1) − (1 − 2/N)} N(N − 1)·2^(−N−1) + ... + {(N/N)(N − N) − (1 − N/N)} 2^(−N) = 0

because each conditional expectation, and therefore each term, is zero.

Thus, we convinced ourselves that no pure strategy could lead to expected profits. The

reason fundamentally is because, no matter how many players bet +1, the chance that you

win increases with the number of people that bet in the same direction with you but the

amount you win goes down at the same rate. Thus, no one has a consistent advantage.
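The zero-expectation argument for a pure strategy can be checked exactly with rational arithmetic; a sketch (ours, not the paper's), using the equal-split pot rule from the setup:

```python
from fractions import Fraction


def expected_profit_plus(n_plus, n):
    """Expected winnings of a +1 bettor when n_plus of the n unit bettors
    are on +1: the coin lands +1 with probability n_plus/n, and the n_plus
    winners then split the pot of n units equally."""
    p_win = Fraction(n_plus, n)
    gain = Fraction(n, n_plus) - 1  # equal share of the pot, minus the unit staked
    return p_win * gain - (1 - p_win)
```

The result is 0 for every n_plus, matching E[P+] = (N+/N)(N/N+ − 1) − (1 − N+/N) = 0 above.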

We then attempted to modulate this simulation by allowing one player to bet 1/2

under the pure strategy 1 environment while everyone else bets +1 or -1. Assuming there

are N+ − 1 full-unit players betting +1, the expected profit of this player is

E[P*] = [(N+ − 1/2)/(N − 1/2)]·[(N − 1/2)/N+ − 1/2] − [1 − (N+ − 1/2)/(N − 1/2)]·(1/2) = (1 − 1/N+)/2 > 0,

and so we see that the player that bet less than everyone else has a consistent advantage

because when he wins, his return is higher than everyone else's, and when he loses, his

loss is smaller.
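The half-bet advantage admits the same exact check (our sketch, not the paper's code); the closed form (1 − 1/N+)/2 falls out for every N and N+:

```python
from fractions import Fraction


def expected_profit_half_bettor(n_plus, n):
    """Expected profit of the lone player betting 1/2 on +1 while the other
    n - 1 players bet a full unit, n_plus - 1 of them on +1; winners split
    the pot equally, and the coin follows the money as before."""
    pot = Fraction(2 * n - 1, 2)             # total bet: N - 1/2
    side_plus = Fraction(2 * n_plus - 1, 2)  # bet on +1: N+ - 1/2
    p_win = side_plus / pot
    gain = pot / n_plus - Fraction(1, 2)     # equal pot share minus the half-unit stake
    return p_win * gain - (1 - p_win) * Fraction(1, 2)
```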

We then went on to focus on a simpler case: a quarter of the players use

strategy 1 and bet +1, another quarter use strategy 1 and bet -1, and the remaining half

use strategy 2, with half of them starting with a bet of +1 and the other half with a bet of -

1. In this setting we defined the three-state Markov chain that computes the profit or loss

(P&L) for a randomly chosen agent as follows:

The Markov chain above gives the expected outcomes of any run, and the probability of the next run’s outcome. We began using this Markov chain setting to


investigate qualitative observations we have made from multiple long runs on MATLAB

of a reality game simulation with N=100 players that use the first two strategies as

described in the previous paragraph. In this case, as in most other cases we have

investigated in detail, we have shown that the wealth of each player (cumulative P&L)

follows a martingale, and therefore it has an expected value of 0. Nonetheless, the actual

paths of the wealth over sample simulations deviate significantly from 0, as shown in the

graphs below:

In each case, the blue trace represents the

wealth of the Strategy 1 players that bet +1,

the green trace represents the wealth of the

Strategy 1 players that bet -1 and the other two

traces, that are almost identical, correspond to

the Strategy 2 players that start with a bet of

+1 and -1 respectively (the reason they are

almost identical is because their actions and resulting P&L only differs in the first

period).

These deviations from 0 are surprising for a variety of reasons. Since the game is

a martingale, the expected value is 0, so we would expect the cumulative P&L to stay

close to zero, and return to it quickly when it deviates. Instead we observe the following:


1. The maximum distance from 0 is high compared to the standard deviation of a standard fair coin, which would be √T for a game of T rounds. For T = 10^5 this would be about 316. Out of the fifteen traces in the five graphs above, this level

has been exceeded, whether on the up-side or the down-side, a total of ten times,

when we would only expect it to occur four times if we used a standard fair coin.

2. Also, the deviations from 0 are persistent, i.e. they last a long time. For a sequence of tosses of a standard fair coin, the probability of first returning to 0 after T steps is approximately (πT³)^(−1/2). For T = 10^5 this probability is about 1.78 × 10^(−8), yet we observe four cases out of the fifteen traces in the five graphs above that do not return to 0 for the entire 10^5 rounds. That empirical probability, four out of fifteen, is much larger than what we would expect if we had used a standard fair coin.

3. Finally, all three cross-correlations, i.e. between the three pairs of traces in each

of the five graphs, are around -0.5. How can this be? When X and Y are

negatively correlated, and Y and Z are negatively correlated, we would naively

expect the correlation of X and Z to be positive. That naïve expectation is

thwarted by the observations of the cumulative P&L of the different strategies

under the Reality Game.
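The standard-deviation benchmark used in these comparisons can be illustrated by direct simulation; a sketch (ours, in Python rather than MATLAB), with T kept small so it runs quickly:

```python
import random


def max_deviation(T, seed=0):
    """Maximum |distance| from 0 along one T-step fair-coin walk."""
    rng = random.Random(seed)
    pos, mx = 0, 0
    for _ in range(T):
        pos += 1 if rng.random() < 0.5 else -1
        mx = max(mx, abs(pos))
    return mx


def exceed_fraction(T, trials, seed=0):
    """Fraction of independent walks whose maximum deviation exceeds
    sqrt(T), the standard-deviation scale after T rounds."""
    hits = sum(max_deviation(T, seed + i) > T ** 0.5 for i in range(trials))
    return hits / trials
```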

What Farmer outlined in his own research was a variant on what we had been experimenting with, in which the betting became simpler to analyze: all players bet the entirety of their wealth, in fixed proportions on Heads and Tails. This created situations where one player would very quickly win the entirety of the wealth, with the selection of this player being random.

In practice, some of the analytical math above is not as applicable. Instead, herd

mentality takes over as the presiding factor in the winnings of players. Depending on how

much the winner bet, the bets of other players would vary accordingly. While no

strategies, those uninitiated to the concept of the reality game would oftentimes realize

that betting the least would be the most beneficial. However, this realization was not

present in all cases. More field experiments might dictate a more accurate summarization

of strategic tendencies.

Overall, the variants that we explored yielded interesting findings. While there was no conclusive winning strategy, we did establish that the Reality Game itself is a martingale, despite the contradictory-looking runs: using the expected profit function, we were able to prove that the game is indeed a martingale. For future studies, a further exploration of the real-life applications of the analytical procedures detailed here could lead to interesting results. Also, an exploration of the reasons behind the unexplained jumps in the cumulative P&L would further help flesh out our understanding of the game.


WALMART: SAVE MONEY? LIVE BETTER?

RACHEL LANDMAN
JORDANA GLUCKOW

Background:

The famous Walmart slogan “Save Money, Live Better” may not be as concrete as it seems. For our independent math research project, we wanted to investigate whether Walmart economically benefits the communities that it enters. In order to understand Walmart as a company we started by researching what past statisticians and mathematicians had done. Many analyses have been done regarding Walmart company practices, wages, and gender equality, yet one report gave us our footing in our research. Ken Stone published “The Economic Impact of Wal-Mart Supercenters on Existing Businesses in Mississippi.” We reached out to him and asked if we could use his sales tax data for Mississippi from 1990-2000 and he gladly helped us. We replicated his study, adding data from 2000-2010, and took our analysis to another level.

Process:

We examined automotive, machinery, equipment and supplies, food and beverage, furniture and fixtures, public utilities, apparel and general merchandise, lumber and building materials, miscellaneous retail, miscellaneous services, wholesale, contracting, recreation and total retail sales tax data for each county in Mississippi dating back twenty years. Mississippi was chosen for two reasons: Mississippi has a sales tax on food for home consumption, and Mississippi sales tax data is readily available. Using Excel, we created pull factors for all of our data. A pull factor is defined as “a measure of local commerce: a measure of the strength of the retail trade in an area, based on a comparison of local spending in relation to that of a wider geographic area, e.g. a state.”1 The pull factor for each county was calculated as the per capita revenue in that county divided by the per capita revenue in the state of Mississippi. We then logged all of the Walmart Supercenter openings and Walmart conversions to a Walmart Supercenter with their date and location. We were then able to draw correlations between Walmart entering a community and the revenue that county was receiving in sales taxes.
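The pull factor described above amounts to a one-line ratio. A minimal sketch of the calculation (the function name is ours, and the figures below are made-up illustrative values, not Mississippi data):

```python
def pull_factor(county_revenue, county_pop, state_revenue, state_pop):
    """Pull factor: county per-capita sales-tax revenue divided by
    state per-capita sales-tax revenue.  A value above 1 suggests the
    county draws trade from outside; below 1, that it leaks trade."""
    return (county_revenue / county_pop) / (state_revenue / state_pop)

# hypothetical county with twice the state's per-capita revenue
print(pull_factor(2_000_000, 10_000, 3_000_000_000, 30_000_000))  # -> 2.0
```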

Using JMP software, we also grouped each county with similar counties. We gathered census information and demographics on each county using the FIPS number for each county as our identifier. FIPS is short for Federal Information Processing Standard. JMP allowed us to compare counties using metrics that we determined to be valuable such as per capita income and population density. We could sort similar counties in clusters, or stack them all in a dendrogram. This allowed us to analyze our data in a more fine-tuned manner. We used CrackMaps software to create an interactive map of Mississippi while coding each county with a color specific to the information we were analyzing. For instance we were able to visually see over time how many Walmarts were in any given county. This was instrumental in analyzing our data as well.

1 http://encarta.msn.com/dictionary_701709023/pull_factor.html

We decided to call residents in Indianola, MS (Sunflower County) since a Supercenter opened there as recently as 2006. We asked whether residents were aware of, and shopped at, the Walmart. We also inquired as to whether the residents thought Walmart was cost effective and whether they noticed any changes in their community that they thought correlated to the opening of the Walmart. While we were only able to reach a small number of residents willing to take our survey, the results were unanimously in favor of Walmart in every sense of the word. The only exception to this was a woman who wanted to be rehired after her illness, and Walmart would not rehire her or give her a reason as to why. Our reliance on statistical information was well supplemented with real human input.

Results:

[County maps showing the number of Walmart Supercenters per Mississippi county: "Super Centers in 1990" and "Super Centers in 2000".]

[County map: "Super Centers in 2010".]

Sales Tax Pull Factor Charts

[County map accompanying the sales tax pull factor charts.]

[Chart: Lumber Pull Factor vs. years from Walmart conversion (-4 to 4).]

[Chart: Miscellaneous Pull Factor vs. years from Walmart conversion (year 0 is the year of the conversion).]

[Chart: Furniture Pull Factor vs. years from Walmart conversion (-4 to 4).]

Conclusion:

It is difficult to make a definitive statement as to whether Walmart benefits a community or not. There are many additional factors at play that we have no way to account for. Additionally, our work merely found correlations; it is not necessarily a cause and effect scenario. Economic growth occurred before and after the opening of a Walmart, so it is hard to say that Walmart’s opening was the cause of a flourishing economy. It is also difficult to be certain if the change we see is due to chance or due to the opening of a Walmart. The long bars surrounding each of the points are error bars showing the standard error, and therefore the data point could technically be anywhere along that bar. If we were able to obtain more data points we could decrease the standard error and possibly have more conclusive results. Walmart may have contributed, and it also may have redistributed wealth among businesses and residents in the area.

One way we could continue our project is to use the correlations we found between specific counties using JMP and find similar counties to compare, rather than comparing counties to the whole state. If we can find pairs of similar counties, one in which a Walmart just opened, we could see if the change in the economy has to do with normal economic change of the county or with the addition of Walmart. We look forward to continuing our research with a deeper investigation of similar counties, and the input of residents in the areas that we are studying.


TETRAHEDRA ON SQUARE LATTICES

LUKAS
BURGER

Introduction

We began by looking for an equilateral triangle (a triangle whose sides all have the same length) on a square lattice. We proved this impossible by breaking the problem into cases, using variables that were odd or even, or 1 (modulo 8) or 2 (modulo 8), and so on. Then we

looked for a tetrahedron on a three-dimensional lattice. We discovered an answer for

this easily by fitting the tetrahedron onto four corners of a cube. Noticing that the

problem works in dimensions one and three, we wondered if it worked in every odd dimension. After searching for an answer in the 4th and 5th dimensions we decided to extend the pattern in our three-dimensional answer to try to find solutions in higher dimensions. We generalized it and then put numbers into the equation. If it was not

possible with these numbers it would not give an integer. If it were possible it would

give us the dimension the numbers worked in. We collected a couple of dimensions

where the pattern worked and found a sequence between all of them. We found that

for all dimensions one less than a perfect square (of the form n^2 - 1) there exists a

tetrahedron, which meant there were an infinite number of dimensions that contain

tetrahedra on lattices.

Our initial exploration in two dimensions.

We began by seeing if it was possible to fit an equilateral triangle on a square

lattice with all the points of the triangle on the lattice.

This picture shows a hypothetical equilateral triangle on a lattice and gives variables on the sides that let us use the Pythagorean theorem to find the triangle's side length. If you take all these lengths and set them equal to c with the Pythagorean theorem, you get one equation for each side of the triangle. We put these equations together and broke the problem into four cases, depending on the parity (oddness or evenness) of p and q. After considering the equations modulo 2 and 8, each case led to a contradiction. Thus, there is no equilateral triangle that fits on a square lattice in two dimensions.
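The impossibility result can also be checked empirically by brute force. This is an illustrative sketch of our own (the grid size is arbitrary); it compares squared side lengths, so no floating point is involved:

```python
from itertools import combinations

def equilateral_on_grid(N):
    """Return an equilateral triangle with vertices on the N x N integer
    grid, or None if there is none (all squared side lengths must agree)."""
    pts = [(x, y) for x in range(N) for y in range(N)]
    d2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    for p, q, r in combinations(pts, 3):
        if d2(p, q) == d2(q, r) == d2(r, p):
            return (p, q, r)
    return None

print(equilateral_on_grid(6))  # -> None, as the parity argument predicts
```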

Extending the Problem to Higher Dimensions

Extending the Pythagorean Theorem

We can extend the Pythagorean Theorem by adding another variable: i.e.

a^2 + b^2 + c^2 = d^2

What is a tetrahedron in higher dimensions?

A tetrahedron in any dimension by definition has one more point than the dimension (or n+1 points), where any two of them are a constant distance from each other.

What is special about n+1? It is the minimum number of points necessary to

have an n-dimensional object.

Extending the pattern of our three-dimensional solution to higher dimensions.

We wanted to mimic the solution we found from three dimensions.

(0,0,0), (0,1,1), (1,0,1), (1,1,0)

We decided to look at a very specific set of examples. First we looked at n+1 points

in n-dimensions of the form:

(a,a,a,…, a)

(0,c,c,…,c)

(c,0,c,…, c)

(c,c,0,…,c)

…

and so on, where the 0 occupies each position in turn. Setting the pair-wise distances equal to

each other, we obtained an equation that was too difficult to work with. So, we

simplified things even further, and decided on this type of pattern:

(a,a,a,…,a), (0,a-1,a-1,…,a-1), (a-1,0,a-1,…,a-1), (a-1,a-1,0,…,a-1), …

It is obvious that the (0,a-1,a-1,…,a-1) type of coordinates are equidistant from each other. To find the distance we match the coordinates up and notice that most of the a-1s cancel out; the only entries that don't cancel are one 0 and one a-1 in each point: (0,a-1) and (a-1,0). You can figure out the distance for this pair using the Pythagorean theorem. What you get for the squared distance is: 2(a-1)^2.

The next distance we have to find is between (a,a,a,…,a) and (0,a-1,a-1,…,a-1). The contribution from the a-1s against the a's is the dimension minus 1 (or n-1), because each difference equals 1 and there are n-1 of them. The squared difference between the 0 and the a equals a^2.

We will set the two squared distances equal so that every point is equidistant:

n - 1 + a^2 = 2(a-1)^2.

If you solve for a in this equation, you obtain:

a = sqrt(n+1) + 2.

For this shape to fit on the lattice, a has to be a rational number. So for a to be rational, n has to be one less than a perfect square, i.e. 1, 3, 8, 15, 24, 35, …
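The construction can be verified numerically for the first few dimensions. This is a sketch of our own (the function name is ours); it builds the n+1 points above with a = sqrt(n+1) + 2 rounded to an integer, and checks whether all pairwise squared distances agree:

```python
import math
from itertools import combinations

def is_regular(n):
    """Build the n+1 points (a,a,...,a), (0,a-1,...,a-1), ... with
    a = isqrt(n+1) + 2 (an integer exactly when n+1 is a perfect
    square) and check all pairwise squared distances agree."""
    a = math.isqrt(n + 1) + 2
    pts = [(a,) * n]
    for i in range(n):
        p = [a - 1] * n
        p[i] = 0
        pts.append(tuple(p))
    d2 = {sum((x - y) ** 2 for x, y in zip(p, q))
          for p, q in combinations(pts, 2)}
    return len(d2) == 1

print([n for n in range(1, 16) if is_regular(n)])  # -> [1, 3, 8, 15]
```

The dimensions that pass are exactly those one less than a perfect square, matching the derivation.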


So there are an infinite number of tetrahedra that fit on lattices, but there may be many more than the ones we found.


LINEAR ALGEBRA AND MULTIVARIABLE CALCULUS

THEO MCKENZIE


This year for my independent math research I explored the fields of linear

algebra and multivariable calculus. These two subjects in mathematics are both highly

applicable and are usually learnt in the first year of college. Linear algebra is the study

of linear spaces, i.e. spaces defined by vectors. Multivariable calculus is used to

describe and analyze equations defined by more than one parameter.

After learning the basic rules of linear equations I applied them to some simple problems. One of the first problems that I did was the examination of a methane molecule. In methane (CH4) the electrons of the hydrogen atoms are attracted towards the carbon atom, giving each of the hydrogen atoms a partial positive charge. This makes them repel each other and move as far away as possible from each other. This creates a regular tetrahedron with the carbon atom in the center. The problem was to find the angle between two hydrogen atoms. This was doable because I could set up an equation with the dot product and solve for the angle.
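The dot-product computation is easy to reproduce: place the hydrogens on alternating corners of a cube with the carbon at the center. A sketch of our own (the coordinates are one standard choice):

```python
import math

# two hydrogens of CH4 at alternating cube corners; carbon at the origin
h1 = (1.0, 1.0, 1.0)
h2 = (1.0, -1.0, -1.0)

dot = sum(a * b for a, b in zip(h1, h2))   # = -1
norm = math.sqrt(sum(a * a for a in h1))   # = sqrt(3), same for h2
angle = math.degrees(math.acos(dot / (norm * norm)))
print(round(angle, 2))  # -> 109.47, the tetrahedral H-C-H angle
```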

Another problem to do with linear equations was finding the area of the face of a tetrahedron, i.e. a triangle. This turns out to be half the magnitude of the cross product of any two of the vectors forming the triangle, since the magnitude of the cross product is by definition the product of the magnitudes of the two vectors and the sine of the angle between them. This works out to the base of the triangle multiplied by the height, which when divided by two is the area of the triangle.
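In code, the face-area computation looks like this (our own sketch; the function name is ours):

```python
def triangle_area(u, v):
    """Area of the triangle spanned by 3-D edge vectors u and v:
    half the magnitude of their cross product."""
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5

print(triangle_area((3, 0, 0), (0, 4, 0)))  # -> 6.0 (legs 3 and 4)
```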

I then looked at matrices and their function in linear spaces. Various matrices

have interesting properties, such as a matrix that rotates a vector a given angle in two

dimensions, matrix inverses and orthogonal matrices. Matrices can also be used to

solve linear equations. One problem that I did in this area was related to cooking. I was able to find out the nutritional value of various pastries from two given matrices in a small amount of time, by putting the ingredients and the nutritional values of the ingredients into matrices and then multiplying them using the inner product.
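The pastry calculation is just a matrix product. A toy version with made-up numbers (our own sketch, not the matrices from the project):

```python
# rows: pastries, columns: amount of each ingredient used
recipes = [[2, 1],
           [1, 3]]
# rows: ingredients, columns: nutrients per unit of that ingredient
nutrition = [[100, 5],
             [150, 2]]

# matrix product: each entry is the inner product of a recipe row
# with a nutrient column
totals = [[sum(r * n for r, n in zip(row, col))
           for col in zip(*nutrition)]
          for row in recipes]
print(totals)  # -> [[350, 12], [550, 11]]
```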

In terms of problems to do with multivariable calculus, I began by learning the basic rules of differentiation of functions of more than one variable. I had already made myself familiar with partial derivatives, so most of my work was with the del operator, in terms of the gradient, the divergence and the curl. I used these to describe many graphs, finding the maxima and minima and the tangent plane. Probably the problem that I spent the most time on this year was to find the regions R for which

0 < ∫∫_R sin(x + y) dA


Another mathematical concept that I looked at this year was curvature of

functions. This works much like a derivative, in that I was looking for the curvature at a specific point. This meant that I could play with this and the related

concept of torsion, thereby looking at functions defined by their curvature. Looking at

curvature also led to three vectors, the normal, the tangent and the binormal, and I also

looked into relationships between these three vectors.

My independent research project was unusual in that the objective was not to do new and innovative research, where I didn’t know which direction my work would take me, but rather it was a focused path with a set curriculum. The only

time when research came in was to introduce and enhance my learning of the subject I

was currently on. This project was fulfilling, and I look forward to expanding my

knowledge of these ideas in college.


COMPLEX GEOMETRY

Luci Cooke and Pleasant Garner

Over the course of the past year, we have explored the world of complex numbers. We originally decided to join the class in order to enable us to figure out the complicated problems that we had encountered upon joining the Math Team. Neither of us had encountered complex numbers before, and we soon became very intrigued with them, to the point that we decided to spend the bulk of our year studying their various applications. At the beginning of the year, we covered the basics of complex numbers, then moved onto a very cool problem about finding treasure, then looked at triangles on the complex plane, and finally we worked on translating 3-D shapes to a 2-D plane.

The first thing we had to do was to learn the basics of complex numbers. We had previously never interacted with them in any capacity whatsoever, so Mr. T had a big task in front of him. We learned that a complex number is made of a real component and an imaginary component (a multiple of the square root of negative one). The real part is described by the x-axis and the imaginary part is measured by the y-axis. A point on the complex plane is described as the sum of its real and imaginary parts: z = x + yi. The world of complex numbers serves as an alternative to the Cartesian plane. Each point on the complex plane has a conjugate, a point reflected across the x-axis. The conjugate of the complex point z is written as z*, and its equation is z* = x - yi. The equation of a line including the points a and b is (z-a)/(z*-a*) = (b-a)/(b*-a*). When you multiply points on the complex plane, you add the angles and multiply the lengths. When you add points, you simply add their equations.

One interesting application of complex numbers is using them to prove


geometric properties that would be much more difficult to prove otherwise. We used

complex numbers to show that the orthocenter, centroid, and circumcenter exist and

that the Euler line connects the three, as well as the “9-point circle.” Because any

triangle can be shifted and scaled in order to fit within the unit circle, we simply

began with the generic triangle ABC. We showed that the three heights of the triangle intersect at the same point, the orthocenter. Interestingly enough, the

equation of this point is A+B+C. The three medians of the triangle also all shared a

point, the centroid, which can be found at (A+B+C)/3, the average of the three points

of the triangle and also 1/3 of the orthocenter. The line connecting the orthocenter and

the centroid also passes through the circumcenter, which is the origin by construction.

This line is called the Euler line and the segment from the circumcenter to the

centroid is 1/3 of the length of the entire line. The point (A+B+C)/2, the center of the

Euler line, is also the center of a 9-point circle with a radius of 1/2. These 9

significant points on the circumference of the circle are the 3 midpoints of lines AB,

AC, BC; the 3 points where the heights of the triangle intersect with lines AB, AC,

BC; and the midpoints of the 3 lines running from the orthocenter to the latter three

points. The existence of Euler line and the 9-point circle are both quite complicated to

prove using geometry, but with complex numbers these constructions are relatively

simple to show.
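These facts can be checked numerically for any triangle on the unit circle. A sketch of our own (the three angles are arbitrary, and the variable names are ours):

```python
import cmath

# a generic triangle inscribed in the unit circle (circumcenter O = 0)
A, B, C = (cmath.exp(1j * t) for t in (0.3, 2.0, 4.4))

H = A + B + C          # orthocenter
G = (A + B + C) / 3    # centroid
N = (A + B + C) / 2    # center of the 9-point circle

# the centroid lies a third of the way from the circumcenter to H
assert abs(G - H / 3) < 1e-12
# the midpoint of AB lies on the 9-point circle of radius 1/2
assert abs(abs((A + B) / 2 - N) - 0.5) < 1e-12
```

The second check works because |(A + B)/2 − (A + B + C)/2| = |C|/2 = 1/2 for any C on the unit circle.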

Complex numbers can also be helpful in finding buried treasure, according

to George Gamow’s problem in One, Two, Three, …, Infinity. In this problem one

must begin at a certain point and draw a line directly to the oak tree, and then draw

another line perpendicular to the original line extending to the right with the same

distance as the first line. A stake is placed at the end of this line. Then one must go

back to the original fixed point and draw a line directly to the pine tree, and then draw

another line perpendicular to the previous line, extending to the left with the same length as the previous line. Another stake is placed at the end of this line. The midpoint between these two stakes is where the treasure is buried. The problem arises when we

find that the original fixed point that each line is dependent on no longer exists.


Luckily it is possible to find the treasure using complex numbers. We began by

placing the pine and the oak at points –1 and 1, respectively. Wherever the two trees

actually are does not matter as the picture can be scaled and shifted to suit any

location. If we determine the locations of each stake as a function of the unknown

fixed point, we find that the midpoint of line between each stake is simply –i.

Therefore we know the treasure is simply found by drawing a perpendicular line

through the center of the line connecting the oak and the pine and extending it 1/2 the

distance between the oak and the pine. This problem demonstrates another way in

which it is much simpler and straightforward to use complex numbers rather than

geometry.
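The treasure computation can be written out in a few lines. A sketch of our own (the rotation directions below are one consistent choice of "left" and "right"):

```python
def treasure(z):
    """Gamow's buried-treasure construction, with the pine at -1 and
    the oak at +1; z is the lost starting point."""
    stake1 = 1 - 1j * (1 - z)     # walk z -> oak, turn 90° right, same distance
    stake2 = -1 + 1j * (-1 - z)   # walk z -> pine, turn 90° left, same distance
    return (stake1 + stake2) / 2  # treasure: midpoint of the two stakes

# the z terms cancel, so the answer is -i wherever you start
print(treasure(0.7 + 2.1j), treasure(-3.0 - 1.0j))
```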

Another interesting application of complex numbers that we explored had to

do with the translation of a three-dimensional object onto a two-dimensional plane.

First, we read Drawing with Complex Numbers by Michael Eastwood and Roger

Penrose. We immediately learned how to represent points in higher dimensions as a set of real numbers, and to identify which dimension these sets belong to. A triplet of

numbers belongs to our familiar three-dimensional space, whereas four numbers

would belong to the fourth-dimension, five numbers to the fifth, and so on. We also

learned that it is possible, in standard projection, to embed a lower dimensional object in a higher dimensional space by simply putting a 0 in the remaining slots to be

filled. We talked about Gauss’ revolutionary theorem of axonometry which states that

if we project a 3-D cube onto the complex plane, with one corner projected onto the

origin, the sum of each of the adjacent corners’ projected points squared will be 0. We

found it rather shocking to see this complex physical transformation reduced to such a

simple equation, and we attempted to reconcile our intuition with the argument set

forth. We also attempted a few experiments of our own, using a poster board that we

had constructed for the Math Fair. We found that our experiments checked out, very

nearly equalling zero. It was surprising to us that our humble attempt at trying to

project a cube to a plane, armed with a laser and an imperfect graph, worked out so

well.
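Gauss's theorem is easy to check for the standard isometric drawing of a cube, where the three edges at the projected corner go to unit vectors 120° apart (a sketch of our own):

```python
import cmath

# projections of the three cube edges meeting at the origin
a, b, c = (cmath.exp(2j * cmath.pi * k / 3) for k in range(3))

s = a * a + b * b + c * c
print(abs(s) < 1e-12)  # the sum of the squares vanishes, as the theorem predicts
```

Here the three squares are the cube roots of unity, which sum to zero.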


We both have learned so much about complex numbers this past year.

Both of us went into the class having absolutely no idea what they were. We had

never even heard the term before, and the idea of the square root of negative one was

a dimly remembered concept from previous math classes. We learned that they are so

useful, and very applicable to everyday life. Many of the complicated geometric

maneuvers that we had performed in Geometry could be expressed much more simply using complex numbers. Even problems that we would not have previously thought would have tangible solutions (like the Treasure Map problem and the idea of projecting a three dimensional object to a two dimensional plane) were relatively

easily solved using complex numbers. As we became better versed in the complex

plane, it became apparent that they would be useful in studying any number of things.

Pleasant’s brother, an engineer, expressed surprise that she was only just learning

about complex numbers; they had become such an integral part of his daily life that he

had forgotten that relatively few people were exposed to them. It is clear that complex

numbers are incredibly important and applicable to any number of different topics,

from video game design to the field of medical imaging to engineering. We both now

have a very strong foundation in the complex plane, a foundation that we are eager to expand upon in the years to come.

Works Cited

Eastwood, Michael, and Roger Penrose. "Drawing with Complex Numbers." The Mathematical Intelligencer 22.4 (2000): 8-13. Print.

Gamow, George. One Two Three... Infinity. New York: New American Library, 1953. Print.

Theodosopoulos, Ted. "Complex Geometry." Saint Ann's School. Web.


3D DICE AND 1D SCALES

AKASH MEHTA

ABSTRACT. In trying to determine the criteria for a fair die, we found a construction that yielded fair dice for any number of possible outcomes. We then built and tested dice that did not follow this construction and were less symmetrical. These tests revealed the importance of weight in dice; to understand weight better, we explored finding centers of weight in one dimension.

Our project began this February with a simple question: what makes a die fair?

First, we limited ourselves to completely symmetrical polyhedra. We managed to prove that there are only 5 dice of which all the sides and points are congruent to each other. (For instance, a cube.) So you couldn't, for instance, make a fair die with numbers between 1 and 302, if you limit yourself to completely symmetrical dice. Then came the question – what if you don't?

We first proved that you could construct a die with any numbers that would be completely fair. This is easily achievable by essentially sticking two cone-like shapes made out of triangles on top of each other, each one with numbers X through Y on them. So if you wanted a die that would land an equal amount of times on 1, 2 and 3, you would just have to construct a die out of six triangles. You would take three triangles and label them 1, 2, and 3, connect their sides so you get three walls and no floor, and then make another of these shapes (again with the triangles labeled 1, 2 and 3) and then stick the empty side of one on the empty side of another. Then you'd have two sides with 1, two sides with 2, and two sides with 3. All sides would have an equal chance of landing.

For another instance, say you wanted to use this method to get a six-sided die. Here is a cone-like shape with six lateral sides. Label each of the six sides 1 through 6. Then take another cone, exactly the same as this one, and label each of its sides 1 through 6. Stick the two cones together by their bases, and then you have a twelve-sided die, with each of the sides landing face down the same amount of times, and a 2/12 chance of any number coming up.

After this, we wanted to see if we could make fair dice without putting the same number on multiple sides. We couldn't find out the answer through hypothesizing, so we constructed several dice and tested them. We found a few interesting things: firstly, surface area was certainly not the sole factor. For instance, we made a die where we took two cubes and stuck them together, getting a long box. Even though each of the rectangular sides had only twice the amount of surface area as the squares on the ends, the squares landed 2/100 times (one of them landed 2/100, the other didn't land at all).

We also made a shape with four triangles and a square, with all the shapes being of equal area. We constructed a pyramid out of them, but discovered that the square didn't land nearly close to 1/5 of the time, which is what it should have if dice were solely based on surface area. Then we cut all the triangles' surface areas down by ½, and found that the results were much closer to fairness. We conjectured that this was because surface area itself doesn't matter; it's only the surface area of the shape around your 2D figure that matters, so the triangles were really taking up twice as much space each as the square. We realized that weight had to play a huge part in the making of fair dice. Of course, if all sides and points are congruent, you don't have to worry about weight, but if they aren't, you do.

We conjectured that it had something to do with the center of gravity, or the point at which the weight on opposite sides of that point is equal. To find out how we could apply centers of gravity in 3D figures – which is quite a task – we constructed a rudimentary balance scale. This consisted of a yardstick and a paper cup filled with pennies. We discovered that even if there is twice as much weight on one side of the middle of the yardstick (where it is connected to the ground), the yardstick will balance if the weight on the other side of the yardstick is twice as far from the center, which is the center of gravity.

Our seemingly accurate conjecture was this: if you have a 1 lb weight on the center of gravity of the scale, it is putting half of its weight on one side of the yardstick and the other half of its weight on the other. If you move it a little left – say ¼ of the entire length of the yardstick – the weight it puts on the left side will be ¾ lbs, but it'll still be putting ¼ of its weight on the right side! However, if you put it at the very end of the yardstick, it'll put its entire weight on that side. Basically, it works like fractions: so if you only move the weight a tiny bit to the left, even if it is a 10 lb weight, it can easily be balanced by a 1 lb weight by putting the 1 lb weight farther to the right. This is where we are right now.


COLLATZ CONJECTURE

LUCAS NEVILLE

1. Introduction

This is a documentation of my experience exploring the Collatz conjecture. I’ve included several exercises for the reader. Feel free to think about them or simply read my answers. Also, if you have any interest in Collatz and would like to continue researching proofs for different loops, feel free to contact me or Paul Salomon. The conjecture is unproven because the function does a different thing for each number, so we have to go bit by bit, ruling out loops of different lengths. This function is also extremely interesting because of how its orbits wander for different lengths of time, yet as far as I know they always end the same way. Now, with no further introduction, we go into the math.

2. The Basics

C(n) = n/2, if n is even; C(n) = 3n + 1, if n is odd.

2.1. The C function. This is a recursive function, call it C, that follows the following recursion: ∀n ∈ Z, C(n) = 3n + 1 if n is odd, and C(n) = n/2 if n is even. For example C(10) = 10/2 = 5. Or for an odd input, C(9) = 3 · 9 + 1 = 28. In other words, put in 10, out comes 5. Put in 9, out comes 28.

Exercises. Is there a number, n, so that C(n) = 16? Can you think of another value for

n? How many are there? How many values of n are there so that C(n) = 17? In general,

how many n's hit a given number?

Answers. So for C(n) = 16, n would have to equal 32. It could also equal 5, get it? This

kind of backwards thinking was essential to our work as you will learn later. For C(n) = 17,

the value of n would have to be 34. No other value works. In general, if n is even, C(n)

can be even or odd, but if n is odd, C(n) is always even!

2.2. Iteration. Iteration is the doing of something over and over again. We are going to iterate the C function, feeding the output back in and using C over and over. Say n = 5, and let’s do C a few times. C(5) = 3 · 5 + 1 = 16. C(16) = 16/2 = 8. C(8) = 8/2 = 4. C(4) = 4/2 = 2. C(2) = 2/2 = 1. C(1) = 3 · 1 + 1 = 4. Which then leads you into a loop of 4, 2, 1, 4, 2, 1, . . . Let’s try another starting value. How about 13?

13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1 → 4, 2, 1, . . .
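The iterations above are easy to reproduce in a few lines (a sketch of our own; `orbit` stops at the first 1 rather than cycling forever):

```python
def orbit(n):
    """Iterate C, recording each value, until the orbit first hits 1."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(orbit(13))  # -> [13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```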


This also ends in the 4,2,1 loop. It even goes through the sequence that starts with 5, as

before. One begins to wonder if every iteration ends in the 4,2,1 loop, no matter what

number you start with. It is also worth wondering if every sequence passes through 5.

(This last one is obviously not true, since 4 doesn’t pass through 5. Neither does 16.)

2.3. The Problem. Finally we present the Collatz Conjecture:

For any positive number, iterating the C-function will always end in the 4,2,1 loop, no matter what you start with. Because of how the conjecture works, we went about proving this by showing the impossibility of different loops. We did this for various lengths of loops, thinking about evens and odds. A loop of length three with even, even, odd would look like this: 3((n/2)/2) + 1 = n, which we could look at algebraically.

3. Proofs

3.1. Length 1 Loops. With a length 1 loop, the value of n must equal itself when divided

by 2 (if n is even), or when multiplied by 3 and added to 1 (if n is odd). We have to consider

both cases. First let's suppose n is odd. So for it to work, n would have to equal 3n + 1. This is possible if n = −1/2, but we are looking at whole numbers and positives only, so we can rule this out.

Now, let’s suppose n is even. Then for half of n to equal n, n would have to be 2n. So

the only situation where that works is if n is zero and half of zero equals zero. Since zero

is even, there is in fact a loop with length one! Once again, however, we are looking at

positives when we say that all loops end in 4, 2, 1, so this is sort of a non-disproof.

Hence, we have proven that there are definitely no loops of length 1 in the positive

numbers. □ A good start.

3.2. Length 2 Loops. In order to prove length 2 loops are impossible, we actually have

to consider quite a few possibilities. Above we considered two cases, if n is odd or even,

but here we must consider whether C(n) is odd or even. This gives us 4 cases, which can

be seen in figure 1, but as said earlier, odds only spit out evens, so we can remove the

odd-odd case altogether. We will show the other three cases are impossible.

Even-Even.

(n/2)/2 = n

Dividing any positive number by 2 makes it smaller, not larger. Dividing twice by 2 makes it a fourth of itself, so in this case, n = 4n. This is the n = 0 case again, which we already discussed. We have shown this case is impossible for positives. □

Even-Odd.

3(n/2) + 1 = n

For any positive number, 3(n/2) > n, so 3(n/2) + 1 > n as well. They cannot be equal, so this case is impossible. □


Figure 1. The case tree for length 2 loops. The dashed line indicates an

impossible case.

Odd-Even.

(3n + 1)/2 = n

This means 3n + 1 = 2n, which is again impossible for positive numbers, since 3n > 2n. Hence we have shown that there are no loops of length two anywhere in the positives. □

We’re making real progress!

3.3. Loops in the Negatives. Looking back at the odd-even case, we needed 3n + 1 = 2n. This is possible if n = −1. We ignored this before, but it shows something interesting. In the negatives, loops do exist, but unlike the positives they don't all end in 4, 2, 1! They end simply in −2, −1, −2, −1, . . . We have found one exception: the loop of −5, −14, −7, −20, −10, −5, . . . Other than this, as far as we can tell, there are no other exceptions, though this was not the heart of our research.
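These negative loops are easy to verify mechanically; a small sketch (helper name ours; Python's floor division halves negative evens exactly):

```python
def C(n):
    # Collatz step, applied verbatim to negative integers.
    return n // 2 if n % 2 == 0 else 3 * n + 1

# Follow -5 for five steps: -5 -> -14 -> -7 -> -20 -> -10 -> -5.
n = -5
for _ in range(5):
    n = C(n)
print(n)  # -5, so these five numbers really do form a loop
```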

3.4. Length 3 Loops. Here we need to consider lots of cases, accounting for 3 levels of odds and evens. This can be seen in figure 2. For a loop of length three in which n is odd, then C(n) is even, then odd, we would need 3((3n + 1)/2) + 1 = n. But in fact it is impossible to have two odds in a row: an odd number is multiplied by 3, which keeps it odd, and then 1 is added, which makes it even. So all cases with 2 odds next to each other are removed. That leaves even-even-even, which can't loop because each even is cut in half and the numbers would keep getting smaller, and the cases with 1 odd and 2 evens. For odd-even-even, the loop equation is (3n + 1)/4 = n, so 3n + 1 = 4n and n = 1, which is exactly the 4, 2, 1 loop. We have ruled out the four possible cases, so we know there are no length three loops in the positives, other than the 4, 2, 1 loop. □

Figure 2. The case tree for length 3 loops. Again, dotted lines indicate impossible cases.

3.5. Conclusion. So far, every starting number we have tried ends in the 4, 2, 1 loop. In other words, we haven't found a counterexample. We have also built proofs for each length of loop independently. Our work has taken us up to the impossibility of loops of length up to 3, other than 4, 2, 1, . . . There is a great deal more work to do, and we could certainly explore more in the negatives or even fractions. We could also change the function completely and see what loops emerge. We could also look at known examples and examine how long it takes them to work their way down to 1. Maybe someday, though, someone will find a number that never bottoms out, thus disproving the conjecture.


THE MOD-K GAME

HARRY BOYER AND SEAMUS WHORISKEY

1. Introduction

Get together with a friend. One of you will be the “0 player” and the other the “1 player”. You and your friend each have the choice of putting out one finger or no fingers. If the fingers add to an even number, then player 0 wins. If they add to an odd number, then player 1 wins. Is this game fair? Is there a good strategy? What if each player can put out up to five fingers? Is that fair? Questions like these formed the basis of our research this year.

How can we add more players? For three players there will be P0, P1, and P2. Each

player can put out no fingers, 1 finger, or 2 fingers. If all the fingers add up to a multiple

of three then P0 wins. If they add up to one more than a multiple of three, then P1 wins.

And if they add up to two more than a multiple of three then P2 wins. Is this game fair?

What if they can only throw up to one finger?

For our research we studied a general version of this game, called The Mod-k Game.

The mod-k game is played with k players. Each player is assigned a number between 0

and k-1 (P0, P1, P2, and so on until everyone has a number). Each player then plays an

amount of fingers or things up to k-1. Take the total of the fingers and divide by k. If the

remainder is 0, then P0 wins. If the remainder is 1, then P1 wins, and so on. We looked

at how changing the number of finger options changes the game, looking at fairness and

strategy for a variety of k values.

Let's look at the 2-player game first, the simplest version having 2 choices. The “game board,” including all possible outcomes, can be seen in figure 1. This game is fair, because each player has an equal number of winning outcomes, and each outcome is equally likely to occur, since players are choosing at random. We will skip three choices and go to four choices. Four choices is also fair, because the board is 4x4, so it has an even number of spaces, and again they are distributed evenly between the two players. In conclusion, any even-choice two-player game is fair, because the board will always have an even number of equally likely spaces, split evenly between the players.

2. Research

2.1. 2 players. Some versions of the two player game are unfair. For instance, if each player can only play no fingers, then P0 will win all the time, for obvious reasons. The three choice game is also unfair. The board for this game is a 3x3, so it has 9 spaces, which can't be split evenly between two players. If the game has an odd number of choices then the game will be unfair, because the board has n^2 spaces, and when you square an odd number


Figure 1. All possible outcomes for 2 players with 2 choices. This is a fair

game, since each player has exactly 2 favorable outcomes.

Figure 2. All possible outcomes for 2 players with 4 choices. This is also fair.


you will always come out with an odd number, so when there's an odd number of choices the board cannot be split evenly. So in the case of 9 choices, P0 will win 41 out of 81 times, to 40 out of 81 for P1. This is a 1-in-81 advantage, which isn't very much, and if the number of choices goes up, then the advantage gets even smaller.
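Counts like the 41-to-40 split can be verified by brute force over the game board; here is a minimal sketch in Python (the function name is ours, not from the article):

```python
from itertools import product

def win_counts(players, choices):
    # Tally each player's equally likely winning outcomes:
    # the winner of a round is (total fingers) mod players.
    counts = [0] * players
    for throws in product(range(choices), repeat=players):
        counts[sum(throws) % players] += 1
    return counts

print(win_counts(2, 2))  # [2, 2]: the fair 2-choice game
print(win_counts(2, 9))  # [41, 40]: P0's slight edge with 9 choices
```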

Figure 3. All possible outcomes for 2 players with 3 choices.

2.2. 3 players. The next game we will look at is the three player game. In this game

we have P0, P1, and P2. The board for this game consists of all possible combinations

of choices for 3 players. This actually is a cube, so if each player has n choices, there are

n^3 possible outcomes. Instead of looking at a confusing attempt at drawing a cube, we

will look at the 3 cross-sections corresponding to what P2 plays. One square has all the

outcomes if P2 plays 0 fingers. Another is a square of outcomes if P2 plays 1 finger, and

so on. In the cube these cross-sections are just stacked.

If the number of choices is a multiple of three, then the game is fair. This is because the number of spaces in the cube will also be a multiple of three, so it will be split evenly among the three players. This can be seen in our board for 3 choices, but we'll have to imagine larger boards. The 6-choice game, for instance, has six 6x6 cross-sections, making 216 possible outcomes, but we can understand that they are evenly split between the layers.

Though we are not including the game boards, in our research we found that if the number of choices is one more than a multiple of three, then P0 has an advantage. If the number of choices is two more than a multiple of three, then P0 is at a disadvantage. Like the 2-player game, the more choices players have, the smaller the advantage becomes. 3 players choosing numbers between 1 and 1000 is quite fair.
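One way to check the claimed pattern of advantages without listing all n^3 outcomes is to convolve each player's residue histogram; this is our own sketch, not part of the article:

```python
def residue_counts(players, choices):
    # Histogram of a single player's throw residues mod `players`.
    base = [0] * players
    for c in range(choices):
        base[c % players] += 1
    # Convolve the histogram with itself once per player,
    # reducing indices mod `players` as we go.
    dist = [1] + [0] * (players - 1)
    for _ in range(players):
        new = [0] * players
        for r, count in enumerate(dist):
            for s, ways in enumerate(base):
                new[(r + s) % players] += count * ways
        dist = new
    return dist

print(residue_counts(3, 4))  # [22, 21, 21]: P0 ahead (one more than a multiple of 3)
print(residue_counts(3, 5))  # [41, 42, 42]: P0 behind (two more than a multiple of 3)
```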


Figure 4. All possible outcomes for 3 players with 3 choices. We see three

cross-sections based on P2’s choice. This is a fair game.

2.3. K-players. Let's look at the game with k players. This is a more general version of the mod-2 and mod-3 games. There are k players (which simply means k is any number), and each player has n choices. Here the board is a k-dimensional hypercube with n^k spaces. If n is a multiple of k, say n = mk, then n^k = m^k · k^k, which is a multiple of k. Then the board can be split evenly amongst the k players, and this game is fair.

It is much harder to see in general what happens if n is not a multiple of k. In the 4 player game, for instance, the distribution of advantages was surprising, but we leave this research for future work.

3. Conclusion

The mod-k game is very similar to games like Rock, Paper, Scissors, in that it can be

used to make decisions. Our research shows that it can be played fairly, using random

strategies, if the number of choices is a multiple of the number of players. There will be

advantages and disadvantages for other choice possibilities, but as the number of choices

grows, the advantage diminishes.

Continued research into the mod-k game could include an in-depth look at who has

advantages when, for various remainders of choices. For instance, if 4 players play randomly,

with each player playing anywhere from 0 to 4 fingers, who has an advantage? Also, we

55

THE MOD-K GAME 5

could look at possible strategies. Is there a clever way to play the mod-3 game, for instance,

so that you can gain an advantage? Perhaps we'll look into these in the future, but for

now we’ll just beat our friends.


EXPLORATIONS INTO DISCRETE MATH

GAUTAMA MEHTA

This year, in an independent study in mathematics with Ted Theodosopoulos, I looked at a number of different problems in the field of “discrete math.” These included, in the fall semester, a proof of Sylvester's theorem, which states that there cannot be a finite set of non-collinear points on a plane with the property that every line passing through two of them passes through a third. Later, we looked at (and ultimately answered) the problem of the number of regions created by connecting every pair of points from a finite set of points on the circumference of a circle. This problem included solving for the sum of the first n perfect squares, as a polynomial function of n. I did so using sigma notation and combinatorics.
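That closed form is the standard one, n(n + 1)(2n + 1)/6; here is a quick numerical check in Python (the helper name is ours, not from the article):

```python
def sum_of_squares(n):
    # Closed form for 1^2 + 2^2 + ... + n^2 as a polynomial in n.
    return n * (n + 1) * (2 * n + 1) // 6

# Compare against the direct sum for many small n.
print(all(sum_of_squares(n) == sum(k * k for k in range(1, n + 1))
          for n in range(1, 100)))  # True
```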

Later in the year we came across the following problem: “Are there seventeen lines on the plane that cross at exactly 101 points?”

Discussion of this problem prompted larger inquiry into questions about the nature of the relationship between the number of infinite lines on a plane and the number of times they intersect each other. At its broadest, our question was: How many points are defined by the intersections of a given number of lines on an infinite plane?

Let us label the number of lines l and the number of points defined by their intersections p. It is important to note that p is not fixed depending on l; there can exist two configurations of l lines with different values for p. It is clear, though, that at the maximum value for p, wherein every pair of lines crosses once, p = C(l, 2). The natural question is now: What values of p can exist for a given l?

Well, there are two ways to reduce p from its maximum case. One is to create groups of parallel lines. Since each pair of parallel lines can be thought of as simply pushing the intersection of those two lines to a point at infinity, each group of parallel lines reduces p by C(the number of lines in the group, 2) points. The other method of reducing p is by having more than two lines meet at one point, effectively combining their intersections. Here, every pair of lines in each group of such lines gets rid of its intersection, except for one, since there is one point at which they all converge. Thus, each group of coinciding lines reduces p by (C(the number of lines in the group, 2) - 1).

We still don't know how to count the number of p values for a given l. Matters are complicated by the fact that you can't simply add any combination of triangular numbers and triangular numbers minus one to sum to the p you want; the number of lines involved in both methods of reduction can be no greater than l.

Our next question was: How many unique configurations of lines are there that satisfy given l and p values?

In order to simplify this problem, we've defined a notation for describing configurations of lines, using “ki” and “mj” to denote the number of sets of i parallel and j coincident lines respectively. To explain, I'll give some examples.


Consider this case (assuming all lines are infinite):

[Figure: a group of 2 parallel lines and a group of 4 parallel lines.]

This would be described in terms of its groups of parallel lines as follows:

K2 = 1
K4 = 1
All other indices of K = 0

Each index of K (Kx) represents the number of groups of x parallel lines. The statement “All other indices of K = 0” is implied and will be left out from here on.

Now take this case:

[Figure: a group of 3 lines meeting at a point.]

This is described in terms of its groups of multiple lines meeting at a point like this:

M3 = 1

Each index of M (Mx) represents the number of groups of x lines meeting at a point. Thus, by this system:

[Figure: a configuration combining parallel and coincident groups.]

The above case would be described as:

K2 = 2
M3 = 1
M4 = 1

This notation guarantees that with a given l and given M and K values, we can compute p by subtracting from the maximum case as described before. We developed a Diophantine equation (or rather, an equation template of sorts) to represent the situation. Here is an example of its application:

C(l, 2) - p = k2·C(2, 2) + k3·C(3, 2) + ... + m3·[C(3, 2) - 1] + m4·[C(4, 2) - 1] + ...
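As a sketch of how the template can be used, here is a small calculator for p, assuming no line belongs to two groups (the function name and dictionary encoding are our own, not the article's):

```python
from math import comb

def points(l, k=None, m=None):
    # p for l lines, where k[i] counts groups of i parallel lines
    # and m[j] counts groups of j lines through a common point.
    # Assumes no "overlaps" between groups.
    k, m = k or {}, m or {}
    p = comb(l, 2)                  # every pair crosses once...
    for i, ki in k.items():
        p -= ki * comb(i, 2)        # ...minus parallel-group losses
    for j, mj in m.items():
        p -= mj * (comb(j, 2) - 1)  # ...minus coincidence losses
    return p

print(points(5))            # 10: five lines in general position
print(points(5, k={5: 1}))  # 0: five parallel lines
print(points(5, m={5: 1}))  # 1: five concurrent lines
```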

So far, so good. But when we got to this point, we realized something unsettling that made this problem even more complicated than it already appeared. There can exist what we termed “overlaps,” i.e. lines that are counted in more than one group of parallel or coincident lines. It would be nice if the sum, over all M and K groups, of each index times its value equaled exactly l, but this is not always the case, as the K2 = 1, M3 = 1 example below shows.

In many ways, the pattern of our course through this problem was marked by two things: classification and specialization. We built an entire notational system and used it only to solve a very particular special case of the original problem. But in the process, I think it's fair to say that the development of the questions themselves gave me a far better understanding of the problem than I had at the outset.

Before I get to what the special case was, I need to introduce the “valence” system I designed to deal with overlaps.

Here's a way to visualize our classification system for configurations of lines, while accounting for overlaps between multiple M/K groups: Draw a group of dots to represent each M/K group. Connect these dots, or valences, with “lines” to illustrate the overlaps. However, our definition of “line” is not what one might expect: consider the case where more than two M/K groups share a line. This is represented in the new system as lines emanating from all of the groups involved, and converging, becoming a single “line.” Each valence in the new system represents a line, unless it is connected to one or more other valences, in which case it represents the same line as do those.

[Figure: a configuration with K2 = 1 and M3 = 1. Here, the topmost horizontal line belongs to both a pair of parallel lines and a group of 3 coinciding lines. Thus, this configuration is represented as a valence diagram in which one valence of the K2 group is connected to one valence of the M3 group.]
Note that two K groups can never be connected, since by the transitive nature of parallel lines, if two sets of parallel lines shared a line they would have to be the same set.

So we are now in a position to define what was referred to before as “overlaps,” labeled Q:

Q = (number of valences) - l

and

Q = (number of connected valences) - (number of connecting lines)

Actually, a valence diagram such as the one shown above does not give all of the information necessary to accurately describe a configuration of lines, since it doesn't account for the number of “free” lines (F) that aren't part of K or M groups. However, it is easy to correct this by simply adding to the diagram, e.g., “F = 4,” to fully describe the scenario, and thus let l and p be calculable.

[Figure: a valence diagram with groups labeled K3, K2, M3, M2, M2, M2; that is, K2 = 1, K3 = 1, M2 = 3, M3 = 1, with l = 14.]

But over the course of researching and discussing this problem, we found ourselves getting less interested in calculating l and p, and more interested in finding out how many unique configurations there were under given constraints. We defined two valence diagrams to be equivalent when one could be turned into the other by interchanging the valences of one group, or by interchanging two groups of the same type with the same valences; two such diagrams were considered the same configuration.

Let me describe the special-case scenario for which I actually got an answer. Suppose Q = 1, and you are given a, b, and c, where

a = number of distinct M types
b = number of distinct K types
c = number of distinct M types with more than one “copy,” i.e. number of M indices with a value > 1

In this case, the number of unique configurations that satisfy these constraints equals:

C(a + b, 2) - C(b, 2) + c

Let me take this formula apart, term by term. It basically counts the number of places there are to put the one overlap available. Since there is only one overlap, all groups of the same type are identical. The one-overlap constraint also means that wherever it is placed, it has to connect exactly two groups. The C(a + b, 2) term counts all the potential overlap locations between one M/K group type and another, but overcounts, because it allows for two K groups to overlap. For this reason, we subtract C(b, 2) to compensate. Finally, with c, we allow for overlaps within distinct M types, of which there are exactly c.
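A minimal sketch of the count in Python (the function name is ours; note that math.comb conveniently returns 0 when b < 2, matching C(b, 2) = 0):

```python
from math import comb

def q1_configurations(a, b, c):
    # Q = 1 count: places to put the single overlap between two
    # group types, minus the forbidden K-K pairings, plus the
    # overlaps available inside repeated M types.
    return comb(a + b, 2) - comb(b, 2) + c

print(q1_configurations(2, 1, 1))  # C(3, 2) - C(1, 2) + 1 = 4
```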

The natural inclination after solving this sub-problem is to try and generalize. However, despite attempts to solve the Q = 2 case, we have not gotten far. In the process of discussing this problem, though, Mr. Theodosopoulos introduced me to Boolean algebra and integer programming, which we have spent the remainder of the year working on. The hope was that once I became proficient enough in the field of logical constraint satisfiability theory, we could return to the problem and write a computer program to help solve it. I'm not quite there yet, but certainly am interested in continuing research in both areas.


MATHEMATICAL JOURNEYS is a publication of the Saint Ann's Mathematics Department. It is published annually in June and it reflects the work done by students in the Independent Math Studies program.

© Copyright 2011 by Saint Ann's School.