hacker bits

February 2016


Ray Li


Maureen Ker


Geoff Ralston

Miles Shang

Robin Doherty

Mike Hearn

Tanya Khovanova

Todd Hoff

Tom Blomfield

Michael Marner

Ossi Hanhinen

Seth Godin

Martin Matusiak



new bits

Thank you for making our vision a reality. Yes, just by reading these words, you, gentle reader, have made our wildest dreams come true.

You see, we’ve been fans of the now discontinued Hacker Monthly* (RIP), which gathered the biggest hits of Hacker News** into one slick magazine. When that magazine was shut down, we were disheartened. Then we had a crazy idea. What if we could bring it back to life? Better yet, what if we could make it free for everyone?

Well, thanks to Paul Graham and the contributors who gave us the go-ahead, this pipe dream has materialized as the inaugural issue of Hacker Bits.

So sit back and enjoy this curated collection of the most popular articles on Hacker News…and stay tuned for the next issue!

— Maureen and Ray

*We are not affiliated with Hacker Monthly in any way.

**We are not affiliated with Hacker News (although we wish we were!)

Geoff Ralston · A guide to seed fundraising
Miles Shang · So you think you can program an elevator
Robin Doherty · Why privacy is important, and having "nothing to hide" is irrelevant
Tanya Khovanova · The dying art of mental math tricks
Mike Hearn · The resolution of the Bitcoin experiment
Todd Hoff · A beginner's guide to scaling to 11 million+ users on Amazon's AWS
Seth Godin · Getting ahead vs. doing well
Tom Blomfield · When to join a startup
Michael Marner · Sending and receiving SMS on Linux
Ossi Hanhinen · How Elm made our work better
Martin Matusiak · Two weeks of rust



A guide to seed fundraising

Startup companies need to purchase equipment, rent offices, and hire staff. More importantly, they need to grow. In almost every case they will require outside capital to do these things. The initial capital raised by a company is typically called “seed” capital. This brief guide is a summary of what startup founders need to know about raising the seed funds critical to getting their company off the ground.

This is not intended to be a complete guide to fundraising. It includes only the basic knowledge most founders will need. The information comes from my experiences working at startups, investing in startups, and advising startups at Y Combinator and Imagine K12. YC partners naturally gain a lot of fundraising experience and YC founder Paul Graham (PG) has written extensively on the topic. His essays cover in more detail much of what is contained in this guide and are highly recommended.

Why raise money?


Without startup funding the vast majority of startups will die. The amount of money needed to take a startup to profitability is usually well beyond the ability of founders and their friends and family to finance.
A startup here means a company that is built to grow fast. High growth companies almost always need to burn capital to sustain their growth prior to achieving profitability. A few startup companies do successfully bootstrap (self-fund) themselves, but they are the exception. Of course, there are lots of great companies that aren’t startups. Managing capital needs for such companies is not covered here.
Cash not only allows startups to live and grow, a war chest is also almost always a competitive advantage in all ways that matter: hiring key staff, public relations, marketing, and sales. Thus, most startups will almost certainly want to raise money.

The good news is that there are lots of investors hoping to give the right startup money. The bad news is, “Fundraising is brutal.” The process of raising that money is often long, arduous, complex, and ego deflating. Nevertheless, it is a path almost all companies and founders must walk, but when is the time right to raise?

When to raise money

Investors write checks when the idea they hear is compelling, when they are persuaded that the team of founders can realize its vision, and that the opportunity described is real and sufficiently large. When founders are ready to tell this story, they can raise money. And usually when you can raise money, you should.
For some founders it is enough to have a story and a reputation. However, for most it will require an idea, a product, and some amount of customer adoption, a.k.a. traction. Luckily, the software development ecosystem today is such that a sophisticated web or mobile product can be built and delivered in a remarkably short period of time at very low cost. Even hardware can be rapidly prototyped and tested.

But investors also need persuading. Usually a product they can see, use, or touch will not be enough. They will want to know that there is product market fit and that the product is experiencing actual growth.

Therefore, founders should raise money when they have figured out what the market opportunity is and who the customer is, and when they have delivered a product that matches their needs and is being adopted at an interestingly rapid rate. How rapid is interesting? This depends, but a rate of 10% per week for several weeks is impressive. And to raise money founders need to impress. For founders who can convince investors without these things, congratulations. For everyone else, work on your product and talk to your users.


How much to raise?

Ideally, you should raise as much money as you need to reach profitability, so that you’ll never have to raise money again. If you succeed in this, not only will you find it easier to raise money in the future, you’ll be able to survive without new funding if the funding environment gets tight. That said, certain kinds of startups will need a follow-on round, such as those building hardware. Their goal should be to raise as much money as needed to get to their next “fundable” milestone, which will usually be 12 to 18 months later.

In choosing how much to raise you are trading off several variables, including how much progress that amount of money will purchase, credibility with investors, and dilution. If you can manage to give up as little as 10% of your company in your seed round, that is wonderful, but most rounds will require up to 20% dilution and you should try to avoid more than 25%. In any event, the amount you are asking for must be tied to a believable plan. That plan will buy you the credibility necessary to persuade investors that their money will have a chance to grow. It is usually a good idea to create multiple plans assuming different amounts raised and to carefully articulate your belief that the company will be successful whether you raise the full or some lesser amount. The difference will be how fast you can grow.

One way to look at the optimal amount to raise in your first round is to decide how many months of operation you want to fund. A rule of thumb is that an engineer (the most common early employee for Silicon Valley startups) costs all-in about $15k per month. So, if you would like to be funded for 18 months of operations with an average of five engineers, then you will need about 15k x 5 x 18 = $1.35mm. What if you are planning to hire for other positions as well? Don’t worry about it! This is just an estimate and will be accurate enough for whatever mix you hire. And here you have a great answer to the question: “How much are you raising?” Simply answer that you are raising for N months (usually 12-18) and will thus need $X, where X will usually be between $500k and $1.5 million. As noted above, you should give multiple versions of N and a range for X, giving different possible growth scenarios based on how much you successfully raise.
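The rule of thumb above is easy to sanity-check in a few lines. This is a sketch only; the function name is mine, and the $15k/month all-in engineer cost is the article's estimate, not a universal figure:

```python
# Back-of-the-envelope runway budget: engineers x months x all-in cost.
# The $15k/month figure is the article's estimate for a Silicon Valley
# engineer, and will drift with time and location.
COST_PER_ENGINEER_PER_MONTH = 15_000

def seed_target(engineers, months):
    """Estimated raise needed to fund a team for a given runway."""
    return COST_PER_ENGINEER_PER_MONTH * engineers * months

# The article's example: five engineers funded for 18 months.
print(seed_target(5, 18))  # 1350000, i.e. ~$1.35mm
```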

There is enormous variation in the amount of money raised by companies. Here we are concerned with early raises, which usually range from a few hundreds of thousands of dollars up to two million dollars. Most first rounds seem to cluster around six hundred thousand dollars, but largely thanks to increased interest from investors in seed, these rounds have been increasing in size over the last several years.

Financing options

Startup founders must understand the basic concepts behind venture financing. It would be nice if this was all very simple and could be explained in a single paragraph. Unfortunately, as with most legal matters, that’s not possible. Here is a very high level summary, but it is worth your time to read more about the details and pros and cons of various types of financing and, importantly, the key terms of such deals that you need to be aware of, from preferences to option pools. The articles below are a decent place to start:

• Venture Hacks / Babak Nivi: Should I Raise Debt or Equity?
• Fred Wilson: Financing Options
• Mark Suster on Convertible Debt
• Announcing the Safe
Venture financing usually takes place in “rounds,” which have traditionally had names and a specific order. First comes a seed round, then a Series A, then a Series B, then a Series C, and so on to acquisition or IPO. None of these rounds are required and, for example, sometimes companies will start with a Series A financing (almost always an “equity round” as defined below). Recall that we are focusing here exclusively on seed, that very first venture round.

Most seed rounds, at least in Silicon Valley, are now structured as either convertible debt or simple agreements for future equity (safes). Some early rounds are still done with equity, but in Silicon Valley they are now the exception.

Convertible debt

Convertible debt is a loan an investor makes to a company using an instrument called a convertible note. That loan will have a principal amount (the amount of the investment), an interest rate (usually a minimum rate of 2% or so), and a maturity date (when the principal and interest must be repaid). The intention of this note is that it converts to equity (thus, “convertible”) when the company does an equity financing. These notes will also usually have a “Cap” or “Target Valuation” and/or a discount. A Cap is the maximum effective valuation that the owner of the note will pay, regardless of the valuation of the round in which the note converts. The effect of the cap is that convertible note investors usually pay a lower price per share compared to other investors in the equity round.

Similarly, a discount defines a lower effective valuation via a percentage off the round valuation. Investors see these as their seed “premium” and both of these terms are negotiable. Convertible debt may be called at maturity, at which time it must be repaid with earned interest, although investors are often willing to extend the maturity dates on notes.
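To make the cap and discount mechanics concrete, here is a hedged sketch of the common convention that the investor converts at whichever effective price is lower. `conversion_price` and its parameters are illustrative names, not terms from any standard document, and actual note terms vary by deal:

```python
# Effective per-share conversion price for a capped/discounted note.
# Common convention: the investor gets the better (lower) of the
# discounted price and the cap-implied price; terms vary by deal.
def conversion_price(round_price, round_pre_money, cap=None, discount=None):
    """Sketch of the price at which a note or safe converts."""
    candidates = [round_price]
    if discount is not None:
        # e.g. discount=0.20 means 20% off the round's share price.
        candidates.append(round_price * (1 - discount))
    if cap is not None and round_pre_money > cap:
        # The cap scales the price down to the capped valuation.
        candidates.append(round_price * cap / round_pre_money)
    return min(candidates)

# Hypothetical numbers: a $1.00/share round at a $10mm pre-money;
# a note with a $5mm cap and a 20% discount converts at $0.50/share.
print(conversion_price(1.00, 10_000_000, cap=5_000_000, discount=0.20))
```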


Safes

Convertible debt has been almost completely replaced by the safe at YC and Imagine K12. A safe acts like convertible debt without the interest rate, maturity, and repayment requirement. The negotiable terms of a safe will almost always be simply the amount, the cap, and the discount, if any. There is a bit more complexity to any convertible security, and much of that is driven by what happens when conversion occurs. I strongly encourage you to read the safe primer, which is available on YC’s site. The primer has several examples of what happens when a safe converts, which go a long way toward explaining how both convertible debt and safes work in practice.


Equity

An equity round means setting a valuation for your company (generally, the cap on the safes or notes is considered as a company’s notional valuation, although notes and safes can also be uncapped) and thus a per-share price, and then issuing and selling new shares of the company to investors. This is always more complicated, expensive, and time consuming than a safe or convertible note and explains their popularity for early rounds. It is also why you will always want to hire a lawyer when planning to issue equity.

To understand what happens when new equity is issued, a simple example helps. Say you raise $1,000,000 on a $5,000,000 pre-money valuation. If you also have 10,000,000 shares outstanding then you are selling the shares at:

1. $5,000,000 / 10,000,000 = 50 cents per share

and you will thus sell…

2. 2,000,000 shares

resulting in a new share total of…

3. 10,000,000 + 2,000,000 = 12,000,000 shares

and a post-money valuation of…

4. $0.50 * 12,000,000 = $6,000,000

and dilution of…

5. 2,000,000 / 12,000,000 = 16.7%

Not 20%!
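The arithmetic above can be checked with a few lines of Python; the function below (an illustrative helper, not from the article) is just the example's five steps in code:

```python
# Priced-round mechanics: price per share is set by the pre-money
# valuation, new shares are issued at that price, and dilution is
# the new shares' fraction of the enlarged total.
def equity_round(pre_money, amount_raised, shares_outstanding):
    """Return (price, new_shares, post_money, dilution) for a priced round."""
    price = pre_money / shares_outstanding          # $0.50 in the example
    new_shares = amount_raised / price              # 2,000,000 shares
    total_shares = shares_outstanding + new_shares  # 12,000,000 shares
    post_money = price * total_shares               # $6,000,000
    dilution = new_shares / total_shares            # ~16.7%, not 20%
    return price, new_shares, post_money, dilution

print(equity_round(5_000_000, 1_000_000, 10_000_000))
```

Note that dilution is computed against the post-money share count, which is why it comes out below the naive raise/pre-money ratio.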

There are several important components of an equity round with which you must become familiar when your company does a priced round, including equity incentive plans (option pools), liquidation preferences, anti-dilution rights, protective provisions, and more. These components are all negotiable, but it is usually the case that if you have agreed upon a valuation with your investors (next section), then you are not too far apart, and there is a deal to be done. I won’t say more about equity rounds, since they are so uncommon for seed rounds.

One final note: whatever form of financing you do, it is always best to use well-known financing documents like YC’s safe. These documents are well understood by the investor community, and have been drafted to be fair, yet founder friendly.
Valuation: What is my company worth?

You are two hackers with an idea, a few months of hacking’s worth of software, and several thousand users. What is your company worth? It should be obvious that no formula will give you an answer. There can only be the most notional sort of justification for any value at all. So, how do you set a value when talking to a potential investor? Why do some companies seem to be worth $20mm and some $4mm? Because investors were convinced that was what they were (or will be in the near future) worth. It is that simple. Therefore, it is best to let the market set your price and to find an investor to set the price or cap. The more investor interest your company generates, the higher your value will trend.

Still, it can be difficult in some circumstances to find an investor to tell you what you are worth. In this case you can choose a valuation, usually by looking at comparable companies who have valuations. Please remember that the important thing in choosing your valuation is not to over-optimize. The objective is to find a valuation with which you are comfortable, that will allow you to raise the amount you need to achieve your goals with acceptable dilution, and that investors will find reasonable and attractive enough to write you a check. Seed valuations tend to range from $2mm-$10mm, but keep in mind that the goal is not to achieve the best valuation, nor does a high valuation increase your likelihood of success.

Investors: Angels & venture capitalists
The difference between an angel and a VC is that angels are amateurs and VCs are pros. VCs invest other people’s money and angels invest their own on their own terms. Although some angels are quite rigorous and act very much like the pros, for the most part they are much more like hobbyists. Their decision making process is usually much faster–they can make the call all on their own–and there is almost always a much larger component of emotion that goes into that decision.

VCs will usually require more time, more meetings, and will have multiple partners involved in the final decision. And remember, VCs see LOTS of deals and invest in very few, so you will have to stand out from a crowd.

The ecosystem for seed (early) financing is far more complex now than it was even five years ago. There are many new VC firms, sometimes called “super-angels” or “micro-VCs”, which explicitly target brand new, very early stage companies. There are also several traditional VCs that will invest in seed rounds. And there are a large number of independent angels who will invest anywhere from $25k to $100k or more in individual companies. New fundraising options seem to arrive every year, for example, AngelList Syndicates, in which angels pool their resources and follow a single lead angel.

How does one meet and encourage the interest of investors? If you are about to present at a demo day, you are going to meet lots of investors. There are few such opportunities to meet a concentrated and motivated group of seed investors. Besides a demo day, by far the best way to meet a venture capitalist or an angel is via a warm introduction. Angels will also often introduce interesting companies to their own networks. Otherwise, find someone in your network to make an introduction to an angel or VC. If you have no other options, do research on VCs and angels and send as many as you can a brief, but compelling summary of your business and opportunity (see Documents You Need below).


Crowdfunding

There are a growing number of new vehicles to raise money, such as AngelList, Kickstarter, FundersClub, and Wefunder. These crowdfunding sites can be used to launch a product, run a pre-sales campaign, or find venture funding. In exceptional cases, founders have used these sites as their dominant fundraising source, or as clear evidence of demand. They usually are used to fill in rounds that are largely complete or, at times, to reanimate a round that is having difficulty getting off the ground. The ecosystem around investing is changing rapidly, but when and how to use these new sources of funds will usually be determined by your success raising through more traditional means.

Meeting investors

If you are meeting investors at an investor day, remember that your goal is not to close–it is to get the next meeting. Investors will seldom choose to commit the first day they hear your pitch, regardless of how brilliant it is. So book lots of meetings. Keep in mind that the hardest part is to get the first money in the company. In other words, meet as many investors as possible but focus on those most likely to close. Always optimize for getting money soonest (in other words, be greedy).

There are a few simple rules to follow when preparing to meet with investors. First, make sure you know your audience–do research on what they like to invest in and try to figure out why. Second, simplify your pitch to the essential–why this is a great product (demos are almost a requirement nowadays), why you are precisely the right team to build it, and why together you should all dream about creating the next gigantic company. Next make sure you listen carefully to what the investor has to say. If you can get the investor to talk more than you, your probability of a deal skyrockets. In the same vein, do what you can to connect with the investor. This is one of the main reasons to do research. An investment in a company is a long term commitment and most investors see lots of deals. Unless they like you and feel connected to your outcome, they will most certainly not write a check.

Who you are and how well you tell your story are most important when trying to convince investors to write that check. Investors are looking for compelling founders who have a believable dream and as much evidence as possible documenting the reality of that dream. Find a style that works for you, and then work as hard as necessary to get the pitch perfect. Pitching is difficult and often unnatural for founders, especially technical founders who are more comfortable in front of a screen than a crowd. But anyone will improve with practice, and there is no substitute for an extraordinary amount of practice. Incidentally, this is true whether you are preparing for a demo day or an investor meeting.

During your meeting, try to strike a balance between confidence and humility. Never cross over into arrogance, avoid defensiveness, but also don’t be a pushover. Be open to intelligent counterpoints, but stand up for what you believe and whether or not you persuade the investor just then, you’ll have made a good impression and will probably get another shot.

Lastly, make sure you don’t leave an investor meeting without an attempted close or at very minimum absolute clarity on next steps. Do not just walk out leaving things undefined.

Negotiating and closing the deal
A seed investment can usually be closed rapidly. As noted above, it is an advantage to use standard documents with consistent terms, such as YC’s safe. Negotiation, and often there is none at all, can then proceed on one or two variables, such as the valuation/cap and possibly a discount.

Deals have momentum and there is no recipe towards building momentum behind your deal other than by telling a great story, persistence, and legwork. You may have to meet with dozens of investors before you get that close. But to get started you just need to convince one of them. Once the first money is in, each subsequent close will get faster and easier.

Once an investor says that they are in, you are almost done. This is where you should rapidly close using a handshake protocol. If you fail at negotiating from this point on, it is probably your fault.


When you enter into a negotiation with a VC or an angel, remember that they are usually more experienced at it than you are, so it is almost always better not to try to negotiate in real-time. Take requests away with you, and get help from YC or Imagine K12 partners, advisors, or legal counsel. But also remember that although certain requested terms can be egregious, the majority of things credible VCs and angels will ask for tend to be reasonable. Do not hesitate to ask them to explain precisely what they are asking for and why. If the negotiation is around valuation (or cap) there are, naturally, plenty of considerations, e.g. other deals you have already closed. However, it is important to remember that the valuation you choose at this early round will seldom matter to the success or failure of the company. Get the best deal you can get–but get the deal!

Finally, once you get to yes, don’t wait around. Get the investor’s signature and cash as soon as possible. One reason safes are popular is because the closing mechanics are as simple as signing a document and then transferring funds. Once an investor has decided to invest, it should take no longer than a few minutes to exchange signed documents online (for example via Clerky or Ironclad) and execute a wire or send a check.

Documents you need

Do not spend too much time developing diligence documents for a seed round. If an investor is asking for too much due diligence or financials, they are almost certainly someone to avoid. You will probably want an executive summary and a slide deck you can walk investors through and, potentially, leave behind so VCs can show to other partners.

The executive summary should be one or two pages (one is better) and should include vision, product, team (location, contact info), traction, market size, and minimum financials (revenue, if any, and fundraising prior and current).

Generally make sure the slide deck is a coherent leave-behind. Graphics, charts, and screenshots are more powerful than lots of words. Consider it a framework around which you will hang a more detailed version of your story. There is no fixed format or order, but the following parts are usually present. Create the pitch that matches you, how you present, and how you want to represent your company. Also note that like the executive summary, there are lots of similar templates online if you don’t like this one.

1. Your company / Logo / Tag line

2. Your Vision - Your most expansive take on why your new company exists.

3. The Problem - What are you solving for the customer–where is their pain?

4. The Customer - Who are they and perhaps how will you reach them?

5. The Solution - What you have created and why now is the right time.

6. The (huge) Market you are addressing - Total Available Market (TAM) >$1B if possible. Include the most persuasive evidence you have that this is real.

7. Market Landscape - including competition, macro trends, etc. Is there any insight you have that others do not?

8. Current Traction - list key stats / plans for scaling and future customer acquisition.

9. Business model - how users translate to revenue. Actuals, plans, hopes.

10. Team - who you are, where you come from and why you have what it takes to succeed. Pics and bios okay. Specify roles.

11. Summary - 3-5 key takeaways (market size, key product insight, traction)

12. Fundraising - Include what you have already raised and what you are planning to raise now. Any financial projections may go here as well. You can optionally include a summary product roadmap (6 quarters max) indicating what an investment buys.

It is worth pointing out that startup investing is rapidly evolving and it is likely that certain elements of this guide will at some point become obsolete, so make sure to check for updates or future posts. There is now an extraordinary amount of information available on raising venture money. Several sources are referenced and more are listed at the end of this document.

Fundraising is a necessary, and sometimes painful task most startups must periodically endure. A founder’s goal should always be to raise as quickly as possible and this guide will hopefully help founders successfully raise their first round of venture financing. Often that will seem like a nearly impossible task and when it is complete, it will feel as though you have climbed a very steep mountain. But you have been distracted by the brutality of fundraising and once you turn your attention back to the future you will realize it was only a small foothill on the real climb in front of you. It is time to get back to work building your company. •

Geoff has been an investor in, board member of, and advisor to a variety of start-up companies, mostly in the San Francisco Bay Area. He is also a partner at Y Combinator.

Reprinted with permission of the original author. First appeared at themacro.com.




So you think you can program an elevator


Many of us ride elevators every day. We feel like we understand how they work, how they decide where to go. If you were asked to put it into words, you might say that an elevator goes wherever it's told, and in doing so goes as far in one direction as it can before turning around. Sounds simple, right? Can you put it into code?

In this challenge, you are asked to implement the business logic for a simplified elevator model in Python. We'll ignore a lot of what goes into a real world elevator, like physics, maintenance overrides, and optimizations for traffic patterns. All you are asked to do is to decide whether the elevator should go up, go down, or stop.

How does the challenge work? The simulator and test harness are laid out in this document, followed by several examples. All of this can be run in an actual Python interpreter using Python's built-in doctest functionality, which extracts the code in this document and runs it.

A naive implementation of the business logic is provided in the elevator.py file in this project. If you run doctest using the provided implementation, several examples fail to produce the expected output. Your challenge is to fix that implementation until all of the examples pass.

Open a pull request with your solution. Good luck! Have fun!

Test harness

Like all elevators, ours can go up and down. We define constants for these. The elevator also happens to be in a building with six floors.

>>> UP = 1
>>> DOWN = 2
>>> FLOOR_COUNT = 6

We will make an Elevator class that simulates an elevator. It will delegate to another class which contains the elevator business logic, i.e. deciding what the elevator should do. Your challenge is to implement this business logic class.

User actions

A user can interact with the elevator in two ways. She can call the elevator by pressing the up or down button on any floor, and she can select a destination floor by pressing the button for that floor on the panel in the elevator. Both of these actions are passed straight through to the logic delegate.

>>> class Elevator(object):
...     def call(self, floor, direction):
...         self._logic_delegate.on_called(floor, direction)
...
...     def select_floor(self, floor):
...         self._logic_delegate.on_floor_selected(floor)

Elevator actions

The logic delegate can respond by setting the elevator to move up, move down, or stop. It can also read the current floor and movement direction of the elevator. These actions are accessed through Callbacks, a mediator provided by the Elevator class to the logic delegate.

>>> class Elevator(Elevator):
...     def __init__(self, logic_delegate, starting_floor=1):
...         self._current_floor = starting_floor
...         print "%s..." % starting_floor,
...         self._motor_direction = None
...         self._logic_delegate = logic_delegate
...         self._logic_delegate.callbacks = self.Callbacks(self)
...
...     class Callbacks(object):
...         def __init__(self, outer):
...             self._outer = outer
...
...         @property
...         def current_floor(self):
...             return self._outer._current_floor
...
...         @property
...         def motor_direction(self):
...             return self._outer._motor_direction
...
...         @motor_direction.setter
...         def motor_direction(self, direction):
...             self._outer._motor_direction = direction
That's it for the framework.

Business logic

As for the business logic, an example implementation is provided in the elevator.py file in this project.

>>> from elevator import ElevatorLogic

As provided, it doesn't pass the tests in this document. Your challenge is to fix it so that it does. To run the tests, run this in your shell:

python -m doctest -v README.md

With the correct business logic, here's how the elevator should behave:

Basic usage

Make an elevator. It starts at the first floor.

>>> e = Elevator(ElevatorLogic())
1...
Somebody on the fifth floor wants to go down.

>>> e.call(5, DOWN)

Keep in mind that the simulation won't actually advance until we call step or one of the run_until_* methods.

>>> e.run_until_stopped()
2... 3... 4... 5...

The elevator went up to the fifth floor. A passenger boards and wants to go to the first floor.

>>> e.select_floor(1)

Also, somebody on the third floor wants to go down.

>>> e.call(3, DOWN)

Even though the first floor was selected first, the elevator services the call at the third floor...

>>> e.run_until_stopped()
4... 3...

...before going to the first floor.

>>> e.run_until_stopped()
2... 1...


Elevators want to keep going in the same direction. An elevator will serve as many requests in one direction as it can before going the other way. For example, if an elevator is going up, it won't stop to pick up passengers who want to go down until it's done with everything that requires it to go up.

>>> e = Elevator(ElevatorLogic())
1...
>>> e.call(2, DOWN)
>>> e.select_floor(5)

Even though the elevator was called at the second floor first, it will service the fifth floor...

>>> e.run_until_stopped()
2... 3... 4... 5...

...before coming back down for the second floor.

>>> e.run_until_stopped()
4... 3... 2...

In fact, if a passenger tries to select a floor that contradicts

the current direction of the elevator, that selection

is ignored entirely. You've probably seen this before.

You call the elevator to go down. The elevator shows

up, and you board, not realizing that it's still going up.

You select a lower floor. The elevator ignores you.

>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(3)

>>> e.select_floor(5)

>>> e.run_until_stopped()

2... 3...

>>> e.select_floor(2)

At this point the elevator is at the third floor. It's not

finished going up because it's wanted at the fifth floor.

Therefore, selecting the second floor goes against the

current direction, so that request is ignored.

>>> e.run_until_stopped()

4... 5...

>>> e.run_until_stopped() # nothing happens, because e.select_floor(2) was ignored


Now it's done going up, so you can select the second floor.


>>> e.select_floor(2)

>>> e.run_until_stopped()

4... 3... 2...

Changing direction

The process of switching directions is a bit tricky.

Normally, if an elevator going up stops at a floor and

there are no more requests at higher floors, the elevator

is free to switch directions right away. However, if

the elevator was called to that floor by a user indicating

that she wants to go up, the elevator is bound to

consider itself going up.

>>> e = Elevator(ElevatorLogic())


>>> e.call(2, DOWN)

>>> e.call(4, UP)

>>> e.run_until_stopped()

2... 3... 4...

>>> e.select_floor(5)

>>> e.run_until_stopped()
5...

>>> e.run_until_stopped()

4... 3... 2...

If nobody wants to go further up though, the elevator

can turn around.

>>> e = Elevator(ElevatorLogic())


>>> e.call(2, DOWN)

>>> e.call(4, UP)

>>> e.run_until_stopped()

2... 3... 4...

>>> e.run_until_stopped()

3... 2...

If the elevator is called in both directions at that floor,

it must wait once for each direction. You may have

seen this too. Some elevators will close their doors

and reopen them to indicate that they have changed direction.


>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(5)

>>> e.call(5, UP)

>>> e.call(5, DOWN)

>>> e.run_until_stopped()

2... 3... 4... 5...

Here, the elevator considers itself to be going up, as it
favors continuing in the direction it came from.

>>> e.select_floor(4) # ignored

>>> e.run_until_stopped()

Since nothing caused the elevator to move further up,

it now waits for requests that cause it to move down.

>>> e.select_floor(6) # ignored

>>> e.run_until_stopped()

Since nothing caused the elevator to move down,

the elevator now considers itself idle. It can move in

either direction.

>>> e.select_floor(6)

>>> e.run_until_stopped()
6...

En passant

Keep in mind that a user could call the elevator or

select a floor at any time. The elevator need not be

stopped. If the elevator is called or a floor is selected

before it has reached the floor in question, then the

request should be serviced.

>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(6)

>>> e.run_until_floor(2) # elevator is not at the third floor yet
2...
>>> e.select_floor(3)
>>> e.run_until_stopped() # stops for third floor
3...
>>> e.run_until_floor(4)
4...
>>> e.call(5, UP)
>>> e.run_until_stopped() # stops for fifth floor
5...

On the other hand, if the elevator is already at, or has

passed the floor in question, then the request should

be treated like a request in the wrong direction. That

is to say, a call is serviced later, and a floor selection

is ignored.

>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(5)
>>> e.run_until_floor(2)
2...
>>> e.call(2, UP) # missed the boat, come back later
>>> e.step() # doesn't stop
3...
>>> e.select_floor(3) # missed the boat, ignored
>>> e.step() # doesn't stop
4...
>>> e.run_until_stopped() # service e.select_floor(5)
5...
>>> e.run_until_stopped() # service e.call(2, UP)
4... 3... 2...

Fuzz testing

No amount of legal moves should compel the elevator

to enter an illegal state. Here, we run a bunch of random

requests against the simulator to make sure that

no asserts are triggered.

>>> import random

>>> e = Elevator(ElevatorLogic())


>>> try: print '-', # doctest:+ELLIPSIS
... finally:
...     for i in range(100000):
...         r = random.randrange(6)
...         if r == 0: e.call(
...             random.randrange(FLOOR_COUNT) + 1,
...             random.choice((UP, DOWN)))
...         elif r == 1: e.select_floor(
...             random.randrange(FLOOR_COUNT) + 1)
...         else: e.step()
- ...

More examples

The rest of these examples may be useful for catching

bugs. They are meant to be run via doctest, so they

may not be very interesting to read through.

An elevator is called but nobody boards. It goes idle.

>>> e = Elevator(ElevatorLogic())


>>> e.call(5, UP)

>>> e.run_until_stopped()

2... 3... 4... 5...

>>> e.run_until_stopped()

>>> e.run_until_stopped()

The elevator is called at two different floors.

>>> e = Elevator(ElevatorLogic())


>>> e.call(3, UP)

>>> e.call(5, UP)

>>> e.run_until_stopped()

2... 3...

>>> e.run_until_stopped()

4... 5...

Like above, but called in reverse order.

>>> e = Elevator(ElevatorLogic())


>>> e.call(5, UP)

>>> e.call(3, UP)

>>> e.run_until_stopped()

2... 3...

>>> e.run_until_stopped()

4... 5...

The elevator is called at two different floors, but going

the other direction.

>>> e = Elevator(ElevatorLogic())


>>> e.call(3, DOWN)

>>> e.call(5, DOWN)

>>> e.run_until_stopped()

2... 3... 4... 5...

>>> e.run_until_stopped()

4... 3...

The elevator is called at two different floors, going in

opposite directions.

>>> e = Elevator(ElevatorLogic())


>>> e.call(3, UP)

>>> e.call(5, DOWN)

>>> e.run_until_stopped()

2... 3...

>>> e.run_until_stopped()

4... 5...

Like above, but with directions reversed.

>>> e = Elevator(ElevatorLogic())


>>> e.call(3, DOWN)

>>> e.call(5, UP)

>>> e.run_until_stopped()

2... 3... 4... 5...

>>> e.run_until_stopped()

4... 3...

The elevator is called at two different floors, one

above the current floor and one below. It first goes to

the floor where it was called first.

>>> e = Elevator(ElevatorLogic(), 3)


>>> e.call(2, UP)

>>> e.call(4, UP)

>>> e.run_until_stopped()
2...

>>> e.run_until_stopped()

3... 4...

Like above, but called in reverse order.

>>> e = Elevator(ElevatorLogic(), 3)


>>> e.call(4, UP)

>>> e.call(2, UP)

>>> e.run_until_stopped()
4...

>>> e.run_until_stopped()

3... 2...

The elevator is called while it's already moving.

>>> e = Elevator(ElevatorLogic())


>>> e.call(5, UP)

>>> e.run_until_floor(2)
2...
>>> e.call(3, UP)
>>> e.run_until_stopped()
3...
>>> e.run_until_stopped()
4... 5...

If the elevator is already at, or has passed the floor

where it was called, it comes back later.

>>> e = Elevator(ElevatorLogic())


>>> e.call(5, UP)

>>> e.run_until_floor(3)

2... 3...

>>> e.call(3, UP)

>>> e.run_until_stopped()

4... 5...

>>> e.run_until_stopped()

4... 3...

Two floors are selected.

>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(3)

>>> e.select_floor(5)

>>> e.run_until_stopped()

2... 3...

>>> e.run_until_stopped()

4... 5...

Like above, but selected in reverse order.

>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(5)

>>> e.select_floor(3)

>>> e.run_until_stopped()

2... 3...

>>> e.run_until_stopped()

4... 5...

Two floors are selected, one above the current floor

and one below. The first selection sets the direction,

so the second one is completely ignored.

>>> e = Elevator(ElevatorLogic(), 3)


>>> e.select_floor(2)

>>> e.select_floor(4)

>>> e.run_until_stopped()
2...

>>> e.run_until_stopped()

Like above, but selected in reverse order.

>>> e = Elevator(ElevatorLogic(), 3)


>>> e.select_floor(4)

>>> e.select_floor(2)

>>> e.run_until_stopped()
4...

>>> e.run_until_stopped()

If the elevator is called to a floor going up, it should

ignore a request to go down.

>>> e = Elevator(ElevatorLogic())


>>> e.call(5, UP)

>>> e.run_until_stopped()

2... 3... 4... 5...

>>> e.select_floor(6)

>>> e.select_floor(4)

>>> e.run_until_stopped()
6...

>>> e.run_until_stopped()

Like above, but going in other direction.

>>> e = Elevator(ElevatorLogic())


>>> e.call(5, DOWN)

>>> e.run_until_stopped()

2... 3... 4... 5...

>>> e.select_floor(6)

>>> e.select_floor(4)

>>> e.run_until_stopped()
4...

>>> e.run_until_stopped()

Elevator is called to a floor and a passenger also selects

the same floor. The elevator should not go back

to that floor twice.

>>> e = Elevator(ElevatorLogic())


>>> e.call(5, DOWN)

>>> e.select_floor(5)

>>> e.run_until_stopped()
2... 3... 4... 5...

>>> e.select_floor(4)

>>> e.run_until_stopped()
4...

>>> e.run_until_stopped()

Similarly, if the elevator is called at a floor where it is

stopped, it should not go back later.

>>> e = Elevator(ElevatorLogic())


>>> e.call(3, UP)

>>> e.run_until_stopped()

2... 3...

>>> e.call(3, UP)

>>> e.call(5, DOWN)

>>> e.run_until_stopped()

4... 5...

>>> e.run_until_stopped()

Elevator is ready to change direction, new call causes

it to keep going in same direction.

>>> e = Elevator(ElevatorLogic())


>>> e.call(2, DOWN)

>>> e.call(4, UP)

>>> e.run_until_stopped()

2... 3... 4...

>>> e.call(5, DOWN) # It's not too late.

>>> e.run_until_stopped()
5...

>>> e.run_until_stopped()

4... 3... 2...

When changing directions, wait one step to clear the current direction.


>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(5)

>>> e.call(5, UP)

>>> e.call(5, DOWN)

>>> e.run_until_stopped()

2... 3... 4... 5...

>>> e.select_floor(4) # ignored

>>> e.run_until_stopped()

>>> e.select_floor(6) # ignored

>>> e.select_floor(4)

>>> e.run_until_stopped()
4...

>>> e.run_until_stopped()

Like above, but going in other direction.

>>> e = Elevator(ElevatorLogic(), 6)


>>> e.select_floor(2)

>>> e.call(2, UP)

>>> e.call(2, DOWN)

>>> e.run_until_stopped()

5... 4... 3... 2...

>>> e.select_floor(3) # ignored

>>> e.run_until_stopped()

>>> e.select_floor(1) # ignored

>>> e.select_floor(3)

>>> e.run_until_stopped()
3...

>>> e.run_until_stopped()

If other direction is not cleared, come back.

>>> e = Elevator(ElevatorLogic())


>>> e.select_floor(5)

>>> e.call(5, UP)

>>> e.call(5, DOWN)

>>> e.run_until_stopped()

2... 3... 4... 5...

>>> e.select_floor(6)

>>> e.run_until_stopped()
6...
>>> e.run_until_stopped()
5...

>>> e.select_floor(6) # ignored

>>> e.select_floor(4)

>>> e.run_until_stopped()
4...

>>> e.run_until_stopped() •

Miles was formerly a software engineer at a tech company in

Silicon Valley. Now, he's a recreational hacker based in Winnipeg,

Manitoba, Canada. He likes hacks that are whimsical,

prosaic, and absurd.

Reprinted with permission of the original author. First appeared at github.com/mshang.




Why privacy is important,

and having "nothing to hide"

is irrelevant


The governments of Australia,

Germany, the UK and

the US are destroying your

privacy. Some people don’t see the problem:


“I have nothing to hide, so

why should I care?”

It doesn’t matter if you have

“nothing to hide”. Privacy is a right

granted to individuals that underpins

the freedoms of expression,

association and assembly; all of

which are essential for a free, democratic society.


The statement from some politicians

that “if you have nothing to

hide then you have nothing to fear”

purposefully misframes the whole debate.


This affects all of us, and we must act.


"Arguing that you don’t care

about the right to privacy

because you have nothing

to hide is no different than

saying you don’t care about

free speech because you have
nothing to say." – Edward Snowden


Privacy and freedom

Loss of privacy leads to loss of freedom.


Your freedom of expression is
threatened by the surveillance of
your internet usage – thought
patterns and intentions can be
extrapolated from your website
visits (rightly or wrongly), and the
knowledge that you are being
surveilled can make you less likely
to research a particular topic. You
lose that perspective, and your
thought can be pushed in one
direction as a result. Similarly,
when the things you write online,
or communicate privately to others,
are surveilled, and you self-censor
as a result, the rest of us lose your
perspective, and the development
of further ideas is stifled.

Your freedom of association is
threatened by the surveillance of
your communications online and
by phone, and your freedom of
assembly is threatened by the
tracking of your location by your
mobile phone. Can we afford to
risk the benefits of free association,
the social change brought by
activists and campaigners, or the
right to protest?

These freedoms are being
eroded, right now. The effects will
worsen over time, as each failure to
exercise our freedom builds upon
the last, and as more people
experience the chilling effects.

Bits of information that you
might not feel the need to hide can
be aggregated into a telling profile,
which might include things that
you actually do want to conceal. In
the case of data retention in
Australia, we have given away our
rights to privacy, and now share a
constant stream of:

• where we go,
• who we contact and when,
• and what we do on the internet.

With just a small portion of this
data, off-the-shelf software and
their own spare time, ABC News
readers found Will Ockenden’s
home, workplace and parents’
house.

The intrusion becomes all
the more spectacular when you
consider the data across a whole
population, the massive budgets of
the Five Eyes intelligence agencies,
and the constant progress of
artificial intelligence and big data
techniques.

Your interactions with the
world around you can reveal your
political and religious beliefs, your
desires, sympathies and convictions,
and things about yourself that
you aren’t even aware of (and they
might be wrong too).

Given enough data and time,
your behaviour might even be predicted.


Societal chilling effects

The combined result of these

second thoughts across the population

is a chilling effect on many

of the activities that are key to a

well-functioning democracy – activism,

journalism, and political

dissent, among others.

We all benefit from progress

that occurs when activists, journalists

and society as a whole are

able to freely engage in political

discourse and dissent. Many of the

positive changes of the last century

were only possible because of these

freedoms. For example, the 1967

referendum on including indigenous

Australians in the census, and

allowing the federal government to

make laws specifically benefiting

indigenous races, was only made

possible by sustained activism

throughout the 1950s and 60s.

Unfortunately, we are already

self-censoring. A 2013 survey of

US writers found that after the

revelations of the NSA’s mass surveillance

regime, 1 in 6 had avoided

writing on a topic they thought

would subject them to surveillance,

and a further 1 in 6 had seriously

considered doing so.

"Ask yourself: at every point

in history, who suffers the

most from unjustified surveillance?

It is not the privileged,

but the vulnerable. Surveillance

is not about safety,

it’s about power. It’s about control."


– Edward Snowden

Misuse & misappropriation

By creating databases and systems

of easy access to such a great

volume of personally revealing

information, we increase the scope

of mass surveillance, and therefore

the scope for infringements upon

our human rights.

East Germany is the most

extreme example of a surveillance

state in history. The Stasi – its infamous

security agency – employed

90,000 spies and had a network of

at least 174,000 informants. The

Stasi kept meticulous files on hundreds

of thousands of innocent citizens,

including the “pink files” on

people they believed to be homosexual,

and used this information to

psychologically harass, blackmail

and discredit people who became

dissenters. But that was before the

internet. Reflecting on the NSA’s

current systems of mass surveillance,

a former Stasi lieutenant

colonel said: “for us, this would

have been a dream come true”.

Even aside from the risk of

systematic state misbehaviour, in

Australia we know that the 2500

snoopers who have unrestricted


access to your data are subject to

“professional curiosity”, fallible

morals, and are only human, so will

make mistakes and become victims

of social engineering, blackmail or bribery.


This is most dangerous for the

most vulnerable people. For example,

if you have an angry or violent

ex-partner, you could be put in

mortal danger by them getting their

hands on this much detail about

your life.

Risk taking

Our “digital lives” are an accurate

reflection of our actual lives. Our

phone records expose where we go

and who we talk to, and our internet

usage can expose almost everything

about ourselves and what we

care about.

Even if we trust the motives of

our current governments, and every

person with authorised access to

our data, we are taking an incredible

risk. The systems of surveillance

that we entrench now may

be misappropriated and misused at

any time by future governments,

foreign intelligence agencies,

double agents, and opportunistic


The more data we have, the

more devastating its potential.

Gradual erosion

Each system of surveillance and

intrusion that we introduce erodes

our privacy and pushes us one step

further away from a free society.

While you may not have noticed

the impact yet, your privacy

has already been eroded. If we continue

along our current path, building

more powers into our systems

of surveillance, what was once

your private life will be whittled

away to nothing, and the freedoms

that we have taken for granted will

cease to exist.

As technology advances, we

are presented with a choice – will it
continue to offer an overall benefit

to society, or will we allow it to

be used as a tool for total intrusion

into our lives?

"Privacy is rarely lost in

one fell swoop. It is usually

eroded over time, little bits

dissolving almost imperceptibly

until we finally begin to

notice how much is gone."

– Why Privacy Matters Even

if You Have ‘Nothing to

Hide’, Daniel J. Solove

What next?

The governments of Australia,

New Zealand, Canada, the US and

others are poised to take a big step

in the wrong direction with the

Trans-Pacific Partnership (TPP).

The EFF explains why the TPP is

a huge threat to your privacy and

other rights.

• Take action – if you are a technologist,

join the Cryptohack

Network and fight back against

mass surveillance – cryptohack.net.


• Spread the privacy mindset –

we must foster understanding

of this issue in order to protect

ourselves from harmful laws

and fight against future invasions

of privacy. Please help

spread the knowledge, discuss

this article with a friend, tweet

it, share it, etc.

• Protect yourself – protect your

own data from mass surveillance.

This increases the cost

of mass surveillance and helps

others too. Read my advice

on protecting your data from

retention in Australia, the EFF’s

Surveillance Self-Defense

Guide, and Information Security

for Journalists. •

Robin is a consultant at ThoughtWorks.

He usually plays the role of software

developer or technical lead. He tries

to demystify cryptography and digital

privacy by facilitating cryptoparties in

Melbourne. Follow CryptoPartyAus or

him to hear about the next public cryptoparty.

He also organises cryptohack.net
and Melbourne-based meetups, where

software developers gather to contribute to

open source privacy tools like Pixelated,

helping to make them easier to use.

Reprinted with permission of the original author. First appeared at robindoherty.com.




The dying art of

mental math tricks


Without using a calculator,

can you tell how much

75² is? If you are reading

my blog, with high probability you

know that it is 5625. The last two

digits of the square of a number

ending in 5 are always 25. So we

only need to figure out the first two

digits. The first two digits are

7 times 8, or 56, which is the first

digit of the original number (7)

multiplied by the next number (8).
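The rule is mechanical enough to check in a couple of lines of Python (a quick sketch; square_ending_in_5 is just an illustrative name):

```python
def square_ending_in_5(n):
    """Square a number ending in 5: write d*(d+1), then append 25."""
    assert n % 10 == 5
    d = n // 10                          # everything before the final 5
    return int(str(d * (d + 1)) + "25")
```

So square_ending_in_5(75) gives 5625, by exactly the same steps as the mental version.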

I was good at mental arithmetic

and saved myself a lot of money

back in the Soviet Union. Every

time I shopped I calculated all the

charges as I stood at the cash register.

I knew exactly how much to

pay, which saved me from cheating

cashiers. To simplify my practice,

the shelves were empty, so I was

never buying too many items.

When I moved to the US, the

situation changed. Salespeople are

not trying to cheat, not to mention

that they use automatic cash registers.

In addition, the US sales taxes

are confusing. I remember how I

bought three identical chocolate

bars and the total was not divisible

by 3. So I completely dropped my

at-the-cash-registers mental training.

Being able to calculate fast was

somewhat useful many years ago.

But now there is a calculator on

every phone.

John H. Conway is a master

of mental calculations. He even

invented an algorithm to calculate

the day of the week for any day. He

taught it to me, and I too can easily

calculate that July 29 of 1926 was

Thursday. This is not useful any

more. If I google “what day of the

week is July 29, 1926,” the first

line in big letters says Thursday.
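You don't even need Google: Python's calendar arithmetic gives the same answer (a sanity check of the date quoted above, not Conway's algorithm itself):

```python
from datetime import date

# Look up the weekday of July 29, 1926 with ordinary date arithmetic.
day_name = date(1926, 7, 29).strftime("%A")
print(day_name)  # Thursday
```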

One day a long time ago I

was watching a TV show of a guy

performing mental tricks: remembering

large numbers, multiplying

large numbers, and so on. At the

end, to a grand fanfare, he showed

his crowning trick. A member from

the audience was to write down a

2-digit number, and to raise it to the

fifth power using a calculator. The

mental guy, once he was told what

the fifth power was, struggled with

great concentration and announced

the original number.

I was so underwhelmed. Everyone
knows that the last digit of a

number and its fifth power are the

same. So he only needs to guess

the first digit, which can easily be

estimated by the size of the fifth power.


I found the description of this

trick online. They recommend

memorizing the fifth powers of the

numbers from 1 to 9. After that,

you do the following. First, listen

to the number. Next, the last digit

of the number you hear is the same

as the last digit of the original

number. To figure out the first digit

of the original number, remove the

last 5 digits of the number you hear.

Finally, determine between which

fifth powers it fits. This makes

sense. Or, you could be lazy and

remember only meaningful digits.

For example, 39⁵ = 90,224,199, and
40⁵ = 102,400,000. So if the number

is below 100,000,000, then the first

digit is less than 4.

You do not have to remember

the complete fifth powers of every

digit. You just need to remember

significant ranges. Here they are:

first digit   range
1             between 100,000 and 2,400,000
2             between 3,000,000 and 24,000,000
3             between 24,000,000 and 100,000,000
4             between 100,000,000 and 300,000,000
5             between 300,000,000 and 750,000,000
6             between 750,000,000 and 1,600,000,000
7             between 1,600,000,000 and 3,200,000,000
8             between 3,200,000,000 and 5,900,000,000
9             above 5,900,000,000
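Put together, the whole act is a few lines of code (a sketch; guess_root is an illustrative name, and the magnitude comparison reproduces the table of ranges above):

```python
def guess_root(p):
    """Recover a two-digit n from p = n**5, the performer's way."""
    last = p % 10                  # n and n**5 end in the same digit
    # The first digit is fixed by the size of p: find the largest d
    # with (10*d)**5 <= p, which is exactly what the table encodes.
    first = max(d for d in range(1, 10) if (10 * d) ** 5 <= p)
    return 10 * first + last
```

For instance, guess_root(75 ** 5) returns 75.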

Besides, for this trick the guy
needed to guess one number out of
one hundred. He had a phenomenal

memory, so he could easily

have remembered one hundred

fifth powers. Actually he doesn’t

need to remember all the numbers,

only the differentiated features. On

this note, there is a cool thing you

can do. You can guess the number

before the whole fifth power is

announced. One caveat: a 1-digit

number x and 10x, when taken to

the fifth power, both begin with the

same long string of digits. So we

can advise the audience not to suggest

a 1-digit number or a number

divisible by 10, as that is too easy

(without telling the real reason).

Now we can match a start of

the fifth power to its fifth root.

The first three digits are almost always

enough. Here is the list of starting

digits and their match: (104,16),

(107,64), (115,41), (116,65),

(118,26), (12,66), (130,42),

(135,67), (141,17), (143,27),

(145,68), (147,43), (156,69),

(161,11), (164,44), (17,28),

(180,71), (184,45), (188,18),

(19,72), (2051,29), (2059,46),

(207,73), (221,74), (229,47),

(23,75), (247,19), (248,12),

(253,76), (254,48), (270,77),

(282,49), (286,31), (288,78),

(30,79), (33,32), (345,51),

(348,81), (370,82), (371,13),

(38,52), (391,33), (393,83),

(40,21), (4181,53), (4182,84),

(44,85), (454,34), (459,54),

(470,86), (498,87), (50,55),

(51,22), (525,35), (527,88),

(53,14), (550,56), (558,89),

(601,57), (604,36), (62,91),

(64,23), (656,58), (659,92),

(693,37), (695,93), (71,59),

(73,94), (75,15), (77,95), (792,38),

(796,24), (81,96), (84,61), (85,97),

(902,39), (903,98), (91,62),

(95,99), (97,25), (99,63).
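It is easy to verify the claim that three digits almost always suffice (a quick sketch; per the list above, only two prefixes stay ambiguous):

```python
# Group two-digit numbers (multiples of 10 excluded, as advised)
# by the first three digits of their fifth powers.
groups = {}
for n in range(10, 100):
    if n % 10 == 0:
        continue
    groups.setdefault(str(n ** 5)[:3], []).append(n)

ambiguous = {p: ns for p, ns in groups.items() if len(ns) > 1}
# Only the pairs (29, 46) and (53, 84) collide, matching the
# four-digit entries 2051/2059 and 4181/4182 in the list above.
```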

Or, you can come to the mental

guy’s performance and beat him

by shouting out the correct answer

before he can even finish receiving

the last digit. This would be cool

and cruel at the same time. •

Tanya Khovanova is a Lecturer at MIT

and a freelance mathematician. She

received her Ph.D. in Mathematics from

the Moscow State University in 1988. Her

current research interests lie in recreational

mathematics including puzzles, magic

tricks, combinatorics, number theory,

geometry, and probability theory. Her

website is located at tanyakhovanova.com,
her highly popular math blog at
blog.tanyakhovanova.com and her Number
Gossip website at numbergossip.com.

Reprinted with permission of the original author. First appeared at blog.tanyakhovanova.com.




The resolution of the

Bitcoin experiment


I’ve spent more than 5 years

being a Bitcoin developer. The

software I’ve written has been

used by millions of users, hundreds

of developers, and the talks I’ve

given have led directly to the creation

of several startups. I’ve talked

about Bitcoin on Sky TV and

BBC News. I have been repeatedly

cited by the Economist as a Bitcoin

expert and prominent developer. I

have explained Bitcoin to the SEC,

to bankers and to ordinary people I

met at cafes.

From the start, I’ve always said

the same thing: Bitcoin is an experiment

and like all experiments, it

can fail. So don’t invest what you

can’t afford to lose. I’ve said this in

interviews, on stage at conferences,

and over email. So have other well

known developers like Gavin Andresen

and Jeff Garzik.

But despite knowing that

Bitcoin could fail all along, the

now inescapable conclusion that it

has failed still saddens me greatly.

The fundamentals are broken and

whatever happens to the price in

the short term, the long term trend

should probably be downwards.

I will no longer be taking part in

Bitcoin development and have sold

all my coins.


Why has Bitcoin failed?

It has failed because the community

has failed. What was meant to be

a new, decentralised form of money

that lacked “systemically important

institutions” and “too big to fail”

has become something even worse:

a system completely controlled by

just a handful of people.

Worse still, the network is on

the brink of technical collapse. The

mechanisms that should have prevented

this outcome have broken

down, and as a result there’s no

longer much reason to think Bitcoin

can actually be better than the

existing financial system.

Think about it. If you had never

heard about Bitcoin before, would

you care about a payments network that:


• Couldn’t move your existing money


• Had wildly unpredictable fees

that were high and rising fast

• Allowed buyers to take back

payments they’d made after

walking out of shops, by simply

pressing a button (if you

aren’t aware of this “feature”

that’s because Bitcoin was only

just changed to allow it)

• Is suffering large backlogs and

flaky payments

• … which is controlled by China

• … and in which the companies

and people building it were in

open civil war?

I’m going to hazard a guess

that the answer is no.

Deadlock on the blocks

In case you haven’t been keeping

up with Bitcoin, here is how the

network looks as of January 2016.

The block chain is full. You

may wonder how it is possible for

what is essentially a series of files

to be “full”. The answer is that an

entirely artificial capacity cap of

one megabyte per block, put in

place as a temporary kludge a long

time ago, has not been removed

and as a result the network’s capacity

is now almost completely exhausted.


Figure 1 is a graph of block sizes.


The peak level in July was

reached during a denial-of-service

attack in which someone flooded

the network with transactions in an

attempt to break things, calling it

a “stress test”. So that level, about

700 kilobytes of transactions (or

less than 3 payments per second),

is probably about the limit of what

Bitcoin can actually achieve in practice.


NB: You may have read that

the limit is 7 payments per second.

That’s an old figure from 2011 and

Bitcoin transactions got a lot more

complex since then, so the true

figure is a lot lower.
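The arithmetic behind that estimate is simple (a sketch; the ~400-byte average transaction size is my assumption here, real transactions vary widely):

```python
# Roughly 700 kB of transactions fit per block, with one block
# mined every ~10 minutes on average.
BLOCK_BYTES = 700_000
BLOCK_INTERVAL_SECONDS = 600
AVG_TX_BYTES = 400          # assumed average size; varies in practice

tx_per_second = BLOCK_BYTES / AVG_TX_BYTES / BLOCK_INTERVAL_SECONDS
print(round(tx_per_second, 1))  # about 2.9 — "less than 3 payments per second"
```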

The reason the true limit seems

to be 700 kilobytes instead of the

theoretical 1000 is that sometimes

miners produce blocks smaller than

allowed and even empty blocks,

despite that there are lots of transactions

waiting to confirm — this

seems to be most frequently caused

by interference from the Chinese

“Great Firewall” censorship system.

More on that in a second.

If you look closely, you can

see that traffic has been growing

since the end of the 2015 summer

months. This is expected. I wrote

about Bitcoin’s seasonal growth

patterns back in March.

Figure 2 shows the weekly average

block sizes.

So the average is nearly at the

peak of what can be done. Not surprisingly

then, there are frequent

periods in which Bitcoin can’t

keep up with the transaction load

being placed upon it and almost all

blocks are the maximum size, even

when there is a long queue of transactions

waiting. You can see this in

the size column (the 750kb blocks

come from miners that haven’t

properly adjusted their software) in

Figure 3.

When networks run out of

capacity, they get really unreliable.

That’s why so many online attacks

are based around simply flooding

a target computer with traffic. Sure

enough, just before Christmas payments

started to become unreliable

and at peak times backlogs are now

becoming common.

Quoting a news post by ProHashing,
a Bitcoin-using business:

Some customers contacted

Chris earlier today asking

why our bitcoin payouts

didn’t execute …

The issue is that it’s now officially

impossible to depend

upon the bitcoin network

anymore to know when or if

your payment will be transacted,

because the congestion

is so bad that even minor

spikes in volume create

dramatic changes in network

conditions. To whom is it acceptable

that one could wait

either 60 minutes or 14 hours,

chosen at random?

It’s ludicrous that people are

actually writing posts on

reddit claiming that there is

no crisis. People were criticizing

my post yesterday on

the grounds that I somehow

overstated the seriousness of

the situation. Do these people

actually use the bitcoin network

to send money everyday?

ProHashing encountered another

near-miss between Christmas

and New Year, this time because a

payment from an exchange to their

wallet was delayed.

Bitcoin is supposed to respond

to this situation with automatic fee

rises to try and get rid of some users,

and although the mechanisms


behind it are barely functional

that’s still sort of happening: it is

rapidly becoming more and more

expensive to use the Bitcoin network.

Once upon a time, Bitcoin

had the killer advantage of low

and even zero fees, but it’s now

common to be asked to pay more

to miners than a credit card would charge.
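The fee mechanism works like an auction for scarce block space: miners take the highest-paying transactions first, so when demand exceeds capacity the price of inclusion rises and low-fee users wait. A minimal sketch of that auction (the sizes and fee rates are illustrative, not real network data):

```python
# Minimal sketch of a block-space fee auction: miners include the
# highest fee-per-byte transactions first; the rest wait in the queue.
# All numbers are illustrative assumptions, not real network data.

BLOCK_CAPACITY_BYTES = 1_000_000

def fill_block(mempool):
    """mempool: list of (txid, size_bytes, fee_per_byte).
    Returns (included, waiting) txids for the next block."""
    remaining = BLOCK_CAPACITY_BYTES
    included, waiting = [], []
    # The highest fee rate wins the auction for scarce block space.
    for txid, size, fee_rate in sorted(mempool, key=lambda t: t[2], reverse=True):
        if size <= remaining:
            remaining -= size
            included.append(txid)
        else:
            waiting.append(txid)
    return included, waiting

mempool = [
    ("a", 600_000, 40),  # high fee rate: included
    ("b", 300_000, 30),  # included
    ("c", 300_000, 10),  # low fee rate: priced out of this block
]
included, waiting = fill_block(mempool)
print(included, waiting)  # ['a', 'b'] ['c']
```

As the backlog grows, the fee needed to avoid ending up in `waiting` keeps climbing.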


Why has the capacity limit not been raised?
Because the block chain is controlled

by Chinese miners, just two

of whom control more than 50% of

the hash power. At a recent conference

over 95% of hashing power

was controlled by a handful of

guys sitting on a single stage. The

miners are not allowing the block

chain to grow.

Why are they not allowing it to grow?
Several reasons. One is that

the developers of the “Bitcoin

Core” software that they run have

refused to implement the necessary

changes. Another is that the miners

refuse to switch to any competing

product, as they perceive doing so

as “disloyalty” —and they’re terrified

of doing anything that might

make the news as a “split” and

cause investor panic. They have

chosen instead to ignore the problem

and hope it goes away.

And the final reason is that the

Chinese internet is so broken by

their government’s firewall that

moving data across the border

barely works at all, with speeds

routinely worse than what mobile

phones provide. Imagine an entire

country connected to the rest

of the world by cheap hotel wifi,

and you’ve got the picture. Right

now, the Chinese miners are able

to — just about — maintain their

connection to the global internet

and claim the 25 BTC reward

($11,000) that each block they create

gives them. But if the Bitcoin

network got more popular, they

fear taking part would get too difficult

and they’d lose their income

stream. This gives them a perverse

financial incentive to actually try

and stop Bitcoin becoming popular.

Many Bitcoin users and observers

have been assuming up until

very recently that somehow these

problems would all sort themselves

out, and of course the block chain

size limit would be raised. After all,

why would the Bitcoin community

… the community that has championed

the block chain as the future

of finance … deliberately kill itself

by strangling the chain in its crib?

But that’s exactly what is happening.

The resulting civil war has

seen Coinbase — the largest and

best known Bitcoin startup in the

USA — be erased from the official

Bitcoin website for picking the

“wrong” side and banned from the

community forums. When parts

of the community are viciously

turning on the people that have

introduced millions of users to the

currency, you know things have got

really crazy.

Nobody knows what’s

going on

If you haven’t heard much

about this, you aren’t alone. One

of the most disturbing things that

took place over the course of 2015

is that the flow of information to

investors and users has dried up.

In the span of only about eight

months, Bitcoin has gone from

being a transparent and open community

to one that is dominated by

rampant censorship and attacks on

bitcoiners by other bitcoiners. This

transformation is by far the most

appalling thing I have ever seen,

and the result is that I no longer

feel comfortable being associated

with the Bitcoin community.

Bitcoin is not intended to be an

investment and has always been

advertised pretty accurately: as an

experimental currency which you

shouldn’t buy more of than you can

afford to lose. It is complex, but

that never worried me because all

the information an investor might

want was out there, and there’s an

entire cottage industry of books,

conferences, videos and websites

to help people make sense of it all.

That has now changed.

Most people who own Bitcoin

learn about it through

the mainstream media.
Whenever a story

goes mainstream

the Bitcoin price

goes crazy, then

the media report

on the price rises

and a bubble happens.

Stories about

Bitcoin reach newspapers and magazines

through a simple process:

the news starts in a community forum,

then it’s picked up by a more

specialised community/tech news

website, then journalists at general

media outlets see the story on those

sites and write their own versions.

I’ve seen this happen over and over

again, and frequently taken part in

it by discussing stories with journalists.

In August 2015 it became clear

that due to severe mismanagement,

the “Bitcoin Core” project that

maintains the program that runs the

peer-to-peer network wasn’t going

to release a version that raised

the block size limit. The reasons

for this are complicated and discussed

below. But obviously, the

community needed the ability to

keep adding new users. So some

long-term developers (including

me) got together and developed the

necessary code to raise the limit.

That code was called BIP 101 and

we released it in a modified version

of the software that we branded

Bitcoin XT. By running XT, miners

could cast a vote for changing

the limit. Once 75% of blocks

were voting for the change the

rules would be adjusted and bigger


blocks would be allowed.
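The 75% activation rule can be expressed in a few lines: look at a sliding window of recent blocks and check what fraction signal support for the change. This is a simplified sketch; the window size and the boolean vote flag are illustrative stand-ins for BIP 101's actual block-version signalling:

```python
# Simplified sketch of miner voting for a rule change, in the spirit
# of BIP 101: bigger blocks activate once at least 75% of a recent
# window of blocks signal support. The window size and boolean votes
# are illustrative simplifications of the real block-version rule.

WINDOW = 1000
THRESHOLD = 0.75

def bigger_blocks_activated(recent_blocks):
    """recent_blocks: list of bools, True if that block voted for the change."""
    window = recent_blocks[-WINDOW:]
    return sum(window) / len(window) >= THRESHOLD

# 800 of the last 1000 blocks vote yes, so the change activates.
votes = [True] * 800 + [False] * 200
print(bigger_blocks_activated(votes))  # True
```

Because miners vote with the blocks they already produce, no flag day or central coordinator is needed.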

The release of Bitcoin XT

somehow pushed powerful emotional

buttons in a small number

of people. One of them was a guy

who is the admin of the bitcoin.org

website and top discussion forums.

He had frequently allowed discussion

of outright criminal activity

on the forums he controlled, on

the grounds of freedom of speech.

But when XT launched, he made a

surprising decision. XT, he claimed,

did not represent the “developer

consensus” and was therefore

not really Bitcoin. Voting was an

abomination, he said, because:

“One of the great things about

Bitcoin is its lack of democracy”

So he decided to do whatever it

took to kill XT completely, starting

with censorship of Bitcoin’s primary

communication channels: any

post that mentioned the words “Bitcoin

XT” was erased from the discussion

forums he controlled, XT

could not be mentioned or linked

to from anywhere on the official

bitcoin.org website and, of course,

anyone attempting to point users to

other uncensored forums was also

banned. Massive numbers of users

were expelled from the forums and

prevented from expressing their views.
As you can

imagine, this

enraged people.

Read the comments

on the

announcement to

get a feel for it.


Some users found

their way to a new uncensored

forum. Reading it is a sad thing.

Every day for months I have seen

raging, angry posts railing against

the censors, vowing that they will

be defeated.

But the inability to get news

about XT or the censorship itself

through to users has some problematic consequences.
For the first time, investors

have no obvious way to get a

clear picture of what’s going on.

Dissenting views are being systematically

suppressed. Technical

criticisms of what Bitcoin Core

is doing are being banned, with

misleading nonsense being peddled

in its place. And it’s clear that

many people who casually bought

into Bitcoin during one of its hype

cycles have no idea that the system

is about to hit an artificial limit.

This worries me a great deal.

Over the years governments have

passed a large number of laws

around securities and investments.

Bitcoin is not a security and I do

not believe it falls under those laws,

but their spirit is simple enough:

make sure investors are informed.

When misinformed investors lose

money, government attention frequently follows.
Why is Bitcoin Core keeping the limit?

People problems.


When Satoshi left, he handed

over the reins of the program we

now call Bitcoin Core to Gavin

Andresen, an early contributor.

Gavin is a solid and experienced

leader who can see the big picture.

His reliable technical judgement is

one of the reasons I had the confidence

to quit Google (where I had

spent nearly 8 years) and work on

Bitcoin full time. Only one tiny

problem: Satoshi never actually

asked Gavin if he wanted the job,

and in fact he didn’t. So the first

thing Gavin did was grant four

other developers access to the code

as well. These developers were

chosen quickly in order to ensure

the project could easily continue

if anything happened to him. They

were, essentially, whoever was

around and making themselves

useful at the time.

One of them, Gregory Maxwell,

had an unusual set of views: he

once claimed he had mathematically

proven Bitcoin to be impossible.

More problematically, he did not

believe in Satoshi’s original vision.

When the project was first

announced, Satoshi was asked how

a block chain could scale to a large

number of payments. Surely the

amount of data to download would

become overwhelming if the idea

took off? This was a popular criticism

of Bitcoin in the early days

and Satoshi fully expected to be

asked about it. He said:

“The bandwidth might not be

as prohibitive as you think …

if the network were to get [as

big as VISA], it would take

several years, and by then,

sending [the equivalent of] 2

HD movies over the Internet

would probably not seem like

a big deal.”

It’s a simple argument: look at

what existing payment networks

handle, look at what it’d take for

Bitcoin to do the same, and then

point out that growth doesn’t

happen overnight. The networks

and computers of the future will

be better than today. And indeed

back-of-the-envelope calculations

suggested that, as he said to me, “it

never really hits a scale ceiling”

even when looking at more factors

than just bandwidth.

Maxwell did not agree with this

line of thinking. From an interview

in December 2014:

Problems with decentralization

as bitcoin grows are not

going to diminish either, according

to Maxwell: “There’s

an inherent tradeoff between

scale and decentralization

when you talk about transactions

on the network.”

The problem, he said, is that

as bitcoin transaction volume

increases, larger companies

will likely be the only ones

running bitcoin nodes because

of the inherent cost.

The idea that Bitcoin is inherently

doomed because more users means

less decentralisation is a pernicious

one. It ignores the fact that despite

all the hype, real usage is low,

growing slowly and technology


gets better over time. It is a belief

Gavin and I have spent much time

debunking. And it leads to an obvious

but crazy conclusion: if decentralisation

is what makes Bitcoin

good, and growth threatens decentralisation,

then Bitcoin should not

be allowed to grow.

Instead, Maxwell concluded,

Bitcoin should become a sort of

settlement layer for some vaguely

defined, as yet un-created

non-blockchain based system.

The death spiral begins

In a company, someone who did

not share the goals of the organisation

would be dealt with in a

simple way: by firing him.

But Bitcoin Core is an open

source project, not a company.

Once the 5 developers with commit

access to the code had been chosen

and Gavin had decided he did not

want to be the leader, there was no

procedure in place to ever remove

one. And there was no interview or

screening process to ensure they

actually agreed with the project’s goals.
As Bitcoin became more popular

and traffic started approaching

the 1mb limit, the topic of raising

the block size limit was occasionally

brought up between the

developers. But it quickly became

an emotionally charged subject.

Accusations were thrown around

that raising the limit was too risky,

that it was against decentralisation,

and so on. Like many small groups,

people prefer to avoid conflict. The

can was kicked down the road.

Complicating things further,

Maxwell founded a company that

then hired several other developers.

Not surprisingly, their views then

started to change to align with that

of their new boss.

Co-ordinating software upgrades

takes time, and so in May

2015 Gavin decided the subject

must be tackled once and for

all, whilst there was still about 8

months remaining. He began writing

articles that worked through the

arguments against raising the limit,

one at a time.

But it quickly became apparent

that the Bitcoin Core developers

were hopelessly at loggerheads.

Maxwell and the developers he had

hired refused to contemplate any

increase in the limit whatsoever.

They were barely even willing to

talk about the issue. They insisted

that nothing be done without “consensus”.

And the developer who

was responsible for making the

releases was so afraid of conflict

that he decided any controversial

topic in which one side might “win”

simply could not be touched at all,

and refused to get involved.

Thus despite the fact that exchanges,

users, wallet developers,

and miners were all expecting a

rise, and indeed, had been building

entire businesses around the assumption

that it would happen, 3 of

the 5 developers refused to touch

the limit.


Meanwhile, the clock was ticking.
Massive DDoS attacks on

XT users

Despite the news blockade, within

a few days of launching Bitcoin XT

around 15% of all network nodes

were running it, and at least one

mining pool had started offering

BIP101 voting to miners.

That’s when the denial of

service attacks started. The attacks

were so large that they disconnected

entire regions from the internet:

“I was DDos’d. It was a massive

DDoS that took down

my entire (rural) ISP. Everyone

in five towns lost their

internet service for several

hours last summer because of

these criminals. It definitely

discouraged me from hosting nodes.”
In other cases, entire datacenters

were disconnected from the internet

until the single XT node inside

them was stopped. About a third

of the nodes were attacked and

removed from the internet in this way.
Worse, the mining pool that

had been offering BIP101 was also

attacked and forced to stop. The

message was clear: anyone who

supported bigger blocks, or even

allowed other people to vote for

them, would be assaulted.

The attackers are still out there.

When Coinbase, months after the

launch, announced they had finally

lost patience with Core and would

run XT, they too were forced offline

for a while.

Bogus conferences

Despite the DoS attacks and censorship,

XT was gaining momentum.

That posed a threat to Core,

so a few of its developers decided

to organise a series of conferences

named “Scaling Bitcoin”: one in

August and one in December. The

goal, it was claimed, was to reach

“consensus” on what should be

done. Everyone likes a consensus

of experts, don’t they?

It was immediately clear to

me that people who refused to

even talk about raising the limit

would not have a change of heart

because they attended a conference,

and moreover, with the start

of the winter growth season there

remained only a few months to get

the network upgraded. Wasting

those precious months waiting for

conferences would put the stability

of the entire network at risk. The

fact that the first conference actually

banned discussion of concrete

proposals didn’t help.

So I didn’t go.

Unfortunately, this tactic was

devastatingly effective. The community

fell for it completely. When

talking to miners and startups, “we

are waiting for Core to raise the

limit in December” was one of

the most commonly cited reasons

for refusing to run XT. They were

terrified of any media stories about

a community split that might hurt

the Bitcoin price and thus, their own businesses.
Now the last conference has

come and gone with no plan to

raise the limit, some companies

(like Coinbase and BTCC) have

woken up to the fact that they got

played. But too late. Whilst the

community was waiting, organic

growth added another 250,000

transactions per day.

A non-roadmap

Jeff Garzik and Gavin Andresen,

the two of five Bitcoin Core

committers who support a block

size increase (and the two who

have been around the longest), both

have a stellar reputation within the

community. They recently wrote a

joint article titled “Bitcoin is Being

Hot-Wired for Settlement”.

Jeff and Gavin are generally

softer in their approach than I am.

I’m more of a tell-it-like-I-see-it

kinda guy, or as Gavin has delicately

put it, “honest to a fault”. So the

strong language in their joint letter

is unusual. They don’t pull any punches:
“The proposed roadmap currently

being discussed in the

bitcoin community has some

good points in that it does

have a plan to accommodate

more transactions, but it fails

to speak plainly to bitcoin

users and acknowledge key facts:
Core block size does not

change; there has been zero

compromise on that issue.

In an optimal, transparent,

open source environment, a

BIP would be produced …

this has not happened

One of the explicit goals of

the Scaling Bitcoin workshops

was to funnel the chaotic

core block size debate

into an orderly decision

making process. That did not

occur. In hindsight, Scaling

Bitcoin stalled a block size

decision while transaction

fee price and block space

pressure continue to increase.”

Failing to speak plainly, as

they put it, has become more and

more common. As an example, the

plan Gavin and Jeff refer to was

announced at the “Scaling Bitcoin”

conferences but doesn’t involve

making anything more efficient,

and manages an anemic 60%

capacity increase only through

an accounting trick (not counting

some of the bytes in each transaction).

It requires making huge

changes to nearly every piece of

Bitcoin-related software. Instead

of doing a simple thing and raising

the limit, it chooses to do an

incredibly complicated thing that

might buy months at most, assuming

a huge coordinated effort.

Replace by fee

One problem with using fees to

control congestion is that the fee

to get to the front of the queue

might change after you made a

payment. Bitcoin Core has a brilliant

solution to this problem — allow

people to mark their payments

as changeable after they’ve been

sent, up until they appear in the

block chain. The stated intention

is to let people adjust the fee

paid, but in fact their change also

allows people to change the payment to point back to themselves,

thus reversing it.
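The mechanics here (a later version of an unconfirmed transaction, spending the same inputs, replacing an earlier one if it pays a higher fee) can be sketched in a few lines. This is a toy model of the idea, not Bitcoin Core's actual implementation:

```python
# Toy model of replace-by-fee: an unconfirmed payment spending the
# same inputs can be replaced by a later version with a higher fee,
# even if the new version sends the money back to the sender.
# Illustrative sketch only; not Bitcoin Core's implementation.

class Mempool:
    def __init__(self):
        self.by_inputs = {}  # inputs spent -> (recipient, fee)

    def submit(self, inputs, recipient, fee):
        existing = self.by_inputs.get(inputs)
        # A replacement is accepted only if it pays a higher fee.
        if existing is None or fee > existing[1]:
            self.by_inputs[inputs] = (recipient, fee)

mempool = Mempool()
inputs = ("coin-1",)                        # the coin being spent
mempool.submit(inputs, "merchant", fee=10)  # original payment
mempool.submit(inputs, "sender", fee=20)    # "fee bump" back to self
print(mempool.by_inputs[inputs][0])  # prints 'sender': merchant is no longer paid
```

The merchant has no way to distinguish an honest fee bump from a reversal until a block confirms one version or the other.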

At a stroke, this makes using

Bitcoin useless for actually buying

things, as you’d have to wait for a

buyer’s transaction to appear in the

block chain … which from now on

can take hours rather than minutes,

due to the congestion.

Core’s reasoning for why this

is OK goes like this: it’s no big loss

because if you hadn’t been waiting

for a block before, there was a

theoretical risk of payment fraud,

which means you weren’t using

Bitcoin properly. Thus, making that

risk a 100% certainty doesn’t really

change anything.

In other words, they don’t recognise

that risk management exists

and so perceive this change as zero cost.
This protocol change will be

released with the next version of

Core (0.12), so will activate when

the miners upgrade. It was massively

condemned by the entire

Bitcoin community but the remaining

Bitcoin Core developers don’t

care what other people think, so the

change will happen.

If that didn’t convince you Bitcoin

has serious problems, nothing

will. How many people would

think bitcoins are worth hundreds

of dollars each when you soon won’t be able to use them in actual shops?

Conclusions
Bitcoin has entered exceptionally

dangerous waters. Previous crises,

like the bankruptcy of Mt Gox,

were all to do with the services and

companies that sprung up around

the ecosystem. But this one is

different: it is a crisis of the core

system, the block chain itself.

More fundamentally, it is a crisis

that reflects deep philosophical

differences in how people view the

world: either as one that should be

ruled by a “consensus of experts”,

or through ordinary people picking

whatever policies make sense to them.
Even if a new team was built to

replace Bitcoin Core, the problem

of mining power being concentrated

behind the Great Firewall would

remain. Bitcoin has no future

whilst it’s controlled by fewer than

10 people. And there’s no solution

in sight for this problem: nobody

even has any suggestions. For a

community that has always worried

about the block chain being

taken over by an oppressive government,

it is a rich irony.

Still, all is not yet lost. Despite

everything that has happened, in

the past few weeks more members

of the community have started

picking things up from where I am

putting them down. Where making

an alternative to Core was once

seen as renegade, there are now

two more forks vying for attention

(Bitcoin Classic and Bitcoin

Unlimited). So far they’ve hit the

same problems as XT but it’s possible

a fresh set of faces could find

a way to make progress.

There are many talented and

energetic people working in the

Bitcoin space, and in the past five

years I’ve had the pleasure of getting

to know many of them. Their

entrepreneurial spirit and alternative

perspectives on money, economics

and politics were fascinating

to experience, and despite how

it’s all gone down I don’t regret

my time with the project. I woke

up this morning to find people

wishing me well in the uncensored

forum and asking me to stay, but

I’m afraid I’ve moved on to other

things. To those people I say: good

luck, stay strong, and I wish you

the best. •

Reprinted with permission of the original author. First appeared at medium.com/@octskyward.

Mike is one of the developers of the Bitcoin

digital currency system. Previously Mike

was a senior software engineer at Google,

where he worked on Earth, Maps, anti-spam

systems, account signup abuse and

login risk analysis for improved account

security. He is the developer of the widely

used bitcoinj Java API, which has been

used by dozens of complex projects that

work with the Bitcoin protocol.

linode bit

HACKERBITS.COM and Ray's blog

have been hosted on Linode since 2010.

Why? Because their customer service

is excellent, and they are a great bargain for a

cloud-hosted virtual private server (VPS).

It's a VPS, so you'll have full root access.

In addition, the daily, weekly and snapshot

backup features are easy to use and have saved us

more times than we can remember.

We highly recommend using Linode for your

first VPS and beyond!

Try Linode with our referral link and we will

earn a small commission (at no additional cost

to you) that will help cover Hacker Bits’ server

hosting costs. You can also use our referral code below.
Remember, we at Hacker Bits recommend

Linode because they are helpful and useful, not

because of the small commission we earn if you

decide to buy something.

Referral URL: http://hackerbits.com/linode

Referral Code: 4e56fcfd8e1d9c25df482010fc1566ec8250387e


A beginner's guide to scaling

to 11 million+ users on

Amazon's AWS


How do you scale a system from one user to

more than 11 million users? Joel Williams,

Amazon Web Services Solutions Architect,

gives an excellent talk on just that subject: AWS

re:Invent 2015 Scaling Up to Your First 10 Million Users.
If you are an advanced AWS user this talk is not

for you, but it’s a great way to get started if you are

new to AWS, new to the cloud, or if you haven’t kept

up with the constant stream of new features Amazon

keeps pumping out.

As you might expect, since this is a talk by Amazon, Amazon services are always front and center

as the solution to any problem. Their platform play

is impressive and instructive. It's obvious from how the pieces all fit together that Amazon has done a great job of

mapping out what users need and then making sure

they have a product in that space.

Some of the interesting takeaways:

• Start with SQL and only move to NoSQL when necessary.
• A consistent theme is to take components and separate them out. This allows those components to scale and fail independently. It applies to breaking up tiers and creating microservices.

• Only invest in tasks that differentiate

you as a business, don't

reinvent the wheel.

• Scalability and redundancy are not two separate concepts; you can often do both at the same time.
• There's no mention of costs.

That would be a good addition

to the talk as that is one of

the major criticisms of AWS.
The basics

AWS is in 12 regions around the world.
• A Region is a physical location

in the world where Amazon has

multiple Availability Zones.

There are regions in: North

America; South America; Europe; Middle East;

Africa; Asia Pacific.

• An Availability Zone (AZ) is generally a single

datacenter, though they can be constructed out of

multiple datacenters.

• Each AZ is separate enough that they have separate

power and Internet connectivity.

• The only connection between AZs is a low latency

network. AZs can be 5 or 15 miles apart, for

example. The network is fast enough that your


application can act like all AZs are in the same datacenter.
• Each Region has at least two Availability Zones.

There are 32 AZs total.

• Using AZs it’s possible to create a high availability

architecture for your application.

• At least 9 more Availability Zones and 4 more

Regions are coming in 2016.

AWS has 53 edge locations around the world.

• Edge locations are used by CloudFront, Amazon’s

Content Distribution Network (CDN) and

Route53, Amazon’s managed

DNS server.

• Edge locations enable users to

access content with a very low

latency no matter where they are

in the world.

Building Block Services

• AWS has created a number of

services that use multiple AZs

internally to be highly available

and fault tolerant. Here is a list

of what services are available.
• You can use these services

in your application, for a fee,

without having to worry about

making them highly available yourself.
• Some services that exist within

an AZ: CloudFront, Route 53,

S3, DynamoDB, Elastic Load

Balancing, EFS, Lambda, SQS,


• A highly available architecture can be created using

services even though they exist within a single AZ.
1 User

In this scenario you are the only user and you want to

get a website running.


Your architecture will look something like:

• Run on a single instance, maybe a type t2.micro.

Instance types comprise varying combinations of

CPU, memory, storage, and networking capacity

and give you the flexibility to choose the appropriate

mix of resources for your applications.

• The one instance would run the entire web stack,

for example: web app, database, management, etc.

• Use Amazon Route 53 for the DNS.

• Attach a single Elastic IP address to the instance.

• Works great, for a while.

Vertical scaling

• You need a bigger box. Simplest approach to

scaling is choose a larger instance type. Maybe a

c4.8xlarge or m3.2xlarge, for example.

• This approach is called vertical scaling.

• Just stop your instance and choose a new instance

type and you’re running with more power.

• There is a wide mix of different hardware configurations

to choose from. You can have a system

with 244 gigs of RAM (2TB of RAM types are

coming soon). Or one with 40 cores. There are

High I/O instances, High CPU Instances, High

storage instances.

• Some Amazon services come with a Provisioned

IOPS option to guarantee performance. The idea

is you can perhaps use a smaller instance type for

your service and make use of Amazon services

like DynamoDB that can deliver scalable services

so you don’t have to.

• Vertical scaling has a big problem: there’s no

failover, no redundancy. If the instance has a

problem your website will die. All your eggs are

in one basket.

• Eventually a single instance can only get so big.

You need to do something else.

Users > 10

Separate out a single host into multiple hosts.

• One host for the web site.

• One host for the database. Run any database you

want, but you are on the hook for the database administration.
• Using separate hosts allows the web site and the

database to be scaled independently of each other.

Perhaps your database will need a bigger machine

than your web site, for example.

Or instead of running your own database you could

use a database service.

• Are you a database admin? Do your really want to

worry about backups? High availability? Patches?

Operating systems?

• A big advantage of using a service is you can have

a multi Availability Zone database setup with a

single click. You won’t have to worry about replication

or any of that sort of thing. Your database

will be highly available and reliable.

As you might imagine Amazon has several fully

managed database services to sell you:

• Amazon RDS (Relational Database Service).

There are many options: Microsoft SQL Server,

Oracle, MySQL, PostgreSQL, MariaDB, Amazon Aurora.
• Amazon DynamoDB. A NoSQL managed database.

• Amazon Redshift. A petabyte-scale data warehouse.
More on Amazon Aurora:

• Automatic storage scaling up to 64TB. You no

longer have to provision the storage for your data.

• Up to 15 read replicas.

• Continuous (incremental) backups to S3.

• 6-way replication across 3 AZs. This helps you

handle failure.

• MySQL compatible.

Start with a SQL database instead of a NoSQL database.

• The suggestion is to start with a SQL database.

• The technology is established.

• There’s lots of existing code, communities, support

groups, books, and tools.

• You aren’t going to break a SQL database with

your first 10 million users. Not even close (unless your data is huge).

• Clear patterns to scalability.

When might you need to start with a NoSQL database?

• If you need to store > 5 TB of data in year one or

you have an incredibly data intensive workload.

• Your application has super low-latency requirements.

• You need really high throughput. You need to really

tweak the IOs you are getting both on the reads

and the writes.

• You don’t have any relational data.

Users > 100

• Use a separate host for the web tier.

• Store the database on Amazon RDS. It takes care

of everything.

• That’s all you have to do.

Users > 1000

• As architected your application has availability

issues. If the host for your web service fails then

your web site goes down.

• So you need another web instance in another

Availability Zone. That’s OK because the latency

between the AZs is in the low single digit milliseconds,

almost as if they were right next to each other.

• You also need to add a slave database to RDS that

runs in another AZ. If there’s a problem with the

master your application will automatically switch

over to the slave. There are no application changes

necessary on the failover because your application

always uses the same endpoint.

• An Elastic Load Balancer (ELB) is added to the

configuration to load balance users between your

two web host instances in the two AZs.

Elastic Load Balancer (ELB):

• ELB is a highly available managed load balancer.

The ELB exists in all AZs. It’s a single DNS

endpoint for your application. Just put it in Route

53 and it will load balance across your web host instances.
• The ELB has Health Checks that make sure traffic

doesn’t flow to failed hosts.

• It scales without you doing anything. If it sees
additional traffic it scales behind the scenes, both
horizontally and vertically. You don't have to
manage it. As your application scales, so does the ELB.
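The ELB behaviour described here (round-robin traffic across instances, skipping hosts that fail health checks) can be sketched in a few lines of Python. This is a toy illustration, not AWS code; the class and host names are invented:

```python
class ToyLoadBalancer:
    """Toy sketch of ELB behaviour: round-robin across hosts,
    skipping any that have failed their health checks."""

    def __init__(self, hosts):
        self.hosts = list(hosts)
        self.healthy = set(hosts)

    def mark_unhealthy(self, host):
        # A failed health check removes the host from the rotation.
        self.healthy.discard(host)

    def route(self):
        # Rotate through the host list until a healthy host comes up.
        for _ in range(len(self.hosts)):
            host = self.hosts.pop(0)
            self.hosts.append(host)
            if host in self.healthy:
                return host
        raise RuntimeError("no healthy hosts")
```

The point of the sketch: callers only ever talk to the balancer's single endpoint, never to a specific host, which is what lets instances come and go freely.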


Users > 10,000s - 100,000s

The previous configuration has 2 instances behind

the ELB; in practice you can have 1000s of instances

behind the ELB. This is horizontal scaling.

You’ll need to add more read replicas to the database,

to RDS. This will take load off the write master.

Improve performance and efficiency by lightening
the load on your web tier servers: move
some of the traffic elsewhere. Move static content in

your web app to Amazon S3 and Amazon CloudFront.

CloudFront is Amazon's CDN, which stores your
data in its 53 edge locations across the world.

Amazon S3 is an object-based store.

• It's not like EBS; it's not storage that's attached to
an EC2 instance. It's an object store, not a block store.


• It’s a great place to store static content, like javascript,

css, images, videos. This sort of content

does not need to sit on an EC2 instance.

• Highly durable: 11 9's of durability.

• Infinitely scalable; throw as much data at it as you
want. Customers store multiple petabytes of data

in S3.

• Objects of up to 5TB in size are supported.

• Encryption is supported. You can use Amazon’s

encryption, your own encryption, or an encryption service.


Amazon CloudFront is a cache for your content.

• It caches content at the edge locations to provide

your users the lowest latency access possible.

• Without a CDN your users will experience higher

latency access to your content. Your servers will


also be under higher load as they are serving the

content as well as handling the web requests.

• One customer needed to serve content at 60 Gbps.

The web tier didn’t even know that was going on,

CloudFront handled it all.

You can also lighten the load by shifting session state

off your web tier.

• Store the session state in ElastiCache or DynamoDB.

• This approach also sets your system up to support

auto scaling in the future.
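Moving session state off the web tier means any instance can serve any request. A minimal sketch of the idea, with a plain dict standing in for an ElastiCache or DynamoDB client (the class and key format are illustrative, not an AWS API):

```python
class ExternalSessionStore:
    """Sessions live outside the web tier, so any web instance can serve
    any user, and Auto Scaling can add or terminate instances freely."""

    def __init__(self, backend):
        # `backend` stands in for an ElastiCache/DynamoDB client;
        # a dict is enough for the sketch.
        self.backend = backend

    def save(self, session_id, data):
        self.backend[f"session:{session_id}"] = data

    def load(self, session_id):
        # Returns None for unknown sessions, like a cache miss.
        return self.backend.get(f"session:{session_id}")
```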

You can also lighten the load by caching data from

your database into ElastiCache.


You can also lighten the load by shifting dynamic

content to CloudFront.

• A lot of people know CloudFront can handle static

content, like files, but it can also handle some dynamic
content. This topic is not discussed further
in the talk.

Auto scaling

If you provision enough capacity to always handle

your peak traffic load, Black Friday, for example,

you are wasting money. It would be better to match

compute power with demand. That’s what Auto Scaling

let’s you do, the automatic resizing of compute

If you provision enough capacity to

always handle your peak traffic load...

“you are wasting money.

• Your database doesn't need to handle all the gets
for data. A cache can handle a lot of that work
and leaves the database to handle more important traffic.
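This read path is the classic cache-aside pattern. A sketch in Python, with plain dicts standing in for ElastiCache and the database (the function and key names are invented for illustration):

```python
def get_user(user_id, cache, db):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:
        value = db[user_id]   # cache miss: the database takes the hit once
        cache[key] = value    # populate so repeat reads skip the database
    return value
```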


Amazon DynamoDB - A managed NoSQL database

• You provision the throughput you want. You dial

up the read and write performance you want to pay for.


• Supports fast, predictable performance.

• Fully distributed and fault tolerant. It exists in multiple

Availability Zones.

• It’s a key-value store. JSON is supported.

• Documents up to 400KB in size are supported.

Amazon ElastiCache - a managed Memcached or Redis

• Managing a memcached cluster isn’t making you

more money so let Amazon do that for you. That’s

the pitch.

• The clusters are automatically scaled for you. It's a
self-healing infrastructure: if nodes fail, new nodes
are started automatically.


You can define the minimum and maximum size

of your pools. As a user you decide what’s the smallest

number of instances in your cluster and the largest

number of instances.

CloudWatch is a management service that’s embedded

into all applications.

• CloudWatch events drive scaling

• Are you going to scale on CPU utilization? Are

you going to scale on latency? On network traffic?

• You can also push your own custom metrics into

CloudWatch. If you want to scale on something

application specific you can push that metric into

CloudWatch and then tell Auto Scaling you want

to scale on that metric.
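A toy version of such a scaling decision, driven by one metric sample. The thresholds and names are made up for illustration; real Auto Scaling policies are configured declaratively against CloudWatch alarms, not coded like this:

```python
def desired_capacity(current, metric, scale_up_at=70, scale_down_at=30,
                     min_size=2, max_size=100):
    """Decide a new instance count from one metric sample, e.g. average CPU %."""
    if metric > scale_up_at:
        current += 1          # hot: add an instance
    elif metric < scale_down_at:
        current -= 1          # idle: remove an instance
    # Clamp to the pool's configured minimum and maximum sizes.
    return max(min_size, min(max_size, current))
```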

Users > 500,000+

• The change from the previous configuration is that
auto scaling groups are added to the web tier. The



auto scaling group includes the two AZs, but can

expand to 3 AZs, up to as many as are in the same

region. Instances can pop up in multiple AZs not

just for scalability, but for availability.

• The example has 3 web tier instances in each AZ,

but it could be thousands of instances. You could

say you want a minimum of 10 instances and a

maximum of 1000.

• ElastiCache is used to offload popular reads from

the database.

• DynamoDB is used to offload session data.

You need to add monitoring, metrics and logging.

• Host level metrics. Look at a single CPU instance

within an autoscaling group and figure out what’s

going wrong.

• Aggregate level metrics. Look at metrics on the

Elastic Load Balancer to get a feel for the performance

of the entire set of instances.

• Log analysis. Look at what the application is telling

you using CloudWatch logs. CloudTrail helps

you analyze and manage logs.

• External Site Performance. Know what your customers

are seeing as end users. Use a service like

New Relic or Pingdom.

You also need to...

• Know what your customers are saying. Is their

latency bad? Are they getting an error when they

go to your web page?

• Squeeze as much performance as you can from

your configuration. Auto Scaling can help with

that. You don’t want systems that are at 20% CPU



Don’t reinvent the wheel... Only invest in

tasks that differentiate you as a business.

The infrastructure is getting big; it can scale to 1000s

of instances. We have read replicas, we have horizontal

scaling, but we need some automation to help

manage it all; we don't want to manage each individual instance.


There’s a hierarchy of automation tools.

• Do it yourself: Amazon EC2, AWS CloudFormation.

• Higher-level services: AWS Elastic Beanstalk,

AWS OpsWorks

AWS Elastic Beanstalk

Manages the infrastructure for your application

automatically. It's convenient, but there's not a lot of control.


AWS OpsWorks

An environment where you build your application in

layers, you use Chef recipes to manage the layers of

your application.

It also enables Continuous Integration
and deployment.

AWS CloudFormation

• Been around the longest.

• Offers the most flexibility because it offers a templatized

view of your stack. It can be used to build

your entire stack or just components of the stack.

• If you want to update your stack, you update the
CloudFormation template and it will update just that
one piece of your application.

• Lots of control, but less convenient.

AWS CodeDeploy

• Deploys your code to a fleet of EC2 instances.

• Can deploy to one or thousands of instances.

• Code Deploy can point to an auto scaling configuration

so code is deployed to a group of instances.

• Can also be used in conjunction with Chef and Puppet.



Decouple infrastructure

• Use SOA/microservices. Take components from

your tiers and separate them out. Create separate

services like when you separated the web tier

from the database tier.

• The individual services can then be scaled independently.

This gives you a lot of flexibility for

scaling and high availability.

• SOA is a key component of the architectures built

by Amazon.

Loose coupling sets you free.

• You can scale and fail components independently.

• If a worker node fails while pulling work from SQS,
does it matter? No, just start another one. Things

are going to fail, let’s build an architecture that

handles failure.

• Design everything as a black box.

• Decouple interactions.

• Favor services with built-in redundancy and scalability

rather than building your own.

Don’t reinvent the wheel

• Only invest in tasks that differentiate you as a business.


• Amazon has a lot of services that are inherently

fault tolerant because they span multiple AZs.

For example: queuing, email, transcoding, search,

databases, monitoring, metrics, logging, compute.

You don’t have to build these yourself.

SQS: queueing service.

• The first Amazon service offered.

• It spans multiple AZs so it’s fault tolerant.

• It’s scalable, secure, and simple.

• Queuing can help your infrastructure by helping

you pass messages between different components

of your infrastructure.

• Take, for example, a photo CMS. The system that
collects the photos and the system that processes them should
be two different systems. They should be able to
scale independently. They should be loosely coupled.
Ingest a photo, put it in a queue, and workers

can pull photos off the queue and do something

with them.
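That ingest/worker split can be sketched with a plain in-process queue, with `queue.Queue` standing in for SQS (the photo-processing step is invented for illustration):

```python
import queue

photo_queue = queue.Queue()  # stands in for an SQS queue

def ingest(photo):
    # Web tier: accept the upload, enqueue it, and return immediately.
    photo_queue.put(photo)

def process_pending():
    # Worker tier: pull work at its own pace, independent of the web tier.
    processed = []
    while not photo_queue.empty():
        photo = photo_queue.get()
        processed.append(f"thumbnail-of-{photo}")  # pretend processing step
        photo_queue.task_done()
    return processed
```

Because the two sides only share the queue, either one can be scaled, restarted, or replaced without the other noticing.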

AWS Lambda: lets you run code without provisioning

or managing servers.

• Great tool for allowing you to decouple your infrastructure.


• In the Photo CMS example Lambda can respond

to S3 events, so when an S3 file is added, the Lambda
function to process it is automatically triggered.

• We’ve done away with EC2. It scales out for you

and there’s no OS to manage.

Users > 1,000,000+

Reaching a million users and above requires bits of

all the previous points:

• Multi-AZ

• Elastic Load Balancing between tiers. Not just on

the web tier, but also on the application tier, data

tier, and any other tier you have.

• Auto Scaling

• Service Oriented Architecture

• Serve Content Smartly with S3 and CloudFront

• Put caching in front of the DB

• Move state off the web tier.

In addition:

• Use Amazon SES to send email.

• Use CloudWatch for monitoring.

Users > 10,000,000+

As we get bigger we’ll hit issues in the data tier. You

will potentially start to run into issues with your database

around contention with the write master, which

basically means you can only send so much write

traffic to one server.

How do you solve it?

• Federation

• Sharding

• Moving some functionality to other types of DBs

(NoSQL, graph, etc)



Federation - splitting into multiple DBs based on function


• For example, create a Forums Database, a User

Database, a Products Database. You might have

had these in a single database before, now spread

them out.

• The different databases can be scaled independently

of each other.

• The downsides: you can't do cross-database queries;
it delays getting to the next strategy, which is sharding.
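In application code, federation often amounts to a routing table from functional area to database. A sketch (the hostnames and function name are invented):

```python
# Each functional area owns its own database and can be scaled on its own.
FEDERATED_DBS = {
    "forums":   "forums-db.internal.example",
    "users":    "users-db.internal.example",
    "products": "products-db.internal.example",
}

def db_host_for(area):
    """Route a query to the database that owns this functional area."""
    if area not in FEDERATED_DBS:
        raise ValueError(f"no federated database for {area!r}")
    return FEDERATED_DBS[area]
```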


• Start to build custom solutions to solve your particular

problem that nobody has ever done before.

If you need to serve a billion customers you may

need custom solutions.

• Deep analysis of your entire stack.

In review

• Use a multi-AZ infrastructure for reliability.

• Make use of self-scaling services like ELB, S3,

SQS, SNS, DynamoDB, etc.


Sharding - splitting one dataset across multiple hosts

• More complex at the application layer, but there’s

no practical limit on scalability.

• For example, in a Users Database, a third of the users
might be sent to one shard, another third to
a second shard, and the last third to a third shard.
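A common way to pick the shard is to hash the key, so a given user always lands on the same shard. A sketch (the shard names are invented):

```python
import hashlib

USER_SHARDS = ["users-shard-0", "users-shard-1", "users-shard-2"]

def shard_for(user_id):
    """Map a user id to a shard via a stable hash, so lookups are deterministic."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return USER_SHARDS[int(digest, 16) % len(USER_SHARDS)]
```

The modulo scheme is the simplest possible choice; note that changing the shard count remaps most keys, which is why real deployments often use consistent hashing instead.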

Moving some functionality to other types of DBs

• Start thinking about a NoSQL database.

• If you have data that doesn’t require complex

joins, like say a leaderboard, rapid ingest of

clickstream/log data, temporary data, hot tables,

metadata/lookup tables, then consider moving it

to a NoSQL database.

• This means they can be scaled independently of

each other.

Users > 11 million

• Scaling is an iterative process. As you get bigger

there's always more you can do.

• Fine tune your application.

• More SOA of features/functionality.

• Go from Multi-AZ to multi-region.

• Build in redundancy at every level. Scalability

and redundancy are not two separate concepts,

you can often do both at the same time.

• Start with a traditional relational SQL database.

• Cache data both inside and outside your infrastructure.

• Use automation tools in your infrastructure.

• Make sure you have good metrics/monitoring/logging

in place. Make sure you are finding out what

your customers experience with your application.

• Split tiers into individual services (SOA) so they

can scale and fail independently of each other.

• Use Auto Scaling once you’re ready for it.

• Don’t reinvent the wheel, use a managed service

instead of coding your own, unless it's absolutely necessary.


• Move to NoSQL if and when it makes sense. •

Todd runs highscalability.com.

Reprinted with permission of the original author. First appeared at highscalability.com.



Getting ahead vs. doing well


Two guys are running away

from an angry grizzly when

one stops to take off his hiking

boots and switches to running

shoes. "What are you doing," the

other guy yells, "those

aren't going

to allow you to outrun

the bear..." The

other guy smiles

and points out that

he doesn't have to

outrun the bear, just

his friend.

I was at a fancy

event the other day,

and it was held

in three different

rooms. All of these

fancy folks were

there, in fancy outfits,

etc. More than

once, I heard people

ask, "is this room

the best room?" It

wasn't enough that

the event was fancy. It mattered

that the room assigned was the

fanciest one.

Class rank. The most expensive

car. A 'better' neighborhood. A faster

marathon. More online followers.

A bigger pool...

One unspoken objection to

raising the minimum wage is that

people, other people, those people,

will get paid a

little more. Which


might make getting

ahead a little

harder. When we

raise the bottom,

this thinking goes,

it gets harder to

move to the top.

After a company

in Seattle

famously raised

its lowest wage

tier to $70,000,

two people (who

got paid more than

most of the other

workers) quit,

because they felt it

wasn't fair that people who weren't

as productive as they were were

going to get a raise.

They quit a good job, a job they

liked, because other people got a raise.


This is our culture of 'getting

ahead' talking.

This is the thinking that, "First

class isn't better because of the

seats, it's better because it's not

coach." (Several airlines have tried

to launch all-first-class seating, and

all of them have stumbled.)

There are two challenges here.

The first is that in a connection

economy, the idea that others need

to be in coach for you to be in first

doesn't scale very well. When we

share an idea or an experience, we

both have it, it doesn't diminish the

value, it increases it.

And the second, in the words

of moms everywhere: Life is more

fun when you don't compare. It's

possible to create dignity and be

successful at the same time. (In

fact, that might be the only way to

be truly successful.) •

Seth Godin is the founder of Yoyodyne

and Squidoo, a popular blogger and the

author of 18 bestselling books. His latest

project can be found at www.altmba.com.

Reprinted with permission of the original author. First appeared at sethgodin.typepad.com.




When to join a startup

Something has changed in

the last few years which has

made an increasing number

of people want to join startups. It

seemed to start around the time

the Social Network movie came

out - perhaps it’s just a

coincidence, but part of me

imagines a group of MBAs

sitting around watching

Justin Timberlake play

Sean Parker, thinking that

actually a billion dollars

would be pretty cool.

When choosing a startup

to work at, people often

overlook one of the most

important factors, which

is the size and stage of the

company. I’d argue this

is sometimes even more

important than the industry or the

idea itself.

Bootstrapping, aka two

people in a bedroom

The startup was recently formed,

and the company might not even

be legally incorporated yet. There’s


no funding and no-one is taking

any salary. You’re two, three or

four people in a bedroom living on

baked beans, working all hours of

the day to get a working prototype

to show to customers or investors.

The only people using your website

or app are a couple of sympathetic

friends and your mum.

Someone else tried to

convince you to join a different

startup a year earlier,

but they had no funding

and zero customers. They

wanted you to work without

a salary in return for 1%

of equity, which sounded

like a scam to you. You’re

glad you picked this startup

instead. You might not

be getting any salary, but

at least you’re joining as

a co-founder with a double-digit

equity stake. You

jokingly talk with your cofounders

about which island you’d each buy


when you sell out for billions in a

few years’ time.

The product vision is vague and

broad, but you’re working every

day to try to validate it with customers.

Anything is possible. The

entire direction of the company

might even be changing from week

to week.

Because nothing yet exists,

you need to bring it to life with a

combination of energy and raw

self-belief. There’s little specialisation

and no hierarchy; everyone

does whatever needs to be done. It

can be exhilarating.

It can also be exhausting at

times, lasting months or years before

you find customer traction or funding.



Seed stage

Now the company has a working

prototype, a few dozen regular

users and £250k of seed funding.

It’s enough to pay a handful of

people low salaries. Since you’re

too big to fit in a bedroom, you’re

renting desks at a coworking space

or scrounging them off a bigger company.


You cut a lot of corners, but

seem to ship an incredible amount

of code. People you don’t know

personally start to use your product

and it feels awesome. You’re

working hard to spread the word,

perhaps even starting to earn a little

revenue from your first paying customers.


You join at this stage for 3%

equity plus whatever basic salary

you need to survive. You talk about

which cities you’d buy houses in

when the company sells.

You might join as the first

professional programmer or designer,

but you’re still more-orless

on your own. When you join,

your first job is to go to the Apple

Store and buy yourself a laptop

because no-one thought to order

any equipment. There’s no “mentorship”

from more experienced

colleagues or even code-review,

simply because the company is still

too small.

“Marketing” consists of one of

the founders emailing your funding

announcement to TechCrunch.

As an employee, you have a

significant say in the direction of

the company, although the business

model might still shift substantially

underneath you. Even if you’re

not making the decisions, you hear

about them immediately.



There’s still a good chance that

the company fails to find “product-market

fit” and goes bankrupt

without raising more money. Your

mum tells you that you should have

stuck with your well-paying job at


Series A

The company has raised £3m from

a big-name VC and the team has

grown to 10-15 people. You’ve got

your own office with your logo on

the door, or at least your own area

in the coworking space.

The investor makes you submit

quarterly cashflow reports, so you

start keeping accounts for the first

time. This is done by whoever has

Excel installed on their computer.

The company eventually hires

an office manager to ease the burden

of administration on the founders.

They set up a payroll provider

and start doing peer reviews.

You join the company to do a

specific role, but you spend your

first week or two figuring out what

you should actually be working on.

At least they remembered to buy

you a laptop, a desk and a chair.

There are 3 or 4 engineers

in the team, so you start to do

code-review and set up a continuous

integration server. You lament

all the technical debt that was

accrued by those idiots who arrived

before you, slowing down the pace

of feature development.

A new product direction for the

company is decided in a meeting of

the three founders; this is presented

in an “all-hands” meeting of 12

employees. The company sets out

a quarterly product roadmap, and

you’re asked what you think should

be prioritised. You pick the projects

that most interest you.

You’ll certainly be able to earn

more money at a big company,

but the salaries are enough to live


comfortably on. You get 0.5% in

stock options with a salary that’s

10-30% below market rate. But you

reckon it’s worth the gamble - if

the company sells, you’ll be able to

buy a really nice house in London.

Still, it feels a long way off.

The marketing function is one

full-time person who does everything

from PR to paid advertising

and social media. The founder no

longer personally runs the company’s

Twitter account. Your company

regularly appears in the “ones to

watch” articles in tech press.

You’ve got thousands of paying

customers, and you occasionally

see people in public using your

app, which feels fantastic. Your

mum isn’t really sure what your

company does, but she’s happy

enough because you’re being paid

a real salary.

Series B

The company is now 2-3 years old

and has just raised £10-20m. It’s

using that money to grow the team,

increasing headcount to 60 people.

The founders got tired of the

dingy office, so they splashed out

on somewhere with lots of exposed

brickwork and glass walls. Everyone

is sitting on Herman Miller

chairs. You find these chairs online

and see they cost £1000 each.

There are 3 or 4 people in

finance and HR roles, and you’re

presented with a company handbook

when you join. You have a

couple of days’ training, but you

mostly figure it out along the way.

Marketing now has three digital

advertising people, and people

specialising in PR, content and

community management. You’ve

got millions of paying customers in

several countries, and your company

is regularly featured in the

business pages of the press.

Engineering has been split into


two or three separate teams, and

they’re trying recruit graduating

college students to fill the hiring

pipeline for next year.

You join in a very well-defined

role, after the team manager

recruited you. You meet founders

during your interview, but they

don’t always remember your name

now that you work here. The CEO sees

a management coach every fortnight.

A new product direction is

debated at a board meeting, and

senior management is informed

by the CEO. Management let their

teams know that changes are imminent.

Most people are focussed on

“repeatable processes” and scaling


The company doesn’t seem to

release new features very often, but

the app is used by some of your

mum’s friends, which makes her

very proud.

Your salary is in line with

what other companies are paying,

and you get 0.05% stock options.

You’ve already figured out which

house you’d put a deposit on when

the company sells. A sale seems

inevitable at some stage.

Series C

You’re joining a unicorn! After

5 years, the company has raised

£100m at a £1bn valuation. There

are some strange terms in the deal

structure that you’ve never seen

before, but the company’s HR department

tells you that's just “legal boilerplate.”


When you join, you’re granted

£50,000 of options, which you

work out is one two-hundredth of

a percent, or 0.005% of the company.

Friends in other startups tell

you to check what the investor’s

liquidation preference is, but no-one
at the company seems able to tell

you. They’re all too busy trying

to guess when the IPO is going to

be announced. You figure that any

kind of payout is a nice bonus. The

salary is pretty decent anyway.

You join in the company’s

newly opened Dublin office, part of

a new drive by the company to internationalise.

You’ve joined as one

of 6 new Sales Associates focussing

on European enterprise sales.

There’s 4 week intensive induction

training programme before you’re

allowed to talk to any customers.

The new office feels empty

at first, but you’re surprised how

quickly it fills up with new staff.

You ask where the marketing department

sit, but you’re told they’re

in a different office.

You see the CEO on the cover

of Forbes, but you’ve never actually

met him.

Your mum is already using the

app, although it’s starting to look a

bit dated compared to some of the

newer startups that are in TechCrunch.


Business Insider reports that

the company has fired the new VP



of Product and is pursuing a new

product direction. She had only

arrived from Twitter 6 months earlier.

An internal memo from HR a
couple of days later says that the company's
doing some minor internal restructuring.


You see consultants in the

meeting rooms, but you don’t know

what they’re doing.

A month later, you gather for a

televised all-hands meeting, broadcast

from the company’s HQ. The

CEO delivers a short speech announcing

that the company’s been

acquired by Oracle, explaining how

it will accelerate the company’s

ability to deliver on its vision. He

thanks everyone for being part of

the company’s incredible journey.

People in the audience are crying.

HR follows up explaining that

as part of the acquisition, some

departments will be downsized.

You’re told that your stock options

are “underwater”. When you Google

this, you find out that they’re

worthless. You’ve got some friends

in engineering who are being

given Oracle stock as a “retention

package”, but that doesn’t apply to

anyone in sales. You wonder why

you ever joined a startup in the first

place. •

Tom is currently CEO and co-founder of

Mondo - a new smartphone-based bank.

He previously co-founded Boso.com

and GoCardless - both companies were

accepted onto Y Combinator. He blogs

here at tomblomfield.com about software

engineering & startups.

Reprinted with permission of the original author. First appeared at tomblomfield.com.



Sending & receiving SMS on Linux




A little while ago I worked on a mixed media

theatre production called If There Was A Colour

Darker Than Black I’d Wear It. As part of

this production I needed to build a system that could

send and receive SMS messages from audience members.

Today we’re looking at the technical aspects of

how to do that using SMS Server Tools.

There are actually a couple of ways to obtain

incoming text messages:

• Using an SMS gateway and software API

• Using a GSM modem plugged into the computer,

and a prepaid SIM

The API route is the easiest way to go from a programming

aspect. It costs money, but most gateways

provide a nice API to interface with, and you’ll be

able to send larger volumes of messages.


BLACK had a few specific requirements that made

the gateway unsuitable.


1. We were projecting out of a van in regional

South Australia. We had terrible phone reception,

and mobile data was really flakey.

2. We were going to be sending text messages to

audience members later, and needed to have

the same phone number.

So, we got hold of a USB GSM modem and used a
prepaid phone SIM. This allowed us to receive unlimited
messages for free. However, we couldn't send
messages as quickly as we would have liked.

Modem selection

There are quite a few GSM modems to choose from.
You are looking for one with a USB interface and a
removable SIM. GSM modems that use wifi to connect
to computers won't work. You need to be able to
remove the SIM because most mobile data SIMs won't
allow you to send or receive SMS messages. The other
big requirement is Linux drivers, and Google is really
your friend here. The main thing to watch out for is
manufacturers changing the chipsets in minor product
revisions.

We ended up going with an old Vodafone modem
using a Huawei chipset. It shows up in Linux like this:

ID 12d1:1001 Huawei Technologies Co., Ltd.
E169/E620/E800 HSDPA Modem

SMS tools

SMS Tools is an open source software package for
interfacing with GSM modems on Linux. It includes
a daemon, SMSD, which receives messages. SMSD
is configured to run your own scripts when messages
are received, allowing you to do pretty much anything
you want with them.

Installation is straightforward on Ubuntu et al:

sudo apt-get install smstools

Next you'll need to configure the software for your
modem and scripts.

Configuration file

The configuration file is a bit unwieldy, but thankfully

it comes with some sane default settings. Edit the file

in your favourite text editor:

sudo vim /etc/smsd.conf

Modem Configuration

First up you will need to configure your modem. The

modem configuration is at the end of the config file,

and the exact parameters will vary depending on what

modem you have. Let’s have a look at what I needed:


[GSM1]
device = /dev/ttyUSB0

init = AT^CURC=0

incoming = yes

baudrate = 115200



device is where you specify the file descriptor for

your modem. If you’re using a USB modem, this will

almost always be /dev/ttyUSB0.

init specifies AT commands needed for your modem.

Some modems require initialisation commands

before they start doing anything. There are two strategies

here, either find the manual for your modem,

or take advantage of the SMSTools Forums to find a

working configuration from someone else.

incoming is there to tell SMSTools you want to

use this device to receive messages.

baudrate is, well, the baud rate needed for

talking to the device.

Like I said, there are many options to pick from,

but this is the bare minimum I needed. Check the

SMSTools website and forum for help!

Event Handler

The other big important part of the config file is the

event handler. Here you can specify a script/program

that is run every time a message is sent or received.

From this script you can do any processing you need,

and could even reply to incoming messages.

eventhandler = /home/michael/smstools/sql_


My script is some simple Bash which inserts a message

into a database, but more on that in a moment.

Sending messages


Sending SMS messages is super easy. Smsd looks

in a folder, specified in the config file, for outgoing

messages. Any files that appear in this folder get sent

automatically. By default this folder is /var/spool/sms/


An SMS file contains a phone number to send to

(including country code, but without the +) and the

body of the message. For example:

To: 61412345678

This is a text message sent by smstools.


Easy! Just put files that look like this into the folder

and you’re sending messages.

Receiving messages

Let’s have a better look at the event handler. Remember,

this script is called every time a message is sent

or received. The information about the message is

given to your program as command line arguments:

1. The event type. This will be either SENT,
RECEIVED or FAILED.
We're only interested in RECEIVED here.

2. The path to the SMS file. You read this file to

do whatever you need with the message.

You can use any programming language to work with

the message. However, it is very easy to use formail

and Bash. For example:


#!/bin/sh
# Run this script only when a message was received.
if [ "$1" != "RECEIVED" ]; then exit; fi;

# Extract data from the SMS file.
SENDER=`formail -zx From: < $2`
# The body is everything after the blank line that ends the headers
# (extracted with sed rather than formail).
TEXT=`sed -e '1,/^$/d' < $2`
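The same event handler can be written in Python if you'd rather not shell out to formail. This sketch parses the header/body layout of an smsd message file (the function name is mine):

```python
import sys

def parse_sms(path):
    """Read an smsd message file: header lines, a blank line, then the body."""
    with open(path, encoding="utf-8", errors="replace") as f:
        raw = f.read()
    header_text, _, body = raw.partition("\n\n")
    headers = {}
    for line in header_text.splitlines():
        key, _, value = line.partition(":")
        headers[key.strip()] = value.strip()
    return headers.get("From"), body.strip()

if __name__ == "__main__" and len(sys.argv) > 2:
    # smsd calls the handler as: eventhandler EVENT_TYPE MESSAGE_FILE
    if sys.argv[1] == "RECEIVED":
        sender, text = parse_sms(sys.argv[2])
        print(f"from {sender}: {text}")
```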


That’s all you need to write programs that can send

and receive SMS messages on Linux. Once you have

smsd actually talking to your modem it’s pretty easy.

However, in practice it’s also fragile.

The smsd log file is incredibly useful here. It lives

in /var/log/smstools/smsd.log

Here are some of the errors I encountered and what

to do about them:

Modem Not Registered

You’ll see an error that looks like this:



This means the modem has lost reception, and is trying

to re-establish a connection. Unfortunately there is

nothing you can do here but wait or, using a USB

extension cable, trying to find a spot with better reception.

Write To Modem Error

An error like this:

GSM1: write_to_modem: error 5: Input/output error

means the software can no longer communicate with the modem. This is usually caused by the modem being accidentally unplugged, the modem being plugged in after the system has powered up, or by an intermittent glitch in the USB driver. To fix this, do the following:

1. Stop smsd (sudo service smstools stop)

2. Unplug the modem

3. Wait 10 seconds or so

4. Plug the modem back in

5. Start smsd (sudo service smstools start)

Cannot Open Serial Port

You may see this error:

Couldn't open serial port /dev/ttyUSB0, error: No such file or directory

This occurs if you started the computer (and therefore smsd) before plugging in the modem. Follow the steps above to fix it.

So there you have it. Follow these steps and you can send and receive SMS messages on Linux, using a cheap prepaid SIM and GSM modem.

In the next post we'll be looking at exactly what I used this setup for. •

Michael is a freelance developer in South Australia. He obtained his PhD in 2013 researching novel user interfaces for spatial augmented reality. He likes working at the intersection of art and technology on projects that use new ways of interacting with computers. Michael is Chair of the Board and an announcer at community radio station Three D Radio in Adelaide, and likes making music and noise. @MichaelMarner

Reprinted with permission of the original author. First appeared at 20papercups.net.




How Elm made

our work better


Elm is a beginner friendly

functional reactive programming

language for building

web frontends. Choosing Elm for

a customer project made my job

nicer than ever and helped maintain

project velocity during months of

development. This boils down to

two things, in my opinion:

1. Elm restricts the way you

program, resulting in maintainable

code no matter what.

2. There are no runtime exceptions

so debugging is way

less of an issue.

At the Reactive 2015 conference, where I gave a lightning talk on stateless web UI rendering, many people asked me: "How hard is it to debug compiled Elm code on the browser?" I was more than a little confused by these questions, until I remembered what it's like to write JavaScript. You make a change, switch to browser, set up debugging stops, click on a few things in the app, check the debugger and go "Uhhh... How did that happen? Maybe I should console.log..."

Writing Elm, on the other hand, is like this: you make a change, check the superb compiler errors, fix them. Next. Of course, you should then switch to the browser and check that it actually does what you wanted, but the point is: you don't spend half the time coding digging through the debugger.

Elm compiler errors can be super helpful

Despite being a whole new language to learn, I would argue the gains of the language design can far outweigh the "loss of time" while learning. JavaScript is, after all, a very, very convoluted language with myriads of different libraries, frameworks, linters et cetera to try and make dealing with it less painful.

Based on my experiences, I wholeheartedly recommend Elm for all bigger frontend application projects from now on. There are parts that are more cumbersome than in JavaScript, for sure. But all things considered, I value the guarantees Elm provides far more than the ability to do things quick 'n' dirty.

How we chose Elm

In the Summer of 2015 I had a stroke of luck. I got into a project where there were no restrictions on the frontend technology. We were tasked with building a web application for a select few expert users, from scratch. The browser support target reflected the fact: latest…

I sat down with Henrik Saksela to discuss our options. The baseline assumption was that we would use React to build the frontend, but I wasn't convinced of its merits. We talked about Cycle.js, ClojureScript and Reagent, and my recent endeavors with Elm.

In the end, we decided to just give Elm a try and quickly fall back to Reagent or React if it doesn't work out. We figured we should try and make the trickiest parts (technology-wise) first, so we could fail fast. Here's how I originally put it in the project README:
Less complexity

The main properties and benefits of using Elm instead of plain JavaScript are the following:

• Strong static types → finding errors fast with readable compiler messages

• No null or undefined → impossible to leave possible problems unhandled

• Immutability & purity → readability and maintainability

• No runtime exceptions → incomparable reliability

• Reactive by design → FRP isn't opt-in, it is baked right into the language

Months later (see next figure)...

Tweet of project stats

The project

The application was a tool for quickly managing news website content. In essence, the articles on the site's main pages are curated by a handful of experts 24/7, and our tool was the means for doing that efficiently. Futurice was also responsible for the design, both user interaction and graphics, and building the backend service, so we had great cohesion within the whole project.

The interaction was heavily based on drag-and-drop. To place an article on the page, the user would drag it from the side panel into the main panel. Similarly, modules (article groups) could be dragged up and down on the page to determine their order.

The Elm Architecture outlines the basic pattern of Model, Update and View. This is in fact a mandatory separation in any Elm application. The language just works that way and there is no way around it. Everything in Elm is immutable, from "variables" to function definitions to records (which are a bit like JS objects). That means rendering a view, for example, cannot possibly have an effect on the application state. And that the dreaded "global state" is actually a very nice thing - since we can be sure nothing is changing it in secret.

The Elm pattern is the following:

• Define the shape of the data (model)

• Define Actions and how to react to them (update)

• Define how to show the state (view)

Note: Back when we started the project, StartApp wasn't as big a thing as it is now. It might have guided our approach in a different direction, but we feel our architectural choices resulted in a great solution in this case.

Dissecting the state

Having worked with Virtual DOM and immutable structures before, Henrik and I reasoned we could try and rely on the backend data -- forgoing frontend state completely. This simple idea worked out really well for us.

We came to think about "application state" in this manner:

• The UI can be in different states regarding views, ongoing drag-and-drop actions and so on, which should not persist between sessions. This is our UiState.

• The backend represents the real state of the world, or all data that should persist. This is our DataState.

UiState was handled like in any other Elm application, updating the state based on Actions.

The way we handled DataState was a bit more involved than the standard pattern, though:

• Define the shape of the data on the backend (model)

• Define Actions that get turned into Tasks

• Define HTTP call tasks that get turned into succeed/fail Actions

• Define how to react to the succeed/fail actions (update)

• Define how to show the state (view)

The Elm pattern

How our pattern differed from a standard Elm application was that instead of updating models immediately based on an Action, we used the actions to determine which HTTP calls are necessary to comply with the user's intent. These calls then resolve to either a failing or succeeding scenario. Both of these are then translated to updates - be it showing a notification about the error or changing the data. In short, we had a fully "pessimistic" UI that would save the state to the backend on every change. Pessimistic means that we never assume an operation to succeed, but instead we rely only on facts: what (if at all) the server responds.



The way we update the backend-provided data in the Elm application was the main kicker, though. Once we've POSTed a change to a list in the backend, we simply GET the whole list from the backend and replace the whole thing in our model. This means the state can never be inconsistent between the backend and the frontend. We also made sure only one of these tasks could be running at once simply by deferring data-changing user interactions until the UI had updated. The user could still scroll and click on things while the task was running, but not drag things from one place to another (which would imply a data change).

Our model for pessimistic UI updates

There were two main concerns with this approach: 1) is the UI responsive enough with backend-only updates, and 2) is it madness to discard and replace the whole model on update. As it turns out, concern 2 was mostly unfounded. The Virtual DOM in elm-html does the heavy lifting, so on the browser only an updated item gets re-rendered.

Concern 1 was valid though. As previously stated, our project was an expert tool. It would only be used from within the customer network, using desktop computers. In our experiments using a wireless connection (actual users have wired connections), we found the heaviest updates took about 600ms on average. This was before we optimized the caching, which sped things up ten-fold. As a result, pretty much all updates happen consistently in under 300ms, which is great!

Strictness - a mixed blessing


The strictness of Elm proved invaluable. Since the compiler won't let you disregard a potential failure even when just trying something out, there is no way for unhandled failures to end up in production code. Coupled with total immutability, the language itself enforces good functional programming practices.

The place where Elm's restrictions

can become hardships

are when dealing with the outside

world. For one, you need to

provide full modeling of the data

structure if you wish to parse a

JSON response from the server.

And you need to take into account

that the parsing may fail at that

point. This all seems obvious once

you get familiar with Elm's type

system, though. If your API is a

little "tricky" - for example the

response JSON can have certain

properties that define which other

properties are available - you may

need to jump through hoops to

make that work.

Another thing is interoperability

with JavaScript libraries. First

off though: Elm has its FRP and

functional utilities built-in, and

the elm-html package comes with

virtual-dom, so there's no need for

stuff like Lodash, React or Redux.

But if you do need to interact

with JavaScript, the mechanism for

that is called Ports. Ports are essentially

strictly typed event streams

to listen to (Elm to JS) or to push

events into (JS to Elm). This means

you will need to write some boilerplate-ish

code, both in Elm and

in JavaScript, in order to pass a

message to the other side. So the

more JavaScript libraries you need,

and the more different kinds of

objects you want to pass through,

the more boilerplate code you will

end up with.

That said, Elm is still fairly

young and seems to be quickly

gaining recognition in the aftermath

of the "functional web frontend

tsunami" React and friends

brought about. The fact that some commonly used library alternatives are still missing could soon change.

And while we're on the topic of

dependencies, Elm has one more

ace up its sleeve: the package

system enforces semantic versioning.

Because of the strict typing,

the package manager can infer any

outfacing changes to the package

source code and determine the version

number on its own. No more

sudden breaking because an NPM package upgraded from 0.14.3 to…



Go and learn Elm. Seriously. It is

the simplest language I have ever

tried, and the team has put a crazy

lot of effort into making the developer

experience as nice as possible.

The syntax may seem daunting

at first, but don't fret. It's like getting

a nice new sweater. A few days

in, you'll be familiar with it and

from then on it's like you've always

known it. In our project, two out of

three developers had never coded

in Elm before. Both of them got up

to speed and were productive in a

couple of weeks. Even the sceptic

told me that once he got over the

initial shock, he found Elm a very

nice language and a good fit for the project.
A compiled Elm application

has a whole FRP implementation

built in, so it might not make sense

to use it for very small things on a

mostly static web page. For these

kinds of uses, you may be better

off with e.g. PureScript. PureScript

is, however, much harder to learn

without a solid prior understanding

of functional programming concepts.

There are other differences

between the two as well, such as

the way they handle data. Elm uses

persistent data structures under the

hood, which means better performance

in big apps. PureScript

resorts to standard JavaScript data

structures, which results in smaller

compiled files.

To get started with Elm, I

recommend reading through the

official Elm documentation and

checking out the links at Awesome

Elm. When I was first learning the

language, I wrote an introductory

article that describes the basic syntax

and the model-update-view pattern:

Learning FP the hard way. •

Ossi has been writing web stuff since he

was in elementary school. He is always

trying to learn something new and interesting

— nowadays surrounded by the

brilliant people of Futurice.

Reprinted with permission of the original author. First appeared at futurice.com/blog.




Two weeks of rust


Disclaimer: I'm digging Rust. I lost

my hunger for programming from

doing too many sad commercial

projects. And now it's back. You

rock, Rust!


I spent about two weeks over

the Christmas/New Year break

hacking on emcache, a memcached

clone in Rust. Why a memcached

clone? Because it's a simple

protocol that I understand and is

not too much work to implement.

It turns out I was in for a really fun ride.



The build system

and the package

manager is one of

the best parts of

Rust. How often do you hear that

about a language? In Python I try

to avoid even having dependencies

if I can, and only use the standard

library. I don't want my users to

have to deal with virtualenv and

pip if they don't have to (especially

if they're not pythonistas). In

Rust you "cargo build". One step,

all your dependencies are fetched,

built, and your application with it.

No special cases, no build scripts,

no surprising behavior *whatsoever*.

That's it. You "cargo test". And

you "cargo build --release" which

makes your program 2x faster (did

I mention that llvm is pretty cool?)

Rust *feels* ergonomic. That's

the best word I can think of. With

every other statically compiled

language I've ever used too much

of my focus was being constantly

diverted from what I was trying to

accomplish to annoying little busy

work the compiler kept bugging me

about. For me Rust is the first statically

typed language I enjoy using.

Indeed, ergonomics is a feature in

Rust - RFCs talk about it a lot. And

that's important, since no matter

how cool your ideas for language

features are you want to make sure

people can use them without having

to jump through a lot of hoops.

Rust aims to be concise. Function

is fn, public is pub, vector is

vec, you can figure it out. You can

never win a discussion about conciseness

because something will

always be too long for someone

while being too short for someone

else. Do you want u64 or do you

want WholeNumberWithoutPlusOrMinusSignThatFitsIn64Bits?

The point is Rust is concise and

typeable, it doesn't require so much

code that you need an IDE to help

you type some of it.

Furthermore, it feels very composable.

As in: the things you make

seem to fit together well. That's

a rare quality in languages, and

almost never happens to me on a

first project in a new language. The

design of emcache is actually

nicely decoupled, and

it just got that way on the

first try. All of the components

are fully unit tested,

even the transport that

reads/writes bytes to/from

a socket. All I had to do

for that is implement a TestStream

that implements the traits Read and

Write (basically one method each)

and swap it in for a TcpStream.

How come? Because the components

provided by the stdlib *do*

compose that well.
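A sketch of what that swap can look like — illustrative only, not emcache's actual code, with `TestStream` and `echo` as made-up names:

```rust
use std::io::{self, Read, Write};

// A hypothetical in-memory stream standing in for TcpStream in unit tests.
// Reads are served from a pre-loaded buffer; writes are captured.
struct TestStream {
    input: Vec<u8>,   // bytes the test "sends" to the server
    pos: usize,
    output: Vec<u8>,  // bytes the code under test wrote back
}

impl Read for TestStream {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        let remaining = &self.input[self.pos..];
        let n = remaining.len().min(buf.len());
        buf[..n].copy_from_slice(&remaining[..n]);
        self.pos += n;
        Ok(n)
    }
}

impl Write for TestStream {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.output.extend_from_slice(buf);
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> { Ok(()) }
}

// Anything generic over Read + Write accepts TestStream and TcpStream alike.
fn echo<S: Read + Write>(stream: &mut S) -> io::Result<()> {
    let mut buf = [0u8; 64];
    let n = stream.read(&mut buf)?;
    stream.write_all(&buf[..n])
}

fn main() -> io::Result<()> {
    let mut s = TestStream { input: b"get foo\r\n".to_vec(), pos: 0, output: Vec::new() };
    echo(&mut s)?;
    println!("{}", String::from_utf8_lossy(&s.output));
    Ok(())
}
```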

But there is no object system!

Well, structs and impls basically

give you something close enough

that you can do OO modeling

anyway. It turns out you can even

do a certain amount of dynamic

dispatch with trait objects, but

that's something I read up on after

the fact. The one thing that is

incredibly strict in Rust, though,

is ownership, so when you design

your objects (let's just call them that, I don't know what else to

call them) you need to decide right

away whether an object that stores

another object will own or borrow

that object. If you borrow you need

to use lifetimes and it gets a bit hairy.


Parallelism in emcache is

achieved using threads and channels.

Think one very fast storage

and multiple slow transports. Channels

are async, which is exactly

what I want in this scenario. Like

in Scala, when you send a value

over a channel you don't actually

"send" anything, it's one big shared

memory space and you just transfer

ownership of an immutable value

in memory while invalidating

the pointer on the "sending" side

(which probably can be optimized

away completely). In practice,

channels require a little typedefing

overhead so you can keep things

clear, especially when you're

sending channels over channels.

Otherwise I tend to get lost in what

goes where. (If you've done Erlang/

OTP you know that whole dance

of a tuple in a tuple in a tuple, like

that Inception movie.) But this case

stands out as atypical in a language

where boilerplate is rarely needed.
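The kind of typedefing this means, sketched with hypothetical types (not emcache's) — note the reply channel travelling inside the command, a channel over a channel:

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Type aliases keep "what goes where" readable.
type Reply = Result<Option<String>, String>;
type Cmd = (String, Sender<Reply>);

// One fast storage thread serving many slow transports over an async channel.
fn spawn_storage() -> Sender<Cmd> {
    let (tx, rx) = channel::<Cmd>();
    thread::spawn(move || {
        for (key, reply_tx) in rx {
            // A toy "storage": pretend every key maps to its uppercase form.
            let _ = reply_tx.send(Ok(Some(key.to_uppercase())));
        }
    });
    tx
}

fn main() {
    let storage = spawn_storage();
    // Each request carries its own reply channel.
    let (reply_tx, reply_rx) = channel();
    storage.send(("foo".to_string(), reply_tx)).unwrap();
    println!("{:?}", reply_rx.recv().unwrap());
}
```

Every transport clones the storage's `Sender` and ships a fresh reply channel with each request, so no replies ever get crossed.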

Macros. I bet you expected

these to be on the list. To be honest,

I don't have strong feelings

about Rust's macros. I don't think

of them as a unit of design (Rust

is not a lisp), that's what traits are

for. Macros are more like an escape

hatch for unpleasant situations.

They are powerful and mostly nice,

but they have some weird effects

too in terms of module/crate visibility

and how they make compiler

error messages look (slightly more

confusing I find).

The learning resources have

become very good. The Rust book

is very well written, but I found

it a tough read at first. Start with

Rust by example, it's great. Then

do some hacking and come back to

"the book", it makes total sense to

me now.

No segfaults, no uninitialized

memory, no coercion bugs, no data

races, no null pointers, no header

files, no makefiles, no autoconf,

no cmake, no gdb. What if all the

problems of c/c++ were fixed with

one swing of a magic wand? The

future is here, people.

Finally, Rust *feels* productive.

In every statically compiled

language I feel I would go way

faster in Python. In Rust I'm not so

sure. It's concise, it's typeable and

it's composable. It doesn't force me

to make irrelevant nit picky decisions

that I will later have to spend

tons of time refactoring to recover

from. And productivity is a sure

way to happiness.


The standard library is rather

small, and you will need to go elsewhere

even for certain pretty simple

things like random numbers or

a buffered stream. The good news

is that Rust's crates ecosystem has

already grown quite large and there

seem to be crates for many of these

things, some even being incubated

to join the standard library later on.

While trying to be concise,

Rust is still a bit wordy and syntax

heavy with all the pointer types and

explicit casts that you see in typical

code. So it's not *that easy* to

read, but I feel once you grasp the

concepts it does begin to feel very

logical. I sure wouldn't mind my

tests looking a bit simpler - maybe

it's just my lack of Rust foo still.

The borrow checker is tough,

everyone's saying this. I keep

running into cases where I need to

load a value, do a check on it, and

then make a decision to modify or

not. Problem is the load requires a

borrow, and then another borrow is

used in the check, which is enough

to break the rules. So far I haven't

come across a case I absolutely

couldn't work around with scopes

and shuffling code around, but

I wouldn't call it fun - nor is the

resulting code very nice.
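A sketch of that load-check-modify dance on a plain map (my own illustrative example, not code from emcache) — the extra block ends the immutable borrow from the load before the mutation takes its own borrow:

```rust
use std::collections::HashMap;

// Bump a counter, but only if its current value is small.
fn bump_if_small(map: &mut HashMap<String, u64>, key: &str) {
    let small = {
        // Immutable borrow of `map` lives only inside this block.
        match map.get(key) {
            Some(&v) => v < 10,
            None => false,
        }
    };
    if small {
        // The borrow above has ended, so a mutable borrow is allowed now.
        if let Some(v) = map.get_mut(key) {
            *v += 1;
        }
    }
}

fn main() {
    let mut counters = HashMap::new();
    counters.insert("hits".to_string(), 3u64);
    bump_if_small(&mut counters, "hits");
    println!("{}", counters["hits"]);
}
```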

Closures are difficult. In your

run-of-the-mill language I would

say "put these lines in a closure, I'll

run them later and don't worry your

pretty little head about it". Not so

in Rust because of move semantics

and borrowing. I was trying to

solve this problem: how do I wrap

(in a minimally intrusive way) an

arbitrary set of statements so that I

can time their execution (in Python

this would be a context manager)?

This would be code that might

mutate self, refers to local vars

(which could be used again after

the closure), returns a value and so

on. It appears tricky to solve in the

general case, still haven't cracked it.


*mut T is tricky. I was trying to

build my own LRU map (before I

knew there was a crate for it), and

given Rust's lifetime rules you can't

do circular references in normal

safe Rust. One thing *has to*

outlive another in Rust's lifetime

model. So I started hacking together

a linked list using *mut T (as

you would) and I realized things

weren't pointing to where I thought

they were at all. I still don't know

what happened.

The builder pattern. This is an

ugly corner of Rust. Yeah, I get that

things like varargs and keyword arguments

have a runtime overhead.

But the builder pattern, which is to

say writing a completely separate

struct just for the sake of constructing

another struct, is pure boilerplate,

it's so un-Rust. Maybe we

can derive these someday?
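The boilerplate in question, sketched for a hypothetical config struct — a whole second struct exists purely to play the role of keyword arguments with defaults:

```rust
#[derive(Debug)]
struct Server {
    host: String,
    port: u16,
    threads: usize,
}

// The separate builder struct the pattern requires: pure boilerplate.
struct ServerBuilder {
    host: String,
    port: u16,
    threads: usize,
}

impl ServerBuilder {
    fn new() -> ServerBuilder {
        // Defaults stand in for omitted keyword arguments.
        ServerBuilder { host: "127.0.0.1".to_string(), port: 11211, threads: 4 }
    }
    fn port(mut self, port: u16) -> ServerBuilder { self.port = port; self }
    fn threads(mut self, n: usize) -> ServerBuilder { self.threads = n; self }
    fn build(self) -> Server {
        Server { host: self.host, port: self.port, threads: self.threads }
    }
}

fn main() {
    // Caller only names the fields it cares about.
    let server = ServerBuilder::new().port(12345).threads(8).build();
    println!("{:?}", server);
}
```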

Code coverage. There will

probably be a native solution for

this at some point. For now people

use a workaround with kcov, which

just didn't work at all on my code.

Maybe it's because I'm on nightly?


So there you have it. Rust is

a fun language to use, and it feels

like an incredibly well designed

language. Language design is

really hard, and sometimes you

succeed. •

Martin is a software engineer working

mostly in Python on various web stuff.

He's always looking for interesting new

developments in programming languages,

though, and Rust really seems like a

language worth investing in.

Reprinted with permission of the original author. First appeared at matusiak.eu/numerodix.


food bit *

The hairy uses of caramel…

Hair is probably the last thing

you’ll want on a piece of caramel,

but amazingly, sticky caramel

has been used to remove unwanted

hair for centuries. It’s a popular form

of hair removal in the Arab world,

where it originated. Known as

sugaring, the method calls for smearing

liquid caramel onto hairy body

parts, and then sticking a strip of cloth

to the area. The cloth strip is then

violently ripped off, along with most

of the caramel and hair.

* FOOD BIT is where we, enthusiasts of all edibles,

sneak in a fun fact about food.


HACKER BITS is a curated collection of the most popular articles on Hacker News — a social news

website widely considered among programmers and entrepreneurs to be the best of its kind.

Every month, we select the top voted articles on Hacker News and publish them in magazine format.

For more, visit hackerbits.com.
