The Future of Employment
can be accomplished by machines, while non-routine tasks are not sufficiently well understood to be specified in computer code. Each of these task categories can, in turn, be of either manual or cognitive nature – i.e. they relate to physical labour or knowledge work. Historically, computerisation has largely been confined to manual and cognitive routine tasks involving explicit rule-based activities (Autor and Dorn, 2013; Goos et al., 2009). Following recent technological advances, however, computerisation is now spreading to domains commonly defined as non-routine. The rapid pace at which tasks that were defined as non-routine only a decade ago have now become computerisable is illustrated by Autor et al. (2003), asserting that: “Navigating a car through city traffic or deciphering the scrawled handwriting on a personal check – minor undertakings for most adults – are not routine tasks by our definition.” Today, the problems of navigating a car and deciphering handwriting are sufficiently well understood that many related tasks can be specified in computer code and automated (Veres et al., 2011; Plötz and Fink, 2009).
Recent technological breakthroughs are, in large part, due to efforts to turn non-routine tasks into well-defined problems. Defining such problems is helped by the provision of relevant data: this is highlighted in the case of handwriting recognition by Plötz and Fink (2009). The success of an algorithm for handwriting recognition is difficult to quantify without data to test on – in particular, determining whether an algorithm performs well for different styles of writing requires data containing a variety of such styles. That is, data is required to specify the many contingencies a technology must manage in order to form an adequate substitute for human labour. With data, objective and quantifiable measures of the success of an algorithm can be produced, which aid the continual improvement of its performance relative to humans.
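The objective, quantifiable measure the passage refers to can be as simple as accuracy on a labelled test set. A minimal sketch, using an invented test set and a stand-in for a real handwriting recogniser (both are illustrative assumptions, not part of the source):

```python
def accuracy(predictions, labels):
    """Fraction of test samples the recogniser labelled correctly.

    This is one objective measure of an algorithm's success: given
    held-out data with known answers, count the agreements.
    """
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical labelled test data covering more than one writing style;
# a real evaluation would use a corpus such as those Plötz and Fink discuss.
labels = ["cat", "dog", "cat", "bird"]
predictions = ["cat", "dog", "car", "bird"]  # recogniser output (invented)

print(accuracy(predictions, labels))  # → 0.75
```

Tracking this number as the recogniser is retrained is exactly the kind of continual, data-driven improvement loop the paragraph describes.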
As such, technological progress has been aided by the recent production of increasingly large and complex datasets, known as big data.¹⁶ For instance, with a growing corpus of human-translated digitalised text, the success of a machine translator can now be judged by its accuracy in reproducing observed translations. Data from United Nations documents, which are translated by hu-
¹⁶ Predictions by Cisco Systems suggest that Internet traffic in 2016 will be around 1 zettabyte (1×10²¹ bytes) (Cisco, 2012). In comparison, the information contained in all books worldwide is about 480 terabytes (5×10¹⁴ bytes), and a text transcript of all the words ever spoken by humans would represent about 5 exabytes (5×10¹⁸ bytes) (UC Berkeley School of Information, 2003).
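Judging a machine translator by how closely it reproduces an observed human translation can likewise be made concrete. A simplified sketch of one such measure – clipped unigram precision, in the spirit of BLEU-style scoring – with invented example sentences (the UN-document pairing is hypothetical here):

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Fraction of candidate words also found in the reference translation,
    with counts clipped so repeated words cannot be over-credited.
    A simplified, BLEU-1-style measure of translation accuracy."""
    cand_words = candidate.split()
    ref_counts = Counter(reference.split())
    matched = 0
    for word, count in Counter(cand_words).items():
        matched += min(count, ref_counts[word])
    return matched / len(cand_words)

# Hypothetical sentence pair: an observed human translation and
# a machine translator's output for the same source document.
reference = "the committee approved the resolution"
candidate = "the committee adopted the resolution"

print(unigram_precision(candidate, reference))  # → 0.8
```

Production systems use richer metrics over large parallel corpora, but the principle is the same: observed human translations supply the ground truth against which the machine's output is scored.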