Learning Data Mining with Python


Classifying Objects in Images Using Deep Learning

This will download the dataset. Once that has downloaded, you can extract the data to the Data folder by first creating that folder and then unzipping the data there:

mkdir Data
tar -zxf cifar-10-python.tar.gz -C Data

Finally, we can run our example with the following:

python3 chapter11cifar.py
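If you want to inspect the extracted data before running the full script, the CIFAR-10 batch files are plain Python pickles and can be loaded directly. This is a minimal sketch, assuming the Data folder created above; `load_cifar_batch` is a hypothetical helper, not part of the chapter's code:

```python
import pickle

def load_cifar_batch(path):
    """Load one CIFAR-10 batch file (the python pickle format).

    Returns (data, labels): data is a sequence of 3072-value rows
    (a 32x32 RGB image flattened), labels is a list of class indices 0-9.
    """
    with open(path, "rb") as f:
        # The batches were pickled under Python 2, so under Python 3
        # the dictionary keys come back as bytes.
        batch = pickle.load(f, encoding="bytes")
    return batch[b"data"], batch[b"labels"]

# Hypothetical path; adjust to wherever you extracted the archive:
# data, labels = load_cifar_batch("Data/cifar-10-batches-py/data_batch_1")
```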

The first thing you'll notice is a drastic speedup. On my home computer, each epoch took over 100 seconds to run. On the GPU-enabled virtual machine, each epoch takes just 16 seconds! If we tried running 100 epochs on my computer, it would take nearly three hours, compared to just 26 minutes on the virtual machine.
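The quoted timings can be sanity-checked with a little arithmetic, using only the numbers given above:

```python
# Back-of-the-envelope check of the timings quoted in the text.
cpu_epoch_s = 100   # seconds per epoch on the home computer
gpu_epoch_s = 16    # seconds per epoch on the GPU virtual machine
epochs = 100

cpu_hours = cpu_epoch_s * epochs / 3600    # just under 3 hours
gpu_minutes = gpu_epoch_s * epochs / 60    # roughly 26.7 minutes
speedup = cpu_epoch_s / gpu_epoch_s        # 6.25x per epoch
```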

This drastic speedup makes trialing different models much faster. Often, when trialing machine learning algorithms, the computational complexity of a single algorithm doesn't matter too much. An algorithm might take a few seconds, minutes, or hours to run. If you are only running one model, it is unlikely that this training time will matter, especially as prediction with most machine learning algorithms is quite quick, and that is where a machine learning model is mostly used.

However, when you have many parameters to tune, you may need to train thousands of models with slightly different settings, and suddenly these speed increases matter much more.
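A minimal sketch of what such a parameter sweep can look like; the grid values and the `train_and_score` stub are illustrative placeholders, not the chapter's actual experiment:

```python
from itertools import product

# Hypothetical hyperparameter grid to trial.
hidden_nodes = [256, 512, 1024]
learning_rates = [0.1, 0.01, 0.001]
conv_layers = [2, 3]

def train_and_score(n_hidden, lr, n_conv):
    # Stand-in for a function that builds a network with these
    # settings, trains it, and returns validation accuracy.
    return 0.0

results = {}
for n_hidden, lr, n_conv in product(hidden_nodes, learning_rates, conv_layers):
    results[(n_hidden, lr, n_conv)] = train_and_score(n_hidden, lr, n_conv)

# 3 * 3 * 2 = 18 full training runs; a 6x faster epoch makes
# the difference between days and hours for grids like this.
best = max(results, key=results.get)
```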

After 100 epochs of training, taking a whole 26 minutes, you will get a printout of the final result:

0.8497

Not too bad! We can increase the number of epochs of training to improve this further, or we might try changing the parameters instead: perhaps more hidden nodes, more convolution layers, or an additional dense layer. There are other types of layers in Lasagne that could be tried too, although generally convolution layers are better for vision.

