
Asus Tinker Board Vs Raspberry Pi




Install a pro-level firewall

Advanced Terminal profiles


Coding Academy: Start

developing with Django

UBUNTU 17.04


Discover the inner workings of

this essential Zesty release!

The future of Unity 8

AMD inside – Kernel 4.10

New-look Gnome 3.24

Web development



become a web developer

Maker Faire 2017

The community plays a big part

in the movement, everyone

helps each other and offers advice

Robin Hartley on building the Amazing Shortcut Keypad!


KDE distros

Get a Linux desktop

experience like no other

Push Linux

to the limit

Benchmarks, stress

testing and more!


Get into Linux today!

What we do

We support the open source community



We help all readers get more from Linux with our

tutorials section – we’ve something for everyone!

We license all the source code we print in our


We give you the most accurate, unbiased and

up-to-date information on all things Linux.


This issue we asked our experts: What are your

thoughts on Ubuntu moving back to Gnome?

Happy, sad, or indifferent Slackware user?

Jonni Bidwell

I’m a little sad, but I don’t know exactly why.

I use Gnome every day and always found

Unity to be a bit clumsier/clunkier. I

suppose that I thought it was more

beginner friendly, and now a lot of people

that have never encountered Gnome 3 will

come to 18.04 and suddenly find

themselves in terra incognita.

Neil Bothwick

I think many users may be disappointed.

Unity is already using Gnome3 technology

so I think the ‘new’ desktop will be a lot

more like Unity than the old Ubuntu

desktop. Had they gone with Cinnamon

things would have been different. But I’m a

KDE diehard so what do I know?

Les Pounder

When Unity was first released I fell out of

love with Ubuntu. For years I loved the

Gnome desktop, and before that I was a

KDE user. But Unity for me just didn’t

work. Fast forward to 2016 – 2017 and I

am now using Unity, but secretly happy

for the return of Gnome.

Mayank Sharma

The whole thing is so against the open

source ethos. What of the unending spiel

about the benefits of Mir and Unity 8 for

the desktop? Just because he couldn’t sell

enough phones, suddenly Shuttleworth

realises that fragmentation isn’t good and

his ‘pragmatic’ solution is to shutter these

projects? Bah humbug!

Shashank Sharma

When writing reviews, I’m pained when

confronted with below-par software

because criticising and dismissing

somebody’s labour of love is not easy.

While not an Ubuntu user, I’m feeling

equally pained at this development, more

so when Unity is being discarded in so

cavalier a fashion.

Orange army

We know there’s a collective roll of the eyes from many

regular readers when we run our (bi)-annual Ubuntu

release covers. But there’s no escaping the sales boost

every orange-soaked cover gains on the yearly 04 release

schedule. It’s actually heartening to see so many people looking

forward to, or at least welcoming, the release of a new version

of Ubuntu by rushing out and snapping up our little magazine.

The truth is that Canonical, and its prime distro Ubuntu,

remains a key driver for Linux both on the desktop and in the

enterprise world. Red Hat and SUSE certainly have made their

own mark in enterprise, but Canonical is seeing wins in the

telephony industry, ‘cloud’ market and the emerging IoT world

of devices from Pi-like boards to self-driving cars and robots, as

we covered in the LXF223 show report.

So in many ways it’s no shock to hear that Unity 8 has been

killed, the dream of convergence dispelled and its CEO

dismissed. It seems Mark Shuttleworth, self-appointed

benevolent dictator for life, has dictated that Canonical and

therefore Ubuntu needs to concentrate on projects that make it

money. This must be devastating for the people involved and

you can learn more in our news on page 6. But a return to a

Gnome desktop (on top of Wayland) will be fantastic, a focus on

projects that deliver can only benefit everyone and we’re

looking forward with optimism to Ubuntu 18.04 LTS.

But orange distros aren’t the only Linux fruit, so this month

we’re looking at the best KDE-based distros in Roundup, how to

systematically benchmark any Linux distro, reporting on all the

excitement at the UK’s Maker Faire 2017, and we review the

Pi-sized Asus Tinker Board and dole out the usual top selection

of mind-expanding tutorials. What a time to be alive!

Neil Mohr Editor



On digital and print, see p30


June 2017 LXF224 3


Learn Linux

in 5 days

on p70

“In the name of God, stop a moment, cease your work, look around you.” – Leo Tolstoy


Google WiFi .......................15

Take one wireless router into your home?

Nonsense, at least three are required to

reach the darkest corners of LXF Tower’s

dungeon. Bath’s stone walls are thick.

Get inside

Ubuntu 17.04

We go hands-on with the latest release

of Ubuntu 17.04, rip off its top and

peer into its gooey inner workings

to see what makes it tick, on p32.

Yes, we’re still doing shampoo-based

jokes for things that come in groups.

Crucial MX300 2TB ..........16

We find out if the latest low-cost (Ha! – Ed),

high-speed SSD from Crucial can do enough

to tempt Brexit-strapped LXF readers from

their spinning-disc alternatives.

Parrot Security OS 3.5 ..... 17

A modern day Robin Hood with a

conscience, Shashank Sharma merely tests

security measures with this pentesting

distro, while remaining anonymous. Erm…

FreeNAS Corral .................18

Shashank Sharma looks at the popular

NAS solution that has been reborn,

much like himself.

4MLinux 21.0 .....................19

Ambitious minimalist distros are quite a

tempting bait (without the switch, we hope)

but will Shashank Sharma bite?

Civilization VI ................... 20

Gather around, children, as holographic

great, great granddaddy TJ Hafer describes

how all this here rocket port used to be fields.

How will you rule your kingdom,

like Trumpton or a Little England?


KDE distros p23

Maker Faire 2017

It’s all about the attitude,

the willingness to learn,

and the sense of community.

Robin Hartley, at the UK’s premier maker show p40


Raspberry Pi User

Pi news ................................... 58

The sales of Pi keep on growing, more than the

Commodore 64 (and yes, soon its entire range),

there’s a Mac Pi and Julia is here.

Asus Tinker Board ............... 59

Les Pounder tries a new SBPC that thinks it’s a

Pi beater from big-name Asus.

Analogue explained .............60

Les Pounder turns it up to eleven connecting his

analogue thingies to his GPIO whatsits.

Digital wall calendar ............ 63

Nate Drake is the most organised man you’ve

never met, discover his secret and build your

very own Pi-based digital wall calendar.

On your FREE DVD

Ubuntu 17.04 32-bit,

Ubuntu 17.04 64-bit,

Linux Lite 3.4 32-bit.

Only the best distros every month

Plus HotPicks, code and library


Subscribe

& save! p30


Benchmark Linux ................. 45

Testing stuff is hard, let us show you how to

make it easy(ier) with our benchmark guide.

Coding Academy

Django unchained ................ 88

Thomas Rumbold walks you through the

basics of the Django Framework and

Daniel Samuels shows you how to get

started with your first lines of code.

Web development ................ 92

Kent Elchuk strings English words together to

make sentences that explain how you can build

a ready-to-go web development machine.



Custom profiles ...............72

Nick Peers reveals how the Terminal can be

customised for different uses with the help

of custom profiles.


pfSense .............................74

Afnan Rehman demonstrates that building

your own router and firewall system has

never been this easy.

Regulars at a glance

News............................. 6

Unpleasant news from Canonical

Towers, dropping of Unity, dropping of

Mir, dropping of staff, but we’ll have

Gnome and Wayland in the future.

User groups................ 11

Les Pounder loves a bit of jam

especially when it’s with a nice Pi.


Puzzles are coming! Firewalls are

coming! Digital editions are coming!

Even the kitchen sink is coming!

Subscriptions ...........30

Why walk to the shops when we can

come to you and save you £££? Grab

our latest subs offer today!

HotPicks .....................51

Alexander Tolstoy most certainly is

not drawing outlawed clown pictures

of Mr Putin, he’s too busy drawing

conclusions on top FOSS like:

MATE, MtPaint, Meteo-Qt, NTFS-3G,

Guetzli, LanguageTool,

Webenginepart, GNU Nano,

Classifier, Man vs. Olives, Tank island.

Overseas subs ..........69

Like the Stranglers we’re Big in

America and other territories too.

Next month...............98

Pop on your white-hat hoody and

prepare to defend your networks

from attack! We enter Room 101 of

the pen-testing world. Please, no rats.

You see? Easy.


HAProxy ............................. 76

Mihalis Tsoukalos tires of endless maths

and tinkers with his web proxy instead.


GnuPG .............................. 80

John Lane expands your worldview with a

dive into GnuPG’s key-based trust model.

Roundup ....................23

Mayank Sharma is shirking the new

Ubuntu. He’s got better things to do,

like play with pretty KDE distros.

Back issues ...............68

Discover exactly what the new Pi

Zero W is capable of in LXF223, then

build yourself chattering devices.

Our subscription team

is waiting for your call.


Build a router ................... 84

Afnan Rehman brings all the boys to the

yard, his router is better than theirs.


THIS ISSUE: Canonical crisis | Unity lives on | Samsung’s holes | Netflix on Firefox


Not so Canonical now

Canonical in turmoil as it drops Unity 8, Mir (sort of) and Ubuntu phone, while job cuts mean turbulent times for the company and people behind Ubuntu.

News has been coming thick and

fast from the Canonical camp

recently, with major new

developments happening even as we go

to press. It all started at the beginning

of April 2017 when Mark Shuttleworth,

founder of Canonical, announced in a

blog post (http://bit.ly/2pGH63j)

that Canonical would stop working on

Unity 8 and Mir, saying that the Ubuntu

desktop will “shift back to GNOME for

Ubuntu 18.04 LTS.”

Perhaps the biggest news of all was

that Canonical is also dropping its goal

of putting Ubuntu on smartphones and

tablets. While the Ubuntu-powered

smartphones that have already been

released have been met with a poor critical reception (see Reviews LXF197, LXF212), Canonical had until recently

maintained that its vision of

‘Convergence’ – where Ubuntu worked

across desktop and mobile devices –

was vital to the company.

That has now all changed, with

Shuttleworth admitting that he made a

mistake when he “took the view that – if

convergence was the future and we

could deliver it as free software – that

would be widely appreciated both in the

free software community and in the

technology industry, where there is

substantial frustration with the existing,

closed, alternatives available to

manufacturers.” He concludes “I was

wrong on both counts.”

In the blog post, Shuttleworth

admits that contrary to Canonical’s

aims for convergence, its efforts were

seen by the community as

“fragmentation not innovation”. Many

people were concerned that Canonical’s

strategy of chasing after the mobile

market – which is dominated by

Android and Apple – was taking

resources away from the

desktop version of Ubuntu,

which would then suffer.

Instead, Shuttleworth

emphasised Canonical’s

“ongoing passion for,

investment in, and

commitment to, the

Ubuntu desktop that

millions rely on. We will

continue to produce the

most usable open source

desktop in the world, to

maintain the existing LTS

releases, to work with our

commercial partners to

distribute that desktop, to

support our corporate

customers who rely on it,

and to delight the millions

of IoT and cloud developers who

innovate on top of it.”

Stripped of PR speak, that means

Ubuntu is going to focus on Ubuntu for

desktops, servers, virtual machines, as

well as snaps and Ubuntu Core for IoT

(Internet of Things) embedded devices.

Cloud infrastructure technology will

also continue to be worked on.

Big changes are coming to Ubuntu.


“Ubuntu is going to focus on

Ubuntu for desktops, servers,

virtual machines, snaps and IoT.”

While not many people will mourn

the passing of Ubuntu Phone, the

announcement has wider implications

for Canonical and Ubuntu. By dropping

Unity 8, Ubuntu will return to using

GNOME as its interface, with

Shuttleworth confirming on his

personal Google+ account that

Canonical “will invest in Ubuntu GNOME

with the intent of delivering a fantastic

all-GNOME desktop”, which all but

confirms that Ubuntu 18.04 LTS will

come with the GNOME Shell.

Despite Canonical’s high hopes for

Unity, it also looks like we won’t be

getting a heavily modified version of

GNOME – and will instead get the

vanilla experience.

“We’re helping the

Ubuntu GNOME

team, not creating

something different

or competitive with

that effort. While I

am passionate

about the design ideas in Unity, and

hope GNOME may be more open to

them now, I think we should respect the

GNOME design leadership by delivering

GNOME the way GNOME wants it

delivered.” However, as the month went

on, the full implications of Canonical’s

move became apparent.





What now for Unity?

Is this the end of the road for the desktop environment?

One of the biggest questions hanging over

Canonical’s abrupt dropping of Unity is

what will happen to the desktop

environment – and the versions of Ubuntu that are

currently running it. For seven years Canonical has

been concentrating on Unity after dropping

GNOME, and since Ubuntu 11.10, Unity has been

the default desktop for the distro.

For the time being, it looks like not a huge

amount will change. Unity hasn’t had any major

updates for a while now, instead getting a few

minor adjustments to make sure it continues to

work. Unity should still get some updates, then, in

the future, to make sure people using it aren’t left

completely unsupported.

As Mark Shuttleworth stated, “Unity 7 packages

will continue to be carried in the archive. I know

there are quite a few people who care enough

about it to keep it up to date.” Also, while Canonical

will revert to GNOME for Ubuntu 18.04 LTS, that

won’t be released until April 2018. Before that, two

versions of Ubuntu will be released: 17.04 and 17.10.

Both of these releases will continue to use

Unity 7. Unity 7 will also be available in next year’s

version of Ubuntu, according to Shuttleworth. “I

expect it will be in-universe for 18.04 LTS.” As with

so many open source projects, it looks like the

community will come to the rescue, with many

people pledging to continue working on Unity 7 –

and even the unfinished Unity 8. Marius Gripsgård,

a developer who worked on Unity, said on his


Layoffs hit Canonical

The repercussions of Canonical’s decisions begin to hit.

At the beginning of April when Mark

Shuttleworth announced Canonical’s

plans to drop Ubuntu Phone and Unity, he

was known as the founder of Canonical and ex-CEO. Less than a month later, he was once again

the CEO. This follows the announcement that the

(then) current CEO of Canonical, Jane Silber, was

standing down. In a blog post (https://insights.

ubuntu.com/?p=66110), Silber wrote that “We’re

now entering a new phase of accelerated growth at

Canonical, and it’s time to pass the baton.”

Silber insisted that this was not a sudden

decision. She had “originally agreed to be CEO for

five years and we’ve extended my tenure as CEO

by a couple of years already”. Silber will remain CEO

for the next three months, with Shuttleworth

Google+ account that “I’m not giving up! I will do

my best to keep Ubuntu Touch and Unity 8

standing on both its legs!”

Ubports has also stepped up to continue

development for Unity 8, and has a project website

at https://unity.ubports.com. There, it explains

that “After the announcement that Canonical will

stop investing in Unity 8, we stepped forward

stating that we will continue development for

Unity 8. The reason why we will do that is that we

believe in convergence, we believe convergence is

the thing people want in the future, and now that

desktop is slowly decreasing in users ... investing

in mobile is a smart move in our opinion.”

This is an important reminder that just

because Canonical is no longer working on Unity,

it’s not the end for the project. The beauty of open

source software is that if you don’t agree with a

company’s decision to stop working on

something, you can work on it yourself. Canonical

is also ceasing work on

its Mir display server.

But, as Canonical has

reiterated its support

for Ubuntu on IoT,

which often relies

on Mir, it looks like

it will continue to be

updated by Canonical

– just not for the

desktop Ubuntu.

Don’t count out

Unity just yet.

replacing her in June. Silber will continue to work

with Canonical in a new position on the Canonical

Board and within the Ubuntu community.

Canonical has also told over half the team

working on Unity that if they cannot be mapped to

new positions in the company, they will be made

redundant. Reports suggest other departments at

Canonical are being reduced, some losing 30

percent of their workforce, others up to 60 percent.

In comments reported by The Register

(http://bit.ly/2oT6O4n), Shuttleworth said of

the cuts “we need to look at those things and say,

‘Could we run a marathon in six months to a year?’

and we could, but to do that we need to get a bit

fit.” Being considered dead weight that needs to be

shed will be little comfort to Canonical employees.


It may come as little surprise, but

a security researcher has

uncovered 40 unknown zero-day

vulnerabilities in Tizen, the mobile

operating system that runs on a

number of Samsung devices.

According to the person who found

the vulnerabilities, Amihai Neiderman,

“It may be the worst code I’ve ever

seen... You can see that nobody with

any understanding of security looked

at this code or wrote it. It’s like taking

an undergraduate and letting him

program your software”. All the

vulnerabilities Neiderman found would

give hackers the ability to perform

remote-code executions to hijack

software running on other devices.

Let’s hope these findings prompt

Samsung to take the security of its

smart devices more seriously.


The results of Stack Overflow’s

annual developer survey are in

(see https://stackoverflow.com/

insights/survey/2017) with over

64,000 developers sharing details

about their jobs. It will probably come

as little surprise, but Linux remains

incredibly popular with developers,

with 26 percent of respondents saying

it was their platform of choice. Linux

was the second most popular

platform after Windows (which got the

nod from 32.4 percent of developers

polled). While it’s a shame to see

Microsoft’s OS in first place, the fact

that Linux has such a large percentage

of the vote among developers,

considering its usage among the

general population, is a testament to

developers’ love of Linux.

Netflix now works on Linux via

the Firefox browser. Linux users

have been able to watch Netflix for a

few years now, since the company

began the transition from Microsoft’s

Silverlight to HTML5 plugin-free

playback across multiple platforms –

but only if they were using Chrome.

Netflix, in a blog post about the move

(http://nflx.it/2mRe5iI), said that

“Plugin-free playback that works

seamlessly on all major platforms

helps us deliver compelling

experiences no matter how you

choose to watch.” Even Iron Fist?

Netflix is now much easier to

watch on Linux.





Ubuntu goes


Daniel Stone


decision to

move back to

a GNOME-based

desktop will have ramifications for years

to come. For user experience, unifying the

desktops means combining forces and

eliminating duplicated effort. For developers, a

lot of the differences in APIs such as indicators,

menus, and scrollbars could now come to an

end, making Linux an easier target for ISVs. Not

to mention that we are back to only supporting

two window systems: Wayland and legacy X11.

With Ubuntu following Fedora’s lead in

shipping Wayland-based GNOME for 18.04, all

major distros will reap the benefits of the work

done to Wayland, EGL, and Vulkan across the

board. And we’ll undoubtedly see more focus

on improving and extending Wayland.

But you may be surprised with continuity, and

just how much of the graphics infrastructure is

common. When I started working on X11 nearly

15 years ago, the idea of a fork or alternate

window system was unthinkable. Not just

because the drivers and platform specifics

were tied up in the XFree86/X.Org servers, but

the toolkits too: much of the big breakage

between GTK+ 2.x and 3.x was removing X11

implementation details from the toolkit API.

However, 2017 is a different time. KMS

provides device-independent display control,

Vulkan and EGL provide GPU acceleration

across multiple window systems, xkbcommon

provides keyboard infrastructure, and logind

lets us do all this without being root. GBM

allocates graphics buffers, and the universal

allocator being designed by the whole

community, including Nvidia, will join the family.

As Mir also relied on these, the change is less

seismic than you might think. From this point of

view, nothing changes: we continue to cooperate

on the bedrock infrastructure borne of

X.Org’s incredibly long-sighted view that it had

a duty to make itself replaceable.

Daniel Stone, Graphics Lead, Collabora Ltd.


Distro watch

What’s behind the free software sofa?

RancherOS

A major release for the minimalist distro that focuses on running Docker containers has emerged. The idea behind the distro is to keep the footprint of the OS as small as possible, so your machine can commit as many resources as possible to the containers. According to the official blurb: “Key features of RancherOS include: minimalist OS – eliminates the need for unnecessary libraries and services; automatic configuration – simplifies OS configuration by using cloud-init to parse the cloud-config files from multiple data sources; simple setup – runs services inside containers orchestrated using Docker Compose service files, making setup as simple as running a Docker container.” To find out more, and to download, head to http://rancher.

RancherOS makes setting up and using Docker containers as simple as possible.

KAOS 2017.04

The newest version of KaOS has been released in time for the fourth anniversary of the KDE-based rolling release distro. New in version 2017.04 is an improved installer that now lets users use GPT disk layouts in BIOS systems, as well as a separate Wayland edition for people who don’t want to run the Plasma desktop using the X display server. Running the distro in a virtual machine is now more streamlined thanks to the inclusion of VirtualBox guest modules. However, you aren’t able to run the Wayland version of KaOS as a VM. For more information on what’s new, visit http://kaosx.

KaOS is celebrating four years in the business with a new release.

Tiny Core

This major release of the minimalist distribution features the 4.8 version of the Linux kernel, along with glibc being updated to 2.24, GCC updated to 6.2.0, and much more. Meanwhile, according to the brief release notes (which can be found if you browse over to index.php/topic, 20934.0.html), “most extensions have been copied over from the 7.x repo.” The release notes also state that “the Xorg-7.7 extensions have been updated, the ncurses and readline extensions have changed major versions and the openssl extension has been factored out into openssl and ca-certficate.”

Tiny Core, big update.
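The RancherOS blurb above notes that the entire OS is configured by cloud-init parsing cloud-config files. As a rough, hypothetical sketch of what that looks like: the `hostname` and `ssh_authorized_keys` keys are standard cloud-init, but the exact layout of the `rancher:` services section below is illustrative only and should be checked against the RancherOS documentation for your release.

```yaml
#cloud-config
# Hypothetical RancherOS cloud-config sketch.
# Standard cloud-init keys set up the host itself:
hostname: rancher-node
ssh_authorized_keys:
  - ssh-rsa AAAA... user@example.com
# The rancher section declares system services as containers,
# using Docker Compose-style service definitions (illustrative):
rancher:
  services:
    web:
      image: nginx
      restart: always
      ports:
        - "80:80"
```

At boot, RancherOS would read a file like this (from a local disk, USB stick or a cloud provider’s metadata service) and start each declared service as a Docker container, which is what makes setup “as simple as running a Docker container”.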





OpenELEC

This specialist distro, which is built around the popular Kodi platform, and primarily designed for media playback on big screens, has been updated. A major new change is the inclusion of the WeTek Play 2 platform. This allows you to stream “endless entertainment to your living room, enjoy the latest movies and series in 4K UHD, play games, browse the internet, keep up with the news, or use the DVB modular tuner to watch thousands of TV channels via satellite, terrestrial and cable connections.” This new platform comes with its own build. Check out the full release announcement at the project’s website.

Media fans will be pleased to see there’s a new version of OpenElec.

TALKINGARCH 2017.04.04

TalkingArch is an accessibility re-spin of the Arch Linux live ISO image, which includes support for speech and Braille output for blind and visually impaired users. The latest release brings support for x86_64 processors – a first for the distro. It drops i686 support, which means that very old PCs may no longer work. The upshot of this is that the new version is much smaller. This version also includes the 4.10.6 Linux kernel, and a number of upgrades to packaged software. Head to https://talkingarch to find out more.

TalkingArch helps blind and visually impaired users work with their PCs.

NuTyX 9.0

NuTyX may have a rather annoying name, but it’s a great distro that’s based on Linux From Scratch and uses the “cards” custom software manager. This latest release comes with a range of updates to the included software, including Plasma 5.9, GNOME 3.22, MATE 1.16 and Python 3.6. The Linux kernel is now 4.10, and the ISOs have been updated so that they can be launched on UEFI machines. NuTyX 9.0 can be loaded completely into a system’s memory as long as it has over 1GB of RAM, giving you flexibility if you no longer want to run the live ISO from a USB stick.

Useful distro, useless name.



Bringing open

source together

Arpit Joshipura

Open source


has come a

long way in the

past five years. In this time, it has evolved from

a disaggregation of networking components at

all levels of the stack, to production-ready

components matured and deployed in various

production networks. This year, open source is

poised to enter its third phase of development:

production-ready end-to-end solutions.

With this comes fresh challenges, the most

glaring of which is the current fragmentation in

the industry. For open source to move forward,

harmonisation needs to take place across the

stack. This will give rise to common frameworks

between the different open networking projects

that will, in turn, dictate their interoperability.

Without this crucial step, the mass adoption of

open source solutions in the carrier network

space will prove impossible.

How can we, as an industry, best achieve

harmonisation? The key is a massive increase

in collaboration and standardisation, a direction

in which The Linux Foundation is already

spearheading initiatives. In February, the

Foundation announced the merger of two

massive projects in the MANO sector –

OPEN-O and ECOMP – into ONAP, effectively

eliminating duplicate efforts and supplying end

users with a unified platform for all issues

relating to open source virtual networks.

In fact, the Foundation has, for many years

now, been bringing disparate elements of the

open networking industry together through its

networking events. These include the Open

Networking Summit, ContainerCon, and

LinuxCon, all gathering the best in the industry

to exchange ideas and discuss developments.

At The Linux Foundation, we hope to use our

resources to provide the structure and support

for a sustainable community that will work

together to continually advance the open

source agenda.

Arpit is the new general manager for networking

and orchestration at the Linux Foundation.



Linux user groups

United Linux!

The intrepid Les Pounder brings you the latest community and LUG news.


Alpinux, le LUG de Savoie

Meet on the first and third Thursday of the month at

the Maison des Associations de Chambéry.


Bristol Hackspace Studio G11, 37 Philip

Street, Bedminster, Bristol, UK, BS3 4EA.


Surrey and Hampshire Makerspace

Tuesdays and Fridays at the Boileroom in Guildford.


Lancaster and Morecambe Makers

Unit 5, Sharpes Mill, White Cross, Lancaster, Open

Night on Wednesday evening 6:30pm till late.


Hull Raspberry Jam Malet Lambert School,

Hull. Every other month. See their Twitter account.


Preston Hackspace 28a Good St, PR2 8UX.

Open night is 2nd Monday of the month, 7pm.


Huddersfield Raspberry Jam Meet every

month at Huddersfield Library, usually 4th Saturday.


North Kent Raspberry Pi User Group

Every two weeks at Medway Makers, 12 Dunlin Drive,

St Mary’s Island, Chatham ME4 3JE.


Cheltenham Hackspace The Runnings

trading estate, Cheltenham. Thursdays from 7pm.



Raspberry Jamboree 2017

This formerly fruitful event has been preserved.

The Raspberry Jamboree was

last seen in 2014 under the

expert guidance of Alan

O’Donohoe, aka Teknoteacher. Now a

leaner event is taking on the name.

The Raspberry Jamboree 2017 is

organised by Claire Wicher, a Raspberry

Pi Certified Educator who works in

community outreach in Manchester.

Claire specialises in helping children

and adults learn about computing.

This one-day event takes place on

May 27 at Manchester’s Central Library,

St Peter’s Square. The library has kindly

provided three rooms where delegates

can take part in two tracks of

unconference/barcamp talks where the

delegates are the speakers. If you have

a talk in your head, and it

has a Raspberry Pi,

micro:bit or other single

board computer theme, the

Raspberry Jamboree would

love to hear about it.

The Jamboree has also

set up one room to be a


space, for 25 delegates to

learn new skills and help

others learn more about

the Raspberry Pi and what it

can do in your home or classroom.

These workshop sessions are the only

curated part of the event, aimed at

introducing new projects and nurturing

new talent. If you would like to run a

session then please contact the

organisers via their form.

This free event aims to provide the

community with an additional means to

express and showcase new projects,

ideas and opinions for this ever popular

board. But is it limited to just Raspberry

Pi fans? No, this is an inclusive event. If

you are a fan of Arduino, micro:bit etc

you are more than welcome.

More details and tickets can be

found on the Eventbrite page, at:

http://bit.ly/2onuRr6. LXF

At previous Jamborees we have seen the

Raspberry Pi Foundation run workshops and talks.

The Perl Conference

in Amsterdam

This three-day conference starts

on August 9. Perl is a powerful

tool that has been used to solve

many problems with only a few

lines of code. What started life as

a grassroots user meeting has

now grown into a much larger

conference where Perl Mongers

can share projects. For 2017 their

keynote speaker is Larry Wall,

creator of the Perl language.

More information and a schedule

of talks and workshops can be

found on their website.


EMFCamp 2018

This is still over a year away, but

a date for your diaries. August 31

to September 2, 2018.

Electromagnetic Field, shortened

to EMFCamp, is a festival not

unlike Glastonbury, but replaces

the music with technology!

Blacksmithing, carpentry,

electronics, wearable technology,

lock picking and craft ales along

with the other many facets of

maker culture meet in a lovely

field for three days of talks,

workshops and networking. If

you are a maker, tinkerer, or

hacker in the UK then this is the

event for you! More details at:



GUADEC 2017
The GNOME conference comes

to Manchester, UK, July 28 to

August 2. It covers the latest

technical developments in the

desktop environment, as well as

talks, workshops, panels and the

chance to socialise with GNOME

project members. The event is

split into two sections, days 1-3

cover talks, days 4-6 are for

workshops and training. Right

now there’s a call for papers, so

this is your chance to submit.



June 2017 LXF224 11

Write to us at Linux Format, Future Publishing, Quay House, The Ambury, Bath BA1 1UA or lxf.letters@futurenet.com.


Don’t worry, I’m not writing for a

USA edition, the UK version

works just fine. I was wondering

if you could cover my favourite

Linux firewall, either in a firewall

Roundup, or as part of an

OpenSuse review? I find the

SuSEfirewall2 to be very easy to

use, using either Yast or editing

the config file directly, and I can

get much more granular with the

rules than I can with most of the

other firewalls out there.

Chris Lucht, CT USA

Neil says: As you might have

noticed we ran openSUSE

Tumbleweed with LXF222 and we

managed to squeeze the entire

4.7GB of the full openSUSE on to

LXF220! So hopefully that will

help slake people’s thirst for the

green gecko for a while at least.

I have to admit openSUSE

doesn’t get anywhere near

enough love from us, but I think

that’s also a reflection of the wider

situation – even though it’s a

bulletproof distro with solid

inroads into the enterprise

business, and constantly in the

top five distros.
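For anyone tempted by the config-file route Chris mentions, SuSEfirewall2 is driven by /etc/sysconfig/SuSEfirewall2. Here's a minimal sketch – the variable names come from the file openSUSE ships, but the interface name and port list are purely illustrative:

```shell
# /etc/sysconfig/SuSEfirewall2 (illustrative excerpt)
FW_DEV_EXT="eth0"                 # interface in the untrusted external zone
FW_SERVICES_EXT_TCP="ssh 80 443"  # TCP services/ports reachable from outside
FW_LOG_DROP_CRIT="yes"            # log critical dropped packets

# Then, as root, reload the rules:
# SuSEfirewall2 start
```

The same settings can of course be made through Yast's firewall module instead.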

Do digital

I have been a faithful reader of

Linux Format for over ten years.

Recently, I have had some

difficulty in getting my copies by

post, so I have been buying them

from J Sainsbury. I have very

recently tried to renew my

subscription changing to the

digital edition. Having paid for an

annual digital subscription I find

that it is only available for iPads,

iPhones and Android devices.

Fancy a publisher of one of the

world’s leading Linux

publications not offering a

digital edition for Linux

platforms! It is rather like the

Houses of Parliament publishing

online the text of Hansard only

in Urdu and Farsi. There may be

millions of Indians and Persians

out there, but how many want to

read British parliamentary

proceedings? Please draw the

attention of the senior

executives of Future Publishing

to the ridiculousness of the

current situation.

John Hunter, via email

Neil says: Believe it or not the

majority of our digital

subscriptions are bought by Apple

(boo, hiss) owning readers.

Splitters, etc. As you’d hope,

Android devices have caught up,

but it’s still not quite on parity.

None of that really deals directly

with your point, but one man’s

platform is another man’s walled

garden. What is the Linux

platform? Android, which runs

Linux? Ubuntu? Many people

wouldn’t want to create the

required Ubuntu One account.
[Pull quote: “It turns out people do love their walled gardens.”]
The compromise we ended up with is
that print subscribers can access

the Linux Format online PDF

archive – DRM-free PDF versions

of every issue back to number 66

– but a couple of years back this

was extended to digital-only

subscribers too, who paid through

our own MyFavouriteMagazines.co.uk
service so we are able to

validate the subscription

database; something we’re unable

to do through Google or Apple.

A great year

A big thanks to everyone for

continuing to produce such an

excellent magazine. My favourite

articles of 2016? Well, the

history lesson on the birth of

Linux (LXF215) made absolutely

fascinating reading, as did Jonni

Bidwell’s Security Suite

(LXF216), but that’s not to say

nothing else is good; Nick Peers’

terminal tutorial is always useful,

as is Jolyon Brown’s

Administeria (what I can

understand, anyway).

As a little aside, I recently

queried with you the validity of

the claim by Firefox v50.1.0 that

your own website was

considered insecure. Is Firefox

just being overly cautious or is

its database of acceptable

certificate issuers not up to

date, I wonder? Keep up the

good work.

David Bones, via email

Neil says: Thanks for the kind

words, we’re hoping to make 2017

an even better year for Linux

Format with new writers wanting

to contribute and new ideas from

you the readers! In terms of the

website report, we had a technical

issue with the website at the start

of 2017 due to a security protocol

being deprecated by Google and

Mozilla. We disabled HTTPS

access as a temporary workaround

– though this then caused

its own warnings – but once Jonni

got back from travelling we were

able to build a new version of

Apache and all is well once more.

Kitchen sink

Here’s a challenge or a

suggestion for a future article…

As an example I have a Dell

Latitude E5520, Intel Core i3

with 8GB of memory. On it, I’ve


Pluma (because it retains



Letter of the month


Let me just say I totally agree with

John Wagner’s letter “Puzzling”

in LXF219. Learning a new

language can be pretty dry if you

don’t have, or can’t think of a target

application. Resolution of a puzzle, solving

a mathematical challenge or simulating a

‘dynamic’ problem would be ideal

challenges for a potential developer

looking to practise their skills.

This kind of entertainment has been a

staple of computing and scientific

magazines since the ’80s – my first

exposure was ‘Computer Recreations’

(Scientific American) and that led to JJ

Clessa (Leisure Lines – Personal

Computer World magazine and his book

“Math and Logic Puzzles for PC

Enthusiasts” by JJ Clessa – ISBN:

9780486291925) and last but not least

Martin Gardner (magazines and books).

I appreciate the challenges of

managing such a column, but how about

creating a new entry in your existing

forums, where interested readers could

pose puzzles and all members could post

and discuss solutions or assist each other

with hints and tips. That seems like a

winner to me.

Steve Simmons, Kent

Neil says: Good news! I’ve been chatting

with mathematical genius Mihalis and he’s

all for figuring out how to do a puzzling

coding challenge. As always it’s a struggle to

squeeze everything we want into each issue

and the bean counters are always on our

backs – despite LXF being one of the few

print magazines to grow in circulation over

2016 – but keep an eye out for a trial puzzle

series starting in LXF226. Many thanks to

everyone who wrote in to support this idea.

If you like a puzzle then good news,

we’ve got some planned.

recent files, unlike Leafpad,

which does not)

Kate (because it does column

selection and word counts as

well as word wrap)

LibreOffice, BleachBit, FSlint

(for duplicate image cleanup)

Catfish (search), feh (Image

viewer), SpaceFM (FileManager),

Wine, VirtualBox, NoMachine,

GUFW (Firewall)

Gimp (Colour to B/W

conversions, G’Mic – brushes /

Patterns – import Inkscape then


Krita (Image fun / brushes /

Patterns – Side trips to Gimp/

Pinta (or vice versa)

Inkscape (lots of extensions –

export to Openscad / Cookie

Cutter / Blender Import /

Export ), Inkscape (using

Freestyle or F3 Image Save and

Bitmap Trace)

ChaosPro, Pinta, OpenSCAD,

FreeCAD, K3DSurf, Synth

Structure, POV Ray, Netfabb

Basic, Meshlab, Blender.

I give my PCs a workout, to

say the least. And, they crash

and burn (lock up and complain

– pkill, xkill and HTop are my

friends). So, that’s my

(shortened) list of what’s most

likely to find its way onto my

PCs over the years of use.

Both laptops run Linux Mint

18.1 and Ubuntu LTS. I stick with

Ubuntu because I’m more

comfortable with how it gets

things done than I am with other

distros. But, I’ve looked at just

about all of them and for various

reasons, Linux Mint just gets me

where I want to go with a

minimum of fuss.

So, throw the kitchen sink at

them, yours or mine, makes no

difference, software, hardware,

distros, PC, laptop, what tends

to work best, what just falls
short. I’ve tried just about
everything: limited-resource
distros, custom distros, outside
my comfort zone (CentOS,
Fedora, Mageia, etc). What
shines? What doesn’t? What’s
recommended? What’s not?
[Image caption: Rendering is going to tax even the most powerful Ryzen processor.]

Mike, Covington, WA

Neil says: Just to be clear, we’re

not going to print big old lists of

installed programs every month,

but in your case we made an

exception because you ARE most

certainly giving your systems a

good workout. Though perhaps

the story here is more what you’re

doing with Blender et al that’s

destroying your PCs on a regular

basis?! LXF

Write to us

Do you have a burning Linux-related

issue you want to discuss?

Want to let us know what issue

made you throw your gaming

laptop out the window, or just

want to suggest future content?

Write to us at Linux Format, Future

Publishing, Quay House, The

Ambury, Bath, BA1 1UA or




The home of technology


All the latest software and hardware reviewed and rated by our experts

Google WiFi

Why take just one router into your home when

Joe Osborne can take three mesh routers!




Wireless: 802.11ac, AC1200 2x2
Wave 2 Wi-Fi
(mesh; dual-band
2.4GHz and 5GHz,
TX beamforming);
Bluetooth Smart
CPU: Quad-core
ARM 710MHz
Mem: 512MB
Storage: eMMC flash
Beamforming: Implicit and
Explicit for 2.4 &
5GHz bands
Ports: 2x
Gigabit Ethernet
per point (1 WAN,
1 LAN)
Size: 4.1x2.7
inches (106.1 x
68.7mm; Dia x H)

Weight: 340g

Routers and range extenders are

dead, the future is Wi-Fi mesh

or “tri-band” systems

utilising 802.11s. Naturally, the

smart-home obsessed Google is all

over it with the Google WiFi.

As it turns out, Google may very

well have crafted the best Wi-Fi mesh

system to date. The company has

managed to churn out a system that

offers more mesh units than

competitors for far less with a focus on

dead-simple setup and management.

The result? We never want to look at

our gateway again.

Google WiFi costs £229 for a set of

two units – that’s one primary “WiFi

Point” (the one you hook up to the

modem or gateway) and a secondary

WiFi Point – but you can always add

more if you want. A single Google WiFi

unit can be had for £129 and Google

promises that three WiFi Points can

cover up to 4,500 square feet (418

square meters) in a (big) home.

Regardless, Google offers more units

for less money than any competitor, like

the Netgear Orbi, with all others costing

at least £320 for the same number.

Mesh means that any of the units

can function as the “router” of the

system, while the others can bestow

wired internet (begotten wirelessly)

with their included Ethernet ports as

well as wireless internet. All units are

powered via USB-C.

The setup is as sublime as Google’s

hardware design, using a free iOS and

Android app to facilitate the process.

(Be aware there is NO browser-based

setup option, you are required to have a

suitable Android or iOS device, bad

Google – Ed.)

Scan a QR code and from there, the

app tells you to name your network and

set a password, then pair the additional

Wi-Fi points and label them in the app

for reference. Again, it takes seconds for

the “router” to recognise the WiFi Points

and for them to begin broadcasting.

Power users, take note: you’re not going

to get the same depth of access as even

Netgear Orbi provides, so no band

switching for you.

The app offers plenty of useful

features, such as constant monitoring

of your network, its Points and the

devices connected to it. The app has an

included internet speed test, a mesh

test that measures the health of your

Points’ connections as well as a Wi-Fi

test that measures your connection

strength from within the network. You

can also prioritise bandwidth to one

device for a time, control smart home

devices and pause internet access to

certain devices in a family setting – all

from within this app.

We saw as impressive performance

from the Google WiFi system as we

have seen from Netgear Orbi – if not

better. Google WiFi draws the absolute
most of our 100Mbps service that we’ve
seen any router able to, and it can do so
from every room of our, admittedly
small, house.
[Image caption: It’s a device as simple to operate as it looks.]

We’ve been able to stream 4K video

through Netflix to our Roku Premiere in

the basement, as well as we’ve been

able to play Overwatch in the office

where the modem is located: without

issue. Wi-Fi mesh systems like Google

WiFi aren’t focused so much about

throughput as they are coverage, but

this product delivers regardless. The

true benefit of Google WiFi over others

is its coverage for the price. LXF

Google WiFi


Developer: Google

Web: https://madeby.google.com/wifi

Price: £229

Features 8/10

Performance 9/10

Ease of use 9/10

Value 8/10

Not the most powerful or precisely

controlled, but the easiest and most

manageable router we’ve ever set up.

Rating 9/10



Reviews SSD

Crucial MX300 2TB

The only writer we know who’s owned multiple Porsches, Jeremy Laird

sullies his hands with the cheapest 2TB SSD in town.


Size: 2,050GB
Type: TLC 3D NAND
Chip: Marvell 88SS1074
Form: 2.5-inch
Port: SATA
Read: 530MB/s
Write: 510MB/s
4K Reads: 92,000 IOPs
4K Writes: 83,000 IOPs
Warranty: Three years

Solid-state storage is one of

the wonders of the modern

world. Thinking about it,

we’re reminded of the immortal

words of Arthur C Clarke, the

great science fiction writer. He

reckoned that any sufficiently

advanced technology will seem

like magic.

Go back 100 years, and what

would anyone make of, for

instance, a 256GB MicroSD

card? (they’d probably think it

was chewing gum – Ed) The

notion that something so tiny

could store hundreds of

thousands of books would

surely seem like witchcraft.

Nowadays, it’s pretty

impressive that you can bag a

128GB USB stick for under £25.

Despite that, SSDs aren’t yet the

default option for mass storage. That

will happen eventually. But not yet, not

even thanks to Crucial’s 2TB MX300.

It’s about as close as you’ll get to a

really big SSD aimed at a mainstream

audience, but at £480 it’s getting on for

eight times the price of the cheapest

2TB magnetic hard drive.

But then, the MX300 majors on

capacity rather than performance.

That’s because it’s a SATA drive, rather

than PCI Express, which introduces

limitations in terms of the peak

bandwidth on the SATA interface and

the inefficiencies of the AHCI protocol it

uses. The latter was never designed

with solid-state storage in mind.

Thus, we’re talking peak claimed

performance around 500MB/s for

reads and writes, and IOPs (Input/

output Operations Per second) a fair bit

below 100,000. The 2TB MX300,

incidentally, has the same claimed

performance for sequential throughput

and IOPs as the 1TB, 750GB, and

525GB MX300 models. Only the entry-level

275GB model differs with slightly

lower performance. It also shares the

familiar Marvell 88SS1074 controller

with the rest of the MX300 family.

Similarly, this 2TB model uses the

same 384GB TLC 3D flash memory

dies as previous MX300s. It’s that
unusual capacity per die that leads to
the MX300 range having odd sizes, with
this drive serving up 2,050GB.
[Image caption: Two terabytes is a lot of storage space in the SSD world.]
The

MX300 range also has a dynamic write

acceleration mode that switches a

portion of the memory to SLC mode for

increased performance. For the 2TB

model, the amount of memory that can

be switched to SLC mode is increased.

A major negative point is that any

software that Crucial offers is Windows

only, this includes its firmware update

packages. We’d hope for better.

If all that reads like a feature list

without much real-world analysis, the

truth is that the MX300’s real-world

performance isn’t all that interesting.

SATA SSDs such as this are a fairly

mature technology, and the limitations

we mentioned mean there’s zero

chance of this drive setting any new

records. What you want is a reliable

drive with no performance nasties, and

for the most part that’s what the 2TB

MX300 delivers.

In our synthetic performance tests,

it operates pretty much exactly as you’d

expect, with peak performance around

500MB/s, and 4K results in the mid-

20MB/s area for reads, and 120–

140MB/s for writes, depending on the

benchmark app in question. It’s a

similar story of generic SATA drive

performance in our real-world

compression and copy benchmarks. All

of which means the MX300 ultimately

trades on price, which is handy,

because it’s comfortably the cheapest

2TB SSD you can currently buy. LXF

Crucial MX300 2TB

Developer: Crucial

Web: www.crucial.co.uk

Price: £480


Features 7/10

Performance 7/10

Ease of use 8/10

Value 8/10

Artificially limited by the SATA III

connection as it is, the real question is:

do you want 2TB of storage in a 2.5-

inch removable drive form factor at the

lowest price around?

Rating 8/10


Linux distribution Reviews

Parrot Security 3.5

A modern day Robin Hood with a conscience, Shashank Sharma tests

security measures while remaining anonymous. Until now…

In brief...

The Debian-based rolling-
release distro is

available as an

installable Live

medium. As well

as its wide array

of popular penetration


testing tools, it

has many privacy

and cryptography

tools, too. While

comparisons with

Kali Linux are only

natural, the

distribution can

also serve as a

regular desktop

for privacy

and security-conscious users.


Security assessment essentially

means protecting your network

infrastructure from

unscrupulous individuals. Specialised

distributions featuring a vast collection

of popular tools to help you do just that

have been around for quite some time,

with many competing for dominance in

the field of vulnerability assessment.

Parrot Security OS is one such

Debian-based rolling release

distribution. While that structure is

identical to Kali Linux, arguably the

most popular penetration testing distribution,

Parrot Security has enough tricks up its

sleeve to impress novice and

experienced administrators alike. For

one, unlike its myriad peers that are

designed to be run as a Live medium,

Parrot can be installed to disk and thus

features a number of everyday

productivity apps, and you can fetch

and install more using the repositories.

All the specialised tools are housed

in the Parrot menu, which is further split

into neat categories with various subcategories

where warranted. For

instance, the Information Gathering menu

is further divided into SSL Analysis,

DNS Analysis, etc, apart from multipurpose

tools such as nmap which

aren’t relegated to any sub-category.

The Wireless Testing menu similarly

offers submenus for 802.11 and

Bluetooth tools, among others.
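By way of example, a first sweep with nmap – one of the multi-purpose tools mentioned above – might look like this (the target range is illustrative, and you should only scan networks you are authorised to test):

```shell
# Service-version and OS detection across a local subnet (run as root)
nmap -sV -O 192.168.1.0/24
```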

Designed in collaboration with

Caine, Parrot features the best tools
and suites in digital forensics, covering
analysis, evidence
Features at a glance


All essential offensive

security, cryptography

and privacy in a neatly

packaged distribution.

Neat menus

Parrot manages to present

its vast software offerings

in smart menus that are

easy to navigate.

You can use this Mate powered desktop to have a go at the neighbours’

wireless router. Strictly for educational purposes. Or a prank. (NO!–Ed)

management and reporting tools. With

your security in mind, the distribution

also ships with several cryptography

and encryption tools to safeguard your

data. It also boasts out of the box Tor

support featuring torbrowser, torchat,

Anonsurf and various other privacy

tools to mask your online presence.

The latest release also ships with

native VirtualBox and VMWare guest

support, unlike its peers.

Teething troubles

For any Linux distribution, but especially

one geared for specialist use,

installation is a key challenge. Parrot

users can choose to install to USB, with

or without persistence, or to disk

without being forced to first boot into

the Live environment. What’s more, the

Curses based standard installer is

complemented by a GTK-driven one for

users more at ease with a mouse.

Which turned out to be a good thing as

the distribution refused to launch the

installer from within the Live

environment without reporting error

messages. Also, the installation failed

during one of our tests when we chose

to install /home and /tmp to separate

partitions using encrypted LVM. Barring

these two hiccups, the distribution

worked flawlessly even on machines

with unimpressive RAM.

One area where the distro lags

behind pack leader Kali Linux is

documentation, which is minimalist,

albeit functional. Until early this year,

the project didn’t even have a dedicated

and independent forum board. The

newly launched forum boards are fairly

active but with only a handful of posts

as of now, it’s not exactly a vast

information resource.

Based on Debian, Parrot Security OS

has often been unfairly compared with

its more successful peer Kali Linux,

despite being more similar to Caine.

However, Parrot’s repository of tools,

comparable to Kali, is complemented

by an arsenal of cryptography software

coupled with various anonymising tools

to ensure your online presence is

always masked should you so desire,

and that communications are

encrypted. As a result, Parrot Security

OS establishes itself as a wise fit for

pentesting purposes with enough

privacy tools to make Snowden proud.

That’s the author’s Robin Hood side

rearing its head again. LXF


Parrot Security OS 3.5

Developer: Frozenbox Network

Web: http://parrotsec.org

Licence: Various

Features 8/10

Performance 7/10

Ease of use 8/10

Documentation 5/10

A fully equipped distribution with

tools spanning three genres. A better

forum and documentation is needed.

Rating 7/10



Reviews NAS distro

FreeNAS Corral

Shashank Sharma looks at the popular NAS solution that has been

reborn much like its reviewer.

In brief...

It’s one of the

most popular

open source network-attached
storage solutions,
based on the FreeBSD
operating system.

It’s overflowing

with NAS-related

features and can

be extended with

plugins. The latest

release marks a

major milestone

for the project.

There’s no dearth

of open source

NAS solutions but

NAS4Free, Amahi

and the Debian-based Open
Media Vault are

three designed

for advanced

home users.

The latest release of the popular

network-attached storage

solution is a ground up rewrite.

FreeNAS 10 as it was known during its

development is now called FreeNAS

Corral because the operating system

can now be used to corral data, storage

devices, virtual machines and docker

containers under one redesigned

management interface.

Better support for the self-healing

OpenZFS copy-on-write filesystem that

ensures data is never overwritten,

support for Active Directory and

FreeIPA directory services, and

improvements to the backup and

replication functionality, are some of the

‘minor’ improvements. ZFS is one of the

main reasons for the popularity of

FreeNAS. It boasts useful features such

as software RAID, known as RAID-Z,

and filesystem snapshots that can be

scheduled and stored remotely.

The real highlight is the addition of

FreeBSD’s bhyve hypervisor that you can use

to create and launch guest operating

systems from inside your NAS. If you

fancy containers rather than virtual

machines, the release also integrates

support for Docker as well.

The overhauled browser-based

management interface is now more

modern-looking and intuitive. Along

with all the capabilities of its previous

avatar, it now also lets you fiddle with

the new virtualisation capabilities of the

software. FreeNAS Corral also includes

over a dozen easy-to-use templates.

With these you get pre-installed, fully

Features at a glance

New interface

Exposes powerful features

in a fairly intuitive fashion,

accessible to users with

varying degrees of skill.

House and run VMs

Use your Corral-powered

NAS to manage and power

virtual machines and

Docker containers.

To familiarise users with the new interface, FreeNAS devs have created a

handful of detailed videos available at www.youtube.com/user/FreeNASTeam.

configured versions of popular BSD

operating systems and Linux distros

like TrueOS, FreeBSD, CentOS, Linux

Mint, Ubuntu, and more. You can also

create VMs from your own ISO images.

Under new management

The management GUI is one of the

most visible changes. The sidebar

notifications are a particularly nice

touch, especially when you initiate time-consuming

processes such as creating

a template-based VM. Another powerful

feature is volume creation.

Corral contains four canned profiles

for volume creation that strike the right

balance between performance, capacity

and redundancy. For example, if you

select Media from the menu, the

interface will award maximum

weightage to capacity and some for

redundancy at the cost of performance.

The good thing about the interface is

that it lets you alter the weightage and

create custom profiles graphically.

The interface also lets you assign

and use attached disks by dragging and

dropping them from the pool of

available disks into a storage pool

known as vdev in ZFS parlance. These

vdevs are automatically arranged in

RAID formats depending on the

number of disks in the pool. So if you

have added five disks to a vdev, they’ll

be arranged in RAID-Z3, which means

that you wouldn’t lose your data even if

three of the drives in the pool fail. You

can also encrypt the ZFS volumes for

additional security.
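Corral drives all of this from its GUI, but the underlying concepts are plain ZFS; on any system with the ZFS tools, the equivalent five-disk RAID-Z3 pool could be sketched by hand along these lines (pool and device names are illustrative):

```shell
# One vdev of five disks in RAID-Z3 (tolerates three drive failures)
zpool create tank raidz3 da0 da1 da2 da3 da4

# Inspect the resulting layout
zpool status tank
```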

The interface also lets you add disks

as spares so that when a drive fails, the

system will automatically replace the

failed disk with one from the pool of

spares. Similarly, when you create a

share, say an SMB or a NFS share, the

interface lets you specify all related

advanced options from within the

browser window. For experienced

FreeNAS campaigners Corral includes a

new scriptable command line interface

that can automate and control every

aspect. If you are running the previous

version, FreeNAS 9.10, you can also

upgrade to Corral without issues. The

FreeNAS devs plan to support FreeNAS

9.10 and release fixes and updates for

“as long as there is an audience.” LXF

FreeNAS Corral

Developer: iXSystems

Web: www.freenas.org

Licence: BSD License


Features 9/10

Performance 8/10

Ease of use 9/10

Documentation 9/10

With the new interface and features,

this is suitable for everyone from home

users to enterprises.

Rating 9/10


Linux distribution Reviews

4MLinux 21.0

Ambitiously minimalist distros can be quite a tempting bait, but will

Shashank Sharma bite on this occasion?

In brief...

A minimalist

distro with many

unique traits. With

its included

software, the Live

distro can be used

for maintenance

purposes, to

unwind with

multimedia, to run

different servers,

and even to

indulge in some

good old games.

Users can also

investigate the

Antivirus Live CD,


and TheSSS – all

official forks.

The 4M in the name denotes the

four areas of focus for this

single-developer distro:

Maintenance, Media, Mini-server and

Mystery. The 500MB Live distribution

requires less than 1GB of storage

should you decide to install it. Being a

Maintenance distro, 4MLinux offers

Photorec, Testdisk, and various tools to

help you recover data from a corrupted

Windows installation and to manage

partitions. Also included are tools to

help you play an assortment of audio

and video files, with all the relevant

codecs pre-installed. You can also use

4MLinux to run FTP, SSH, and HTTP

servers. The Mystery component refers

to the bundled games, which include

the original Quake and Doom, Tetris,

Pacman, Sokoban and others.

Unique approach

4MLinux lays claim to being the only

Linux distro that can automatically

repair itself. This is made possible

because of the choice of Busybox as its

init service. Apart from the core system,

which includes Busybox, BASH and

kernel files, the init system builds

various directories, configuration files in

/etc, and so on, during the first startup.

During each subsequent boot, 4MLinux

inspects each of these components and

rebuilds them if necessary.

The choice of Busybox also means

the distro has to be run as the root user.

But for security purposes, the distro

creates system users automatically,

such as when starting a browser.

Features at a glance

Custom scripts

4MLinux ships with

various custom tools

such as udev, restart, zk,

connect, server and more.

Self healing

You can delete or corrupt

your /etc directory and

still reboot into a working system.


Featuring JWM as its window manager, 4MLinux easily manages to justify its

name with the bundled software.

If your hardware is not properly

identified, you can run the udev

command to identify and configure all

the connected devices not properly

configured during boot. Another unique

feature is the distro’s ability to restart

itself with the restart command, in

order to apply changes to configuration

files without a reboot.
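From a root shell, these two custom 4MLinux commands are invoked simply by name; a quick sketch:

```shell
udev      # re-detect and configure hardware not set up correctly at boot
restart   # restart 4MLinux in place, applying edited config files without a reboot
```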

Unlike most other distros, 4MLinux

packages are xzipped tar archives and

called ‘addons’. All installed packages

are listed in the /var/4MLinux folder.

Additional packages, such as drivers,

can be downloaded from
http://bit.ly/4Mdrivers. You can install addons

using the custom-built zk package

manager with zk addon_name.tar.xz.

Use zk update to update the installed

4MLinux to the latest stable release.
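Put together, a typical zk session looks like this (the addon filename here is hypothetical):

```shell
zk my_addon.tar.xz   # install a downloaded addon (hypothetical filename)
zk update            # upgrade 4MLinux to the latest stable release
```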

While 4MLinux doesn’t ship updates

to its collection of packages as most
other Linux distros do, occasionally at the

start of the month it ships point

releases to the stable channel, so zk

update will keep your system updated.

Also, the Snap packages under the

Extensions menu are not available out

of the box. Clicking on them launches a

script which installs the package.

4MLinux has had problems in past

releases with configuring wireless cards.

Although correctly identified, it failed to

enable the Intel wireless card on one of

our test machines. To be fair, this

particular card has troubled many a

Linux user, across various distros.

The spartan documentation can be

accessed at http://4mlinux.com. The

docs on installation and desktop are

blog posts from 2013, but these are

updated with each new release when

required. A FAQ answers a lot of

questions, and users can also post

queries on the 4MLinux forum hosted

at LinuxQuestions.org where the lone

developer is very active. LXF

4MLinux 21.0


Developer: Zbigniew Konojacki

Web: www.4mlinux.com

Licence: GPLv3

Features 10/10

Performance 8/10

Ease of use 10/10

Documentation 9/10

A unique design, and well suited to

those who don’t already have a

favourite distro for maintenance tasks.

Rating 9/10



Reviews Linux games

Civilization VI

Gather around, children, as holographic great-great-granddaddy TJ Hafer



OS: Ubuntu

16.04, SteamOS


CPU: Intel Core

i3 530, AMD


Mem: 6GB


GPU: Nvidia

GeForce 650,


NB: Intel and

AMD GPUs are

not (officially) supported



regions are



Civilization VI is the ultimate

digital board game. More than

ever in the series, the board –

the world – is the soul of every

opportunity and challenge. As usual for

Civ, we build empires, compete for a set

of victory conditions, and fend off

warmongering leaders like that

scoundrel Peter the Great. But we’re

also playing for, with, and against the

board. Forests and deserts and

resource-rich tundras each influence

the flow of our civilisation, granting us

boons and burdening us with lasting

weaknesses. Bands of barbarians put

our farms in crisis, but also open up

opportunities to speed the

development of our military techs. The

glorious, challenging dynamics that

emerge from Civ VI’s redesigned maps

left us with no question that the storied

series has crowned a new king.

While Civ VI is probably the most

transformative step forward for the

series, its changes shouldn’t trip up

longtime players too much. You still

settle cities, develop tiles, train military

units, wage turn-based warfare, and

conduct diplomacy. It mirrored our

memories of past Civs closely enough

that hints from the in-game adviser

were all we needed to course-correct

when something we hadn’t seen before

came our way.

But there are so many of these new

features that it could feel overwhelming
at times.

Unit stacking is more flexible this time around.

The depth and variety of

systems resembles a Civ game that’s

already had two or three expansions

added on top – from the new Districts

that perform specific tasks and spread

cities out into an often messy but

somehow pleasing sprawl, to a whole

separate ‘tech’ tree for civic and cultural

progress that ties into a sort of

collectible card game for mixing policy

bonuses to build a unique government.

What binds everything together is

the map; the map itself, and its cities,

iron mines, and festival squares, is more

alive than ever. Unworked fields lie

barren, and you can tell how many

citizen slots in a commercial district are

taken up by the level of bustle

occupying its streets. It’s a pretty

brilliant way of keeping you engrossed

and focused on what matters.

The tech trees and the leader

interaction screen are the only parts of

the UI that hide your soaring cities from

view. The latter of the two involves fully

animated, 3D representations of

everyone from Montezuma to that jerk

Peter the Great who thinks his

moustache and his science bonus from

tundra tiles are so cool, even though

they’re not and we’ve had bombers in

range of his second largest city since

the Atomic Age, ready to wipe that

stupid grin off his face. They’re all very

well voice-acted, with the return of

native language dialogue from Civ V.

There’s never a time that you can

feel you’ve filled every tile with the most

obvious ‘correct’ district or

improvement and call it a day. The need

for foresight is unending. There are

always sacrifices to make, like when we

fell behind in culture because the only

eligible tile for a theatre square was the

one we’d been saving to build a rocket

launch site to clinch a science victory.

It’s a fantastic, richly realised way of

forcing difficult decisions at every bend

in the river and ensuring no two cities

you build will ever look or feel the same.



There is a level of trial and error that

could cause some legitimate frustration

in the first few races to the space age.

When everything is fresh and new, you

might not realise that you’re plopping

down a university campus in a place

you should have waited to build a

neighbourhood several centuries later.

Some type of city planning tool enabling

mockups of where everything was

going would be a boon.

Another way the map has become a

much more important part of Civ VI is in

how it ties into the tech and civics tree.

Every tech and civic has an associated

mini objective that will trigger a

“Eureka” moment and pay off half the

cost immediately. Founding a city next

to an ocean tile speeds up progress

toward Sailing. Building three industrial

districts with factories jumps you ahead

in a quest to embrace communism.

(Viva la Economic Policy Slots!)

It’s not all a reinvented wheel,

though. The Civ staples of war and

diplomacy have returned recognisable,

but honed to the sharpest edge we’ve

ever seen. We particularly enjoyed the

way AI leaders are now given agendas

(one public, and one that must be

uncovered through espionage, building

a positive relationship, or observing

context). These overtly tell you what the

leaders like and don’t like, and make it

theoretically possible to stay on

everyone’s good side through the whole

game if you’re willing to jump through a

lot of hoops.

In the event that hostilities do break

out, Civ VI has split the difference

between V’s one unit per tile and IV’s

Clash of the Doomstacks to reach a
happy middle.

Regions are no longer constrained to your city centre.

Support units such as

medics and Great Generals can attach

to and occupy the same tile as a regular

combat unit like a pikeman. In the mid

and late game, you also gain the ability

to combine two combat units into a

Corps, and later you can add a third to

make an Army, which is a more

powerful version of that unit that only

takes up a single tile. This adds some

new layers and tactics to a model of

warfare that could get predictable and

repetitive, as it did in Civ V.

Civ’s score breathes life into all these

conflicts and conferences. Christopher

Tin’s new main theme, Sogno di Volare,

is just as sweeping, catchy, and

beautiful as Baba Yetu. The real magic

happens past the menu screen,

however, where each and every civ has

a main theme that grows more complex

and epic as you go through the ages.

When we looked down upon

everything we’d built as our Mars

colonists blasted off to barely snatch

victory away from Peter and his

doubtlessly mustachioed cronies, every

tile struck us with a sense of history.

There’s now a tree for civic and cultural progress.

The sprawl of the Delhi-Calcutta
metroplex reflected moments from the

windows of its skyscrapers. There was

the little tentacle we’d made by

purchasing tiles to get access to coal.

There was the 3,000-year old farmland

we had to bulldoze to place an

industrial-era wonder. And just beside

where our first settler had spawned, at

the foot of the soaring peaks that had

protected us from marauding armies

for generations, was the new growth

forest we’d planted on the site of a

former lumber mill to have enough

uninterrupted nature for a National

Park. For each valley and steppe and

oasis, we could tell you why we had

developed it the way we did, much

more meaningfully than “Because hills

are a good place for mines.” As the

board shaped our empire, and we

shaped it, the history of the civilisation

and our decisions accumulated and

followed us right up to the threshold of

the stars. And that, more than anything,

is why we’ll never need another Civ

game in our life besides this one. LXF


Civilization VI

Developer: Aspyr

Web: www.aspyr.com

Price: £50

Gameplay 7/10

Graphics 8/10

Longevity 9/10

Value 7/10

Iffy AI aside, sight, sound, and

systems harmonise to make Civilization

VI the liveliest, most engrossing, most

rewarding, most challenging 4X in any

corner of God’s green earth.

Rating 8/10















Best KDE distribution

While he hops distros with bravado, Mayank Sharma has been quite timid when

it comes to moving away from Gnome-based desktops... until now.

How we tested...

Several major mainstream desktop

distributions offer a choice of

multiple desktop environments. You

can also install KDE on top of

virtually every desktop distribution.

For this Roundup, however, we focus

on distributions that are built around

the KDE desktop. For a project to

make the cut, the distribution needs

to have an official KDE flavour.

KDE itself puts out the KDE Neon

Live environment, but we haven’t

included it since by its own

admission KDE Neon doesn’t

consider itself a distro, but rather a

showcase for the KDE desktop.

We’ll compare the distributions on

parameters such as their default

packages and update mechanisms.

The criteria that will play a major

part in helping decide the winner,

however, are look and feel. The distro

that helps users experience the best

of KDE will score over the others.








A Linux distribution ships with

a particular desktop

environment, and while we

can often replace it with

another without much trouble,

the majority of us tend to stick

with the default. It can be argued

that Gnome dominates as the

default desktop environment

and pops up one way or another

in a large number of

distributions. You’ll find Gnome-powered

environments across a wide

spectrum of projects, from mainstream

desktops to specialised builds.

While we aren’t saying that KDE

users aren’t plentiful, the desktop just

doesn’t get all the attention it deserves.

The chance of a new Linux user getting

started with KDE is fairly remote


considering the fact that a majority of

the routes that lead into Linux use a

non-KDE desktop environment by

default. Also KDE has historically been

the more adventurous of the desktops

and has managed to rile experienced

Linux users long before Gnome and

Unity started luring them away. While

the desktop has matured quite a lot

since the days of KDE 4, some

users have moved on to other

pastures and are missing all the

interesting developments.

In our bid to correct these

wrongs, we’ll look at some of

the distros that focus their

development around the KDE desktop.

In the Roundup you’ll find some distros

that will appeal to first time users as

well as those that will impress the

stalwarts with their mature platforms.


June 2017 LXF224 23

Roundup Best KDE distribution

Default apps

Do they dilute the mix with non-KDE apps?

The KDE project has a wide

collection of applications, so

much so that the project hived

off the official apps into a separate KDE

product called KDE Applications around
the release of Plasma 5. Besides

the usual productivity apps such as text

editors and image viewers, it also has

alternatives for web browsers and office

suites. In their bid to get you to sample

the best of KDE, some distributions

only include KDE apps while others mix

in the more popular mainstream apps

instead of their KDE alternatives.

Chakra expressly bills itself as a

KDE-centric distribution, which is why it

sticks to KDE apps by default. Some of

the more interesting apps that it

bundles include, among others, Krita,

Karbon, Kget, Kdenlive, digiKam, Kmail,

Calligra Office suite, Qupzilla web

browser and the Spectacle screenshot

utility. Other Qt-based apps include the

Bomi media player and Tomahawk

music player. There’s also a storage

service manager to fetch files from

Dropbox, YouSendIt, Box, Google Drive

and any WebDAV location. KaOS also

sticks to KDE apps by default. Besides

the usual ones there’s Quassel IRC,

Cantata client for MPD, Komoso

webcam, mpv media player,

SimpleScreenRecorder, SMPlayer,

SMTube, and Qupzilla. Also unlike

Chakra, KaOS includes the FatRat

download manager, a Seafile client, and

the proprietary Skype client.

Unlike Chakra and KaOS, Manjaro
offers KDE only as an official flavour; its
flagship offering is the Xfce release. This

is why Manjaro is more forgiving

towards non-KDE apps than the

previous two. That said, there’s still a

healthy collection of KDE apps but

most of the productivity apps are the

mainstream ones instead of their KDE

alternatives. You’ll find LibreOffice, VLC,

Firefox, Thunderbird, and the Steam

client in Manjaro. Similarly, both Maui

and Netrunner combine KDE apps with

popular open source apps such as

LibreOffice, Firefox, Thunderbird, GIMP,

and VLC. They are also the only

distributions to include Ndiswrapper for

installing Windows wireless drivers.
Besides the usual KDE apps, they have
quite a few proprietary apps as well
including the Steam client, VirtualBox,
and Skype.

You can install all kinds of community-maintained
packages from KaOS’s website with a single click.

While the usability of the KDE

bouquet of apps might take a certain

amount of getting used to, since the

primary objective of this Roundup is to

pick the distribution that puts on the

best KDE show, we will award a higher

weighting to the projects that stick to

KDE apps exclusively rather than

substituting them with popular and

familiar alternatives.







Chakra and KaOS offer a pure KDE experience.


Packaging policy

How do they manage their repositories?

Chakra separates packages into

several custom repositories

such as “desktop” that contains

KDE packages and “gtk” where you’ll

find various GTK apps. Then there’s the

“unstable” repository which contains

bleeding edge packages. There’s also a

community maintained repository

inspired by the Arch User Repository.

Although Netrunner isn’t a rolling release, the developers put out a bash script
that helps you upgrade to a new version.

KaOS first places all packages in a
build repository and after testing moves
them to one of the three main user-facing
repositories – core, main, and
apps. Core is rolling but only after the

components are tested thoroughly.

While apps are fully rolling, many

packages in the main repository that

houses libraries, drivers and firmware

are rolling and available to users after

ten days of testing. Like Chakra, KaOS

also has a community supported

repository. Packages from Arch end up

in an unstable Manjaro repository, and

are then tested for a week before

ending up in the stable repository.

Maui is based on Ubuntu Xenial and

uses its repositories for the core

components that also house KDE Neon

packages. For bleeding-edge

components, Maui has two backports

repositories. Netrunner has a similar

arrangement to Maui. It’s based on a

snapshot of the Debian Testing branch.

You can enable the Backports and

Debian Testing repositories to receive

continuously tested updates.







All distros go to great lengths to package apps from upstream.




Degree of customisation

What makes them stand out?

None of the distros in this

Roundup ship with a stock

KDE desktop. However, their

degrees of customisation vary. Some

have even tweaked the layout of key

KDE apps such as the Dolphin file

manager to make it more appealing to

their intended user base.

All distributions offer various types

of menu. There’s a simple application

menu, cascading popup menus used by

default by several distributions, as well

as a full-screen Unity/Gnome3 style

Application Dashboard.

Maui, Netrunner and KaOS offer no

custom tools to aid administration

which is relegated to the KDE System

Settings window. Maui and Netrunner

have both tweaked Firefox to include

add-ons such as AdBlock Plus and Ant
video downloader.

The Manjaro Settings Manager has a daemon that’ll notify users when there’s
a new kernel available for installation.

KaOS on the other

hand is the only distribution to place

the panel on the right side of the screen

instead of its usual place at the bottom.

It also offers the option to start a

Plasma Wayland session at the login

screen and uses the Kaptan first run

wizard to adjust various aspects of a

newly installed system such as the

behaviour of the mouse, etc.

Manjaro includes the Manjaro

Settings Manager (MSM) that helps

users change between the various

available kernels. The tool also

automatically installs all necessary

kernel modules for a selected kernel.

MSM also includes the hardware

detection module that lets you switch

between free and proprietary drivers for

connected hardware including graphics

cards. Chakra’s kernel is configured

with several MAC hardening options

including Tomoyo and AppArmor. The

distro also has a graphical mini-backup
script to back up personal data such as
address book, email, SSH keys as well

as application settings. There’s also the

Chakra repository editor module to

help you control repositories with ease.







Manjaro and Chakra go one step ahead of their peers with custom tools.

Upgrade policy and tools

Moving with the times.

Most of the distros are based

on Arch Linux. The two

exceptions are Maui, based

on Ubuntu and Netrunner, based on a

snapshot of Debian Testing. All projects

use the tools of the distribution they are

based on to deliver updates. The Archbased

distributions use Octopi, a

graphical frontend to Arch’s pacman

package manager, to install individual

packages as well as to keep the

installation updated. Maui and

Netrunner use Synaptic for installing

individual packages and Mint’s Update

Manager instead of Ubuntu’s.

One obvious difference between the

two systems is that the update process

of the distros that use pacman is more

verbose and sometimes requires

manual confirmations. Also, the

process of updating the mirror list

varies from one distribution to another.

Maui by default follows a

standard release cycle. If you

need bleeding-edge

software you can turn your

installation into a part-rolling

release by enabling the

Xenial backports channel.

Similarly you can run the

latest Netrunner release as

a stable install or enable

Debian’s testing repository.

In contrast, Manjaro is a

fully rolling release much like

Arch itself. KaOS too. The

developers ensure that even

if a package hasn’t been

updated in a year it will be

rebuilt so that it integrates well with the

rest of the system. Also when it comes

to the kernel, the distro offers two

options. There’s the stable Linux kernel

and a fully-rolling Linux-next kernel.

The green alien head in Octopi means the
distribution includes Yaourt, which lets you
install packages from the Arch User Repository.

The Chakra developers came up with a half-rolling
release model to deliver stable

core components along with the latest

apps and security updates, giving you

the best of both worlds.







KaOS and Chakra strive to strike the right balance between the latest apps on top of a stable core.




The KDExperience

How KDEesque are they?

Since they all use the KDE desktop, all

the candidates on test here look quite

similar at first glance. They also have

very similar list of default apps which makes

them look even more alike. What really sets

them apart from each other is their objective

and the user experience they aim to deliver.

Both of these become apparent as you spend

time fiddling around with the distribution

getting things done.

Some distributions are focused enough to

only appeal to a particular type of user while

others are flexible enough to mould

themselves to suit a large number of users. In

this section we’ll evaluate which distribution

makes the best use of the tools at its disposal

to present a more polished, refined and useful

KDE desktop.

Chakra Linux

The distribution bills itself as a KDE-centric distribution and you’ll be

hard pressed to find a non-KDE app in its menus. The developers also

strip GTK+ dependencies from popular Gnome-based packages that

you can install from its repositories. Chakra also looks inviting and uses

the SDDM display manager which is quite slick and integrates nicely

with the Plasma desktop. Besides the usual slew of KDE apps, the

distribution also includes several you will not find elsewhere such as the

Bomi media player, Tomahawk music player, and KDE’s Storage service

manager. Things related to the Chakra project such as links to the

documentation, forums, bugtracker, package changelog, code repository,

etc, are neatly arranged inside a Chakra submenu in the Application

menu. Clearly the goal is to give users an end-to-end KDE experience.


KaOS

The KaOS developers argue that while the diversity offered by open

source software is good, distributions should make some choices that

they think best for their users. To that end, KaOS is currently based on

Linux but is constantly evaluating the Illumos kernel as well. One thing it

has made up its mind on, however, is the KDE Plasma desktop and the

Qt toolkit. The distro does offer GTK packages in its repositories for

when the KDE/Qt equivalent don’t offer matching features. KaOS is also

the only distro to include the mpv media player, Kamoso, SMPlayer and

SMTube apps. The inclusion of the Kaptan desktop greeter that allows

users to tweak various aspects of their installation is a particularly nice

touch, though the placement of the application menu on the right side of

the screen takes some getting used to and there’s no apparent option to

place it elsewhere on the desktop.

Ease of installation

Can you anchor them with ease?

This is another one of those

parameters that really isn’t a

factor when choosing one of

these candidates. That’s because all the

distributions on test here use the

distribution-independent Calamares

installer. Calamares is actually a

framework that the distributions can

customise as per their target users. The

installer is pretty intuitive and visually

appealing and gets the job done without

too much fuss. Its partitioning feature

supports both manual and automated

partitioning tasks. It also includes a very

convenient Replace Partition option

that allows you to reuse a partition for

hopping distributions. That said, it’s best

for simple straightforward installations

and doesn’t yet support advanced

setups like RAID and LVM partitions.

One obvious difference between the

implementations of the Calamares

installer across the distributions is

purely cosmetic. The other difference is

that some distributions give you the

option to encrypt the root partition

while others do not. The installer as

implemented in Chakra, Maui and

Netrunner falls under the latter

category and doesn’t offer the option to

encrypt a newly created partition.

Conversely, Manjaro and KaOS do let

you encrypt the partition in which you

plan to install the respective

distribution. KaOS developers have also

tweaked Calamares to better detect

other installed operating systems and

distributions and identify them during

the partitioning stage.







All distros

are equally

matched as all

use Calamares





Manjaro

The KDE edition isn’t Manjaro’s flagship desktop, which is why the

distribution is more open to embracing non-KDE apps. That said, with

the exception of mainstream productivity apps like LibreOffice and

Firefox, the distro is chock-full of KDE apps. Manjaro is a fully rolling

distribution and the developers ensure their users are among the first to

sample the latest release of the KDE components including the Plasma

desktop, the Apps bundle and the Framework. Manjaro is also the only

distro to bundle the Kleopatra tool to manage certificates. The

distribution has its own set of custom tools bundled within the Manjaro

settings manager that assists users with administration tasks such as

switching between available kernels. Manjaro is also visually pleasing

and its interface is polished enough to maintain a consistent look.


Maui

Maui boots to a heavily customised desktop that’s probably designed for

large displays. On non-FullHD laptop resolutions the icons appear huge,

which robs the desktop of the polished professional look that you get

with some of the other KDE desktops. For example, some of the longer

application names are truncated which looks rather unprofessional and

disorienting. By default Maui uses the full-screen KDE launcher, which

feels much like Gnome 3 Activities and Ubuntu’s Unity. However, like the

other distributions, Maui offers three other launchers as well. It relies on

Mint’s Update Manager which makes it fairly straightforward to keep a

Maui installation updated. The distribution carries an interesting mix of

KDE, non-KDE and proprietary apps and the devs have worked to make

sure the non-native apps are integrated into the Plasma desktop.


Netrunner

By default, Netrunner uses the traditional application launcher with

cascading menus but like the others you can switch to any of the

alternatives. Thanks to its customisations and theme, it looks polished

and consistent despite a bunch of non-KDE and proprietary apps, much

like Maui. The customisation extends to key KDE apps such as Dolphin

and the System Settings window. For example, in addition to the usual

configuration modules, the System Settings window also includes the

System administration section. There’s also the Advanced section from

where you can manage Plasma services such as KRunner and Akonadi.

Like Maui, update duties are handled by Mint’s Update Manager, but the

devs should revisit their decision to use Synaptic for package

management instead of one of the other more intuitive front-ends.

Documentation and support

Where do you go for help?

The primary mode of dispensing

help and handling support

queries for all the distributions

on test here is the humble forum board.

Some of these, such as Chakra’s, are

fairly active and Manjaro is unique in

this Roundup in that it hosts

multilingual forum boards.

Maui has a blog for announcing new

releases. There’s also a Readme icon on

the desktop of both Maui and

Netrunner. While the one in Maui

displays some brief information about

the current release, Netrunner’s lists

information that’ll come in handy post-installation

and the distribution also has

an illustrated installation guide.

Chakra includes an illustrated

Beginner’s guide as well, and a wiki with

articles to help install and manage the

distribution. However, note that while

many articles are fairly detailed, some,

such as the one on Octopi, have a single

line of content.

On KaOS’s website you’ll find

various brief usage guides such as one

on pacman, one on switching to the

Linux-next kernel, another on installing

KaOS on an SSD, and so forth. Manjaro

trumps the lot with a channel on Vimeo

with over two dozen videos on various

aspects of the distribution and its

development. Their wiki is fairly detailed

and well categorised and will help users

install, manage, and troubleshoot their

installation. In keeping with its forums,

Manjaro also hosts multilingual IRC

channels and several mailing lists to

discuss different aspects of the project.









All the distros have an active community to help you out.




Verdict: Best KDE distribution

The verdict

The many similarities between

the candidates make this

Roundup difficult to judge. One

of the crucial criteria is the number of

non-KDE apps included by default

within a distribution. Since we are on

the lookout for the best KDE

distribution, we’ll look favourably

towards the one that sticks to the KDE

bouquet of apps instead of bundling the

popular and mainstream apps.

Maui brings up the rear mainly

because it doesn’t look as polished as

its peers on non-FullHD screens. It does

score points for an interesting mix of

KDE and proprietary apps but loses

some for the lack of documentation

and for its appearance. Its elder sibling,

Netrunner, is a well put together

distribution that does a nice job of

showcasing KDE but again loses out for

its decision to use mainstream apps

instead of the KDE alternatives.

In fact, the only real difference

between Maui and Netrunner is their

respective base distributions. Maui is

the continuation of the Kubuntu-based

original Netrunner distribution, but is

now based on KDE Neon and gets its

stable core components from an

Ubuntu LTS release. Netrunner is now

based on the Debian Testing branch

which will end up as the next stable

Debian codenamed Stretch.

Manjaro gets on the podium for

being one of the few projects developed

by a non-corporate team that’s still

available for both 32-bit and 64-bit

machines. KDE isn’t its first desktop of

choice and the distribution includes

quite a few non-KDE apps by default.

The top two distributions are both

available only for 64-bit architecture

and very closely matched. KaOS scores

points for including more third-party

apps by default than Chakra such as

Skype and for the customisations

options in the first-run wizard. It also

offers the option to run Plasma on top

of Wayland. KaOS also does a fine job of

preserving the KDE

ethos of choice

thanks to its

greeter application

that lets users

select the type of
menu, theme and such.

Chakra is also one of the most usable distributions
straight out of the box.

The distribution

loses out for the placement of the

application panel on the right side of the

screen, which takes some getting used

to and there’s no obvious way to

change. This leaves us with Chakra at

the top, which does a wonderful KDE

rendition and has an impressive

collection of KDE apps. Unlike its peers,

the distribution also manages to strike

the right balance between a stable core

and bleeding edge applications.



1. Chakra

Web: https://chakralinux.org Licence: GPL and others Version: 2017.03

A pure KDE distribution. Bleeding edge apps on top of a stable platform.

2. KaOS

Web: https://kaosx.us Licence: GPL and others Version: 2017.02

Another thoroughbred KDE distribution that’s a pleasure to use.

3. Manjaro

Web: https://manjaro.org Licence: GPL and others Version: 17.0

A full release using mainstream apps instead of their KDE alternatives.

4. Netrunner

Web: www.netrunner.com Licence: GPL and others Version: 17.01.2

Includes a healthy dose of KDE, non-KDE and proprietary apps.

5. Maui

Web: https://mauilinux.org Licence: GPL and others Version: 17.03

The KDE Neon-based version of Netrunner with the same apps.

Over to you...

Have we got it wrong? Have we got it right? What’s your favourite

KDE distro? Email your opinions to lxf.letters@futurenet.com

Also consider...

There’s no dearth of mainstream and smaller

distros with KDE desktops. The mainstream

ones include Kubuntu, OpenSUSE and Mageia,

all of which use a heavily modified KDE desktop.

You can try KDE atop Debian and Linux Mint.
Fedora’s KDE flavour uses a virtually
unmodified KDE desktop, while Korora’s KDE is
similar but with a larger number of default apps.

The Debian-based SparkyLinux and Arch-based

Antergos are two smaller projects.

You can also experience KDE straight from

the developers with KDE Neon. While the

Ubuntu-based distro includes an installer, its

developers only intend it as a demonstration

platform and not an everyday distro. If you are

not averse to non-Linux operating systems,

several BSDs also use the KDE desktop

including TrueOS, FreeBSD, and OpenBSD. LXF



Ubuntu 17.04

It’s been six months since the Yakkety

Yak escaped the confines of the build

system, which means it’s now time for

the Zesty Zapus to roam free.

Spring is here, and that can mean

only one thing. Actually it could

mean one of many things –

hayfever, youthful romances

blossoming in the quad, chthonic

Walpurgisnacht celebrations on the

hill – but those inconveniences

aside, what this springtime is really

all about is the new Ubuntu release,

numbered and named 17.04 and

Zesty Zapus respectively. We’ve

scrambled to get it onto our

coverdisc, moving press deadlines right

down to the wire, pleading with Polish

disc replicators and forcing this writer to

stay up late drinking gallons of coffee.

While the non-LTS releases aren’t

recommended for production systems,

they are thoroughly tested and great for

showing the direction next year’s LTS will

take. Users of previous releases will find the

“What this springtime is

really all about is the new

Ubuntu release.”

usual refresh of packages and fully updated

kernel. But with this release there’s also the

sense of a growing undercurrent towards

new technologies, namely Snaps (the

atomic packaging format) and Unity 8 (see

News on page 7 for where this mess is

going). They are not mandatory (and in the

case of the latter not really ready, as we’ll

see) but early adopters will enjoy seeing

the potential of these new tools.

While the days of deals with

advertisers and groundbreaking UI

changes (Ha!–Ed) are thankfully

behind us, there’s much to explore in

17.04, and in this feature we do just

that. You’ll find an overview of what’s

changed and what hasn’t, tips to improve

the default install, plus we’ll delve under the

hood to show you some hardcore tweaks.

So intrepid reader, read on!


What does the Zapus bring?

“Evolution, not revolution” might best describe the changes in 17.04, but there’s still plenty worth exploring.


Ubuntu aficionados will perhaps be most eager to

see the progress of Unity 8 with this release, and

we’ve dedicated two pages overleaf to the Mir-powered,

device-agnostic (and not long for this world)

desktop environment. But there’s plenty of other new stuff

to get excited about in Ubuntu 17.04.

Most of it though, is under the hood – the default Unity 7

session looks much the same as it has done since last year,

users aren’t going to have to learn to manage packages

differently (although that option is available) and the default

selection of installed applications remains unchanged from

16.10. However, starting at the bottom we have a brand new

kernel, 4.10, a new version of systemd, which now assumes

name resolution duties through systemd-resolved, a new

version of the X.org server and some handpicked

components from GNOME 3.24.

The newer kernel means better hardware support, in

particular AMD’s new Ryzen processors will work much more

efficiently with this kernel. Also on the AMD side there’s

support for their Polaris 12 architecture, and fan information

is now exposed via hwmon . There’s also experimental

support for unlocking Boost frequencies on Nvidia GPUs with

the free Nouveau driver. More importantly, users of newer

Nvidia cards can now control the onboard LED via a sysfs

interface. Plus there’s support for a slew of less-expensive

new hardware – yes, the chances are the new USB dongle

you just bought is supported!

One change that bucks the trend is that swap partitions

are done away with. These have been a staple of Linux installs

since the very beginning – when running out of memory was

a thing that happened regularly. Instead the default install

(unless you already have a swap partition, in which case you

can elect to continue using it) will use on-demand swap files,

which is a slightly more efficient way of doing things. The oft-repeated
but rarely considered advice to set up a swap

partition twice as large as the amount of RAM is downright

wasteful in an age where 8GB of RAM is the norm. That 16GB

swap partition could accommodate a whole other distro.
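You can check which arrangement your install ended up with, or set up a swap file by hand on an older release. A sketch (the 2GB size is purely illustrative):

```shell
# Show active swap areas - on a fresh 17.04 install expect a
# /swapfile entry rather than a partition
swapon --show

# Creating a swap file manually on an older release
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```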

This cameo appearance by jazz-hiphop pioneer The Herbaliser certainly got

our attention. A clean install occupies just over 4GB, by the way.

Printer panacea

Printing on Linux has always seemed to be far harder work

than it ought to be. The dark truth is that more often than not

people just print to a PDF and take that PDF to a Windows

machine or use Google Cloud Print when they want to get a

hard copy. As is the case with other bits of hardware, the

situation is confused by manufacturers offering drivers on

their websites that have not a hope in hell of working with a

modern Linux distribution, and actually stand a reasonable

chance of breaking it entirely.

Of course, drivers are available within the CUPS package,

but with 17.04 comes driverless printing, and perhaps an

amelioration to the ongoing situation. This only works with

some models, including those that support IPP Everywhere

(see http://www.pwg.org/dynamo/eveprinters.php) and

Apple Airprint, but it’s a start.
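If your printer is one of the lucky models, setting it up from the command line is a one-liner. This is a sketch: the queue name and URI are placeholders for your own printer, and avahi-browse (from the avahi-utils package) is only needed for discovery:

```shell
# Discover IPP printers advertised on the local network
avahi-browse -rt _ipp._tcp

# Create a queue using CUPS' generic 'everywhere' driver -
# replace the name and URI with your printer's details
sudo lpadmin -p MyPrinter -E -v "ipp://printer.local/ipp/print" -m everywhere
```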

Snap happy

Snaps (or snappy packages or whatever you

want to call them) and the tooling for dealing

with them, have been available in Ubuntu since

16.04. They solve a number of shortcomings of

current packaging methods. First and

foremost, thanks to containers being all the

rage they are installed in isolation to the rest of

the system. So even if you install a rogue snap,

it’s very limited in the amount of damage it can

do. Snaps are also, in one sense anyway, much

more portable than traditional .deb packages.

They can be installed on any distribution

supporting snaps, since they include all of their

dependencies. This of course has the adverse

effect of making them larger than traditional

packages, but also makes the packaging

process far more simple – developers can

package once for many distributions. Snaps

can be easily updated too, so common gripes

about packages in repos being out of date are

assuaged. For example, if you wanted to have

an always-updated version of the Telegram

secure messaging app, then all you need is:

$ sudo snap install telegram-latest

There are many more snaps available on the

UAppExplorer website https://uappexplorer.com/apps?type=snappy. One of our

favourites is the mighty OhMyGiraffe, which

we’ll leave you to investigate. Snaps are not the

only dependency-free, distro-agnostic way of

packaging things – the Flatpak (formerly xdg-app)

and AppImage formats have been around

for a while, too.

Flatpaks, essentially a more desktop-centric

approach to application packaging, are

supported by the Software app, too. Hopefully

these formats can coexist harmoniously and

lead to developers’ lives being made easier

without causing fragmentation.
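The snap tool covers the rest of the packaging lifecycle too. A few everyday commands (telegram-latest being the package from the example above):

```shell
# Search the store and see what's already installed
snap find telegram
snap list

# Update every installed snap to the latest published revision
sudo snap refresh

# Remove one you no longer want
sudo snap remove telegram-latest
```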



Unity 8 desktop


Ahead of its time? We’ll let you decide which best describes Unity 8.

Discerning music fans will remember Queen

Latifah’s seminal 1993 single U.N.I.T.Y., which

challenged the misogynistic trends prolific in the

rap music of the era. Unity the desktop environment has,

since its introduction in 2010, challenged things too: users’

patience, their ideas about privacy, and the notion that

sideways launcher bars have been a bad idea since

Microsoft Office tried it in 2000. Much of the shouting and

bluster of seven years ago has died down now. For one

thing, Unity, thanks not least to a persistent application

bar, maintains some hint of the traditional desktop ideal

that GNOME 3 threw out of the window.

People have, on the whole, got used to the idea that

navigating through a bunch of cascading menus to find the

program you want isn’t really the most efficient way to do

things – that a few carefully chosen keystrokes (can’t we have

both?–Ed) can get you to where you’re going much more

swiftly. Users of Emacs and keyboard-driven window

managers have been telling us this for years, but Unity

doesn’t force you to learn three- and four- fingered

gymnastics to launch an application, and gives you

something that looks modern and friendly. Try sitting a novice

down in front of a vanilla i3 session to see what we mean here.

Unity has made concessions to many of the most uttered

gripes (excepting the left-handed window buttons) over the

“In some ways it’s a shame

that we’ll never see Unity 8 in

a more polished, final form.”

Old style X11 applications, where they work, look odd with

double titlebars. In this case Files seems to have lost its

icons, too.

years and the resulting desktop is one that is at least

tolerated by many and may actually be enjoyed by a few.

Barring a little uncertainty about where menu bars should

be drawn, Unity 7 hasn’t changed a great deal in three years,

and 17.04’s default desktop does nothing to challenge this.

Unity 7 is effectively mothballed, and the future is Unity 8. At

least it was, up until right before this feature was due when

Mark Shuttleworth announced it wasn’t. Ubuntu 16.10 offered

a technical preview of Unity 8, which despite being a little

buggy and rough around the edges, certainly gave an idea of

what the final product would be like. With 17.04 we still have

what is essentially a technical preview, but one that offers a

Nudging the edge of the screen with the mouse, or swiping right, activates this stylish switcher, à la Windows Vista circa 2007.


much more complete experience, and a glimpse of where but

for market forces things might have gone. So what can you

expect when you select that stylish 8-ball? Well let’s get this

out of the way first, it’s probably not something you’ll want to

use for your desktop affairs. You’ll be greeted by a sparse

looking desktop, with a glaringly empty Scopes window (more

on this in a second) and (maybe this was just us) slightly

weird font aliasing. But it’s not so different from Unity 7—

there’s a launcher bar on the left, a system area (with the

familiar volume, network and logging out options) top-right

and if you affectionately bump the right-hand edge a stylish

application switcher appears. Clicking the Ubuntu button (top

left) will slide out a lexicographically sorted list of applications.

Much ado has been made of Scopes and the role they

were to play in Ubuntu’s converged future. They are, for all

intents and purposes apps such as one would find on other

mobile platforms, but apps that can shapeshift into desktop

mode on demand. That empty Scopes window can be

populated by clicking the up arrow at the bottom and

choosing from the current offerings. Unfortunately the

selection of available Scopes is pretty barren – there’s a

handful of useful/entertaining ones, including Reddit,

Wikipedia and the Weather channel. Once you’ve favourited a

selection of these, you can flick through them by dragging (or

swiping, which feels much less awkward, if you have a

touchscreen) the top bar.

Deactivate Desktop Mode in the system menu and an

amazing transformation occurs: the active application fills the

screen. The Scopes window acts as the home screen on

mobile platforms and so can’t be killed (not with conventional

weapons anyway) so this will fill the screen in the absence of

any other applications. In this way we see scopes have

something in common with Android’s widgets, in that they

present preview information which the user can expand by

opening another application. For example, the Wikipedia

scope displays a selection from that site’s “Featured Articles”

page, as well as a search bar. Choosing an article will display a

short preview and a single image (if available) whence the

user can elect to open a web browser to view the rest of it.

Unity 8 was first introduced with Ubuntu 13.10, and back

then it was very much an experiment. With this release we’ve

seen a great deal of progress since 16.10, but we’d stop short

of recommending it for day to day use – it’s still very much a

tech preview (and now it’s not going anywhere). We had a few

crashes and strange errors (though remember we’re working

with the beta and writing this in the past), and the whole

experience is still very rough around the edges. But for all

that, the sharp window edges look modern, the new system

area is tidy and there’s nothing glaringly wrong with how

things are laid out. In some ways it’s a shame we’ll never see

this in a more polished, final form. But hey, we can at least

look forward to how Ubuntu on GNOME will look next year.



The Wikipedia scope is useful for looking up the animals that Ubuntu releases are named after, among other things.


App incarceration with Libertine

Many traditional X11-based applications don’t

work with the Mir display server that powers

Unity 8, not out of the box anyway. For example,

try opening Firefox or LibreOffice Writer and

you’ll be met with a splash screen and spinning

progress doodle that goes nowhere. These last-generation

applications can be coerced into

running using the Libertine sandboxing layer.

This runs old packages safely inside an LXC

container. One of the (many, many) reasons

there is such a push to move away from X11 is

because there is no inbuilt mechanism for

isolating applications from one another. Any

client can read input events and window data

from any other, which makes writing keyloggers

and other spyware much easier. This is a

serious problem, especially when we consider

that different users may be connecting to the

same X server at the same time. With Mir (and

Wayland, the other display server of the future),

applications are confined from each other, and

things ought to be much more secure.

By sandboxing old X11 applications they can

be isolated from the rest of the system and,

through the use of multiple containers, each

other. These legacy applications will need to be

installed into the container separately, even if

they already exist on your system. Furthermore,

each container requires around 500MB of

space (before you even do anything with it) so

you may prefer just running X11 applications

natively, ie, in Unity 7. There are some packages

to install before you can see what the future

running the past looks like:

$ sudo apt install libertine libertine-scope

Scroll through the application list and you’ll

find a Libertine Manager has appeared. Start

this and you’ll be prompted to configure

“Classic Application Support”. Then you get

some options about what to call your container,

whether it needs 32-bit support and whether or

not you want to protect it with a password.

Once that’s all set up we’re ready to install

some packages, which can be done by

selecting the container and clicking the plus

sign in the top right. You can search for

packages by name, or choose from recently

downloaded .deb files. As we’re accustomed to,

it’s also possible to administer Libertine

containers and the packages there interred

from the command line, using libertine-container-manager. For example, to install the

whole LibreOffice suite in a container called

zesty we can type:

$ libertine-container-manager install-package

--id zesty --package libreoffice
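Containers themselves can be created and inspected from the shell as well. A sketch based on the tool’s subcommands as we understand them – check libertine-container-manager --help for the exact flags on your system:

```shell
# Create a new container without going through the GUI
libertine-container-manager create --id zesty --name "Zesty apps"

# List existing containers, and the apps installed in one of them
libertine-container-manager list
libertine-container-manager list-apps --id zesty
```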

Some non-Mir applications will work without

being containerised in this way, through the

Xmir compatibility layer, but they are drawn

with Unity 7 titlebars below the new style ones.

This looks odd, but does give you a choice of

which close button to use. The standard

Ubuntu Terminal application doesn’t start, but

there is a Mir alternative, which rather overcautiously

asks you for your password before it

starts. ‘Tis better to be safe than sorry.



Zesty flavours



Criticism of the Unity desktop environment (DE) is

fairly common. Whether this is due to it genuinely

pushing users’ buttons, or people not being able to

find appropriate buttons, or just internet people jumping

on the fun-poking bandwagon (as they are wont to do), we

couldn’t say. Whatever the reason, dislike of the desktop is

no reason to dismiss the whole distribution. Ubuntu has

many thousands of users and a thorough testing process

to ensure those users are well catered for. Huge efforts are

devoted to making sure applications work out of the box

There are of course other distributions that are awesome

in their own way, such as Arch Linux (will you shut up about

Arch, this is an Ubuntu feature! – Ed), but all too often users

are too keen to run away based on appearances (have they

not seen Beauty and the Beast?). So many distros are based

on Ubuntu now anyway, so why not choose an official flavour?

You’ll get all the stability of the Ubuntu base together with the

desktop environment of your dreams. Assuming you dream

of KDE, Xfce, LXDE, MATE, GNOME or Budgie, that is.

There are a few other official flavours – Edubuntu (for

schools), Ubuntu Studio (for artists, musicians, videographers

and other creative types, see LXF223) and Ubuntu Kylin (for

Chinese users) – but these all still use the Unity DE. This

release cycle we also mourn the loss of the long-lived

MythBuntu, which since 2005 has eased the pain of getting a

working MythTV setup. Alas, due to a dwindling development

team that project has folded. Meanwhile Lubuntu continues

its move away from the ageing and buggy GTK2 in favour of

the LXQt desktop. The changing of the guard won’t happen

this release, but users wishing to see how progress is coming

along in the lightweight-yet-modern, Qt5-powered desktop

can do so by installing the lubuntu-qt-desktop package. As

we’ll see later, this and other desktop packages can be

installed on any Ubuntu flavour, so that users can choose a

desktop at log in, depending on their mood.
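Trying any of these desktops on an existing install is just a package away, for example:

```shell
# The in-progress LXQt session mentioned above
sudo apt update
sudo apt install lubuntu-qt-desktop

# Other flavours' desktops install the same way, e.g. MATE
sudo apt install ubuntu-mate-desktop
```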

We’ll focus on two official flavours here, Budgie and

GNOME, because the others only feature minor package

updates this release. Budgie is the newest addition to the

Ubuntu family. The avian desktop originated in the Evolve OS

distro, which nowadays goes by Solus Linux (see LXF208),

but has since seen contributions from without. Budgie has

hitherto been a GNOME-based affair, but in trying to create a

desktop with elements that hearken back to GNOME 2, albeit

with modern conveniences like insta-search, the developers

have consistently run into things that needed to be hacked

around. This is symptomatic of a sea change in the GNOME

ecosystem, in which GTK, once a general purpose toolkit for

making nice and consistent looking widgets, is becoming

more and more subsumed by the GNOME aegis. As a result,

trying to do things with GTK3 that don’t involve the GNOME

desktop is becoming more and more challenging. So the

Budgie team, like LXDE, will be migrating (haha – Ed) to Qt5

in the future. Interestingly, Unity 7 is an alternative shell for

GNOME that is implemented as a Compiz plugin. It has

Budgie’s Raven configuration menu is useful, and GNOME Maps has come a long way too.


managed to circumvent this GTK-GNOME assimilation thus

far, in part by copious patching and in part by relying on a

different toolkit called Nux. If this all sounds very convoluted

then don’t worry, it is, and that’s why Unity 8 uses the device-agnostic

Qt5/QML instead.

Back to Budgie, then, and the first thing one notices when

it starts is a friendly welcome screen which features a very

democratic browser ballot screen. This puts one in mind of

the one the EU forced Microsoft to implement in 2009 after

deciding their bundling of Internet Explorer with Windows was

anti-competitive. Chromium is installed by default, but users

can install Chrome, Firefox or Vivaldi with a single click. This is

a nice touch, seeing as web browser preference is a

contentious issue among users – many prefer Google

Chrome, and resent the fact that many distributions make it

hard to install, whereas equally many would be flabbergasted

to find such an epitome of proprietary nastiness installed by

default. Budgie, unlike its parent GNOME desktop and

(maybe this is being unduly harsh) Unity has user-configurability

at its heart. There’s a handy sidebar – the

Raven Sidebar – which can be popped out by clicking the

icon on the far right of the main toolbar. From here one can

control widgets, notifications, panel layouts and font settings.

It’s actually quite amazing how many controls they’ve

managed to fit in here without it feeling cramped. That said,

the roadmap for the next Budgie release sees some of these

settings slated for shipping off into the main settings applet.

We’re in two minds about this: on the one hand it’s nice to

“It’s amazing how many

controls they’ve managed to fit

in without it feeling cramped.”

have easy access to these settings, but on the other hand

things like font sizes are the kind of thing that only get

modified once (if at all), rendering their accessibility

somewhat moot.

Budgie uses the Plank dock/launcher area, which is

initially set up as a left-handed affair, but it’s smaller and less

distracting than Unity’s. It can be configured to autohide or

even intellihide and can be augmented with ‘docklets’ such as

a clock, a CPU monitor or a desktop pager. Budgie also

makes use of the rather excellent Terminix application. Not

only can it split horizontally and vertically (like Terminator),

but it can also save your session in gloriously parsable JSON.


Recipes is a new addition to the GNOME applications family, and with this

tasty Alsatian Grumbeerekiechle, it’s a most welcome one.

Like Unity, there’s plenty of bits of GNOME 3.24 in here, but

obviously not as much as in Ubuntu GNOME, to which we will

soon direct our fiery gaze.

This release of Ubuntu lines up nicely with the release of

Kernel 4.10, the latter having been out in the wild for five

weeks before the Zapus was released, and obviously available

for testing long before this. Of course, different projects follow

different release cadences and things don’t always align so

nicely. One such example is the GNOME desktop

environment, version 3.24 of which was

released just two weeks before Ubuntu. At

the time of writing this has yet to be fully

incorporated in any Linux distribution. Be

that as it may, parts of GNOME 3.24 have

been co-opted by vanilla Ubuntu (Calendar,

Videos, Disks). This is largely thanks to

GTK’s new long term release cadence, which makes it easier

for developers outside of GNOME to better track the library

and avoid being bitten by API changes. However, one gets

more (but still not the whole shebang) of the GNOME 3.24

experience if one uses the official Ubuntu GNOME flavour, or

one adds the desktop (all 800MB of it) with sudo apt-get

install ubuntu-gnome-desktop. Ubuntu needs a heavily-patched

Nautilus and terminal to integrate with its

peculiarities, so older (3.20) versions are included. Likewise

GNOME Software (patched and rebaptised Ubuntu Software)

is based on the 3.22 edition, but it includes support for Snaps

and Flatpaks, which was not present in previous Ubuntus.

The latest GNOME release doesn’t have too

many user-facing changes, although it does

feature Night Light, which tweaks your display

temperature to reduce the amount of blue light

it radiates during night hours. By making the

display warmer at night time, eyestrain and

tiredness should be reduced. Such programs

have been around for a while, for example

there’s the cross-platform but closed source

f.lux, or the open source Redshift. People, in

particular the people that write for Linux

magazines, are spending more and more time

looking at screens well into the small hours and

their retinal health and circadian rhythms are

suffering. So it’s nice to see this functionality

being incorporated natively into GNOME now, if

nothing else it’s one less out-of-place icon in

the system tray. Also Redshift doesn’t currently

work with Wayland, and seems to disagree with

heavy duty applications such as Chromium,

whereas Night Light played nicely with

everything during our testing.
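Night Light can also be driven from a terminal via gsettings, which is handy for scripting; these keys live in GNOME 3.24’s org.gnome.settings-daemon.plugins.color schema:

```shell
# Turn Night Light on
gsettings set org.gnome.settings-daemon.plugins.color night-light-enabled true

# Target colour temperature in Kelvin (lower means warmer)
gsettings set org.gnome.settings-daemon.plugins.color night-light-temperature 3500

# Follow local sunset/sunrise rather than a fixed schedule
gsettings set org.gnome.settings-daemon.plugins.color night-light-schedule-automatic true
```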

Besides saving our retinae, GNOME 3.24

includes some new applications which can be

installed separately – Recipes is pretty much

what you’d expect. It can make shopping lists

for you and lets you share your own recipes too.

There’s a fullscreen mode too which could be

handy if you have a touch-enabled laptop in the

kitchen. There’s also Games, which can

manage not only natively-installed DRM-free

games, but also your Steam library and libretro-powered retro games.




Graphics drivers


Owners of recent AMD cards are well served by open source drivers, but those with new Nvidia cards will want to install the proprietary versions.

Many people use Ubuntu as a gaming platform,

and with good reason – besides SteamOS it’s

where most of the Linux testing takes place. In

fact, Steam’s (ageing) runtime is based on old Ubuntu

libraries. It’s easy to install Steam on Ubuntu – just enable

the Multiverse repository from the Software & Updates

application (type soft into the Dash and you’ll find it), and

then it’s available either from the Ubuntu Software

application or via a good old fashioned:

$ sudo apt-get install steam

The package that gets installed this way is just a sort of

bootstrap installer that downloads and installs the Steam

client into your home directory. In this way Steam takes care

of its updates outside of the package manager, and for the

most part does a good job of not breaking itself. If you’re

going to be playing graphically intensive games, or even

graphically moderate games, and you have a discrete

graphics card then you may wish to install proprietary

graphics drivers. More specifically, if you have an Nvidia card

Proprietary drivers can be installed from the Additional Drivers tab, but if

you need something newer check out the PPA.

then you probably will, and if you have a newer AMD card

you probably won’t.

Thanks to the new AMDGPU driver framework, open

source support for newer (we’ll clarify this in a moment) AMD

cards is actually pretty good. A proprietary driver, AMDGPU-

PRO is available, but this is intended for industry-type

OpenCL workloads rather than boosting gaming

performance. Initially, the new driver only supported the

“Volcanic Islands” and later cards (the Rx 200 series,

introduced in late 2013, and newer), but with Kernel 4.10

support now goes back to the Southern and Sea Islands

cards (HD7000 and HD8000). The older ATI proprietary

driver known as Fglrx is no longer maintained, and doesn’t

work with newer versions of X anyway, so unless you like

archaic software stack-related masochism this should be

avoided. Either buy a newer graphics card or stick with the

older, libre radeon driver.

Cards using the Nvidia Maxwell (GTX900) and Pascal

(GTX1000) architectures really need the proprietary driver to

unleash their not inconsiderable gaming potential. This can

be installed from the Additional Drivers tab in the Software &

Updates utility. But since these are not updated as frequently

as Nvidia release new drivers, you may prefer to use the new

graphics-drivers PPA. Historically, unsuspecting users have
got themselves into all kinds of driver hell by downloading the
binaries directly from Nvidia’s website, using a combination of
experimental PPAs, copying and pasting instructions from

Reddit, or all of the above. The graphics-drivers PPA aims to

be a one-stop shop for people who are willing to risk slightly
more unstable drivers in exchange for newer features. While the

drivers from this PPA may occasionally exhibit peculiar or

crashy behaviour, they should not lead to unbootable

machinery (failsafe boot is your friend–Ed) or data loss.

To use the newer drivers is a straightforward matter of:
$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
$ sudo apt-get install nvidia-378
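Should a driver from the PPA misbehave, ppa-purge offers a relatively painless way back: it downgrades the PPA’s packages to the versions in the standard Ubuntu archive.

```shell
# Roll back everything installed from the graphics-drivers PPA
sudo apt-get install ppa-purge
sudo ppa-purge ppa:graphics-drivers/ppa
```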

Of course, there are plenty of FOSS games available in the

repos if proprietary software gives you the heeby-jeebies.


The open source Nvidia driver, Nouveau, is a

remarkable project. Unlike AMD, Nvidia has

contributed very little in the way of free code

for its desktop cards. Back in the day there was

a free driver called nv which contained code so

obfuscated that many a developer went mad

just trying to parse it (joke). So the Nouveau

project was set up in 2005 to reverse engineer

the hardware and write drivers that could be

free, maintainable and sanity-preserving.

Nouveau is the default driver for Nvidia

hardware and generally works well, insofar as

users generally don’t find themselves booting

to a black screen. But if you need performance,

Nouveau is not likely to be much help. The

reason is that these cards boot in a low-power

state, and without using experimental (or

proprietary) code, it’s been tricky to get them

out of this state. Work is ongoing here, and

kernel 4.10 adds some (albeit manual)

reclocking features to Kepler-based (GTX600-

700 series) cards. Cards older than this aren’t

really suited to modern gaming, so owners of

these may as well stick to the free driver

(although the legacy 304 series proprietary

driver supports some of them). Those

interested in experimenting with the new

Nouveau features will want to grok the

nouveau.config=NvBoost=2 kernel option, and

the /sys/kernel/debug/dri/0/pstate entry.
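For the adventurous, the reclocking dance looks something like this (performance level IDs vary by card, and this is experimental enough to hang some machines):

```shell
# Boot with nouveau.config=NvBoost=2 on the kernel command line, then:

# List the card's performance levels; 'AC:' marks the active one
sudo cat /sys/kernel/debug/dri/0/pstate

# Select a level by its ID from the listing above, e.g. 0f
echo 0f | sudo tee /sys/kernel/debug/dri/0/pstate
```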


Bland is grand


Ubuntu 17.04 arrives with few attention-grabbing features. Here’s why that’s a good thing.

So here’s an admission, all those attention-seeking

words on the cover are a little misleading. There

haven’t been any particularly revolutionary changes

in this release, it’s not necessarily more stable than the

last one and, realistically, you’re not going to gain much

beyond larger version numbers by upgrading the LTS to

this release (shut up!–Ed).

Believe it or not, these are all Good Things: major changes

upset people and installing a new operating system every six

months does likewise. Indeed, if you are happy with your

Ubuntu 16.04 installation then consider following Canonical’s

advice and keeping it – LTS editions are extensively tested

and supported for five years, unlike the interstitial releases

which are supported only until the next one comes along. This

release, and many before it, have all been about gently

refining a well-proven recipe. Yes, there have been major

changes to Ubuntu’s plumbing over this period – we’ve seen

init systems come (systemd) and go (Upstart), we’ve seen

application menus move from inside the application window

to the top bar (and then, optionally, to the titlebar) and we’ve

shed no tears for the loss of the obnoxious Ubuntu Software

Centre. There’s also a great deal of effort just to maintain

compatibility with newer versions of the underlying GNOME

components on which Unity depends and, more generally,

just to track new releases of everything in the repos. But for

the most part anyone who was familiar with Ubuntu since

about 12.04 would have had no difficulty navigating

subsequent Ubuntus, and that remains the case for 17.04.

With the exception of Windows 8, we could say the same

thing about Windows since Windows 95, and about the OS

formerly known as OS X, since its inception in 2001.

Innovation is great, but so is not breaking something that isn’t

broken – a key feature of any good desktop environment is

that it stays out of your way, and the notion of shiny,

disruptive features runs entirely orthogonal to this.

Add to this the general statistic that people are doing less

and less with their desktop machines (Android is now more

popular for browsing the web than Windows according to

recent surveys), opting rather to use apps on their phones,

tablets or ocular implants, and you begin to see that trying to

lure more users to Ubuntu on the desktop with headline-grabbing

features is largely futile. Such an endeavour will just

infuriate or alienate existing users. And, relatively speaking,

there were never very many of them to begin with.

Beyond the desktop

Remember that Canonical is a company and desktop Ubuntu

does very little to boost its profits. These come from lucrative

enterprise support contracts and public cloud offerings. The

Server version of Canonical’s OS, and in particular the many

customised cloud images of it, has orders of magnitude more

users and installations than desktop Ubuntu and this ratio is

only going to increase with time. Latterly Ubuntu Core is
coming to the fore, having already been used to power
drones, robots, SDR base stations and more. It hopes
(through transactional updates, AppArmor and container
voodoo) to bring some decorum to the wild and
security-barren world of IoT, and thanks to Canonical’s diverse
industry partnerships there will soon be many, many more
devices running this OS.

When Unity 8 and Ubuntu Phones are finally released all the complaining
will be... Wait, what? Not happening at all you say. Oh dear.

There is still a huge chunk of

common code between these flavours and the desktop ones,

which helps to keep the development process from becoming

too fragmented, and reduces some of the burden of
maintaining a libre desktop OS. A universal and secure

packaging mechanism, ie, Snaps, relates desktop and device

installations. The now abandoned push towards Unity 8

aimed to unify user interfaces across desktops and mobile

devices, and again spoke to this idea of a common, converged

codebase. It’s a different approach to Red Hat, which uses its

free Fedora distro as a staging area for the commercial Red

Hat Enterprise Linux, and it’s a different approach to SUSE,

the ‘other’ corporate Linux player, which sponsors

OpenSUSE. But making money out of free software requires

innovation and ingenuity. Some users may disagree with

Canonical’s vision, and the beauty of open source means

there’s no shortage of other distros for those users to run to,

but from our point of view it’s an exciting time for Ubuntu and

an exciting time for Linux in general. LXF



Maker Faire UK 2017

Les Pounder travels once again to Newcastle upon Tyne, to see not

the fog upon it, but the latest projects of the maker community.

Maker Faire UK is the largest

Maker Faire in the UK and

draws crowds from all over the

world to take part in a weekend

long celebration of maker culture. Diversity

is the main theme of every Maker Faire. You

may think that it’s all about circuits and

code, but if you look deeper you can see that

there are many projects that serve other

purposes. 3D printers provide makerspaces

with the tool to print custom limbs for

disabled children, musicians bend the

circuits of toys, never designed to make

music, into other-worldly instruments, and

children learn how to become future makers.

At every Maker Faire UK there are big

names, and this year we saw the Raspberry Pi

Foundation, CPC, and Kitronik rub shoulders

with the general public. Everyone, no matter

their age or skill, was a ‘peer’; we are all equal in

our pursuit of learning how things work, and

how we can use them in our next project. It

was humbling to see so many children there,

learning to build their own projects, and quite

a few showing the older generations how to

realise a great idea.

Going around the exhibitors it was clear to

see that there are still two boards that inspire

the makers more than any other device. They

are the Raspberry Pi and the Arduino. The

Raspberry Pi was seen controlling robots, trail

cameras, and even a light sculpture that

represented the heartbeats of the makers at

the event. The Arduino was seen controlling

open source 3D printers and laser cutters.

These boards are part of the maker community
and Maker Faires are the reflection of what a
community is interested in – with each

changing year we see these boards being used

in new ways.

Being at Maker Faire UK is a chance to

sample these new ideas, and to speak to the

people who created them. Everyone shares

their ideas and lessons learnt. It’s just what

being a maker is all about.



Robin Hartley

Robin is a fresh face in

the maker community.

He is at Maker Faire UK

to demonstrate his first

project, a keypad that

can be programmed to

perform multiple tasks

at the push of a button.

But what makes this project different to

others is that it is programmed using a

block interface.

Linux Format: Hi Robin, thanks for taking the

time to talk to Linux Format. So can you tell

the readers who you are?

Robin Hartley: Hi, I’m Robin and I am studying

for a Masters degree in Chemical Engineering at

the University of Sheffield.

LXF: So you are not an Electronics Engineer?

RH: No, electronics and ‘making’ is just a hobby

because I really enjoy it. Along the way I have

taught myself circuit board design, web

development and 3D CAD design.

LXF: So how long ago did you start to teach

yourself these new skills?

RH: I started about two years ago,

programming with the Arduino, because I said

that I could for a job interview. Luckily I had six

months to learn and prepare myself.

LXF: This weekend at Maker Faire UK you

are running a stall to show off your latest

invention. Can you tell us more about it?

RH: The Amazing Shortcut Keypad, a slightly

tongue-in-cheek name. Computers can do

incredible things, now we can do things with

technology that just 20 years ago were merely a

dream. But in all these years the interface has

changed very little. We still have a keyboard, and

we still have a mouse. So how do we typically

interact with this new functionality? By using

massive menus, and having to navigate through

them with a mouse, typed commands or
keyboard shortcuts. None of these are

really efficient.

I started by building the Keypad for myself,

as the CAD software I was using required me to

remember and type a series of commands,

something that I was sick of. I just wanted to

press a button and have it typed for me. So now

I can press just one button and it will type out

the command. All of this is made possible by an

Arduino at the heart of the Keypad, that works

as a keyboard, and to program the Keypad I

have created a simple block editor that enables

anyone to customise it to perform whatever

action or shortcut they desire. I am also keen to

enable users to share their custom maps so

that the community can benefit from each

other’s help. For example a Photoshop user

may wish to share their layout so that others

can use Photoshop more efficiently.

LXF: This keyboard could have far-reaching

applications, not just for efficiency but for

helping others to create custom inputs for

medical/assistive technologies.

RH: Yes, you can do so much with it, but there

will be others who think of new ideas to expand

its potential.

LXF: So why did you embark on this journey

to become a maker? Why did you feel the

need to “understand how things work”?

RH: I think that the fundamental ‘thing’ about

makers is not that we wish to make, but that we

want to see things

come to life. No

matter what they

wish to make, they

each have the skill

and enthusiasm to

learn and make it

happen. That is more

about being a maker,

rather than having the tools and knowing how

to use them.

LXF: So you could say that you have

a “Maker Mindset”?

RH: Yeah, it’s almost like in a startup it is called

a “Growth Mindset”, the idea that you can learn

and develop yourself and constantly get better.

And this can be applied to a hobbyist level to

create the same mindset but for makers.

LXF: Maker Faire UK is much more than just

cool technology, then?

RH: Indeed, I am on my own at my stall so I

won’t get the time to go round and see the

exhibitors, but the public are great to talk to,

hear their stories, be inspired by their awesome

projects and possibly make connections. The

inspiration, the spark is what brings projects to

life, and the exhibitors and general public at

Maker Faire UK have plenty to inspire me.

LXF: At Maker Faire UK we can see the

breadth of Maker Culture, from helping the

disabled with assistive products to learning

how to knit and sew. Will there be a limit as

to what can be encompassed under the

name “Maker”?

RH: That is a tough question! There isn’t a

boundary that can be drawn, we can’t say for

example that laser cutting is, but 3D printing

isn’t. It’s more about the attitude, the

willingness to learn, the enthusiasm to bring

something to life and the sense of community.



The community plays a big part in the

movement, everyone helps each other, with

perspectives and offers of advice.

LXF: Right now your keypad is still a

prototype, are you looking to crowdfund it?

RH: Yes, the plan at the minute is to start the

crowdfunding in mid-June 2017 and my website

is http://theamazingshortcutkeypad.com.

It’s very interesting to go from something that I

soldered up in my bedroom, to something that I

can crowdfund, being able to get the technology

to a point where it can be sold to the public as

well as all the boring stuff (passing regulations).

This learning journey is something that the

maker community lends itself to, as it is a

natural progression of the maker movement.




Adrian McEwen

Adrian is well known in

the maker community,

in fact he wrote the

book on designing

products for the

Internet of Things. He’s

a keen advocate of

independent manufacturing and scaling

hobby projects into commercial products.

LXF: Hi Adrian, thanks for taking the time to

talk to LXF, can you tell the readers a little

more about yourself?

Adrian McEwen: I mostly describe myself as

someone who connects strange things to the

internet. This could mean a bubble machine

connected to Twitter, or sensors monitoring a

wave energy machine sat in the English

Channel that send their data over 3G for

analysis. My work is really varied and

interesting. I am also an author, I wrote

Designing the Internet of Things for O’Reilly.

Lastly I am one of the founders of Liverpool’s

hackspace, known as DoES Liverpool.

LXF: How long have you been a maker?

AM: I would say at least ten years, but I have

always been a maker at some level. But in

2006/2007 I first heard about the Arduino. In

fact in 2007 I purchased my first Arduino from

the Arduino factory while I was living in Turin.

LXF: Would you attribute the Arduino as being
your route into being a maker?

AM: Yeah, the Arduino definitely helped. I’ve

worked in embedded software for my entire

career, and it was always internet related

projects. For example I was part of the team

that made the first web browser on a mobile

phone in 1997. In the 2000s I had seen some

other single-board computers, such as Sun

Spots, but they were always quite expensive

and complicated to use. But when the Arduino

came along it was only £20 and did not require

any specialist software or knowledge to start

using it. So if I didn’t like the Arduino, all that I

would lose was the £20 and the board would

gather dust on the shelf. This price level and

ease of use was the point at which I thought

that I could justify having a play and start

teaching myself electronics.

LXF: Not only are you a maker, but you are an

advocate of Independent Manufacturing.

AM: Yeah, I’ve been running my own company

since 2002 and we work on Internet of Things

projects for clients. Being

based in a makerspace

means I have access to

laser cutters, 3D printers

and plenty of tools, which

also means that I can

make one of almost

anything that I need. But

the company has always

been interested in how we move into product

design and make your own product. For

example we have been making the Ackers Bell,

which is an internet-enabled bell that can be

linked to online stores, and will audibly ring

when the store makes a sale. So for the product

side of things we thought “How do we make

more of these?” and I don’t want to go through

the whole venture capital system, raise lots of

money and go to China then manufacture

20,000 units which I then have to go and sell.

I’ve been interested in how you scale things up

and how we can get more people who are

makers. So the process of scaling up but

without this risk is really interesting and the

makerspace has those sort of links into

manufacturing. We have people who come to

DoES Liverpool with something that they wish

to make on the 3D printer for sale... now, in the

makerspace we can make say around 200 of

these objects, but any more than that requires

looking at the local supply chain. So as more

people do projects in DoES Liverpool, and you

get this right across the makerspace network,

there is a real community of makers who share

their information. So your route into

manufacturing is that you come to the

makerspace to use the tools, and then learn of

this wonderful community of makers and

manufacturers that is there to help you realise

your project.

LXF: For those wishing to become a maker,

is there a path that they should follow?

AM: No, there are so many different routes into

the maker community, everyone has their own

story that illustrates the path that they followed

to become a maker. For me it was to have a

project that I wanted to achieve and that is what

made me dedicate the time to go and learn

different things for my own benefit. The maker

community has such a nice, diverse crowd, the

stereotype is men hunched over computers,

but the maker community is broad, covering

many different skills such as arts, fine arts,

engineering, haute couture, and along with the

established interests of technology helps there

to be more diversity and the male stereotype is

not reflective of the community at large.



Lorraine Underwood

Lorraine is from Ireland

and works with

teachers in Lancashire

to help them gain

confidence in teaching

computing. Lorraine is

new to the maker

community and this is her first Maker Faire

UK, and she has chosen to show off her

new project: Nora the Explorer, a first-person

virtual reality robot.

LXF: Hi Lorraine, can you tell the readers a

little more about yourself?

Lorraine Underwood: I’m the Computing at

School coordinator for Lancaster University.

Computing at School is a government funded

organisation that helps teachers teach

computing. Today at Maker Faire UK I am here

with my project Nora the Explorer.

LXF: So what is Nora the Explorer?

LU: Nora is a little robot car, based around the

Raspberry Pi Zero W. She has a camera on top

of her and you control the car using a Wiimote,

and the video feed from the camera is sent to a

virtual reality headset giving the player a first-person

view of the course.

LXF: Amazing, so the Raspberry Pi Zero W is

able to handle the video, Bluetooth and Wi-Fi

while driving the course?

LU: Yes, the Wiimote is connected using

Bluetooth and the Wi-Fi on the Zero W is

broadcasting an access point that can be

connected-to using a mobile phone in the

headset. The effect is similar to FPV drones.

LXF: So what inspired you to build this?

LU: I just wanted to because I believed that I

could do it, this coupled with really wanting to

take part in Maker Faire UK. I’m fairly new to the

maker community, and looking into the various

genres. I went to Liverpool Makefest last year

and I was really excited by the maker

community. But for Maker Faire UK I wanted to

be an exhibitor, so I needed a cool idea to make.

I saw someone do the same project with an

Arduino and use a radio transmission to

transmit the video to the user, but the costs

were extremely high. But everything I see I

always think “I can do that for cheaper!” so that

is what drove me to create Nora.

LXF: You’ve just hit upon the maker

mentality “I can do that!”

LU: Yes! And I always add “and cheaper”.

LXF: So how new are you to the community?

LU: This time last year I took part in Picademy,

the Raspberry Pi Foundation’s teacher training

course. I have a degree in Computer Science,

and I have always loved programming. But I

have never really been into the physical side of

computing, the making aspect. But at

Picademy I had a go. Now for most of Picademy
I was “nah, this isn’t very good” as I am not into

Minecraft, and I am tone deaf, but the simple

LED lighting up, that was amazing and started

my interest in making. Since then I have hacked
together projects with Neopixels, using a Raspberry
Pi 3 and my staircase! The outside temperature

is displayed using a string of Neopixels down

my staircase. The data is from a public API

which uses your map reference to provide a

local weather report. The code simply checks

the temperature and then uses a conditional

test which will trigger the corresponding

neopixels. If the temperature goes above 25C

then the staircase is illuminated red, but living in

the UK I am unlikely to see that happen!

The coding side of the project was simple, as I

am already a coder, but the electronics side of

things was a steep learning curve. But luckily I

had help from my husband who is exceptionally

good at electronics. I’m a little scared that I may

break a Pi. For example I didn’t know that the

Neopixels and the Pi could share the same

power supply. For my next project I hope to

break a Raspberry Pi!

LXF: So what is your

next step?

LU: The tricky next step

is learning electronics. I

don’t know how you learn

it. Is it self taught? I’ve

got a book on electronics, which I am slowly

going through, it is helping but I am a rather

impatient learner. The community is amazing

and I can ask questions via social media.

LXF: Have you had the time to visit the other

Maker Faire UK exhibitors?

LU: Yes I’ve had a wonderful time going round

and talking to the other exhibitors. I met

Jonathan Sanderson who has made an

amazing “Heart of Maker Faire” which records

your heartbeat and then plays the recording

using lots of Neopixels. We’d both talked over

Twitter about the problems that we faced...

Maker Faire is a great place to put a face to the

Twitter handle and have the opportunity to

discuss and share ideas. LXF




So, you think your system’s fast? Faster than

Jonni Bidwell’s? Almost certainly. And now you can

prove it with our awesome guide to speed testing.

The different hardware

components in your computer all

run at given speeds or have easily

accessible speed limits. If your

hard drive or SSD is attached to a SATA 3.0

bus then it has a theoretical maximum

transfer rate of 600MB/s, while

a fancy M.2 SSD (connected to a

fast enough PCIe slot) will easily

manage 2.5GB/s on a good day,

and the bus itself (using 4 PCIe

3.0 lanes) can manage 3.9GB/s.
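That last figure is easy to sanity-check: PCIe 3.0 signals at 8GT/s per lane and uses 128b/130b encoding, so each lane carries just under 1GB/s of payload:

```shell
# Back-of-envelope check of the 3.9GB/s figure for four PCIe 3.0 lanes:
# 8 GT/s per lane, with 128 payload bits for every 130 bits transferred.
result=$(awk 'BEGIN {
    per_lane = 8e9 * (128 / 130) / 8     # usable bytes per second, one lane
    printf "%.1f GB/s", 4 * per_lane / 1e9
}')
echo "$result"   # 3.9 GB/s
```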

Yes, your CPU will change

frequency according to load (likewise your

GPU), and yes these things can be

overclocked, but all these numbers can be

looked up or otherwise calculated. The trouble

is, most of the time they don’t correlate with

real world performance, since most real world

operations use a variety of different aspects of

the system. For example, given all the specs of

all the hardware involved, it’s still hard to say

how quickly a system will boot a vanilla install

of the latest Fedora. Likewise what kind of FPS
you’ll see if you turn everything up to 11 on

Shadow of Mordor. These real world

measurements are tricky because they involve

all kinds of intangibles – overheads introduced

by the filesystem, code paths used by the

graphics driver, latencies introduced by

scheduling in the kernel. Tricky to predict, but,

minus a few caveats, not so tricky to measure.

Benchmarking is the dark art of performing

this measurement. It involves running standard

programs that can be compared across

different machines, each program

testing some particular aspect of

the system. But nothing’s ever

simple and it’s easy to get

benchmarking wrong. Background

programs, thermal throttling,

laptop power-saving and display

compositors can all interfere. Games in

particular tend to do better on one processor

manufacturer or GPU driver. Using a particular

title and assuming the results will give an

objective ranking is foolhardy.




For all the controversy surrounding

systemd, it gave us a simple one-shot

way of measuring boot time – just

running systemd-analyze critical-chain will

give a good measure of the wait from GRUB to

the login screen. For more detail try systemd-analyze
blame which shows how long each

individual service takes to complete. Bear in

mind that services start in parallel, and long

running tasks (eg, updating the mlocate

database) happen in the background, so they

don’t get in the way as much as one might

suspect. A cool feature of systemd’s boot

profiling is its ability to make pictures, for

example systemd-analyze plot > ~/boot.svg

will graph the data from the blame command

above, emitting a file in your home directory

which can be viewed in a web browser or

vector graphics program (eg, Inkscape).
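Those three commands can be strung together into a throwaway report script; the ~/boot-report output directory is our own choice for this sketch:

```shell
#!/bin/sh
# Collect the systemd boot-timing views discussed above in one place.
out="${HOME:-/tmp}/boot-report"
mkdir -p "$out"
if command -v systemd-analyze >/dev/null 2>&1; then
    systemd-analyze critical-chain > "$out/critical-chain.txt"
    systemd-analyze blame          > "$out/blame.txt"
    systemd-analyze plot           > "$out/boot.svg"  # view in a browser or Inkscape
    echo "report written to $out"
else
    echo "systemd-analyze not found; this needs a systemd distro" >&2
fi
```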

Finally, a more detailed graph can be

generated using systemd-bootchart. This is a

little more involved, and requires modifying

grub.cfg, so if you’re not au fait with how this

file is laid out you may want to skip this part.

Some distros (eg, Ubuntu and Fedora) have

separated this functionality into a separate

package, systemd-bootchart, which you’ll

need to install for this to work. The bootchart

module is invoked from the kernel’s init

parameter, usually used to specify an

alternate init system. In this case though, the

usual init process (ie, /sbin/init) is forked off

while the bootchart times, probes and

measures a plethora of variables. So to

summon the bootchart on next boot, edit

/boot/grub/grub.cfg adding:
init=/usr/lib/systemd/systemd-bootchart


to the line that loads your kernel (it begins

with linux ). All going well boot should

complete and an SVG image named

bootchart-$DATE.svg should have popped

up in the /run/log/ directory. This typically

isn’t very well drawn (axes labels

end up all atop one another), but

does give more information than

systemd-analyze plot , such as

CPU usage and disk I/O.

Optimising the boot process isn’t

generally a thing that’s done

anymore – systemd starts units in

parallel, and knows which units

depends on what, so there isn’t

much to be gained from shuffling

these things about. In general, if

you want your system to boot

faster, replacing that old hard

drive housing your OS with a

shiny new SSD is probably the

best way to go.

“Results obtained with ‘dd’
may be a little slower than the
device’s true capabilities.”

Storage speedgun

And once you’ve done that, why

not benchmark your new

acquisition? This can be done with the humble

dd utility (the same one you use to write

Linux ISOs to USB sticks), subject to a couple

of gotchas. The method we demonstrate here

writes to a regular file, so it depends on the

filesystem and partition alignment, but should

(so long as there’s no sneaky hardware-based

caching going on)

provide an accurate

measure of sequential

read and write speeds.

First mount the drive

and cd to a directory

on it where you have

read/write access (or become root if you’re

feeling powerful). We’re going to write a

1024MB file full of zeros, of which the

/dev/zero device has an interminable supply:

$ dd if=/dev/zero of=testfile bs=1M

count=1000 conv=fdatasync

Note the fdatasync option supplied

through the conv parameter. It ensures data

is actually written out, rather than being left in

a buffer, before the process finishes. On our

test system, which is nothing special, dd

reports as follows:

1048576000 bytes (1.0 GB, 1000 MiB) copied,
8.61276 s, 122 MB/s
which is fairly typical for an old spinning rust
drive (speed varies over the platter too).

Disks tells us that our write rate is less than half our
read rate, which is the usual way of things.

We

can now recycle this file to measure the drive’s

read speed, but we must be careful – the file,

or bits of it, may be cached, so we first instruct

the kernel to drop those buffers before doing

the read test:

$ echo 3 | sudo tee /proc/sys/vm/drop_caches

$ dd if=testfile of=/dev/null bs=1M


Rerun the second command to see why

the first is necessary – our system reported

an amazing (and wrong) 7GB/s without

having first dropped the cache.
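Put together, the whole write-then-read test looks like this. The file name and the small 64MiB size are our own choices to keep the sketch quick; raise count to 1000 to reproduce the run above, and note the cache drop needs root (without it the read figure is meaningless):

```shell
#!/bin/sh
# Sequential write test, cache drop, then sequential read test with dd.
f=ddtest.$$
dd if=/dev/zero of="$f" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
# Drop the page cache so the read actually hits the disk (root only).
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
dd if="$f" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$f"
```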

Results obtained from dd may be a little

slower than the device’s true capabilities,

since filesystems and fragmentation get in the

way. Reading directly from the device

circumvents this delay, and GNOME’s Disks

utility lets you do just this. Disks is included in

Ubuntu and most Gnome-based distros. The

package is usually named gnome-disk-utility

should you need to install it manually. Start

the program and select the drive you wish to

test, then click on the two small cogs

(additional partition options) below the

Volumes diagram. Select Benchmark Partition

and then Start Benchmark.

The glxgears utility is often mentioned in

conjunction with testing OpenGL functionality.

If you’ve never encountered it before (and

indeed even if you have) it renders three cogs

spinning in a way that is relatively pleasing to

the eye. It also emits FPS data to stdout, but

since most configurations lock framerate to the

monitor’s refresh rate (so-called vsync) this

generally just displays something very close to

60fps. It doesn’t take much GPU (or even CPU

power, relatively speaking) to maintain this

refresh rate nowadays, so unless you like cogs

glxgears has dubious benchmarking value.

However, if you’re using drivers built on the

Gallium framework (ie, Nouveau or AMD’s open

source drivers, not Intel or proprietary drivers)

you can use glxgears to test a little-known

feature, the Gallium Head Up Display (HUD).
$ GALLIUM_HUD=fps,cpu,VRAM-usage glxgears




You’ll get a nice Fraps-style overlay – Steam

Settings offers a basic In-Game > FPS counter

too – showing FPS, CPU and VRAM usage,

among other things. This can be customised

and works with any OpenGL program, including

Steam. (Start Steam from the command line,

setting the GALLIUM_HUD variable there. It

draws all over the Steam client, but you won’t

see that once the game starts and it’s much

easier than figuring out how to start games

from the command line). For more details set

the environment variable to “help” .
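As a concrete sketch (the fps,cpu panel list is just one plausible choice, and the 10-second timeout is only there to bound the demonstration):

```shell
#!/bin/sh
# Enable the Gallium HUD for an OpenGL program. Only Gallium-based
# drivers (Nouveau, radeonsi and friends) honour this variable.
export GALLIUM_HUD="fps,cpu"
if command -v glxgears >/dev/null 2>&1; then
    timeout 10 glxgears || true   # bounded run for this demo
else
    echo "glxgears not found - install the mesa-utils/mesa-demos package" >&2
fi
```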



At this point, you’ll see that a write test is

available, but this requires the device to be

unmounted so it won’t work on the partition

containing your OS. Furthermore it should not

be used on a device containing data that

hasn’t been backed up – the test writes

random data all over the place, and while in

theory it writes back the original data

afterwards, a power outage or crash would

preclude this, so don’t shoot yourself in the

foot here. The non-destructive read test is

easy to instigate, the default sample numbers

and sizes are fine, so

just hit the Start

button. The scatter

plot and graphs will be

updated as the test

progresses, showing

access time and read

rates respectively.

So far we’ve used ‘grassroots’ tools to do
benchmarking. Windows users, by comparison,
have a variety of suites, demos and other
fully featured tools to put

their system through its paces. Cinebench,

CrystalDiskMark, Catzilla, FurMark and the

Futuremark suite (3DMark, PCMark and

VRMark) are often used to test the latest

hardware on snazzy tech websites, but alas

none have Linux versions available. Some of

these can be made to work in Wine, but that’s

not likely to give you anywhere near a fair

measurement. Cinebench and 3DMark in

particular are quite DirectX-centric, so if you

wanted to get reasonable data, you’d have to

mess around with the CSMT and/or the

Gallium Nine patches for Wine and your

graphics drivers. This is beyond the scope of

this feature. Happily, there are some splendid

(and free) programs available for Linux.


Perhaps the most exciting of these is the

one that’s only just been released as we type

this: Unigine Superposition. This uses the

Unigine 2 engine to test VR capabilities, render

beautiful scenes and give you pretty

minigames to play. Unigine’s other famous

benchmarks, Valley and Heaven, can be

downloaded for free from https://unigine.

com/en/products/benchmarks/. Advanced

Our machine didn’t do very well at Unigine heaven – 5fps on low quality with no

tessalation. Most underwhelming.

and Pro versions are available for a fee (a

substantial one in the latter case), offering

extra features such as report generation and

commercial usage. Despite the complexity of

the scenes they render, getting Superposition,

Heaven and Valley running is easy. Just

download the .run file from the website,

extract it and run it. For example, to run

Heaven, do as follows:

$ sh Unigine_Heaven-4.0.run

$ cd Unigine_Heaven-4.0

$ ./heaven

A menu will pop up allowing some settings

to be configured. It’s tempting to go straight

for Ultra quality and x8 anti-aliasing, but these

really will make your graphics card hot under

the collar, so choose something gentle to

begin with, and then hit the Run button. You’ll

be transported to a magical steampunk realm,

all beautifully rendered and with fps displayed

in the top-right corner. Wireframing and

tessellation can be toggled with F2 and F3,

and pushing Esc will open a menu. To begin

the actual benchmark, which measures min

and max fps over all scenes, press F9. Once it

completes you can save an HTML report of

the proceedings.

Phoronix Test Suite

The pinnacle of Linux Benchmarking, however,

is Michael Larabel’s Phoronix Test Suite (PTS).

It’s extendable, automatable, reproducible and

open source. If there’s something you want it

to test that it doesn’t test already, then you

can write your own tests in XML and share

them with the community. Results can be

uploaded to the openbenchmarking.org

website so you can compare your results with

other systems on a prettily rendered graph(s).

Installing PTS on Debian-based systems is

easy, just grab the .deb file, following the

download links on http://phoronix-test-suite.com.
It can be installed with Gdebi or a
good old-fashioned dpkg -i phoronix-test-suite_7.0.1_all.deb . There are some PHP

dependencies, so if dpkg complains these


One of the oldest CPU benchmarks is the

venerable LINPACK. This was originally written

in the ’70s and is a collection of taxing Fortran

programs for performing linear algebra –

operations on vectors and matrices. This was

written to test the supercomputers of the era,

and it has now been mostly superseded by

LAPACK, which makes better use of native

vector operations and other voodoo in modern

architectures. Nonetheless, LINPACK still

remains vaguely relevant since a great deal of

supercomputer time today is still spent

inverting matrices. It’s not really a standard any

more; there are several different versions you

can get from www.netlib.org/benchmark.

Netlib is a repository of mathematically themed

papers and programs and well worth

investigating if you’re of a scientific

programming bent. The most modern LINPACK

implementation is the one called hpl (High

Performance LINPACK) on Netlib. It’s the one

still used for the TOP-500 supercomputer

listing, but it requires extra support libraries.

Here’s how to compile a simpler C-based

LINPACK and measure how many FLoating

point Operations per Second (FLOPS) your

machine can manage.

$ wget http://www.netlib.org/benchmark/linpackc.new -O linpack.c

$ gcc linpack.c -o linpack -lm

Using the default 200x200 array, our

humble machine managed around

700MFLOPS. However, gcc is a clever

creature, and using some extra compiler

optimisations, for example:

$ gcc linpack.c -o linpack -lm -O3


and rerunning the benchmark saw this leap to

around 4.5GFLOPS.

There are many other optimisations one

can pass to gcc such as -ffast-math (which

might also be wrong-math in certain

situations), and Gentoo users like them so

much that they devote hours to compiling

every package on their system with them.


June 2017 LXF224 47


can be installed with apt install -f . There are a

number of pre-packaged tests that can speed

test pretty much everything you can think of in

a largely reliable and robust manner. This

includes classic Linux stuff like compiling a

kernel (which depends both on disk I/O and

processor power), synthetic benchmarks like

the Unigine family mentioned before and

games – both open source ones and those

available on Steam.

Suppose we want to perform the

aforementioned kernel compilation

benchmark. That’s just a matter of:

$ phoronix-test-suite benchmark build-linux-kernel

PTS will grab the kernel sources (4.9 at

present), install any needed dependencies

(gcc, make, etc, it can do this for a variety of

distros) and ask you if you want to upload

your results. Then it will dutifully compile the

kernel three times (or more if there’s

significant discrepancy between timings) and

report an average. A humble LXF staffer’s

machine takes about five minutes, but the

fancy Ryzen machine we played with in

LXF223 managed it in a mere 78s. There are

many tests available (just run phoronix-test-suite
list-available-tests ), but some of them

haven’t been updated or otherwise don’t

work. For CPU benchmarking fftw (the Fastest

Fourier Transform in the West) is a good one,

vital for digital signal processing. John the

Ripper (password cracking), encode-flac

(audio encoding) and c-ray (raytracing) are

also good. For measuring disk I/O we’d

recommend IOzone or Dbench. PTS also

supports testing a number of games, both

FOSS titles (eg, Xonotic, Supertuxkart and

Open Arena) and proprietary ones from your

Steam library (BioShock Infinite, Mad Max,

Civilization VI and many more). PTS seems to

get confused with any of the Steam-based

benchmarks if Steam isn’t already running in

the background, so start it first and then run

PTS from the command line.

The reproducibility of PTS’s benchmarks

make them useful for comparing your system

to others. For example, to compare your

awesome rig to this writer’s decrepit and
dust-covered device in the raytracing,
audio-encoding, public-key and PHPBench
quadrathlon,

just run the following:

$ phoronix-test-suite benchmark LXFBENCHM51
Or to just view the results, visit http://openbenchmarking.org. Something went awry when

we added the PHPBench results, and PTS

decided that we were using a different system.

We most assuredly were not, so compare your
results to ours, and feel free to write to our
superiors and tell them that we need faster
machines. Lots of them.

Glxgears is not very useful as a benchmark any more, but it’s good for testing the Gallium HUD with appropriate hardware and drivers.

Believe it or not, we’ve really only

scratched the surface of what can be

benchmarked and how it can be done reliably.

It’s worth looking into creating your own tests

and suites in PTS, but there just isn’t room to

write about it here. It’s also worth looking at

programs such as hardinfo.

For basic stress-testing, just running CPU-intensive
benchmarks over an extended
period of time does the job. The

Great Internet Mersenne Prime search is still

going strong (making it the longest running

distributed computing program in history)

and the associated Prime95 software

(available under a free licence) has a non-participation

mode which is ideal for testing

your system’s stamina. LXF

The Gallium HUD can be made

to work with Steam as well.

We don’t really know what ps

invocations are either.






The best new open source

software on the planet


MATE, MtPaint, Meteo-Qt, NTFS-3G, Guetzli, LanguageTool,
Webenginepart, GNU Nano, Classifier, Man vs Olives, Tank Island

Alexander Tolstoy

dishes up another big pot of steaming

open source soup, swimming in a rich

and fatty freedom-heavy sauce that

will keep you going for the month.

Desktop environment


MATE
Version: 1.18 Web: https://mate-desktop.org

The world of Linux desktop

environments has been a matter

of passionate discussion and

fierce online battle for decades, yet

there is one thing that didn’t change

dramatically over the years. We are

alluding, of course, to MATE, a

successor to the once-famous and

widely used GNOME 2 desktop.

If we cast an eye back a few years,

we see GNOME 2 as a standard

desktop for the first Fedora Core

releases, Novell Enterprise Linux,

OpenSolaris and more, with a focus on

enterprise-class Unix-based desktops.

The curious bit is that MATE looks and

behaves exactly the same as GNOME 2,

offering pleasant updates and support

that bring it on par with other modern

desktops. It may be less demanding but

it is no less functional and provides a

good choice for those who want their

computer hardware to do some work

rather than fancy desktop effects, while

avoiding the slightly older and cut down

feel of the likes of LXDE.

Exploring the Mate interface...

Application menu
MATE offers a classic applications menu with a hierarchical structure.

Desktop with icons
Home for some default icons and a place where you can drop your files.

File manager
Caja is what used to be Nautilus a few years ago. It doesn’t lack vital features!

Bottom panel
This second panel is used to switch between tasks and virtual desktops.

System tray
Calendar, volume control and notifications from running apps reside here.

MATE has a control centre where all noteworthy settings gather to plot against you.

“In this release MATE got rid of the historic (yet widely used) GTK2 library.”

Mate 1.18 was released after nearly
half a year of hard work since the 1.16

version, and it features quite a lot

of changes that are worth

highlighting. Finally, in this

release the MATE developers got

rid of the historic (yet still widely

used) GTK2 library and switched

entirely to GTK3. This means not only

compatibility with the latest fancy GTK

themes, but also easier-to-maintain

code going forward.

The file manager app, Caja, comes

with a copy queue and pausing and also

with new notifications for safe

unmounting of external drives. The Atril

document viewer handles PDF files in a

more lively fashion than ever, the MATE

calculator is again in the default

desktop offering and the Engrampa

archive manager now opens ZIP

archives that pretend to be custom

formats, such as WAR and EAR.

This new MATE release has of

course received a multitude of smaller

changes here and there, and you really

need to start using this desktop in order

to benefit from the latest improvements

they’ve made. Many new features

involve a more intensive use of extra

mouse buttons and scrolling, so you

can be sure that MATE devs do care

about your productivity.



Painting software


MtPaint
Version: 3.50 Web: https://github.com/wjaguar

We have another addition to

your collection of extremely

lightweight productivity

applications. Last time (see LXF223)

we admired the great AzPainter drawing

program, and in case you found its

interface too entangling, here is a

simpler candidate. Meet MTPaint, a

personal graphic editor written by Mark

Tyler back in the days of GTK 1 and

initially optimised for decent

performance on machines that are a

hundred times feebler than your

smartphone of today.

We’re not talking about ancient

abandonware, though. MTPaint has

been continuously developed up to

version 3.40 (released in 2011), but

since then only a few development
versions have appeared. The current
MTPaint 3.50 release

brings an impressive number of

enhancements and new features, and,

by the way, it can be safely compiled

against GTK2. So you can now enjoy
a scripting console (Image > Script),

support for multiple threads when

rendering images (good for multicore

CPUs), optional gamma correction for

painting, much better text tools,

multiple clone tool improvements, a

new file format (PMM) and a whole lot

more. In the meantime, MTPaint is still a

classic graphic application optimised

for manual drawing and working with

indexed palettes. It requires as little

resources as, say, Microsoft Paint from

early Windows versions, but in return

gives you more advanced tools.

“For many common tasks MTPaint is able to replace GIMP”

Turn an old machine into a mighty graphic station with MTPaint!

MTPaint supports layers, transparency,
selections, up to 1,000 undo steps and
zooming in at up to 8000% (spying our pay rises–Ed). And even that is not the

end of MTPaint’s feature-list: how about

exporting your artwork to ASCII or

turning a multi-layered file into a GIF

animation with a few mouse clicks?

Alternatively, you can set your own

custom keyboard shortcuts for any

menu item of the editor, and become a

keyboard ninja of bitmaps.

For many common tasks MTPaint is

able to replace GIMP as the main

creative tool, so why not give this

mighty little editor a try?

Weather application


Meteo-Qt
Version: 0.9.5 Web: http://bit.ly/2ppd0Am

For years we’ve been collecting

handy desktop applications that

can help you build a customtailored,

lightweight desktop that would

do just what you need, without tons of

services and backends eating your CPU

resources. The great Meteo-Qt is one of

these, and hardly needs further

introduction. However, there’s an

endless line of weather apps out there,

so here’s why you should use this one.

Meteo-Qt is a compact application

that sits in the system tray and shows

detailed weather information when you

click the icon. The current data as well

as the forecast are fetched from the

OpenWeatherMap website, so you get

the temperature, wind speed,

cloudiness, pressure, humidity together

with sunrise and sunset times for

almost any location on Earth. Meteo-

Qt uses Python and Qt5 components,

but unlike various KDE plasmoids with

similar functionality, it doesn’t use QML

and thus fits in well on any desktop. To

make Meteo-Qt run on your own

desktop, make sure you have Python 3

bindings with Qt5, LXML and SIP (install

the respective packages via your

package manager). Once the runtime

dependencies stated above are met,

the application can be run from its

sources by the following command:

$ python3 /path/to/meteo-qt/meteo_qt/
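On Debian-based distros the dependencies above map to packages along these lines. The names are our best guess at the Debian equivalents, so check your own distro's package search; echo keeps this a dry run:

```shell
# Our guess at the Debian package names for Meteo-Qt's dependencies
# (PyQt5, LXML and SIP for Python 3); echo keeps this a dry run.
pkgs="python3-pyqt5 python3-lxml python3-sip"
echo sudo apt install $pkgs
```

Drop the echo once you've confirmed the names match your distro.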


When the application loads, right-click

its icon in the system tray and go

to the Settings section. From there you

can change your location, units,

connection details, tray icon look and

behaviour, font size, auto start mode
and other useful settings. Meteo-Qt
does want you to register at
OpenWeatherMap and get your
personal key in order to pair the
Meteo-Qt desktop part with your online
account. It takes a couple of minutes
and doesn’t bother you at any time
afterwards. The icon updates weather
statistics once every 30 minutes, which,
again, can be altered to your liking.

“It doesn’t use QML and thus fits in well on any desktop.”

Meteo-Qt brings weather forecasts to your fingertips.



Filesystem driver


NTFS-3G
Version: 2017.3.23 Web: http://bit.ly/2o3pq0q

Even though the market share of

Microsoft Windows is (very)

slowly declining, in the mid-term

we still use various tweaks to make

Linux and Windows interoperability

easier. Tuxera is a successful vendor of

filesystem drivers for many platforms,

with the most renowned NTFS-3G

product as its flagship.

While the company makes money

from selling commercial software

offerings, the NTFS-3G is a fully open

source technology. As the name

suggests, the driver enables handling of

NTFS partitions on any non-Windows

platform that can work with user-level

filesystems, also known as FUSE

( $ sudo modprobe fuse ). Along with

Linux, the list includes many other

OSes, from macOS to Haiku and

OpenIndiana. The NTFS-3G bundle

includes the basic part for mounting

and the ntfsprogs package for

manipulating partitions. You can read,

write, resize NTFS partitions without

losing data, and as the driver
has been considered enterprise-ready
for some time already, using

NTFS-3G is quite safe. To mount an

NTFS partition, use the following command:

$ mount -t ntfs-3g /dev/sdb1 /mnt/


Whereas scanning and fixing

possible errors goes like this:

$ ntfsfix /dev/sdbX

Do remember to change sdbX to

the real name of your NTFS partition

before proceeding.

The new release features a few
enhancements, such as the ability to
mount NTFS volumes in read-only
mode in case Windows has put them into
the so-called ‘hibernate’ state, better
extended attributes handling, better
UTF-16 support and more. Generally, the
driver has gained extra fixes and
performance enhancements that improve writing
to and reading from an NTFS volume from
Linux or another FUSE-compatible OS.

“Fixes and performance enhancements that you will surely notice.”

A safe and reliable way to work with Windows partitions from within Linux.
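If you just want to inspect a volume that Windows left hibernated, mounting it read-only sidesteps the problem entirely. A sketch — the device name and mount point are placeholders, and echo keeps it a dry run:

```shell
# Read-only mount for a hibernated NTFS volume; replace the
# placeholders with your real device and mount point, then drop echo.
DEV=/dev/sdb1
MNT=/mnt/windows
echo sudo mount -t ntfs-3g -o ro "$DEV" "$MNT"
```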

Version 2017.3.23 is the first major

Tuxera NTFS-3G update in nearly one

year so it will be worth the upgrade if

you rely upon NTFS file-system support

under Linux, eg, having Windows on a

separate partition or need to work with

external drives formatted with NTFS.

Image encoder


Guetzli
Version: GIT Web: https://github.com/google/guetzli

We proudly continue our quest

into the world of alternative

media encoders. So far, we

reviewed FLIF (LXF205) and Lepton

(LXF215) for squeezing extra bits in

graphics, and Opus (LXF209) for a

perfect audio experience. Guetzli falls
into the graphics camp – it is a new JPEG

encoder from Google that aims to

further reduce the size of JPEG files

when compared to libjpeg, the library

that can be found in almost any Linux

distribution. Google reports that Guetzli

can save you up to 30% of disk space

without compromising quality (even

though JPEG is already a lossy format).

We decided to put Guetzli to the test

and see with our own eyes if it was a

worthy addition to our overcrowded

/usr/bin directory.

Converting a PNG screenshot to

JPEG revealed that Guetzli with default

settings doesn’t have any advantages

over libjpeg in terms of file size. Guetzli

uses the 95% quality setting by default,

but you can lower it down to 84% – the

minimum possible value for Guetzli.

This compression ratio is still very good

for JPEG, so we used it and finally got

the promised figures, the 260KB file

versus the 206KB one in our case.
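A one-time pass over a folder of PNGs might look like the sketch below. The flag name follows Guetzli's README; the scratch directory and echo keep it a harmless dry run, so point the loop at your real pictures and drop the echo when ready:

```shell
# Dry-run sketch of batch-converting PNGs with Guetzli at the
# minimum quality setting of 84. The scratch dir stands in for
# a real photo folder; delete 'echo' to actually convert.
dir=$(mktemp -d)
touch "$dir/shot.png"
for f in "$dir"/*.png; do
    echo guetzli --quality 84 "$f" "${f%.png}.jpg"
done
```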

Guetzli has some other limitations: it

only produces non-progressive JPEGs

and requires a lot of system resources.

The command line tool supports a

memory allocation limit with a huge

default value of 6000MB. It took a

couple of minutes (!) for the above

screenshot to get packed to JPEG with

Guetzli, whereas other libraries, like
libjpeg or MozJPEG, are much faster.
However, Guetzli was made to save you
some free disk space.

“It’s a perfect tool for a one-time optimisation of your image library.”

Guetzli (top) produces fewer artefacts with a smaller file.

It’s unlikely that you will want to use

Guetzli for your on-the-fly encodings,

but it is a perfect tool for a one-time

optimisation of your image library.

Guetzli carefully retains compatibility

with any third-party JPEG decoders, so

you will still be able to view your files as

you do now. The only resource that

you’ll need is time and CPU horsepower

(beware the electricity bill!).



Spell and grammar checker


LanguageTool
Version: 3.7 Web: https://languagetool.org

When switching from Windows

to Linux, one of the most

demanded types of software

is an office suite. Frankly, there is an

abundance of choice, both open source

and proprietary. But the story doesn’t

end with proper parsing of DOCX and

XLSX files in Linux, because commonly

Windows users also expect the same

quality of spell-checking as they are

used to with Microsoft Word, which

contains some proprietary language

tools from third parties. While regular

spell-checking in suites like LibreOffice

works very well thanks to the Hunspell

library, we could wish for more –

specifically for automatic grammar,

punctuation and style checking.

LanguageTool is an extension that

implements all of the above capabilities for

more than 20 languages, mostly

European. Native English speakers will

also find it useful for correcting

common misspellings and style errors.

LanguageTool comes as an .OXT file

for LibreOffice and Apache OpenOffice

and needs to be installed using the

standard extension manager (Tools >

Extension Manager) from any of the

suite’s applications. LanguageTool is

based on Java 8 and will not work with

other office apps in Linux right away.

However, if for any reason you don’t use

LibreOffice or OpenOffice, you can go

with alternative options: LanguageTool

offers Chrome and Firefox browser

plug-ins, a Google Docs plug-in, a crossplatform

standalone desktop client and

finally an online text input field on the

project’s website. When used in an

office suite, LanguageTool supersedes

the default spell-checking engine and
offers its own dictionaries together with
the grammar check feature. You can tell
it’s working by the blue underlines that
emerge below missed commas, wrong
prepositions, duplicate words and other
errors. Of course, it won’t make your
text shine as a professional reviewer
could (but where’s the fun in that?!–Ed),
but at least you’ll escape the most
common mistakes. Each LanguageTool
release brings more grammar rules and
enriches existing dictionaries with new
terms and phrases, and the 3.7 version
is no exception. Be sure to upgrade if
you use Writer heavily!

“It provides automatic grammar, punctuation and style checking.”

You don’t have to be running LibreOffice to check your text with LanguageTool.

Browser engine wrapper


Webenginepart
Version: 17.04 Web: http://bit.ly/2p9Aq0d

By the time these lines reach you,

the KDE Applications 17.04

release will be out, bringing

some major new functionality to the

standard KDE Plasma web browser

known as Konqueror.

It’s no secret that in recent years

Konqueror hasn’t attracted much

attention either from developers or

from users, and so lagged behind

Chrome and Firefox. Technically the

reason was the outdated HTML

rendering engines. Konqueror used to

have two options: the very old KHTML,

which was forked by Apple in 2002 to

become the base for Safari, and the

newer and better Webkit, which still

wasn’t good enough for modern web

surfing. Webenginepart brings support

for QtWebengine for KDE applications,

and Konqueror is the one that benefits

the most from it. Webenginepart

supersedes KWebkitpart and lets

Konqueror use a modern rendering

engine while retaining all of its advanced

features. Currently it connects the

browser with QtWebengine 5.8, which

contains the slightly modified

Chromium 53.0.2785.148 engine inside.

Not the very latest version, but still very

good. You can select the new engine in

the Settings > Configure Konqueror >

General menu and instantly enjoy great

speed and compatibility in your now

Chromium-powered Konqueror

browser. Noticeable changes include

smooth online video playback, the

ability to gracefully handle JavaScript-heavy

web pages and much better

performance working with many tabs.

“Noticeable changes

include smooth online

video playback.”

It’s a major improvement for Konqueror.

Webenginepart has been around for

some time already, but the earlier

implementations in late 2016 were not

stable enough and made the browser

crash too often. That contrasted with

web browsers that used QtWebengine

directly, such as Qupzilla. But over the
last few months many annoying bugs have

finally been fixed.

Again, Webenginepart is just a

mediator, not a web browser engine

itself, so it is future QtWebengine

updates that will actually improve your

browsing experience with Konqueror.



HotGames Entertainment apps

Arcade game

Man vs Olives

Version: f8143a Web: http://bit.ly/2p1Jww4k

Don’t ask why (and how)
people should confront
olives; take this
for granted, as if you’ve been

watching a blockbuster based on a

controversial work of fiction.

You steer a man around a scene

with three-storey platforms and

water below. Your character can

move sideways, jump and even go

down to the water in order to collect

falling coins. There are also jewels

that help you gain even higher scores.

After collecting all the jewels on one

level, you get a key that takes you to

the next. But it’s not only coins that

fall from above: angry green olives on

the first level and terrifying black

olives later on keep on falling around

you. Strangely, they don’t hurt you

when you touch or jump over them,

and you can even bump olives to make

them move in the opposite direction,

which makes them look like curling

stones. But if an olive hits you from

above, you’re dead.

There’s also another way to

embrace death: go to the lowest level

and wait for the pink swine to run over you.

If the above lines sound weird, you

have read them correctly. The whole

Man vs Olives experience is out of this

world. But it is also fun and oddly

entertaining with its cosy graphics and

a sense of humour in every detail. The

game comes from Finland, and it
reminded us of Oilwar from
LXF204. Both games are
incredibly simple, yet not primitive,
and very addictive. So, if you need
to spend half an hour playing
something really fun, Man vs
Olives is a strong recommendation.
The game runs fine locally after
you download the tarball from
GitHub, or you can play the online
version at https://softcubicle.

“Don’t ask why (and how) people should confront olives.”

Run, jump and hunt for the coins, while escaping green and black olives. Yes, really.


Shooter game

Tank Island

Version: GIT Web: http://bit.ly/2p01IGF

Although we’ll be shooting

enemy tanks in a minute or

so, some important

introductory words should be

provided first. Tank Island is a

working example of a game

developed using the Ruby language

and described in a detailed step-bystep

manner in the Developing

Games With Ruby book, both written

by Tomas Varaneckas, a passionate

Ruby enthusiast. It’s a top-down

shooter made with Gosu, a 2D game

development library for Ruby and

C++, so the whole thing is not only

about gaming, but programming as

well. If you enjoy Tank Island, you can

try to make a similar game yourself.

There couldn’t be better
documentation for the game than this
thick manual for Ruby beginners.

Back to the game, however.

You control a tank on a firing range

within the island, which is covered with

grass, woods and a sandy coastline.

The landscape is quite basic and rough

– each tile is just a square of given type,

without any soft edges. But the

graphics are more varied thanks to the

fine details of various objects on the

field, such as fuel barrels, hangars and

wild bushes. The gameplay consists of

driving your tank with WASD keys and

aiming/shooting with your mouse at

any other tanks. There are a total of

eight players on the field: you and seven

AI bots. At the beginning, before you

gain certain survival skills, the game
cycles you through endless respawns
on random sites of the island, with
enemy bots trying to blow you up as
soon as you are within their reach.

“Enemy bots will try to blow you up as soon as you are within reach.”

Approach carefully and fire for effect before the enemy tank does the same to you.

After a while you will likely develop a

strategy of careful approach and

shooting in advance – this is a key to

scoring more points than the bots.

There are also useful power-ups on

the field that can replenish your

health, improve speed and extend

your firing distance. Surviving in tank

duels is both challenging and fun.



Text editor

GNU Nano

Version: 2.8 Web: https://nano-editor.org

We believe that regular Linux

users should be well-versed

in using a command line

editor, regardless of what that editor is.

Historically the main console editors

for UNIX-based systems were Vi and

Emacs; both are still widely used by

experienced Linux geeks, but in modern

times these two are often hated for Vi’s

terseness and the need to learn Emacs

key combinations, and that’s why many

people are happy with just using Nano.

This is a much simpler text editor with a

user-friendly interface and only the

essential features (eg. when compared

to Vi or using Emacs as an entire init

system!), which is fine if you only use

command line editors occasionally in

some specific cases, like editing the
/etc/sudoers file or fixing some

configuration file, if your system doesn’t

boot into graphical mode.

Nano doesn’t have a ‘command

mode’ and thus it is similar to classic

text editors from DOS or apps like

mcedit. The lower part of the screen

shows references to popular actions

that always stay in your sight. The

minimal knowledge required to use it is

Ctrl+O for saving your changes and

Ctrl+X for escaping back to the console.

There are many shortcuts for

navigating between lines, symbols and

words in the text, mostly bound to Alt or
Ctrl plus an arrow key, but it’s good to

know that Nano can emulate Alt and

Ctrl themselves. Alt is Esc and Ctrl is

double Esc (Esc Esc), which gives

greater flexibility in certain cases.

The new Nano 2.8 release marks the

transition to the Gnulib library, which

means a lot of changes under the hood.

“There are many shortcuts for navigating between lines, symbols and words.”

Primitive in the eyes of greybeards, very nice for the rest!

Users should notice better navigation
across very long lines that don’t fit on
screen. The Up and Down arrows

together with Pg Up and Pg Dn keys

now move you between visual lines

instead of logical lines.

Home and End keys also respect

visual lines first. When working with

very long lines, the first time you press

Home or End, you get to the start or the

end of the visual line, whereas the

second press brings you to the start/

end of the logical line. To test this

feature you may need to write a

long command or use a smaller

screen. In the latter case, Cool

Retro Term (see LXF192) would

be a perfect playground!

Files organizer


Classifier
Version: GIT Web: github.com/bhrigu123/classifier

How often do you inspect your

~/Downloads directory for old,

obsolete, duplicate or other

useless downloads? Many Linux users

don’t get around to sorting out these

files and keep on collecting everything

they download, until their $HOME

partition becomes menacingly full. It’s

fine for you perfectionists who keep

everything in ideal order, but this

Hotpick is for the rest. We’ve discovered

a Python script that classifies your files

into categories according to file types.

Classifier may be a simple and minimal

tool of a few kilobytes, but it does a vital

job. Navigate to the place that feels like a

bookcase with broken shelves, and then

launch the tool without any arguments

$ classifier .

The command will complete

immediately (because moving files

within the same partition in Linux takes

no time) and you’ll notice that your

directory now contains far fewer files

(if any) but instead has now many extra

subdirectories. We used our test

~/Downloads directory as a classic

example and found that Classifier

applied very smart and adequate

sorting. For instance, it detected RPM

and DEB packages as separate

categories, grouped all archives in one

subdirectory and also sorted pictures,

documents, music, and even .EXE files.

There is no undo feature so you may

want to copy your source directory and

run Classifier against it to imitate a dry

run. No data will be deleted anyway,
but you may need extra time to find
your files after Classifier tidies things
up. Installing the tool is extremely
simple because Classifier is included in
the online Python modules repository,
so it goes just like this:
$ sudo pip install classifier
As you’d expect, it is possible to specify
input and output directories explicitly
and define certain file types by hand –
see the $ classifier --help output for
full details. LXF

“Classifier applied very smart and adequate sorting.”

A perfect cleaning with really no effort is possible!
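Since there's no undo, one way to rehearse is on a throwaway copy. A minimal sketch — the file names are illustrative, and the classifier call itself is commented out until you're happy:

```shell
# Rehearse on a scratch copy so the real ~/Downloads stays untouched.
src=$(mktemp -d)                 # stands in for ~/Downloads
touch "$src/photo.jpg" "$src/report.pdf"
work=$(mktemp -d)
cp -r "$src/." "$work/"
# classifier "$work"             # run Classifier on the copy only
ls "$work"
```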



Pi user

Giving you your fill of delicious Raspberry Pi news, reviews and tutorials


is our regular Pie

expert; he also

writes about the

Raspberry Pi.


Raspberry Pi storms past

Commodore 64 sales

The Pi is now the third best-selling computer platform ever.

Ten dollars can buy you a lot of computer: now it even comes with Wi-Fi and Bluetooth. Let that sink in for a moment. For such a low amount of money we can add the Internet of Things (IoT) to our projects and embed a Raspberry Pi into even the smallest of projects. When the Raspberry Pi first burst onto the scene in 2012, a development board used to prototype projects would cost around $150. But the Raspberry Pi disrupted the market and reduced that price to $35, and this price has remained synonymous with the Pi.

The Model A reduced the price further, but it wasn't until the Pi Zero in late 2015 that we saw another disruption to the market. $5 for a computer is ludicrous. We routinely pay that for a coffee. Yet for this $5 price there was something missing, and that was wireless connectivity. We could use a USB dongle, but then we had to use a hub, and then the hub required external power, so we then had a $5 computer with $15 of accessories just to enable us to use our Wi-Fi, keyboard and mouse. So while the Pi Zero was revolutionary for the price, it did have caveats.

For $10 we can buy the Pi Zero W, which offers built-in Wi-Fi and Bluetooth 4.1. But why is this important? Well, now I don't need a dongle for my Wi-Fi, and I can use my Bluetooth keyboard and mouse, so now that solitary micro USB OTG port can be used with other devices. In fact the Pi Zero W wireless connectivity is so good that I just SSH into the Pi Zero W and run it as a headless device. And all of this for just $10. What a bargain.

The Pi has sold more than 12.5 million boards, making this platform the third best-selling of all time, the other two being the Apple Mac and the IBM PC. So despite being third, you can safely say it's in very good company. "The Commodore 64 had, until recently, the distinction of being the third most popular general purpose computing platform," Eben Upton told a crowd at the Pi's fifth birthday party. "That's what I'm here to celebrate," he said. "We are now the third most popular general purpose computing platform after the Mac and PC."

Some people point out that if you're going to include the whole of the Pi range then surely you'd do the same with the Commodore 16, 64 and 128, but it should only take another year to surpass even these extended figures. So congratulations to the Pi community on another landmark!

Still used by some today, the C64 was an awesome home computer.

Lego Mac Mini: E-ink and Docker inside
Jannis Hermanns has created a tiny Pi-powered replica of the Macintosh Classic. Inside there's a Pi Zero running Docker and a 2.7-inch e-paper display. Docker images are deployed through resin.io, and the housing is made out of Lego, which was designed using the Lego Digital Designer webapp.
Image credit: J. Hermanns
A Mac we don't mind having in our mag.

Apt-get Julia: For all your numerical pursuits
Scientific programming language Julia is now available in Raspbian. Source packages and binaries have been available via the JuliaBerry project for some time, but now they can be installed natively using Apt. JuliaBerry have Julia versions of the Sense HAT, GPIO and Minecraft APIs available on their GitHub.
Pie charts in Julia. Insert pi r^2 joke here.





Single board computer Reviews

Asus Tinker Board

Is it a board? Is it a Pi? No, it’s another contender for the Raspberry Pi’s

crown. Les Pounder sees if it’s any good.

In brief...
A Raspberry Pi 3-sized board that offers a faster CPU, more memory and Gigabit Ethernet. It's targeted at the maker market, specifically those who require more processing power than is currently available. It has plenty of that, roughly twice the power of the Pi 3, but lacks the mature software to get the best from the board.

Kodi media player
It provides plenty of power for Kodi and can display 1080p video. All from a small board.

The Asus Tinker Board arrived on the scene with very little fanfare, and seemed to catch everyone by surprise. This was reflected by the lack of software when the board was released: for the first few days there was no operating system publicly available to run it. But let's put that behind us and take a look.

Powered as it is by an RK3288 System on a Chip, featuring a quad-core ARM Cortex A17 running at 1.8GHz and 2GB of LPDDR3 RAM, we can instantly assume that this board is meant to be a 'Pi Killer', and computationally it is. Running the sysbench prime number test for a single core, it took only two minutes and two seconds to compute all the prime numbers up to 10,000, versus the Pi 3 time of three minutes two seconds. A full minute quicker! We repeated the test utilising all four cores and the Tinker Board completed it in 31.34 seconds and the Pi 3 in 45.7. So we can see there is plenty of power in the Tinker Board's CPU.
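Roughly speaking, sysbench's CPU test finds every prime up to a limit by trial division. A Python equivalent of that workload, our own sketch rather than sysbench's actual (and much faster) C implementation, looks like this:

```python
import time

def count_primes(limit):
    """Naive trial-division prime count, similar in spirit to sysbench's CPU test."""
    primes = 0
    for n in range(2, limit + 1):
        # n is prime if no divisor up to sqrt(n) divides it evenly
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            primes += 1
    return primes

start = time.perf_counter()
count_primes(10_000)
print(f"Counted primes up to 10,000 in {time.perf_counter() - start:.3f}s")
```

Timings from this sketch aren't comparable to sysbench's numbers, but running it on two boards gives the same kind of relative comparison the review describes.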

The board provides four USB 2.0 ports along with HDMI, micro USB power, and a 40-pin GPIO, which is not fully compatible with boards produced for the Raspberry Pi, but can be used with electronic components (LEDs, buttons, etc) to build your own projects. The software to control the GPIO has to be downloaded separately; quite why it can't be included ready for use we don't know. It is called ASUS.GPIO and, you've guessed it, it is a fork of the RPi.GPIO library which has powered thousands of projects. It works in the same manner, but you won't be able to connect any SPI/I2C devices just yet as the software isn't ready.

Features at a glance
External antenna
The Wi-Fi antenna can be disconnected, enabling the user to add an external one for increased range.
The Tinker Board shares the same dimensions as the Raspberry Pi 3, and even fits inside the official case.

Networking comes in the form of the built-in 802.11b/g/n Wi-Fi and Bluetooth 4.0, which uses a PCB antenna for reception, but you can replace the antenna with an externally mounted option to boost your signal. There is also "Gigabit" Ethernet, but when we tested the bandwidth using iperf we only managed to record 35.3Mbits/s. However this is still far higher than the Pi 3, which only manages 11Mbits/s as it comes via a USB 2 interface. The Tinker Board does not share the Ethernet bus with USB, enabling the higher bandwidth, but still short of true Gigabit speeds.

The Asus Tinker Board runs a version of a Debian-based distribution, called TinkerOS. It is lightweight and works really well as a desktop, giving the user access to a traditional menu and widgets to control wireless connectivity. You will find the Chromium web browser present, and it did an admirable job with everything we threw at it – except for YouTube. This board is able to play video at 1080p but YouTube videos ran poorly, even after installing a patch from Asus. Our test of the Star Wars Rogue One trailer at 1080p crawled along in windowed and fullscreen mode.

We installed Kodi on our test unit and we were able to watch HD movies and stream HD content to our device. It worked flawlessly, which means that the issues observed are only with streaming content via the web browser, and can be fixed with a future software update.

The Asus Tinker Board is undisputedly a powerful platform for makers, but no matter what power it may have, it has not managed to claim the crown from the Raspberry Pi, which offers greater documentation and support for those wanting to learn more. This is a board for experienced hardware hackers only. LXF

Asus Tinker Board
Developer: Asus
Web: http://bit.ly/2oZ6m6Q
Price: £55
Features 7/10
Performance 9/10
Ease of use 7/10
Value 7/10
Plenty of power, and a capable board for makers, but the software and the community around it need to develop.
Rating 7/10



Raspberry Pi Analogue signals

GPIO: Work with

analogue signals

Les Pounder shows us how we can use analogue sensors and inputs

with our Raspberry Pi to control Neopixels.



Les Pounder

Les works with the

Raspberry Pi

Foundation as part

of their Picademy

training, and

travels the UK

helping schools

and teachers do

more with tech.

He blogs at





All models of Raspberry Pi come with the GPIO, the pins that enable the use of electronic components and add-on boards. But no model of Raspberry Pi can interact with analogue components, as the Pi does not come with an Analogue to Digital Converter (ADC). Step forward the MCP3008 ADC. In this tutorial we shall use it with three potentiometers to control a Neopixel ring.

Hardware setup
Please refer to the wiring diagram (see You Will Need, below) for this tutorial; there are quite a few connections to be made. We start the hardware build by inserting the MCP3008 into our breadboard so that it is over the central channel. The notch on the MCP3008 should be facing the top of the breadboard. The MCP3008 has multiple connections to the Raspberry Pi, for power and for a hardware connection to the SPI bus. These connections are made on the one side of the chip (pins 9 to 16 according to the datasheet). Pins 1 to 8 are reserved for the eight channels that are available for us to use with analogue devices.

Potentiometers, sometimes referred to as "pots", are three-pinned variable resistors. By turning these pots we can vary the voltage output from the centre pin. Potentiometers' three pins are voltage, output and ground. They can come in many forms, but we're using single-turn potentiometers similar to those used as volume controls in amplifiers. Others include linear potentiometers, such as those used on a mixing desk, and there are also "trimpots", used on circuit boards where infrequent adjustments are needed.

Neopixels are a brand name, created by Adafruit, for the WS2811/12 series LEDs. Each pixel in a series can be individually controlled: its colour, brightness and whether it is on or off. Neopixels need precise timing to control them, and to do that we need to use a GPIO pin on our Pi that can be used with Pulse Width Modulation (PWM). From experience we know that pin 18 can provide this. To power the Neopixels we can also use the 3V and GND pins, which have been broken out to the breadboard as per the diagram.

Software setup

No matter what version of Raspberry Pi you are using, even

the Pi Zero, our first task is to turn off the audio output as this

will interfere with our Neopixels. To do this we need to alter

the config.txt file in the boot directory, so open a terminal

and type the following.

$ sudo nano /boot/config.txt

At the end of the file, on a new blank line, add the following

two lines to comment the change for later reference, and to

turn off the 3.5mm audio jack on your Raspberry Pi.

#For Neopixels. Forces audio via HDMI and turns off 3.5mm



To save and exit the editor, press Ctrl + O and then Enter,

then press Ctrl + X. Reboot your Pi to enact the change.

Once rebooted and back at the desktop, we can install the Python library that will enable us to control the Neopixels. In a terminal type the following:
$ sudo pip3 install rpi_ws281x

You’ll need:

Any model of Raspberry Pi

The latest Raspbian Pixel Release

An internet connection for your Pi

An MCP3008 ADC

Any Neopixels, we used a 12 pixel LED ring

from Adafruit

Soldering kit

A breadboard

3x 10k potentiometers

6x Male to female jumper wires

13x Male to male jumper wires

All the code for this project and a diagram can

be found at http://bit.ly/2p5oN6A

To see the project in action take a look

at the YouTube video:


This circuit is quite complex as we are connecting the analogue inputs, our potentiometers, to the breadboard's power rails and to the MCP3008 chip.


Analogue signals Raspberry Pi

Neopixels, APA102 and other LEDs

The term LED, short for Light Emitting Diode, is quite ambiguous. There is the humble LED, used to signify that a device is on and to say "hello world" when building your first circuit, but there are many others. We used Neopixels, a brand name for WS2811/12 LEDs. These super-bright LEDs run from a 3.3V or 5V power supply and use carefully timed pulses to communicate the state of the LEDs. As we learned in the tutorial, this causes issues: we have to use the GPIO pins, specifically a pin that can perform PWM, and this forced us to disable analogue audio output. But there are other LEDs that we can use instead of the Neopixel, in the form of the APA102 series. These LEDs use the SPI interface, a hardware interface that is capable of sending data much faster to the LEDs, enabling them to be used in projects such as those using Persistence of Vision. APA102 LEDs do not require any additional configuration changes, as they do not interfere with audio output. Lots of Pi-related companies are using the APA102 instead of the WS2811/12 as it removes the sticky issue of audio configuration changes and provides an easier way to use super-bright, individually controllable LEDs with the Raspberry Pi.

After a few moments the library will be installed. The MCP3008 does not need any special software installed, as we shall be using it with GPIO Zero, which has a Python class to utilise the chip.

Coding the project
We start by opening the Python 3 editor, but we need to open it with sudo to ensure that we can use the Neopixels. Open a terminal and type the following to open the editor.
$ sudo idle3 &
Once it opens, click on File > New to create a new blank file, then immediately click on File > Save and call the file analogue-inputs.py . Subsequent saves will now happen much quicker.

Our first section of Python code is our imports. First we import the MCP3008 class from GPIO Zero, enabling us to use the chip. Then we import the sleep function from the Time library; this will help us to control the pace of the project. Finally we import the Adafruit_NeoPixel class from the neopixel library, enabling control of the Neopixels.
from gpiozero import MCP3008
from time import sleep
from neopixel import Adafruit_NeoPixel

Our next block of code instructs Python as to which potentiometer is connected to which channel of our MCP3008. We also store the raw values output by the MCP3008 as the variables r, g and b. These refer to the colours that we can use with our Neopixels. The raw values are between 0.0 and 1.0.
r = MCP3008(channel=0)
g = MCP3008(channel=1)
b = MCP3008(channel=2)

We next create two variables: LEDS, which refers to the number of Neopixels in our string, and PIN, which refers to the GPIO pin used to control the Neopixels.
LEDS = 12
PIN = 18

In order to use our Neopixels we need to tell Python where they are connected, and this is done by creating the "strip" object. Then we instruct Python to begin using them.
strip = Adafruit_NeoPixel(LEDS, PIN)
strip.begin()
A while True loop is used to continually run the code contained within, and in this case the first few lines of code are variables: red, green and blue. These are used to contain the values created by turning the potentiometers, but remember, these values are between 0.0 and 1.0. In order to use them with Neopixels we need them to be within 0 and 255, 0 being off and 255 being full brightness. So we take the initial value and multiply it by 255, and this is enclosed in a round function that will round the number to the nearest integer. This gives us values that can be passed to the Neopixel functions later.
while True:
    red = round(r.value * 255)
    green = round(g.value * 255)
    blue = round(b.value * 255)
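As a quick sanity check, the scaling maths can be tried on its own, away from the hardware:

```python
def scale(reading):
    """Convert a 0.0-1.0 MCP3008 reading to the 0-255 range Neopixels expect."""
    return round(reading * 255)

print(scale(0.0))   # pot turned fully one way: 0 (off)
print(scale(1.0))   # pot turned fully the other way: 255 (full brightness)
```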

For debug purposes we also print the values to the Python shell; this helps us to identify any issues.
    print(red, green, blue)
A for loop is used to set the colour of each LED in our strip of Neopixels. We use a range that reuses the LEDS variable as the number of times the loop should iterate. Inside the loop we set the colour of each "pixel" by instructing it as to where it is in the strip, i , and then we pass the red, green and blue values that we created by turning the potentiometers. Lastly in the loop we instruct the Neopixel library to show the results. If we didn't do this, then we wouldn't see any changes.
    for i in range(LEDS):
        strip.setPixelColorRGB(i, red, green, blue)
        strip.show()
Our very last line of code is outside the for loop, but still inside the main loop. It is a simple sleep to delay the code and stop the CPU from maxing out!
    sleep(0.1)  # a short delay; adjust to taste

With the code completed, save your work and click on Run

> Run Module to start the code. Now turn the potentiometers

and watch your Neopixels change colour. You’ve just learnt

how to take analogue input values and use them to control

the output of another component. LXF

Neopixels come in many forms: here we can see two rings wired up and ready for use, and an 8-pixel stick with pins.



When using any integrated circuit (IC) chip you need to understand what each pin does. Look for the number etched into the chip, then use your favourite search engine to find the datasheet.



Wall calendar Raspberry Pi

iCal: Build a

smart calendar

With a little cunning, you can transform your Pi into the perfect date-checker.



Nate Drake is a freelance journalist who specialises in cybersecurity and retro tech.



If you've previously installed OwnCloud, this project will work perfectly with the built-in Calendar. Just make sure that everyone who needs access has the LDAP link.

One staple in films such as Back to the Future and Time Cop is that everyone has a handy digital-planner panel in their living room showing their appointments for the day. We already have calendars on our smartphones and tablets, but now, thanks to the Raspberry Pi, it's possible for you to have an economical wall-mounted calendar in your home or office too.

Given that we have just tacitly admitted we could use the calendar app on our phones instead, is this project merely a novelty, or are there any advantages to it? Well, there's always the look of the thing. A calendar mounted on a wall can certainly be more aesthetically pleasing than many mobile phone displays. Some people on the internet have gone to great lengths in this direction, such as mounting it tastefully in a wooden frame or building it into a mirror.

The main advantage, however, is that it enables you to share a calendar with other people, by putting it in a public place, such as your living room. Your family can see your own appointments, and you can make sure to schedule your commitments around theirs. In the workplace, you can use calendar views, such as Agenda in Google Calendars, to organise meetings and assign tasks to your colleagues.

Crafting your calendar
For this project, you need a Raspberry Pi with internet access. In the interests of saving on cabling and space, it's best to use the Raspberry Pi 3, which has integrated Wi-Fi. You also need to choose a monitor. One excellent option is the Official Raspberry Pi Touchscreen Display (see Choosing a Monitor, over the page) but ultimately any compatible monitor will do. This isn't a DIY tutorial, so please only attempt to mount the Pi and display if you are comfortable with using a drill and installing brackets. If the display comes with a stand, there's no reason it can't be placed on a desk or table.

If you choose to subscribe to other calendars, public holidays in other countries and other useful information can be highlighted for you.

Serving suggestion: this monitor is mounted on a wooden board, then layered with cork for a rustic feel.

This is also a good time to start measuring cable lengths, so you can be sure both the monitor and the Pi will have power wherever they're mounted.

Once your equipment is in order, you need to consider the type of calendar you wish to use. If you and your family or colleagues already have a calendar you share, you can start following the tutorial right away. If that's not the case, you may wish to create a single calendar for this purpose. If you're using Google Calendars, follow the steps at http://bit.ly/1HKeoCL to do this. For Mac users, visit http://apple.co/2nz2w1v to create a new iCloud Calendar. Outlook users can also create a calendar by visiting http://calendar.live.com.

It's not that important which calendar service you use, provided it can be displayed in Mozilla Firefox, which we're using for this project. Try to give the calendar a distinctive name, such as Smith Family Calendar, so everyone using it knows it's distinct from their personal calendar.

Importing calendars
If you do have an existing calendar, you may wish to import your personal appointments, birthdays and so on into the new one. It may not be necessary, because providers such as Google and iCloud allow multiple calendars. Events are colour-coded to show which calendar they belong to. However, if one of the people using your new calendar previously used a different platform – for example, you have

Choosing a monitor
There's no shortage of suitable small monitors to connect to your Pi. If you feel comfortable with a small amount of wiring, the Official Raspberry Pi 7-inch Touchscreen Display is the ideal size to display a calendar, as well as having a handy slot at the back to place your Pi. The screen, along with assembly instructions, is available from the Pi Hut website for £55 (https://thepihut.com). If you don't like messy wires, the Pi Hut also sells a short micro USB power cable for £2 to allow the Pi to draw power from the monitor's USB port.

The Raspberry Pi Touchscreen Display has the added advantage of enabling you to scroll through appointments with a click of a finger. If this is not important to you, or the display is out of your budget, Amazon and eBay also sell Pi-compatible displays. As the Pi has an HDMI port, any HDMI-compatible monitor will do, but some monitors come with a driver board to allow you to connect them to the Pi's own DSI port.

If you are very comfortable with electronics and want to save money, find a broken-down laptop with a working LCD. If you can remove the screen safely and buy a compatible controller board online, it can be made to work with the Pi. Visit www.instructables.com/id/ ?ALLSTEPS for some tips.

A controller board magically turns a laptop screen into a monitor (get one configured to your specific screen).


If you've already installed the Real Kiosk add-on, you won't be able to visit the Add-ons site. Open Terminal and run firefox-esr in safe mode to open Firefox with add-ons disabled.

decided you'll all use a Google Calendar and one person used an iCloud one on their iPhone – you need to import it. To import events from an iCloud Calendar into Google, first export them into an ICS file by following the steps at http://apple.co/1I0wS0p. Then import the file by following step 2 at http://bit.ly/1mMXSIA. To export a Microsoft Outlook Calendar to Google Calendar, follow the steps at http://bit.ly/2cI17lN.


Once you have a single, shared calendar, take some time to set it to a format with which you're comfortable. Most providers have the option of a daily, weekly or monthly view. Next, feel free to fine-tune the calendar's appearance. You can make changes to the iCloud Calendar – for example, to change the viewable time period – by following the instructions at http://apple.co/2oeuCm2.

Google Calendar's default look and feel is somewhat spartan. If you would like to experiment with different themes, there are a number available at http://bit.ly/2oeD7O6. You need the Stylish Firefox extension in order to install them. Visit https://mzl.la/1fe2Nwr, then click Add to Firefox to install this.

Full-screen ahead
As you'll be using a much smaller screen than you're used to, space will be at a premium, so consider installing the Real Kiosk (r-kiosk) add-on for Mozilla Firefox.

Use the VKeyBoard plugin to type with a touchscreen. You can maximise and minimise the keyboard by tapping on the yellow button to the left.

Real Kiosk does what it says on the tin: it's designed to turn your browser into the equivalent of an internet kiosk. This means the menus, toolbars and even the right-click function are disabled. The chief advantage of this is that Firefox always opens in full-screen mode, making your calendar much easier to see. This also makes sure your device can only be used as a calendar, as people trying to view other websites are bounced back.

If you do need to close down Firefox for any reason, you can do this by connecting a keyboard, holding down the Alt key, then pressing F4.

Editing your calendars
Reading this project so far, it would seem that viewing the calendar in the web browser is passive. However, if you have a central calendar on your wall, wouldn't it be ideal to let people add and edit appointments as well? If you are using the official Raspberry Pi Touchscreen Display, tapping anywhere with your finger simulates moving the mouse and left-clicking in that place. You can use this to edit the time of events and even create new ones.

Problems may arise when you want to edit the text of events or create names for new ones. Naturally, you could connect a small wireless keyboard and leave it near the wall-mounted calendar in case data needs to be entered. A much less clumsy solution, however, would be to have the keyboard built into the browser itself. The Mozilla Firefox extension VKeyBoard is designed for kiosk browsers, and pops up when clicked to allow users to enter text. Simply visit https://mzl.la/2njGePN inside the browser and click Add to Firefox to install. If you have already installed the r-kiosk add-on and can't change your web page, restart Firefox in safe mode, as outlined above.

Sharing the dates
If you want to use any device besides the Raspberry Pi to add or change appointments in Google Calendar, you either need to sign into your Google or iCloud account on that device, or share your calendar with others. To share your Google Calendar, follow the steps at http://bit.ly/2nzfqwG. You can send a link to only certain email addresses or make the calendar viewable to anyone with the link. You can do the same for iCloud Calendars by following the steps at http://apple.co/2bfWHk8. If you use Outlook 2010, it's also possible to publish a calendar to Outlook.com by visiting http://bit.ly/2oCrLzn

and following the section entitled Share a Calendar by Publishing it Online.

Once your calendar has been shared online, those people who wish to edit it will need to be able to access it from their own devices. For anyone with a computer, this is a simple matter of visiting the link as you would on the Raspberry Pi, by using the browser.

It's also possible to view and edit the calendar on mobile phones. If the shared calendar is with Google, Android users can access it directly from their own calendar app, even if they have a different Google account, by following the instructions at http://bit.ly/2nP8vBB. There is an official Outlook app for Android, which allows for easy viewing and editing of Outlook calendars. Sadly, iCloud Calendars aren't so easy to make friends with, but there are a number of third-party apps, such as SmoothSync, in the Google Play store, which enable you to synchronise between calendars. If you are an iPhone user, you're in luck. There's an official Google Calendar app in the iTunes Store, with which you can sign in and view your calendars. There is also an official Microsoft Outlook – Email and Calendar app, which can be used to view and edit Outlook Calendars.

Calendar conundrums
If you create or change an appointment and it doesn't appear right away on everyone's device, wait for five to ten minutes before attempting troubleshooting, to let it percolate through the various layers of software. If the changes are visible on the wall calendar – that is, on the website – the issue is most likely to do with the device, not the Raspberry Pi.

The rear view of the Raspberry Pi Touchscreen Display, where the Pi can be neatly housed. The stand is optional.

The software and add-ons used to view the calendar are very easy to install, so the most problematic part of this project is likely to be when it comes to adding the monitor and fixing it to your wall. You can make life much easier for yourself by buying a monitor specifically designed for the Raspberry Pi, so you have somewhere to put the computer itself – in other words, tucked away tidily behind the screen.

If the place you want to install the wall calendar is hard to reach, you may be able to buy a longer micro USB cable, but bear in mind that the voltage drops as cable length grows. Consider using shorter cables and/or a powered USB hub.

If the Raspberry Pi crashes for any reason, Firefox will attempt to restore all open web pages once it reboots, which may mean you have to plug in a mouse or keyboard to close down any extra tabs. You can reduce the chance of this happening by starting Firefox in safe mode, and then entering about:config in the address bar. Press Return to be taken to the settings screen for Firefox. Once you're there, scroll down to the setting marked browser.sessionstore.resume_from_crash and double-click to change it from True to False.

If you're using Google Calendars, anyone who scrolls to the top of the screen will be able to switch from your calendar to your other Google Apps, such as Gmail. They can also use the search bar to view documents stored in your Google Drive. If this concerns you, consider setting up a dedicated Google account, just for the calendar. You can still access and edit the calendar from your own account.


Calendar allows you to set up multiple calendars, which can be colour-coded to help you distinguish between them.



If you have a personal appointment, you can create a private event. Other people accessing your calendar simply see you're busy, with no further details.

Calendar tweaks and tips
When the calendar is on your wall, the cursor can look untidy, especially if you are using a touchscreen. A handy app named Unclutter can hide the cursor except when it's being moved or you're touching the screen. Open Terminal on your Pi (or connect via SSH) and run the command:
sudo apt-get install unclutter
In case the Pi crashes and you're forced to reboot, it's also best to have Firefox programmed to open automatically, saving you the trouble of reconnecting a keyboard and mouse. Open Terminal on your Pi (or connect via SSH) and run the command sudo nano /etc/xdg/lxsession/LXDE-pi/autostart . Scroll to the bottom of the window and add the line @firefox-esr . Press Ctrl+X, then Y, then Return to save your changes.
Finally, to make sure the display doesn't sleep after a few minutes, open Terminal or connect via SSH once again and run the command sudo nano /etc/lightdm/lightdm.conf . Scroll down to where it says #xserver-command=X and remove the hash at the start of the line. Next, put a space after the letter X and type -s 0 -dpms . Press Ctrl+X, then Y, then Return to save your changes, then reboot your Raspberry Pi.
If you are using Google Calendars, click the arrow beside Other Calendars, then Browse Interesting Calendars to see a list of calendars to which you can subscribe – for example, Public Holidays in the United Kingdom. Click Subscribe to have them appear on your own calendar.
Google lists any calendars to which you subscribe.
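For reference, once the hash is removed and the flags are added, the edited line in /etc/lightdm/lightdm.conf should read:

```
xserver-command=X -s 0 -dpms
```

The -s 0 flag disables the X screensaver timeout, and -dpms turns off display power management, so the screen stays on permanently.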



Set up your wall calendar

1 Update Raspbian and install Firefox
Before you can physically transform your Raspberry Pi into a wall calendar, you need to connect the device to the internet and open the Terminal app. Run sudo apt-get update and then sudo apt-get upgrade to bring your Pi up to date. Next, enter sudo apt-get install iceweasel to install Firefox Extended Support Release on to your mini computer.

2 Set Firefox preferences
Go to Menu > Internet > Firefox ESR to open Firefox. Visit your calendar address – http://calendar.google.com, for example – and sign in if necessary. If Firefox prompts you to remember your preferences, say Yes. Once you can see your calendar, go to Edit > Preferences, and click the Set to Current Page button to make sure Firefox always displays the calendar.

3 Set Firefox for full-screen
This step is optional but recommended. Visit https://addons.mozilla.org/en-US/firefox/addon/r-kiosk to install the Real Kiosk add-on – this disables menus and toolbars. Firefox needs to restart for it to work. Remember you can still close the window by connecting a keyboard to the Pi and pressing Alt+F4.

4 Perform tweaks
Make sure the calendar is in the view you want – for example, Monthly. Next, follow the steps in the Tweaks and Tips box (opposite) to hide the mouse when not in use, disable the Pi's sleep function and make Firefox start every time you switch on the machine if you wish. Restart the Pi to ensure your changes have taken effect.

5 Connect it up
Now for the hardware-dependent bit: connecting everything. Obviously, the specific steps required to connect your monitor are going to vary from device to device. If you are using the official Raspberry Pi Touchscreen Display, assembly instructions are available from http://bit.ly/2nR9qRs.

6 Finishing touches
Once you've connected your Raspberry Pi up to a screen, you'll want to position it somewhere that everyone can access. If it's in your home, you'll probably also want to make it look as good as possible. You can let your imagination run wild here – take a look online to see what other people have achieved. LXF



5 issues for £5!



Available from all good newsagents & supermarkets today

TERMS AND CONDITIONS The trial offer is for new UK print subscribers paying by Direct Debit only. You will receive 13 issues in a year. Full details of the Direct Debit

guarantee are available upon request. If you are dissatisfied in any way you can write to us or call us to cancel your subscription at any time and we will refund you for all

un-mailed issues. Prices correct at point of print and subject to change. Offer ends 31/05/2017.

Get into Linux today!

Issue 223

May 2017

Product code:


In the magazine

The greatest Pi ever? Of

course! We reveal the new

Pi Zero W and how you

can build better devices,

AMD Ryzen gets

reviewed, Roundup of

CAD packages, create

Android apps and we talk

Open democracy (sob).

LXFDVD highlights

Ubuntu Studio 16.04, openSUSE

Tumbleweed and XenialDog 2017.

Issue 222

April 2017

Product code:


In the magazine

Gain Force powers

without any sign of

midichlorians. Master the

terminal, the best privacy

distros, FOSS file sharing,

Nginx webserver, Tiling

window manager,

Micro:bit battle bots and

the Pi compute module 3.

LXFDVD highlights

Mint 18.1 MATE and Cinnamon

with Scientific Linux 7.3.

Issue 221

March 2017

Product code:


In the magazine

Leave Google behind.

We build our own

impenetrable castle in the

clouds out of the best

open source tools. Plus

the best BSD distros,

introducing MicroPython,

dual-booting using GRUB

and fun with CentOS.

LXFDVD highlights

Ubuntu 16.10 Remix, Siduction

16.1.0 Xfce and Porteus 3.2.2

Issue 220

February 2017

Product code:


Issue 219

January 2017

Product code:


Issue 218

December 2016

Product code:


In the magazine

Make your increasingly

clever home secure and

connect to our pick of the

best remote desktop

clients. Plus, make your

plots beautiful with D3.js,

handle text in Python,

web hosting with Drupal 8

and Linux on Dell devices.

LXFDVD highlights

Install the enterprise Linux distro

openSUSE Leap 42.2 64-bit.

In the magazine

Our no-nonsense guide

to getting started with

the greatest OS on the

planet (the Martians are

still using MacOS—

losers). Plus our pick of

the lightweight distros,

build a faster Linux PC

and inside Wayland.

LXFDVD highlights

Manjaro 16.10.2, Fedora 25, antix 16,

Bodhi Linux 4 and more.

In the magazine

The ultimate guide to

getting the ultimate

Ubuntu and the best

Chromebooks herded

into a pile. Plus revive

your old PC with a 32-bit

distro, using Wireshark,

learning about statistical

learning and VPN.

LXFDVD highlights

BunsenLabs 2016.07.10, Ubuntu

16.10 32-bit & 64-bit and more.

To order, visit myfavouritemagazines.co.uk

Select Computer from the all Magazines list and then select Linux Format.

Or call the back issues hotline on 0344 848 2852

or +44 344 848 2852 for overseas orders.

Quote the issue code shown above and

have your credit or debit card details ready






Available on your device now

*Free Trial not available on Zinio.

Don’t wait for the latest issue to reach your local

store – subscribe today and let Linux Format

come straight to you.

“If you want to expand your

knowledge, get more from

your code and discover the

latest technologies, Linux

Format is your one-stop shop

covering the best in FOSS,

Raspberry Pi and more!”

Neil Mohr, Editor



From €15 every 3 months


From $15 every 3 months

Rest of the world

From $15 every 3 months



CALL +44 344 848 2852

Lines open 8AM–7PM GMT weekdays, 10AM–2PM GMT Saturdays *

Savings compared to buying 13 full-priced issues. You will receive 13 issues in a year. You can write to us or call us to cancel your subscription within 14 days of purchase. Your

subscription is for the minimum term specified and will expire at the end of the current term. Payment is non-refundable after the 14 day cancellation period unless

exceptional circumstances apply. Your statutory rights are not affected. Prices correct at time of print and subject to change. * UK calls will cost the same as other standard

fixed line numbers (starting 01 or 02) and are included as part of any inclusive or free minutes allowances (if offered by your phone tariff)

For full terms and conditions please visit bit.ly/magtandc. Expiry date in the terms: 30/06/2017


Diff Discover how to see differences

Terminal: Use

diff and history

Discover how to easily compare files and use your bash history, with

Jason Cannon from Udemy.com and the Learn Linux in Five days course.



Jason Cannon

started his career

as a Unix and

Linux System

Engineer in 1999.

Since then he’s

used his skills at

such companies

as Xerox, UPS,


and Amazon.

This lesson will cover how to compare the contents of

files. If you want to compare two files and display the

differences, you can use the diff command. The sdiff

command or vimdiff. The diff command will display the

difference between two files, while the sdiff command will

display the difference with file1 on the left and file2 on the

right. Vimdiff will use the vim editor to display the difference

between two files.

diff file1 file2 #compare two files

sdiff file1 file2 #side-by-side

vimdiff file1 file2 #within vim

Here’s just the first line of output produced by diff.

$ diff file1 file2
3c3
< this is a line in a file.
---
> This is a Line in a File!

The first number in 3c3 represents the line number from

the first file and the second number represents line number

from the second file. The middle character separating the line

numbers will either be a c for a change, d for a deletion or an

a for an addition. So the format is: (file1 line number)(action)(file2 line number).


In the above example the third line of the first file has

changed from the third line in the second file. The output that

follows the less-than sign belongs to the first file. The text

following the greater-than sign belongs to the second file. The

three dashes are just separators.
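To see the change and addition markers side by side, here's a quick sketch using two throwaway files (the names are arbitrary) – the second file alters one line and adds another:

```shell
# Throwaway example files
printf 'alpha\nbeta\ngamma\n' > file1
printf 'alpha\nBETA\ngamma\ndelta\n' > file2

diff file1 file2 || true   # diff exits non-zero whenever the files differ
# → 2c2
#   < beta
#   ---
#   > BETA
#   3a4
#   > delta
```

Here 2c2 flags the changed second line, while 3a4 says a line was added after line 3 of the first file to make line 4 of the second.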

In the sdiff output, the pipe or vertical bar character

means that the text differs in the files on that line. You may

also see the less-than sign, which means the line only exists in

the first file. The greater-than sign means that the line only

exists in the second file.

$ sdiff file1 file2

line in file1 | line in file2

> more in file2

| differing lines

< line only from file1

> line only from file2
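Those column markers are easier to grasp with a live run. The file names below are throwaway ones, and note that sdiff, like diff, returns a non-zero exit status when the files differ:

```shell
# Two small throwaway files to show sdiff's column markers
printf 'alpha\nbeta\n' > f1
printf 'alpha\nBETA\nextra\n' > f2

sdiff f1 f2 || true   # the | column marks changed lines, > marks additions
```

The matching "alpha" lines appear unmarked, "beta" and "BETA" are joined by a pipe, and "extra" gets a greater-than sign because it only exists in the second file.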

When you run vimdiff, both files will be pulled up in

separate windows. You can use standard vim controls (that is,

the utter crazed nonsense of a bygone era – Ed) such as :q to

quit, :qa to quit all and :qa! to force-quit all. Use Ctrl-W W to

switch windows.

We can explore how these commands work using the

basic example files below, we’ll use cat to display them with

line numbers:

$ cat -n secret

1 tags: credentials

2 site: facebook.com

3 user: jason

4 pass: Abee!

5 tags: credentials

$ cat -n secret.bak

1 tags: credentials

2 site: facebook.com

3 user: jason

4 pass: bee

5 tags: credentials

$ diff secret secret.bak
4c4
< pass: Abee!
---


In partnership with Udemy Tutorial

Learn & save with Udemy

If you’ve enjoyed this small taste of the Learn

Linux in Five Days Udemy course, you can

gain unrestricted access to the full course at

udemy.com with this exclusive Linux Format

reader discount. You’ll discover a full working

knowledge of using the standard sysadmin

terminal tools, setting up a test machine, Linux

file permissions, standard text editors,

manipulating files, using network transfers,

controlling processes and much more.

Get the discount, start learning!

Visit www.bit.ly/LearnLinux5 to enroll in the

course at a discounted price of £15 * (92% off)!

Click the Buy Now button and sign up for an

account on Udemy.

Once you have signed up for an account, you

will be asked to confirm your purchase.

The discounted course price of £15 will be

applied automatically with the above provided

link. Input your credit card information and click

Enroll now

and save

£180 * !

on “Pay now.”

You’ve successfully

enrolled for the course! Enjoy

a lifetime of access on the go.

Udemy was founded in 2010 with the aim of

improving lives through learning. Udemy is a

global marketplace for learning and teaching

online where more than 15 million students learn

from a library of 45,000 courses taught by

expert instructors in 80 different languages.

*Discount is valid until 6/June/2017


> pass: bee

You’ll notice that the line that begins with the less-than

symbol belongs to the first file, while the line with the greater-than

symbol belongs to the second file. You’ll also notice the

first line of diff output says 4c4 . That means the fourth line of

the first file has changed or is different to that of the fourth

line of the second file.

$ echo new last line >> secret

$ sdiff secret secret.bak
tags: credentials          tags: credentials
site: facebook.com         site: facebook.com
user: jason                user: jason
pass: Abee!              | pass: bee
tags: credentials          tags: credentials
new last line            <

Here you can see sdiff in action. It places the files side by

side and the vertical bar or pipe symbol displays the line that

has a difference. We added a new last line to the file. The less-than

symbol shows that there’s a line in the first file that’s not

in the second file.

Shell history

Let’s cover the shell history, repeating commands or portions

of commands with the exclamation mark syntax. Each

command you enter into the shell is logged in your shell

history, so having access to your shell history is extremely

useful. You can search through it, repeat commands you

previously entered and recall commands, then change them

before execution.

Not only can this save you time and keystrokes, but it can

prevent you from making mistakes by rerunning previously

known good commands. Some shells like Bash keep the

history in memory and only write them to a file on exit.

Common history files include: .bash_history, .history

and .histfile. These history files are stored in your home

directory. The history command displays the commands in

the shell history, it’ll precede each one with a number that can

be used to reference the commands at a later date. By default

Bash retains 500 commands in your shell history. This is

controlled by the HISTSIZE environment variable.
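You can check and raise that limit for the current session straight from the shell; to make the change permanent, add the same assignment to your ~/.bashrc:

```shell
echo "$HISTSIZE"   # show the current limit (may print nothing in a script)
HISTSIZE=2000      # keep 2,000 commands for this session
echo "$HISTSIZE"
# → 2000
```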

!N # repeat command number N

!! # repeat the last command

!string # repeat the most recent command starting with string

You can use the exclamation mark history expansion

syntax to rerun a command by number. Run the history

command to get a list of commands that are preceded by a

number. If you want to rerun command number three in your

history, you would type !3 then press Enter. If you want to

repeat a command that starts with a certain command or

even character you can use the !string combination. For

example if you want to repeat the cat command you recently

ran, you can simply type !c and press Enter.

As well as running entire commands in your history you

can pull out parts of a command line. The syntax is !event:word.

The !event part represents an event, ie, the last command. You can use

the exclamation syntax we just explained, ie, !N, !! or

!string. The :word part represents a word on the command

line, 0 is the command run, 1 is the first argument and so on.

$ head file.txt sort.txt note.txt

$ !!

head file.txt sort.txt note.txt

$ vi !:2

vi sort.txt

In this example !:0 is head, !:1 is file.txt, !:2 is sort.txt

and !:3 would be note.txt. There are two more useful !

shortcuts you should be aware of:

!^ #first argument

!$ #last argument

$ head file.txt sort.txt note.txt

!^ = file.txt

!$ = note.txt
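History expansion is only switched on in interactive shells by default, but you can force it on to experiment with these shortcuts in a script. The sketch below (file names are throwaway ones) runs the demo under bash with both of the required options enabled:

```shell
# History expansion normally only works interactively; enabling both
# the history and histexpand options lets us demo it in a script.
bash <<'EOF'
set -o history -o histexpand
printf 'one\n' > file.txt
printf 'two\n' > note.txt
head file.txt note.txt
echo !^ !$    # both designators refer to the previous head command
EOF
```

Both !^ and !$ on the same line expand against the same previous command, so the final echo receives file.txt and note.txt as its arguments.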

This is just a tiny taster from the Learn Linux in 5 days Udemy

course, to try more see our reader offer above now! LXF





Once you’ve installed vim you’re able to use vimdiff to compare files.

The exclamation

point (!) is

sometimes referred

to as a bang.

To use vimdiff you’ll

first need to have

vim installed, using

$ sudo apt-get install vim

Save now! Visit udemy.com and use the discount code: LearnLinux5




Terminal: Using

multiple profiles

Nick Peers reveals how the terminal can be customised for different uses

with the help of custom profiles.



Nick Peers

has been playing

around with

computers for

over 30 years, and

has been dabbling

with Linux for the

best part of a




Set an unsupported

value to a key using

dconf? You can

restore the default

setting like this:

$ dconf reset /org/




Switch profiles

Ubuntu’s default terminal is the Gnome-Terminal, and

one of its lesser known features is support for

profiles. Profiles are a collection of terminal settings,

including profile name, font and background colours, and

scrolling. In addition, profiles can be set to run a specific

command or shell on startup, launch to a specific screen and

even connect to a remote computer via SSH.

This flexibility makes it easy to see why a single user might

want to develop more than one profile for using the terminal

– you may, for example, regularly administer another PC via

SSH, such as a headless Pi Zero running a Mopidy music

server (see LXF218). In this case, you could create a profile

that logs you directly into your Pi Zero, one that uses a

different colour scheme to help you differentiate that terminal

window from any others you might also be running.

Create and manage profiles

In a departure from the norm, you can’t use the command

line to create and manage profiles – the dconf tool can be

used from the shell once you’ve set up a profile, but it can

only be used to view and make changes to settings you’ve

configured following the tips and advice below. The box

reveals what you can do using the dconf tool.

So, to create your first custom profile, you need to open

the terminal, then select File > New Profile. A dialog box will

appear consisting of five tabs. Start by giving your profile a

suitably descriptive name that you’ll use to select when

switching profiles going forward.

Before going further, click the Close button, then switch to

your new profile via the Terminal > Change Profile menu. Now

select Edit > Profile Preferences to reopen the Editing Profile

dialog – doing this logs you in as your new profile as you

You can easily change profiles without

leaving your current terminal session –

the obvious choice is to open the

Terminal > Change Profile menu and

select the profile you want. Doing so will

instantly apply that profile’s settings to

the current session. It won’t, however,

run any custom commands assigned to

that profile.

You can also launch a new Terminal

session from within the command line –

note this will launch it in a new window

(and is the equivalent of choosing File >

Open Terminal):

$ gnome-terminal --window-with-profile=ProfileName

Replace ProfileName with

your chosen profile name (which is

case sensitive). If the profile name

contains any spaces, precede each

space with a backslash character –

for example, My\ Profile.


Profiles are created and set up via the Profile Preferences

dialog box, accessed through the terminal’s Edit menu.

make changes, so any tweaks you make are applied in real

time and can be previewed in the terminal window as you go.

If you’re looking to experiment with colour schemes, different

fonts and so on, it’s a must.

The General tab also includes options for setting the initial

size of your terminal window in columns and rows – 80x24 is

the default. You can also change the cursor shape from the

default Block to I-Beam (a straightforward flashing vertical

line used by the likes of LibreOffice Writer) or Underline –

you’ll see the cursor update as you switch between them.

Untick the Terminal Bell box if you don’t want to receive

audio notifications via your PC’s internal speaker – sadly, you

can’t choose to switch on visual (ie, flashing) notifications in

its place for specific profiles. The final three options found

under the General tab are self-explanatory, and start with a

setting for disabling bold text. You can also specify whether to

allow text to automatically rewrap when manually resizing the

terminal window, plus choose a custom font for the display.

The dconf tool reveals that the custom font option

comprises two separate settings. The first – “use-systemfont”

– determines if you’re using the system font or not, with

a simple true (unticked) or false (ticked) value. The second –

“font” – only works when “use-system-font” is set to false,

and determines the replacement font, style and size.

More customisation options

Switch to the Colours tab next to set your text and

background colours – untick Use Colours From System

Theme and you can quickly switch to a different built-in

scheme (such as green text on a black background) or

manually specify colours using the various colour pickers. You

can also switch on transparency, then use the slider to blend


Terminal Tutorial

Use the command line

All terminal profiles are managed by the dconf

tool, which is a low-level tool for managing

various configuration and system settings. You

can use dconf to read the values of any settings

(referred to as keys) that you’ve previously

configured using the Profile Preferences dialog

box. First, make a note of the Profile ID of your

chosen profile as found under the General tab of

its Profile Preferences dialogue box. Once

identified, return to the command line and type

the following, substituting your exact Profile ID

(including the leading colon character) for


Only those keys you’ve already configured will

appear. To view a specific key’s current value, use

dconf read like so:

$ dconf read /org/gnome/terminal/legacy/profiles:/ProfileID/key

Substitute key with your chosen key.

You can then use dconf write to change the

key’s value, assuming you know what available

values exist:

$ dconf write /org/gnome/terminal/legacy/profiles:/ProfileID/key 'value'

You can’t create new keys using dconf, and

you need to know the correct value for the key

that you wish to change – some of these values

will be a relatively simple choice between ‘true’

and ‘false’, but others are going to be more

complicated. Refer to the Top Tip box if you run

into trouble.

the terminal window into the background. The Palette section

lets you set individual colours – again you can choose

between various built-in schemes or opt for Custom to

handpick each colour.

The Scrolling tab controls the terminal’s behaviour in four

different ways: first, you can opt to show or hide the scrollbar

(if you hide it, you’ll need to scroll exclusively using your

mouse wheel or by trackpad gesture). Scroll on Output is

unticked by default, so you can scroll upwards with the

mouse to stop scrolling automatically when a large amount of

output is being produced. Ticking this option disables your

ability to do this.

Scroll on Keystroke is ticked, and works in a similar way as

Scroll on Output, except it’s linked to your actual keystrokes,

and for that reason is best left as it is. Finally, Limit Scrollback

To reveals how many terminal lines of output are stored in

memory – this buffer is cleared every time you restart a

terminal session, so you may want to reduce the number if

you’re low on memory and need to produce large volumes of

output without referring to earlier lines. If you’d rather not

limit this at all – not recommended except where you’ve

plenty of RAM to spare – then set the figure to 0, but beware

sluggish performance.

The Compatibility tab allows you to alter the behaviour of

the Backspace and Delete keys, plus set the default encoding

for your terminal session. The final option – Ambiguous-width

Characters – can be set to Wide should you work with certain

languages or characters, such as Greek, or Asian logograms.

Custom commands

Now you’ve prettified your profile, it’s time to tackle the

Commands tab, where you’ll find two check boxes. To run a

simple command on launching the profile, tick Run a Custom

Command Instead of My Shell, then enter your chosen

command into the Custom Command box. For example, to

log on to a remote computer:

ssh username@hostname

The When Command Exits dropdown menu allows you to

determine what happens when the command is completed –

by default, the terminal window will exit, but you can choose

Hold the Terminal Open or Restart the Command too. If

you’re executing a simple command like sudo apt-get

update , then the former is the sensible choice.

Note that when you use the custom command to log on

remotely to another computer, the When Command Exits:

action is implemented when you manually close the

connection, typically using the Exit command. Choose Hold

the Terminal Open, for example, and you’ll see a pop-up menu

appear with a handy Relaunch button if you want to

reconnect immediately for whatever reason.

The Command tab also houses the Run Command as a

Login shell option – tick this to have the terminal read the

.profile file – which is the main system wide initialisation file

that’s executed when logging into the shell directly – rather

than the .bashrc file, which is used when you open terminal

via the Unity desktop. You’ll know if or when you need it.

Complete your tweaks

That concludes the process of creating and editing your first

profile. When you next open the terminal, it’ll default to your

basic profile, but you can easily switch following the advice in

the second box. Need another custom profile? Choose File >

New Profile again to create one based on the currently

selected profile, complete with same name and settings,

ready for you to fine-tune. If you want to create a new profile

from scratch, choose Edit > Preferences and switch to the

Profile tab.

Here you’ll see a list of all existing profiles – click New to

create your blank new profile, or select a profile from the list

and click Clone to create one based on that template. You can

also edit profiles from here without having to switch to them

first, plus delete unwanted profiles too.

Last, but not least, you’ll also see a dropdown menu at the

bottom – Profile Used When Launching a New Terminal. Use

this to pick the default profile that’s selected when you launch

terminal (or open a new terminal window) in future. LXF

Next issue:



Set your default profile via this dialog box – it’s also where you go to manage

your profile collection.

Enhance your Terminal-fu



pfSense How to install and

pfSense: Build

a firewall

Afnan Rehman shows how easy it is to build your own router and firewall

system with this open source software.





Afnan Rehman is a student, Linux

tinkerer, and

general computer

geek who breaks

everything first so

that you don’t

have to.



Make sure you’re

using a 64-bit

version of pfSense

if you’re installing

on 64-bit capable

hardware. Although

installing 32-bit

pfSense is possible,

it is unstable at

times and is not


Have you ever wanted to build your own router without

going through the hassles of creating your own

iptables and network rules from scratch (the way we

have to on page 84)? It’s possible to get the functionality and

performance you’re looking for without putting in as much

work. The solution is pfSense, the open source network

firewall/router software distro based on the FreeBSD OS.

pfSense is used in many different applications from home

routers to business solutions and is well regarded in the

community for its reliability and versatility.

Assuming you have the hardware for it, which is quite

modest by any standards, pfSense can be installed on any

computer and managed from a web interface on a separate

client device on the same network. All you need is any

processor better than a Pentium 2, 256MB or more RAM, and

at least 1GB of disk space, as well as at least two Ethernet

ports, one for WAN and the other for LAN.

Once your hardware is ready to go, head over to the

pfSense download page at www.pfsense.org/download and

select your architecture. If you’re using any fairly modern

computer, you’ll likely be best served choosing the AMD64

architecture. Click the download button and select the latest

stable version to download. Once you have the file, we

recommend installing via a USB hard disk, although a CD

would work just as well, provided your router PC has access

to an optical disk drive. You can burn the installer to the disk if

using a CD or write the image to the USB drive using a tool

such as Rufus from Windows, or the dd command to write

from a Mac OS X or Linux computer.
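Before writing the image it's worth verifying your download against the SHA-256 checksum published on the download page. The pattern looks like this – shown with a stand-in file so it's self-contained; substitute the real image name and the checksum file you downloaded:

```shell
# Stand-in file; use the real image name from the download page
printf 'demo\n' > image.iso.gz
sha256sum image.iso.gz > image.iso.gz.sha256

# Verify – prints "image.iso.gz: OK" when the hashes match
sha256sum -c image.iso.gz.sha256
```

If the check fails, re-download the image before going any further – a corrupt installer is a miserable thing to debug after the fact.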

Once the write is complete, you are ready to install

pfSense on your router. Plug in your chosen installation media

and boot from it using the boot menu on your router PC. You

will be greeted with a text-only screen showing several setup

options with a countdown timer.

Press ‘I’ on your keyboard to initiate the installer.

Once you are in the installer, you will be given some

options for tasks that can be performed by this software.

For most people, it is fine to choose the “Quick/Easy Install”

option, which will assume that the first located disk is the

intended installation target. If you only have one hard disk

installed in the router system, this will pose no issues.

However, if there are multiple storage devices, we

recommend using the “Custom Install” option to ensure that

pfSense is installed to the correct disk. Next, the installer will

ask if you are happy to proceed with the changes. There is no

going back so if there is any data currently on the target disk

be sure it is not something you will miss. Once you have given

the OK to the installer, it will proceed to wipe the disk and

install pfSense, which will take some time. This would be the

perfect time for a tea break.

Once the installation is complete, the installer will ask you

to choose between a standard kernel and an embedded

Once you install pfSense and reboot the computer,

ensuring that you removed the installation media, the

software will boot to a menu showing network interfaces

and options for them.

Router maintenance

Hopefully by the end of this tutorial, you will have

a working router with pfSense running and a

web interface that can be easily accessed

through a client computer. There are a few final

suggestions we have for you once you reach this

point. Firstly, once you reach the end, there will

most likely be absolutely no need for you to

check the display directly from the computer

running pfSense. If you had it hooked up to a

monitor, feel free to re-purpose that monitor for

some other use. Even if you need to restart the

router, that can all be handled from the web

interface, just as if it were a consumer router

that you would control from a web interface.

Since this router does not have wireless

capability, we recommend adding some sort of

wireless access point to your network to allow

wireless devices to connect, as well as a switch

to accommodate multiple wired devices. Both of

these are relatively cheap and easy to add to a

system. Last but not least, we’d recommend

keeping the router hardware itself in a safe,

secure, and easy to access area of the house

should you ever need to get to it. Simple

housekeeping such as dusting it every few

months will reduce noise and heat generation,

greatly increasing the lifespan of your hardware

and allowing you to reap the benefits of a high

performing and heavily customisable router for

years to come.


pfSense Tutorial

Getting extra help

You may get stuck at some points in setting up

your pfSense system. That’s normal, after all this

is not the complete exhaustive guide on

everything to do with pfSense. Luckily, there are

many resources online to help you in your

journey. Since pfSense is such a popular and

well-known router/firewall operating system, it is

very easy to find detailed documentation and

troubleshooting tips for almost every single

problem you may run into. One recommended

tool to use is the official documentation for

pfSense at https://doc.pfsense.org/index.

php/Main_Page. The documentation provided

by the developers is quite detailed and thorough,

providing many troubleshooting tips for the most

common issues. The documentation covers

installation, features, release versions, FAQs,

packages to expand pfSense’s capabilities, and

of course, a section devoted to troubleshooting.

In addition, the documentation also provides

several tutorials on enabling and using advanced

functions such as remote firewall administration.

The official site also provides guidance on

choosing hardware and seeking further training,

should you decide to tackle more advanced

projects such as deploying a pfSense router in an

office environment where hundreds of people

may be accessing the router’s services at once.

Of course, don’t limit yourself to just the

solutions on the official site, but also search

forums and additional guides found all over the

internet to learn more about your problem and

how to address it.

kernel. Unless you really know what you are doing, we

recommend going with the standard kernel, allowing for a

VGA console. Next you will be asked to reboot.

Once you’ve rebooted, you will once again get the boot

timer and you will get a screen showing the available

interfaces to configure a network. For your router, you will see

a number of interfaces corresponding to the number of

Ethernet ports available to your system. Recall from earlier

that you will need at least two.

Questions, questions

The first question you will be asked is if you wish to set up

VLANs. VLANs are Virtual Local Area Networks, and most

home users do not use them. They can be beneficial in cases

where you wish to separate broadcast domains, or isolate

traffic for security reasons. This is typically used in large office

spaces and the like. For now, unless you know you need this,

press ‘N’ to refuse setup of VLANs.

The two default interfaces are em0 and em1. We typically

like to assign em0 as the WAN (which is incoming traffic from

the internet) and em1 as the LAN interface (going out to your

local network of devices). Type those into their respective

areas when prompted. You will also be prompted to enter an

optional interface name, which is not necessary so you can

skip that by pressing the Enter key. The software will then ask

you to confirm the settings for the LAN and WAN interfaces.

Make sure they are correct then press ‘Y’ to proceed. The

operating system will then assign the interfaces and display

their IP addresses along with several options and a prompt to

choose an option. The IP address of the WAN will usually be

assigned through DHCP from your internet service provider.

The LAN IP address is given a default value, which can be

changed by selecting option 2 to set the IP address of each

interface. If you wish to use DHCP for the clients on your local

network, you can also set an acceptable range of IP

addresses using option 2 as well. Select the second option

from the main menu and then select the LAN interface.

Under the option that asks for the new LAN IPv4 bit count,

enter a number corresponding to the subnet mask you wish

to use using the table above the prompt. For most home

users, this will be 24. Then when asked if you want to enable

the DHCP server on LAN, type ‘Y’ and enter the start and end

addresses for the client address range you wish to set. Ensure

this range encompasses all devices that need an IP address

on your network, and allow some room for growth.
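As a quick sanity check, the relationship between the bit count and the number of usable host addresses can be worked out in any shell:

```shell
# Usable hosts for a given prefix length: 2^(32-prefix), minus the
# network and broadcast addresses. A /24 leaves 254 usable addresses.
prefix=24
echo $(( (1 << (32 - prefix)) - 2 ))
```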

Once you are done setting this up, you will be given a link

to access the web configurator from a client device on the

same network. Use this link to log into the web configurator.

At this point, you will most likely not need to access the

console directly from the router, and can access everything

needed through the web configurator.

On your client computer, the web address should take

you to a login screen. By default, the username and password

are “admin” and “pfsense”, respectively. Once you log in for

the first time a setup window will guide you through the initial

configuration of pfSense, including entry of a domain and

hostname, DNS servers, time zones, and all of that. You will

also get an opportunity to configure WAN and LAN interfaces

and after doing so, will be prompted to change the admin

username and password. We strongly recommend you take

this opportunity to set a strong password to avoid your

network being compromised.
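One simple way to come up with such a password is to pull random characters straight from the kernel's entropy pool (a sketch; adjust the length and character set to taste):

```shell
# Generate a 16-character alphanumeric password from /dev/urandom
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16; echo
```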

Once you complete the setup, the system will reload,

applying your changes and rebooting the system. Once the

reload process is finished you will be met with a

congratulatory message and access to the pfSense web

configurator GUI dashboard. From this GUI you can pursue

the addition of various advanced settings such as MAC

filtering, VPN setup, and firewall settings which you can

customise to your liking.

Congratulations! You’ve set up your very own pfSense

router, which should now be up and running for everyday use

around your home network. Feel free to jump from here to

more complex things such as adding advanced configuration

to your pfSense system, or even building your own router

from scratch. The possibilities are endless! LXF

During initial installation, you will be taken through a relatively simple menu

system. Simply follow the prompts and installation should proceed smoothly.

Improve your Linux skills Subscribe now at http://bit.ly/LinuxFormat


June 2017 LXF224 75

HAProxy A guide for system and

network admins, and web developers.

HAProxy: TCP

load balancer

Mihalis Tsoukalos teaches you how to install and set up HAProxy to load

balance a MySQL Replica Set, all before lunchtime.





Mihalis Tsoukalos is a Unix administrator,
a programmer and a DBA who enjoys
writing articles and learning new things.




Should you use

HAProxy? It’s the

perfect tool for

load balancing

mainly because

of its simplicity

and transparent

operation. You

should definitely

consider using it.

HAProxy, which stands for High Availability Proxy, is a

Load Balancing reverse proxy for TCP and HTTP

applications. This tutorial will use HAProxy in

combination with MySQL to illustrate its load balancing

capabilities. However, HAProxy can also be used with other

TCP servers such as Apache and Nginx.

Please do make backup copies of every file you make

changes to, so you’ll be able to get back to your initial

configuration more easily. The main reason for this is that

software that deals with web sites and database servers can

make them inaccessible to clients when configured

incorrectly, and this will provide a quick fix.

Getting and installing

Installing HAProxy on Debian or Ubuntu Linux machines is as

simple as executing the following command with root privileges:


# apt-get install haproxy

Then, you can find out the version of HAProxy you are

using as follows:

# haproxy -v

HA-Proxy version 1.5.8 2014/10/31

Copyright 2000-2014 Willy Tarreau

Please bear in mind that stable Debian distributions tend

to install older versions of packages because they are more

secure and stable despite the fact that they have fewer

features. At the time of writing this the latest stable HAProxy

versions are 1.7.2, 1.6.11 and 1.5.19. If your main concern is

stability use either version 1.6.x or version 1.5.x as version 1.7.x

of HAProxy is pretty new.


The main HAProxy configuration directory is /etc/haproxy,

which contains the following:

# ls -l /etc/haproxy

total 8

drwxr-xr-x 2 root root 4096 Jan 21 19:16 errors

-rw-r--r-- 1 root root 1129 Jul 14 2015 haproxy.cfg

# ls /etc/haproxy/errors

400.http 403.http 408.http 500.http 502.http 503.http


The main configuration file of HAProxy is /etc/haproxy/

haproxy.cfg. The error directory contains various error

messages related to given HTTP status codes. The

screenshot above shows the contents of /etc/haproxy/

haproxy.cfg and /etc/haproxy/errors/400.http.

Get on with it…

With the basic install done, let’s take a look at basic function

before continuing with illustrating how to use HAProxy to load

balance multiple MySQL instances. You can start HAProxy by:

# service haproxy start

The following output proves that HAProxy is up and

running successfully:

HAProxy Tutorial

hatop & haproxyctl

hatop is an interactive ncurses client for HAProxy

whereas haproxyctl is a utility for managing

HAProxy from the command line. Neither of

them is required for HAProxy to work but they

can make your life easier should you decide to

install them. On Debian and Ubuntu Linux

systems you can install them as follows:

# apt-get install hatop haproxyctl

haproxyctl allows you to quickly administer

and overview HAProxy with its handy command

line options:

$ sudo haproxyctl show health

$ sudo haproxyctl show stat

The first command allows you to quickly

check the status of HAProxy whereas the second

command returns counters for each proxy and

server. Should you wish to learn more about

haproxyctl, you can visit its man page.

The image on the final page shows the hatop

utility in action. For hatop to work, you will need

to have the HAProxy Unix socket enabled.

# ps ax | grep -i haproxy

17747 ? Ss 0:00 /usr/sbin/haproxy-systemd-wrapper -f

/etc/haproxy/haproxy.cfg -p /run/haproxy.pid

17749 ? S 0:00 /usr/sbin/haproxy -f /etc/haproxy/

haproxy.cfg -p /run/haproxy.pid -Ds

17751 ? Ss 0:00 /usr/sbin/haproxy -f /etc/haproxy/

haproxy.cfg -p /run/haproxy.pid -Ds

The good thing is that the previous output shows various

useful things about your running HAProxy instance, including

the full path of the configuration file used and the location

where you can find its process ID (/run/haproxy.pid).

Similarly, you can stop HAProxy from running:

# service haproxy stop

It is now time to load balance two MySQL instances. For

the purposes of this tutorial, the two MySQL instances will be

on the same network using two test machines because you

cannot try such things on production servers! We will refer

to the first machine as MyA and to the second as MyB. The

HAProxy machine sits on a different network, but this should not be a

problem as long as the two networks can communicate with

each other successfully. To get a better understanding of

what is going on, have in mind that HAProxy runs on a Virtual

Machine on MyB.

Please note that you might need to enable network access

to both MySQL servers, which will allow them to listen for

TCP/IP connections – this feature is disabled by default for

security reasons. The following diff output shows the change

you need to make to the MySQL configuration file in order to

enable remote TCP/IP connections:

$ diff my.cnf my.cnf.orig


< bind-address =

Both MySQL processes listen to the default port number

of MySQL which is 3306. Next, you will need to grant the

required permissions that enable remote access to the

selected users, which in this case is just root:

mysql> GRANT ALL ON *.* TO 'root'@''


Query OK, 0 rows affected, 1 warning (0.00 sec)

You should also execute the next command on MyB:

mysql> GRANT ALL ON *.* TO 'root'@''


Query OK, 0 rows affected, 1 warning (0.00 sec)

Do not forget to restart both MySQL servers after making

these changes. You can make sure that both MySQL

instances are up and running and can be accessed from the

network by trying to connect to each one of them from a third

machine, as follows:

$ ifconfig | grep "inet addr" | head -1

inet addr: Bcast:


$ mysql -u root -h -p

$ mysql -u root -h -p

You will also need to set up MySQL Master-Master

Replication. Talking about that is beyond the scope of this

tutorial, but just make sure that the replication is working

properly before continuing; the easiest way to do so is by

creating a new table to a new database on one of the two

MySQL instances and see whether this will get replicated to

the other database.

Lastly, you will have to create two more MySQL users on

each one of the two MySQL databases in order to allow

HAProxy to monitor them – the good thing is that if the

replication works as expected, you will only have to execute

the following commands once:

$ mysql -u root -p

mysql> INSERT INTO mysql.user (Host,User) values

('','haproxy_check'); FLUSH PRIVILEGES;

mysql> GRANT ALL PRIVILEGES ON *.* TO 'haproxy_

root'@'' IDENTIFIED BY 'password' WITH GRANT

OPTION;

Please note that you will need to use the IP address of the

Linux machine that runs the HAProxy server. Executing the

same commands using the IP addresses of the two MySQL

servers might make your life easier, especially if you are using

any virtual machines, so go ahead and run them as well.
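Cleaned up, and with the documentation-range address 192.0.2.10 standing in for your HAProxy host (a purely illustrative placeholder), the two statements look like this; note that on newer MySQL releases you would use CREATE USER rather than inserting into mysql.user directly:

```sql
-- 192.0.2.10 is a hypothetical stand-in for the HAProxy host address
-- Passwordless user that HAProxy's mysql-check probes connect as:
INSERT INTO mysql.user (Host, User) VALUES ('192.0.2.10', 'haproxy_check');
FLUSH PRIVILEGES;
-- Privileged user for your own test queries routed through the proxy:
GRANT ALL PRIVILEGES ON *.* TO 'haproxy_root'@'192.0.2.10'
  IDENTIFIED BY 'password' WITH GRANT OPTION;
```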

Then, you should make some changes to the configuration

file of HAProxy. Please have in mind that the original haproxy.

cfg file is saved as haproxy.cfg.orig. The following output

shows the changes to the original haproxy.cfg file using the



HAProxy only supports TCP balancing. If you need to perform
UDP balancing, you can look at software such as udp-balancer,
udpbalancer and, guess what, Nginx!

The kind of information you should expect to find in the log files of HAProxy.





Should you use the latest version

of HAProxy? Using

the latest version

of software that is

as critical to your

infrastructure as

a proxy server and

a load balancer is

not always the best

idea, especially

when you do not

need any of its new

features. If it is not

broken, do not fix it.

diff command line utility:

$ diff haproxy.cfg haproxy.cfg.orig



< listen mysql-setup

< bind

< mode tcp

< option mysql-check user haproxy_check

< balance roundrobin

< server mysql-1 check

< server mysql-2 check

Although using IP addresses instead of machine names

might make your life a little more difficult, it saves HAProxy

from having to resolve the machine names.
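With documentation-range addresses filled in (192.0.2.11 and 192.0.2.12 are purely illustrative stand-ins for MyA and MyB, and the bind line is an assumption), the new listen section might look like this:

```
listen mysql-setup
    bind *:3306
    mode tcp
    option mysql-check user haproxy_check
    balance roundrobin
    server mysql-1 192.0.2.11:3306 check
    server mysql-2 192.0.2.12:3306 check
```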

Please make sure that the machine that runs HAProxy

does not run MySQL in any way because the next commands

will try to connect to the local copy of MySQL instead:

$ mysql -h -u haproxy_root -p -e "show variables like 'server_id'"

Enter password:

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 2     |
+---------------+-------+

$ mysql -h -u haproxy_root -p -e "show variables like 'server_id'"

Enter password:

+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 1     |
+---------------+-------+


As each MySQL member of a replica set has a different

server_id, the previous output tells us that you can connect to

both MySQL servers of the replica set while querying

localhost, which is the machine that runs HAProxy. So, from

now on, when you need a MySQL server, you need only give the

IP address and the port number of the HAProxy server

instead of the IP address of one of the two MySQL

servers, and HAProxy takes care of the rest. This is a

transparent way of using services without the world knowing

what is going on behind the scenes.

How to use the HAProxy Unix socket, which in this case is /run/haproxy/

admin.sock, to read metrics.

The Statistics web page of HAProxy after successfully

configuring HAProxy.

Log files

On an Ubuntu system, the log messages of HAProxy can be

found at /var/log/haproxy.log. The kind of log entries you

are going to find inside /var/log/haproxy.log will be similar

to the following:

Jan 31 23:19:08 LTTng haproxy[936]: Server mysql-setup/

mysql-1 is DOWN, reason: Layer4 connection problem, info:

“General socket error (Network is unreachable)”, check

duration: 0ms.

Jan 31 23:21:57 LTTng haproxy[936]: [31/

Jan/2017:23:21:57.856] stats stats/ 0/0/0/0/0 200

1346 - - LR-- 1/1/0/0/0 0/0 “GET /hastats;csv HTTP/1.1”

The screenshot on the previous page shows more entries

from /var/log/haproxy.log to get a better understanding of

the kind of information found in /var/log/haproxy.log. The

general idea is that you should keep an eye on the related log

files when you are learning a new piece of software because

log files give you a better understanding of what is happening

behind the scenes.

HAProxy offers a large number of metrics that allow you

to monitor the way it works as well as its performance. Those

metrics can be divided into three main categories: frontend,

backend and health metrics. Frontend metrics collect

information about clients whereas backend metrics collect

data about the availability and the status of the backend

machines. The last kind of metrics informs you about the

status of the HAProxy setup.

Frontend metrics include information such as HTTP

requests per second (req_rate), number of request errors

(ereq) and number of bytes sent (bout). Backend metrics

include ways to measure the average response time (rtime)

and the number of requests that are not in a queue (qcur).
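Because the stats output is plain CSV, ordinary shell tools are enough to pull individual metrics out of it once you have fetched a row. A minimal sketch, using a fabricated sample row (the column positions here are illustrative, not HAProxy's exact layout):

```shell
# A fabricated CSV stats row: proxy name, server name, ..., status
stats='mysql-setup,mysql-1,0,0,1,2,,262,33,1964,,0,,0,0,0,0,UP'
# Print the server name (field 2) and its status (field 18 in this sample)
echo "$stats" | awk -F, '{print $2, $18}'
```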

The most secure way to get the HAProxy metrics is using

UNIX sockets. On an Ubuntu Linux system, the default


HAProxy vs Nginx

Nginx is not just a web server; it can do many

other tasks equally well including some of the

tasks that can be performed by HAProxy. The

main advantage of Nginx over HAProxy is the

integration that it offers, which also means that

you will have to learn just one piece of software

to do your job. On the other hand, this is also a

disadvantage because everything you do

depends on a single piece of software.

Additionally, most of the time specialised

software tends to perform better than relatively

generic tools, has fewer bugs because it has a

smaller implementation and performs faster

because it has a simpler design.

Of course, the integration Nginx offers allows

you to perform routing based on information

found in the HTTP layer, which includes both

URL paths and cookies – this also means that

Nginx might be able to perform complex tasks

more easily. Lastly, Nginx has native support for

SSL and is also a caching server. Nevertheless,

HAProxy is simpler to install, configure and use

and works transparently, which means that after

a proper setup, you forget that HAProxy is there.

On top of these advantages, HAProxy works with

TCP services in general whereas Nginx only

works with HTTP. Another point where HAProxy

is better is that it revisits a dead server almost

as soon as it restarts whereas Nginx waits a

while to do so.

The general advice here is to use Nginx when

you need a web server; otherwise, using

HAProxy is a wiser choice.

HAProxy configuration has support for the desired Unix

socket, so you will not need to do anything else. To make sure

that the Unix socket (/run/haproxy/admin.sock) has been

created and is usable, you can do the following:

$ sudo nc -U /run/haproxy/admin.sock


> show info


The image on the previous page also shows the kind of

output you can get from the Unix socket using the netcat

command line utility. Keep in mind that if you are going to do

any serious work with HAProxy, you will need to learn how to

interpret its metrics.

Monitoring page

HAProxy offers a monitoring page where you can learn more

about the HAProxy operation in a graphical way. This should

be the first place to visit when you are having problems with

HAProxy. However, this page is not enabled by default. In

order to enable it, you should add the next block to the

HAProxy configuration file:

listen stats

bind *:8080

stats enable

stats hide-version

stats realm Haproxy\ Statistics

stats uri /hastats

stats auth user:password

The last line allows you to define valid username and

password combinations whereas the uri definition defines the

URI of the statistics page. The bind value defines the port

number the statistics page will listen to. For changes to take

effect, you will need to restart HAProxy.

The image on the previous page (top) shows the

monitoring page of HAProxy, which usually listens to port

number 6427 on the localhost address, which means that

this page is not accessible from the internet by default.

However, the presented configuration uses port number

8080. As you can see from the web page, you can also get the

output in CSV format, which can be very handy because it

allows you to easily store the values of the metrics on a

database server. If you are using HAProxy, you will most likely

need to enable the monitoring page.

Please note that HAProxy provides its own web server for

displaying the monitoring page.


HAProxy can use a plethora of algorithms to decide which

server is going to be selected when load balancing multiple

servers. The algorithms that can be used include Round

Robin, an algorithm that selects the server with the lowest

number of connections and another one based on the IP

address of the client. The last method makes sure that each

IP address always connects to the same server. Additionally,

you can assign a weight to each server that defines how often

each server will be selected compared to the other servers.

You should not deal with HAProxy algorithms unless you

have performance problems.
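In haproxy.cfg terms, the algorithms described above map onto the balance directive, and weights are per-server options. The lines below are alternative fragments for illustration only (the server address is a placeholder):

```
balance roundrobin    # cycle through the servers in turn
balance leastconn     # prefer the server with the fewest connections
balance source        # hash the client IP: same client, same server
server mysql-1 192.0.2.11:3306 check weight 2   # selected twice as often
```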

New features

A handy new feature of the latest HAProxy version is that if

the configuration file given using the -f switch is a directory,

all files found in the directory will be loaded in alphabetical

order. Additionally, it has support for OpenSSL 1.1.0, performs

better than version 1.6.x and includes many bug fixes.

You can find more information about HAProxy at

http://www.haproxy.org whereas you can find the full

documentation of all the HAProxy versions at:

http://cbonte.github.io/haproxy-dconv. LXF

The hatop utility in action. hatop uses the HAProxy Unix socket and its output

looks similar to the output of the top utility.


GnuPG Exploring the trust model,

but who trusts the trustmen?

GnuPG: Know

who to trust

John Lane explores the convoluted world of PGP key validation.



John Lane

trusts that GnuPG

will trust him to

trust that key that

trusts the web of trust.


Not all keys are genuine. Anyone can fake a key. It’s your

decision what to trust.



You can make

signatures “local” to

prevent exporting

them. See “man

gpg” for “lsign”.

PGP means “Pretty Good Privacy”. Not perfect then,

but, perhaps, a pretty good way to encrypt messages

and other documents or to prove that they have not

been altered by anyone. If you can understand its so-called

“Web of Trust”, that is. Many Linux users will at some point

encounter the Gnu Privacy Guard (GnuPG), an

implementation of the OpenPGP standard, but getting the

most from it requires untangling that web.

You use someone’s key to send them an encrypted

message or document, or to verify the authenticity of a

signed message or document that they’ve sent you. But

anybody can create a key for any identity, so before using a

key, you need to be sure that whoever created it is who they

claim to be and that their identity is correctly represented.

Which is where it gets complicated. You can either do this

yourself or trust the efforts of other people.

OpenPGP defines a trust model called the Web of Trust

(WoT), which GnuPG usually uses. It also supports alternative

models including simple trust and Trust On First Use (TOFU),

but WoT is the core trust principle behind OpenPGP.

Unfortunately, it can be difficult to understand its model of

verification, trust and validity.

In this tutorial we will explore the dark art of key validation

and explain what it means to verify, sign and certify a key, or

to place trust in a key or its owner. Keys, in this context, are

public OpenPGP keys (we’ll assume some prior knowledge of

GnuPG and how it uses public-key asymmetric

cryptography). Verification, trust and validity affect public, not

private, keys, so when we refer to a key in this tutorial, we

mean the public key.

The best way to get someone’s key is by meeting them in

person so that they can personally transfer their key file to

you, perhaps by handing you a USB key or CD-ROM. That

way, you can be sure it’s authentic. In practice people may

use other methods such as publishing their key on their web

site or to a public key server. These are services on the

internet, some of which are web-based, that you can query to

obtain other people’s keys. However, because anyone can

create and upload keys, it is unwise to trust their authenticity

without verifying it first.

Elvis has left the building

To illustrate this point you can consult one of the OpenPGP

key servers available on the web. Point your browser to

https://sks-keyservers.net/i and search for “Elvis Aaron

Presley”. You’ll find some keys there despite the fact that their

purported owner passed away several years before the first

version of PGP saw the light of day. If you want something

more contemporary, try searching for “Linus Torvalds” – sure,

some of those will be genuine but others clearly aren’t,

although they serve as good examples of why you need to

trust any keys you use! You need to believe in a key’s

GnuPG Tutorial


You can set up a separate GnuPG configuration if

you want to follow our examples without

affecting your own GnuPG setup. You can either

append a --homedir” option to each gpg

command or set GNUPGHOME . Both point to

a directory where GnuPG should maintain its

configuration, including your sandbox key-ring.

Going with the latter option, here’s how to set up

a new sandbox:

$ export GNUPGHOME=~/sandbox.gpg

$ mkdir -m 700 $GNUPGHOME

$ gpg --gen-key

Remember you’ll need to unset

“GNUPGHOME” or use “--homedir=~/.gnupg” to

access your real configuration.

authenticity before trusting it, so the most basic trust

principle is verification. This is an investigative process where

you take the steps that you feel are appropriate to confirm a

key’s authenticity by verifying who owns it and proving their

identity. Notice how this is a subjective definition and,

therefore, personal to you. Individuals may take very different

approaches to this, from casual observation to insistence on

meeting in person and physically verifying formally issued

documents such as passports or other government-issued

identification (a driver’s licence being a popular choice).

You should only consider a key to be authentic when you

are certain that it belongs to its purported owner and you

know that its owner is who they claim to be.

You sign a key to certify that you consider it to be

authentic, and you may use it after doing so because GnuPG

trusts that you are happy with your own approach to

verification and, thus, accepts that keys you sign are valid.

We say “sign” a key but the certification is also bound to a

“UID” – a user id, which can be a name, email address and

comment triple or, perhaps, a photo. Certifications remain

valid until the key and/or associated UID value are revoked

even if other changes occur, but separate certifications are

needed for each UID.

If a key has multiple UID values then GnuPG will warn you

and ask whether you want to sign all of them at once. If you

don’t then you can select and certify them individually. The

examples that follow have one UID value.

GnuPG maintains your key-ring where it stores the keys

you collect and any certifications you apply to them. Your keyring

represents your personal Web of Trust.

To demonstrate certification, we’ll imagine receiving an

encrypted document from our friend Alice who likes helping

with examples. She would have used your public key to

encrypt it and you can decrypt it straight away (you don’t

even need her key).

$ gpg --decrypt document

But if she also signed it then you would need her key to

confirm the document is authentic, as gpg would tell you:

gpg: Can’t check signature: No public key

It will, however, verify the signature once the key is in your

key-ring even without you certifying it, but it will also warn you

about that:

gpg: Good signature from “Alice ”


gpg: WARNING: This key is not certified with a trusted signature!

gpg:          There is no indication that the signature belongs

to the owner.

The “unknown” presented in brackets is the validity that

GnuPG calculates for Alice’s key. It’s also displayed if you list

your copy of her key from your keyring:

$ gpg --list-key alice

pub rsa2048 2017-01-16 [SC] [expires: 2019-01-16]

UID [ unknown] Alice

sub rsa2048 2017-01-16 [E] [expires: 2019-01-16]

The calculated validity can be one of “unknown”, “full” or,

for your own key, “ultimate”. You should verify Alice’s key using

whatever approach you find acceptable and then certify its

authenticity by signing:

$ gpg --edit-key alice

gpg> sign

Quit the editor and save changes when prompted. If you

list the key again you’ll see that its calculated validity is now

reported as “full":

UID [ full ] Alice

and that will produce a good signature report:

$ gpg --decrypt document

gpg: Good signature from “Alice ” [full]

You can also record within the certification signature how

carefully you verified the key owner’s identity. If you add the

--ask-cert-level option to the gpg command, the signing

process will request that you choose a certification level from:

0 = I will not answer

1 = I have not checked at all.

2 = I have done casual checking.

3 = I have done very careful checking.

The OpenPGP standard calls these certification levels

“Generic”, “Persona”, “Casual” and “Positive” and they are

identified as levels 0 through 3 in GnuPG’s signature output. This

example shows a carefully checked certification ("3"):

$ gpg --list-sigs alice

UID [ full ] Alice

sig 3 814EE2DB21D58552 ...

How one interprets certification levels is subjective but the

OpenPGP specification (RFC4880, section 5.2.1) describes



You can use the

shortcut “gpg

--sign-key” instead

of “gpg --edit-key”

followed by “sign”

Trust signatures let you certify both authenticity and trust. Just be prepared

to answer some additional questions.




You may need to

explicitly trust

your own key after

importing it. Use

“gpg --edit-key”

to set your own

key’s owner trust to


them as follows:

Generic certification applies where the level of verification, if

any, is unknown or unspecified.

Persona certification is where no verification has been done.

Casual certification applies where some casual verification

has been done

Positive certification applies where substantial verification of

the claim of identity has been achieved.

GnuPG applies the generic level 0 certification unless

requested otherwise ( --ask-cert-level ) except for self-certifications

(signing one’s own key) where positive level 3

certification is used. It normally ignores persona certification

when calculating a key’s validity but you can use its --min-cert-level

option should you desire to change this behaviour.

Having certified someone’s key, it is customary to export a

copy and send it to them ( --armour exports printable

characters (a base-64 encoding) that are easy to email):

$ gpg --armour --export alice | mail alice@example.org

That person may, at their discretion, upload that copy to a

public key server. If they do this then anyone else using that

key can also see your certification and they can decide to

trust it rather than verifying the key’s authenticity themselves.

Bear in mind that it is considered rude to upload someone

else’s key to a key server – you should send keys you certify

to their owners and let them decide whether to publicly

accept your certification.

Who you gonna trust?

Although you can certify a key, our example illustrates that it

is GnuPG, not you, that decides whether a key is valid. It does

so by accepting that you trust that whoever certified it

verified its authenticity in a manner acceptable to you. That’s

all trust means in the PGP world – your trust in someone’s

approach to verification of others’ identities; it doesn’t

necessarily mean you’d entrust them with your life savings!

So far, we only have the trust inherent in our own key, so,

beyond that, the only valid keys in our key ring are those we

You don’t have to use the command line to certify and trust keys. GUI tools

such as the Enigmail plugin for Thunderbird offer a nice pointy-clicky experience.

certify. But we can extend trust to keys in our key ring whose

owners also perform verification in an acceptable manner,

which will allow GnuPG to also validate keys certified by them.

Like your attitude towards verification, such trust is also a

subjective and personal value; you decide how you trust –

here are some examples:

I trust my own key

I trust your key like my own

I trust your key but not on its own

I do not trust your key

You can convey such trust in two ways: “owner trust” – the

original Classic Trust Model – or, with the newer PGP Trust

Model, “trust signatures”. You explicitly assign owner trust to a

key separately from any certification and it is stored in your

private trust database, whereas trust signatures are

certifications that assert trust in addition to authenticity.

Being certifications, they are stored in your keyring and are

included when a key is exported and published to key servers;

trust signatures can therefore be made public.
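Because owner trust lives in the private trust database rather than the keyring, it is worth backing up separately; GnuPG has a pair of options for exactly this. A sketch, run against a throwaway GNUPGHOME so your real setup is untouched:

```shell
# Export owner-trust values to a text file, and re-import them later.
# A temporary GNUPGHOME keeps this demo away from your real keyring.
export GNUPGHOME=$(mktemp -d)
gpg --export-ownertrust > otrust.txt
gpg --import-ownertrust < otrust.txt
```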

The given examples are known as Trust Levels; they are

“ultimate”, “full”, “marginal”, “never” and, if you have not

assigned a level, “unknown”. If you feel unable to trust a key

but don’t want to go as far as denying it then there is another

level, “undefined”, that you can use.

The key editing mode of the GnuPG command-line tool is

also used to assign owner trust. If you had Alice’s key in your

keyring and wanted to trust it:

$ gpg --edit-key alice

gpg> trust

GnuPG then offers five options for your trust selection:

“I don’t know or won’t say” -> “undefined”

“I do NOT trust” -> “never”

“I trust marginally” -> “marginal”

“I trust fully” -> “full”

“I trust ultimately” -> “ultimate”

If you chose the fourth option to assign full trust, assuming

Alice certified Blake’s key, that would also be valid for you:

$ gpg --list-sigs blake

UID [ full ] Blake

You consider Alice an “introducer” of Blake’s key because

she certified it. By trusting Alice to verify that key you would

also consider her to be a “trusted introducer”.

Question time...

Use tsign instead of sign if you want to make a trust

signature. GnuPG will extend the signing process with some

additional trust questions:

$ gpg --edit-key alice

gpg> tsign

The first such question asks how you trust the key owner

to verify others’ keys in a similar way to the owner trust

question previously described, except that your options are

limited to “marginal” or “full”.

The second question is about “Delegated Trust” which

allows you to trust-sign a key so that any keys that it signs are

also trusted (eg, I trust you and anyone that you trust, but not

those they trust). Such a trust signature has two levels of

delegated trust: trust in the key you sign plus one further

connection beyond it. Trust signatures may be given up to five

levels of delegated trust, where one level of delegated trust is

equivalent to owner trust.

If you trust us, why not… subscribe now at http://bit.ly/LinuxFormat

82 LXF224 June 2017 www.linuxformat.com

GnuPG Tutorial

Is it a key or certificate?

An OpenPGP certificate contains a

public key, one or more user identities

and one or more public subkeys.

However, certificates are commonly but

mistakenly referred to as keys, like the

key servers that really serve certificates.

The correct terms are described in

RFC4880 and in the PGP book An

Introduction to Cryptography which you

can find at http://bit.ly/2n6Lwmgf.

This misrepresentation is also discussed

at http://bit.ly/2nzn7RD.

For this tutorial we’ve kept with

common parlance and used “key”

throughout the text.

As with owner trust, keys signed with trust signatures

having one level of delegated trust are trusted introducers.

Keys signed with two levels of delegated trust are called

“Meta Introducers” and those at level three are known as

“Meta-meta Introducers”.

The third and final question asks you to enter a domain to

restrict the signature. You can leave this blank (no restriction)

or enter a domain (such as example.com) to limit the

delegated trust to.

Another subtle difference between owner trust and a trust

signature is that you certify a UID whereas owner trust is

applied to a key. Owner trust can be applied to a key that has

a trust signature but only to upgrade its trust level.

However you assign trust (you can use either owner trust,

trust signatures or a mixture of both), GnuPG uses it to

determine a key’s validity and maintains it in your trust

database. A key is “Fully Valid” if it is certified by at least one

key with either ultimate or full trust, or by three keys with

marginal trust. Those are the standard parameters but you

can change them by adding entries to your GnuPG

configuration file.
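As an illustration, these are the relevant GnuPG options; the option names are standard, but the values shown here are just examples and not ones we recommend. They go in ~/.gnupg/gpg.conf:

```
# Require two fully trusted certifications instead of the default one
completes-needed 2
# Require five marginally trusted certifications instead of the default three
marginals-needed 5
# How many levels of trust signatures to follow (the default is 5)
max-cert-depth 5
```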

GnuPG automatically maintains your trust database, but

you can request an immediate update and this is a good way

to see a summary of it:

$ gpg --update-trustdb

gpg: marginals needed: 3 completes needed: 1 trust model: pgp
gpg: depth: 0 valid: 1 signed: 1 trust: 0-, 0q, 0n, 0m, 0f, 1u

gpg: depth: 1 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 1f, 0u

gpg: next trustdb check due at 2019-01-16

The first line confirms the parameters described in the

previous paragraph and that the PGP trust model is in use.

The following lines summarise your web of trust. The

“depth” represents steps away from you; there is one key at

depth 0 – your own – which is valid, signed and has ultimate

trust. The trust values are counts of keys with unknown (q),

no (n), marginal (m), full (f) or ultimate (u) trust.

Trust the Web of Trust?

One of the defining properties of the Web of Trust is that keys

and their certifications are publicised, typically by uploading

them to public key servers. Keys contain personal data

(names, email addresses, photos and whatever is written in

comment fields) that you may not want on a public server.

Also, the certifications on your key leak information about

your social graph – those you know and what your interests

and associations are. And if you include certification levels

when you sign a key then your approach to certification is

also revealed to anyone data-mining the key servers.

Key servers also have an important property that you

shouldn’t overlook – keys cannot be deleted from them! You

can revoke keys or certifications but they still leave an

indelible footprint. So you should think carefully before

uploading your key to a key server; this also helps explain why

it is completely wrong to upload somebody else’s key.

An important part of the trust process is to also protect

yourself, so you should consider both positive and negative

aspects of the web of trust to reach a balanced view of their

benefits to your workflow and the cost in terms of privacy.

Even if you decide against using key servers you can still

maintain a private web of trust, privately distributing keys

between your contacts and certifying them discreetly. If you

want to certify a key for your own use without ever exporting

it then you can use the lsign (“local sign”) variant of the key

signing command that we used in our examples. It works the

same way but prevents you accidentally exporting

certifications that you’d rather retain privately.
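As a sketch, assuming a key for bob@example.com is already in your keyring, the local-only certification can be done directly from the command line:

```
$ gpg --lsign-key bob@example.com
```

You can also enter key editing mode with gpg --edit-key and use lsign at the gpg> prompt, just as with sign.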

But, on the other hand, should you wish to build your web

of trust publicly then upload your key to a key server such as

https://sks-keyservers.net. Key servers synchronise to

others, propagating uploaded keys among them. You may

also like to advertise your willingness to sign others’ keys

(and have them certify yours) by using sites such as:


In summary…

For those times when you aren’t sure if it’s verify or validate...

The “owner” of an OpenPGP key has the identity stated

within it; a name, email or photo.

A key is “authentic” if the stated owner actually owns it.

You “verify” a key to prove its authenticity.

You “sign” a key to “certify” that you have verified it.

You call a signature on a key a “certification” to distinguish it

from a document signature made with a key.

You can “trust” a key’s owner and accept their certifications,

either on their own (“full”) or with others (“marginal” trust).

A key having sufficiently trusted certifications (one full or

three marginal) is “valid”.

You can “delegate” trust decisions to the owner of a valid key

that you trust. LXF



The trust database updates automatically, so you can safely “quit” the “gpg --edit-key” session after setting owner trust.

Next issue:


your IoT

You can visualise a web of trust using the dot utility and a script “sig2dot”

(see http://www.chaosreigns.com/code/sig2dot). This example shows part of

the Debian keyring’s web of trust.



Routers Discover how to create rules

for routing on Linux from scratch.

LXF server:

Route traffic

Afnan Rehman delves into the world of networking to find out how to turn

a Linux distribution into a fully functioning router.





Afnan Rehman is a student, Linux tinkerer and computer geek who breaks things first so you don’t have to.

In the interfaces file you will see lines already present.

Most likely they deal with the loopback interface and

should be left alone. Add your changes below these.



You may notice

a connection

separate from your

wired connections.

If you have a

wireless card or

functionality built

into your machine,

make sure to

disable it.

It’s time to take charge of your network. As individuals in

the unfortunate situation of being both attracted to

technology and prone to having it fail on us, we’ve gone

through our fair share of consumer routers. Hunting for this

year’s replacement we stumbled on a new idea, that you

could build your own using Linux, with full control of the

functionality and settings. What a novel idea! We immediately

retrieved a PC from the LXF dungeon and set to work.

The idea of building a router is not completely new, but is

growing in popularity among tech enthusiasts as a way to

squeeze every last bit of performance out of your routing

configuration while also maintaining full control in an era of

cut-down app-controlled consumer products. The reasoning

for building your own homebrew solution is that it makes the

typical home or small office network completely your own.

You control every aspect of the functionality from routing to

IP tables to NAT and DHCP services. You can even add other

functions to the router to control certain types of traffic,

speeds, and how devices are prioritised.

This tutorial will serve as a basic how-to on setting up a

functioning router and giving you a platform on which you can

expand further and take it as far as you’d like.

First, let’s discuss the main components of any router

intended to facilitate a home network. The router of today is

often a bundle of many different components designed as a

complete solution in one box. The ones you see on shop

shelves typically have the actual router hardware, a network

switch (the network ports you see on the back) and a wireless

access point, which enables a wireless signal to connect all

your wireless devices. Often these consumer routers use only

the hardware necessary, and have a low storage capacity,

RAM, and processing power. These small compromises can

cause bottlenecks in your network, especially when you are

using higher speeds from your internet service provider such

as a 100Mbps connection or higher. The ones that perform

better often cost you an arm and a leg. The solution built in

this tutorial only contains the core components of a router,

not including a switch or wireless access point. However, you

can add these separately.

Small is beautiful

Before we get down to the nitty-gritty of setup, let’s take a

moment to talk about hardware. Some of you may be

wondering exactly what kind of hardware is necessary to

create a functioning router. Some of you may also be

wondering who in their right mind would use a full sized

desktop tower to function as a router. For those of you not

keen on the idea of displaying your old hardware in the middle

of your study, fear not. The wonderful thing about modern

technology is how it grows ever smaller and sleeker. This also

applies to the personal computer.

Troubleshooting internet connection

Once you’ve rebooted after editing your network interfaces file the first time, you may notice that your internet connection has gone bad and you are no longer receiving an active connection. You may notice a message over the network interface saying “device not managed”. This is because you altered the network configuration in /etc/network/interfaces for your two Ethernet ports. You can fix this by editing the network manager configuration file. You can open the conf file and change the line that says managed=false to true . Save the file and restart the network manager with the following:

$ sudo service network-manager restart

Alternatively, I recommend setting up a wireless connection here, or just making sure you have everything downloaded beforehand. If you need to reverse this, simply roll back the changes in the interfaces file and restart the computer. If you are experiencing other issues, be sure to check the Ubuntu documentation at https://
The official Ubuntu documentation provides a lot of good information on not only how to fix problems, but how to edit configurations to add additional functionality such as logging.

For this project, we used a full size desktop tower to test the idea, then moved everything

over to a mini PC with dual Gigabit NIC about the same size

as a consumer router for actual production use.

The hardware used for the build included a PC that was

lying around rocking an Intel Celeron N3150 CPU with 4GB

DDR3 RAM and a 64GB SSD. Is this overkill? Absolutely. Is

this the cheapest system you can get to set this up? Probably

not. You can certainly cut corners here using a smaller SSD, a

spinning hard drive, or even a SD card to house the operating

system, and you can certainly cut down on the amount of

RAM. The processor can also be slower depending on what

you want. We simply had these components on hand and

frankly wanted top notch performance as well.

Most importantly, you must have at least two Ethernet

ports, preferably Gigabit speed. The reason for this is simple:

you need one port for a WAN connection (incoming from the

internet), and one for LAN (outgoing traffic to local network).

The LAN port can be connected to a switch to facilitate the

use of multiple wired devices.

Now let’s talk about the operating system. We’re using

Ubuntu Desktop to demonstrate this concept in a simple

manner and most of the work will be done in the command

line. Linux in general is built with routing in mind, making it a

natural choice. As such the instructions provided here can be

adapted to almost any common Linux distro. In a lower-spec

system it may be wiser to use a minimal install such as base

Ubuntu Server or CentOS Minimal to minimise the overhead

taken up by the operating system, reserving your processing

power for the actual routing.

Setting up

The first step is of course to install Ubuntu or your distro of

choice. This is quite simple and there are plenty of guides

online. Whatever you end up using, we recommend you make

sure it has long term support, such as the Ubuntu LTS

version. This will ensure that there will be continued security

updates for the foreseeable future, which is important for a

router that you may be using for a few years.

The script is in a new file and is only two lines long. This

script simply refreshes the interface and saves us some

time in restarting the system.

The iptables file starts out empty and we will add several lines. Make sure to

add comments for clarity in case you ever need to revisit it.

The first thing you want to do once you log in is find out

which network interface is which. You might want to grab a

pen and paper to keep track. The screen should show a

couple of network connections, and one marked “lo” for the

loopback which we won’t worry about. Ours are labelled

“enp2s0” and “enp3s0” and are both Gigabit Ethernet

connections. Your hardware may vary, and the interface name

may vary from what I have. Be sure to record these names as

you will be using them throughout this tutorial.
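One quick way to list the interface names (a sketch; the ip command comes from the iproute2 package, which ships with Ubuntu, and the sysfs fallback works everywhere):

```shell
# Show each interface briefly; fall back to sysfs if iproute2 is missing
ip -br link show 2>/dev/null || ls /sys/class/net
```

Look for the two Ethernet entries (ours were enp2s0 and enp3s0) alongside lo.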

The next step is to configure your network interfaces now

that you know which one is which. Type the following

command into your console to open the editor:

$ sudo nano /etc/network/interfaces

You’ll be greeted with a configuration file that already has

a couple of lines in it regarding the loopback interface. Leave

those lines alone and type the following underneath them:

# The WAN interface, above the USB port

auto enp3s0

iface enp3s0 inet dhcp

# The LAN interface, above the HDMI port

auto enp2s0

iface enp2s0 inet static



As you can see we have configured both our WAN

(incoming) port and our LAN (outgoing) port. I also labelled

them with comments so that I know which is which. This will

become very helpful later when we are using these interfaces

to write our rules for routing.

If you run into an error here, make sure you have a root user enabled in Ubuntu, as this will often fix the problem.

Remember that your interface names and IP addresses are likely very different from what we typed out, so change per the specifics of your machine and network.

The LAN port is configured with a static IP address that should correspond to that of your current router. The netmask can also be determined by

looking at the settings of your current router. Both may be

different from what’s listed above depending on your network,

so make sure to double check. The WAN interface is

configured with DCHP from your internet provider so we

simply write the line above and leave it as is. Once you’re

done, save the file and reboot.

Next you will want to edit the file /etc/sysctl.conf and

uncomment (by deleting the “#” symbol) the line that says

net.ipv4.ip_forward=1 .

This will allow packet forwarding for all network interfaces,

which is essential to forward packets between your WAN and

LAN networks. Save this change and run sudo sysctl -p to

refresh the configuration.
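If you prefer to script the edit, here is a sketch that practises the change on a scratch copy first (the file name and contents below are a stand-in for the real /etc/sysctl.conf):

```shell
# Create a scratch file with the commented-out line, as shipped by default
printf '# demo sysctl file\n#net.ipv4.ip_forward=1\n' > /tmp/sysctl-demo.conf
# Remove the leading '#' from the forwarding line, as you would in the editor
sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /tmp/sysctl-demo.conf
grep ip_forward /tmp/sysctl-demo.conf   # → net.ipv4.ip_forward=1
```

Once you’re happy, apply the same sed line to the real file with sudo, then run sudo sysctl -p as above.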

Time for tables

Now we get to the meat and potatoes of this tutorial. We are

now going to set up iptables! Iptables has long been the most widely used Linux firewall, and we will use it here to sort and limit incoming and outgoing traffic, which will be essential

if we are going to connect to the internet or any other device

for that matter. The first thing we will take care of is setting up

rules for packet forwarding that are applied before the

network interfaces are started, which will ensure that if we

ever restart the router, packets will immediately be forwarded.

First, we will install iptables-persistent, which is a package

that will allow iptables rules to remain after any reboots. Run

the following command to install it:

$ sudo apt-get install -y iptables-persistent netfilter-persistent

Once that’s completed, let’s set up a startup script to tell

the operating system to run the iptables ruleset before the

network interfaces become available, so that the router never

goes online or accesses the internet without the protection of

the iptables ruleset. Create the script using the command:

$ sudo nano /etc/network/if-pre-up.d/iptables

Populate the script file with the following two lines:


#!/bin/sh
/sbin/iptables-restore < /etc/network/iptables

Now, save the file and run the following commands in the

command line in the order given:

$ sudo chown root /etc/network/if-pre-up.d/iptables

$ sudo chmod 755 /etc/network/if-pre-up.d/iptables

The first tells the system that the script is owned by root

and the second tells the system it is writeable by root and

readable/executable by everybody.

Now we will create the iptables by creating a file in /etc/

network/iptables with your preferred editor. Populate it with

the following lines to start out:






#enp3s0 is WAN interface and enp2s0 is LAN interface



The sysctl file is where we will uncomment a line allowing port forwarding. This

file has many settings, so be sure to only uncomment the proper line.





# Service rules


# Forwarding rules
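The listing above lost its table markers in print. Purely as an illustration, a minimal reconstruction might look like the following — the default policies and the MASQUERADE rule are our assumptions (masquerading on the WAN interface is the usual way to enable NAT on a dynamic address), while the table names, COMMIT markers and comments follow standard iptables-restore syntax:

```
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
#enp3s0 is WAN interface and enp2s0 is LAN interface
-A POSTROUTING -o enp3s0 -j MASQUERADE
COMMIT

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Service rules
# Forwarding rules
COMMIT
```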



What we have done here is create a basic skeleton which

includes “nat” and “filter” categories, each ending with the

word “commit”. One important thing this initial ruleset does is

enable NAT, or Network Address Translation. NAT handles

address translation between the local addresses on your local network, and addresses on the other side of the router.

Where to go from here…

Now that you have a finished router, you may be itching to go even further. There are limitless possibilities for what to do now. The interesting thing about building your own router is that you are now in charge of both your hardware and software, and you will have resources to spare to add functionality such as ad-blocking software, and have it all in one box. If you wish to add extra functionality, there are many tutorials on the internet to help you on your journey. You can also work on adding new hardware, such as network expansion cards, additional storage and RAM, even adding additional operating systems and programs to make a multifunctional server system. You can purchase wireless access points and network switches to connect all your wired and wireless devices to the router and ultimately to the internet. What you do here is completely up to you. We also recommend taking steps to ensure that your router functions well and for as long as you need it. Simple maintenance tasks such as dusting it every six months or so will reduce noise and heat generation, greatly increasing the lifespan of your hardware and allowing you to reap the benefits of a high performing and heavily customisable router for years to come.

This

makes sure the router knows where to send a packet of data

coming in from outside, and send it to the proper client device

on the local network.

We’re not quite ready to go online yet. We want to also

make sure the router can hand out IP addresses to clients

just like a consumer router would. This part is very easy. First,

we will install a DHCP server package:

$ sudo apt-get install isc-dhcp-server

Next, open the /etc/dhcp/dhcpd.conf configuration file

and add the following clause to set parameters for router

address, client IP range, and broadcast address:

subnet netmask {


option routers;

option domain-name-servers;

option broadcast-address;
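The addresses were stripped from the listing above, so purely as an illustration — every address here is a placeholder, substitute your own network’s values — a complete clause might read:

```
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;
    option routers 192.168.10.1;
    option domain-name-servers 192.168.10.1;
    option broadcast-address 192.168.10.255;
}
```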


Of course, be sure to change these specific addresses

based on your network situation. You can set whichever

parameters for client IP addresses you wish, and the range

can be as large or small as you want. To apply the

configurations we just made run the following command:

$ sudo /etc/init.d/isc-dhcp-server restart

We are still missing a local DNS; however, this is even

easier to acquire. Simply run the following command, no

configuration will be necessary:

$ sudo apt-get install bind9

Loosening up

At this point all the basics are there, and our router is now

able to handle DNS queries, give IP addresses to clients, and

forward traffic. However our rules are currently so extremely

strict that it will refuse to do any of this. What we will do now

is add several rules to the ruleset to specify what traffic goes

out to the internet, what can go into the local network from

the internet, and rules for port forwarding.

So we’ll go back to editing /etc/network/iptables and

start with creating a service ruleset, forwarding rules, and

NAT prerouting. Our complete ruleset is shown below:






# enp3s0 is WAN and enp2s0 is LAN


# NAT pinhole: HTTP from WAN to LAN

-A PREROUTING -p tcp -m tcp -i enp3s0 --dport 80 -j DNAT







# Service rules

# basic accept rules

-A INPUT -s -d -i lo -j ACCEPT

-A INPUT -p icmp -j ACCEPT

-A INPUT -m state --state ESTABLISHED -j ACCEPT

# enable traceroute reject

-A INPUT -p udp -m udp --dport 33434:33523 -j REJECT

--reject-with icmp-port-unreachable

The DHCP configuration file has a decent amount of content already. Add your

configuration lines at the very end, taking care not to edit anything else.


-A INPUT -i enp2s0 -p tcp --dport 53 -j ACCEPT

-A INPUT -i enp2s0 -p udp --dport 53 -j ACCEPT


-A INPUT -i enp2s0 -p tcp --dport 22 -j ACCEPT

# DHCP client requests - accept from LAN

-A INPUT -i enp2s0 -p udp --dport 67:68 -j ACCEPT

# drop all other inbound traffic


# Forwarding rules

# forward packets along related connections

-A FORWARD -m conntrack --ctstate


# forward from LAN to WAN

-A FORWARD -i enp2s0 -o enp3s0 -j ACCEPT

# allow traffic from our NAT pinhole

-A FORWARD -p tcp -d --dport 80 -j ACCEPT

# drop all other forwarded traffic
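Several addresses and rule endings were stripped from the listing above. As a hedged reconstruction — the loopback addresses, the conntrack states, the DROP lines and the DNAT target are our assumptions based on the comments, and 192.168.10.50 is a placeholder for your own machine’s address — the elided pieces would look something like:

```
# NAT pinhole: HTTP from WAN to LAN
-A PREROUTING -p tcp -m tcp -i enp3s0 --dport 80 -j DNAT --to-destination 192.168.10.50

# basic accept rules (loopback only talks to itself)
-A INPUT -s 127.0.0.0/8 -d 127.0.0.0/8 -i lo -j ACCEPT
# drop all other inbound traffic
-A INPUT -j DROP

# forward packets along established/related connections
-A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# allow traffic from our NAT pinhole
-A FORWARD -p tcp -d 192.168.10.50 --dport 80 -j ACCEPT
# drop all other forwarded traffic
-A FORWARD -j DROP
COMMIT
```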



The service ruleset is under the filter area and will make

rules for what the router can accept and what it can forward

to the local network. Here we also allow SSH access so that

once the router is configured we can remote-in to make

changes rather than plug in a monitor and keyboard.

The part labelled “forwarding rules” instructs the router to

forward traffic to the LAN and from the LAN to the WAN for

outgoing traffic. In addition, we add a line to the NAT section

to create a NAT pinhole instructing the router to forward any

arbitrary traffic from the internet to the local machine at the

specified address. Ensure that the PREROUTING and the

FORWARDING rules are there and in the right places.

To wrap up, restart your iptables by running the following:

$ sudo /etc/network/if-pre-up.d/iptables

Once you’ve done that, you should be good to go! Enjoy

your new and improved routing experience! LXF



Django Framework

Django: Explore

the framework

Thomas Rumbold walks us through the basics of the Django Framework

and Daniel Samuels shows us how to get started with our first lines of code.





Thomas Rumbold has a background in web application development and technical writing. These days he’s a Director at a web design agency.




Use virtual environments to isolate your dependencies in projects. You can run pip install virtualenv , then virtualenv .venv and . .venv/bin/activate to start.

A web framework (or web application framework) is a

software framework that is designed to support the

development of websites and online software

applications – including web services, web resources and web

APIs. Web frameworks aim to alleviate the overhead

associated with common activities performed in web

development by providing a series of tools designed to

simplify and accelerate the technical development process

for web developers.

So, what is Django, and what is it good for? Simply put,

Django is a Python web framework designed to enable

website and web application developers to build and deliver

complex projects quickly, securely, and to a consistent, ‘clean’

style. It’s got a series of out-of-the-box tools that support,

enhance and speed up the traditional web development

process, and which help get otherwise complicated work

done rapidly to a very high standard.

Django provides a series of software tools that take care of

a lot of the non-project specific grunt work for a developer

straight out of the box. That means that developers can

spend more of their time writing the important stuff – like

actual features – while lots of the mundane, standard stuff

that needs to happen in the background to serve a web

application is handled automatically.

“Have I heard of anything that’s built on Django?” you may

be wondering to yourself. Yes, you probably have. Pinterest,

the image collating website is, under the bonnet, actually a

very high traffic, distributed Django system. Instagram’s web

application is also written on Django. The Washington Post,

The Onion, and NASA also use Django. You’re probably

sensing a theme here. These guys are in charge of

complicated, very high-traffic, large scale systems – and

they’re all doing it pretty well (or appear to be, anyway).

Bitbucket and EventBrite also run on Django (and in fact, the

current EventBrite team includes a number of Django’s core developers).

So, what does it look like on the technical side? There’s an

awful lot to Django – lots of pretty brilliant engineering under

the hood, and subsequently lots of great features to use.

Covering everything Django can do would be impossible for a

single article (and that’s what the documentation is for) – but

if you’re interested in a more general technical evaluation of

how it works, you’ve generally got the following five main

components to deal with.

Apps
Building features in Django typically means you’ve got to put

together an app, which is designed to do a single thing, and

do it very well. An app in Django is a self-contained code

structure that is made up of a number of different files that

usually contain everything it needs to get something done –

as well as any references to other apps, if you need to share

data between them. An app is made of a series of base files.

You can also create new ones and import them where you

think they’re necessary – but for building simple applications,

the base project structure is ample.

Models
Models in Django are a mechanism to enable developers to

interact with a database (doing things like creating, reading,

updating and deleting) without directly touching the database

layer of the stack with any of their own code. Models are what

are referred to in the software world as an abstraction layer

between the database and application layer – and in Django’s

specific implementation, it means you almost never have to

write any raw SQL – because your model is a fully accessible

Python representation of your database table. That means

that instead of writing a raw query, you can simply import and

call elements from your model in Python, without having to

go any lower down than the application layer.



What do each of these files do?

There are four files that are generated when

you run startproject , they are manage.py,

settings.py, urls.py and wsgi.py.

The manage.py file is a replacement for the

django-admin command we used earlier, it’s

your main entry point into your application and

it’s what we use to run any management

commands related to our project, for example

creating an admin user. It’s very rare that you’ll

need to edit this file.

The settings.py file contains all of the settings

needed for our project, such as the name of the

project, the URL it will live under and so forth. We

won’t be editing it as part of this process, but it’s

worth having a look through just to see what’s in

there anyway.

The file urls.py is the base routing file for all

requests which come into Django. It handles

mapping the various URLs to their counterpart

views. We’ll be editing this later on. The wsgi.py

file is used when you deploy your project to a

server. It enables a WSGI HTTP server such as

Gunicorn to mount your application and pass

HTTP requests through.

It’s a brilliant way to keep code clean, well structured and

reusable – and of limiting any interactions that could

potentially be dangerous or inconsistent. It also means that a

lot of the SQL specific stuff (such as validation) can be kept

and handled at the model level, which makes it very easy to

scan a model file and understand how it corresponds to the

tables in your database, thanks to Django’s heavy lifting

behind the scenes.

Views
After creating a model, creating your database and then

storing some data, you’re probably going to want to do

something with it. Serving data out to a HTML template for

the user to see requires what is called a “View” – which is a

file designed to process and deliver data to other parts of the

web application. Views are powerful because they’re just

Python files – anything you can do in Python you can do

within a view – which means you can access, cut up and

stitch together data in pretty much any way you like. Django

has a whole load of tools to help developers do it rapidly, too.

Importantly, Django’s Generic Class Based Views are a

powerful time saver designed to stop you rewriting the same

views over and over again for different projects.

URLs
Good URL schemas underpin any good web information

architecture, and Django is designed to help build simple,

dynamic URL systems with ease. Each app usually has an

associated urls.py file – designed to allow you to define a

collection of routes which Django will test the website visitor’s

current URL against. If there’s a match, then Django will call

that URL’s associated view and execute any logic associated

with it, returning the result to the appropriate template, and

the user to the appropriate URL. This in-built URL handling

system is an exceptionally powerful way to architect large,

dynamic URL schemas, with very little overhead.

Templates
Templates are the parts of a website or web application that

are served as either individual HTML pages, or components of

other HTML pages to website visitors. They’re written in

HTML, and their styling is dictated by CSS – but Django’s own

templating language means that you can call database

objects out with ease. They can either be fed data from a

corresponding view file – or they can feed data to a view, by

way of taking input via a form.

Django’s templating syntax is simple and effective – but a

good rule of thumb is to keep as much ‘logic’ away from

templates as you can, to keep the code clean and

understandable. Any complex processing or filtering should

be done either at the model or view level (or potentially, in a

template tag ) – so when it lands in your actual template, the

processing is already done and the front end work is as

simple as it can be. Django has a lot of cool, built-in filtering

tools that help you get work done with your data – they’re

worth taking a look at.
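As a flavour of what a template filter does, here is a pure-Python sketch of the behaviour of something like truncatewords (used in a template as {{ post.content|truncatewords:30 }}). This mimics the idea only; it is not Django's actual implementation:

```python
# Sketch of a truncatewords-style filter: keep the first n words of a
# string and mark the cut with an ellipsis, leaving short strings alone.
def truncatewords(text, n):
    words = text.split()
    if len(words) <= n:
        return text
    return ' '.join(words[:n]) + ' ...'
```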


Django’s automatic administration system is hugely useful –

mostly because of the amount of time it saves. It works like

this: any app that you construct has its own corresponding

admin file. Importing the relevant database models into that

file, and then dictating how you want those fields laid out

visually is all you need to do to create an administration

system – because Django will take those components and

make them accessible for an administrator (by default, at the

/admin URL). The beauty of this is that unless you want

some advanced, custom features, there’s really nothing you

need to do. You create some very small admin files and lo and

behold – an administration system. How long would that have

taken to write on your own?

Get started

The Django community is a very active community of highly

capable developers, and there are a lot of resources available

to get started with. To demonstrate the simplicity of Django,

we’re going to make a simple blogging app. (The final source

code is available here: https://github.com/danielsamuels/


To begin, you’ll need to have Python installed. It comes

pre-installed on OS X and Linux-based systems, so if you’re



Python comes with a useful program for installing packages; it’s called pip. You can install Django by running pip install Django . If you want to share your requirements you can run pip freeze > requirements.txt and someone else can run pip install -r requirements.txt to install them.

This is the default project structure for a Django project; it’s the template that Django itself provides for you to use. It’s a great starting point.

Improve your code skills Subscribe now at http://bit.ly/LinuxFormat


June 2017 LXF224 89

Django Framework



Django projects

work best when

they’re split into


chunks, so consider

splitting different

parts of your

project into their

own apps, then split

the individual

components of the

app into their own

file. It’s much easier

to maintain.

using either of those, you should already be set. If you’re on

Windows you’ll need to go to the Python website to download

and install it. Before we install Django, we’re going to create

something called a “virtual environment”. A virtual

environment allows us to isolate the dependencies of a

project from the rest of your system so you don’t end up with

multiple versions of the same library trying to be loaded at

the same time. To start using a virtual environment under

Python 2 you’ll need to first install the package: pip install

virtualenv . If you’re using Python 3, it comes with a virtual

environment system built-in, so we’ll make use of that.

To create the virtual environment on Python 2 run

virtualenv .venv , to create one on Python 3 use python3 -m

venv .venv . Once the environment has been created it needs

to be activated before it can be used. You can do this by

running . .venv/bin/activate .

Now we’re all set and ready to install Django without it

affecting anything else, so simply run pip install Django . The

latest version will be downloaded and installed. To start using

Django we’ll first need to create ourselves a project. Django

provides a base template, so we’ll make use of that. Run

django-admin startproject blog . If you view this directory

you’ll see that a few new files have been created; these will

form the basis of our project.

Now that we have the first files for our project, we’ll start

building it out. To keep our workspace clean we’re going to

create our application files within a folder in our project,

rather than alongside everything else, so go ahead and create

a folder called blog. You’ll also need to create an empty file

named __init__.py in this folder so that Python knows it’s a

loadable module. Once we have this folder we’ll need to

create a models.py file inside it which will contain our

representation of a blog post. We’ll keep it simple and only

When you run the development server for the first time and navigate to http://127.0.0.1:8000, this is the page you’ll be greeted with.

include the title and content for now.

from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=100)
    content = models.TextField()

Then we’ll need the individual views which take care of

receiving the request, working out what needs to be done and

then returning a web page. For this we’ll make use of Django’s

Generic Class Based Views which provide a simple way to

perform common tasks. Create a views.py in your blog folder

with these contents:

from django.core.urlresolvers import reverse
from django.views.generic import CreateView, DeleteView, DetailView, ListView, UpdateView

from .models import Post

class PostListing(ListView):
    model = Post

class PostCreate(CreateView):
    model = Post
    success_url = '/'
    fields = ['title', 'content']

class PostDetail(DetailView):
    model = Post

class PostUpdate(UpdateView):
    model = Post
    fields = ['title', 'content']

    def get_success_url(self):
        return reverse('blog:detail', kwargs={
            'pk': self.object.pk,
        })

class PostDelete(DeleteView):
    model = Post
    success_url = '/'

The last file we’ll need in our app folder is urls.py. This will

map the URL you visit in your browser to the relevant view in

the views.py file. We’re going to use some regular expressions

to match values within the URL, but all they’re going to do is

look for a number and pass it into the view as a “pk” value –

that is, a primary key – which will be used to find the blog

post in the database.

from django.conf.urls import url

from .views import PostCreate, PostDelete, PostDetail, PostListing, PostUpdate

urlpatterns = [
    url(r'^$', PostListing.as_view(), name='listing'),
    url(r'^create/$', PostCreate.as_view(), name='create'),
    url(r'^(?P<pk>\d+)/$', PostDetail.as_view(), name='detail'),
    url(r'^(?P<pk>\d+)/update/$', PostUpdate.as_view(), name='update'),

Abstracting away complexity

The example code (shown above) demonstrates

how easy Django makes it to develop

applications. With less than 30 lines of view

code, you’re able to take care of listing news

articles, creating new articles, viewing them,

updating them and deleting them. This is made

possible by the amount of time the Django
developers have spent creating a collection of
generic views that are both useful and
unopinionated. They deliver all of the

functionality you require without getting in your

way – and if you don’t like the way they’re doing

something, you can just override or extend them

with your own methods. This level of thinking extends

across the entire spectrum of Django features,

with simplicity and security being the default.

Knowing that the framework is doing all of the

heavy lifting for you leaves you to take care of

the application logic, making you more

productive and able to deliver projects in a more

timely manner.





    url(r'^(?P<pk>\d+)/delete/$', PostDelete.as_view(), name='delete'),
]

There are a few small things we’ll need to do before our

app is integrated into our project. First we need to add the

app to the list of installed apps in the settings. So open up

settings.py and have a look for INSTALLED_APPS. Add

another line with the value blog.blog (“blog” being the name

of the project folder and also the name of the app folder). The

last thing we’ll need to do is add the blog URLs to the project

URLs. Open up the urls.py in the root of your project and add

this line underneath the admin line:

url(r'^', include('blog.blog.urls', namespace="blog")),

You’ll also need to add include to the list of imports from django.conf.urls :


from django.conf.urls import include, url

We now need to let Django know about our new article

model so that it knows it needs to create a database table. Go

back to your terminal and run python manage.py

makemigrations blog . You’ll see it create a file. Then run

python manage.py migrate , it will turn all of the models in

your project into database tables. You should now have

everything in place to be able to run the development server,

so run python manage.py runserver . You’ll see the server

start up and direct you towards http://127.0.0.1:8000,

which you can then visit. You’re going to receive an error

saying a template is missing and this is a good thing – it

means you’re hitting the correct view (in this case, it’s the

PostListing view).

The next thing we’ll do is add all of the templates that we’ll

need. We’ll need four templates in total: the blog listing, the

article detail, the forms and the delete confirmation. We’re

going to keep them as simple as possible for now, so go

ahead and create these files:


{% csrf_token %}
{{ form.as_p }}

{% for post in post_list %}
{{ post.title }}
{% endfor %}
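One detail worth knowing: by default, Django’s generic class-based views look for templates named after the model, so for our Post model the four files would typically be named as follows (each view also accepts a template_name attribute if you prefer different names):

```
post_list.html              (used by ListView)
post_detail.html            (used by DetailView)
post_form.html              (used by CreateView and UpdateView)
post_confirm_delete.html    (used by DeleteView)
```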

Your final structure should look like this:
blog/
    manage.py
    blog/
        __init__.py
        settings.py
        urls.py
        wsgi.py
        blog/
            __init__.py
            models.py
            urls.py
            views.py

If you refresh your browser you should see your working

site (you may need to restart the development server first).

And there you have it, your first Django-powered blog! LXF


Django comes with a built-in server to use when developing your application. It makes it very easy to get your projects up and running.


Web development

Webdev: Build

a system toolkit

Kent Elchuk explains how to get into web development on the cheap.



Kent Elchuk

has been a


writer, web

developer and

hobbyist for quite

some time. His

works range from

simple websites

to custom

web applications

systems for a

number of




Windows Vista came to the end of its life in April 2017, which means you can get a web development machine very cheaply. Auctions are one source of well-built machines. Get units with 4GB of memory and throw in an SSD.

One of the great reasons to use Linux is that it makes

web development an easy process. When we say

easy, we mean it makes it free and easy to build and

test websites and web applications.

Since so much of the web is powered by websites hosted

with Linux, including giants like Google and Facebook, using

Linux for building makes it convenient to set up a solid

building and testing environment.

If that does not convince you, maybe the source at
https://en.wikipedia.org/wiki/Goobuntu will, which
explains that 10,000 employees at Google use Goobuntu, a
flavour of Ubuntu. That alone is enough to let you know that an
Ubuntu option is a good choice.

We are going to build a web development environment

from scratch. This can be on real hardware or inside a

VirtualBox, the choice is yours. Regardless of your setup, keep

in mind web server technologies change over time, thus using

VirtualBox does enable you to run the latest web server

technologies like Apache, PHP, mySQL and PhpMyAdmin.

Starting fresh

If you want to get a new machine to dedicate to development,

you can choose new or used. New has obvious perks, but

there are many used gems out there. In our time we’ve had

old Lenovo Business class laptops cost less than $100 and

several HP XW8600 workstations bought through auctions.

Add an SSD and you have a capable development PC.

One more thing to keep in mind with your choice of Linux

flavour: if you are going to use, or want, Raspberry Pi

compatibility, Ubuntu Mate is a prime option. As for a Pi itself,

the Raspberry Pi 3 is recommended; it’s far faster compared

to older models and you can hook up an HDMI TV, mouse and

keyboard for desktop use, or you can simply run the web

server off the Raspberry Pi and transfer files to it. Thus, it can

be used like a remote web server just as you would have with

a VPS or dedicated server from a hosting company.

The software

Since you will be carrying out web development on your

chosen machine, you will need to install a web server,

database and optional scripting languages. To cover the

numbers out there on the web, this tutorial will set up
a traditional “Lamp” stack, which translates to Linux, Apache,

mySQL and PHP (or Perl or Python). For this tutorial, the P in

Lamp will be PHP. According to stats from https://w3techs.com/technologies/details/pl-php/all/all, PHP is used by

82.5% of websites whose server-side programming language

is known, so we’re in good company.

Set up an FTP with Netbeans to transfer files and you can

easily maintain a development box and web server.

The stats do not mean it is necessarily better, and if you

Google around you’ll find it seems like there is an army

against PHP. But its popularity has led to a great many tools

that make it an easy language to start development with;

tools such as Wordpress, the world’s most popular CMS. So,

let’s get down to the Lamp setup. The command sequence is

shown below. When prompted for yes or no, simply follow the

instructions for ‘yes’ and move along.

With mySQL and PhpMyAdmin, you can leave the

password fields empty to complete the installation. That is OK

for a test environment on your Linux platform, but if you plan

to use the server as a web host with port forward on your

router, you should use secure passwords and write them

down in a safe place, if you don’t think you’ll remember them.

sudo apt-get update

sudo apt-get install apache2

sudo apt-get install mysql-server

sudo apt-get install php7.0 php-pear libapache2-mod-php7.0


sudo apt-get install php7.0-curl php7.0-json php7.0-cgi

sudo apt-get install phpmyadmin

These days, PhpMyAdmin does not play friendly if you

want to use it in a test environment without a password. You

can find more details right from the horse’s mouth at

https://docs.phpmyadmin.net/en/latest. If you really plan

to do web development with mySQL (which includes

Wordpress) and access the databases with a GUI,

PhpMyAdmin is a must.



The Netbeans PHP developer version has code

completion and tips so you know what the functions do

and what parameters you can send to the function.

Now that you are at this stage, we would like to add an
optional installation: Samba. You can install it on both the
guest and host machines. If you have another Linux machine,
such as a Raspberry Pi, to which you want to save file or folder
backups, and you flip back and forth between machines, Samba is
very convenient: you just drag and drop folders and files just
as you would on your laptop or desktop. If you don’t want to
install Samba and back up files to a Samba server, you can
just skip the Samba steps.

Installing Samba and basic configuration is as follows:
sudo apt-get install samba samba-common-bin
sudo service samba start (or /etc/init.d/samba start )
The steps below will explain how to make a folder for
sharing, which will be located at /home/mysamba :
sudo mkdir /home/mysamba
cd /home/mysamba
mkdir kent
chown kent:sambashare kent
sudo smbpasswd -a kent
New SMB password:
Retype new SMB password:

Note that the Samba setup is quite loose for all users on

the network. Samba does allow you to engage in strong user

control and read-only privileges. After you have set up the
web server (which we did above), you can test it right
away. Open a browser and type http://localhost. That page
is served from the file /var/www/html/index.html. To

access the file via your desktop, you can click your ‘Home

folder’ on your Desktop. Then, navigate through var/www/
html, where you will see the index.html file.

You can right click on the file and open it with your chosen

editor. For example, Open with Pluma or Open With Vim will

do the trick. Now, you can edit and save it. If, for whatever

reason, you come across permission issues with saving the

file, you will need to make sure that the folders www/html

have the proper permissions, which can be your username.

The command below shows a simple way to give each file the

permissions for your username.

$ sudo chown -R username:username www

With web development, and Android development for that

matter, a popular technology is NodeJS. The steps below

explain a sequence to get NodeJS up and running.

sudo apt-get update

sudo apt-get install nodejs

sudo apt-get install npm

sudo apt-get install build-essential

which node

which nodejs

$ /usr/bin/nodejs

sudo ln -s /usr/bin/nodejs /usr/bin/node

node -v


Now that NodeJS is installed, I will explain how to install

Grunt, which is a popular tool to manage tasks. Install Grunt

and use it as follows:




Download Wordpress, then unzip the file and move the folder to the location you want, such as /var/www/html in Ubuntu. If you want the site in the root domain, copy all the files inside the unzipped Wordpress folder into the /html folder.

cd /etc/samba

vi smb.conf or /etc/samba/smb.conf

Now you can make changes to the configuration file.
[mysamba]
comment = Public Share

path = /home/mysamba/

#valid users = @users

valid users = @sambashare,fileserver,myusername

#force group = users

create mask = 0770

directory mask = 0770

read only = no

guest ok = yes

browseable = yes

#security = user

wins support = yes

Reload or restart Samba for changes to take effect.

/etc/init.d/samba restart

The commands below show how to create a user and give

it a Samba password.

sudo useradd myusername -m -G users

sudo passwd myusername

...follow prompt for password and verify

sudo smbpasswd -a myusername

1 Install the command-line interface

sudo npm install -g grunt-cli

2 With the command line, navigate to the folder where you

want to run grunt; such as a web application or website folder.

cd myfoldername

cd /var/www/html

A successful FTP connection to a Raspberry Pi 3 on a local network. The

Raspberry Pi 3 emulates a remote VPS or dedicated server.





3 Install Grunt locally

npm install grunt --save-dev

4 Make a simple grunt file called gruntfile.js and add the

code below. This code will allow you to see messages in

your command line terminal when changes are made to the

test.js file. Why might you want that? Imagine if you had a

coworker or employees who were working on this file. If the

file did not change, you might become irate and wonder why

no work was getting done.

module.exports = function(grunt) {
  grunt.initConfig({
    jshint: {
      files: ['test.js'],
      options: {
        globals: {
          jQuery: true
        }
      }
    },
    watch: {
      files: ['test.js'],
      tasks: ['jshint']
    }
  });
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-watch');
  grunt.registerTask('default', ['jshint']);
};

Adding Netbeans to Ubuntu Mate main menu. Add path, name, and click OK.

Vim: command line editing

Installing Vim is a good choice if you plan to access remote servers or need to make

quick changes with the command line. On top of that, it is a fantastic editor and one

you can use for changing files with super user privileges.

A few such files where you need super privileges are php.ini and /etc/apache2/

apache2.conf and /etc/php/7.0/apache2/php.ini.

apt-get install vim

If you don’t want to use Vim to alter files, you can use Nano which ships with

Linux. But if you spend some time with Vim, you may find its navigation and saving

a real asset. It won’t save the file until you command it to do so.

Here is a little list of simple Vim commands; most are used from Normal mode, which you reach by pressing Esc.
G – moves you to the bottom of the file.
/stringname – The forward slash followed by a string name, then pressing Enter, will go to the string you typed. You can then press ‘n’ to go to the next occurrence.
:q! – This will exit the file without saving changes
:wq! – This will save the file and quit
:w – This will save the file
i – allows you to add text
Esc – leaves Insert mode




5 Make a simple package.json file or run the command

npm init to make it from the command line. Below is an

example of a package.json file with all desired and required dependencies:



“name": “Test-Project”,

“version": “0.1.0”,

“devDependencies": {

“grunt": “~0.4.1”,

“grunt-contrib-concat": “~0.1.3”,

“grunt-contrib-jshint": “^1.1.0”,

“grunt-contrib-uglify": “^2.0.0”,

“grunt-contrib-watch": “^1.0.0”,

“grunt-livereload": “^0.1.3”



Here is how to install dependencies:

npm install grunt-contrib-jshint --save-dev

npm install grunt-contrib-watch --save-dev

npm install grunt-livereload --save-dev

To see grunt in practice, run the command below, make

changes to the file, then look in the console and you will see

that a file changed.

grunt watch


Gulp is another task manager. Here is how to get up and

running with Gulp. Install globally just as you would Grunt.

sudo npm install -g gulp

Go to the folder you want to use. The example uses the

folder gulp located inside the html folder.

cd /var/www/html/gulp

Create a simple package.json file or run the command

npm init to create it from the command line. See the sample

package.json file below:


“name": “test”,

“version": “0.1.0”,

“devDependencies": {



Install Gulp locally:

npm install gulp --save-dev

Create a file called gulpfile.js and add your desired

task(s). Here is a sample gulpfile:

var gulp = require('gulp'),
    livereload = require('gulp-livereload'),
    gutil = require('gulp-util');

gulp.task('watch', function () {
  gulp.watch('*.html').on('change', function(file) {
    console.log('File Changed');
    gutil.log(gutil.colors.yellow('HTML file changed' + ' (' + file.path + ')'));
  });
});




Installing Netbeans

Netbeans offers excellent editors for all sorts

of programming. There are versions for Java,

PHP, HTML / Javascript and C/C++ just to

name a few. Go to https://netbeans.org

and download. There are many versions, but

the PHP version will work for Lamp web

development. You have options for x86 and

64-bit versions too. Netbeans will need Java

JDK 8 in order to work. You can download and

install it via command line or from:



After the download, you can right click on

the netbeans-8.2-php-linux-x64.sh file,

select permissions and allow it to run as an

executable, then close. Now, you just have to

right-click on it and select Run. At this point,

you just follow simple, typical GUI installation

instructions. After installation, there will be a

Netbeans icon on your desktop. Double-clicking

that will start Netbeans.

Another way to open the program with

Ubuntu Mate is to click Application >

Programming > Netbeans IDE 8.2. If you

want it in the main menu, you can go to

System > Preferences > Look and Feel > Main

Menu > New Item > Find Command, enter a

path like /home/username/netbeans-8.2/

bin/netbeans > Close. Now, you can access

it from main menu too.

Starting a new project is very simple.

Select File > New Project > Select HTML/

Javascript or PHP > Next > Name the project

> Select Folder. Keep in mind that the folder

is var/www by default, but you will likely want

/var/www/html instead. > Select Finish.

Now, you will have access to all files and

folders in the www directory.

Everything is set up locally, so we need to

set up FTP so you can transfer the files to a

remote server as well. To set-up FTP or SFTP,

right click on the projects folder > Select

Properties > Under categories in the left

column and select Run Configuration. The

configuration process takes place now.

Create a new configuration and name it.

Under Run As, select Remote Web Server.

Then add a project URL. After that, Select

Manage next to Remote Connection > fill out

the specs > Click OK.

In addition to the above, you can start a

project from remote files. Here is how that is

done. File > New Project > PHP > PHP

Application From Remote Server > Next >

Add Project Name, Select a PHP Version >

Next > Make sure Upload directory exists on

remote server > Select Next > Click Finish.

At this point you will see the new project

on the left hand side Projects list. Note that if

you do not have an FTP server on the remote

host, you will need to install it. One such

circumstance could be that you are using a

Raspberry Pi as a remote host. FTP can be

installed and configured very quickly on the

remote server.

sudo apt-get update

apt-get install vsftpd

vi /etc/vsftpd.conf

Add the following to /etc/vsftpd.conf:




After the code changes, restart the


systemctl restart vsftpd




Add required dependencies. Doing this will automatically

modify package.json too.

npm install gulp-concat --save-dev

npm install --save-dev gulp-livereload

npm install gulp-uglify gulp-rename --save-dev

Here is how package.json will look after dependencies are

installed; although it will not be exactly the same.


“name": “test”,

“version": “0.1.0”,

“devDependencies": {

“gulp": “^3.9.1”,

“gulp-concat": “^2.6.1”,

“gulp-livereload": “^3.8.1”,

Starting a new Netbeans project. Select the source folder

you want to use and you are good to go.

“gulp-rename": “^1.2.2”,

“gulp-uglify": “^2.1.2”,

“gulp-watch": “^4.3.11”



To run the Gulp ‘watch’ task, to see all HTML files that

change, type the code below into the console:

gulp watch

If you need to terminate Gulp, you can use the command

Ctrl + C. If Grunt or Gulp does not work properly, check file

permissions. If they all belong to the user, all will work as

expected and mimic a hosting environment. Let’s run the

watch task and see the console change when any HTML file in

the test folder is modified. Aside from monitoring files, Grunt

and Gulp have many other uses.

The extras

For other programs, you can always install Filezilla to transfer

files to a remote server. Not only is Filezilla an easy method to

upload and download files with a remote server from text

editors like Vim, Gedit and Pluma