Scaling out of the past

The ability for a storage set-up to scale according to the data at an organisation’s disposal is a vital component of modern infrastructure
In today’s big data age, organisations long for a storage system that can support data growth at a scalable and manageable cost. Cloud computing has presented one answer, but the security and privacy concerns around trusting a third party with business-critical data have left many organisations seeking an alternative.
Scale-out storage has emerged as a popular option. As the name suggests, this solution allows organisations to expand their total amount of disk space as required, delivering predictable and manageable levels of performance, capacity and cost.
The result, vendors claim, is a flexible and dynamic storage environment that can support balanced data growth and reconfigure infrastructure on an as-needed basis.
Such a set-up is underpinned by the ability of multiple storage arrays to connect over Ethernet. While a traditional unit is autonomous and cannot share or receive data from other arrays, the connectivity between units in a scale-out structure means users have multiple boxes that can work together as a clustered storage array.
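The clustering idea can be sketched in miniature. The following is an illustrative toy, not any vendor’s implementation (the `ToyCluster` class and node names are hypothetical): a consistent-hash ring maps each piece of data to a node, and adding an array to the ring simply extends the cluster’s capacity.

```python
import hashlib
from bisect import bisect_right

class ToyCluster:
    """Toy consistent-hash ring: each key lands on one node in the
    cluster, and new nodes can join without remapping most keys."""

    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(value):
        # Stable hash so key placement is deterministic across runs.
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Scaling out: a new array joins the ring.
        self.ring.append((self._h(node), node))
        self.ring.sort()

    def node_for(self, key):
        # First node clockwise from the key's position on the ring.
        positions = [h for h, _ in self.ring]
        i = bisect_right(positions, self._h(key)) % len(self.ring)
        return self.ring[i][1]

cluster = ToyCluster(["array-1", "array-2"])
owner = cluster.node_for("invoice-42")   # some node in the cluster
cluster.add_node("array-3")              # capacity grows in place
```

With consistent hashing, only roughly 1/N of keys move when an Nth node joins, which is one way multiple boxes can behave as a single clustered array.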
‘The faster the communications link, the more performant the cluster will be,’ says John Abel, engineered systems leader for the UK, Ireland and Israel at Oracle. ‘Scale-out storage is ideally suited to handling linear growth in performance needs, which is where traditional storage risks falling into bottlenecks.
‘Of course, as you continue to add arrays, you are also taking on more physical storage, which demands more overhead and management costs.’
The key technologies that make up scale-out storage are the management software that orchestrates the nodes in the cluster and the network that links the nodes together.
Both are designed to deliver coherent data across the nodes and to ensure that, in the event of reliability issues, data remains accessible and consistent as nodes drop out of the cluster.
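That availability behaviour can be illustrated with a minimal sketch, assuming a simple replication scheme rather than any specific product (the `ToyReplicatedStore` class is hypothetical): each object is written to more than one node, so a read still succeeds after a node drops out.

```python
class ToyReplicatedStore:
    """Minimal sketch of replicated storage: every key is written to
    `copies` nodes, so data stays readable if one node drops out."""

    def __init__(self, node_names, copies=2):
        self.nodes = {name: {} for name in node_names}  # node -> local data
        self.copies = copies

    def put(self, key, value):
        # Deterministically choose `copies` consecutive nodes for the key.
        names = sorted(self.nodes)
        start = hash(key) % len(names)
        for i in range(self.copies):
            self.nodes[names[(start + i) % len(names)]][key] = value

    def drop_node(self, name):
        # A node leaves the cluster (failure or maintenance).
        del self.nodes[name]

    def get(self, key):
        # Any surviving replica can answer the read.
        for local in self.nodes.values():
            if key in local:
                return local[key]
        raise KeyError(key)

store = ToyReplicatedStore(["node-a", "node-b", "node-c"], copies=2)
store.put("report.pdf", b"...bytes...")
```

Real cluster software layers failure detection, rebalancing and consistency protocols on top, but the core idea is the same: redundancy across nodes is what keeps data accessible when one of them disappears.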
The design point that various products solve differently – and with different levels of success – relates to the CAP theorem.
‘This states, broadly, that you can’t have consistency, availability and partition tolerance at any given time, just two out of the three,’ says Alex McMullan, EMEA chief technology officer at Pure Storage. ‘That drives multiple copies of data for resilience across nodes, increasing cluster traffic and, ultimately, cost.’
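McMullan’s trade-off can be made concrete with the standard quorum rule used in many distributed stores: with N replicas of an object, reads are guaranteed to see the latest write only when the read set and write set must overlap, i.e. R + W > N. The function below is an illustrative sketch of that arithmetic, not Pure Storage’s implementation.

```python
def quorum_overlap(n_replicas, write_quorum, read_quorum):
    """True when every read set must intersect every write set
    (R + W > N), which guarantees a read sees the latest write."""
    return read_quorum + write_quorum > n_replicas

N = 3                                   # three copies of each object
assert quorum_overlap(N, 2, 2)          # W=2, R=2: consistent reads
assert not quorum_overlap(N, 1, 1)      # W=1, R=1: stale reads possible
```

Each write touches W nodes and each read touches R, so raising the quorums buys consistency at the price of the extra cluster traffic, and cost, that McMullan describes.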
Old and new
Traditional storage infrastructure is often based on proprietary hardware that was never designed to scale intuitively to meet fluctuating demand for capacity.
Often developed for deployment in a single location, most organisations’ legacy infrastructure was designed to manage a consistent flow of basic data.
This makes it rigid in the face of varying demands and thus likely to choke when expected to cope with vast amounts of data, such as that produced by big data initiatives or Internet of Things (IoT) deployments.
September 15 information-age.com 27