HOW DO WE GET THERE?

If you are sold on the idea of using the probabilities of independent events in your favor, you're probably wondering how that's done. Pulling it off doesn't actually take much skill; it's more a matter of resource allocation.

To demonstrate independence, let's first look at statistically dependent events. Dependence doesn't mean that one event necessarily causes the other, only that both are affected by common factors. Load balancing and clustering servers in the same datacenter are helpful, but they probably aren't going to result in independent servers. If the servers share the same internet connection, power provider, router, dedicated database server, or anything else that could cause a general failure, then they are statistically dependent.
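The arithmetic behind "using the probabilities of independent events in your favor" is worth making concrete. Here is a minimal sketch; the 99% figures are illustrative numbers, not from the article:

```python
def combined_uptime(uptimes):
    """Uptime of a pool of statistically independent servers: the
    system is down only when every server is down at the same time."""
    p_all_down = 1.0
    for u in uptimes:
        p_all_down *= (1.0 - u)
    return 1.0 - p_all_down

# Two independent servers, each with 99% uptime:
print(round(combined_uptime([0.99, 0.99]), 6))  # 0.9999 -- "four nines"

# A shared single point of failure caps the gain: if both servers sit
# behind one router that is itself only 99% available, the router
# dominates and the pair behaves like one 99%-ish server again.
print(round(0.99 * combined_uptime([0.99, 0.99]), 6))  # about 0.989901
```

This is why the shared router, power feed, or database in the paragraph above matters so much: the multiplication trick only works when the failure probabilities really are independent.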
To create statistically independent servers that will improve your uptimes, two categories of issues must be tackled. The first is technical: getting load balancing, clustering, and overall redundancy in place. This can be handled at the operating system level by most modern server OSes, such as Linux, Windows, and BSD, and even at the control panel level with software like InterWorx-CP.
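To make the load-balancing side of those technical issues concrete, here is a toy round-robin balancer with failover. The server names and the manual health-marking are hypothetical; in practice this job is done by the OS- or control-panel-level tooling just mentioned:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy round-robin load balancer that skips servers marked down."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._ring = cycle(self.servers)
        self.healthy = set(self.servers)

    def mark_down(self, server):
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        # Walk the ring, skipping unhealthy servers; give up after
        # one full pass so we never loop forever.
        for _ in range(len(self.servers)):
            s = next(self._ring)
            if s in self.healthy:
                return s
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
lb.mark_down("web2")
print([lb.next_server() for _ in range(4)])  # web2 is skipped
```

Real load balancers add health checks, weighting, and session stickiness, but the core idea is the same: route around the dead server so one failure doesn't become an outage.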
The second is more practical: telecom providers, climate control, and electricity. Unless you can get independent, redundant systems for internet connectivity, electricity, and the like, there is probably no safe way to host everything at one datacenter. Even then, the catastrophic events mentioned above could come into play. If you want to play things as safely as possible, you will probably need multiple datacenters with different internet and utility providers.
CONCLUSION

Sorry kids, there's just no such thing as 100% uptime. But with planning, preparation, and investment in your network infrastructure, it's possible to get downtime as low as you need it to be! P!

Writer's Bio: Rollie Hawk is a consultant, writer, husband and father living and working in southern Illinois.
A Few Words on Grid Hosting – A 100% Uptime Solution?
As the adjacent article points out, although a 100% uptime solution is theoretically impossible, it is possible to get extremely close through the use of multiple, independent servers. In other words, if the hosting setup combines servers configured so that they can tolerate the outage of some machines and still effectively maintain hosting capabilities, then uptime can come very close to 100%.
Of course, many things that are well and good in theory are not very good at all in practice. Unfortunately, that is sometimes the case when trying to create a high-uptime hosting solution. Although technologies like clustering and load balancing are well developed and widely available, implementing them in a hosting solution tends to be complex and expensive.
A relatively recent introduction to the web hosting industry is a concept that aims to make high-uptime, enterprise-grade hosting solutions extremely cost-effective and easy to implement. Perhaps the most salient example is the so-called grid computing hosting solution (although it is perhaps more accurately described as distributed computing). One of the leading examples is Rackspace's venture, Mosso (located at www.mosso.com and known by the tagline "the hosting system").
Essentially, the main difference between a system such as Mosso and conventional hosting is that a cluster of servers, combined with enterprise-grade, redundant storage technologies such as NAS (Network Attached Storage), is mated to an extremely high-quality, redundant network. Because there is redundancy at every level (drives, the machines actually serving pages, and network uplinks) it is possible to achieve extremely high uptime. In other words, such solutions come quite close to satisfying the independence requirement that can ensure uptimes in the very "high nines," effectively creating a virtually 100% uptime solution.
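A rough sketch of that layered-redundancy math: the system needs its network, storage, and web tiers all up at once (a serial chain), but each tier is itself a redundant pool (parallel). The availability figures below are illustrative assumptions, not Mosso's numbers:

```python
def parallel(uptimes):
    """A redundant pool fails only if every member fails at once."""
    down = 1.0
    for u in uptimes:
        down *= (1.0 - u)
    return 1.0 - down

def serial(uptimes):
    """A chain of required tiers is up only if every tier is up."""
    up = 1.0
    for u in uptimes:
        up *= u
    return up

network = parallel([0.999, 0.999])  # redundant uplinks
storage = parallel([0.999, 0.999])  # mirrored NAS
web     = parallel([0.99] * 4)      # four clustered web servers

print(serial([network, storage, web]))  # lands in the "high nines"
```

The pattern explains why grid setups chase redundancy at every layer rather than one very good server: each parallel pool pushes its tier's availability close to 1, so the serial product across tiers stays high.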
Such solutions can be provided affordably largely because of economies of scale. Companies like Rackspace can invest significant capital and other resources in giant server clusters, high-redundancy NAS configurations, and the specialized software that makes it all work together. By letting a large number of clients use these resources through a fairly conventional web hosting model (i.e., buying x amount of space and y amount of bandwidth for z dollars), the initial capital costs are spread across a broad base of users. Moreover, such solutions tend to be designed for easy scalability: as Mosso grows, Rackspace will likely just need to add servers and drives to its existing architecture to provide the same level of service to its Mosso customers.
In sum, although developing one's own distributed/grid hosting solution is certainly not for the faint-hearted (nor for those without deep pockets), it is rapidly becoming possible to use highly redundant, distributed/grid architectures in much the same way as a conventional web hosting solution, at costs that are very competitive with other enterprise-grade hosting services.
www.pingzine.com 37