[Figure 1: Standard Data Center Attributes. Labels: designed to last; heterogeneous compute flavors; silo management tools; mix of physical and virtual compute; legacy compute; multitude of storage platforms; networked to function.]

perform monitoring and management functions across server farms. Moreover, many use a separate set of tools for managing network and storage performance. The result is enterprise data centers that lack a single, unified view of resource availability, from the infrastructure up to the application and database layers.

The infrastructure is designed and provisioned around the specific volumetrics of the business applications it supports: peak transaction load in jobs per second, availability and scalability requirements. When the volumetrics and projected growth do not materialize as envisaged, this method of sizing compute and storage can leave the footprint either undersized or oversized. Such islands of compute and storage often lead to underutilized resources, with a cascading effect on investment and on the spend for energy consumption, management overhead, software licenses and data center costs.

The shortcomings of this model led many enterprises to the next wave of infrastructure design: shared infrastructure services and virtualized compute that raise resource utilization and ensure the infrastructure is fit for purpose rather than over-engineered. Figure 2 depicts how a shared infrastructure delivery design is applied, following guidelines such as grouping applications with similar workloads and grouping line-of-business applications onto virtualized compute resources. In this model, applications reap the benefits of virtualization: hypervisors programmatically allocate and deallocate compute resources for applications. The bare-metal hardware controlled by the hypervisor software can be further partitioned, and guest operating systems can be instantiated as logically separate instances, resulting in increased utilization.

This model uses resources efficiently under ideal application workloads. However, when one or more workloads begin consuming more resources than expected, several guest operating systems can be left short of compute, impacting business application service level agreements.

While this approach brought holistic capacity management, monitoring and tooling capabilities, it also provided evidence that compute and server resources were truly benefiting from improved utilization and automation, brought about, to a certain extent, by programmatically controlling the resources provided to guest instances.
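To make that programmatic control concrete, here is a minimal sketch using the libvirt Python bindings, a common open interface to hypervisors such as KVM. The connection URI, the guest name app-vm-01 and the target sizes are hypothetical; a production resize would also validate configured maximums and application health before applying changes.

    # Sketch: resizing a guest's live compute allocation via libvirt.
    # Assumes a KVM/QEMU host at qemu:///system and a running guest named
    # "app-vm-01" (both hypothetical). Requires the libvirt-python package.
    import libvirt

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    dom = conn.lookupByName("app-vm-01")   # locate the guest by name

    state, max_mem_kib, mem_kib, vcpus, _ = dom.info()
    print(f"before: {vcpus} vCPUs, {mem_kib // 1024} MiB")

    # Grow the live allocation to 4 vCPUs and 4 GiB of memory; both values
    # must stay within the maximums defined in the guest's configuration.
    dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    dom.setMemoryFlags(4 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

    state, max_mem_kib, mem_kib, vcpus, _ = dom.info()
    print(f"after: {vcpus} vCPUs, {mem_kib // 1024} MiB")

    conn.close()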
However, new thinking was still needed to meet the challenges posed by the dynamic workloads of run-the-business applications and compute-intensive enterprise applications.

With the emergence of the cloud, the new-age "mantra," and of infrastructure as a service (IaaS) as a delivery model (as illustrated in Figure 3), the challenges of processing demand from dynamic workloads are being addressed. High-availability clusters and scalable solutions can be architected based on nonfunctional requirements.
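As a concrete illustration of the IaaS delivery model, the sketch below provisions a server through the OpenStack SDK. The cloud profile corp-cloud and the image, flavor and network names are hypothetical placeholders; any IaaS platform exposing a comparable API would serve equally well.

    # Sketch: self-service provisioning against an IaaS API (OpenStack SDK).
    # Assumes a "corp-cloud" entry in clouds.yaml plus hypothetical image,
    # flavor and network names. Requires the openstacksdk package.
    import openstack

    conn = openstack.connect(cloud="corp-cloud")

    image = conn.compute.find_image("ubuntu-22.04")      # hypothetical image
    flavor = conn.compute.find_flavor("m1.medium")       # hypothetical flavor
    network = conn.network.find_network("app-tier-net")  # hypothetical network

    server = conn.compute.create_server(
        name="lob-app-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)  # block until ACTIVE
    print(server.status)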
[Figure 2: IT Infrastructure: Shared Resource Model. Labels: guest OS/VM with application workloads; abstraction layer for compute; hypervisor; bare metal.]

Since it is elastic by nature, the cloud delivery model enables resources to expand or shrink based on consumption. An abstraction software layer, known as the hypervisor, virtualizes processing resources from the bare metal, enabling compute, memory and disk to flex with demand.

Software-defined server virtualization can efficiently and dynamically allocate shared, pooled resources to balance workloads, thereby meeting application requirements. Through automation, self-service, orchestration and metering capabilities, the success of enterprise server virtualization has prompted IT organizations to extend this capability across the data center, with software controlling the hardware. What follows are the steps that key industry sectors must take, and ways enterprises can approach and overcome the perceived challenges that will emerge.

Software-defined server virtualization has thus become mainstream. Our experience reveals that numerous organizations have implemented, or are in the process of implementing, technologies to help them get there. With this model, the compute bottleneck at the server hardware layer is more or less eliminated. However, unless carefully planned and executed, the hardware resource constraint simply shifts to storage I/O and network I/O. This, in turn, raises the equally challenging problem of ensuring that the abstraction at the storage and network layers is tightly coupled with the server abstraction layer.

In the next sections, we examine whether this is achievable and how the industry is advancing software-defined infrastructure and programmatically controlled hardware to create more elastic IT resources.

[Figure 3: Dynamic Infrastructure Compute Schema. Labels: self-service portals; organization VDCs; catalogs; metering; automation; management stack; abstraction layer.]
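The elasticity described above, with resources expanding or shrinking alongside consumption, reduces at its core to a feedback loop between a metering signal and an orchestration action. The sketch below shows such a loop in its simplest form; read_avg_cpu() and set_instance_count() are hypothetical stand-ins for a real monitoring feed and a real orchestration call, and the thresholds are illustrative.

    # Sketch: a minimal elasticity feedback loop. Thresholds, poll interval
    # and both helper functions are illustrative assumptions, not any
    # particular platform's API.
    import random
    import time

    def read_avg_cpu() -> float:
        """Stand-in for a metering feed; average CPU utilization, 0-100."""
        return random.uniform(20, 95)

    def set_instance_count(n: int) -> None:
        """Stand-in for an orchestration call that resizes the pool."""
        print(f"scaling pool to {n} instances")

    instances, low, high = 2, 30.0, 75.0
    for _ in range(10):                   # a real controller would run forever
        cpu = read_avg_cpu()
        if cpu > high and instances < 8:
            instances += 1                # scale out under sustained load
            set_instance_count(instances)
        elif cpu < low and instances > 2:
            instances -= 1                # scale in when demand recedes
            set_instance_count(instances)
        time.sleep(1)                     # poll interval, shortened for the sketch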