Magellan Final Report - Office of Science - U.S. Department of Energy
greater than three months that were not possible with earlier versions. Many of our users commented favorably on the stability and performance of the system. Newer releases show even more promise, as developers continue to add features and improve stability.
Infrastructure. OpenStack's management infrastructure (shown in Figure 6.2) comprises a number of individual services that communicate with each other via asynchronous message passing and a MySQL database. The API service handles remote client interaction and serves instance metadata to hypervisor nodes. The object store and volume services manage the storage and access of virtual machine images and EBS volumes, respectively, and the scheduler allocates compute resources to spread load evenly across the hypervisor cluster. The networking service is responsible for all networking tasks, both local and externally routable, and the compute service runs on the hypervisors themselves, performing the local management of individual KVM instances.
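The message-passing pattern among these services can be sketched as follows. This is an illustrative simulation using in-process queues, not OpenStack's actual AMQP-based RPC code; the service names, message fields, and `cast` helper are assumptions for the example.

```python
import queue

# Each service owns an inbox and exchanges asynchronous messages,
# loosely mirroring OpenStack's queue-based inter-service RPC.
inboxes = {"scheduler": queue.Queue(), "compute": queue.Queue()}

def cast(service, method, **kwargs):
    """Fire-and-forget message to a service's inbox queue."""
    inboxes[service].put({"method": method, "args": kwargs})

# The API service asks the scheduler to place an instance...
cast("scheduler", "run_instance", instance_id="i-0001", vcpus=1)

# ...and the scheduler forwards the request to a chosen compute node.
msg = inboxes["scheduler"].get()
cast("compute", msg["method"], **msg["args"])

placed = inboxes["compute"].get()
print(placed["args"]["instance_id"])  # i-0001
```

Because the services never call each other directly, any one of them can be restarted or scaled out independently, which is the point of the asynchronous design.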
User Management. All users on an OpenStack cluster belong to one or more projects and are assigned roles both globally and within their projects. These roles govern their level of access, with the union of global and project roles determining the precise privileges. Each user/project combination requires its own set of client credentials; e.g., a user belonging to three projects will require three different sets of credentials. Each project is given its own local subnet and has project-wide instance firewall rules and access privileges to published images and EBS volumes. The Nova management tools allow administrators to apply quotas to user projects, limiting the number of instances a user may have allocated simultaneously, or the number or total size of volumes they may create. User interaction with the OpenStack API is accomplished via the euca2ools client in the same manner as Eucalyptus; as such, it is API-compatible with both Eucalyptus and Amazon EC2.
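The role-union rule described above can be sketched in a few lines. The role and privilege names here are hypothetical placeholders, not OpenStack's actual role set; the point is only that effective access is the union of privileges granted by global and project roles.

```python
# Hypothetical role-to-privilege map; names are illustrative only.
PRIVILEGES = {
    "admin":     {"manage_quotas", "launch", "terminate", "publish_image"},
    "netadmin":  {"allocate_ip"},
    "developer": {"launch", "terminate"},
}

def effective_privileges(global_roles, project_roles):
    """Effective access = union of global and project-scoped roles."""
    roles = set(global_roles) | set(project_roles)
    privs = set()
    for role in roles:
        privs |= PRIVILEGES.get(role, set())
    return privs

# A global "developer" who is also a project "netadmin":
privs = effective_privileges(["developer"], ["netadmin"])
print(sorted(privs))  # ['allocate_ip', 'launch', 'terminate']
```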
Image and Volume Management. User virtual machine images are managed by the object store service, an Amazon S3-like bucket storage application dedicated solely to storing and serving images. Image access privileges are project-specific, meaning that should a user's roles within a project allow it, they have access to use and modify all of the project's private images. Unlike Eucalyptus, OpenStack also allows users to register kernels and ramdisk images, making customization of existing images and addition of new ones far easier for the user and reducing the support burden on administrators. EBS volumes are handled by the volume service, which creates, stores, exports, and deletes iSCSI volumes. All volume privileges are bound to the user.
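The two access scopes above differ in a way worth making explicit: image access is project-wide, while volume access is bound to the owning user. A minimal sketch, with illustrative data shapes rather than OpenStack's actual records:

```python
# Images are scoped to a project; volumes to an individual user.
images  = [{"id": "ami-1", "project": "alpha"}]
volumes = [{"id": "vol-1", "owner": "alice"}]

def can_use_image(user_project, image):
    """Any member of the image's project may use it (roles permitting)."""
    return image["project"] == user_project

def can_use_volume(user, volume):
    """Only the owning user may attach or delete the volume."""
    return volume["owner"] == user

# A member of project "alpha" can use the image, but a different
# user in the same project cannot touch alice's volume.
print(can_use_image("alpha", images[0]), can_use_volume("bob", volumes[0]))
```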
Machine Virtualization and Scheduling. The OpenStack compute service runs on each hypervisor node and is responsible for the creation and destruction of virtual machine instances and all associated hypervisor-local resources, such as project-LAN networking and virtual disks. The underlying virtualization technology in use at Argonne is KVM, though others, such as Xen, UML, and the relatively new LXC, are supported as well. The hypervisor for a group of instances is selected by the scheduler service, which spreads the compute load evenly across the hypervisor cluster. The versions of OpenStack used by the Magellan project did not support live migration, and compute resources were acquired by the first user to request them. This had the effect of spreading even small single-core instances across the hypervisor cluster, subsequently blocking a number of large 8-core jobs from running.
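The fragmentation effect can be demonstrated with a toy spread scheduler. This is a hypothetical simulation of the placement behavior described above, not OpenStack's scheduler code: four 8-core hypervisors each receive one single-core instance, after which no node has 8 free cores left for a large request.

```python
CORES_PER_NODE = 8
free = [CORES_PER_NODE] * 4  # four empty hypervisors

def schedule(vcpus):
    """Place a request on the least-loaded node, or return None."""
    node = max(range(len(free)), key=lambda i: free[i])
    if free[node] < vcpus:
        return None  # no node can satisfy the request
    free[node] -= vcpus
    return node

for _ in range(4):   # four small single-core instances arrive first;
    schedule(1)      # spread placement puts one on each node

print(free)          # [7, 7, 7, 7]
print(schedule(8))   # None: an 8-core job no longer fits anywhere
```

Without live migration there is no way to consolidate the small instances afterward, which is why the report notes that large jobs ended up blocked.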
Accounting. All information pertinent to instances launched on an OpenStack cluster is recorded in the OpenStack database, where it is available to administrators. For example, the database includes the date and time of instance creation and destruction, the user and project responsible for the instance, the current state of the instance, and the image from which the instance was created.
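A record of roughly this shape can be sketched with an in-memory SQLite table. The column names below are assumptions for illustration, not Nova's actual schema; they cover only the fields the report lists.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# Hypothetical accounting table mirroring the fields described above.
db.execute("""CREATE TABLE instances (
    id TEXT, user TEXT, project TEXT, state TEXT,
    image TEXT, created_at TEXT, destroyed_at TEXT)""")
db.execute("INSERT INTO instances VALUES (?,?,?,?,?,?,?)",
           ("i-0001", "alice", "alpha", "terminated",
            "ami-1", "2011-05-01 09:00", "2011-05-03 17:30"))

# An administrator can then answer questions such as "who ran what":
row = db.execute(
    "SELECT user, project, state FROM instances WHERE id = 'i-0001'"
).fetchone()
print(row)  # ('alice', 'alpha', 'terminated')
```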
Network. All networking for OpenStack instances is managed by the network service. Both project-LAN and externally routable IPs are assigned to instances via this service. Each project-LAN is given its own VLAN in order to separate the broadcast domains of projects from each other; DHCP and an intricately designed NAT chain are used to accomplish this task. Externally routable addresses are broadcast to a border router