
Dedicated processors

Dedicated processors are whole processors that are assigned to a single partition. If you choose to assign dedicated processors to a logical partition, you must assign at least one processor to that partition. You cannot mix shared processors and dedicated processors in the same partition.
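As a quick illustration, the HMC command-line interface can report the processing mode of each partition. The following is a minimal sketch, assuming a managed system named p5-system (a hypothetical name) and an HMC level whose lshwres output includes these attribute names:

   # List each partition's processing mode (ded or shared)
   # and its current processor allocation
   lshwres -r proc -m p5-system --level lpar \
      -F lpar_name,curr_proc_mode,curr_procs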

By default, a powered-off logical partition using dedicated processors will have its processors available to the shared processing pool. When the processors are in the shared processing pool, an uncapped partition that needs more processing power can use the idle processing resources. However, when you power on the dedicated partition while the uncapped partition is using the processors, the activated partition will regain all of its processing resources. If you want to prevent dedicated processors from being used in the shared processing pool, you can disable this function on the HMC by deselecting the Allow idle processor to be shared check box in the partition’s properties.

Note: The option “Allow idle processor to be shared” is activated by default. It is not part of profile properties and it cannot be changed dynamically.
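The same setting is also reachable from the HMC command line as the partition’s sharing mode. The following is a minimal sketch; the managed system p5-system and partition lpar1 are hypothetical names, and the sharing_mode attribute used here is an assumption that should be verified against the lssyscfg and chsyscfg documentation for your HMC level:

   # Query the current sharing mode of a dedicated-processor partition
   # (attribute name assumed; verify with the lssyscfg man page)
   lssyscfg -r lpar -m p5-system --filter "lpar_names=lpar1" \
      -F name,sharing_mode

   # keep_idle_procs stops idle dedicated processors from being donated
   # to the shared pool; share_idle_procs is the default behavior
   chsyscfg -r lpar -m p5-system -i "name=lpar1,sharing_mode=keep_idle_procs"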

2.4.2 Shared processor pool overview

A shared processor pool is a group of physical processors that are not dedicated to any logical partition. Micro-Partitioning technology coupled with the POWER Hypervisor facilitates the sharing of processing units between logical partitions in a shared processing pool.
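To participate in the shared pool, a partition profile specifies shared processing mode and a fractional number of processing units. The following is a minimal sketch of creating such a profile from the HMC command line; all names and values are hypothetical:

   # Request 1.5 processing units spread across two virtual processors,
   # uncapped, so the partition can also consume idle pool capacity
   mksyscfg -r prof -m p5-system \
      -i "name=shared_prof,lpar_name=lpar2,proc_mode=shared,min_procs=1,desired_procs=2,max_procs=4,min_proc_units=0.5,desired_proc_units=1.5,max_proc_units=2.0,sharing_mode=uncap,uncap_weight=128"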

In a shared logical partition, there is no fixed relationship between virtual processors and physical processors. The POWER Hypervisor can use any physical processor in the shared processor pool when it schedules the virtual processor. By default, it attempts to use the same physical processor, but this cannot always be guaranteed. The POWER Hypervisor uses the concept of a home node for virtual processors, enabling it to select the best available physical processor from a memory affinity perspective for the virtual processor that is to be scheduled.

Affinity scheduling is designed to preserve the content of memory caches, so that the working data set of a job can be read or written in the shortest time period possible. Affinity is actively managed by the POWER Hypervisor since each partition has a completely different context. Currently, there is one shared processor pool, so all virtual processors are implicitly associated with the same pool.
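From inside an AIX partition, the affinity domains that result from this placement can be observed. The following is a minimal sketch, assuming an AIX level that provides the lssrad command:

   # Show scheduler resource allocation domains: which logical CPUs
   # and how much memory belong to each affinity domain
   lssrad -av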

Figure 2-3 on page 38 shows the relationship between two partitions using a shared processor pool of a single physical CPU. One partition has two virtual

