
User Guide - English

ServerView Suite

ServerView Deployment Manager

V6.20

Edition September 2012


Comments… Suggestions… Corrections…

The User Documentation Department would like to know your opinion of this manual. Your feedback helps us optimize our documentation to suit your individual needs.

Feel free to send us your comments by e-mail to manuals@ts.fujitsu.com.

Certified documentation according to DIN EN ISO 9001:2008

To ensure a consistently high quality standard and user-friendliness, this documentation was created to meet the regulations of a quality management system which complies with the requirements of the standard DIN EN ISO 9001:2008.

cognitas. Gesellschaft für Technik-Dokumentation mbH
www.cognitas.de

Copyright and trademarks

Copyright © 1998 - 2012 Fujitsu Technology Solutions.
All rights reserved.
Delivery subject to availability; right of technical modifications reserved.

All hardware and software names used are trademarks of their respective manufacturers.


Contents

Contents 3

1 Introduction 13
1.1 Supported systems 14
1.2 Target groups and purpose of this manual 15
1.3 Changes since the previous manual 16
1.4 System requirements 18
1.4.1 Deployment Manager packages 18
1.4.2 Software packages on the deployment server 19
1.4.3 Supported operating systems for image creation/cloning 19
1.5 ServerView Suite link collection 22
1.6 Documentation for the ServerView Suite 24
1.7 Typographic conventions 24

2 Deployment - Overview 27
2.1 Supported server systems 28
2.1.1 Generic server 28
2.1.2 PRIMERGY server 29
2.1.3 PRIMERGY blade server 30
2.1.4 Detection of target servers 31
2.2 Cloning process 32
2.2.1 Reference installation 32
2.2.2 Image creation 35
2.2.3 Cloning 37
2.3 Mass remote installation 41
2.4 System architecture 44
2.4.1 Hardware architecture 44
2.4.2 Software architecture 46
2.5 First step guide for using Deployment Manager 48

3 Installing Deployment Manager 51
3.1 Software-Requirements 52
3.1.1 JBoss web server 52
3.1.1.1 Launching and ports used 53
3.1.1.2 Role-based user administration 55
3.1.1.3 Managing certificates 55
3.1.2 Installation of ServerView Installation Manager 56
3.1.3 Installation of ServerView Operations Manager 56
3.1.4 Installation of ServerView agents 57
3.1.5 Deployment server 57
3.1.6 Network configuration 59
3.1.6.1 LAN connections 60
3.1.6.2 LAN connections (blade server) 62
3.1.6.3 LAN connection topology and IGMP settings (Switch Blade) 63
3.1.6.4 Multiple segment cloning 64
3.1.6.5 Multiple LAN ports of target systems 66
3.1.7 Creating an image repository 66
3.2 Installation 67
3.2.1 Installation via the ServerView Suite DVD 1 67
3.2.1.1 Installing the Deployment Manager package 68
3.2.1.2 Installing the Deployment Services package 78
3.2.2 Installation via the Fujitsu Technology Solutions web server 83
3.3 Update installation 84
3.4 Uninstalling/Removing Deployment Manager 84

4 Working with Deployment Manager 87
4.1 Opening Deployment Manager 87
4.2 Closing Deployment Manager 88
4.3 Actions after starting the Deployment Manager front-end 88
4.3.1 Role-based permissions on accessing Deployment Manager 90
4.4 Deployment Manager main window 91
4.4.1 Servers view 92
4.4.2 Repositories view 93
4.4.3 Registered Boot Images view 93
4.4.4 Tabs 93
4.4.4.1 Servers (tab) 94
4.4.4.2 Information (tab) 95
4.4.4.3 Tasks (tab) 97
4.4.4.4 Operations on the Tasks tab 99
4.4.5 Icons 102
4.4.6 Wizards 105
4.4.6.1 General buttons 105
4.5 Repositories 106
4.5.1 Add Repository dialog box 108
4.5.2 Adding a new folder 110
4.5.3 Deleting a repository or folder 111
4.6 Registered Boot Images 111
4.6.1 Register Boot Image wizard 113
4.6.1.1 Image ID step (Register Boot Image wizard) 113
4.6.1.2 Boot Image step (Register Boot Image wizard) 114
4.6.1.3 Bootstrap Loader step (Register Boot Image wizard) 115
4.6.2 Unregistering a boot image 117
4.6.3 Unregistering a client 117
4.7 Configuration before cloning/installation tasks 118
4.7.1 Deployment Configuration wizard 118
4.7.1.1 General Settings step (Deployment Configuration wizard) 118
4.7.1.2 LAN Ports step (Deployment Configuration wizard) 121
4.7.1.3 DNS Search Suffixes step (Deployment Configuration wizard) 124
4.7.1.4 Remote Management Ports step (Deployment Configuration wizard) 125
4.7.1.5 Notes step (Deployment Configuration wizard) 129
4.7.2 Export Deployment Configuration dialog box 130
4.7.3 Import Deployment Configuration dialog box 131
4.7.4 Deleting an exported deployment configuration 132
4.8 Adding new servers 132
4.8.1 New Server wizard 132
4.8.1.1 General Settings step (New Server wizard) 133
4.8.1.2 LAN Ports step (New Server wizard) 135
4.8.1.3 DNS Search Suffixes step (New Server wizard) 138
4.8.1.4 Remote Management Ports step (New Server wizard) 139
4.8.1.5 Notes step (New Server wizard) 143
4.8.1.6 Found MAC Addresses dialog box 144
4.9 IP addresses 146
4.9.1 IPv4 IP addresses 146
4.9.1.1 Add IPv4 Address dialog box 146
4.9.1.2 Edit IPv4 Address dialog box 147
4.9.2 IPv6 IP addresses 148
4.9.2.1 Add IPv6 Address dialog box 148
4.9.2.2 Edit IPv6 Address dialog box 149
4.10 Managing the power state of a server 150
4.10.1 Manage Power State dialog box 150
4.11 Collecting diagnostic information 151

5 Image creation 153
5.1 File-System-Dependent image creation 154
5.2 File-System-Independent image creation (raw image creation) 155
5.3 Supported operating systems for image creation 156
5.4 Image creation of a Windows reference system 158
5.4.1 Manual rollback of changes done during system preparation 159
5.5 Windows 2008 and Windows 2012 reference systems 160
5.5.1 Configuring the Windows firewall on the reference system 160
5.5.2 Sysprep restrictions 162

6 Mass Cloning 165
6.1 Creating a cloning image 165
6.1.1 Create Cloning Image wizard 166
6.1.1.1 Task Name step (Create Cloning Image wizard) 166
6.1.1.2 Deployment Server step (Create Cloning Image wizard) 167
6.1.1.3 Image Path and Name step (Create Cloning Image wizard) 168
6.1.1.4 Options step (Create Cloning Image wizard) 170
6.1.1.5 Disks step (Create Cloning Image wizard) 172
6.1.1.6 Bios Boot Type step (Create Cloning Image wizard) 178
6.1.1.7 Scheduling step (Create Cloning Image wizard) 179
6.1.1.8 Action after starting the "Create Cloning Image" task 182
6.2 Cloning groups 185
6.2.1 Adding a cloning group 185
6.2.1.1 Group Name step (Add Cloning Group wizard) 185
6.2.1.2 Cloning Image step (Add Cloning Group wizard) 186
6.2.1.3 Group Members step (Add Cloning Group wizard) 188
6.2.2 Copying a cloning group 189
6.2.2.1 Group Name step (Copy Cloning Group wizard) 189
6.2.2.2 Cloning Image step (Copy Cloning Group wizard) 190
6.2.2.3 Group Members step (Copy Cloning Group wizard) 192
6.2.3 Editing a cloning group 193
6.2.3.1 Group Name step (Edit Cloning Group wizard) 193
6.2.3.2 Cloning Image step (Edit Cloning Group wizard) 194
6.2.3.3 Group Members step (Edit Cloning Group wizard) 196
6.2.4 Deleting cloning groups 197
6.3 Cloning an image 197
6.3.1 Clone wizard 198
6.3.1.1 Task Name step (Clone wizard) 198
6.3.1.2 Deployment Server step (Clone wizard) 199
6.3.1.3 Disks step (Clone wizard) 199
6.3.1.4 System Preparation step (Clone wizard) 201
6.3.1.5 Settings step (Clone wizard) 204
6.3.1.6 Post Deployment step (Clone wizard) 207
6.3.1.7 Bios Boot Type step (Clone wizard) 211
6.3.1.8 Scheduling step (Clone wizard) 212
6.3.1.9 Action after starting the "Clone" task 215
6.3.2 Clone with Image wizard 217
6.3.2.1 Task Name step (Clone with Image wizard) 217
6.3.2.2 Deployment Server step (Clone with Image wizard) 218
6.3.2.3 Disk Image step (Clone with Image wizard) 218
6.3.2.4 Disks step (Clone with Image wizard) 220
6.3.2.5 System Preparation step (Clone with Image wizard) 222
6.3.2.6 Settings step (Clone with Image wizard) 225
6.3.2.7 Post Deployment step (Clone with Image wizard) 228
6.3.2.8 Bios Boot Type step (Clone with Image wizard) 232
6.3.2.9 Scheduling step (Clone with Image wizard) 233
6.3.2.10 Action after starting the "Clone with Image" task 236
6.4 Cloning/Installing Baseboard Management Controllers (BMCs) 238
6.4.1 Displaying BMCs 239
6.4.2 Changing the deployment configuration 240
6.4.3 Actions after cloning/installation 242

7 Mass Installation 243
7.1 Installation Groups 243
7.1.1 Adding an installation group 244
7.1.1.1 Group Name step (Add Installation Group wizard) 244
7.1.1.2 Group Members step (Add Installation Group wizard) 245
7.1.1.3 Installation Configuration step (Add Installation Group wizard) 247
7.1.2 Copying an installation group 248
7.1.2.1 Group Name step (Copy Installation Group wizard) 248
7.1.2.2 Group Members step (Copy Installation Group wizard) 249
7.1.2.3 Installation Configuration step (Copy Installation Group wizard) 251
7.1.3 Editing an installation group 252
7.1.3.1 Group Name step (Edit Installation Group wizard) 252
7.1.3.2 Group Members step (Edit Installation Group wizard) 253
7.1.3.3 Installation Configuration step (Edit Installation Group wizard) 255
7.1.4 Deleting an installation group 256
7.2 Installing servers 256
7.2.1 Install wizard 257
7.2.1.1 Task Name step (Install wizard) 257
7.2.1.2 Installation Configuration step (Install wizard) 258
7.2.1.3 Settings step (Install wizard) 258
7.2.1.4 Bios Boot Type step (Install wizard) 261
7.2.1.5 Scheduling step (Install wizard) 262
7.2.1.6 Actions after starting the "Install" task 265
7.3 Boot Groups 267
7.3.1 Adding a boot group 267
7.3.1.1 Group Name step (Add Boot Group wizard) 267
7.3.1.2 Boot Image step (Add Boot Group wizard) 268
7.3.1.3 Group Members step (Add Boot Group wizard) 269
7.3.2 Copying a boot group 270
7.3.2.1 Group Name step (Copy Boot Group wizard) 270
7.3.2.2 Boot Image step (Copy Boot Group wizard) 271
7.3.2.3 Group Members step (Copy Boot Group wizard) 272
7.3.3 Editing a boot group 274
7.3.3.1 Group Name step (Edit Boot Group wizard) 274
7.3.3.2 Boot Image step (Edit Boot Group wizard) 275
7.3.3.3 Group Members step (Edit Boot Group wizard) 276
7.3.4 Deleting a boot group 277
7.4 MDP booting of servers 278
7.4.1 MDP Boot wizard 278
7.4.1.1 Task Name step (MDP Boot wizard) 278
7.4.1.2 MDP Application step (MDP Boot wizard) 279
7.4.1.3 Settings step (MDP Boot wizard) 280
7.4.1.4 Bios Boot Type step (MDP Boot wizard) 283
7.4.1.5 Scheduling step (MDP Boot wizard) 284
7.5 Generic booting of servers 287
7.5.1 Generic Boot wizard 287
7.5.1.1 Task Name step (Generic Boot wizard) 287
7.5.1.2 Settings step (Generic Boot wizard) 288
7.5.1.3 Bios Boot Type step (Generic Boot wizard) 291
7.5.1.4 Scheduling step (Generic Boot wizard) 292

8 Crash Recovery 295
8.1 Creating a snapshot image 295
8.1.1 Create Snapshot Image Wizard 295
8.1.1.1 Task Name step (Create Snapshot Image wizard) 296
8.1.1.2 Deployment Server step (Create Snapshot Image wizard) 297
8.1.1.3 Image Path and Name step (Create Snapshot Image wizard) 297
8.1.1.4 Options step (Create Snapshot Image wizard) 299
8.1.1.5 Disks step (Create Snapshot Image wizard) 301
8.1.1.6 Bios Boot Type step (Create Snapshot Image wizard) 304
8.1.1.7 Scheduling step (Create Snapshot Image wizard) 305
8.1.1.8 Action after starting the "Create Snapshot Image" task 308
8.2 Restoring a snapshot image 309
8.2.1 Restore Snapshot Image Wizard 309
8.2.1.1 Task Name step (Restore Snapshot Image wizard) 309
8.2.1.2 Deployment Server step (Restore Snapshot Image wizard) 310
8.2.1.3 Disk Image step (Restore Snapshot Image wizard) 311
8.2.1.4 Disks step (Restore Snapshot Image wizard) 312
8.2.1.5 System Preparation step (Restore Snapshot Image wizard) 314
8.2.1.6 Settings step (Restore Snapshot Image wizard) 317
8.2.1.7 Post Deployment step (Restore Snapshot Image wizard) 320
8.2.1.8 Bios Boot Type step (Restore Snapshot Image wizard) 321
8.2.1.9 Scheduling step (Restore Snapshot Image wizard) 323
8.2.1.10 Action after starting the "Restore Snapshot Image" task 325

9 Messages 327
9.1 Messages List dialog box 327

10 Settings 329
10.1 Deployment Server step (Settings wizard) 329
10.1.1 Log On Properties dialog box 331
10.2 Repositories step (Settings wizard) 332
10.3 Licenses Installed step (Settings wizard) 332
10.3.1 Add New License Key dialog box 334
10.4 Licenses Used step (Settings wizard) 335
10.5 Global Options step (Settings wizard) 336

11 High-Availability support 339
11.1 Installation 340
11.1.1 ServerView Operations Manager installation 340
11.1.2 Deployment Manager installation 341
11.1.3 Deployment Services installation 342
11.2 Hints 345
11.2.1 User Accounts 345
11.2.2 Repositories 345
11.2.3 Actions Needed When No Cluster Software is Available 345
11.3 Failover scenarios 346
11.3.1 Deployment Manager and Deployment Services Packages are Installed on Same Server 346
11.3.2 Deployment Manager and Deployment Services Packages are Installed on Different Servers 346

12 Cloning deployment process 349
12.1 Power control via remote control 353
12.1.1 Management blade power control 353
12.2 System preparation phase 354
12.2.1 Manual preparation 355
12.2.2 Automatic preparation as part of a DOS session 355
12.2.3 Preparation based on WinPE 356
12.2.4 Customer system preparation image 357
12.3 Supported storage devices 360
12.3.1 SCSI/IDE drives 360
12.3.2 RAID devices 360
12.3.3 FC and iSCSI Devices 361
12.3.4 Partitioning and File System Formatting 361
12.3.5 Multi-boot Operating System Partitioning 363
12.3.6 Adding NDIS Driver to Deployment Manager 363
12.4 Image creation 364
12.5 Tag file handling 365
12.5.1 Directory and tag file 365
12.5.2 Creating a tag file on Windows systems 366
12.5.3 Creating a tag file on LINUX systems 366
12.5.4 Removing the tag file 366
12.6 PXE protocol 366
12.7 LAN traffic and deployment methods 368
12.7.1 Unicast and Multicast 368
12.7.2 Switch, Hub and Bridge Configuration 369

13 Appendix - network techniques 371
13.1 MAC address handling 371
13.2 PXE 371
13.3 DHCP 373
13.4 VLAN (Virtual Local Area Network) 373
13.5 Example: Deployment over VLAN 375


1 Introduction

ServerView Deployment Manager (called simply Deployment Manager below) enables individual deployment of servers, such as duplication and installation. To reduce data center administration costs, fully parallel mass remote cloning and mass remote installation on a group of servers selected from the ServerView database are supported. An automated individualization process allows each server instance to be put into operation quickly. Automated and time-controlled installation or cloning of servers is supported using centrally held boot images in a dedicated image repository database.

Deployment Manager is a professional addition to the standard ServerView Suite. As part of this suite, Deployment Manager is integrated into the user interface of ServerView Operations Manager (called simply Operations Manager below). It provides a central overview of installed or additionally available servers, existing images, and the assignment of servers to deployment or installation groups.

Server installation comprises a number of individual steps:

- Hardware preparation, BIOS settings, RAID configuration, etc.
- Installation of the operating system and parameterization (licensing, network configuration)
- Application environment setup
- User and service setup

Deployment Manager and ServerView Installation Manager (called simply Installation Manager below) support these steps. Automated installation ensures that more servers can be put into error-free operation with minimum effort.

Deployment Manager has the further advantage that servers can be quickly loaded with an operating system/application software image so that other tasks can be assigned to them. For example, by exchanging a hard disk image, you can allow a server that previously ran under Linux to perform a different task under Windows.


1.1 Supported systems

Deployment Manager provides generic methods to enable system preparation before cloning for all kinds of IA32-based servers. Deployment Manager is also specifically certified for the following PRIMERGY servers:

- BX600 Blade Server
- Blade BX620 S4, BX620 S5, BX620 S6
- Blade BX630 S2
- BX400 Blade Server
- BX900 Blade Server
- Blade BX920 S1, BX920 S2, configurable with Storage Blade SX940 S1 and SX960 S1
- Blade BX920 S3, configurable with Storage Blade SX960 S1 and SX980 S1
- Blade BX922 S2, BX924 S2, BX924 S3, BX960 S1
- PRIMEQUEST 1400, PRIMEQUEST 1800
- CX122 S1
- CX400 S1 Multi-Node server system configurable with CX210 S1, CX250 S1 and CX270 S1
- Econel 100 S2, Econel 200 S2
- RX100 S3, RX100 S4, RX100 S5, RX100 S6, RX100 S7, RX100 S7p
- RX200 S4, RX200 S5, RX200 S6, RX200 S7
- RX300 S4, RX300 S5, RX300 S6, RX300 S6 (Nebs), RX300 S7
- RX350 S7
- RX500 S7
- RX600 S4, RX600 S5, RX600 S6
- RX800 S3
- RX900 S1, RX900 S2
- TX100 S1, TX100 S2, TX100 S3, TX100 S3p
- TX120, TX120 S2, TX120 S3, TX120 S3p
- TX140 S1, TX140 S1p
- TX150 S6, TX150 S7, TX150 S8
- TX200 S4, TX200 S5, TX200 S6, TX200 S7
- TX300 S4, TX300 S5, TX300 S6, TX300 S7
- TX600 S3

Any last-minute changes/corrections to the supported systems can be found in the Deployment Manager data sheet (http://ts.fujitsu.com/products/standard_servers/system_management/svs_deploy.html).

1.2 Target groups and purpose of this manual

This manual is intended for system administrators, network administrators and service technicians who have a basic knowledge of hardware and software.

The manual explains how to create and manage image files with the Deployment Manager software and how to deploy these files. It also describes how to install PRIMERGY servers using Installation Manager configuration files (if Installation Manager is installed on the same system as Deployment Manager).

1.3 Changes since the previous manual

This edition of the manual applies to ServerView Deployment Manager V6.20 and replaces the following online manual: "ServerView Suite Deployment Manager V6.10", edition March 2012.

This manual has been updated to reflect the latest software status and offers the following new features:

- The list of supported systems has been updated, see "Supported systems" on page 14.
- The Deployment Manager packages Deployment Manager and Deployment Services can be installed under Microsoft Windows Server 2012 Datacenter/Standard/Foundation and Windows Storage Server 2012 Standard. Microsoft Windows Server 2003 R2 Standard/Enterprise (x86, x64) is no longer supported.
- The software package Microsoft Windows Server 2008 or Windows Server 2012 must be available on the deployment server.
- Microsoft Windows Server 2012 Datacenter/Standard/Foundation and Windows Storage Server 2012 Standard are supported for image creation/cloning with personalization.
- The preparation method Generic Boot Image has been renamed to Preparation Boot Image.
- The preparation boot image is provided with the Deployment Manager software in the Sample_PreparationBootImages directory.
- The section "JBoss web server" on page 52 has been updated.
- New: Servers by Boot Image group in the tree view of the Servers view, see "Servers view" on page 92.
- You can now also perform the operation Unregistering all clients at the PXE service on the Tasks tab, see "Operations on the Tasks tab" on page 99.
- In the Add Repository dialog box you can also add repositories for Generic Boot Images and MDP Applications, see "Add Repository dialog box" on page 108.
- New: Registered Boot Images view, which displays a list of all registered boot images, see "Registered Boot Images" on page 111. This view allows you to
  - register a boot image via the Register Boot Image wizard (see "Register Boot Image wizard" on page 113),
  - unregister a boot image (see "Unregistering a boot image" on page 117),
  - unregister a client from the PXE service (see "Unregistering a client" on page 117).
- New: You can now add, copy or edit a boot group via the context menu in the Server by Boot Image group, see "Boot Groups" on page 267.
- New: MDP booting of servers. Via the MDP Boot wizard, you can execute an MDP application on one or more servers, see "MDP Boot wizard" on page 278.
- New: Generic booting of servers. Via the Generic Boot wizard, you can boot one or more servers with a generic image, see "Generic Boot wizard" on page 287.

1.4 System requirements

Please check the system requirements before you start installing Deployment Manager.

1.4.1 Deployment Manager packages

Deployment Manager consists of two packages which can be installed on the same system or on different systems, see also section "Installing Deployment Manager" on page 51.

Deployment Manager package

This package consists of the Web-based graphical user interface including the ServerView Deployment Manager service.

This package can be installed under:

- Microsoft Windows Server 2008 Standard/Enterprise/Datacenter/Foundation (x86, x64)
- Microsoft Windows Server 2008 R2 Standard/Enterprise/Datacenter/Foundation (x64)
- Microsoft Windows Server 2012 Datacenter/Standard/Foundation, Windows Storage Server 2012 Standard

Deployment Services package

This package consists of the deployment services, including the PXE service, the TFTP service and the deployment service (= cloning module).

This package can be installed under:

- Microsoft Windows Server 2008 Standard/Enterprise/Datacenter/Foundation (x86, x64)
- Microsoft Windows Server 2008 R2 Standard/Enterprise/Datacenter/Foundation (x64)
- Microsoft Windows Server 2012 Datacenter/Standard/Foundation, Windows Storage Server 2012 Standard

Windows Script Host (WSH) as of version 5.6 is required for the deployment services.

1.4.2 Software packages on the deployment server

The following software packages must be available on the deployment server:

- Microsoft Windows Server 2008 or Windows Server 2012.
- Installation Manager as of version 7.804 for PXE-based remote installation.
- Installation Manager as of version 6.711 and the deployment service (cloning module) for PXE-based remote image creation and cloning.
- A DHCP service (if none is already present in the LAN segment).
- A PXE service must run on the deployment server. It can be installed using either Deployment Manager or Installation Manager.

Installing the deployment server on PRIMEQUEST is not supported. PRIMEQUEST is supported only as a target server.

Older Installation Manager versions (up to V10.10.09) can only be installed with the IIS Web Server. The ServerView Apache Web Server (delivered with these older Installation Manager versions) and the JBoss Application Server both listen on ports 3169 and 3170 and therefore cannot coexist.
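
If you are unsure whether these ports are already occupied on a prospective deployment server, a quick local check is possible. The following minimal Python sketch (not part of the product, shown for illustration only) simply tries to bind the two ports; a bind failure indicates that some server, e.g. Apache or JBoss, is already listening:

import socket

for port in (3169, 3170):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("", port))   # succeeds only if no service listens on this port
        print(f"port {port}: free")
    except OSError:
        print(f"port {port}: already in use")
    finally:
        s.close()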

1.4.3 Supported operating systems for image creation/cloning

Deployment Manager supports the following operating systems for image creation/cloning with personalization:

In Japan, Linux SuSE is not supported.

- Microsoft Windows Server 2003 Standard/Enterprise (x86 and x64)
- Microsoft Windows Server 2003 R2 Standard/Enterprise (x86 and x64)
- Microsoft Windows Server 2008 Standard/Enterprise/Datacenter/Foundation (x86 and x64)
- Microsoft Windows Server 2008 R2 Standard/Enterprise/Datacenter/Foundation/Web (x64)

  Mass cloning of Windows Domain Controllers (e.g. Windows Small Business Server) is not supported.

- Microsoft Windows Server 2012 Datacenter/Standard/Foundation, Windows Storage Server 2012 Standard

  - The Resilient File System (ReFS) is not supported.
  - Storage Spaces and virtual disks created from Storage Spaces are not supported.
  - Before starting an image creation process for a Windows 2008 or Windows 2012 reference system, you must configure the Windows firewall on the reference system, see "Configuring the Windows firewall on the reference system" on page 160.
  - In cloning with personalization, only "full" Windows 2008 or Windows 2012 installations are supported. Saving an image of a Windows 2008 or Windows 2012 "core" installation is not supported.

- Linux RedHat (AS/ES) 3.0 - not supported in Japan
- Linux RedHat (AS/ES) 4.0 (x86 and EM64T) - not supported in Japan
- Linux RedHat 5.0 (x86 and EM64T)
- Linux RedHat 6.0 (x86 and EM64T)

  Before image creation, a specific package must be installed on the reference system. This package is provided with the Deployment Manager software in the directory LINUX_support\RHEL6_Cloning_Support.

  Before image creation, the NetworkManager should be disabled - otherwise the personalization may fail during cloning. Please set NM_CONTROLLED=no in the /etc/sysconfig/network-scripts/ifcfg-eth* files and issue the following commands:

  /sbin/service NetworkManager stop
  /sbin/chkconfig --del NetworkManager
  /sbin/chkconfig --add network
  /sbin/service network start

- Linux SuSE Enterprise Server SLES 8 - not supported in Japan
- Linux SuSE SLES 9 (x86 and EM64T) - not supported in Japan
- Linux SuSE United Linux (with SLES 8) - not supported in Japan
- Linux SuSE Enterprise Server SLES 10 (x86 and EM64T) - not supported in Japan
- Linux SuSE Enterprise Server SLES 11 (x86 and EM64T) - not supported in Japan
- Reiser file system versions 3.5 and 3.6 on SuSE SLES 9 and SuSE SLES 10

  Before image creation, a specific package must be installed on the reference system if Reiser is used as the root file system. This package is provided with the Deployment Manager software in the directory LINUX_support\ReiserFS_Cloning_Support.

All other operating system types can always be supported without personalization in raw snapshot mode. See also the ReadMe file.

Creating a cloning image of a disk with multiple operating systems is not supported. Creating a snapshot image is supported.


For restrictions see section "Supported operating systems for image creation" on page 156.

1.5 ServerView Suite link collection

Via the link collection, Fujitsu Technology Solutions provides you with numerous downloads and further information on the ServerView Suite and PRIMERGY servers.

For ServerView Suite, links are offered on the following topics:

- Forum
- Service Desk
- Manuals
- Product information
- Security information
- Software downloads
- Training

  The downloads include the following:

  - Current software statuses for the ServerView Suite as well as additional Readme files.
  - Information files and update sets for system software components (BIOS, firmware, drivers, ServerView agents and ServerView update agents) for updating the PRIMERGY servers via ServerView Update Manager or for locally updating individual servers via ServerView Update Manager Express.
  - The current versions of all documentation on the ServerView Suite.

  You can retrieve the downloads free of charge from the Fujitsu Technology Solutions Web server.

For PRIMERGY servers, links are offered on the following topics:

- Service Desk
- Manuals
- Product information
- Spare parts catalogue

Access to the link collection

You can reach the link collection of the ServerView Suite in various ways:

1. Via ServerView Operations Manager.
   - Select Help – Links on the start page or on the menu bar.
   This opens the start page of the ServerView link collection.
2. Via the ServerView Suite DVD 2 or via the start page of the online documentation for the ServerView Suite on the Fujitsu Technology Solutions manual server.
   You access the start page of the online documentation via the following link: http://manuals.ts.fujitsu.com
   - In the selection list on the left, select Industry standard servers.
   - Click the menu item PRIMERGY ServerView Links.
   This opens the start page of the ServerView link collection.
3. Via the ServerView Suite DVD 1.
   - In the start window of the ServerView Suite DVD 1, select the option Select ServerView Software Products.
   - Click Start. This takes you to the page with the software products of the ServerView Suite.
   - On the menu bar select Links.
   This opens the start page of the ServerView link collection.


1.6 Documentation for the ServerView Suite

The documentation for the ServerView Suite can be found on the ServerView Suite DVD 2 supplied with each server system.

The documentation can also be downloaded free of charge from the Internet. You will find the online documentation at http://manuals.ts.fujitsu.com under the link Industry standard servers.

1.7 Typographic conventions

The following typographic conventions are used:

(warning icon): Indicates various types of risk, namely health risks, risk of data loss and risk of damage to devices.

(information icon): Indicates additional relevant information and tips.

bold: Indicates references to names of interface elements.

monospace: Indicates system output and system elements, e.g., file names and paths.

monospace semibold: Indicates statements that are to be entered using the keyboard.

blue continuous text: Indicates a link to a related topic.

pink continuous text: Indicates a link to a location you have already visited.

<abc>: Indicates variables which must be replaced with real values.

[abc]: Indicates options that can be specified (syntax).

[key]: Indicates a key on your keyboard. If you need to enter text in uppercase, the Shift key is specified, for example, [SHIFT] + [A] for A. If you need to press two keys at the same time, this is indicated by a plus sign between the two key symbols.

Screenshots

Some of the screenshots are system-dependent, so some of the details shown may differ from your system. There may also be system-specific differences in menu options and commands.


2 Deployment - Overview

Today's standard server environment has significantly changed the requirements of server administration. In a typical IT environment a large number of servers are used and must be managed remotely and without consoles. There are server farms, for example, where hundreds of blade servers are installed and must be administered.

Scalability and ease of deployment or support for different tasks must be offered to fulfill the increasing demands on flexibility in IT systems. While a system may be used as a Web server for the Internet presence of a company during the day, it could be used as an FTP server at night with a different IP address. This must be achieved simply by remote administration rather than through physical operation. Administrators often have to set up the same software stack, such as the operating system and application software, on hundreds of identical servers in a server farm. In this case, manual installation is not feasible.

New strategies must be developed to deploy the boot image of a server over the LAN without any interaction. The basic idea of deployment is to create one or more central deployment servers to administer this enormous number of clients over the LAN. To manage the servers on the fly, special baseboard management controllers (such as management blades for blade servers, or Kalypso service processors) are assigned inside of a server chassis to guide the servers during their boot and working phases.

Deployment is the convenient method of preparing server systems to start their work immediately. Deployment servers are responsible for preparing multiple servers and their environment from a central instance over the LAN. The servers can therefore start their work immediately.

Deployment Manager provides two procedures for deployment:

- Cloning

  Cloning means to install a reference system, create an image file of that reference installation, and clone this image file to a group of servers to be installed with the same configuration parameters. This procedure is described in more detail in the section "Cloning process" on page 32.

- Mass remote installation

  Mass remote installation means to use configuration files created with Installation Manager, assign them to servers or server groups, and then start the installation for all servers. This procedure is described in more detail in the section "Mass remote installation" on page 41.

2.1 Supported server systems

Deployment Manager supports the following types of server:

- Generic IA32-based server
- PRIMERGY server
- PRIMERGY blade server
- PRIMEQUEST server (only as target server)

2.1.1 Generic server

From the deployment point of view, a generic server is a server that has been ideally prepared, so that each device access can be handled via a standardized generic device or BIOS APIs.

Deployment Manager provides generic methods to support almost any kind of IA32-based server system. Cloning with Deployment Manager requires the following standard APIs:

- PXE boot capability as part of the BIOS code, based on PXE specification V2.1
- A standard onboard NIC with full UNDI API support, as required by PXE specification V1.2
- A fully prepared bootable storage device accessible via the DOS Int13h BIOS API

For a generic server, Deployment Manager expects a fully prepared storage device, e.g. with a complete RAID configuration. This must be done by the user manually or by booting a PXE image in floppy emulation. This image can be a raw copy of a bootable DOS floppy containing everything needed to perform an unattended system preparation of a target server. The requirements for this Deployment Manager preparation image are similar to those of the preparation image required by MS ADS (Automated Deployment Services). For the creation of MS ADS-compatible PXE preparation images, each system vendor offering ADS-compatible server systems provides a Web page with instructions on how to prepare such a system preparation image for each server type, together with the required command line tools.

Finally, each image must automatically initiate a reboot of its target server in PXE mode (enabled statically in the BIOS settings) to continue the Deployment Manager generic cloning phase. These kinds of images can be administered via the Deployment Manager repository management in a separate preparation boot image repository.

An example of such a preparation boot image is provided with the Deployment Manager software in the Sample_PreparationBootImages directory. See also the ReadMe file in this directory for further information.

Once a server fulfills the generic server requirements as described above, it can be cloned by Deployment Manager with the system preparation method Preparation Boot Image. This type serves as a common classification for all types of server prepared in the appropriate way in the system preparation phase, either by Deployment Manager or by the users themselves.

It is not necessary for a generic server to support remote management beyond a static PXE boot mode, as configured once in the BIOS settings. PXE boots are initiated manually or simply by a reboot driven by the locally running preparation image. A remote power-on of a generic server can be done via the "Wake On LAN" functionality integrated into Deployment Manager, provided the server supports "Wake On LAN".

Remote identification of a generic server is not possible. The initial MAC address must be added manually.

2.1.2 PRIMERGY server

PRIMERGY servers are developed by Fujitsu Technology Solutions and can be supported in a much more detailed and convenient way than generic servers.

PRIMERGY servers are equipped with:

- An onboard or plugged-in service processor for remote management, providing for example:
  - automatic identification of the MAC address by Operations Manager
  - automatic remote PXE boot
  - automatic modification of the remote boot device table
- Remote management via SNMP or IPMI (Intelligent Platform Management Interface; Kalypso- or Kronos-supported)
- High-level system preparation, e.g. of RAID controllers and attached storage devices
- Full support by Fujitsu Technology Solutions service

Based on these services, Deployment Manager can directly check whether or not a cloning image is compatible with a given PRIMERGY server. After cloning, this server can be managed directly by Operations Manager and Remote Management.

PRIMERGY servers have at least 512 MB RAM and an onboard 100 Mbit LAN controller with one or more ports. The storage device may be a simple IDE or SCSI drive, an onboard RAID controller based on IDE, SATA, SCSI, or SAS, and/or a SAN-attached storage device with an additional administration LAN port. These systems may exist in a floor-standing or rack housing, with or without a local console.

2.1.3 PRIMERGY blade server

A blade server system is a rack system holding up to 18 (BX900), 9 (BX400) or 10 (BX600) plug-in server blades. Each server blade consists of one or two CPUs, one or two hard disks (IDE, IDE-RAID, SATA, SATA-RAID, SCSI, SCSI-RAID, SAS, SAS-RAID), at least 512 MB memory, at least 2 x 1 Gbit LAN ports, and the system chip set.

To manage these server blades remotely both online and offline, two management blades are assigned inside of these racks with a local IPMI-based communication path to each server blade at each significant point in its life cycle, e.g. power off, BIOS boot phase, operating system boot phase. The hard disk may be controlled by a SCSI/IDE or IDE/SCSI RAID controller. Management blades have their own type of hardware to administer and control the server blades in respect of power, temperature, BIOS and so on.

To provide fast and stable connectivity to the server blades over the LAN, two or four switch blades are available in the same rack. 1 Gbit Ethernet channels or Fibre Channel connections are used. These consist of real 1 Gbit or 2 Gbit switch hardware with external connectors to the external LAN, and are hard-wired with internal port connections to each LAN chip port of each server blade in a chassis. Two LAN ports per blade are connected to these switch blades for redundancy purposes. Each LAN port has its own MAC address. It is important that the PXE BIOS uses port 0 by default to correctly handle the PXE protocol. In the CPU blade BIOS, you can define which port is used as the default PXE boot port.

A blade server can have a VGA connector and two USB 2.0 connectors for a USB floppy or USB CD-ROM drive, for service purposes only. This is useful for native operating system installations on a reference server.

The racks may be assembled into a rack tower to concentrate the number of CPU blades per square meter. A server farm may consist of many of these rack towers.

2.1.4 Detection of target servers

Deployment Manager obtains the list of server systems from the Operations Manager front-end by reading the ServerView database.

Deployment Manager also offers an Add Server dialog in the New Server wizard to create server list entries for servers which are initially not detected by Operations Manager. In this case the MAC address, IP address and host name must be added manually. These servers are called bare servers. If a server is deployed and a ServerView agent is active and running on the clone, it will automatically be detected by the discovery cycle running during the lifetime of the Operations Manager front-end. Operations Manager will take over responsibility for that server entry and update all parameters received from the agent (including LAN port settings).


2.2 Cloning process

The cloning process consists of the following steps:

- installation of a reference system
- image creation from that reference system
- cloning of an image to one or more target server systems

Figure 1: Cloning deployment cycle

The following sections describe these phases in more detail.

2.2.1 Reference installation

First you must install and configure a server system (the reference system) manually. This so-called reference installation on the reference system is used to install further servers in the same way. For each kind of server to be installed via Deployment Manager, there must be one reference system.

Figure 2: Deployment cycle - reference installation

A reference system is basically a simple server of a specific type. It may have additional devices attached, such as a monitor, mouse and keyboard, mass storage devices (e.g. RAID), or a USB CD-ROM and USB floppy drive in the case of local operating system installation via CD and floppy.

The hardware configuration must be detected by several agents, and the system resources must be assigned by the BIOS and operating system instances. A large number of parameters of the BIOS, storage systems and operating system must be defined beforehand and interactively in a separate configuration session, or interactively in parallel with the installation process. Detailed information on the installation process of the individual reference systems can be found in the corresponding installation documentation.

Applications installed on the reference system may not run successfully on a target system after cloning without adaptation. In that case, user-specific scripts must be run after the cloning process to ensure successful operation.

After each basic operating system installation, you may install and configure appropriate applications which will be started on each cloned server immediately after reboot. Image creation software, which is part of the deployment server software, generates a copy of this installed hard disk as an image file and copies it to the deployment server repository. It is used as a reference (master) image for further cloning sessions. The server management front-end of the deployment software allows you to define assignments between clients and reference images.

Many parameters of the BIOS, storage systems (RAID) and operating systems can be defined interactively in a separate configuration session of Installation Manager before the installation, or interactively in parallel with the installation process using native operating system installation via local devices. You can use this installation as a reference installation for other servers with the same hardware and BIOS configuration. The other systems may differ in memory size, hard disk size and RAID level.

The following methods of reference installation are supported:

- Local installation with Installation Manager
- Multiple remote installation with Installation Manager (maximum five clients)

A detailed description of Installation Manager can be found in the Installation Manager user guide, which is available on the ServerView Suite DVD 2.

Local installation with Installation Manager

Local installation with Installation Manager requires a bootable USB DVD-ROM and a graphical console with mouse, monitor and keyboard connected to the reference server for the master image generation. For blade servers the graphical console with mouse and keyboard is available using the KVM switch functionality via the management blade.

With this local installation method you can use the Guided Mode of Installation Manager, which provides an online check of the defined configuration parameters against the target hardware.

Multiple remote installation with Installation Manager and PXE

You can use Installation Manager Remote Installation, which you will find on the Installation Manager CD/DVD version 6.605 (or higher) or on the ServerView Suite DVD 1 as of version 7.810. To do this, you must install the Installation Manager CD/DVD or the Installation Manager from the ServerView Suite DVD 1 on the PC from which you want to perform the installation. This PC acts as the deployment server for the remote installation. It can be a notebook or a server.

Installation Manager uses WinPE as the boot platform and is booted via PXE into the RAM of each target server. This allows any kind of remote access during the installation process.

Multiple remote installation is recommended for installing up to five clients.

Detailed information on the reference installation with Installation Manager can be found in the Installation Manager user guide.

2.2.2 Image creation

From the reference system an image file is created which contains the configuration information on the reference system, e.g. on the hard disk, the configuration of the partitions, and all data of the operating system. The image is independent of the RAID level, the BIOS settings and the memory size of the target systems.

Figure 3: Deployment cycle - master image creation

The following methods of image creation are supported:

- File-system-dependent image creation

  This method is only available for supported file systems, which only come with the supported operating systems (see section "Partitioning and File System Formatting" on page 361). In this case only used areas are copied to the image. This results in small image sizes and reduced cloning time. The target partition may be smaller than the partition on the reference system.

- File-system-independent image creation (raw image creation)

  This method is used for unsupported file systems. In this case the complete partition information is copied to the image file and must be cloned to a partition of the same size. The image may be larger, and the cloning process takes more time to recreate the partition.

- Snapshot image

  A "snapshot image" can be used to restore the server after a system crash.

The images are stored in an image repository. This is located in a shared folder on any file server in the network. The image is used as a reference image for further cloning sessions. Each image object in the repository refers to a set of files with identical names but different extensions:

*.img  Reference to the binary cloning image file (URL of a storage location somewhere in the network).

*.txt  Image documentation (e.g. a reference to a text document which is created during the image creation); it can be used for describing an image in a Deployment Manager frame.

*.cfg  A hardware and operating system parameter file. This information is used for later modification of an existing image and for a compatibility check against the assigned target hardware.
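As an illustration, a repository entry for an image named, say, Win2008_Ref (a made-up example name) would consist of the following files:

    Win2008_Ref.img
    Win2008_Ref.txt
    Win2008_Ref.cfg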

The image creation process can be started from Deployment Manager. Detailed information on image creation can be found in the chapter "Image creation" on page 153.

How to use Deployment Manager is described in the chapter "Working with Deployment Manager" on page 87.

2.2.3 Cloning

The cloning process deploys an image to the servers. It prepares the hardware, BIOS and storage devices, copies the image onto the hard disks, and initiates a reboot of the target server. No configuration session is required during the cloning process or later.

The result is an identical system, with the exception of variable operating system parameters such as IP address, host name and SecureID (Windows systems only), which must be unique to each server.

Figure 4: Deployment cycle - cloning

The hardware configurations of the clone and the reference system must be identical, except for the RAID level, memory size, CPU clock and hard disk capacity.

It is possible to define assignments between reference images and servers by setting up logical groups, called deployment groups, in the Deployment Manager front-end. Cloning images can be assigned to deployment groups and can be cloned simultaneously.

Deployment groups must be created manually. Each group has its own set of group attributes (parameters) relevant for all group members, e.g.:

- Network path to the master image in the Deployment Manager repository
- File system formatting type
- Final power status after the post-cloning phase has finished

All physical clients of a deployment group will be cloned with the assigned image, and the target storage device will be configured in the specified way for each server.

The attributes for a member of a physical group are basically defined in the deployment table, which contains an entry for each server listed in the physical server list. This deployment table is always the definitive reference for the current deployment status of a server, indicating the last operation performed by Deployment Manager for that server or the currently running operation (e.g. cloned or installed). Deployment Manager always uses the parameters from this table. This ensures that the same status is shown for each server even if more than one Deployment Manager front-end has been started. If one front-end has initiated a job on a particular server, other front-ends cannot access this server as long as the job is running.

Each group member may have its own set of group parameters which may overrule the group settings for this server. Finally, each physical server, regardless of whether it is a member of a group or not, has its own set of individual physical settings which are stored server-specifically in the deployment table.

A cloning session consists of several phases:

1. The target server is powered on.
2. The system is prepared.
3. The image is cloned to the server.
4. The post-preparation tasks are performed.

Each phase offers different alternatives depending on the types of servers to be cloned. Details can be found in the chapter "Cloning deployment process" on page 349.

The cloning process with Deployment Manager works as follows:

Preparations

- The administrator defines a deployment group, assigns an appropriate image file, and adds the servers to the deployment group for deployment on the basis of the assigned image file.
- The deployment table is set up with the individual parameters for the servers: IP settings, host name etc. For each server in the group, a separate entry is created in the deployment table.

Starting the cloning process

- The administrator defines the cloning parameters and selects the servers for the deployment job.
- He/she initiates the cloning directly or configures the scheduler once per server in the deployment group.

Cloning process

- Deployment Manager prepares the system based on the preparation mode selected by the administrator:

  - unchanged:
    For target servers that have already been prepared manually. No additional preparation tasks are performed.

  - Preparation Boot Image:
    For target systems to be booted via a preparation boot image created by the users themselves. The operations are performed via PXE boot from a 1.44 MB floppy image or an external bootstrap image. (See also "Generic server" on page 28.)

  - All Primergy:
    For target servers based on WinPE PXE boot (e.g. PRIMERGY servers). The RAID configuration is done via Installation Manager.

- Deployment Manager performs the cloning task. It copies the image to the target system.

- Deployment Manager performs the post-preparation tasks defined by the administrator, e.g. executing user-defined installation scripts and changing the power status.

  Deployment Manager stores the final deployment status of the server in the central deployment table.

The cloning process is described in detail in the chapter "Cloning deployment process" on page 349.

Information on handling the cloning process via the Deployment Manager front-end can be found in the chapter "Mass Cloning" on page 165.

2.3 Mass remote installation

With Deployment Manager, the unattended installation process of Installation Manager can be initiated for mass installation via the Deployment Manager console. Appropriate configuration files created with Installation Manager must exist.

The maximum number of supported target servers is basically unlimited but is influenced by the bandwidth of the network topology used. The final value must be determined for each environment. For example, in a 1 Gbit LAN topology about 20 to 25 concurrent installation jobs are a sensible limit.

The mass remote installation process consists of the following steps:

- Create installation groups
- Assign configuration files to installation groups
- Start the mass installation process

Figure 5: The deployment cycle for mass remote installation


Installation groups are formed in order to group servers of a specific type with specific characteristics. A different configuration file can be assigned to each server of the installation group.

A server can be assigned to different installation groups, so it can take on different roles at different times.

The configuration files created by the Installation Manager wizards during the configuration phase are stored in a repository folder and can be assigned to servers or server groups for mass installation. The administrator assigns an appropriate configuration file to the members of an installation group. A specific configuration file can be assigned to each server in the group.

At the start of the installation, the configuration files are merged with the individual server information contained in the Deployment Manager deployment table, and the remote installation processes are started in parallel.

The remote installation process with Deployment Manager works as follows:

Preparations

- The administrator defines a remote installation group and decides whether all servers in the group will have the same configuration file assigned or each server will have its own configuration file.
- The administrator adds servers to the installation group and assigns the appropriate configuration file to the group or different configuration files to individual servers.
- The deployment table is set up with the individual parameters for the servers: IP settings, host name and administrator password.

Starting the mass installation process

- The administrator sets the power control status for the group or for individual servers (power off or power on).
- The administrator initiates the remote installation directly or configures a scheduled installation job.

Mass installation process

- A separate configuration file is created for each server in the group.
- Deployment Manager performs the installation task.
- Deployment Manager stores the final server status in the central deployment table.

Figure 6: Mass installation with Deployment Manager

The installation process with Deployment Manager is described in detail in the chapter "Installation" on page 67.

How to use the Deployment Manager front-end is described in the chapter "Working with Deployment Manager" on page 87.


2.4 System architecture

2.4.1 Hardware architecture

Deployment Manager is designed for a server-client architecture. It supports Windows Server 2008 or Windows Server 2012 operating systems as management platforms.

The clients, i.e. the Deployment Manager front-end user interfaces, can be installed on a central management station or can be accessed via a Web interface from any workstation with a Web browser in the network.

Figure 7: Topological overview of a deployment environment


Deployment server

The deployment server contains the following software packages:

- a Windows Server 2008 or Windows Server 2012 platform
- Installation Manager V6.605 (or higher) for mass remote installation and for the support of mass cloning for all PRIMERGY servers
- a DHCP (Dynamic Host Configuration Protocol) service (if there is none present in the LAN segment); PXE requires an existing DHCP server
- a PXE service coming from Deployment Manager, Installation Manager or the Update Manager tool

Only one deployment server can be in use by a Deployment Manager front-end per LAN segment, but several deployment servers can exist per LAN segment. If PXE and the DHCP service are installed on one machine, only one deployment server per segment is allowed.

Even if it is allowed to have more than one DHCP service and one PXE service in one LAN segment, it is recommended to have only one of each service. Otherwise, it cannot be guaranteed which service will be used by a client unless the DHCP and PXE services are configured carefully. The DHCP service may be installed on the same server as the PXE service, but this is not necessary.

The Deployment Manager cloning module requires Windows Server 2008 or Windows Server 2012 (for the PXE service) installed on the deployment server to support DHCP proxy functionality.

For fast image creation handling, the partition where the cloning module is installed should always have at least as much free space as the size of the image currently being created. Otherwise the image creation process may stop. Using raw mode for image creation, the size can be up to 80% of the disk size (normally about 2/3 of the disk size).

LAN environment

For administrative purposes, the LAN may be organized hierarchically with external switches, hubs and gateways at each node point. For switches, Virtual LAN (VLAN) software may be used to simulate hub or bridge functionality.

The deployment server must be placed inside each LAN segment or on a higher level in the hierarchy behind switches or hubs. In these cases, the switch or hub ports must be configured to let through broadcasts coming from the PXE BIOS to the PXE boot server, as well as multicast broadcasts.

2.4.2 Software architecture

Figure 8: Architecture - overview

Deployment Manager obtains the list of server systems from Operations Manager by reading the ServerView database. Both the Deployment Manager front-end and Operations Manager can run on the same machine. This machine does not have to act as a deployment server on which the Deployment Manager cloning module and the PXE service are running. For easier installation and administration, however, it is advisable to install all components (the Deployment Manager package, the Deployment Services package and Operations Manager) on one deployment server.

ServerView Deployment Manager service

The ServerView Deployment Manager service is the central node which manages jobs for deployment, provides status and parameters to the Deployment Manager front-end, and collects information from the ServerView database and Installation Manager. The ServerView Deployment Manager service itself consists of different building blocks for the different actions. One important block is the module which is responsible for the communication with the cloning module.

The cloning module may be located on a separate server, the deployment server.

Both the ServerView Deployment Manager service and the cloning module have access to the image repository which contains the images for cloning jobs.

Remote control

Remote control is used to initiate requests to the target server, especially for power control and server detection. Access to the service processors such as the management blade (via SNMP) and the Kalypso BMC (via IPMI) is supported via remote control. For PRIMERGY servers supporting none of these services, a message box in the Deployment Manager front-end asks the user to initiate the tasks manually instead of remotely.

Installation Manager Extension

The Installation Manager Extension is used to communicate with Installation Manager. It is session-oriented, based on a ServerStart agent concept. Starting and stopping an agent identifies the start and the end of a session. In between, certain commands can be sent down to the running agent. Each agent has its session ID, based on which the ServerView Deployment Manager service can manage many tasks to different agents in parallel, as required, for example, for mass remote installation.

2.5 First step guide for using Deployment Manager

The following steps describe the procedures from the installation of the Deployment Manager packages to starting the deployment task.

Depending on your deployment/installation purpose, you might need additional procedures which aren't described below. Therefore, read the relevant chapters in this manual for details.

Step 1: Configuring the DHCP server

If no DHCP server exists on the network, you need to configure one. For details, please refer to section "Deployment server" on page 57.

Step 2: Installing the necessary software on the deployment server

Install the necessary software on the deployment server. For details, please refer to chapter "Installing Deployment Manager" on page 51. In most cases, at least the following software is needed:

1. ServerView Operations Manager (local or remote)
2. Java Runtime Environment package
3. Deployment Manager package
4. Deployment Services package
5. ServerView Installation Manager (mandatory with Deployment Manager as of version 5.50)

Step 3: Creating the image repository

Deployment Manager needs a shared network directory as the repository in order to save the image/installation configuration files and so on. For details, please refer to sections "Creating an image repository" on page 66 and "Repositories" on page 106.

Step 4: Registering target servers

Deployment Manager can automatically get the server list from the ServerView Operations Manager database. Therefore, if the server you want to deploy or install is already being managed in Operations Manager, you don't need to manually register the target server in the Deployment Manager front-end.

However, if you want to add new servers which aren't being managed in Operations Manager, you can add them in the Deployment Manager front-end. In order to register new servers, the MAC address of the server is needed. For details, please refer to section "Adding new servers" on page 132.

Step 5: Configuring the deployment configuration

Before deploying the image to servers or before installing the servers, you must configure the deployment configuration of the target system. For details, please refer to section "Configuration before cloning/installation tasks" on page 118.

Step 6: Starting the task

You can create the cloning/installation/crash recovery task after finishing all steps from step 1 to step 5. For details about each task, please refer to the following sections:

- Mass Cloning: section "Mass Cloning" on page 165 and "Image creation" on page 153
- Mass Installation: section "Mass Installation" on page 243
- Crash Recovery: section "Crash Recovery" on page 295 and "Image creation" on page 153


3 Installing Deployment Manager

Deployment Manager is available on the ServerView Suite DVD 1. It is also possible to download the software and the latest updates from the download area of the Fujitsu Technology Solutions web server (http://support.ts.fujitsu.com/com/support/downloads.html).

In Japan, it is also possible to download the software from the web site http://www.fmworld.net/cgi-bin/drviasearch/drviaindex.cgi.

Deployment Manager consists of two installation packages:

- Deployment Manager package
  Web-based graphical user interface including the ServerView Deployment Manager service.

- Deployment Services package
  The deployment services, including the PXE service, TFTP service and deployment service (= cloning module).

You can install both packages at the same time or separately from each other. The packages can be installed on different systems.

If an image repository is used by more than one computer in a LAN, you must release the repository for the user account that was specified during the installation process.

Deployment Manager must access the ServerView database. Both modules can be installed on one system. During the installation you will be asked where Operations Manager is installed.

The following add-on packages are provided with the Deployment Manager software:

- JavaRuntimeEnvironment package
  Package for the Java runtime environment. This package is required for the graphical user interface of Deployment Manager.

- Installation Manager package
  Installs Installation Manager from the ServerView Suite DVD 1 as of version 7.810.


3.1 Software-Requirements

- Java Runtime Environment (Oracle (formerly Sun) JRE as of version 1.6.0_18)
- ServerView Operations Manager as of version 5.00 (local or remote installation)
- Installed DHCP server

For a detailed description of the system requirements see section "System requirements" on page 18.

3.1.1 JBoss web server

As of ServerView Operations Manager version 5.0, the Microsoft web server (MS Internet Information Server) and the ServerView web server (based on Apache for Windows) are no longer supported. As of ServerView Operations Manager version 5.0, the web server used is JBoss.

The necessary files for JBoss are automatically installed when ServerView Operations Manager is installed. JBoss is configured as the standalone service ServerView JBoss Application Server 5.1, which you can start and stop via the Windows start menu:

- Select Start – Administrative Tools – Services.

You can also start and stop the service via the following commands:

    %WINDIR%\system32\net.exe start "ServerView JBoss Application Server 5.1"
    %WINDIR%\system32\net.exe stop "ServerView JBoss Application Server 5.1"

Automatic deletion of JBoss access logs

Log files localhost_access_log.<date>.log are written to the access log directory …/jboss/server/serverview/log. A separate log is created for each day. In previous versions they were never deleted.

As of Operations Manager version 5.10, JBoss includes automatic deletion. In the …/jboss/server/serverview/conf directory, the configuration file sv-com-config.xml describes the parameters:

File name prefix list (default: "localhost_access_log.")
List of file name prefixes qualifying the files that are to be checked for deletion. The default is a list with the single entry "localhost_access_log.".

Test interval (default: 12:00)
Test interval, measured in minutes. The test interval can range between 1 minute and 24 hours. The behavior of the automated file deletion depends on the specified value:

- Number [1...1439]: File checking begins when JBoss starts up, and is repeated at the specified time interval.
- Time [hh:mm]: The test interval is 24 hours and the file checking occurs every day at the specified time. (The values for hh range from 00 to 23.) The default is 12:00, i.e. files are checked every day at noon.

Maximum file age (default: 10080)
Maximum age of a log file, measured in minutes. Any file matching an entry in the fileNamePrefix list which is older than this value is deleted. The default is 10080, i.e. seven days. If no value is specified in the configuration file, the value 4320, i.e. three days, is used.

3.1.1.1 Launching and ports used

The launch address for Operations Manager begins with the prefix https.

Port(s)                               Used for
3170                                  HTTPS (the port must be unlocked in the firewall)
3171                                  Update Management (the port must be unlocked in the firewall to allow Update Management)
3172                                  Remote Connector Service (Server Configuration Manager, Power Monitor, etc.; the port must be unlocked in the firewall)
3173                                  ServerView RAID Manager
1072, 1111, 1149, 1287, 1301, 1302,
1325, 1336, 1338, 1374, 1380, 1383,
1385, 1399, 1400, 1401, 1404, 1441,
1442, 1443, 1445, 1446, 1447, 8787    JBoss (only used for internal socket connections)
1468                                  JBoss anonymous port
1473                                  Non-SSL port of ServerView's directory service (OpenDS)
1474                                  LDAPS, if OpenDS is configured as directory service
4444                                  OpenDS control port
9212                                  PostgreSQL database server (only for Linux)
9363                                  Used by Operations Manager to contact XEN daemons
16509, 16514                          Used by the KVM service

For detailed security information see the White Paper "Secure PRIMERGY Server Management".
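The ports marked above as requiring firewall access can be unlocked either via the Windows Firewall control panel or from the command line. A minimal sketch using the standard netsh tool on the management station (the rule names are arbitrary examples):

    netsh advfirewall firewall add rule name="ServerView HTTPS" dir=in action=allow protocol=TCP localport=3170
    netsh advfirewall firewall add rule name="ServerView Update Management" dir=in action=allow protocol=TCP localport=3171
    netsh advfirewall firewall add rule name="ServerView Remote Connector" dir=in action=allow protocol=TCP localport=3172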

To gather information, Operations Manager accesses the following ports on all network nodes of a subnet specified in the ServerBrowser and on managed servers.

Port     Used for
80       Citrix
135      Hyper-V
161      SNMP
443      VMware
623      BMC (iRMC)
3172     ServerView Remote Connector
5988     VMware
5989     VMware
9363     XEN
16509    KVM
16514    KVM

3.1.1.2 Role-based user administration

JBoss also enables role-based user administration. For further information on role-based user administration, see the "User Management in ServerView" user guide.

3.1.1.3 Managing certificates

To communicate with the JBoss web server, web browsers always use an HTTPS connection (i.e. a secure SSL connection). Therefore, the JBoss web server needs a certificate (X.509 certificate) to authenticate itself to the web browser. The X.509 certificate contains all the information required to identify the JBoss web server plus the public key of the JBoss web server.

For further information on managing certificates, see the "User Management in ServerView" user guide.


3.1.2 Installation of ServerView Installation Manager

If Installation Manager as of version 7.804 is installed on the deployment server, you can use the Deployment Manager function to create installation groups for the mass installation of PRIMERGY servers with Installation Manager configuration files. If Installation Manager as of version 6.711 is installed on the deployment server, you can select WinPE MDP as the deployment platform for cloning.

You can also install Installation Manager from the ServerView Suite DVD 1 as of version 7.810.

- There are no dependencies between the Installation Manager package and the Deployment Manager package. These packages can be installed or uninstalled in any order.
- The Mass Installation functionality depends on ServerView Installation Manager. Therefore, you must also meet the software requirements of ServerView Installation Manager. For example, when you want to install a Linux system, you need to configure an FTP, HTTP or NFS server. For details, please refer to the ServerView Installation Manager user guide.

3.1.3 Installation of ServerView Operations Manager

Deployment Manager is integrated in the user interface of Operations Manager. The Operations Manager software package (as of version 5.00) must be installed locally or remotely before you start the Deployment Manager installation.

This package is available on the ServerView Suite DVD 1, which is a component of the ServerView Suite. For more information on installing Operations Manager, see the Operations Manager installation guide on the ServerView Suite DVD 2.

ServerView Deployment Manager reads the server list from the Operations Manager database.


3.1.4 Installation of ServerView agents

During the image creation process you can specify whether the server should be shut down before the image creation process is started, see section "Options step (Create Cloning Image wizard)" on page 170.

If you want to do this, you need to install and run ServerView agents on the server:

- On a Windows server platform, ServerView agents as of version 3.10.05.
- On a Linux server platform, ServerView agents as of version 3.10.06.

During the installation of the ServerView agents you must allow SNMP Set operations. How to install and configure these agents is described in the Operations Manager installation documentation on the ServerView Suite DVD 2.

You must also configure the SNMP service in the Properties window on the system. In the Security tab, add the community with the relevant SNMP rights.
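As an alternative to the Properties window, the community can also be added to the registry of the Windows SNMP service from an administrative command prompt. This is only a sketch: the community name public is an example, and the REG_DWORD data encodes the SNMP rights (e.g. 4 for READ ONLY, 8 for READ WRITE):

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ValidCommunities" /v public /t REG_DWORD /d 8 /f
    net stop SNMP
    net start SNMP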

3.1.5 Deployment server

Deployment Manager and Installation Manager support remote access to required resources over a LAN during the installation process. The deployment software is booted using the PXE boot service, allowing an unattended installation to be started on a remote server.

In order to perform a remote installation using Installation Manager or image creation/cloning using Deployment Manager, the following software packages must be available on the deployment server:


- Microsoft Windows Server 2008 or Windows Server 2012

  - You have to configure the Windows firewall on the reference system, see "Configuring the Windows firewall on the reference system" on page 160.
  - Please see the "ServerView Installation Manager" manual for a list of operating systems that are supported by ServerView Installation Manager.

- Installation Manager as of version 7.804 for PXE-based remote installation

- Installation Manager as of version 6.711 and the deployment service (cloning module) for PXE-based remote image creation and cloning

- A DHCP service (if none is already present in the LAN segment)

The deployment server and the target system must be in the same LAN segment (only with the deployment method Multicast). Only one deployment server can be in use, but more than one can exist in one LAN segment. The IP address of a deployment server must be static.

Installing the deployment server on PRIMEQUEST is not supported. PRIMEQUEST is supported only as a target server.

PXE Service

The Fujitsu PXE service must run on the deployment server. It comes with either Deployment Manager, Installation Manager (as of version 6.711) or Update Manager. These components use the same PXE service. Since the PXE service API is not standardized, other PXE services are not supported.

To avoid conflicts of PXE services during the installation, Deployment Manager automatically detects other PXE services on the LAN. This works only for aggressive services which react to each PXE request independently of the client MAC address used.

If Installation Manager, Deployment Manager and possibly Update Manager are installed on the same machine, the basic PXE service will be shared between these modules. The installer of each product will enhance the basic PXE service by adding specific modules for its specific use of the basic service. With Installation Manager the PXE service will only be installed/enhanced if Classic + Remote Installation is selected in the first installation frame.

You can only install one PXE service per system. This service uses port 4011 (default) on the network. If more than one network interface card (NIC) is installed on the deployment server, the PXE service can be assigned to one of them. To reassign the NIC to the PXE service, edit the file localipaddress.txt in the installation folder .\bin of the deployment service.
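A sketch of such an assignment, assuming that localipaddress.txt simply holds the IP address of the NIC to be used on a single line (check the file in your installation for the exact format expected):

    192.168.10.1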

DHCP Service

If no DHCP service is installed on the deployment server, the PXE service uses the standard DHCP port. A PXE client must scan for a PXE service in the LAN segment by transmitting a broadcast on port 67.

If a DHCP service is present on the deployment server, the PXE service is installed as a DHCP proxy service (activated DHCP option: 60). In this case the DHCP service is able to directly report the IP address of a PXE service (NBS = NetBootService) to a DHCP client, whereupon the client can directly access the PXE service by its IP address on port 4011.

If the DHCP and PXE services are installed on different systems, the 060 option must be disabled inside the DHCP service.
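On a Windows DHCP server, option 060 can be managed from the command line as well as via the DHCP console. A sketch using the netsh dhcp context, where 192.168.10.0 is an example scope; the delete command corresponds to disabling the option when DHCP and PXE run on different systems:

    rem define option 060 once and set it for a scope (DHCP and PXE on the same system)
    netsh dhcp server add optiondef 60 PXEClient STRING 0 comment="PXE boot support"
    netsh dhcp server scope 192.168.10.0 set optionvalue 60 STRING PXEClient

    rem remove the option value (DHCP and PXE on different systems)
    netsh dhcp server scope 192.168.10.0 delete optionvalue 60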

For more details on the PXE protocol, see the detailed description of the PXE service.

3.1.6 Network configuration

Deploying operating systems requires a high-bandwidth network operating at 100 Mbps at least. In addition, special network services and techniques are used to control the transfer of the transmitted packages.

When extensive cloning images are to be transferred to a large number of clients, the standard multicast protocol is used. Each data package is sent over the LAN only once. Recipients must be members of the relevant multicast session group in order to collect and read these packages. Otherwise the transferred packages are ignored. After completion of a data transfer, each recipient returns an acknowledgment of receipt to the multicast server. When all receipts have been submitted, the server sends out the next data package.

If one of the session group members is using a 10 Mbps LAN connection, the whole multicast session will run at 10 Mbps. When it comes to multicast, therefore, it is highly advisable to set up a homogeneous LAN environment running at 100 Mbps or faster.

Managed switches and routers can control the transmission of multicast packages in order to prevent non-multicast session members from being bothered by them. This standard for IP multicasting in the Internet is called the Internet Group Management Protocol (IGMP). With the IGMP protocol, only members which have joined the current multicast session will see the sent-out packages.

3.1.6.1 LAN connections

The LAN ports can be connected externally in one of the following ways (the following assumes that Deployment Manager and Operations Manager are installed on the same server):

[Figure] See "LAN connection topology and IGMP settings (Switch Blade)" on page 63, cases [02], [04].

[Figure] See "LAN connection topology and IGMP settings (Switch Blade)" on page 63, cases [06], [08], [10], [12].

[Figure] See "LAN connection topology and IGMP settings (Switch Blade)" on page 63, cases [05], [07], [09], [11].

For high-performance configurations a switch or router should be used instead of hubs.

3.1.6.2 LAN connections (blade server)

Each switch blade has ten downlink ports to the server blades and three uplink ports to level-3 switches. These LAN connections provide the only I/O functionality for the server blade.

The two management blades have separate LAN connections for maintaining accessibility to the system even if the standard traffic network connections are down.

In the Blade Server System manuals, you will find various examples of the blade system configuration and an overview of the LAN mapping.

The management blade is fixed at 10 Mbps only (no autosensing). The other end of the management blade LAN should be configured as "10 Mbps/full duplex fixed". If autonegotiation is configured on the switch/router side, the connection is negotiated to 10 Mbps/half duplex, and collisions will occur.

3.1.6.3 LAN connection topology and IGMP settings (Switch Blade)

The following table summarizes the LAN connection topology and IGMP settings of switch blades and external switches/routers:

Case  External switch  Switch blade/NIC  BMC/Management blade LAN  Performance of deployment
[01]  not used         IGMP off          not same segment          not configurable
[02]  not used         IGMP off          same segment              governed by 10 Mbps *1
[03]  not used         IGMP on           not same segment          not configurable
[04]  not used         IGMP on           same segment              IGMP effective
[05]  IGMP off         IGMP off          not same segment          not configurable
[06]  IGMP off         IGMP off          same segment              governed by 10 Mbps *1
[07]  IGMP off         IGMP on           not same segment          not configurable
[08]  IGMP off         IGMP on           same segment              governed by 10 Mbps *1
[09]  IGMP on          IGMP off          not same segment          not configurable
[10]  IGMP on          IGMP off          same segment              governed by 10 Mbps *1
[11]  IGMP on          IGMP on           not same segment          not configurable
[12]  IGMP on          IGMP on           same segment              IGMP effective

*1: When a large number of broadcast packets is sent to a broadcast group which contains 10 Mbps ports, the overall performance of the broadcast is governed by the slowest port, e.g. 10 Mbps.

IGMP enabling is only effective in cases [04] and [12]!

3.1.6.4 Multiple segment cloning

Deployment Manager uses unicast (image creation and cloning) and multicast (cloning only) data transfer.

Multicast and unicast transfers both work with a data flow control based on acknowledgment packages. This means that for each data package an acknowledgment from the client is expected. Combined with certain timeouts, up to 2 retries are initiated before a connection is set to the status "broken".

If a data package has bypassed the router and reached the client (which can be seen from the reaction on the client screen) but the acknowledgment has not reached the server, the client screen suggests progress in preparing the first partition, which is actually not the case.

To support cloning over multiple LAN segments via a router or VLAN border, you must set certain LAN ports to be passed through in the router configuration.

The following ports of the router must be enabled in the direction from the deployment server to the target:

Source: Deployment server
Destination: Target
Destination ports: 4973 - 4989/udp

The following ports of the router must be enabled in the direction from the target to the deployment server:

Source: Target
Destination: Deployment server
Destination ports:
67/udp, 4011/udp (PXE)
69/udp (TFTP)
4972/udp
4974 - 4989/udp
4974 - 4989/tcp

If the Deployment Manager and Deployment Services packages are installed on two servers in different segments, port 4971 must also be enabled in the router. Other multicast services in the LAN segment may use the same ports as well.
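Router ACL syntax is vendor-specific, but the same port set can be illustrated with Windows Firewall rules for the inbound direction on the deployment server (target-to-server traffic). A sketch for Windows Server 2008 R2 or later, where netsh supports port ranges; the rule names are arbitrary examples:

    netsh advfirewall firewall add rule name="DeplMgr PXE/TFTP" dir=in action=allow protocol=UDP localport=67,69,4011,4972
    netsh advfirewall firewall add rule name="DeplMgr cloning UDP" dir=in action=allow protocol=UDP localport=4974-4989
    netsh advfirewall firewall add rule name="DeplMgr cloning TCP" dir=in action=allow protocol=TCP localport=4974-4989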

The port selection by the current deployment server can be initialized in the registry settings of Deployment Manager at:

HKEY_LOCAL_MACHINE\SOFTWARE\FUJITSU\SystemcastWizard\CLONE

Make the following settings:

Portbase = 0x0000136e (4974)
PortRange = 0x00000010 (16)

After a reboot, the new settings are active. These settings are of interest when multiple deployment servers or multiple multicast applications are used in one segment.
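Equivalently, the two values can be set from an administrative command prompt; a sketch, assuming the entries are stored as DWORD values:

    reg add "HKLM\SOFTWARE\FUJITSU\SystemcastWizard\CLONE" /v Portbase /t REG_DWORD /d 4974 /f
    reg add "HKLM\SOFTWARE\FUJITSU\SystemcastWizard\CLONE" /v PortRange /t REG_DWORD /d 16 /f

As stated above, the deployment server must be rebooted before the new settings become active.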

If you want to transfer IP packages across multiple segment borders, the TTL counter in the header of an IP package must be set accordingly. With the TTL parameter, you can specify how many "hops" you allow multicast packets to make. Increasing the value means that multicast packets can be transferred over more routers.

This configuration is mandatory if multiple segment cloning is required. There is one entry with which you can manipulate the TTL in the registry of the deployment server operating system:

HKEY_LOCAL_MACHINE\SOFTWARE\FUJITSU\SystemcastWizard\CLONE\MRestore
-> Value: TTL

By default Deployment Manager uses the value 3.
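For example, to allow multicast packets to cross up to 5 hops (an example value), the TTL entry could be set as follows; again a sketch, assuming a DWORD value and, as for the port settings, a reboot afterwards:

    reg add "HKLM\SOFTWARE\FUJITSU\SystemcastWizard\CLONE\MRestore" /v TTL /t REG_DWORD /d 5 /f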



3.1.6.5 Multiple LAN ports of target systems

It is possible to use another LAN port of the servers for the deployment process. For this purpose, the other LAN port of each server must be configured using the Deployment Configuration wizard (see section "Remote Management Ports step (Deployment Configuration wizard)" on page 125 or "Remote Management Ports step (New Server wizard)" on page 139, Lan Port for PXE Boot option).

When the other LAN port is to be used for deployment, both the deployment server and the other LAN port of the server must belong to the same LAN segment (only with the deployment method Multicast).

The deployment server LAN port used for the PXE services is defined in the file localipaddress.txt.

3.1.7 Creating an image repository

- Create a shared network directory (UNC network path) on your deployment server (or on another server in the LAN segment).

- Share this folder so that it can be accessed remotely.

  Both user profiles created in the course of the Deployment Manager installation must be allowed to fully access this network share, see section "Installation" on page 67.

- Specify a repository name and the UNC network path in order to register the new share as an image repository in Deployment Manager. The images will be stored in this directory.

For details refer to the section "Repositories" on page 106.
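The first two steps can also be performed from the command line. A minimal sketch, in which D:\DeplImages and the account name DeplService are example values to be replaced by your own:

    mkdir D:\DeplImages
    net share DeplImages=D:\DeplImages /grant:DeplService,FULL
    icacls D:\DeplImages /grant DeplService:(OI)(CI)F

The UNC network path to register in Deployment Manager would then be \\<deployment server>\DeplImages.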



3.2 Installation

You have the following options for installing the Deployment Manager software:

- Via the ServerView Suite DVD 1, see section "Installation via the ServerView Suite DVD 1".

- Download from the Fujitsu Technology Solutions web server:
  http://support.ts.fujitsu.com/com/support/downloads.html

  In Japan, it is also possible to download the software from the following web site:
  http://primeserver.fujitsu.com/primergy/

3.2.1 Installation via the ServerView Suite DVD 1

The Deployment Manager software can be found in the directory SVSSoftware\Software\Deployment\DeploymentManager on the ServerView Suite DVD 1.

You can start the installation via the ServerView Suite DVD 1. Proceed as follows:

1. Insert the ServerView Suite DVD 1 in the DVD-ROM drive. If the DVD does not start automatically, click the setup.exe file in the root directory of the DVD-ROM.
2. Select the option Select ServerView Software Products.
3. Click Start.
4. In the next window, select the required language.
5. Select ServerView - Deployment Tools.
6. Click RDSetup.exe.

The installation wizard is launched with the start window.

Figure 9: Start window

Select the packages you wish to install. Click OK. For further information click Info.

If you select both Deployment Manager packages, the Deployment Manager package is the first one to be installed.

3.2.1.1 Installing the Deployment Manager package

If you select the Deployment Manager package in the start window, the Fujitsu ServerView Deployment Manager Setup window opens.

1. Click Next.

2. Approve the license agreement by selecting the appropriate option button.

3. Click Next.

4. Fill in your name and organization.

5. Specify whether the Deployment Manager settings should only apply to the current user or to anyone who uses the target computer. Select the appropriate option button.

6. Click Next.

7. Enter the license key. If you do not enter a valid license key, an evaluation key for 5 servers is created. After you start Deployment Manager, a message window is displayed which shows when the evaluation key will expire.

   Valid license keys can be installed later, see section "Licenses Installed step (Settings wizard)" on page 332.

8. Click Next.

   The ServerView Operations Manager, ServerView Deployment Manager and ServerView Installation Manager packages must be installed on the same drive. The ServerView Installation Manager Data packages can be installed on any drive.


3 Installing <strong>Deployment</strong> <strong>Manager</strong><br />

9. Set up your user account. The user profile you specify must already<br />

be available on the target system or domain.<br />

You can also create a new user. Click Create New User in the window.<br />

This opens the Create New User window. Enter the new user<br />

account.<br />

70 <strong>Deployment</strong> <strong>Manager</strong>


3.2 Installation<br />

If the Deployment Manager packages (Deployment Manager package and Deployment Services package) are installed on different computers, one of the following requirements must be met:

- When using one or more domain accounts, both computers must be present in the same Windows network domain. In addition, the domain user account specified during the installation process must be a member of this network domain.

- When using local user accounts, both computers may be present in different network domains or in no domain at all. However, both local user accounts must be allowed full access to the shared image repository. This can be achieved by setting up identical user accounts for Deployment Manager and the deployment services on the computer containing the repository. User names and passwords must match those of the local user accounts. If only one user account is to be used, it must be reproduced identically on all computers containing the Deployment Manager package, the Deployment Services package and/or the image repository.

The user account which is defined during the installation of Deployment Manager is assigned to the service ServerView Deployment Manager. The user account which is defined during the installation of the deployment service is assigned to the service Deployment Service. If the password of one of these users is changed by the administrator using the user management of the Control Panel, the password must also be changed for the corresponding service in the Administrative Tools of the Control Panel.

10. Select Start - Settings - Control Panel - Administrative Tools - Services. Double-click the service (ServerView Deployment Manager or Deployment Service) or select Properties from its context menu and change to the Log On tab.

This account shows the user name whose password has been changed. Change the password of that user. To activate the new settings, click OK, then stop and restart the service. Afterwards you should be able to log on to Deployment Manager again.
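If you prefer the command line, the stored service credentials can also be updated and the service restarted from an elevated command prompt. A minimal sketch using the standard sc and net tools (the account name is an example, and the service name passed to sc is an assumption — verify the actual service name in the service's properties):

rem Re-assign the logon account and its new password to the service
rem (".\DeplAdmin" is an example account; use the account of your installation)
sc config "Deployment Service" obj= ".\DeplAdmin" password= "NewPassword"

rem Restart the service so that the new credentials take effect
net stop "Deployment Service"
net start "Deployment Service"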

11. Click Next.

12. Enter the user account for the JBoss service.

    For this user name you must have set up a standard user account with no special rights. Enter the password and repeat it. If ServerView Operations Manager is already installed on the server, you cannot enter the user name and password here; they were already set during the installation of ServerView Operations Manager.

13. Click Next.

14. Select the directory server. You can use the directory server OpenDS, which is automatically installed when you install ServerView Operations Manager, or you can use an existing directory server. If you select the Existing directory server option, you must make further settings for the directory service:

15. Specify the following parameters for the directory service:

    Host (FQN)  Fully qualified name of the server on which the directory service is running.

    Port        Port number used for access to the directory service. By default, port 636 is used.

    SSL         This option is enabled by default to protect the data transfer. It is not advisable to use the directory service without SSL encryption.

    Base DN     Highest level of the LDAP directory tree.

    User        User ID for read access to the data. The user ID should only have basic read rights.

    Password    Password for read access.

16. Click Next.

17. If you select the No option, a standard installation is performed (recommended). Select Yes to support the high-availability function. For more information, see section "High-Availability support" on page 339.

18. Click Next.

19. Click Next to start the installation process.

20. Click Finish to exit the wizard.

21. After the installation you have to specify the location where Operations Manager is installed.

If Operations Manager is installed on the same system, select the Local on this server option and click Ok. Otherwise you must manually install a certificate on the remote server before you can continue the installation. The certificate is generated automatically by the ServerView Deployment Manager installer and can be found in the following directory:

%Program Files%\Fujitsu\ServerView Suite\svcommon\data\download\pki

Copy the files in this directory to the following directory on the remote server:

%Program Files%\Fujitsu\ServerView Suite\Remote Connector\pki
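If the remote server's drive is reachable via an administrative share, the copy can be done in one step from a command prompt. A minimal sketch, assuming a remote server named REMOTESRV and the default C$ share (both are examples; %ProgramFiles% is the environment variable behind the %Program Files% notation used above):

rem Copy the generated certificate files to the remote server
rem (REMOTESRV and the C$ administrative share are assumptions)
xcopy "%ProgramFiles%\Fujitsu\ServerView Suite\svcommon\data\download\pki\*" ^
      "\\REMOTESRV\C$\Program Files\Fujitsu\ServerView Suite\Remote Connector\pki\" /E /Y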

ServerView Deployment Manager reads the server list from the Operations Manager database.

If Operations Manager is installed on a remote server and the Windows firewall is enabled on the remote server, you must configure the Windows firewall on the remote server. Select Start - Administrative Tools - Windows Firewall with Advanced Security, select Inbound Rules in the left pane, and then select New Rule... from the context menu. Add a rule of type Program for \Remote Connector\SVRemCon.exe. You can accept the default values offered in the wizard (choose SVRemCon as the rule name).
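The same inbound rule can also be created non-interactively with the standard netsh tool. A minimal sketch (the full program path is an assumption based on the default installation directory):

rem Allow inbound connections for the Remote Connector program
rem (the program path is an assumption; adjust it to your installation directory)
netsh advfirewall firewall add rule name="SVRemCon" dir=in action=allow ^
      program="C:\Program Files\Fujitsu\ServerView Suite\Remote Connector\SVRemCon.exe"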

Once the installation has been successfully completed, you can start Deployment Manager, see section "Opening Deployment Manager" on page 87.

In order to start Deployment Manager, a deployment server running the required deployment services must be installed in the LAN segment, see section "Installing the Deployment Services package" on page 78.

3.2.1.2 Installing the Deployment Services package

If you select the Deployment Services package in the start window, the DeploymentService - InstallShield Wizard window opens:

1. Click Next.

2. If you select the No option, a standard installation is performed (recommended). Select Yes to support the high-availability function. For more information, see "High-Availability support" on page 339.

3. Click Next.

4. Select the LAN port to be used for deployment and click Next.

   The PXE service requires a DHCP service to be present on the LAN.

The selected LAN port (IP address) is stored in the localipaddress.txt file in the same directory as the PXEService.exe file. By default, this is: \DeploymentService\bin
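To check which address was stored, you can display the file from a command prompt. A minimal sketch (the path prefix is an assumption based on the default installation location):

rem Show the LAN port (IP address) recorded for the PXE service
type "%ProgramFiles%\Fujitsu\ServerView Suite\DeploymentService\bin\localipaddress.txt"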

5. Specify whether the DHCP service is running on the PXE service target machine or on another computer in the LAN segment. Select the appropriate option button.

6. Click Next.

7. Fill in the user account under which the deployment service is to be run.

   You must enter this user account in the Settings wizard when adding a deployment server.

   The installation creates a (local) user group Deployment Admins on the deployment server. This user account is automatically added to the Deployment Admins group.

8. The user account can either be a domain login [notation: <domain name>\<user name>] or a local profile [notation: <computer name>\<user name>]. In the latter case, the computer name can be left out.

9. Click Next.

   It is recommended that you enter the same values that were specified when installing the Deployment Manager package.

10. Click Install to start the installation process.

11. Click Finish to exit the wizard.

12. For the Mass Cloning of Windows servers you need some files which are available on the ServerView Suite DVD 1 as of version 7.810. For more information, see the Hints section. Click OK to start the installation. Click Cancel if you want to install the files later.

Configuring the Windows firewall

After successful installation of the Deployment Services package, the following message is displayed:

You can also change the firewall settings later at any time by calling the fw_setting.vbs script. The fw_setting.vbs script is copied to the \Deployment Manager directory.

To execute the script, proceed as follows:

1. Start a command prompt.

2. Change to the following directory:

   %Program Files%\Fujitsu\ServerView Suite\Deployment Manager

3. Execute the following command:

   cscript.exe fw_setting.vbs
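Steps 2 and 3 can also be combined into a single call by passing the full script path (using %ProgramFiles% as the expansion of the %Program Files% notation above):

rem Apply the firewall settings in one call, from any directory
cscript.exe "%ProgramFiles%\Fujitsu\ServerView Suite\Deployment Manager\fw_setting.vbs"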

3.2.2 Installation via the Fujitsu Technology Solutions web server

You can download the Deployment Manager software from the Fujitsu Technology Solutions web server:

http://support.ts.fujitsu.com/com/support/downloads.html

You can enter the product name, or you can be guided by your choices in the successive selection lists offered. The latter method is described below using an example:

1. In the first selection list, choose the product line Industry standard server.

2. Then select the product group PRIMERGY TXxxx.

3. Then select the product family PRIMERGY TX150 S6.

4. Then select the operating system Microsoft Windows Server 2008.

5. Then click Windows Server 2008 (x64).

6. Select Server Management Software - ServerView - Deployment Manager.

7. Download the relevant Deployment Manager version.

3.3 Update installation

You start an update installation as described in section "Installation via the ServerView Suite DVD 1" on page 67.

In the start window, select the packages you wish to update. The Deployment Manager and Deployment Services packages are not selected if they are already installed.

3.4 Uninstalling/Removing Deployment Manager

- It is highly recommended that you uninstall Deployment Manager rather than deleting its folder, because various components of the program may be spread over your system and the registry.

- The deployment services must be removed from the system before an update or reinstallation of Deployment Manager. If services remain on the system, the reinstallation may not be performed properly and error messages may be issued when the operating system is restarted.

- If you uninstall Deployment Manager (Deployment Manager and Deployment Services packages) and afterwards install a new Deployment Manager version, you must clear the site cache of the corresponding browser. If you use Internet Explorer, select Internet Options ... in the Tools menu and click the Delete Files ... button on the General tab.

- If you uninstall Deployment Manager (Deployment Manager and Deployment Services packages) and subsequently want to install an older version than before, you must delete the Java plug-in cache and remove the previous *.jar and *.gif files stored there. Otherwise a mixture of old and new modules will be running in the Deployment Manager front-end session and indeterminate errors may occur. You must also clear the browser cache for downloaded files.

- You may see a dialog box during the uninstallation of the Deployment Services package showing a list of files which are in use. If you accept the default action Close and restart the applications automatically, some event log entries are created by the Microsoft Windows RestartManager (e.g. "Application or service 'Deployment Service' could not be restarted."). If you want to prevent the creation of these event log entries, select Do not close applications. (A Reboot may be required.) instead.

- If you uninstall Deployment Manager (Deployment Manager and Deployment Services packages), you should execute the del_fw_setting.bat script. This script deletes the firewall settings that were added with the fw_setting.vbs script. It is delivered with the ServerView Deployment Manager and ServerView Installation Manager packages and is copied to the \Deployment Manager directory. A usage sketch follows this list.
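A minimal sketch of running the cleanup script from a command prompt (the directory follows the default installation location; adjust it if you installed elsewhere):

rem Remove the firewall settings that fw_setting.vbs added
cd /d "%ProgramFiles%\Fujitsu\ServerView Suite\Deployment Manager"
del_fw_setting.bat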

There are different methods of uninstalling:

1. Running uninstall from the Programs folder in the Windows Start menu:

   - Make sure the Deployment Manager front-end is not open.

   - Select Start - Programs - Fujitsu - ServerView Suite - Deployment Manager.

   - Select Uninstall Deployment Manager Completely. The packages which are installed are marked.

   - Confirm to uninstall the packages.

2. Running Add/Remove Programs from the Control Panel:

   - Make sure that Deployment Manager is not running.

   - Click Start - Settings - Control Panel.

   - Double-click the Add/Remove Programs icon.

   - Find the entry for the packages (Deployment Service or Fujitsu ServerView Deployment Manager) and select the Add/Remove button to remove the package.

     These packages may have been installed on different computers.

   - A screen will prompt you for confirmation. Click Yes to start the uninstallation process.

   - During the uninstallation of the Deployment Services package, you can select whether you want to delete the SCW database and the SCW log files. If you choose to keep the files, the event log information is retained for future analysis.

   - There are no dependencies between the Installation Manager package and the Deployment Manager package. These packages can be installed or uninstalled in any order.

   - The Deployment Manager package must be uninstalled before the Operations Manager package is uninstalled.

4 Working with Deployment Manager

The web-based Deployment Manager is integrated in Operations Manager as a separate application. Deployment Manager handles all PRIMERGY servers that appear in the ServerView server list.

Operations Manager is the first application to detect the PRIMERGY servers and their system information:

- On request, Operations Manager searches the LAN to find all the existing servers in one segment.

- Operations Manager requests a list of system information for each non-blade server and for the management blade of each server blade contained in a blade chassis.

- Operations Manager creates a physical server list for each server and consolidates logical server groups based on the system information received.

How to start Operations Manager is described in the "Operations Manager" manual.

4.1 Opening Deployment Manager

You can open Deployment Manager in the following ways:

Opening via Operations Manager

Deployment Manager is integrated in Operations Manager. In Operations Manager, select the Deployment Manager item from the Deployment menu or select the Deployment Manager link in the Operations Manager main window.

The Deployment menu will only appear in Operations Manager if Deployment Manager has been installed.

Opening via the Windows start menu

You can also start the Deployment Manager front-end via the Windows start menu:

Start - Programs - Fujitsu - ServerView Suite - Deployment Manager - Start Deployment Manager

Opening via a link

If you specified during the installation that Operations Manager is located remotely on another server in the network, the Deployment menu is not added to the menu bar of Operations Manager. You can start Deployment Manager via the following link:

http://<server name>:3169/svdm

<server name>: name or IP address of the server on which Deployment Manager is installed.

4.2 Closing Deployment Manager

You can close Deployment Manager by closing the main window:

- To close the Deployment Manager main window, click the Close icon in the browser window.

Deployment Manager waits until all currently running deployment jobs are finished before terminating.

4.3 Actions after starting the Deployment Manager front-end

If you open Deployment Manager via the Windows start menu or via a link, the login window of the Central Authentication Service is displayed.

In this window, enter the user name and the password of the ID under which you are authorized to use Deployment Manager.

By default there are predefined user names with different roles:

- Administrator for the role Administrator (default password: admin)

- Operator for the role Operator (default password: admin)

- Monitor for the role Monitor (default password: admin)

- UserManager for the role UserAdministrator (default password: admin)

For more information on role-based user management, see section "Role-based permissions on accessing Deployment Manager" on page 90 and the "User management in ServerView" user guide.

If you open Deployment Manager via Operations Manager, the login window of the Central Authentication Service is not displayed. You are already authorized to use Deployment Manager.

If you start Deployment Manager for the first time after installation, you must specify global settings in the Settings wizard. You must work through all steps in the wizard, see section "Settings" on page 329.

4.3.1 Role-based permissions on accessing Deployment Manager

This chapter provides you with detailed information on the permissions granted by the user roles Administrator, Operator and Monitor.

"Forbidden" components are deactivated in the Deployment Manager GUI

Components and functions displayed in the Deployment Manager GUI are deactivated ("greyed out") if the role the user is assigned to does not have the required permissions.

Description of authorization                 Administrator  Operator  Monitor

Access the workspaces "Mass Cloning",
"Mass Installation" and "Crash Recovery"           P            P        P

Create a cloning image or snapshot image
of a server                                        P            P

Restore a cloning image or snapshot image
to a server                                        P            P

Install a server                                   P            P

Modify the global "Settings"                       P

Create, modify and delete a server                 P            P

Perform power operations on a server               P            P

Perform a "Generic Boot" or an "MDP Boot"
of a server                                        P            P


4.4 Deployment Manager main window

After successful authentication the main window of Deployment Manager is displayed.

The main window contains the following elements (from top to bottom):

- the ServerView Suite header

  You can collapse the header with the icon on the right.

- the Deployment Manager menu bars

- the work area, which is divided into a left and a right area.

  The work area on the left contains the Servers view and the Repositories view. The Servers view is displayed by default.

  On the right, select one of the following work areas:

  Work Area          Description

  Mass Cloning       Mass Cloning allows you to create an image of a server (the
                     reference system) and to clone this image to any number of
                     target servers in parallel.

  Mass Installation  Mass Installation allows you to install any number of target
                     servers in parallel.

  Crash Recovery     Crash Recovery allows you to create and restore snapshot
                     images of a server.

The contents on the right change depending on your selection on the left.

4.4.1 Servers view

In the Servers view, the tree structure shows the servers contained in the ServerView server list. This was known as the "Physical View" in a previous Deployment Manager version. The Servers view and the Mass Cloning workspace are selected by default after login. You can add additional servers to the server list.

The tree structure is divided into different groups, depending on your selection in the menu bar:

Group                     Significance                                       Selected menu item

All Servers               Predefined server group. The servers listed in
                          this section are the servers which are managed
                          in Operations Manager and/or the servers which
                          you added via the New Server wizard.

Servers by Cloning Image  List of all cloning groups. This was formerly     Mass Cloning
                          known as the "Deployment View". It contains
                          the cloning groups which are created on the
                          selected deployment server. The group has a
                          cloning image assigned. If you select a cloning
                          group, the servers in this group are displayed.
                          By selecting Clone from the context menu of
                          the group, all servers of the group can be
                          cloned in parallel in one multicast job.

Servers by Installation   Installation groups. Every server in the group    Mass Installation
                          can have a separate configuration file
                          assigned. By selecting Install from the context
                          menu of the group, all servers in the group can
                          be installed in parallel.

Servers by Boot Image     Generic boot groups. By selecting Generic         Mass Installation
                          Boot/MDP Boot from the context menu of the
                          group, all servers of the group are booted with
                          the boot image that has been assigned to the
                          group.

The icon in front of each server shows the status of the PRIMERGY server, see section "Icons" on page 102.

The Deployment Manager functions are available via several context menus.

4.4.2 Repositories view

In the Repositories view, the tree structure shows all available image repositories and folders which are stored in image repository folders on the deployment server. Depending on the work area, only a limited number of repository types is displayed.

The first time you start Deployment Manager after installation, you must specify at least one Disk Image or Installation Configuration repository. Later you can add additional folders and repositories.

4.4.3 Registered Boot Images view

In the Registered Boot Images view, a list of all registered boot images is displayed. You can register and unregister generic boot images here.

4.4.4 Tabs

The right work area in the main window contains the following tabs:

- Servers tab

- Information tab

- Tasks tab

4.4.4.1 Servers (tab)

The Servers tab displays server information for the servers selected in the Servers view.

Figure 10: Servers tab

The columns in the table have the following meanings:

Column         Meaning

Process State  The icon displays the process state.

Action         The icon displays the last deployment action issued for the server.

Health State   The icon displays the ServerView state.

Server Name    Name of the server.

Slot           Slot number.

Server Type         Type of the server.

Cloning Image       Path of the image file.

Configuration File  Path of the configuration file.

Type                PRIMERGY server type.

For the meaning of the icons, see section "Icons" on page 102.

4.4.4.2 Information (tab)

The Information tab displays general information.

Figure 11: Information tab

Depending on the selected item in the Servers view, different information is displayed:

Chassis Information

Product Name       PRIMERGY server type.

BIOS Version       BIOS version.

IP Address (iRMC)  IP address of the iRMC.

Health State       ServerView state.

Housing Name       Name of the blade server.

Model Type         Type of the server blade.

IP Address (MMB)   IP address of the management blade.

Remote Console     HTTP address. This address is used to access the iRMC (for server blades and PRIMERGY servers) or to access the management blade of a blade server.

Network Information (displayed for non-server blades)

The columns in the table below have the following meanings:

Column        Meaning

MAC Address   MAC address of the server.

Current IP    Current IP address.

Config. IP    Configured IP address.

Config. Mode  Static IP or DHCP.

Slot Information (displayed for server blades)

Slot          Slot number.

Product Name  PRIMERGY server name.

BIOS Version  BIOS version.

Health State  The icon displays the health state, see section "Icons" on page 102.

The columns in the table below have the following meanings:

Column        Meaning

MAC Address   MAC address of the server blade.

Current IP    Current IP address.

Config. IP    Configured IP address.

Config. Mode  Static IP or DHCP.

Assigned Image (only for a selected cloning group)

Displays the assigned image name.

Image Properties (only for a selected cloning group)

Displays the image properties.

4.4.4.3 Tasks (tab)

You can watch the progress of a created task in the Tasks tab.

Figure 12: Tasks tab

Refresh

Immediate update of the display.

refresh every

Specify a refresh interval. It is advisable to select refresh every to have the task status updated at regular intervals.

Task List

The columns in the table below have the following meanings:

Column           Meaning

Scheduling       The icon displays whether this is an immediate task or a scheduled task.

State            The icon displays the state of the task.

Name             Name of the task.

Type             Type of the task: Save Disk Image, Deploy Servers, Install Servers, Generic Boot, MDP Boot.

Scheduling Time  Scheduling details of the task.

Task Details

The Task Details view displays details of the selected task in the Job Status tab and the Event Log tab.

Job Status tab

The Job Status tab displays the following information about the current (latest) execution of the task:

State          The icon displays the state of the job.

Start Time     Time when the task began.

End Time       Time when the task finished.

Phase Command  Name of the running process.

Phase Status   Current state of the process.

The columns in the table below have the following meanings:

Column         Meaning

Job ID         Job identification.

State          The icon displays the state of the job.

Server Name    Name of the server.

Start Time     Time when the task was started.

End Time       Time when the task was finished.

Phase Command  Current task phase.

Detail Info    A detailed status string with detailed information about the process.

Event Log tab

The Event Log tab displays information about all past executions of the task. You can delete the event log via the Clear All Log button.

The columns in the table below have the following meanings:

Column  Meaning

State   The icon displays the state of the task.

Time    Time when this event log entry was created.

Result  A detailed status string with detailed information about the (possibly past) execution of the task.

You can perform different operations via the context menu, see section "Operations on the Tasks tab" on page 99.

For the meaning of the icons, see section "Icons" on page 102.

4.4.4.4 Operations on the Tasks tab

You can perform the following operations on the Tasks tab:

- Executing a task

- Retrying a failed task

- Editing and deleting a task

- Retrying a failed server

- Canceling a task

- Unregistering all clients at the PXE service (only for Generic Boot tasks)

Executing a task

1. Select the relevant task from the Task List.

2. Select Execute Task from the context menu.

Retrying a failed task

1. Select the relevant task from the Task List.

2. Select Retry Failed Task from the context menu.

   The job is not started for all servers, but only for the servers that failed during the last execution of the task.

Editing a task

1. Select the relevant task from the Task List.

2. Select Edit Task from the context menu.

   The corresponding wizard is displayed.

Deleting a task

1. Select the relevant task in the Task List.

2. Select Delete Task from the context menu.

Retrying a failed server

1. Under Task Details, select the relevant server from the table.

2. Select Retry Failed Server from the context menu.

Canceling a task

1. Select the relevant task in the Task List.

2. Select Cancel Task from the context menu.

When you specify iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration, the target server is forced to shut down if you cancel the task.

Unregistering all clients at the PXE service (only for Generic Boot tasks)

1. Select the relevant task in the Task List.

2. Select Unregister All Clients from the context menu.

   This applies only to servers in Generic Boot tasks that have been started with the selection Permanent for Registration at PXE Service (see "Settings step (Generic Boot wizard)" on page 288). The task must already have been completed.

4.4.5 Icons

The following list shows the Deployment Manager icons and their meanings.

General icons:

- Displays info text.

- Magnify result or show details in a larger dialog box.

Server type icons:

- Server

- Blade server

- Server blade

- BMC

- Group (e.g. All Systems, Systems by Installation, Systems by Cloning Image)

Power icons:

- Server is on.

- Server is off.

- Unknown power state.

Deployment action icons:

- Installation

- Cloning

- Not available

Process state icons:

- Running

- Okay

- Unknown

- Not available

Health state icons:

- Okay

- Unknown

- Warning

- Error

- Standard SNMP agent okay

- RSB mode

- Not manageable

- Ping

Task scheduling icons:

- Immediate task

- Scheduled task

Task state icons:

- Not started task

- Running task

- Retry task

- Task okay

- Task error

Job state icons:

- Scheduled job

- Initialized job

- Waiting job

- Running job

- Job okay

- Job error

- Canceling job

- Canceled job

4.4.6 Wizards

A wizard is an assistant that guides you through a task.

A wizard usually consists of several steps. The number of steps used and their sequence are shown in a tree structure on the left. Steps that you have already completed are indicated in the tree.

4.4.6.1 General buttons

The buttons in the bottom right of each step allow you to progress through the wizard workflow.

Next

Opens the next step in the wizard.

Previous

Opens the previous step in the wizard.

Cancel

Cancels the wizard workflow without saving your changes.

Finish

Executes the wizard with your settings.

4.5 Repositories

The Repositories view displays on the left all images which are stored in image repository folders on the deployment server.

Figure 13: Repositories view

The Repositories tree view contains the following repository types:

Disk Images

These are the repositories of disk images, where you save disk images which can then be used to define deployment groups for cloning servers. A repository of disk images is a directory with subdirectories where you store your disk images.

Preparation Boot Images

These are the repositories of preparation boot images. Preparation boot images are used to boot a server by using the PXE service. A repository of preparation boot images is a directory with subdirectories where you store the PXE boot images you have created in order to perform a 'generic' system preparation.

Deployment Tables

These are the repositories for deployment configurations. These repositories are used to export or import deployment configurations of selected servers.

Installation Configuration

These are the repositories of installation configuration files. These files must be created by Installation Manager. A repository of installation configuration files is a directory with subdirectories where you have stored the configuration files created with Installation Manager. Installation configuration files are needed when creating an installation group, which is only possible when Installation Manager is installed.

Generic Boot Images

These are the repositories of the generic boot images. These files must be created manually by the user. Generic boot images can be registered in the Registered Boot Images view. After registration, a generic boot image can be used to create a generic boot group.

MDP Applications

These are the repositories of the MDP applications. These files must be created manually by the user. MDP applications are needed to perform an MDP Boot of a generic boot group.

For details on MDP (Multi Deployment Platform), please see the ServerView Installation Manager user guide.

In the Repositories view you can perform the following functions:

- Add a new repository, see section "Add Repository dialog box" on page 108.

- Add a new repository folder, see section "Adding a new folder" on page 110.

- Delete a repository or folder, see section "Deleting a repository or folder" on page 111.

4.5.1 Add Repository dialog box

In the Add Repository dialog box you can add repositories of all types:

- Disk Images

- Preparation Boot Images

- Deployment Tables

- Installation Configuration

- Generic Boot Images

- MDP Applications

Figure 14: Add Repository dialog box

Repository Name

Name of the repository.

Network Path

In general, the path name should be the network path to a shared directory (UNC notation). It can also be the absolute path to a local directory rather than a network path, but this requires the Deployment Manager user interface and the deployment services to be installed on the same machine. If this is not the case, it must be the network path to a share in UNC notation:

"\\server_name\share_name"

It is important to set the security permissions correctly for the directory that is used as the repository. The user account(s) that you specified during the installation of Deployment Manager must have full control of this directory (and all subdirectories). This also applies to the sharing permissions.
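On the computer holding the repository, the NTFS and sharing permissions can be granted from a command prompt. A minimal sketch using the standard icacls and net share tools (the account, share and directory names are examples):

rem Grant the Deployment Manager service account full control of the
rem repository directory and its subdirectories (names are examples)
icacls "D:\Repositories\Images" /grant "MYDOMAIN\DeplSvcUser:(OI)(CI)F" /T

rem Share the directory and grant full access on the share as well
net share Images="D:\Repositories\Images" /grant:MYDOMAIN\DeplSvcUser,FULL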

There are two services that access the image repository: ServerView Deployment Manager and Deployment Service. These two services may run under two different accounts that you specified during installation. These services need to access the image repository and need the right to read, write and list the directories. In general it is possible to install the Deployment Manager web user interface, to which the ServerView Deployment Manager service belongs, and the deployment services on different machines. And there can be a repository on a third machine. Therefore you must fulfil some requirements concerning the user account(s):

- The first possibility is that you specified a domain account during the Deployment Manager installation. In this case the machine on which a repository is located needs to be in the same domain, and the domain account(s) that you specified for Deployment Manager must have full control of the image directory.

- The second possibility is that you are using local user accounts. In this case the user accounts that you specified during installation must also be available on the machine on which a repository is located. Since you are using local account(s), you must create the same account(s) with the same password on the machine where the repository is to be located (see the sketch below).
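For the local-accounts case, a minimal sketch of reproducing the account on the repository machine and granting it access (account name, password and directory are examples):

rem Create the identical local account on the repository machine
rem (account name and password are examples; they must match the
rem account specified during the Deployment Manager installation)
net user DeplSvcUser "S3cretPass!" /add

rem Give that account full control of the image directory (example path)
icacls "D:\Repositories\Images" /grant "DeplSvcUser:(OI)(CI)F" /T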

4.5.2 Adding a new folder

In the Repositories view you can add new folders:

1. Select a repository or a subfolder of a repository on the left.

2. Select New Folder from the context menu.

   New Folder is not supported for Generic Boot Images and MDP Applications repositories.

The New Folder dialog box opens.

Figure 15: New Folder dialog box

Folder Name

Name of the new folder.

4.5.3 Deleting a repository or folder

In the Repositories view you can delete a repository or a folder:

1. Select the relevant repository or folder on the left.

2. Select Delete Repository or Delete Folder from the context menu.

   Delete Folder is not supported for Generic Boot Images and MDP Applications repositories.

A dialog box is displayed to confirm the deletion.

4.6 Registered Boot Images

The Registered Boot Images view displays a list of all registered boot images.

Figure 16: Registered Boot Images view

The built-in images SVIM32 and SVIM64 are 32-bit and 64-bit images of type SVIM Agent Image (MDP).

If you want to use MDP Boot to execute an MDP application, you can create a boot group from the built-in images SVIM32 or SVIM64. In most cases it is not necessary to add and register a boot image of type SVIM Agent Image (MDP).

In the Registered Boot Images view you can perform the following functions:

- Register a boot image, see "Register Boot Image wizard" on page 113.

- Unregister a boot image, see "Unregistering a boot image" on page 117.

- Unregister a client at the PXE service, see "Unregistering a client" on page 117.

4.6.1 Register Boot Image wizard

The Register Boot Image wizard allows you to register a boot image. To open the wizard, select Register Boot Image from the context menu of Registered Boot Images.

Before you open the wizard, perform the following actions:

1. Add a Generic Boot Images repository. Use a shared network directory. Do not use a directory below the TFTP service directory.

2. Manually copy the boot image files to a subdirectory in the Generic Boot Images repository (see the copy sketch after these steps).

During boot image registration, the boot image files are copied from the repository to the TFTP service directory.
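A minimal sketch of step 2, copying a prepared boot image into a subdirectory of the repository with the standard robocopy tool (source and repository paths are examples):

rem Copy the boot image files into a subfolder of the Generic Boot
rem Images repository (both paths are examples)
robocopy "C:\BootImages\WinPE64" "\\FILESRV\BootImages\WinPE64" /E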

4.6.1.1 Image ID step (Register Boot Image wizard)

Image ID is the first step in the Register Boot Image wizard. In this step you must enter an image ID.
Figure 17: Image ID step

Image ID

An identifier for the boot image. Only alphanumeric characters are allowed.

Description

Optional: Specify any textual information that you want to be saved with the boot image. This information will be stored internally by ServerView Installation Manager.

4.6.1.2 Boot Image step (Register Boot Image wizard)

Boot Image is the next step in the Register Boot Image wizard. In this step you must select a boot image folder.

Figure 18: Boot Image step

Select a boot image folder in one of your Generic Boot Images repositories.

4.6.1.3 Bootstrap Loader step (Register Boot Image wizard)

Bootstrap Loader is the final step in the Register Boot Image wizard. In this step you must select a bootstrap loader and provide additional parameters.
Figure 19: Bootstrap Loader step

Boot Image Folder

Select a bootstrap loader and select the image type.

Image Type

SVIM Agent Image (MDP)

An image that is compatible with the built-in images SVIM32 and SVIM64 of ServerView Installation Manager. The image must be a WinPE-based image with a ServerView Installation Manager agent included. If you create a boot group with such an image, you can use MDP Boot to execute an MDP application.

Optionally, you can specify the following parameters:

BCD File                       The filename of the BCD file.

Bootstrap Loader for EFI Boot  The filename of the bootstrap loader for EFI boot.

BCD File for EFI Boot          The filename of the BCD file for EFI boot.

Path to TFTP Root Directory    The path to the TFTP root directory.

Generic Image

Any other image. If you create a boot group with such an image, you can use Generic Boot to boot the generic image.

4.6.2 Unregistering a boot image

In the Registered Boot Images view you can unregister a boot image:

1. Select the relevant boot image.

2. Select Unregister Boot Image from the context menu.

A dialog box opens to confirm the deletion.

4.6.3 Unregistering a client

In the Registered Boot Images view you can unregister a client at the PXE service:

1. Select the relevant client in the Registered Clients list.

2. Select Unregister Client from the context menu.

A dialog box opens to confirm the deletion.

This applies only to servers in Generic Boot tasks that have been started with the selection Permanent for Registration at PXE Service (see "Settings step (Generic Boot wizard)" on page 288). The task must already have been completed.

4.7 Configuration before cloning/installation tasks

Before deploying an image to servers or before installing servers, you must configure the target system.

After defining such a configuration, you can export the deployment configuration and import it later. For more information, see section "Export Deployment Configuration dialog box" on page 130 and section "Import Deployment Configuration dialog box" on page 131.

4.7.1 Deployment Configuration wizard

The Deployment Configuration wizard allows you to define settings on the target system. To open the wizard, select Deployment Configuration from the context menu in the Servers view.

4.7.1.1 General Settings step (Deployment Configuration wizard)

General Settings is the first step in the Deployment Configuration wizard. In this step you can specify some basic parameters.

Figure 20: General Settings step

Name

Server name.

DNS Suffix

If specified, displays the DNS suffix.

The DNS suffix is not applied to the target machine in the case of "Mass Installation".

GUID (UUID)

Optional: GUID (globally unique identifier) of the server.

One method of finding out the GUID: if you add a LAN port in the LAN Ports step, select the List button next to the MAC Address option. The Found MAC Addresses dialog box displays the GUIDs.

Primergy Type

When Installation Manager is installed, a list of supported PRIMERGY types appears, from which you should select one.

Process State

Selects the process state:

unknown  The process state is unknown. The server has not yet been deployed or installed by Deployment Manager.

running  A deployment job or installation is running.

OK       The server has been successfully deployed or installed by Deployment Manager.

User Name

Account name of the administrator. This account name is used to select an administrator user account for a cloned system. After this server has been cloned, the password of this user account is set to the specified password. During installation of this server, only the specified password is used to set the password of the administrative account, which is Administrator on Windows systems and root on Linux systems. If you are installing a server, the account name has no meaning. If the administrator account is left empty but a password is specified, the password of the account Administrator on a Windows operating system and root on a Linux operating system will be set to the specified password after an image is cloned.

Password/Repeat Password

Installation: During installation of this server, the specified password is used to set the password of the administrative account, which is Administrator on Windows systems and root on Linux systems.

Deploy Image: The specified password will be set as the password of the selected account (see the parameter "User Name" above). The password change occurs during the post-preparation of the cloned system.

4.7.1.2 LAN Ports step (Deployment Configuration wizard)

LAN Ports is the next step in the Deployment Configuration wizard. In this step you can add, edit or delete LAN ports of the server. For each LAN port you must configure a MAC address. At least one network adapter must be specified.

Click New to add a LAN port. Click Delete to remove the selected LAN port. You can only use these buttons for bare servers and servers on which the ServerView agents are not running.

Figure 21: LAN Ports step

General tab

The General tab allows you to specify a MAC address.

MAC Address

MAC address of the selected LAN port. Click List to open the Found MAC Addresses dialog box, in which you can select a MAC address, see section "Found MAC Addresses dialog box" on page 144.

Register Addresses in DNS

Allows you to register the IP address automatically in DNS. The configuration of this value is only supported when cloning a Windows disk image. In addition, the image must have been created with Remote Deploy as of version 4.0.

DNS Suffix for this Connection

Specify a DNS suffix for the selected network adapter. The configuration of this value is only supported when cloning a Windows disk image. In addition, the image must have been created with Remote Deploy as of version 4.0.

Use Suffix in DNS Registration

Enables a DNS dynamic update to register the IP addresses and the connection-specific domain name of this connection. The configuration of this value is only supported when cloning a Windows disk image. In addition, the image must have been created with Remote Deploy as of version 4.0.

IPv4 tab

The IPv4 tab allows you to configure one or more IPv4 addresses.

DHCP

Assignment of a dynamic IP address.

Static

Assignment of static IP addresses. You must enter a fixed IP address and subnet mask. The default gateway is optional.

DNS Servers

You can assign a list of DNS servers to each LAN port. After an installation or cloning process, the IP addresses are assigned. Click Add/Edit/Remove to add, edit or remove a DNS server from the list.

122 <strong>Deployment</strong> <strong>Manager</strong>


IP Addresses
  For Mass Cloning, only the first two IP addresses are set.

Gateways
  For Mass Cloning, only the first gateway address is set. For Mass Installation of Windows targets the gateway addresses are ignored.

You can add, edit or delete IPv4 addresses, see "Add IPv4 Address dialog box" on page 146 and "Edit IPv4 Address dialog box" on page 147.

IPv6 tab

The IPv6 tab allows you to configure one or more IPv6 addresses.

DHCP
  Assignment of a dynamic IP address.

Static
  Assignment of one or more static IP addresses. You must enter a fixed IP address and subnet prefix length. The default gateway is optional.

IP Addresses
  For Mass Cloning, only the first two IP addresses are set.

Gateways
  For Mass Cloning, only the first gateway address is set. For Mass Installation of Windows targets the gateway addresses are ignored.

DNS Servers
  You can assign a list of DNS servers to each LAN port. After an installation or cloning process the IP addresses are assigned to the corresponding LAN ports. Click Add/Edit/Remove to add, edit or remove a DNS server from the list.

You can add, edit or delete IPv6 addresses, see "Add IPv6 Address dialog box" on page 148 and "Edit IPv6 Address dialog box" on page 149.

For PRIMEQUEST partitions only: If a PRIMEQUEST Server Agent (PSA) is installed, you cannot set a static IPv4 address for the NIC of the PSA-to-MMB communication LAN. The PRIMEQUEST Server Agent must be set up manually after cloning or installation (see the descriptions of configuring PSA in the "PRIMEQUEST 1000 Series Installation Manual").

4.7.1.3 DNS Search Suffixes step (Deployment Configuration wizard)

DNS Search Suffixes is the next step in the Deployment Configuration wizard.

In this step you can optionally specify a list of DNS suffixes which are used to extend a server name to a fully qualified domain name (FQDN) when resolving the IP address of a specified server name. Each suffix in turn is appended to the name and an address resolution is attempted with the resulting FQDN. The suffixes are tried in the order in which they are specified.
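The following minimal Python sketch illustrates this lookup order (an illustration only, not part of Deployment Manager; the host name and suffix list in the usage comment are hypothetical examples):

    import socket

    def resolve_with_suffixes(name, suffixes):
        """Try to resolve 'name' by appending each DNS suffix in order."""
        for suffix in suffixes:
            fqdn = f"{name}.{suffix}"
            try:
                return fqdn, socket.gethostbyname(fqdn)  # first suffix that resolves wins
            except socket.gaierror:
                continue  # this suffix did not resolve; try the next one
        raise LookupError(f"{name} could not be resolved with any suffix")

    # Example call (hypothetical suffixes):
    # resolve_with_suffixes("server1", ["prod.example.com", "example.com"])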

The configuration of this value is only supported when cloning a disk image. It is not supported by a remote installation. In addition, the image must be created with Deployment Manager version 4.00 or higher.

Click New..., Edit... or Delete to add, edit and remove suffixes. Use the arrow buttons to change the order.

Figure 22: DNS Search Suffixes step

4.7.1.4 Remote Management Ports step (Deployment Configuration wizard)

Remote Management Ports is the next step in the Deployment Configuration wizard. In this step you can select a port for PXE boot and specify a method for remote management.

If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in this step, you need to turn off a target server before you start the deployment. You can perform the shutdown manually or by using the Force System Shutdown before Deployment option in the task you want to execute.


Figure 23: Remote Management Ports step

iRMC Support
  You can specify the following options:

  IP Address  Specify the IP address that was assigned to the iRMC. This is needed for remote power-on and remote configuration of the boot sequence for PXE boot.
  User Name  Preconfigured user name.
  Password / Repeat Password  Preconfigured password.
  Timeout (s)  Timeout value for access to the iRMC.
  Retries  Number of retries.


For PRIMEQUEST, you can specify the following options:

  IP Address (MMB)  Displays the IP address of the PRIMEQUEST MMB.
  User Name  Preconfigured user name on the PRIMEQUEST MMB. The user name is valid for all partitions.
  Password / Repeat Password  Preconfigured password.
  Timeout (s)  Timeout value for access to the iRMC.
  Retries  Number of retries.

Click Test Connectivity to test whether the iRMC can be accessed via the specified parameters.

Wake On LAN Support
  In the case of Wake On LAN, Deployment Manager uses IP broadcast or Ethernet broadcast to send a magic packet as a UDP datagram to the subnet in which the target system is located.

  The following applies:

  - If the target system is in the same LAN segment as the deployment server, you do not need to specify an address under Broadcast Address. In this case, Deployment Manager automatically uses the limited broadcast address 255.255.255.255 and sends the magic packet to UDP port 9 as an Ethernet broadcast using the MAC address of the target system.

  - If the target system is in a different LAN segment that is bridged by one or more gateways, enter either of the following under Broadcast Address:

    - Subnet broadcast address of the LAN segment in which the target system is located. The address must contain the value "255" in the device area (host area), e.g. 192.168.2.255. In this case, the magic packet is sent to the gateway via one or several hops, and the gateway ultimately transmits the Ethernet broadcast to the subnet of the target system.

    - IP address of a BOOTP/DHCP server. In this case, you have to select the Broadcast to Bootstrap Protocol Server option and specify a valid IP address in the LAN segment of the target system under IP Address.

You can specify the following options:

  Broadcast Address
    Subnet broadcast address of the LAN segment in which the target system is located, or the unicast address of a BOOTP/DHCP server. If you specify the unicast address of a BOOTP/DHCP server, you must select the Broadcast to Bootstrap Protocol Server option.

  Broadcast to Bootstrap Protocol Server
    If selected, the magic packet is sent to UDP port 67 (Bootstrap Protocol (BOOTP) server); otherwise, it is sent to UDP port 9. This option is required if you specify the unicast address of a BOOTP/DHCP server under Broadcast Address. Furthermore, you should select this option if it is not guaranteed that all gateways included in a subnet broadcast are configured for "subnet broadcasting".

  IP Address
    Any unicast address in the subnet of the target system. Using this IP address, the BOOTP/DHCP server determines the LAN interface (LAN port) via which it is to send the magic packet (in this case, a DHCP/BOOTP reply packet).
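A magic packet consists of 6 bytes of 0xFF followed by the MAC address of the target repeated 16 times. As an illustration (a minimal sketch, not Deployment Manager code; the MAC and broadcast addresses are examples), the following Python snippet builds such a packet and sends it as a UDP datagram to port 9:

    import socket

    def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
        """Send a Wake On LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    # Same LAN segment: limited broadcast to UDP port 9 (example MAC)
    # send_magic_packet("00:19:99:AB:CD:EF")
    # Different LAN segment: subnet broadcast address of the target's segment
    # send_magic_packet("00:19:99:AB:CD:EF", broadcast="192.168.2.255")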

Manual Management
  You must start or reboot the server manually.

MMB SNMP Support (only for blades)
  The management blade of the blade server is accessed by SNMP. There must be an SNMP community with read-write access.

MMB Remote Manager Support (only for blades)
  The management blade of the blade server is accessed via telnet. Read-write access via SNMP is not necessary. Only blade servers BX600 (with MMB S3) and BX900 are supported.

  This option can only be selected for Mass Cloning and Crash Recovery. For Mass Installation use the MMB SNMP Support option.

4.7.1.5 Notes step (Deployment Configuration wizard)

Notes is the final step in the Deployment Configuration wizard. In this step you can enter notes about this server.

Figure 24: Notes step


4.7.2 Export Deployment Configuration dialog box

The Export Deployment Configuration dialog box allows you to export the deployment configuration for a list of selected servers in the server list to a file in XML format. This deployment configuration can then be imported again, which enables you to switch between different deployment configurations for servers whenever necessary.

The exported files are always stored in the Deployment Tables repository in the Repositories view.

To open the dialog box, select Export Deployment Configuration from the context menu in the Servers view.

Figure 25: Export Deployment Configuration dialog box

Deployment Tables
  Select the directory to which the deployment configuration is exported.

File Name
  File name to which the deployment configuration is exported.

Encrypt Passwords
  The exported deployment configuration contains several passwords. By default these passwords are exported in encrypted form. If you want to edit them, uncheck the option.

4.7.3 Import Deployment Configuration dialog box

The Import Deployment Configuration dialog box allows you to import the deployment configuration from a file that was created by a previous export run.

To open the dialog box, proceed as follows:

- Select the corresponding file in the Deployment Tables group in the Repositories view.
- Select Import Deployment Configuration from the context menu.

Figure 26: Import Deployment Configuration dialog box


Add non existing Bare Servers
  The deployment configuration of bare servers which do not exist in the server list will be imported.

Match Server ID
  The deployment configuration of servers whose server ID stored in the file also matches the server ID found in the server list will be imported.

Match Slot ID (only for blade servers)
  The deployment configuration of servers whose slot number stored in the file also matches the slot number found in the server list will be imported.

4.7.4 Deleting an exported deployment configuration

To delete an exported file, proceed as follows:

1. Select the corresponding file in the Deployment Tables group in the Repositories view.
2. Select Delete File from the context menu.

4.8 Adding new servers

You can add new servers to the ServerView server list. This list provides an overview of all the configured servers.

To add a new server, use the New Server wizard.

4.8.1 New Server wizard

The New Server wizard does not support blade servers and PRIMEQUEST. If you want to add new server blades or new PRIMEQUEST partitions, add their MMB from ServerView Operations Manager. The server blades and PRIMEQUEST partitions are added automatically by adding the MMB. Do not add the server blades or PRIMEQUEST partitions directly to the server list of ServerView Operations Manager.

The New Server wizard allows you to define settings for the new server.


You open the wizard by selecting New Server from the context menu of All Servers in the Servers view.

4.8.1.1 General Settings step (New Server wizard)

General Settings is the first step in the New Server wizard. In this step you can specify some basic parameters.

Figure 27: General Settings step

Name
  Server name.

DNS Suffix
  If specified, displays the DNS suffix.

  The DNS suffix is not applied to the target machine in the case of Mass Installation.


GUID (UUID)
  Optional: GUID (globally unique identifier) of the server.

  One method for finding out the GUID: If you add a LAN port in the LAN Ports step, select the List button next to the MAC Address option. The Found MAC Addresses dialog box displays the GUIDs.

Primergy Type
  When Installation Manager is installed, a list of supported PRIMERGY types appears, from which you should select one.

Process State
  Selects the process state:

  unknown  The process state is unknown. The server has not yet been deployed or installed by Deployment Manager.
  running  A deployment job or installation is running.
  OK  The server has been successfully deployed or installed by Deployment Manager.

User Name
  Account name of the administrator. This account name selects the administrator user account of a cloned system: after the server has been cloned, the password of this user account is set to the specified password. If you are installing a server, the account name has no meaning; during installation only the specified password is used, and it sets the password of the administrative account, which is Administrator on Windows systems and root on Linux systems. If the account name is left empty but a password is specified, the password of the account Administrator (Windows) or root (Linux) is set to the specified password after an image is cloned.

Password/Repeat Password
  Installation: During installation of this server, the specified password is used to set the password of the administrative account, which is Administrator on Windows systems and root on Linux systems.
  Deploy Image: The specified password is set as the password of the selected account (see the "User Name" parameter above). The password change occurs during the post-preparation of the cloned system.

4.8.1.2 LAN Ports step (New Server wizard)

LAN Ports is the next step in the New Server wizard. In this step you can add, edit or delete LAN ports of the server. For each LAN port you must configure a MAC address. One network adapter must be specified to be used for PXE boot.

Click New to add a LAN port. Click the Delete button to remove the selected LAN port. You can only use these buttons for bare servers and servers on which the ServerView agents are not running.

A bare server is a server on which an operating system has not been installed yet.

Figure 28: LAN Ports step

General tab

The General tab allows you to specify a MAC address.

MAC Address
  MAC address of the selected LAN port. Click List to open the Found MAC Addresses dialog box in which you can select a MAC address, see section "Found MAC Addresses dialog box" on page 144.

Register Addresses in DNS
  Allows you to register the IP address automatically in DNS. The configuration of the value is only supported when cloning a Windows disk image. In addition, the image must be created with RemoteDeploy as of version 4.0.


DNS Suffix for this Connection
  Specify a DNS suffix for the selected network adapter. The configuration of the value is only supported when cloning a Windows disk image. In addition, the image must be created with RemoteDeploy as of version 4.0.

Use Suffix in DNS Registration
  Enables a DNS dynamic update to register the IP addresses and the connection-specific domain name of this connection. The configuration of the value is only supported when cloning a Windows disk image. In addition, the image must be created with RemoteDeploy as of version 4.0.

IPv4 tab

The IPv4 tab allows you to configure one or more IPv4 addresses.

DHCP
  Assignment of a dynamic IP address.

Static
  Assignment of static IP addresses. You must enter a fixed IP address and subnet mask. The default gateway is optional.

DNS Servers
  You can assign a list of DNS servers to each LAN port. After an installation or cloning process the IP addresses are assigned. Click Add/Edit/Remove to add, edit or remove a DNS server from the list.

IP Addresses
  For Mass Cloning, only the first two IP addresses are set.

Gateways
  For Mass Cloning, only the first gateway address is set. For Mass Installation of Windows targets the gateway addresses are ignored.

You can add, edit or delete IPv4 addresses, see "Add IPv4 Address dialog box" on page 146 and "Edit IPv4 Address dialog box" on page 147.

IPv6 tab

The IPv6 tab allows you to configure one or more IPv6 addresses.


DHCP
  Assignment of a dynamic IP address.

Static
  Assignment of one or more static IP addresses. You must enter a fixed IP address and subnet prefix length. The default gateway is optional.

IP Addresses
  For Mass Cloning, only the first two IP addresses are set.

Gateways
  Displays a list of IPv6 gateway addresses.

DNS Servers
  You can assign a list of DNS servers to each LAN port. After an installation or cloning process the IP addresses are assigned to the corresponding LAN ports. Click Add/Edit/Remove to add, edit or remove a DNS server from the list.

You can add, edit or delete IPv6 addresses, see "Add IPv6 Address dialog box" on page 148 and "Edit IPv6 Address dialog box" on page 149.

4.8.1.3 DNS Search Suffixes step (New Server wizard)

DNS Search Suffixes is the next step in the New Server wizard.

In this step you can optionally specify a list of DNS suffixes which are used to extend a server name to a fully qualified domain name (FQDN) when resolving the IP address of a specified server name. Each suffix in turn is appended to the name and an address resolution is attempted with the resulting FQDN. The suffixes are tried in the order in which they are specified.

The configuration of this value is only supported when cloning a disk image. It is not supported by a remote installation. In addition, the image must be created with Deployment Manager version 4.00 or higher.

Click New..., Edit... or Delete to add, edit and remove suffixes. Use the arrow buttons to change the order.


Figure 29: DNS Search Suffixes step

4.8.1.4 Remote Management Ports step (New Server wizard)

Remote Management Ports is the next step in the New Server wizard. In this step you can select a port for PXE boot and specify one of the following methods for remote management:

- iRMC Support
- Wake On LAN Support
- Manual Management

Figure 30: Remote Management Ports step

iRMC Support
  You can specify the following options:

  IP Address  Specify the IP address that was assigned to the iRMC. This is needed for remote power-on and remote configuration of the boot sequence for PXE boot.
  User Name  Preconfigured user name.
  Password / Repeat Password  Preconfigured password.
  Timeout (s)  Timeout value for access to the iRMC.
  Retries  Number of retries.


For PRIMEQUEST, you can specify the following options:

  IP Address (MMB)  Displays the IP address of the PRIMEQUEST MMB.
  User Name  Preconfigured user name on the PRIMEQUEST MMB. The user name is valid for all partitions.
  Password / Repeat Password  Preconfigured password.
  Timeout (s)  Timeout value for access to the iRMC.
  Retries  Number of retries.

Click Test Connectivity to test whether the iRMC can be accessed via the specified parameters.

Wake On LAN Support
  In the case of Wake On LAN, Deployment Manager uses IP broadcast or Ethernet broadcast to send a magic packet as a UDP datagram to the subnet in which the target system is located.

  The following applies:

  - If the target system is in the same LAN segment as the deployment server, you do not need to specify an address under Broadcast Address. In this case, Deployment Manager automatically uses the limited broadcast address 255.255.255.255 and sends the magic packet to UDP port 9 as an Ethernet broadcast using the MAC address of the target system.

  - If the target system is in a different LAN segment that is bridged by one or more gateways, enter either of the following under Broadcast Address:

    - Subnet broadcast address of the LAN segment in which the target system is located. The address must contain the value "255" in the device area (host area), e.g. 192.168.2.255. In this case, the magic packet is sent to the gateway via one or several hops, and the gateway ultimately transmits the Ethernet broadcast to the subnet of the target system.

    - IP address of a BOOTP/DHCP server. In this case, you have to select the Broadcast to Bootstrap Protocol Server option and specify a valid IP address in the LAN segment of the target system under IP Address.

You can specify the following options:

  Broadcast Address
    Subnet broadcast address of the LAN segment in which the target system is located, or the unicast address of a BOOTP/DHCP server. If you specify the unicast address of a BOOTP/DHCP server, you must select the Broadcast to Bootstrap Protocol Server option.

  Broadcast to Bootstrap Protocol Server
    If selected, the magic packet is sent to UDP port 67 (Bootstrap Protocol (BOOTP) server); otherwise, it is sent to UDP port 9. This option is required if you specify the unicast address of a BOOTP/DHCP server under Broadcast Address. Furthermore, you should select this option if it is not guaranteed that all gateways included in a subnet broadcast are configured for "subnet broadcasting".

  IP Address
    Any unicast address in the subnet of the target system. Using this IP address, the BOOTP/DHCP server determines the LAN interface (LAN port) via which it is to send the magic packet (in this case, a DHCP/BOOTP reply packet).

Manual Management
  You must start or reboot the server manually.

MMB SNMP Support (only for blades)
  The management blade of the blade server is accessed by SNMP. There must be an SNMP community with read-write access.


MMB Remote Manager Support (only for blades)
  The management blade of the blade server is accessed via telnet. Read-write access via SNMP is not necessary. Only blade servers BX600 (with MMB S3) and BX900 are supported.

  This option can only be selected for Mass Cloning and Crash Recovery. For Mass Installation use the MMB SNMP Support option.

4.8.1.5 Notes step (New Server wizard)

Notes is the final step in the New Server wizard. In this step you can enter notes about this server.

Figure 31: Notes step

4.8.1.6 Found MAC Addresses dialog box

The Found MAC Addresses dialog box displays a list of MAC addresses found by the PXE service, i.e. a list of all MAC addresses that initiated a PXE request. By default, MAC addresses that belong to a server already in the server list are not shown, and only the last occurrence of each MAC address is listed.

To open the dialog box, select List in the LAN Ports step in the New Server wizard.

Figure 32: Found MAC Addresses dialog box


Refresh
  Use this button to refresh the display.

Exclude MAC Addresses Of Existing Servers
  When this option is not checked, the MAC addresses belonging to servers that are already in the server list are also listed.

Show Duplicates
  When this option is not checked, only the last PXE request of each server (MAC address) is shown.

The table in the dialog box lists the found MAC addresses. The columns are explained below:

  Column            Meaning
  MAC Address       MAC address of the first LAN port of the server.
  GUID              GUID (globally unique identifier) of the server.
  PXE Request Time  Time of the last PXE request.

After selecting a MAC address and clicking OK, a confirmation window is displayed. Confirm with Yes to use the GUID for this server; otherwise confirm with No.

4.9 IP addresses

4.9.1 IPv4 addresses

As of Deployment Manager version 5.40 you can specify more than one IPv4 address. Only two IPv4 addresses are set during Mass Cloning.

You can add, edit or delete IPv4 addresses.

4.9.1.1 Add IPv4 Address dialog box

The Add IPv4 Address dialog box allows you to configure a static IPv4 address.

To open the dialog, select New in the IPv4 tab in the LAN Ports step of the New Server or Deployment Configuration wizard.

Figure 33: Add IPv4 Address dialog box


IP Address
  IPv4 uses 32-bit (four-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. IPv4 addresses are usually written in dot-decimal notation, which consists of the four octets of the address expressed in decimal and separated by periods.

Subnet Mask
  In IPv4 networks, the routing prefix is traditionally expressed as a subnet mask, which is the prefix bit mask expressed in quad-dotted decimal representation (e.g. 255.255.255.0 for a 24-bit prefix).
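As a quick illustration of these notations (a minimal sketch using Python's standard ipaddress module; the address values are examples), note how the subnet mask determines the network and its broadcast address, the same broadcast address used for Wake On LAN subnet broadcasts:

    import ipaddress

    # An example host address with a 24-bit subnet mask (255.255.255.0)
    iface = ipaddress.ip_interface("192.168.2.10/255.255.255.0")

    print(iface.ip)                         # 192.168.2.10 (dot-decimal notation)
    print(iface.netmask)                    # 255.255.255.0 (quad-dotted prefix mask)
    print(iface.network)                    # 192.168.2.0/24
    print(iface.network.broadcast_address)  # 192.168.2.255 ("255" in the host area)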

4.9.1.2 Edit IPv4 Address dialog box

The Edit IPv4 Address dialog box allows you to edit a static IPv4 address.

To open the dialog, select Edit in the IPv4 tab in the LAN Ports step of the New Server or Deployment Configuration wizard.

Figure 34: Edit IPv4 Address dialog box

IP Address
  IPv4 uses 32-bit (four-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. IPv4 addresses are usually written in dot-decimal notation, which consists of the four octets of the address expressed in decimal and separated by periods.

Subnet Mask
  In IPv4 networks, the routing prefix is traditionally expressed as a subnet mask, which is the prefix bit mask expressed in quad-dotted decimal representation.

4.9.2 IPv6 addresses

As of Deployment Manager version 5.20 you can use the IPv6 address range.

The globally available IPv4 address space is quickly running out; the much larger IPv6 address space was introduced to resolve this. An IPv6 address is 128 bits long and consists of eight 16-bit fields, each separated by colons. Each field contains a hexadecimal number, as opposed to the dot-decimal notation used in IPv4 addresses. You can add, edit or delete IPv6 addresses.

4.9.2.1 Add IPv6 Address dialog box

The Add IPv6 Address dialog box allows you to configure a static IPv6 address.

To open the dialog, select New in the IPv6 tab in the LAN Ports step of the New Server or Deployment Configuration wizard.

Figure 35: Add IPv6 Address dialog box


IP Address
  IPv6 addresses are normally written as eight groups of four hexadecimal digits, where each group is separated by a colon (:).

  For example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334

  Any leading zeros in a group may be omitted; thus, the given example becomes 2001:db8:85a3:0:0:8a2e:370:7334

  One or more consecutive groups of 0 values may be replaced with two colons (::): 2001:db8:85a3::8a2e:370:7334
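Python's standard ipaddress module performs exactly this normalization, which can be handy when preparing address lists (a minimal sketch; the address is the example given above):

    import ipaddress

    addr = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
    print(addr.compressed)  # 2001:db8:85a3::8a2e:370:7334
    print(addr.exploded)    # 2001:0db8:85a3:0000:0000:8a2e:0370:7334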

Subnet Prefix Length
  Length of the subnet prefix for the IPv6 address. For IPv6 unicast addresses this value should be 64 (the standard value).

4.9.2.2 Edit IPv6 Address dialog box

The Edit IPv6 Address dialog box allows you to edit a static IPv6 address.

To open the dialog, select Edit in the IPv6 tab in the LAN Ports step of the New Server or Deployment Configuration wizard.

Figure 36: Edit IPv6 Address dialog box


IP Address
  IPv6 addresses are normally written as eight groups of four hexadecimal digits, where each group is separated by a colon (:).

  For example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334

  Any leading zeros in a group may be omitted; thus, the given example becomes 2001:db8:85a3:0:0:8a2e:370:7334

  One or more consecutive groups of 0 values may be replaced with two colons (::): 2001:db8:85a3::8a2e:370:7334

Subnet Prefix Length
  Length of the subnet prefix for the IPv6 address. For IPv6 unicast addresses this value should be 64 (the standard value).

4.10 Managing the power state of a server

You can manage the power state of selected servers via the Manage Power State dialog box. To open the dialog box, select one or more servers and choose Manage Power State from the context menu.

4.10.1 Manage Power State dialog box

The Manage Power State dialog box allows you to control the power state of one or more servers.


Figure 37: Manage Power State dialog box

Current Power State
  Displays the selected server. The icon next to it shows the current power state, see section "Icons" on page 102.

Change Power State to
  New power state to be assigned to all servers in the list.

  On  Server is on. Default value if the server is off.
  Force Off  Server is off. Default value if the server is on.

4.11 Collecting diagnostic information

Sometimes it might be necessary to gather diagnostic information in order to send it to your Fujitsu support service. To do so, execute the following via the Windows Start menu:

Start - Programs - Fujitsu - ServerView Suite - Deployment Manager - Collect Diagnostic Information

The diagnostic information will be saved to the \Deployment Manager\diag directory.


5 Image creation

Once a reference system is completely installed and finally configured for the end user, the image creation process can be started via Deployment Manager.

Deployment Manager supports:

- snapshot image creation. A "snapshot image" of a server can be used to restore the server after a system crash. A snapshot image can be created for multiple selected servers at the same time.
- file-system-dependent image creation, and
- file-system-independent image creation (raw image creation).

Only when creating a non-snapshot image from a Windows system: If an error occurs during image creation, you should boot the Windows operating system normally. At the beginning of this boot there might be a "rollback" of changes already made. Do not interrupt this rollback.

Before you start the image creation process, make sure that all connected network drives are disconnected.

The created image is stored in an image repository, which is located in a shared folder on any file server in the network.

An image consists of the following files:

- .img  This file contains the image itself.
- .cfg  This file contains information about the image.
- .txt  This file contains user-defined text.


5.1 File-System-Dependent image creation

The following file systems are supported:

- FAT, FAT32 (Windows)
- NTFS, NTFS5 (Windows)
- EXT2, EXT3, EXT4 (Linux)
- Reiser file system (V3.5, 3.6) (Linux), supported as of Deployment Manager version 4.0

If file-system-dependent image creation has been selected for a hard disk drive with unsupported file systems, the image is still created correctly. If several partitions with different file systems are present, the deployment engine checks for each partition whether its file system is supported (and adds the partition to the image in file-system-dependent mode) or not (and adds the partition to the image in raw mode), and saves all partition information in one image file. If file-system-independent (raw) image creation has been selected instead, the file system type is not checked and all data is moved to the image file in raw mode (1:1).

Partitions with unsupported file systems are thus read out according to the file-system-independent image creation process.

File-system-dependent image creation determines the used clusters of each file from the file table of the file system, and stores only this file table and the used clusters in the image. With this method, the disk geometry is not limited because the used blocks are stored file-system-(partition-)oriented, not disk-sector-oriented.

The image is compressed and encoded (for security reasons) and written to a location defined by a UNC network path (UNC = Universal Naming Convention).

When using this kind of image, the cloning process first creates partitions with the appropriate size. Based on the information stored in the image header, these partitions are then formatted. Finally, the image is decoded and decompressed on the fly and the extracted clusters are written directly to the correct location in the file system. With this method, only the user data of a file system must be copied to a target drive.

If you have created an image of a 20 GB partition with 3 GB of user data, only the 3 GB of user data must be written to the hard drive. The rest of the 20 GB is defined by the formatting process (if verification is selected, this takes extra time).

5.2 File-System-Independent image creation (raw image creation)

The file-system-independent image is a 1:1 copy of a hard drive, but it is encoded and compressed in the same way as for the file-system-dependent method. The final image also has a similar size because of effective compression of the non-used areas. The disk is read sector by sector, so the image contents depend on the drive geometry. This method therefore requires that the hard drive of the target system is identical to the hard drive of the reference system.

The big difference is the extraction of this image, which takes about three times as long but is independent of the file system and thus of the operating system. On the target side, each byte of the raw copy of the image must be written to the target drive, which takes the majority of the cloning process.

If you have created an image of a 20 GB partition with 3 GB of user data, you must write 20 GB to the hard drive again. On the other hand, you save the time required for formatting the drive (slow, especially with verification, but more flexible).

For partitions which are converted to dynamic disks, the image creation can only be done in raw mode.

Before you start the raw image creation you must clean the drive. This can be done via Installation Manager: start the DiskManager tool and select CleanOut drive.


5.3 Supported operating systems for image creation

Deployment Manager supports image creation for:

- Microsoft Windows Server 2003 Standard/Enterprise (x86 and x64)
- Microsoft Windows Server 2003 R2 Standard/Enterprise (x86 and x64)
- Microsoft Windows Server 2008 Standard/Enterprise/Datacenter/Foundation (x86 and x64)
- Microsoft Windows Server 2008 R2 Standard/Enterprise/Datacenter/Foundation/Web (x64)

  Mass Cloning of Windows Domain Controllers (e.g. Windows Small Business Server) is not supported.

- Microsoft Windows Server 2012 Datacenter/Standard/Foundation, Windows Storage Server 2012 Standard

  - The Resilient File System (ReFS) is not supported.
  - Storage Spaces and virtual disks created from Storage Spaces are not supported.
  - Before starting an image creation process for a Windows 2008 or Windows 2012 reference system, you must configure the Windows firewall on the reference system, see "Configuring the Windows firewall on the reference system" on page 160.
  - In cloning with personalization, only "full" Windows 2008 or Windows 2012 installations are supported. Saving an image of a Windows 2008 or Windows 2012 "core" installation is not supported.

- Linux Red Hat (AS/ES) 3.0 - not supported in Japan
- Linux Red Hat (AS/ES) 4.0 (x86 and EM64T) - not supported in Japan


- Linux Red Hat 5.0 (x86 and EM64T)
- Linux Red Hat 6.0 (x86 and EM64T)

  Before image creation, a specific package must be installed on the reference system. This package is provided with the Deployment Manager software in the directory LINUX_support\RHEL6_Cloning_Support.

  Before image creation, the NetworkManager should be disabled, otherwise the personalization may fail during cloning. Set NM_CONTROLLED=no in the /etc/sysconfig/network-scripts/ifcfg-eth* files and issue the following commands:

    /sbin/service NetworkManager stop
    /sbin/chkconfig --del NetworkManager
    /sbin/chkconfig --add network
    /sbin/service network start

- Linux SuSE Enterprise Server SLES 8 - not supported in Japan
- Linux SuSE SLES 9 (x86 and EM64T) - not supported in Japan
- Linux SuSE United Linux (with SLES 8) - not supported in Japan
- Linux SuSE Enterprise Server SLES 10 (x86 and EM64T) - not supported in Japan
- Linux SuSE Enterprise Server SLES 11 (x86 and EM64T) - not supported in Japan
- Reiser file system versions 3.5 and 3.6 on SuSE SLES 9 and SuSE SLES 10

  Before image creation, a specific package must be installed on the reference system if Reiser is used as the root file system. This package is provided with the Deployment Manager software in the directory LINUX_support\ReiserFS_Cloning_Support.


Hints:

- All other operating system types can be supported without personalization in raw snapshot mode. See also the ReadMe file.
- Creating a cloning image of a disk with multiple operating systems is not supported. Creating a snapshot image is supported.
- Image creation is only supported for Linux reference systems that have the fstab option Mount in /etc/fstab set to Device name for all partitions. During a native installation of SuSE SLES 10 and SLES 11, this option is set to Device ID by default; you must manually change this option for all partitions. If you install SuSE SLES 10 and SLES 11 unattended with an older Installation Manager version (up to V10.09.11), Device name is set by default for all partitions. If you install SuSE SLES 10 and SLES 11 unattended with Installation Manager as of version 10.09.12, you must change the Mount option from mount by id (the default) to mount by device in the Bootloader page of the Linux wizard.
- If the Linux reference system (e.g. Red Hat 5) has LVM volume groups configured, the image creation is done in raw mode. ServerView Deployment Manager does not recognize LVM volume groups.
- If the reference system has native multipath I/O configured, the image creation is done in raw mode. ServerView Deployment Manager does not recognize native multipath I/O configurations.

5.4 Image creation of a Windows reference system

During image creation the reference system is booted multiple times:

- First the reference system is booted with Windows to do the system preparation:
  - The Client Agent package is temporarily installed on the reference system. After a successful installation the Deployment Agent service should be running.
  - The personalization files are copied to the reference system.
  - The LAN interfaces are disabled.
- Then WinPE is booted and the image is saved to the deployment server.
- Finally the reference system is booted again with Windows to roll back the changes that were made during system preparation.

If an error occurs when saving the image to the deployment server, the reference system must first be booted once before you try again to create an image (the rollback must be done).

If the system preparation fails, have a look at the following log directory: \DeploymentService\tftp\log\ID#.... The system preparation log file is named .log.

If there is already a Client Agent package installed on the reference system, the image creation fails with the result Serverscript returns error. The preparation log file then contains the message "ScwAgent has been already installed on the reference system. Please manually cleanup the reference system like described in the manual." Perform the steps described below before you try again to create an image.

5.4.1 Manual rollback of changes done during system preparation

If the automatic rollback fails, you can perform the following steps to undo the changes done during system preparation:

1. Uninstall the Client Agent package:
   - Log on as local administrator.
   - Execute %Program Files%\SCWAgent\uninstall.bat.
   - Make sure that Windows has deleted the registry key HKLM\SYSTEM\CurrentControlSet\services\Deployment Agent. If this key still exists, do not delete it manually; make sure that you are logged on as local administrator and call uninstall.bat again.
   - Delete the %Program Files%\SCWAgent directory.
   - Remove the following registry key and its subkeys: HKLM\Software\Fujitsu\SystemcastWizard\Agent

2. Delete files on the reference system:
   - Log on as local administrator.
   - Execute %SystemDrive%\cleanup.bat.
   - Delete the following directories in %SystemDrive%: Ipadj, Sysprep
   - Delete the following files in %SystemDrive%: clcomp.dat, clcomp3.dat, cleanup.bat, Win.tag
   - Delete the file %SystemDrive%\DeploymentInfo\DeploymentMode.

5.5 Windows 2008 and Windows 2012 reference systems

When you use Windows 2008 or Windows 2012 as the reference system you must configure the Windows firewall. Note also the restrictions for Sysprep.

5.5.1 Configuring the Windows firewall on the reference system

Before starting an image creation process, you must configure Windows Firewall with Advanced Security on the reference system.

During system preparation, Deployment Manager tries to access the reference system with WMI and copies the Client Agent package (for temporary installation) to the reference system. If the reference system has the Windows firewall switched on (which is the default after a Windows 2008 or Windows 2012 installation), perform the following steps on the reference system before starting the image creation process:

1. Select Start - Administrative Tools - Windows Firewall with Advanced Security.
2. Select Inbound Rules in the left pane.
3. Enable the following Inbound Rules (set to Yes):

   File and Printer Sharing (SMB-In)
   Windows Management Instrumentation (DCOM-In)
   Windows Management Instrumentation (WMI-In)

In a German installation the rules are called:


   Datei- und Druckerfreigabe (SMB eingehend)
   Windows-Verwaltungsinstrumentation (DCOM eingehend)
   Windows-Verwaltungsinstrumentation (WMI eingehend)

Instead of performing these steps you can enter the following commands from a command prompt (replace the names of the rules with your language):

   netsh advfirewall firewall set rule name="File and Printer Sharing (SMB-In)" new enable=yes
   netsh advfirewall firewall set rule name="Windows Management Instrumentation (DCOM-In)" new enable=yes
   netsh advfirewall firewall set rule name="Windows Management Instrumentation (WMI-In)" new enable=yes

The above commands enable the firewall rules for all profiles. If you want to enable the firewall rules for one profile only, add the profile parameter to the netsh command (e.g. netsh advfirewall firewall set rule name="File and Printer Sharing (SMB-In)" new enable=yes profile=Domain).

The System Access Data field in the Disks step (Create Cloning Image wizard) (see section "Disks step (Create Cloning Image wizard)" on page 172) can contain the IP address or the name of the server. If a server name is entered here, the following Inbound Rules must be enabled in addition:

   File and Printer Sharing (NB-Datagram-In)
   File and Printer Sharing (NB-Name-In)
   File and Printer Sharing (NB-Session-In)

5.5.2 Sysprep restrictions

Note the following restrictions for Sysprep:

- Sysprep can only be called 3 times on Windows 2008 or Windows 2012. Make sure that Sysprep has been called no more than twice on the reference system; otherwise Sysprep will crash during the deployment process.

- Not all server roles support Sysprep.

  The following server roles will no longer function after deployment of Windows 2008 R2 or Windows 2012:

  - Active Directory Certificate Server (AD CS)
  - Active Directory Domain Services (AD DS)
  - Active Directory Federation Services (AD FS)
  - Active Directory Lightweight Directory Services (AD LDS)
  - Active Directory Rights Management Server (AD RMS)
  - DHCP Server
  - DNS Server
  - Fax Server
  - Network Policy and Access Services
  - UDDI Services
  - Windows Deployment Services

  The following server roles will no longer function after deployment of Windows 2008:

  - Active Directory Certificate Server (AD CS)
  - Active Directory Domain Services (AD DS)
  - Active Directory Federation Services (AD FS)
  - Active Directory Lightweight Directory Services (AD LDS)
  - Active Directory Rights Management Server (AD RMS)
  - DNS Server
  - Fax Server
  - File Services
  - Hyper-V
  - Network Policy and Access Services
  - Print Services
  - UDDI Services
  - Windows Deployment Services

  The following server roles will continue to function with restrictions after deployment of Windows 2008, Windows 2008 R2 or Windows 2012:

  - Terminal Services/Remote Desktop Services: not supported in scenarios where the master Windows image is joined to a domain.
  - Web Server (Internet Information Services): does not support Sysprep with encrypted credentials in applicationHost.config.


6 Mass Cloning<br />

The work area Mass Cloning allows you to:<br />

l Create cloning images.<br />

l Create cloning groups (an image is assigned to the group).<br />

l Clone one or more servers with a cloning image.<br />

l Clone a group or members of a group.<br />

If the target server has an UEFI BIOS, each LAN port must be manually<br />

configured for either Legacy PXE or UEFI PXE boot. If the target<br />

server has a GPT boot disk, UEFI PXE boot must be<br />

configured. Otherwise Legacy PXE or UEFI PXE boot can be configured.<br />

If the target server has an UEFI BIOS, UEFI: PXE boot (or Legacy<br />

Boot - if Legacy PXE boot is configured) must be set before UEFI<br />

Shell in BIOS boot order.<br />

6.1 Creating a cloning image<br />

You can create cloning images. A cloning image can be cloned to a group of<br />

servers to be installed with the same configuration parameters. You can also<br />

clone an image without specifying a deployment group. To do this, select a<br />

server from the list and select Clone (Servers by Cloning Image group) or<br />

Clone with Image (All Servers group) from the context menu.<br />

After starting the image creation process (by clicking on Finish), the cloning<br />

module now boots the reference system referenced by its MAC address with<br />

a WinPE boot image via PXE and starts the image creation tool on WinPE.<br />

The automatic remote power-on and the remote PXE boot configuration of the BIOS require that the server can be managed by a management blade (blade system), an RSB, or a BMC that supports IPMI 1.5 over LAN. For non-blade systems you must have specified the correct BMC settings. If the server cannot be managed in one of these ways, a dialog appears asking you to initiate a PXE boot manually. Scheduled tasks for non-manageable servers are not possible.
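For illustration, the remote power-on over IPMI corresponds to what the standard ipmitool CLI does. The following Python sketch wraps ipmitool with placeholder BMC address and credentials; Deployment Manager issues the equivalent IPMI requests itself rather than calling ipmitool:

import subprocess

# Illustration of IPMI-over-LAN power control as described above
# (placeholder BMC address and credentials).
BMC = ["-I", "lan", "-H", "192.0.2.50", "-U", "admin", "-P", "secret"]

subprocess.run(["ipmitool", *BMC, "chassis", "power", "on"], check=True)
# A later forced shutdown corresponds to:
# subprocess.run(["ipmitool", *BMC, "chassis", "power", "off"], check=True)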

6.1.1 Create Cloning Image wizard

The Create Cloning Image wizard allows you to create cloning images.

To open the wizard, select Create Cloning Image from the context menu of a selected server in the Servers view.

6.1.1.1 Task Name step (Create Cloning Image wizard)

Task Name is the first step in the Create Cloning Image wizard. In this step you must enter a task name.



Figure 38: Task Name step

Task Name
Name of the task. Any string is allowed except the characters & and ".

Image Information
Optional: Specify any textual information that you want to be saved with the image. This information will be stored in a file with the suffix .txt.

6.1.1.2 Deployment Server step (Create Cloning Image wizard)

Deployment Server is the next step in the Create Cloning Image wizard. In this step you can select the deployment server.

Figure 39: Deployment Server step

Deployment Server
Name of the deployment server which is used for this task.

6.1.1.3 Image Path and Name step (Create Cloning Image wizard)

Image Path and Name is the next step in the Create Cloning Image wizard. In this step you can define the location (path) and the name of the image.


Figure 40: Image Path and Name step

Disk Images
Displays existing image repositories in the tree view on the left. The table on the right displays the images in the repository. Select a folder in the repository.

Image Filename
Name of the image. You can use the following variables in the file name:

%D  Replaced by the day of the month (01, 02, ..., 31).
%M  Replaced by the month (01, 02, ..., 12).
%Y  Replaced by the year (4 digits).
%S  Replaced by the name of each server of which an image will be created.
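As an illustration, the following Python sketch shows how such a file name template could expand (hypothetical helper; Deployment Manager performs the substitution internally):

from datetime import date

def expand_image_filename(template: str, server_name: str) -> str:
    # Expand the %D/%M/%Y/%S variables described above.
    today = date.today()
    return (template
            .replace("%D", f"{today.day:02d}")
            .replace("%M", f"{today.month:02d}")
            .replace("%Y", f"{today.year:04d}")
            .replace("%S", server_name))

# e.g. "backup_%S_%Y-%M-%D" -> "backup_node01_2012-09-15"
print(expand_image_filename("backup_%S_%Y-%M-%D", "node01"))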

6.1.1.4 Options step (Create Cloning Image wizard)

Options is the next step in the Create Cloning Image wizard. In this step you can specify server options before and after deployment.

Figure 41: Options step

Force System Shutdown before Deployment
Force a system shutdown before creating the image. Operating-system-specific ServerView agents must be installed on the relevant server.

Shutdown Method
Shutdown method:

Graceful (via ServerView Agents)
This shutdown method makes use of the ServerView agent functionality. ServerView agents running on the target systems are instructed to shut down the systems. This method requires ServerView agents to be installed and configured on all systems. It also requires specification of a shutdown user name and password.

ACPI
Deployment Manager will try to perform an ACPI shutdown of the system. If the running operating system supports ACPI, it should save unsaved data and clean up the file systems.

Forced
A hard power-off of all running target servers will be initiated if these servers support remote power-off by management blade or baseboard management controller. In this case unsaved data will be lost. Normally this is not a problem, since the disk will be overwritten by a new disk image.

Shutdown Username
Preconfigured user name.

Shutdown Password/Repeat Shutdown Password
Preconfigured user password.

The shutdown user name and password are used to authenticate this shutdown request at the ServerView agent of the corresponding server. The ServerView Deployment Manager service remembers the last-used shutdown user name and password and sets them as the defaults. The last-used shutdown user name and password are lost when the ServerView Deployment Manager service is restarted.

System Status after Deployment
System status after creating the image:

Shut down System     The server will be shut down at the end of the process.
Keep System Running  The server will be kept running at the end of the process.

GUID Check
The GUID of a server as specified in the deployment configuration of that server is checked against the GUID that is sent as part of the PXE boot request from that server. The deployment of the server will only start if the GUID sent in the PXE boot request matches the configured GUID.
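In essence, the check reduces to a GUID comparison that ignores case and formatting. A minimal Python sketch (hypothetical helper, not the product's code):

from uuid import UUID

def guid_check(configured_guid: str, pxe_request_guid: str) -> bool:
    # Deployment may only start when both GUIDs match.
    return UUID(configured_guid) == UUID(pxe_request_guid)

# The PXE boot request carries the SMBIOS system GUID of the booting server.
assert guid_check("44454C4C-0000-1020-8020-80C04F202020",
                  "44454c4c-0000-1020-8020-80c04f202020")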

6.1.1.5 Disks step (Create Cloning Image wizard)

Disks is the next step in the Create Cloning Image wizard. In this step you specify the disk from which the image will be saved. You can only select one disk.

Figure 42: Disks step

Logical Disk Number
Select the logical disk number from which the image is to be created. By default logical disk 0, the first logical device, is saved.

If there are local disks and SAN disks connected to the system, the disk numbering depends on the driver load order (for the deployment platform WinPE MDP). In most configurations with an Emulex board (e.g. in combination with an LSI SAS IME board) the SAN disks are listed before the local disks; if you accept the default value 0 for the logical disk number, the first SAN disk will be saved. As a workaround you could unplug the Fibre Channel cables before starting image creation.

Raw Mode
Generates the image in raw disk mode. All sectors of the disk will be saved in the image. This image can only be deployed to servers whose disks have the same size and the same disk layout.

File System Independent
This method is used for unsupported file systems. When selected, the complete partition information is copied to the image file and must be cloned to a partition of the same size. For more information see section "File-System-Independent image creation (raw image creation)" on page 155.

File System Dependent
When selected, only the used blocks of a file system will be saved in the image. For more information see section "File-System-Dependent image creation" on page 154.

This functionality is supported for the following file systems:

- FAT, FAT32
- NTFS, NTFS5
- EXT2, EXT3, EXT4
- Reiser FS versions 3.5 and 3.6 on SuSE SLES 9 and 10

Other file systems, and especially other versions of the Reiser file system, are not supported by this functionality.

If a disk contains partitions with unsupported file system types, those partitions are saved in full, including the unused blocks. For partitions with a supported file system, only the used blocks are saved. This reduces the size of the image files and also increases the speed of image generation. If you want to restore an image to a disk whose size differs from the source disk, this option should be specified.

Verify File System
The file system is checked before the image creation process is started.

Raise Error for Unsupported File Systems
Only use this option if the File System Dependent option is checked. If an unsupported file system is detected, the image creation process will raise an error and stop.

Compress Image
The image file is compressed during image creation.

Fast Image Creation
When this option is checked, the image creation process uses an optimized method to access the target disk. This optimization can be applied to EXT2, EXT3 and EXT4 file systems when the option File System Dependent is checked. For an image that was created with this option, the option Expand last partition to whole disk in the System Preparation step of the cloning is not available, because applying it to such an image would raise an error.

Operating System
Specify the operating system installed on the reference system: Linux, Windows 2003, Windows 2008 or Windows 2012.

- Image creation of a Windows reference system only works if the interface used for the PXE boot is enabled on the reference system.

- Before you can create an image of a Windows 2008 or Windows 2012 reference system, the following three inbound rules must be enabled in the Windows firewall on the reference system: File and Printer Sharing (SMB-In), Windows Management Instrumentation (DCOM-In) and Windows Management Instrumentation (WMI-In). See the sketch after these notes.

- If an image of a Windows 2008 R2 or Windows 2012 reference system with static IP addresses was created, the following IP settings on the reference system are lost and must be restored manually: Default Gateway, DNS Servers, WINS Server and NetBIOS over TcpIp.

- Image creation of a Windows 2008 R2 or Windows 2012 reference system with static IP addresses fails if there is a gateway between the reference system and the deployment server. During rollback the reference system tries to connect back to the deployment server; this fails because the Default Gateway was not restored (see above). As a workaround, save the .img, .cfg and .txt files away while the rollback is still running; the image is okay and can be used for cloning. If you created an image of a GPT boot disk, manually add an empty section NVRAM at the end of the .cfg file (this allows you to select NVRAM Information Restoration during cloning).

Note the following restriction for some PRIMERGY servers and server blades (e.g. TX150 S7, TX300 S5, TX200 S6, BX924 S3) and for PRIMEQUEST:

If iRMC Support, MMB SNMP Support or MMB Remote Manager Support is selected in the Remote Management Ports step of the Deployment Configuration wizard, the local hard disk must be placed before PXE in the BIOS boot order. Otherwise the image creation of a Windows reference system fails, because WinPE instead of the Windows operating system will be booted first. This restriction does not apply to Wake On LAN Support or Manual Management, where PXE is required to be in first place in the BIOS boot order.
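The three inbound firewall rules can be enabled on the reference system with the standard netsh CLI, as in this Python sketch (the rule names are locale-dependent; on a non-English Windows installation they must be adapted):

import subprocess

RULES = [
    "File and Printer Sharing (SMB-In)",
    "Windows Management Instrumentation (DCOM-In)",
    "Windows Management Instrumentation (WMI-In)",
]

for rule in RULES:
    # Enable each named inbound rule in the Windows firewall.
    subprocess.run(
        ["netsh", "advfirewall", "firewall", "set", "rule",
         f"name={rule}", "new", "enable=Yes"],
        check=True,
    )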

Additional Parameters for Linux:

Location of /etc
Specify the location of the /etc directory of the system from which you want to generate the image. By default this is /dev/sda2.

Additional Parameters for Windows 2003:

System Access Data
Displays either the IP address or the host name of the selected server. This value is used to access the system during system preparation.

Administrator Account
Name of an account with administrator rights. This must be a local account (not a domain account) on the reference system. In some situations (e.g. when the reference system is a domain member) it is advisable to use the notation <hostname>\Administrator instead of just Administrator, where <hostname> is replaced by the host name of the reference system.

Password / Repeat Password
Administrator password of the specified administrator account on the reference system.

Windows Product ID
The product ID is mandatory for all Windows 2003 DVDs.

Additional Parameters for Windows 2008 and Windows 2012:

System Access Data
Displays either the IP address or the host name of the selected server. This value is used to access the system during system preparation.

Administrator Account
Name of an account with administrator rights. This must be a local account (not a domain account) on the reference system. In some situations (e.g. when the reference system is a domain member) it is advisable to use the notation <hostname>\Administrator instead of just Administrator, where <hostname> is replaced by the host name of the reference system.

Password / Repeat Password
Administrator password of the specified administrator account on the reference system.

Installation Medium
Installation medium that was used for the installation of the reference system:

Fujitsu OEM  Fujitsu OEM (SLP-2) multi-language DVD.
Microsoft    All other DVDs.

Language Version
Corresponding language version of the Fujitsu OEM DVD. Deployment Manager does not automatically recognize the installation language.

Windows Product ID
The product ID is mandatory for Windows 2008 and Windows 2012 Retail DVDs and Windows 2012 Volume DVDs. For Windows 2008 Volume DVDs leave the Product ID empty. Do not enter a MAK (Multiple Activation Key) here. The MAK key can be entered during image deployment.

6.1.1.6 Bios Boot Type step (Create Cloning Image wizard)

Bios Boot Type is the next step in the Create Cloning Image wizard.

This step is only displayed when iRMC Support has been chosen as the method for remote management in the Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.

- The selection here is ignored if the target server does not have a UEFI BIOS.
- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value specified here must match the manual configuration in the BIOS.

Figure 43: Bios Boot Type step

"PC compatible" Boot (legacy)
Initiates a Legacy PXE boot at the target server.

Extensible Firmware Interface Boot (EFI)
Initiates a UEFI PXE boot at the target server.

6.1.1.7 Scheduling step (Create Cloning Image wizard)

Scheduling is the final step in the Create Cloning Image wizard. In this step you can create the image generation task as a scheduled task.

Figure 44: Scheduling step

Time Unit to perform this Task (only for later option)
Specifies how often you want the task to be performed: Once, Daily, Weekly, Monthly. Additional settings are shown depending on your selection.

Select Date and Time to start (only for later option)
Specifies the start date (now or later) when you want the task to be performed. If you select later, you can also specify the start time for the task.

Perform this Task (only for Daily)
Selects whether the task should be performed Every Day, Weekdays or Every x days.

Select days of the week you want this task to start (only for Weekly)
Specifies that the task should be executed every week or every n weeks (n = 1..52). You can also select the days of the week.

Select the day and the month you want this task to start (only for Monthly)
Specifies for which months of the year the task should be executed. You can also specify on which day of the month the task should be executed. Alternatively you can specify a weekday and whether the task should be executed in the first, second, third, fourth or last week of a month.

Retry Count
Specifies the number of retries for this task if the job fails. A value between 0 and 10 can be specified.

Retry Interval (minutes) (only for later option)
The time (in minutes) before the next attempt if previous attempts at a task failed and the number of retries does not exceed the retry counter. A value between 1 and 360 minutes can be specified.

Task Start Time Window (minutes) (only for later option)
This value defines the maximum period (in minutes) for retries of a task. Outside this period no retries will be performed, even if not all of the retries specified by the retry counter have been used.

It can happen that a task is handed over to the deployment server but is only queued, because too many other tasks are already running. If the Task Start Time Window elapses while a task is in this state, the task will also be canceled. A running task will not be canceled when the Task Start Time Window elapses.
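The interplay of Retry Count and Task Start Time Window can be summarized in a small Python sketch (assumed semantics derived from the description above, not the scheduler's actual code):

from datetime import datetime, timedelta

def should_retry(start: datetime, now: datetime, attempts_made: int,
                 retry_count: int, start_window_min: int) -> bool:
    # A failed or still-queued task is retried only while attempts
    # remain AND the Task Start Time Window has not yet elapsed.
    window_open = now - start <= timedelta(minutes=start_window_min)
    return attempts_made <= retry_count and window_open

# A task started 90 minutes ago with a 60-minute window gets no further
# retries, even if the retry counter is not exhausted.
t0 = datetime(2012, 9, 1, 8, 0)
print(should_retry(t0, t0 + timedelta(minutes=90), 1, 5, 60))  # False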

Enable Segmentation
Enable this option to specify the segmentation size.

Segmentation Size
Specifies the maximum number of targets that will be deployed in parallel. This can be useful when a job with a very large number of targets is created.
6.1.1.8 Action after starting the "Create Cloning Image" task

The target server basically behaves as follows after the Create Cloning Image task is started. Please also refer to "Image creation of a Windows reference system" on page 158.

Windows reference system

When saving an image of a Windows reference system, a local boot of the reference system must first be performed to install a Client Agent package. The following steps are carried out:

1. After the task is started, the client is powered on (see the notes below).
2. The client sends a PXE request. However, this request is canceled.
3. The client tries the next boot device (local hard drive) and boots the Windows operating system.
4. The Client Agent package is copied from the deployment server to the Windows operating system that has just booted.
5. After the Client Agent package has been copied, the Windows operating system is rebooted.
6. The client is booted with WinPE.
7. Saving of the image starts.
8. After the image has been saved, the client reboots.
9. The client boots the Windows operating system and deletes the Client Agent package.
10. The client is shut down.

Linux reference system

When saving an image of a Linux reference system, the following steps are carried out:

1. After the task is started, the client is powered on (see the notes below).
2. The client sends a PXE request.
3. The client is booted with WinPE.
4. Saving of the image starts.
5. After the image has been saved, the client is shut down.

- When you select iRMC Support, MMB SNMP Support, MMB Remote Manager Support, or Wake on LAN Support in the Remote Management Ports step of the Deployment Configuration wizard, Deployment Manager turns on the client power automatically after the task is started. When you select Manual Management, you need to turn on the client manually.

- When you select iRMC Support, MMB SNMP Support, or MMB Remote Manager Support in the Remote Management Ports step of the Deployment Configuration wizard, PXE is automatically set to first place in the BIOS boot order when the client is started.

- When you select Wake on LAN Support or Manual Management in the Remote Management Ports step of the Deployment Configuration wizard, you need to set PXE to first place in the BIOS boot order of the client before you start the task.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step of the Deployment Configuration wizard, you need to turn off the target server before you start the deployment. You can perform the shutdown manually or by using the Force System Shutdown before Deployment option in the task you want to execute.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step of the Deployment Configuration wizard, you need to set the user name and the password for the iRMC or MMB in the Deployment Configuration wizard before starting the task.

- When you select Keep System Running in the System Status after Deployment option, the client reboots after finishing the task.

- When you execute a customer script, the client does not perform a shutdown after finishing the task. You need to perform the shutdown in your script or manually.


6.2 Cloning groups

Before you start the cloning process you need to consider which servers are to get which image. Then you can build cloning groups consisting of one or more servers and one image file.

Cloning groups with several servers help to reduce the administrator's workload by organizing servers with the same configuration in one logical group. The servers that are to get the same image can then be cloned simultaneously by choosing just one deployment group.

You can add, copy, edit or delete cloning groups.

You can also clone an image without specifying a deployment group. To do this, select a server from the list and select Clone with Image from the context menu.

6.2.1 Adding a cloning group

The Add Cloning Group wizard allows you to create cloning groups.

To open the wizard, select Add Cloning Group from the context menu in the Servers by Cloning Image group.

6.2.1.1 Group Name step (Add Cloning Group wizard)

Group Name is the first step in the Add Cloning Group wizard. In this step you can enter a group name.


Figure 45: Group Name step

Group Name
Name of the group.

Description
Optional: Description of the group.

6.2.1.2 Cloning Image step (Add Cloning Group wizard)

Cloning Image is the next step in the Add Cloning Group wizard. In this step you can choose the image file for the deployment.

Figure 46: Cloning Image step

Image File
Displays the selected image.

NVRAM Information Restoration
During image creation the NVRAM boot entries are saved if a UEFI PXE boot of a reference system with a GPT boot disk is done. The .cfg file contains a section NVRAM if boot entries have been saved. You can select here whether the NVRAM boot entries should be restored on the target.

Update boot entries (delete the original entries)
The boot entries from the reference system are restored on the target. The original entries on the target are deleted. You should select this option when restoring a boot disk.

Do not restore boot entries
You should select this option when restoring a data disk.

6.2.1.3 Group Members step (Add Cloning Group wizard)

Group Members is the final step in the Add Cloning Group wizard. In this step you can select the servers for the group.

Figure 47: Group Members step

Show compatible servers only
When this option is selected, the Available Servers list displays only those servers which are compatible with the selected image.

>>
Copies all servers from the Available Servers list to the Selected Servers list.



6.2.2 Copying a cloning group

6.2.2.1 Group Name step (Copy Cloning Group wizard)

Group Name is the first step in the Copy Cloning Group wizard. In this step you can enter a group name.

Figure 48: Group Name step

Group Name
Name of the group.

Description
Optional: Description of the group.

6.2.2.2 Cloning Image step (Copy Cloning Group wizard)

Cloning Image is the next step in the Copy Cloning Group wizard. In this step you can choose the image file for the deployment.


Figure 49: Cloning Image step

Image File
Displays the selected image.

NVRAM Information Restoration
During image creation the NVRAM boot entries are saved if a UEFI PXE boot of a reference system with a GPT boot disk is done. The .cfg file contains a section NVRAM if boot entries have been saved. You can select here whether the NVRAM boot entries should be restored on the target.

Update boot entries (delete the original entries)
The boot entries from the reference system are restored on the target. The original entries on the target are deleted. You should select this option when restoring a boot disk.

Do not restore boot entries
You should select this option when restoring a data disk.

6.2.2.3 Group Members step (Copy Cloning Group wizard)

Group Members is the final step in the Copy Cloning Group wizard. In this step you can select the servers for the group.

Figure 50: Group Members step

Show compatible servers only
When this option is selected, the Available Servers list displays only those servers which are compatible with the selected image.

>>
Copies all servers from the Available Servers list to the Selected Servers list.



6.2.3 Editing a cloning group

6.2.3.1 Group Name step (Edit Cloning Group wizard)

Group Name is the first step in the Edit Cloning Group wizard. In this step you can enter a group name.

Figure 51: Group Name step

Group Name
Name of the group.

Description
Optional: Description of the group.

6.2.3.2 Cloning Image step (Edit Cloning Group wizard)

Cloning Image is the next step in the Edit Cloning Group wizard. In this step you can choose the image file for the deployment.


Figure 52: Cloning Image step

Image File
Displays the selected image.

NVRAM Information Restoration
During image creation the NVRAM boot entries are saved if a UEFI PXE boot of a reference system with a GPT boot disk is done. The .cfg file contains a section NVRAM if boot entries have been saved. You can select here whether the NVRAM boot entries should be restored on the target.

Update boot entries (delete the original entries)
The boot entries from the reference system are restored on the target. The original entries on the target are deleted. You should select this option when restoring a boot disk.

Do not restore boot entries
You should select this option when restoring a data disk.

6.2.3.3 Group Members step (Edit Cloning Group wizard)

Group Members is the final step in the Edit Cloning Group wizard. In this step you can select the servers for the group.

Figure 53: Group Members step

Show compatible servers only
When this option is selected, the Available Servers list displays only those servers which are compatible with the selected image.

>>
Copies all servers from the Available Servers list to the Selected Servers list.



6.3 Cloning an image

6.3.1 Clone wizard

The Clone wizard allows you to clone a group of servers with an image.

To open the wizard, select Clone from the context menu in the Servers by Cloning Image group.

6.3.1.1 Task Name step (Clone wizard)

Task Name is the first step in the Clone wizard. In this step you can enter a task name.

Figure 54: Task Name step

Task Name
Name of the task. Any string is allowed except the characters & and ".


6.3.1.2 Deployment Server step (Clone wizard)

Deployment Server is the next step in the Clone wizard. In this step you can select the deployment server.

Figure 55: Deployment Server step

Deployment Server
Name of the deployment server which is used for this task.

6.3.1.3 Disks step (Clone wizard)

Disks is the next step in the Clone wizard. In this step you can select the disk number to which the image is to be restored.

Figure 56: Disks step

Restore to Logical Disk Number
Optional: Select the disk on which the image is to be deployed.

If there are local disks and SAN disks connected to the system, the disk numbering depends on the driver load order (for the deployment platform WinPE MDP). In most configurations with an Emulex board (e.g. in combination with an LSI SAS IME board) the SAN disks are listed before the local disks; if you accept the default value 0 for the logical disk number, the first SAN disk will be overwritten. As a workaround you could unplug the Fibre Channel cables before starting the cloning.

Partitions
In the first column select whether the partition should be restored.

The columns in the table have the following meanings:

Column        Meaning
Volume Label  Volume label
Attributes    Attributes (and Active flag)
Format        File system
Size          Size of the partition
Used          Percentage of used space

When the system partition is restored, all partitions before the system partition (for example the ESP and MSR partitions of a GPT disk) must be restored as well.

6.3.1.4 System Preparation step (Clone wizard)

System Preparation is the next step in the Clone wizard. In this step you can select the system preparation before image deployment.

Figure 57: System Preparation step

System Preparation Method
Specify a system preparation method. Depending on the selection, further parameters are displayed:

Unchanged
When this method is selected, no further system preparation will be done before the disk image is cloned to the target servers. This requires that system preparation (configuration of the RAID controller) was done manually for all target servers before the image cloning is started.

All Primergy
This method allows a RAID configuration for all RAID controllers supported by Installation Manager.


The following parameters are displayed:

Controller Vendor
Select the controller vendor.

Controller Family
Select the controller family.

Controller Model
Select the controller model.

Controller Number
Select the controller number.

Manual Configuration
You can define additional parameters:

Raid Level
Select the RAID level.

Number of Disks
Select the number of disks.

Use Hot Spare
A standby hard disk can be used to replace a defective hard disk.

- To create a RAID array on an SX940 storage blade with a Lynx controller, select LSI/Mylex RAID Controller as the controller vendor, LSI IME SAS as the controller family, Any as the controller model, and 1 as the controller number.

- To create a RAID array on an SX940 storage blade with a Cougar controller, select LSI/Mylex RAID Controller as the controller vendor, LSI MegaRAID SAS as the controller family, RAID 5/6 SAS based on LSI MegaRAID as the controller model, and 0 as the controller number.

Preparation Boot Image
The following parameters are displayed:

Boot Image Path
Displays the path where the selected image is stored.

Boot Image Type
Select the boot image type:

Floppy Disk Image  The boot image is a "virtual" floppy image (one file of size 1.4 MB).
Bootstrap Image    The boot image consists of multiple parts; the first part is PXE-booted and requests the rest by TFTP download.

Expand last partition to whole disk
The last partition of the restored disk image will be expanded to use the rest of the whole disk.

6.3.1.5 Settings step (Clone wizard)

Settings is the next step in the Clone wizard. This step allows you to specify the deployment method and the system shutdown method before cloning the image.


Figure 58: Settings step

Deployment Method
Deployment method for the cloning:

Multicast
The disk image data is sent via multicast IP packets over the network. This means that a data packet is sent once to all target servers involved, not n times (where n is the number of servers involved). This allows you to deploy a large number of servers in a short time. You might experience problems with the multicast method if there is a router between the deployment server and a target server, since not all routers allow multicast transfer.

Unicast
The disk image data is sent via unicast IP packets over the network. This means that a data packet is sent to each server individually. When using the unicast deployment method, only 4 servers can be deployed in parallel. If your deploy job involves more servers, the deployment of the other servers is queued. The other servers will be deployed as soon as the deployment of one server has finished.
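The difference between the two methods can be illustrated with a minimal UDP sketch in Python (the group address, port and payload are arbitrary illustration values, not Deployment Manager's actual transfer protocol):

import socket

CHUNK = b"disk-image-block-0001"          # illustrative payload
TARGETS = ["192.0.2.10", "192.0.2.11"]    # example target servers

# Multicast: one send reaches every subscribed target.
mcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mcast.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
mcast.sendto(CHUNK, ("239.255.0.1", 5000))  # arbitrary group/port
mcast.close()

# Unicast: the same chunk must be sent once per target.
ucast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for ip in TARGETS:
    ucast.sendto(CHUNK, (ip, 5000))
ucast.close()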

Force System Shutdown before Deployment
Deployment Manager will try to shut down the affected target systems before the deployment process is started:

- When this option is checked, the target servers will be shut down before the deployment is started.
- When this option is not checked, the target servers must already be shut down, otherwise the deployment will fail.

Depending on the selected shutdown method, a shutdown might be a hard power-off or a graceful shutdown. In the case of a hard power-off, unsaved data will be lost. This might not be a problem, since the disk will be overwritten by a new disk image, but data on a second logical disk might be lost.

Shutdown Method
Select the shutdown method:

Forced
A hard power-off of all running target servers will be initiated if these servers support remote power-off by management blade or baseboard management controller. In this case unsaved data will be lost. Normally this is not a problem, since the disk will be overwritten by a new disk image.

ACPI
Deployment Manager will try to perform an ACPI shutdown of the system. If the running operating system supports ACPI, it should save unsaved data and clean up the file systems.

Graceful (via ServerView Agent)
This shutdown method makes use of the ServerView agent functionality. ServerView agents running on the target systems are instructed to shut down the systems. This method requires that ServerView agents are installed on all systems and are configured to allow this functionality. It also requires the specification of a shutdown user name and password.

6.3.1.6 Post Deployment step (Clone wizard)

Post Deployment is the next step in the Clone wizard. This step allows you to specify options which are used after cloning.

Figure 59: Post Deployment step

Customer Script
Specifies a customer script that is executed on the cloned system at the end of the deployment process. You can use this to modify and configure the cloned system for your requirements. To specify a customer script, you must specify its path name as follows:

- For Windows target systems:
  The path name should be the network path name of the script in UNC notation. This means that you must use a network share where you store the script. You can also specify optional parameters for this script.

  Example:
  \\server\share\path\script_name [parameters]

- For Linux target systems:
  The path name has one of the following formats:
  \\server\path\script_name [parameters]
  //server/path/script_name [parameters]

  server  FTP server name or IP address of the FTP server.
  path    Path to the script relative to the root of the FTP server.

If you select the Customer Script option, the System Status after Deployment option is ignored. If you want to reboot after executing the customer script, you must add a reboot command at the end of the script.
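A minimal customer script for a Windows target might look as follows (purely illustrative; it assumes a Python interpreter is available on the cloned system, and the file name and inventory path are invented for this example):

# customer_script.py - illustrative post-deployment script, deployed as
# \\server\share\path\customer_script.py per the UNC convention above.
import socket
import subprocess

# Example task: record the new host name for an inventory file.
with open(r"C:\deploy-info.txt", "w") as f:
    f.write(f"cloned host: {socket.gethostname()}\n")

# Because System Status after Deployment is ignored when a customer
# script is used, the script itself must trigger the final reboot.
subprocess.run(["shutdown", "/r", "/t", "0"], check=True)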

Windows Domain User
If you deploy a Windows image (not a snapshot image), you can specify a Windows domain.

In order to use the domain join functionality, the program netdom.exe from ServerView Suite DVD 1 or from a Windows installation CD (support directory SUPPORT\TOOLS\SUPPORT.CAB) must be copied to the following directories:

Windows 2003 and Windows 2008:
C:\Program Files\Fujitsu\DeploymentService\PMKit\Win2003\boot\ipadj

Windows 2003 X64, Windows 2008 X64 and Windows 2012:
C:\Program Files\Fujitsu\DeploymentService\PMKit\Win2003x64\boot\ipadj

ServerView Suite DVD 1 contains the directory RD_FILES, which contains the corresponding files for Windows 2003/2008/2012 (32-bit version only). During installation of Deployment Manager you can optionally copy the file to the deployment server. In this case you are asked to specify the location of the file.

The domain join functionality only works with images that were generated with RemoteDeploy as of version 3.30, and only if the file was copied to the correct directory before the image was generated. The file must be part of the image in order to work correctly. If the file is available on the deployment server, it is automatically copied to the image during image generation. If the file is not part of the image, the deployment job will time out, and more information will be displayed on the deployed target.
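For illustration, a domain join performed with netdom.exe has the following shape (placeholder computer, domain and account names; the exact arguments used by the post-preparation process are internal to Deployment Manager):

import subprocess

# Sketch of a netdom-based domain join as performed on the cloned system.
subprocess.run(
    ["netdom.exe", "join", "CLONED-HOST",
     "/Domain:example.local",
     "/UserD:EXAMPLE\\joinaccount",
     "/PasswordD:secret"],
    check=True,
)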

Domain Name
If this field is not empty, the post-preparation process running on the cloned Windows systems will try to join the specified Windows domain.

User Name
Specifies an account to be used for the domain join operation. This must be an account that has the right to perform a domain join on the domain server. By default this is a domain administrator account.

Password / Repeat Password
Password/Repeat password for the account.

Windows 2008 and Windows 2012 Volume Activation
If you deploy a Windows 2008 or Windows 2012 image created from a volume installation, the cloned image can be activated automatically after the cloning process:

Activation Method
Select the activation method: MAK (Multiple Activation Key) or KMS (Key Management Service).

Internet Proxy
Optional: Specify a proxy server (IP address or name) to connect to the Internet. If you want to specify a proxy server by host name, you also need to select the is a hostname option. You can also enter a port number.

KMS Server
If you select the activation method KMS (Key Management Service), you must specify the IP address or the name of the KMS server. Optionally, you can enter the KMS port number.

Key
If you select the activation method MAK (Multiple Activation Key), enter the activation key.
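For reference, manual MAK and KMS activation on Windows is typically driven by the built-in slmgr.vbs script; the sketch below shows the equivalent commands (placeholder key and server values; these are not necessarily the exact calls Deployment Manager issues):

import subprocess

MAK_KEY = "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"  # placeholder MAK
KMS_SERVER = "kms.example.local:1688"       # placeholder KMS host:port

def activate_mak():
    # Install the MAK key, then activate online.
    subprocess.run(["cscript", "//nologo", "slmgr.vbs", "/ipk", MAK_KEY], check=True)
    subprocess.run(["cscript", "//nologo", "slmgr.vbs", "/ato"], check=True)

def activate_kms():
    # Point the client at the KMS host, then activate.
    subprocess.run(["cscript", "//nologo", "slmgr.vbs", "/skms", KMS_SERVER], check=True)
    subprocess.run(["cscript", "//nologo", "slmgr.vbs", "/ato"], check=True)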

Set hostname in /etc/hosts
If you deploy a Linux Red Hat image, the line containing 127.0.0.1 in /etc/hosts is overwritten after cloning:

For Do not set: 127.0.0.1 localhost.localdomain localhost
For Set:        127.0.0.1 <hostname>
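The effect of Set can be sketched as a one-line rewrite of /etc/hosts (assumed semantics based on the description above):

import re

def set_loopback_hostname(hosts_text: str, hostname: str) -> str:
    # Replace the 127.0.0.1 line with the cloned host's name.
    return re.sub(r"^127\.0\.0\.1\s.*$",
                  f"127.0.0.1 {hostname}",
                  hosts_text, count=1, flags=re.MULTILINE)

before = "127.0.0.1 localhost.localdomain localhost\n::1 localhost\n"
print(set_loopback_hostname(before, "node01"))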

System Status after Deployment
System status of the target systems after the cloning:

Shut down System     The target servers involved will be shut down at the end of the process.
Keep System Running  The target servers involved will be kept running at the end of the process.

6.3.1.7 Bios Boot Type step (Clone wizard)

Bios Boot Type is the next step in the Clone wizard.

This step is only displayed when iRMC Support has been chosen as the method for remote management in the Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.

- The selection here is ignored if the target server does not have a UEFI BIOS.
- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value specified here must match the manual configuration in the BIOS.



Figure 60: Bios Boot Type step

"PC compatible" Boot (legacy)
Initiates a Legacy PXE boot at the target server.

Extensible Firmware Interface Boot (EFI)
Initiates a UEFI PXE boot at the target server.

6.3.1.8 Scheduling step (Clone wizard)

Scheduling is the final step in the Clone wizard. This step allows you to clone the image as a scheduled task.

Figure 61: Scheduling step

Time Unit to perform this Task (only for later option)
Specifies how often you want the task to be performed: Once, Daily, Weekly, Monthly. Additional settings are shown depending on your selection.

Select Date and Time to start (only for later option)
Specifies the start date (now or later) when you want the task to be performed. If you select later, you can also specify the start time for the task.

Perform this Task (only for Daily)
Selects whether the task should be performed Every Day, Weekdays or Every x days.

Select days of the week you want this task to start (only for Weekly)
Specifies that the task should be executed every week or every n weeks (n = 1..52). You can also select the days of the week.

Select the day and the month you want this task to start (only for Monthly)
Specifies for which months of the year the task should be executed. You can also specify on which day of the month the task should be executed. Alternatively you can specify a weekday and whether the task should be executed in the first, second, third, fourth or last week of a month.

Retry Count
Specifies the number of retries for this task if the job fails. A value between 0 and 10 can be specified.

Retry Interval (minutes) (only for later option)
The time (in minutes) before the next attempt if previous attempts at a task failed and the number of retries does not exceed the retry counter. A value between 1 and 360 minutes can be specified.

Task Start Time Window (minutes) (only for later option)
This value defines the maximum period (in minutes) for retries of a task. Outside this period no retries will be performed, even if not all of the retries specified by the retry counter have been used.

It can happen that a task is handed over to the deployment server but is only queued, because too many other tasks are already running. If the Task Start Time Window elapses while a task is in this state, the task will also be canceled. A running task will not be canceled when the Task Start Time Window elapses.

Enable Segmentation
Enable this option to specify the segmentation size.

Segmentation Size
Specifies the maximum number of targets that will be deployed in parallel. This can be useful when a job with a very large number of targets is created.

6.3.1.9 Action after starting the "Clone" task

The target server basically behaves as follows after the Clone task is started.

Windows reference system

When restoring an image of a Windows reference system, the following steps are carried out:

1. After the task is started, the client is powered on (see the notes below).
2. The client sends a PXE request.
3. The client is booted with WinPE.
4. Restoring of the image starts.
5. After the image has been restored, the client reboots.
6. The client boots the Windows operating system, starts the Client Agent, and executes Sysprep.
7. After executing Sysprep, the client reboots.
8. Windows setup starts.
9. The client reboots after Windows setup.
10. The client boots the Windows operating system and starts the Client Agent again.
11. The client is shut down.

Linux reference system

When restoring an image of a Linux reference system, the following steps are carried out:

1. After the task is started, the client is powered on (see the notes below).
2. The client sends a PXE request.
3. The client is booted with WinPE.
4. Restoring of the image starts.
5. After the image has been restored, the client is shut down.

- When you select iRMC Support, MMB SNMP Support, MMB Remote Manager Support, or Wake on LAN Support in the Remote Management Ports step of the Deployment Configuration wizard, Deployment Manager turns on the client power automatically after the task is started. When you select Manual Management, you need to turn on the client manually.

- When you select iRMC Support, MMB SNMP Support, or MMB Remote Manager Support in the Remote Management Ports step of the Deployment Configuration wizard, PXE is automatically set to first place in the BIOS boot order when the client is started.

- When you select Wake on LAN Support or Manual Management in the Remote Management Ports step of the Deployment Configuration wizard, you need to set PXE to first place in the BIOS boot order of the client before you start the task.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step of the Deployment Configuration wizard, you need to turn off the target server before you start the deployment. You can perform the shutdown manually or by using the Force System Shutdown before Deployment option in the task you want to execute.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step of the Deployment Configuration wizard, you need to set the user name and the password for the iRMC or MMB in the Deployment Configuration wizard before starting the task.

- When you select Keep System Running in the System Status after Deployment option, the client reboots after finishing the task.

- When you execute a customer script, the client does not perform a shutdown after finishing the task. You need to perform the shutdown in your script or manually.

6.3.2 Clone with Image wizard

The Clone with Image wizard allows you to clone a single server with an image.

To open the wizard, select Clone with Image from the context menu in the All Servers group.

6.3.2.1 Task Name step (Clone with Image wizard)

Task Name is the first step in the Clone with Image wizard. In this step you can enter a task name.

Figure 62: Task Name step

Task Name
Name of the task. Any string is allowed except the characters & and ".

6.3.2.2 Deployment Server step (Clone with Image wizard)

Deployment Server is the next step in the Clone with Image wizard. In this step you can select the deployment server.

Figure 63: Deployment Server step

Deployment Server
Name of the deployment server which is used for this task.

6.3.2.3 Disk Image step (Clone with Image wizard)

Disk Image is the next step in the Clone with Image wizard. In this step you can select the disk image.



Figure 64: Disk Image step

Show compatible images only
When this option is selected, only compatible images are displayed. This is only valid for cloning single servers without a group.

Every server is identified by a D number (e.g. D2860 for PRIMERGY BX920). This D number is written to the .cfg file during image creation. An image is regarded as "compatible" with a server if the D number of the server is the same as the D number in the .cfg file of the image. Deployment Manager performs no additional compatibility checks; servers with the same D number are handled as "image compatible". The D number of each server can be obtained from the serial number (Syyyy-Dxxxx) or from the BIOS welcome message on the console during the BIOS boot phase.

PRIMEQUEST cannot use this function because PRIMEQUEST does not have a D number.
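The compatibility rule can be expressed in a few lines (hypothetical helper; Deployment Manager performs this check internally):

def d_number_from_serial(serial: str) -> str:
    # Extract the Dxxxx part from a serial number like "S1234-D2860".
    return serial.split("-", 1)[1]

def image_compatible(server_serial: str, cfg_d_number: str) -> bool:
    # An image is "compatible" when the D numbers match exactly.
    return d_number_from_serial(server_serial) == cfg_d_number

print(image_compatible("S1234-D2860", "D2860"))  # True: same model family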

Disk Images
Displays existing image repositories in the tree view on the left. The table on the right displays the images in the repository. Select the image from the repository.

Image File
Displays the selected image.

NVRAM Information Restoration
During image creation the NVRAM boot entries are saved if a UEFI PXE boot of a reference system with a GPT boot disk is done. The .cfg file contains a section NVRAM if boot entries have been saved. You can select here whether the NVRAM boot entries should be restored on the target.

Update boot entries (delete the original entries)
The boot entries from the reference system are restored on the target. The original entries on the target are deleted. You should select this option when restoring a boot disk.

Do not restore boot entries
You should select this option when restoring a data disk.

6.3.2.4 Disks step (Clone with Image wizard)

Disks is the next step in the Clone with Image wizard. In this step you can select the logical disk number.


Figure 65: Disks step<br />

Restore to Logical Disk Number<br />

Optional: Select the disk on which the image is to be deployed.<br />

If there are local disks and SAN disks connected to the system,<br />

the disk numbering depends on the driver load order (for<br />

deployment platform WinPE MDP). In most configurations with<br />

an Emulex board (e.g. in combination with an LSI SAS IME<br />

board) the SAN disks are listed before the local disks - if you<br />

accept the default value 0 for the logical disk number, the first<br />

SAN disk will be overwritten. As a workaround you could<br />

unplug the Fibre Channel cables before starting the cloning.<br />

Partitions<br />

In the first column select whether the partition should be restored.<br />

The columns in the table have the following meanings:<br />

6.3 Cloning an image<br />

<strong>Deployment</strong> <strong>Manager</strong> 221


6 Mass Cloning<br />

Column Meaning<br />

Volume Label Volume label<br />

Attributes Attributes (and Active flag)<br />

Format File System<br />

Size Size of the partition<br />

Used Percentage of used space<br />

When the system partition is restored, all partitions before the<br />

system partition (for example the ESP and MSR partitions for a<br />

GPT disk) must be restored as well.<br />

6.3.2.5 System Preparation step (Clone with Image wizard)

System Preparation is the next step in the Clone with Image wizard. In this step you can select the system preparation before image deployment.


Figure 66: System Preparation step

Unchanged
When this method is selected, no further system preparation will be done before the disk image is cloned to the target servers. This requires that a system preparation (configuration of the RAID controller) was done manually for all target servers before the image cloning is started.

All Primergy
This method allows a RAID configuration for all RAID controllers supported by Installation Manager. The following parameters are displayed:

Controller Vendor
Select the controller vendor.


6 Mass Cloning<br />

Controller Family<br />

Select the controller family.<br />

Controller Model<br />

Select the controller model.<br />

Controller Number<br />

Select the controller number.<br />

Manual Configuration<br />

You can define additional parameters:<br />

Raid Level<br />

Select the RAID level.<br />

Number of Disks<br />

Select the number of disks.<br />

Use Hot Spare<br />

A standby hard disk can be used to replace a defective hard disk.<br />

l To create a RAID array on an SX940 storage blade with a<br />

Lynx controller, select LSI/Mylex RAID Controller as the<br />

controller vendor, LSI IME SAS as the controller family,<br />

Any as the controller model, and 1 as the controller<br />

number.<br />

l To create a RAID array on an SX940 storage blade with<br />

a Cougar controller, select LSI/Mylex RAID Controller as<br />

the controller vendor, LSI MegaRAID SAS as the controller<br />

family, RAID 5/6 SAS based on LSI MegaRAID as<br />

the controller model, and 0 as the controller number.<br />

Preparation Boot Image
The following parameters are displayed:

Boot Image Path
Displays the path where the selected image is stored.

Boot Image Type
Select the boot image type: Floppy Disk Image or Bootstrap Image.

Expand last partition to whole disk
The last partition of the restored disk image will be expanded to use the rest of the whole disk.
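The effect is comparable to extending the partition manually after deployment, for example with the Windows diskpart utility (illustration only; the disk and partition numbers are examples, and Deployment Manager performs the expansion itself when this option is selected):

    diskpart
    DISKPART> select disk 0
    DISKPART> select partition 2
    DISKPART> extend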

6.3.2.6 Settings step (Clone with Image wizard)

Settings is the next step in the Clone with Image wizard. This step allows you to specify the deployment method and the system shutdown method before cloning the image.

Figure 67: Settings step

Deployment Method
Deployment method for the cloning:

Multicast
The disk image data is sent via multicast IP packets over the network. This means that a data packet is sent once to all target servers involved, and not n times (where n is the number of servers involved). This allows you to deploy a large number of servers in a short time; for example, cloning a 10 GB image to 20 servers transfers the image data over the network once instead of 20 times. You might experience problems with the multicast method if there is a router between the deployment server and a target server, since not all routers allow multicast transfer.

Unicast
The disk image data is sent via unicast IP packets over the network. This means that a data packet is sent to each server separately. When using the unicast deployment method, only 4 servers can be deployed in parallel. If your deploy job involves more servers, the deployment of the other servers is queued. The other servers will be deployed as soon as the deployment of one server has finished.

Force System Shutdown before Deployment
Deployment Manager will try to shut down the affected target systems before the deployment process is started:

- When this option is checked, the target servers will be shut down before the deployment is started.

- When this option is not checked, the target servers must already be shut down, otherwise the deployment will fail.

Depending on the selected shutdown method, a shutdown might be a hard power-off or a graceful shutdown. In the case of a hard power-off, unsaved data will be lost. This might not be a problem, since the disk will be overwritten by a new disk image. But data on a second logical disk might be lost.

Shutdown Method
Select the shutdown method:


Forced
A hard power-off of all running target servers will be initiated if these servers support remote power-off by management blade or baseboard management controller. In this case unsaved data will be lost. Normally this is not a problem, since the disk will be overwritten by a new disk image.

ACPI
Deployment Manager will try to perform an ACPI shutdown of the system. If the running operating system supports ACPI, it should save unsaved data and clean up the file systems.

Graceful (via ServerView Agent)
This shutdown method makes use of the ServerView agent functionality. ServerView agents running on the target systems are instructed to shut down the systems. This method requires that ServerView agents are installed on all systems and are configured to allow this functionality. It also requires the specification of a shutdown user name and password.

Shutdown Username
Preconfigured user name.

Shutdown Password/Repeat Shutdown Password
Preconfigured user password.

The shutdown user name and password are used for authentication of this shutdown request at the ServerView agent of the corresponding server. The ServerView Deployment Manager service remembers the last-used shutdown user name and password. These will be set as the default shutdown user name and password. The last-used shutdown user name and password are lost when the ServerView Deployment Manager service is restarted.

6.3.2.7 Post Deployment step (Clone with Image wizard)

Post Deployment is the next step in the Clone with Image wizard. This step allows you to specify options which are used after cloning.

Figure 68: Post Deployment step

Customer Script
Specifies a customer script that is executed on the cloned system at the end of the deployment process. You can use this to modify and configure the cloned system for your requirements. In order to specify a customer script, you must specify its path name as follows:

- For Windows target systems:
The path name should be the network path name of the script in UNC notation. This means that you must use a network share where you store the script. You can also specify optional parameters for this script.

Example:
\\server\share\path\script_name [parameters]

- For Linux target systems:
The path name has the following format:
\\server\path\script_name [parameters]
//server/path/script_name [parameters]

server: FTP server name or the IP address of the FTP server.
path: Path to the script relative to the root of the FTP server.

If you select the Customer Script option, the System Status after Deployment option is ignored. If you want to reboot after executing the customer script, you must add a reboot command at the end of the script.
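As a minimal sketch, a Windows customer script that performs one configuration step and then reboots might look as follows; the share path, variable and script name are invented for this example:

    rem Hypothetical customer script \\server\share\path\postclone.cmd
    rem (illustration only - adapt it to your own requirements).
    rem Example configuration step: record the deployment site in a
    rem machine-wide environment variable.
    setx DEPLOY_SITE "DC1" /M
    rem Reboot explicitly, because System Status after Deployment is
    rem ignored when a customer script is configured.
    shutdown /r /t 0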

Windows Domain User
If you deploy a Windows image (not a snapshot image), you can specify a Windows domain.

In order to use the domain join functionality, the program netdom.exe from the ServerView Suite DVD 1 or a Windows installation CD (support directory SUPPORT\TOOLS\SUPPORT.CAB) must be copied to the following directories:

Windows 2003 and Windows 2008:
C:\Program Files\Fujitsu\DeploymentService\PMKit\Win2003\boot\ipadj

Windows 2003 X64, Windows 2008 X64 and Windows 2012:
C:\Program Files\Fujitsu\DeploymentService\PMKit\Win2003x64\boot\ipadj

The ServerView Suite DVD 1 contains the directory RD_FILES, which contains the corresponding files for Windows 2003/2008/2012 (32-bit version only). During installation of Deployment Manager you can optionally copy the file to the deployment server. In this case you are asked to specify the location of the file.

The domain join functionality only works with images that were generated with RemoteDeploy as of version 3.30, and only if the file was copied to the correct directory before generating the image. The file must be part of the image in order to work correctly. If the file is available on the deployment server, it is automatically copied to the image during image generation. If the file is not part of the image, the deployment job will time out and you will see more information on the deployed target.

Domain Name
If this field is not empty, the post-preparation process running on the cloned Windows systems will try to join the specified Windows domain.

User Name
Specifies an account to be used for the domain join operation. This must be an account that has the right to perform a domain join on the domain server. By default this is a domain administrator account.

Password / Repeat Password
Password/Repeat password for the account.
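For reference, netdom.exe performs a domain join with a command along the following lines (illustration only; the domain and account names are invented, and Deployment Manager invokes the tool itself with the values entered in this step):

    netdom join %COMPUTERNAME% /Domain:example.local /UserD:EXAMPLE\joinadmin /PasswordD:*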

Windows 2008 and Windows 2012 Volume Activation
If you deploy a Windows 2008 or Windows 2012 image created from a volume installation, the cloned image can be automatically activated after the cloning process:

Activation Method
Select the activation method: MAK (Multiple Activation Key) or KMS (Key Management Service).

Internet Proxy
Optional: Specify a proxy server (IP address or name) to connect to the Internet. If you want to specify a proxy server by using a hostname, you need to select the is a hostname option, too. You can also enter a port number.

KMS Server
If you select the activation method KMS (Key Management Service), you must specify the IP address or the name of the KMS server. Optionally, you can enter the KMS port number.

Key
If you select the activation method MAK (Multiple Activation Key), enter the activation key.
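Both methods correspond to the standard Windows volume activation mechanisms. Performed manually on a target, they would look roughly as follows (illustration only; the key and server name are placeholders, and Deployment Manager applies the selected settings automatically):

    rem MAK activation (the key is a placeholder):
    cscript %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    cscript %windir%\system32\slmgr.vbs /ato

    rem KMS activation against a KMS server (default port 1688):
    cscript %windir%\system32\slmgr.vbs /skms kms.example.local:1688
    cscript %windir%\system32\slmgr.vbs /ato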

Set hostname in /etc/hosts
If you deploy a Linux Red Hat image, the line containing 127.0.0.1 in /etc/hosts is overwritten after cloning:

For Do not set: 127.0.0.1 localhost.localdomain localhost
For Set: 127.0.0.1 <hostname>
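For example, assuming a cloned host named node01 (an invented name), the resulting line would read:

    127.0.0.1 node01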

System Status after Deployment
System status of the target systems after the cloning:

Shut down System
The target servers involved will be shut down at the end of the process.

Keep System Running
The target servers involved will be kept running at the end of the process.

6.3.2.8 Bios Boot Type step (Clone with Image wizard)

Bios Boot Type is the next step in the Clone with Image wizard.

This step is only displayed when iRMC Support has been chosen as the method for remote management in Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.

- The selection here is ignored if the target server does not have a UEFI BIOS.

- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value that is specified here must match the manual configuration in the BIOS.


Figure 69: Bios Boot Type step

"PC compatible" Boot (legacy)
Initiate a Legacy PXE boot at the target server.

Extensible Firmware Interface Boot (EFI)
Initiate a UEFI PXE boot at the target server.

6.3.2.9 Scheduling step (Clone with Image wizard)

Scheduling is the final step in the Clone with Image wizard. This step allows you to clone the image as a scheduled task.


6 Mass Cloning<br />

Figure 70: Scheduling step<br />

Time Unit to perform this Task (only for later option)<br />

Specifies how often you want the task to be performed: Once, Daily,<br />

Weekly, Monthly. Additional settings are shown depending on your selection.<br />

Select Date and Time to start (only for later option)<br />

Specifies the start date (now or later) when you want the task to be performed.<br />

If you select later, you can also specify the start time for the<br />

task.<br />

Perform this Task (only for Daily)<br />

Selects whether the task should be performed Every Day, Weekdays or<br />

Every x days.<br />

234 <strong>Deployment</strong> <strong>Manager</strong>


Select days of the week you want this task to start (only for Weekly)
Specifies that the task should be executed every week or every n weeks (n = 1..52). You can also select the days of the week.

Select the day and the month you want this task to start (only for Monthly)
Specifies for which months of the year the task should be executed. You can also specify on which day of the month a task should be executed. And you can specify a weekday and whether the task should be executed in the first, second, third, fourth or last week of a month.

Retry Count
Specifies the number of retries for this task if the job fails. A value between 0 and 10 can be specified.

Retry Interval (minutes) (only for later option)
The time (in minutes) before the next attempt if previous attempts at a task failed and the number of retries does not exceed the retry count. A value between 1 and 360 minutes can be specified.

Task Start Time Window (minutes) (only for later option)
This value defines the maximum period (in minutes) for retries of a task. Outside this period no retries will be performed, even if not all the retries specified by the retry count have been used.

It can happen that a task is handed over to the deployment server but is only queued, because too many other tasks are already running. If the Task Start Time Window elapses while a task is in this state, the task will also be canceled. A running task will not be canceled when the Task Start Time Window elapses.
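Taken together, Retry Count and Retry Interval describe a bounded retry loop, limited in time by the Task Start Time Window. As a rough sketch of these semantics in batch form (illustration only, not Deployment Manager code; run_task.cmd is a made-up placeholder), with a retry count of 3 and a retry interval of 10 minutes:

    rem Sketch of the retry semantics (illustration only).
    set RETRIES=3
    :retry
    call run_task.cmd && goto done
    set /a RETRIES-=1
    if %RETRIES% LEQ 0 goto failed
    rem Wait one retry interval (600 seconds = 10 minutes) before the
    rem next attempt; the real scheduler also stops retrying once the
    rem Task Start Time Window has elapsed.
    timeout /t 600 >nul
    goto retry
    :failed
    echo Task failed after all retries.
    :done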

Enable Segmentation
Enable this option to specify the segmentation size.

Segmentation Size
Specifies the maximum number of targets that will be deployed in parallel. This might be useful when a job with a huge number of targets is created.


6 Mass Cloning<br />

6.3.2.10 Action after starting the "Clone with Image" task<br />

Basically, the target server behaves as follows after starting the Clone with<br />

Image wizard.<br />

Windows reference system<br />

When restoring an image of Windows Reference system, the following steps<br />

are done:<br />

1. After starting the task, power on the client. (Please see “Notes”.)<br />

2. The client is sending a PXE request.<br />

3. The client is booted with WinPE.<br />

4. Start restoring an image.<br />

5. After restoring an image, reboot the client.<br />

6. The client boots the Windows operating system, starts the Client<br />

Agent, and executes Sysprep.<br />

7. After executing Sysprep, the client reboots.<br />

8. Start Windows setup<br />

9. The client reboots after Windows setup<br />

10. The client boots the Windows operating system and starts Client<br />

Agent again.<br />

11. The client is shutdown.<br />

Linux reference system

When restoring an image of a Linux reference system, the following steps are performed:

1. After the task has been started, the client is powered on (see the notes below).
2. The client sends a PXE request.
3. The client is booted with WinPE.
4. Restoring the image starts.
5. After the image has been restored, the client is shut down.


- When you select iRMC Support, MMB SNMP Support, MMB Remote Manager Support, or Wake on LAN Support in the Remote Management Ports step in the Deployment Configuration wizard, Deployment Manager turns on the client power automatically after starting the task. When you select Manual Management, you need to turn on the client manually.

- When you select iRMC Support, MMB SNMP Support, or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, PXE is automatically set to first place in the BIOS boot order when the client is started.

- When you select Wake on LAN Support or Manual Management in the Remote Management Ports step in the Deployment Configuration wizard, you need to set PXE to first place in the BIOS boot order of the client before you start the task.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to turn off a target server before you start the deployment. You can perform the shutdown manually or by using the Force System Shutdown before Deployment option in the task you want to execute.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to set the user name and the password for iRMC or MMB in the Deployment Configuration wizard before starting the task.

- When you select Keep System Running in the System Status after Deployment option, the client reboots after finishing the task.

- When you execute a customer script, the client does not perform a shutdown after finishing the task. You need to perform the shutdown in your script or manually.

6.4 Cloning/Installing Baseboard Management Controllers (BMCs)

Baseboard Management Controllers (BMCs) can be added to the ServerView server list via the server browser. A special BMC icon shows the BMC (only Fujitsu BMCs) in the ServerView server list. The icon of a BMC in the server list reflects the status of the Global Error LED.

RemoteDeploy version 4.1 supports the cloning and installation of BMCs. To access a BMC, Deployment Manager requires a valid user ID with the following privileges:

- Getting and setting the power status
- Getting and setting the boot order

The admin account on all current PRIMERGY servers has the required privileges.

The BMC/iRMC settings cannot be set by using this function. The function can only clone/install the operating system to BMCs on the server list as target servers.

Cloning and installing older PRIMERGY servers

The steps below are necessary if you want to clone or install one of the following older PRIMERGY servers:

- RX200/TX200
- RX220
- RX300/TX300
- RX100 S2
- RX200 S2/TX200 S2
- RX300 S2/TX300 S2
- RX100 S3
- TX150 S2
- TX150 S3
- TX150 S4

In Japan, the PRIMERGY servers listed above are not supported.

1. Start Operations Manager (see the Operations Manager user guide).
2. Open the User Password Settings window by selecting Users/Passwords from the Administration menu.
3. Delete the admin account.
4. Add a new account with the required privileges.
5. Add the admin account again.

It is important that the privileged account is first in the search order. You can change the account for each server later in the Deployment Manager front-end via the Deployment Configuration wizard.

6.4.1 Displaying BMCs

Displaying BMCs in Operations Manager

The following window shows an example of BMC entries displayed in the Operations Manager ServerList window.

Figure 71: BMC in Operations Manager


6 Mass Cloning<br />

Displaying BMCs in <strong>Deployment</strong> <strong>Manager</strong><br />

The <strong>Deployment</strong> <strong>Manager</strong> database is periodically updated from the Server-<br />

View database. The BMC entries are displayed in the Servers view of the<br />

<strong>Deployment</strong> <strong>Manager</strong> front-end:<br />

Figure 72: BMC in <strong>Deployment</strong> <strong>Manager</strong><br />

The host names generated for the BMCs are based on the last 6 bytes of the<br />

MAC address.<br />

BMCs can be added to deployment groups or installation groups.<br />

6.4.2 Changing the deployment configuration

You can change the deployment configuration of a BMC in the Deployment Manager front-end:

- Select a BMC in the All Servers view and choose Deployment Configuration from the context menu. The Deployment Configuration wizard opens.

You can change the following settings:

- In the General Settings step (first step) you should change the generated server name.

- In the LAN Ports step (second step) DHCP is selected by default. You can enter a static IP address to replace the default deployment configuration. Click Add to add deployment configurations for add-on PCI boards.

- The BMC Account Settings window (fifth window) displays the BMC configuration. You can change the user name and the password. Choose an account with the appropriate privileges.


A bare server entry will be added to the ServerView database. This entry takes the values from the BMC entry with the following changes:

- The server name and system name are set to the generated server name or to the server name entered in the General Settings window.

- The network address is set to "0.0.0.0".

The GUID (UUID) is deleted in the ServerView database, but saved in the Deployment Manager database. After successful cloning or installation of the BMC, the GUID is set automatically in the ServerView database.

In the ServerView database the BMC entry and the bare server entry exist in parallel, but in the ServerView server list only the BMC entry is displayed.

In the Deployment Manager database, only the bare server entry exists. During the periodic update of the database, the ServerView Deployment Manager service matches the BMC entry from the ServerView database to the bare server entry. That is, the Deployment Manager server list displays a bare server entry with the values of the BMC entry.

Cloning/Installing a BMC without Changing the Deployment Configuration

BMCs can be cloned or installed immediately without changing the deployment configuration. A default deployment configuration exists for every BMC:

- It includes only the onboard interfaces (reported by the BMC).
- DHCP is set for all interfaces.
- The target system name is set to the generated server name.
- The DNS suffix entry is empty.

Once the cloning or installation process is started, a bare server with the generated server name is created. You cannot change the generated name later.


6 Mass Cloning<br />

6.4.3 Actions after cloning/installation<br />

The following actions are performed after successful cloning or installation of<br />

a BMC:<br />

l The <strong>ServerView</strong> <strong>Deployment</strong> <strong>Manager</strong> service changes the Bare-<br />

Server to type Server in the <strong>ServerView</strong> database. The following<br />

fields are also changed:<br />

o The UUID is set to the value from the corresponding BMC.<br />

o The FullQualifiedName is set. If a DNS suffix is specified in<br />

the <strong>Deployment</strong> Configuration wizard, the suffix is appended<br />

to the ServerName. Otherwise the FullQualifiedName is<br />

set to ServerName.<br />

l The <strong>Deployment</strong> <strong>Manager</strong> front-end immediately displays a server<br />

instead of a bare server in the Servers view.<br />

l Operations <strong>Manager</strong> initially displays a BMC entry and a server entry<br />

in parallel.<br />

l After cloning/installing, the server is booted with the installed Server-<br />

View agents. The <strong>ServerView</strong> Services service<br />

o contacts the DNS server with the given FullQualifiedName to<br />

get the network address of the server.<br />

o uses the network address to contact the <strong>ServerView</strong> agent on<br />

the server.<br />

o determines that the UUID delivered by the <strong>ServerView</strong> agent<br />

on the server is identical to the UUID of the BMC entry. The<br />

BMC entry is deleted from the <strong>ServerView</strong> database.<br />

l Finally, Operations <strong>Manager</strong> displays only the server entry.<br />

242 <strong>Deployment</strong> <strong>Manager</strong>


7 Mass Installation<br />

The Mass Installation work area allows you to install any number of target<br />

servers in parallel.<br />

If the target server has an UEFI BIOS, each LAN port must be manually<br />

configured for either Legacy PXE or UEFI PXE boot. If you<br />

configure UEFI PXE boot, the target server will be installed with a<br />

GPT boot disk, otherwise with an MBR boot disk.<br />

7.1 Installation Groups

Before you create a new installation group you need to consider which servers should get which configuration file. You can build installation groups consisting of one or more servers (every server can be assigned a different configuration file - there is no configuration file assigned to the whole group). Installation groups with several servers help to reduce the workload of an administrator by organizing servers in one logical group. All servers in an installation group can be installed simultaneously.


7 Mass Installation<br />

l If you assign configuration files to servers, make sure that<br />

these files are applicable for these servers. For example, an<br />

installation configuration file for a specific RAID controller<br />

requires that the installation target server contains this RAID<br />

controller.<br />

l The operating system which can be installed by Mass Installation<br />

is dependent on <strong>ServerView</strong> Installation <strong>Manager</strong>. Please<br />

see the manual of "<strong>ServerView</strong> Installation <strong>Manager</strong>" for<br />

details.<br />

l You need to create the installation configuration file from Server-<br />

View Installation <strong>Manager</strong> beforehand. For details, please refer<br />

to the manual for <strong>ServerView</strong> Installation <strong>Manager</strong>.<br />

l This functionality depends on <strong>ServerView</strong> Installation <strong>Manager</strong>.<br />

Therefore, you must meet the software requirements of Server-<br />

View Installation <strong>Manager</strong>. For example, when you want to<br />

install a Linux system, you need to configure a FTP, HTTP or<br />

NFS server. For details, please refer to the <strong>ServerView</strong> Installation<br />

<strong>Manager</strong> user guide.<br />

You can add, copy, edit or delete installation groups.<br />

7.1.1 Adding an installation group

The Add Installation Group wizard allows you to create installation groups. To open the wizard, select Add Installation Group from the context menu in the Servers by Installation group.

7.1.1.1 Group Name step (Add Installation Group wizard)

Group Name is the first step in the Add Installation Group wizard. In this step you can enter a group name.


Figure 73: Group Name step

Group Name
Name of the group.

Description
Optional: Description of the group.

7.1.1.2 Group Members step (Add Installation Group wizard)

Group Members is the next step in the Add Installation Group wizard. In this step you can select the servers for the group.


7 Mass Installation<br />

Figure 74: Group Members step<br />

><br />

<<br />

>><br />

Copies the servers selected from the Available Servers list to the Selected<br />

Servers list.<br />

Copies the servers selected from the Selected Servers list back to the<br />

Available Servers list.<br />

Copies all servers from the Available Servers list to the Selected<br />

Servers list.<br />

246 <strong>Deployment</strong> <strong>Manager</strong>


7 Mass Installation<br />

Assign >><br />

Assigns the configuration file to the selected servers in the adjacent list.<br />

Assign to all<br />

Assigns the configuration file to all servers in the adjacent list.<br />

7.1.2 Copying an installation group

The Copy Installation Group wizard allows you to copy an existing installation group which is to be used as a template. You can modify the settings to your own needs. To open the wizard, select Copy Installation Group from the context menu in the Servers by Installation group.

7.1.2.1 Group Name step (Copy Installation Group wizard)

Group Name is the first step in the Copy Installation Group wizard. In this step you can enter a group name.


Figure 76: Group Name step

Group Name
Name of the group.

Description
Optional: Description of the group.

7.1.2.2 Group Members step (Copy Installation Group wizard)

Group Members is the next step in the Copy Installation Group wizard. In this step you can select the servers for the group.


7 Mass Installation<br />

Figure 77: Group Members step<br />

Show compatible servers only<br />

The Available Servers list displays all servers which are compatible.<br />

>><br />

><br />


Copies the servers selected from the Selected Servers list back to the<br />

Available Servers list.<br />

7.1.2.3 Installation Configuration step (Copy Installation Group wizard)

Installation Configuration is the final step in the Copy Installation Group wizard. This step allows you to select a configuration file from the repository which can later be supplied to one or more members of the group.

Figure 78: Installation Configuration step

Installation Configuration
Selects a configuration file from the Installation Configuration repository.


7 Mass Installation<br />

Assign >><br />

Assigns the configuration file to the selected servers in the adjacent list.<br />

Assign to all<br />

Assigns the configuration file to all servers in the adjacent list.<br />

7.1.3 Editing an installation group<br />

The Edit Installation Group wizard allows you to edit an existing installation<br />

group.<br />

To open the wizard, select Edit Installation Group from the context menu in<br />

the Servers by Installation group.<br />

7.1.3.1 Group Name step (Edit Installation Group wizard)

Group Name is the first step in the Edit Installation Group wizard. In this step you can enter a group name.

Figure 79: Group Name step

Group Name
Name of the group.

Description
Optional: Description of the group.

7.1.3.2 Group Members step (Edit Installation Group wizard)

Group Members is the next step in the Edit Installation Group wizard. In this step you can select the servers for the group.


7 Mass Installation<br />

Figure 80: Group Members step<br />

Show compatible servers only<br />

The Available Servers list displays all servers which are compatible.<br />

>><br />

><br />


Copies the servers selected from the Selected Servers list back to the<br />

Available Servers list.<br />

7.1.3.3 Installation Configuration step (Edit Installation Group wizard)

Installation Configuration is the final step in the Edit Installation Group wizard. This step allows you to select a configuration file from the repository which can later be supplied to all members of the group.

Figure 81: Installation Configuration step

Installation Configuration
Selects a configuration file from the Installation Configuration repository.


7 Mass Installation<br />

Assign >><br />

Assigns the configuration file to the selected servers in the adjacent list.<br />

Assign to all<br />

Assigns the configuration file to all servers in the adjacent list.<br />

7.1.4 Deleting an installation group<br />

You can delete installation groups:<br />

1. Select the relevant installation group in the Servers by Installation<br />

group.<br />

2. Select Delete Installation Group from the context menu.<br />

7.2 Installing servers

Before you start the installation process you need to consider which servers should be installed. You can build installation groups consisting of one or more servers (every server can be assigned a different configuration file). It is also possible to install servers without specifying an installation group - you select a server from the list and then select Install from the context menu.

- You need to create the installation configuration file with ServerView Installation Manager beforehand. For details, please refer to the ServerView Installation Manager user guide.

- This functionality depends on ServerView Installation Manager. Therefore, you must meet the software requirements of ServerView Installation Manager. For example, when you want to install a Linux system, you need to configure an FTP, HTTP or NFS server. For details, please refer to the ServerView Installation Manager user guide.


7.2.1 Install wizard

The Install wizard allows you to install servers. To open the wizard, select Install from the context menu in the Servers view.

7.2.1.1 Task Name step (Install wizard)

Task Name is the first step in the Install wizard. This step allows you to enter a task name.

Figure 82: Task Name step

Task Name
Name of the task. Any string except the characters & and " is allowed.


7 Mass Installation<br />

7.2.1.2 Installation Configuration step (Install wizard)<br />

Installation Configuration is the next step in the Install wizard. This step is<br />

only available, if you start the Install wizard by selecting one or more<br />

servers.This step allows you to select a configuration file from the repository<br />

which can later be supplied to the selected servers.<br />

Figure 83: Installation Configuration step<br />

Installation Configuration<br />

Select a configuration file from the Installation Configuration repository.<br />

7.2.1.3 Settings step (Install wizard)

Settings is the next step in the Install wizard. It is the third step if you start the wizard by selecting an installation group.

Figure 84: Settings step

Assign to ServerStart/Installation Manager data
Location of the Installation Manager data. Use the default settings or specify a new location and define a user name and password for remote access.

Content Tree (UNC)
Shared network directory (UNC network path).

Username Remote Access
User name for remote access.

Password / Repeat Password
Password for remote access.


7 Mass Installation<br />

Force System Shutdown before <strong>Deployment</strong><br />

<strong>Deployment</strong> <strong>Manager</strong> will try to shut down the affected target systems<br />

before the deployment process is started:<br />

l When this option is checked, the target servers will be shut down<br />

before the deployment is started.<br />

l When this option is not checked, the target servers must be shut<br />

down, otherwise the deployment will fail.<br />

Depending on the selected shutdown method, a shutdown<br />

might be a hard power-off or a graceful shutdown. In case of a<br />

hard power-off, unsaved data will be lost. This might not be a<br />

problem, since the disk will be overwritten by a new installation.<br />

But data on a second logical disk might be lost.<br />

Shutdown Method<br />

Select the shutdown method:<br />

Forced A hard power-off of all running target servers will be initiated<br />

if these servers support remote power-off by management<br />

blade or baseboard management controller. In<br />

this case unsaved data will be lost. Normally this is not a<br />

problem, since the disk will be overwritten by a new installation.<br />

ACPI <strong>Deployment</strong> <strong>Manager</strong> will try to perform an ACPI shutdown<br />

of the system. If the running operating system supports<br />

ACPI, it should save unsaved data and clean up the<br />

file systems.<br />

Graceful<br />

(via Server-<br />

View<br />

Agent)<br />

This shutdown method makes use of the <strong>ServerView</strong><br />

agent functionality. <strong>ServerView</strong> agents running on the target<br />

systems are instructed to shut down the systems.<br />

This method requires that <strong>ServerView</strong> agents are installed<br />

on all systems and are configured to allow this functionality.<br />

It also requires the specification of a shutdown<br />

user name and password.<br />

260 <strong>Deployment</strong> <strong>Manager</strong>


System Status after Deployment
System status of the target systems after the installation:

Shut down System
The target servers involved will be shut down at the end of the process.

Keep System Running
The target servers involved will be kept running at the end of the process.

This option is currently ignored in Mass Installation. After finishing Mass Installation, the target server will be kept running.

7.2.1.4 Bios Boot Type step (Install wizard)

Bios Boot Type is the next step in the Install wizard.

This step is only displayed when iRMC Support has been chosen as the method for remote management in Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.

- The selection here is ignored if the target server does not have a UEFI BIOS.

- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value that is specified here must match the manual configuration in the BIOS.


7 Mass Installation<br />

Figure 85: Bios Boot Type step<br />

"PC compatible" Boot (legacy)<br />

Initiate a Legacy PXE boot at the target server.<br />

Extensible Firmware Interface Boot (EFI)<br />

Initiate an UEFI PXE boot at the target server.<br />

7.2.1.5 Scheduling step (Install wizard)<br />

Scheduling is the final step in the Install wizard. In this step you can create<br />

the install task as a scheduled task.<br />

262 <strong>Deployment</strong> <strong>Manager</strong>


Figure 86: Scheduling step

Time Unit to perform this Task (only for later option)
Specifies how often you want the task to be performed: Once, Daily, Weekly, Monthly. Additional settings are shown depending on your selection.

Select Date and Time to start (only for later option)
Specifies the start date (now or later) when you want the task to be performed. If you select later, you can also specify the start time for the task.

Perform this Task (only for Daily)
Selects whether the task should be performed Every Day, Weekdays or Every x days.


7 Mass Installation<br />

Select days of the week you want this task to start (only for Weekly)<br />

Specifies that the task should be executed every week or every n week<br />

(n = 1 ..52). You can also select the days of the week.<br />

Select the day and the month you want this task to start (only for<br />

Monthly)<br />

Specifies for which months of the year the task should be executed. You<br />

can also specify on which day of the month a task should be executed.<br />

And you can specify a weekday and whether the task should be executed<br />

in the first, second, third, fourth or last week of a month.<br />

Retry Count<br />

Specifies the number of retries for this task if the job fails. A value<br />

between 0 and 10 can be specified.<br />

Retry Interval (minutes) (only for later option)<br />

The time (in minutes) before the next attempt if previous attempts at a<br />

task failed and the number of retries does not exceed the retry counter. A<br />

value between 1 and 360 minutes can be specified.<br />

Task Start Time Window (minutes) (only for later option)<br />

This value defines the maximum period (in minutes) for retries of a task.<br />

Outside this period no retries will be performed, even if all the retries specified<br />

by the retry counter have not been used.<br />

It can happen that a task is handed over to the deployment server but is<br />

only queued, because too many other tasks are already running. If the<br />

Task Start Time Window elapses while a task is in this state, the task<br />

will also be canceled. A running task will not be canceled when the Task<br />

Start Time Window elapses.<br />

Enable Segmentation<br />

Enable the option to specify the segmentation size.<br />

Segmentation Size<br />

Specifies the maximum number of targets that will be deployed in parallel.<br />

This might be useful when a job with a huge number of targets is created.<br />

264 <strong>Deployment</strong> <strong>Manager</strong>


7.2.1.6 Actions after starting the "Install" task

The behavior is the same as in ServerView Installation Manager.

- When installing a Windows operating system, ServerView Deployment Manager displays the installation status as Success after the necessary processing in WinPE has finished. However, the installation on the target server continues for a while.

- When you select iRMC Support, MMB SNMP Support, MMB Remote Manager Support, or Wake on LAN Support in the Remote Management Ports step in the Deployment Configuration wizard, Deployment Manager turns on the client power automatically after starting the task. When you select Manual Management, you need to turn on the client manually.

- When you select iRMC Support, MMB SNMP Support, or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, PXE is automatically set to first place in the BIOS boot order when the client is started.

- When you select Wake on LAN Support or Manual Management in the Remote Management Ports step in the Deployment Configuration wizard, you need to set PXE to first place in the BIOS boot order of the client before you start the task.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to turn off a target server before you start the deployment. You can perform the shutdown manually or by using the Force System Shutdown before Deployment option in the task you want to execute.

- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to set the user name and the password for iRMC or MMB in the Deployment Configuration wizard before starting the task.

- When you select Keep System Running in the System Status after Deployment option, the client reboots after finishing the task.


7.3 Boot Groups

7.3.1 Adding a boot group

The Add Boot Group wizard allows you to create boot groups. To open the wizard, select Add Boot Group from the context menu in the Servers by Boot Image group.

7.3.1.1 Group Name step (Add Boot Group wizard)

Group Name is the first step in the Add Boot Group wizard. In this step you must enter a group name.

Figure 87: Group Name step


7 Mass Installation<br />

Group Name<br />

Name of the group.<br />

Description<br />

Optional: Description of the group.<br />

7.3.1.2 Boot Image step (Add Boot Group wizard)<br />

Boot Image step is the next step in the Add Boot Group wizard. In this<br />

step you can choose an image from the list of registered boot images.<br />

Figure 88: Boot Image step<br />

Select an image from the list of registered boot images.<br />

268 <strong>Deployment</strong> <strong>Manager</strong>


7.3.1.3 Group Members step (Add Boot Group wizard)

Group Members is the final step in the Add Boot Group wizard. In this step you can select the servers for the group.

Figure 89: Group Members step

>
Copies the servers selected from the Available Servers list to the Selected Servers list.

<
Copies the servers selected from the Selected Servers list back to the Available Servers list.

>>
Copies all servers from the Available Servers list to the Selected Servers list.

7.3.2 Copying a boot group

The Copy Boot Group wizard allows you to copy an existing boot group which is to be used as a template. You can modify the settings to your own needs. To open the wizard, select Copy Boot Group from the context menu in the Servers by Boot Image group.

7.3.2.1 Group Name step (Copy Boot Group wizard)

Group Name is the first step in the Copy Boot Group wizard. In this step you can enter a group name.

Figure 90: Group Name step

Group Name
Name of the group.

Description
Optional: Description of the group.

7.3.2.2 Boot Image step (Copy Boot Group wizard)

Boot Image is the next step in the Copy Boot Group wizard. In this step you can choose an image from the list of registered boot images.

Figure 91: Boot Image step

Select an image from the list of registered boot images.

7.3.2.3 Group Members step (Copy Boot Group wizard)

Group Members is the final step in the Copy Boot Group wizard. In this step you can select the servers for the group.

Figure 92: Group Members step

>
Copies the servers selected from the Available Servers list to the Selected Servers list.

<
Copies the servers selected from the Selected Servers list back to the Available Servers list.

>>
Copies all servers from the Available Servers list to the Selected Servers list.


7 Mass Installation<br />


Group Name<br />

Name of the group.<br />

Description<br />

Optional: Description of the group.<br />

7.3.3.2 Boot Image step (Edit Boot Group wizard)

Boot Image is the next step in the Edit Boot Group wizard. In this step you can choose the image file for the deployment.

Figure 94: Boot Image step

Select an image from the list of registered boot images.


7 Mass Installation<br />

Unregister all clients for the previously selected image at the PXE service<br />

If you select another boot image, you can activate this option. This<br />

applies only to servers in Generic Boot tasks that have been started with<br />

selection Permanent for Registration at PXE Service (see "Settings<br />

step (Generic Boot wizard)" on page 288).<br />

7.3.3.3 Group Members step (Edit Boot Group wizard)<br />

Group Members is the final step in the Edit Boot Group wizard. In this<br />

step you can select the servers for the group.<br />

Figure 95: Group Members step<br />

Show compatible servers only<br />

The Available Servers list displays all servers which are compatible.<br />

276 <strong>Deployment</strong> <strong>Manager</strong>


>
Copies the servers selected from the Available Servers list to the Selected Servers list.

<
Copies the servers selected from the Selected Servers list back to the Available Servers list.

>>
Copies all servers from the Available Servers list to the Selected Servers list.

7.4 MDP booting of servers

Before you can perform an MDP boot for one or more servers, you must create a boot group that has an image of type SVIM Agent Image (MDP) assigned. With MDP boot, you can execute an MDP application on one or more servers of a group.

The boot group can be created from the built-in images SVIM32 or SVIM64. In most cases it is not necessary to add and register a boot image of the type SVIM Agent Image (MDP).

Before calling the MDP Boot wizard, you must add an MDP Applications repository and manually copy your MDP application to a subdirectory in the MDP Applications repository.

For details on MDP (Multi Deployment Platform) refer to the ServerView Installation Manager user guide.

7.4.1 MDP Boot wizard

The MDP Boot wizard allows you to execute an MDP application on one or more servers of a boot group. To open the wizard, select MDP Boot from the context menu in the Servers by Boot Image group.

MDP Boot is only offered in the context menu of a boot group if the group has an image of type SVIM Agent Image (MDP) assigned.

7.4.1.1 Task Name step (MDP Boot wizard)

Task Name is the first step in the MDP Boot wizard. In this step you must enter a task name.


Figure 96: Task Name step

Task Name
Name of the task. Any string except the characters & and " is allowed.

7.4.1.2 MDP Application step (MDP Boot wizard)

MDP Application is the next step in the MDP Boot wizard. In this step you can select an MDP application.


7 Mass Installation<br />

Figure 97: MDP Application step<br />

Select an MDP application in the tree view.<br />

7.4.1.3 Settings step (MDP Boot wizard)<br />

Settings is the next step in the MDP Boot wizard. This step allows you to<br />

specify the system shutdown method before starting the MDP boot and the<br />

SVIM agent status after deployment.<br />

280 <strong>Deployment</strong> <strong>Manager</strong>


Figure 98: Settings step

Force System Shutdown before Deployment
Deployment Manager will try to shut down the affected target systems before the deployment process is started:

- When this option is checked, the target servers will be shut down before the deployment is started.
- When this option is not checked, the target servers must already be shut down; otherwise the deployment will fail.

Depending on the selected shutdown method, a shutdown might be a hard power-off or a graceful shutdown. In the case of a hard power-off, unsaved data will be lost.

Shutdown Method
Select the shutdown method:


Forced
A hard power-off of all running target servers will be initiated if these servers support remote power-off by management blade or baseboard management controller. In this case unsaved data will be lost.

ACPI
Deployment Manager will try to perform an ACPI shutdown of the system. If the running operating system supports ACPI, it should save unsaved data and clean up the file systems.

Graceful (via ServerView Agent)
This shutdown method makes use of the ServerView agent functionality. ServerView agents running on the target systems are instructed to shut down the systems. This method requires that ServerView agents are installed on all systems and are configured to allow this functionality. It also requires the specification of a shutdown user name and password.

Shutdown Username
Preconfigured user name.

Shutdown Password/Repeat Shutdown Password
Preconfigured user password.

The shutdown user name and password are used for authentication of this shutdown request at the ServerView agent of the corresponding server. The ServerView Deployment Manager service remembers the last-used shutdown user name and password. These will be set as the default shutdown user name and password. The last-used shutdown user name and password are lost when the ServerView Deployment Manager service is restarted.
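The following sketch summarizes how the three shutdown methods differ; the power and agent calls are placeholders standing in for the real interfaces, not Deployment Manager APIs.

    from enum import Enum

    class ShutdownMethod(Enum):
        FORCED = "Forced"        # hard power-off via management blade/BMC
        ACPI = "ACPI"            # ACPI shutdown performed by the running OS
        GRACEFUL = "Graceful"    # via ServerView agent, needs credentials

    # Placeholder actions standing in for the real power and agent interfaces.
    def bmc_power_off(server): print(server + ": hard power-off (unsaved data is lost)")
    def send_acpi_shutdown(server): print(server + ": ACPI shutdown requested")
    def agent_shutdown(server, user): print(server + ": agent shutdown as " + user)

    def shut_down(server, method, user="", password=""):
        if method is ShutdownMethod.FORCED:
            bmc_power_off(server)
        elif method is ShutdownMethod.ACPI:
            send_acpi_shutdown(server)
        else:
            # The Graceful method fails without the preconfigured credentials.
            if not (user and password):
                raise ValueError("Graceful shutdown needs a shutdown user name and password")
            agent_shutdown(server, user)

    shut_down("blade-07", ShutdownMethod.ACPI)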

SVIM Agent Status after Deployment
SVIM Agent status of the target systems after the execution of the MDP application:



Disabled
The SVIM Agent will return control to WinPE.

Agent PowerOff
The SVIM Agent will shut down the target.

Agent Reboot
The SVIM Agent will reboot the target.

Agent Idle
The SVIM Agent will keep running in idle mode.

7.4.1.4 Bios Boot Type step (MDP Boot wizard)

Bios Boot Type is the next step in the MDP Boot wizard.


This step is only displayed when iRMC Support has been chosen as the method for remote management in Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.

- The selection here is ignored if the target server does not have a UEFI BIOS.
- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value that is specified here must match the manual configuration in the BIOS.


Figure 99: Bios Boot Type step

"PC compatible" Boot (legacy)
Initiate a Legacy PXE boot at the target server.

Extensible Firmware Interface Boot (EFI)
Initiate a UEFI PXE boot at the target server.

7.4.1.5 Scheduling step (MDP Boot wizard)

Scheduling is the final step in the MDP Boot wizard. In this step you can create the MDP Boot task as a scheduled task.



Figure 100: Scheduling step


Time Unit to perform this Task (only for later option)
Specifies how often you want the task to be performed: Once, Daily, Weekly, Monthly. Additional settings are shown depending on your selection.

Select Date and Time to start (only for later option)
Specifies the start date (now or later) when you want the task to be performed. If you select later, you can also specify the start time for the task.

Perform this Task (only for Daily)
Selects whether the task should be performed Every Day, Weekdays or Every x days.


Select days of the week you want this task to start (only for Weekly)
Specifies that the task should be executed every week or every n weeks (n = 1..52). You can also select the days of the week.

Select the day and the month you want this task to start (only for Monthly)
Specifies for which months of the year the task should be executed. You can also specify on which day of the month a task should be executed. And you can specify a weekday and whether the task should be executed in the first, second, third, fourth or last week of a month.

Retry Count
Specifies the number of retries for this task if the job fails. A value between 0 and 10 can be specified.

Retry Interval (minutes) (only for later option)
The time (in minutes) before the next attempt if previous attempts at a task failed and the number of retries does not exceed the retry counter. A value between 1 and 360 minutes can be specified.

Task Start Time Window (minutes) (only for later option)
This value defines the maximum period (in minutes) for retries of a task. Outside this period no retries will be performed, even if all the retries specified by the retry counter have not been used.

It can happen that a task is handed over to the deployment server but is only queued, because too many other tasks are already running. If the Task Start Time Window elapses while a task is in this state, the task will also be canceled. A running task will not be canceled when the Task Start Time Window elapses.

Enable Segmentation
Enable this option to specify the segmentation size.

Segmentation Size
Specifies the maximum number of targets that will be deployed in parallel. This can be useful when a job with a huge number of targets is created.
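The interaction of Retry Count, Retry Interval, Task Start Time Window and Segmentation Size can be pictured as follows; this is a model of the documented semantics, not product code.

    from datetime import datetime, timedelta

    def plan_retries(start, retry_count, retry_interval_min, window_min):
        # At most retry_count retries, retry_interval_min minutes apart,
        # and only inside the Task Start Time Window.
        window_end = start + timedelta(minutes=window_min)
        times, t = [], start
        for _ in range(retry_count):
            t += timedelta(minutes=retry_interval_min)
            if t > window_end:      # outside the window no retries are performed
                break
            times.append(t)
        return times

    def segments(targets, size):
        # Segmentation Size: batches of targets deployed in parallel.
        return [targets[i:i + size] for i in range(0, len(targets), size)]

    start = datetime(2012, 9, 1, 22, 0)
    print(plan_retries(start, retry_count=5, retry_interval_min=30, window_min=90))
    print(segments(["srv%02d" % i for i in range(1, 11)], size=4))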



7.5 Generic booting of servers

Before you can perform a generic boot for one or more servers, you must create a boot group that has an image of type Generic Image assigned. With generic boot, you can boot a generic image on one or more servers of a group.

Only limited task progress handling is possible. As soon as a PXE request has been received from a client, the state of this client is set to Success. After the PXE requests from all clients in the boot group have been received, the task state is set to Success.

7.5.1 Generic Boot wizard

The Generic Boot wizard allows you to boot one or more servers of a boot group with a generic image.

To open the wizard, select Generic Boot from the context menu in the Servers by Boot Image group.

Generic Boot is offered in the context menu of a boot group if the group has an image of type Generic Image assigned.

7.5.1.1 Task Name step (Generic Boot wizard)


Task Name is the first step in the Generic Boot wizard. In this step you must enter a task name.


Figure 101: Task Name step

Task Name
Name of the task. Any string except the characters & and " is allowed.

7.5.1.2 Settings step (Generic Boot wizard)

Settings is the next step in the Generic Boot wizard. This step allows you to specify the system shutdown method before starting the generic boot and the registration type at the PXE service.



Figure 102: Settings step

Force System Shutdown before Deployment
Deployment Manager will try to shut down the affected target systems before the deployment process is started:

- When this option is checked, the target servers will be shut down before the deployment is started.
- When this option is not checked, the target servers must already be shut down; otherwise the deployment will fail.

Depending on the selected shutdown method, a shutdown might be a hard power-off or a graceful shutdown. In the case of a hard power-off, unsaved data will be lost.

Shutdown Method
Select the shutdown method:


Forced
A hard power-off of all running target servers will be initiated if these servers support remote power-off by management blade or baseboard management controller. In this case unsaved data will be lost.

ACPI
Deployment Manager will try to perform an ACPI shutdown of the system. If the running operating system supports ACPI, it should save unsaved data and clean up the file systems.

Graceful (via ServerView Agent)
This shutdown method makes use of the ServerView agent functionality. ServerView agents running on the target systems are instructed to shut down the systems. This method requires that ServerView agents are installed on all systems and are configured to allow this functionality. It also requires the specification of a shutdown user name and password.

Registration at PXE service
The registration type at the PXE service:

OneShot
The clients will PXE boot only once. After a PXE request has been received from a client, the client is automatically unregistered at the PXE service.

Permanent
The clients are registered permanently at the PXE service. If another PXE request is received from a client, another generic boot of the client is performed.

After the task has been completed, you must manually unregister the clients at the PXE service. See "Unregistering all clients at the PXE service" in section "Operations on the Tasks tab" on page 99 and "Unregistering a client" on page 117.



Delay (in seconds) to unregister the client after PXE request has been received
If OneShot is selected and the client should not be unregistered immediately after the PXE request has been received, specify the delay here.
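A toy model of the two registration types and the unregistration delay; the PXE service itself is only simulated here, and the server names are made up.

    import threading

    registered = {"srv01", "srv02"}      # clients known to the PXE service (toy model)

    def unregister(client):
        registered.discard(client)
        print(client + " unregistered at the PXE service")

    def on_pxe_request(client, mode, delay_s=0):
        # OneShot: the client is unregistered automatically, optionally delayed.
        # Permanent: the client stays registered and every further PXE request
        # triggers another generic boot; unregister manually after the task.
        print(client + ": generic boot started")
        if mode == "OneShot":
            if delay_s > 0:
                threading.Timer(delay_s, unregister, args=(client,)).start()
            else:
                unregister(client)

    on_pxe_request("srv01", "OneShot")
    on_pxe_request("srv02", "Permanent")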

7.5.1.3 Bios Boot Type step (Generic Boot wizard)

Bios Boot Type is the next step in the Generic Boot wizard.


This step is only displayed when iRMC Support has been chosen as the method for remote management in Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.

- The selection here is ignored if the target server does not have a UEFI BIOS.
- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value that is specified here must match the manual configuration in the BIOS.


Figure 103: Bios Boot Type step

"PC compatible" Boot (legacy)
Initiate a Legacy PXE boot at the target server.

Extensible Firmware Interface Boot (EFI)
Initiate a UEFI PXE boot at the target server.

7.5.1.4 Scheduling step (Generic Boot wizard)

Scheduling is the final step in the Generic Boot wizard. In this step you can create the Generic Boot task as a scheduled task.



Figure 104: Scheduling step


Time Unit to perform this Task (only for later option)
Specifies how often you want the task to be performed: Once, Daily, Weekly, Monthly. Additional settings are shown depending on your selection.

Select Date and Time to start (only for later option)
Specifies the start date (now or later) when you want the task to be performed. If you select later, you can also specify the start time for the task.

Perform this Task (only for Daily)
Selects whether the task should be performed Every Day, Weekdays or Every x days.


Select days of the week you want this task to start (only for Weekly)
Specifies that the task should be executed every week or every n weeks (n = 1..52). You can also select the days of the week.

Select the day and the month you want this task to start (only for Monthly)
Specifies for which months of the year the task should be executed. You can also specify on which day of the month a task should be executed. And you can specify a weekday and whether the task should be executed in the first, second, third, fourth or last week of a month.

Retry Count
Specifies the number of retries for this task if the job fails. A value between 0 and 10 can be specified.

Retry Interval (minutes) (only for later option)
The time (in minutes) before the next attempt if previous attempts at a task failed and the number of retries does not exceed the retry counter. A value between 1 and 360 minutes can be specified.

Task Start Time Window (minutes) (only for later option)
This value defines the maximum period (in minutes) for retries of a task. Outside this period no retries will be performed, even if all the retries specified by the retry counter have not been used.

It can happen that a task is handed over to the deployment server but is only queued, because too many other tasks are already running. If the Task Start Time Window elapses while a task is in this state, the task will also be canceled. A running task will not be canceled when the Task Start Time Window elapses.

Enable Segmentation
Enable this option to specify the segmentation size.

Segmentation Size
Specifies the maximum number of targets that will be deployed in parallel. This can be useful when a job with a huge number of targets is created.



8 Crash Recovery

The Crash Recovery work area allows you to create and restore snapshot images.

If the target server has a UEFI BIOS, each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot. If the target server has a GPT boot disk, UEFI PXE boot must be configured; otherwise either Legacy PXE or UEFI PXE boot can be configured.

8.1 Creating a snapshot image

You can create a snapshot image, which can be used as a kind of backup of one particular server. A snapshot image can be created for several selected servers at the same time. After restoring a snapshot image, the restored system will have the same host name and the same IP address as the source system. Therefore a snapshot image is not useful for cloning several servers.

8.1.1 Create Snapshot Image Wizard

The Create Snapshot Image wizard allows you to create a snapshot image.

To open the wizard, select Create Snapshot Image from the context menu of a selected server in the Servers view.

After starting the image creation process (by clicking Finish), the cloning module boots the reference system, identified by its MAC address, with a WinPE boot image via PXE and starts the image creation tool on WinPE.

The automatic remote power-on and the remote PXE boot configuration of the BIOS require that the server can be managed by a management blade (blade system), an RSB or a BMC that supports IPMI 1.5 over LAN. For non-blade systems you must have specified the correct BMC settings. If the server cannot be managed in one of these ways, a dialog appears asking you to initiate a PXE boot manually. Scheduled tasks for non-manageable servers are not possible.


8.1.1.1 Task Name step (Create Snapshot Image wizard)

Task Name is the first step in the Create Snapshot Image wizard. In this step you must enter a task name.

Figure 105: Task Name step

Task Name
Name of the task. Any string except the characters & and " is allowed.

Image Information
Optional: Specify any textual information that you want to be saved with the image. This information is stored in a file with the suffix .txt.



8.1.1.2 Deployment Server step (Create Snapshot Image wizard)

Deployment Server is the next step in the Create Snapshot Image wizard. In this step you can select the deployment server.

Figure 106: Deployment Server step


Deployment Server
Name of the deployment server which is used for this task.

8.1.1.3 Image Path and Name step (Create Snapshot Image wizard)

Image Path and Name is the next step in the Create Snapshot Image wizard. In this step you can define the location (path) and the name of the image.


Figure 107: Image Path and Name step

Disk Images
Displays the existing image repositories in the tree view on the left. The table on the right displays the images in the repository. Select a folder in the repository.

Image Filename
Name of the image. You can use the following variables in the file name:

%D This is replaced by the day of the month (01, 02, ..., 31).
%M This is replaced by the month (01, 02, ..., 12).
%Y This is replaced by the year (4 digits).
%S This is replaced by the name of each server of which an image will be created.
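For example, a pattern such as %S_snapshot_%Y-%M-%D expands as sketched below; the expansion function is illustrative only, not part of the product.

    from datetime import date

    def expand_image_name(pattern, server, today):
        # Expand the documented placeholders %D, %M, %Y and %S.
        return (pattern.replace("%D", "%02d" % today.day)
                       .replace("%M", "%02d" % today.month)
                       .replace("%Y", "%04d" % today.year)
                       .replace("%S", server))

    # One image file name per server in the task:
    for server in ("web01", "web02"):
        print(expand_image_name("%S_snapshot_%Y-%M-%D", server, date(2012, 9, 14)))
    # web01_snapshot_2012-09-14
    # web02_snapshot_2012-09-14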



8.1.1.4 Options step (Create Snapshot Image wizard)

Options is the next step in the Create Snapshot Image wizard. In this step you can specify server options before and after deployment.

Figure 108: Options step

Force System Shutdown before Deployment
Force a system shutdown before creating the image. Operating-system-specific ServerView agents must be installed on the relevant server.

Shutdown Method
Select the shutdown method:


Graceful (via ServerView Agents)
This shutdown method makes use of the ServerView agent functionality. ServerView agents running on the target systems are instructed to shut down the systems. This method requires ServerView agents to be installed and configured on all systems. It also requires the specification of a shutdown user name and password.

ACPI
Deployment Manager will try to perform an ACPI shutdown of the system. If the running operating system supports ACPI, it should save unsaved data and clean up the file systems.

Forced
A hard power-off of all running target servers will be initiated if these servers support remote power-off by management blade or baseboard management controller. In this case unsaved data will be lost. Normally this is not a problem, since the disk will be overwritten by a new disk image.

Shutdown Username
Preconfigured user name.

Shutdown Password/Repeat Shutdown Password
Preconfigured user password.

The shutdown user name and password are used for authentication of this shutdown request at the ServerView agent of the corresponding server. The ServerView Deployment Manager service remembers the last-used shutdown user name and password. These will be set as the default shutdown user name and password. The last-used shutdown user name and password are lost when the ServerView Deployment Manager service is restarted.



System Status after Deployment
System status after creating the image:

Shut down System
The server will be shut down at the end of the process.

Keep System Running
The server will be kept running at the end of the process.

GUID Check
The GUID of a server, as specified in the deployment configuration of that server, is checked against the GUID that is sent as part of the PXE boot request of that server. The deployment of the server will only start if the GUID sent in the PXE boot request matches the configured GUID.
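The check amounts to a format-tolerant GUID comparison, as sketched here (illustrative only, with a made-up GUID):

    import uuid

    def guid_matches(configured, from_pxe_request):
        # Compare the configured GUID with the one from the PXE boot request,
        # ignoring case and formatting differences.
        return uuid.UUID(configured) == uuid.UUID(from_pxe_request)

    configured = "4C4C4544-0051-3010-8046-B4C04F565831"
    assert guid_matches(configured, configured.lower())
    assert not guid_matches(configured, "00000000-0000-0000-0000-000000000000")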

8.1.1.5 Disks step (Create Snapshot Image wizard)


Disks is the next step in the Create Snapshot Image wizard. In this step you can specify the disks from which the image will be saved.


Figure 109: Disks step

Click Add to add a new disk.

Logical Disk Number
Select the logical disk number from which the image is to be created. By default, logical disk 0 is saved, which is the first logical device.

If there are local disks and SAN disks connected to the system, the disk numbering depends on the driver load order (for deployment platform WinPE MDP). In most configurations with an Emulex board (e.g. in combination with an LSI SAS IME board), the SAN disks are listed before the local disks. If you accept the default value 0 for the logical disk number, the first SAN disk will be saved. As a workaround you could unplug the Fibre Channel cables before starting image creation.



Raw Mode
You can create the image in raw disk mode. All sectors of the disks will be saved in the image. This image can only be deployed to servers which have disks with the same size and the same disk layout.

File System Independent
This method is used for non-supported file systems. When selected, the complete partition information is copied to the image file and must be cloned to a partition of the same size. For more information see section "File-System-Independent image creation (raw image creation)" on page 155.

File System Dependent
When selected, only the used blocks of a file system will be saved in the image. For more information see section "File-System-Dependent image creation" on page 154.

This functionality is supported for the following file systems:

- FAT, FAT32
- NTFS, NTFS5
- EXT2, EXT3, EXT4


- Reiser FS version 3.5 and 3.6 on SuSE SLES 9 and 10

Other file systems, and especially other versions of the Reiser file system, are not supported by this functionality.

If a disk contains several partitions with non-supported file system types, those partitions will be saved completely, including the unused blocks. For the partitions with a supported file system, only the used blocks will be saved. This reduces the size of the image files and also increases the speed of the image generation. If you want to restore an image on a disk that has a different size than the source disk, this option should be specified.
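The per-partition decision described above can be summarized as follows; this is a conceptual sketch of the rule, not the image creation tool itself, and the block counts are invented.

    SUPPORTED = {"FAT", "FAT32", "NTFS", "NTFS5",
                 "EXT2", "EXT3", "EXT4", "REISERFS 3.5", "REISERFS 3.6"}

    def blocks_to_save(fs_type, used_blocks, total_blocks):
        # Used blocks only for supported file systems, the whole partition
        # (including unused blocks) for anything else.
        return used_blocks if fs_type.upper() in SUPPORTED else total_blocks

    # A mixed disk: the XFS partition is saved in full, the others shrink.
    for fs, used, total in [("NTFS", 120000, 500000),
                            ("EXT3", 80000, 250000),
                            ("XFS", 10000, 250000)]:
        print(fs, blocks_to_save(fs, used, total))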

Verify File System
The file system is checked before the image creation process is started.


Raise Error for Unsupported File Systems
Only use this option if the File System Dependent option is checked. If an unsupported file system is detected, the image creation process will raise an error and stop.

Compress Image
The image file is compressed during the image creation.

Fast Image Creation
When this option is checked, the image creation process uses an optimized method to access the target disk for the image creation. This optimization can be applied to EXT2, EXT3 and EXT4 file systems when the option File System Dependent is checked. For an image that was created with this option, the option Expand last partition to whole disk in the System Preparation step of the cloning is not available, because applying the option Expand last partition to whole disk to such an image would raise an error.

This option must not be checked if there are NTFS partitions (for Windows).

Click Remove to delete the settings for the disk.

8.1.1.6 Bios Boot Type step (Create Snapshot Image wizard)

Bios Boot Type is the next step in the Create Snapshot Image wizard.

This step is only displayed when iRMC Support has been chosen as the method for remote management in Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.

- The selection here is ignored if the target server does not have a UEFI BIOS.
- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value that is specified here must match the manual configuration in the BIOS.



Figure 110: Bios Boot Type step

"PC compatible" Boot (legacy)
Initiate a Legacy PXE boot at the target server.

Extensible Firmware Interface Boot (EFI)
Initiate a UEFI PXE boot at the target server.

8.1.1.7 Scheduling step (Create Snapshot Image wizard)


Scheduling is the final step in the Create Snapshot Image wizard. In this step you can create the snapshot image generation task as a scheduled task.


Figure 111: Scheduling step

Time Unit to perform this Task (only for later option)
Specifies how often you want the task to be performed: Once, Daily, Weekly, Monthly. Additional settings are shown depending on your selection.

Select Date and Time to start (only for later option)
Specifies the start date (now or later) when you want the task to be performed. If you select later, you can also specify the start time for the task.

Perform this Task (only for Daily)
Selects whether the task should be performed Every Day, Weekdays or Every x days.



Select days of the week you want this task to start (only for Weekly)
Specifies that the task should be executed every week or every n weeks (n = 1..52). You can also select the days of the week.

Select the day and the month you want this task to start (only for Monthly)
Specifies for which months of the year the task should be executed. You can also specify on which day of the month a task should be executed. And you can specify a weekday and whether the task should be executed in the first, second, third, fourth or last week of a month.

Retry Count
Specifies the number of retries for this task if the job fails. A value between 0 and 10 can be specified.

Retry Interval (minutes) (only for later option)
The time (in minutes) before the next attempt if previous attempts at a task failed and the number of retries does not exceed the retry counter. A value between 1 and 360 minutes can be specified.

Task Start Time Window (minutes) (only for later option)
This value defines the maximum period (in minutes) for retries of a task. Outside this period no retries will be performed, even if all the retries specified by the retry counter have not been used.

It can happen that a task is handed over to the deployment server but is only queued, because too many other tasks are already running. If the Task Start Time Window elapses while a task is in this state, the task will also be canceled. A running task will not be canceled when the Task Start Time Window elapses.

Enable Segmentation
Enable this option to specify the segmentation size.


Segmentation Size
Specifies the maximum number of targets that will be deployed in parallel. This can be useful when a job with a huge number of targets is created.


8.1.1.8 Action after starting the "Create Snapshot Image" task

Basically, the target server behaves as follows after the Create Snapshot Image task has been started.

Windows / Linux reference system

When saving an image from a Windows or Linux reference system, the following steps are performed:

1. After the task has been started, the client is powered on (see the notes below).
2. The client sends a PXE request.
3. The client is booted with WinPE.
4. Saving the image starts.
5. After the image has been saved, the client is shut down.

- When you select iRMC Support, MMB SNMP Support, MMB Remote Manager Support, or Wake on LAN Support in the Remote Management Ports step in the Deployment Configuration wizard, Deployment Manager turns on the client power automatically after starting the task. When you select Manual Management, you need to turn on the client manually.
- When you select iRMC Support, MMB SNMP Support, or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, PXE is automatically set to first place in the BIOS boot order when starting the client.
- When you select Wake on LAN Support or Manual Management in the Remote Management Ports step in the Deployment Configuration wizard, you need to set PXE to first place in the BIOS boot order of the client before you start the task.



- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to turn off a target server before you start the deployment. You can perform the shutdown manually or by using the Force System Shutdown before Deployment option in the task you want to execute.
- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to set the user name and the password for iRMC or MMB in the Deployment Configuration wizard before starting the task.
- When you select Keep System Running in the System Status after Deployment option, the client reboots after finishing the task.

8.2 Restoring a snapshot image

You can restore a snapshot image. After restoring a snapshot image, the restored system will have the same host name and the same IP address as the source system.

8.2.1 Restore Snapshot Image Wizard

The Restore Snapshot Image wizard allows you to restore a snapshot image.

To open the wizard, select Restore Snapshot Image from the context menu of a selected server in the Servers view.

8.2.1.1 Task Name step (Restore Snapshot Image wizard)


Task Name is the first step in the Restore Snapshot Image wizard. In this step you must enter a task name.


Figure 112: Task Name step

Task Name
Name of the task. Any string except the characters & and " is allowed.

8.2.1.2 Deployment Server step (Restore Snapshot Image wizard)

Deployment Server is the next step in the Restore Snapshot Image wizard. In this step you can select the deployment server.



Figure 113: Deployment Server step

Deployment Server
Name of the deployment server which is used for this task.

8.2.1.3 Disk Image step (Restore Snapshot Image wizard)


Disk Image is the next step in the Restore Snapshot Image wizard. In this step you can select the disk image.


Figure 114: Disk Image step

Use Cloning Images for Crash Recovery
When this option is checked, cloning images can also be used for crash recovery.

Disk Images
Displays the existing image repositories in the tree view on the left. The table on the right displays the images in the repository. Select a folder in the repository.

8.2.1.4 Disks step (Restore Snapshot Image wizard)

Disks is the next step in the Restore Snapshot Image wizard. In this step you can select the logical disk number.



Figure 115: Disks step


Restore Disk Image
This option can only be selected if the image contains multiple disks (multiple-drive snapshot cloning). In this case you can select for each disk whether it should be restored.

Restore to Logical Disk Number
Optional: Select the disk on which the image is to be deployed.


If there are local disks and SAN disks connected to the system, the disk numbering depends on the driver load order (for deployment platform WinPE MDP). In most configurations with an Emulex board (e.g. in combination with an LSI SAS IME board), the SAN disks are listed before the local disks. If you accept the default value 0 for the logical disk number, the first SAN disk will be overwritten. As a workaround you could unplug the Fibre Channel cables before starting the cloning.

Partitions
In the first column, select whether the partition should be restored.

The columns in the table have the following meanings:

Column         Meaning
Volume Label   Volume label
Attributes     Attributes (and Active flag)
Format         File system
Size           Size of the partition
Used           Percentage of used space

When the system partition is restored, all partitions before the system partition (for example the ESP and MSR partitions for a GPT disk) must be restored as well.

8.2.1.5 System Preparation step (Restore Snapshot Image wizard)

System Preparation is the next step in the Restore Snapshot Image wizard. In this step you can select the system preparation before image deployment.



Figure 116: System Preparation step

Unchanged
When this method is selected, no further system preparation will be done before the disk image is cloned to the target servers. This requires that a system preparation (configuration of the RAID controller) was done manually for all target servers before the image cloning is started.

All Primergy
This method allows a RAID configuration for all RAID controllers supported by Installation Manager.

The following parameters are displayed:

Controller Vendor
Select the controller vendor.


Controller Family
Select the controller family.

Controller Model
Select the controller model.

Controller Number
Select the controller number.

Manual Configuration
You can define additional parameters:

Raid Level
Select the RAID level.

Number of Disks
Select the number of disks.

Use Hot Spare
A standby hard disk can be used to replace a defective hard disk.

- To create a RAID array on an SX940 storage blade with a Lynx controller, select LSI/Mylex RAID Controller as the controller vendor, LSI IME SAS as the controller family, Any as the controller model, and 1 as the controller number.
- To create a RAID array on an SX940 storage blade with a Cougar controller, select LSI/Mylex RAID Controller as the controller vendor, LSI MegaRAID SAS as the controller family, RAID 5/6 SAS based on LSI MegaRAID as the controller model, and 0 as the controller number.

Preparation Boot Image
The following parameters are displayed:

Boot Image Path
Displays the path where the selected image is stored.



Boot Image Type
Select the boot image type: Floppy Disk Image or Bootstrap Image.

Expand last partition to whole disk
The last partition of the restored disk image will be expanded to use the rest of the whole disk.

Convert MBR to GPT
Initialize the target disk as a GPT disk and convert MBR partitions to GPT partitions. This is only possible for data disks (not for system disks).

8.2.1.6 Settings step (Restore Snapshot Image wizard)


Settings is the next step in the Restore Snapshot Image wizard. This step allows you to specify the deployment method and the system shutdown method before cloning the image.


Figure 117: Settings step

Deployment Method
Deployment method for the cloning:

Multicast
The disk image data is sent via multicast IP packets over the network. This means that a data packet is sent to all target servers involved once, and not n times (where n is the number of servers involved). This allows you to deploy a large number of servers in a short time. You might experience problems with the multicast method if there is a router between the deployment server and a target server, since not all routers allow multicast transfer.



Unicast
The disk image data is sent via unicast IP packets over the network. This means that a data packet is sent to each server separately. When using the unicast deployment method, only 4 servers can be deployed in parallel. If your deploy job involves more servers, the deployment of the other servers is queued. The other servers will be deployed as soon as the deployment of one server has finished.
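The unicast limit behaves like a worker pool with four slots, as this toy model shows; the deploy function and the server names are made up and only stand in for the actual image transfer.

    from concurrent.futures import ThreadPoolExecutor
    import time

    def deploy(server):
        time.sleep(0.1)              # stand-in for the unicast image transfer
        return server + " deployed"

    servers = ["srv%02d" % i for i in range(1, 10)]

    # At most 4 unicast deployments run in parallel; the remaining servers are
    # queued and start as soon as one of the running deployments finishes.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(deploy, servers):
            print(result)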

Force System Shutdown before Deployment
Deployment Manager will try to shut down the affected target systems before the deployment process is started:

- When this option is checked, the target servers will be shut down before the deployment is started.
- When this option is not checked, the target servers must already be shut down; otherwise the deployment will fail.

Depending on the selected shutdown method, a shutdown might be a hard power-off or a graceful shutdown. In the case of a hard power-off, unsaved data will be lost. This might not be a problem, since the disk will be overwritten by a new disk image, but data on a second logical disk might be lost.

Shutdown Method
Select the shutdown method:


Forced
A hard power-off of all running target servers will be initiated if these servers support remote power-off by management blade or baseboard management controller. In this case unsaved data will be lost. Normally this is not a problem, since the disk will be overwritten by a new disk image.

ACPI
Deployment Manager will try to perform an ACPI shutdown of the system. If the running operating system supports ACPI, it should save unsaved data and clean up the file systems.


Graceful (via ServerView Agent)
This shutdown method makes use of the ServerView agent functionality. ServerView agents running on the target systems are instructed to shut down the systems. This method requires that ServerView agents are installed on all systems and are configured to allow this functionality. It also requires the specification of a shutdown user name and password.

Shutdown Username
Preconfigured user name.

Shutdown Password/Repeat Shutdown Password
Preconfigured user password.

The shutdown user name and password are used for authentication of this shutdown request at the ServerView agent of the corresponding server. The ServerView Deployment Manager service remembers the last-used shutdown user name and password. These will be set as the default shutdown user name and password. The last-used shutdown user name and password are lost when the ServerView Deployment Manager service is restarted.

GUID Check
The GUID of a server, as specified in the deployment configuration of that server, is checked against the GUID that is sent as part of the PXE boot request of that server. The deployment of the server will only start if the GUID sent in the PXE boot request matches the configured GUID.

8.2.1.7 Post Deployment step (Restore Snapshot Image wizard)

Post Deployment is the next step in the Restore Snapshot Image wizard. This step allows you to specify the system status after cloning.



Figure 118: Post Deployment step

System Status after Deployment
System status of the target systems after the cloning:

Shut down System
The target servers involved will be shut down at the end of the process.

Keep System Running
The target servers involved will be kept running at the end of the process.

8.2.1.8 Bios Boot Type step (Restore Snapshot Image wizard)


Bios Boot Type is the next step in the Restore Snapshot Image wizard.

This step is only displayed when iRMC Support has been chosen as the method for remote management in Deployment Configuration. The Bios Boot Type is needed to initiate the remote PXE boot of the target server.


- The selection here is ignored if the target server does not have a UEFI BIOS.
- Each LAN port must be manually configured for either Legacy PXE or UEFI PXE boot (in the BIOS of the target server). This configuration cannot be changed when initiating a PXE boot from the deployment server. The value that is specified here must match the manual configuration in the BIOS.

Figure 119: Bios Boot Type step

"PC compatible" Boot (legacy)
Initiate a Legacy PXE boot at the target server.

Extensible Firmware Interface Boot (EFI)
Initiate a UEFI PXE boot at the target server.



8.2.1.9 Scheduling step (Restore Snapshot Image wizard)

Scheduling is the final step in the Restore Snapshot Image wizard. This step allows you to clone the image as a scheduled task.

Figure 120: Scheduling step


Time Unit to perform this Task (only for later option)
Specifies how often you want the task to be performed: Once, Daily, Weekly, Monthly. Additional settings are shown depending on your selection.

Select Date and Time to start (only for later option)
Specifies the start date (now or later) when you want the task to be performed. If you select later, you can also specify the start time for the task.


Perform this Task (only for Daily)
Selects whether the task should be performed Every Day, Weekdays or Every x days.

Select days of the week you want this task to start (only for Weekly)
Specifies that the task should be executed every week or every n weeks (n = 1..52). You can also select the days of the week.

Select the day and the month you want this task to start (only for Monthly)
Specifies for which months of the year the task should be executed. You can also specify on which day of the month a task should be executed. And you can specify a weekday and whether the task should be executed in the first, second, third, fourth or last week of a month.

Retry Count
Specifies the number of retries for this task if the job fails. A value between 0 and 10 can be specified.

Retry Interval (minutes) (only for later option)
The time (in minutes) before the next attempt if previous attempts at a task failed and the number of retries does not exceed the retry counter. A value between 1 and 360 minutes can be specified.

Task Start Time Window (minutes) (only for later option)
This value defines the maximum period (in minutes) for retries of a task. Outside this period no retries will be performed, even if all the retries specified by the retry counter have not been used.

It can happen that a task is handed over to the deployment server but is only queued, because too many other tasks are already running. If the Task Start Time Window elapses while a task is in this state, the task will also be canceled. A running task will not be canceled when the Task Start Time Window elapses.

Enable Segmentation
Enable this option to specify the segmentation size.



Segmentation Size
Specifies the maximum number of targets that will be deployed in parallel. This can be useful when a job with a huge number of targets is created.

8.2.1.10 Action after starting the "Restore Snapshot Image" task

Basically, the target server behaves as follows after the Restore Snapshot Image task has been started.

Windows / Linux reference system

When restoring an image of a Windows or Linux reference system, the following steps are performed:

1. After the task has been started, the client is powered on (see the notes below).
2. The client sends a PXE request.
3. The client is booted with WinPE.
4. Restoring the image starts.
5. After the image has been restored, the client is shut down.


- When you select iRMC Support, MMB SNMP Support, MMB Remote Manager Support, or Wake on LAN Support in the Remote Management Ports step in the Deployment Configuration wizard, Deployment Manager turns on the client power automatically after starting the task. When you select Manual Management, you need to turn on the client manually.
- When you select iRMC Support, MMB SNMP Support, or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, PXE is automatically set to first place in the BIOS boot order when starting the client.


- When you select Wake on LAN Support or Manual Management in the Remote Management Ports step in the Deployment Configuration wizard, you need to set PXE to first place in the BIOS boot order of the client before you start the task.
- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to turn off a target server before you start the deployment. You can perform the shutdown manually or by using the Force System Shutdown before Deployment option in the task you want to execute.
- If you select iRMC Support, MMB SNMP Support or MMB Remote Manager Support in the Remote Management Ports step in the Deployment Configuration wizard, you need to set the user name and the password for iRMC or MMB in the Deployment Configuration wizard before starting the task.
- When you select Keep System Running in the System Status after Deployment option, the client reboots after finishing the task.



9 Messages

The GUI polls the Deployment Service at regular intervals for error and event messages. You can specify the refresh interval in the Global Options step in the Settings wizard, see section "Global Options step (Settings wizard)" on page 336.

These messages are displayed in the Messages List dialog box. To open the dialog box, select the item Messages from the menu bar.

You will be informed of new incoming messages in a message window in the top right corner of the main window.

The message window closes when you select Messages from the menu bar to display the Messages List dialog box. If you do not want to look at the messages immediately, you can close the message window by selecting Close. If you do not want the message window to appear again (in the current session), select Don't show again.

9.1 Messages List dialog box

The Messages List dialog box displays all messages received from the ServerView Deployment Manager service.


Figure 121: Messages List dialog box

The columns in the table are explained below:

Column      Meaning
Timestamp   Time the message was read from the ServerView Deployment Manager service.
Message     Message text.

Message
Displays the complete message text of the selected message.



10 Settings

You can define global settings for Deployment Manager. This is done in the Settings wizard.

The Settings wizard starts for the first time when you start Deployment Manager after installation. You must work through all steps of the wizard to specify the settings for the first time.

You can open this wizard again at any time to change the settings. To start the wizard, select Settings from the menu bar.

10.1 Deployment Server step (Settings wizard)

Deployment Server is the first step in the Settings wizard. In this step you can select deployment servers for Mass Cloning and Crash Recovery.

Deployment servers are responsible for preparing the servers and their environment from a central instance via the LAN. The preparation of a deployment server is described in the "Installation Manager" guide.


Figure 122: Deployment Server step

>>
Copies all servers from the Server List to the Deployment Servers list.

>
Copies the servers selected from the Server List to the Deployment Servers list.

<
Copies the servers selected from the Deployment Servers list back to the Server List.

Log On Properties
This opens the Log On Properties dialog box for the selected deployment server, see section "Log On Properties dialog box" on page 331.

10.1.1 Log On Properties dialog box

Figure 123: Log On Properties dialog box


Username
User name for logging on to the deployment server. This must be one of the user names specified during the Deployment Manager installation (Deployment Services package).

Password
Password for logging on to the deployment server.


10.2 Repositories step (Settings wizard)

Repositories is the next step in the Settings wizard. In this step you must specify at least one installation configuration repository or one image repository. You can add repositories and folders via the context menu.

Figure 124: Repositories step

10.3 Licenses Installed step (Settings wizard)

Licenses Installed is the next step in the Settings wizard. In this step you can see the currently installed licenses for Deployment Manager.



Figure 125: Licenses Installed step

Installed Licenses
Displays the number of installed licenses.

10.3 Licenses Installed step (Settings wizard)<br />

Used Licenses<br />

Displays the number of occupied licenses. Whenever a server is<br />

deployed, it occupies a license. If a server is already using a target<br />

license, it does not occupy a new target license when it is deployed<br />

again. A server that occupies a license can be deployed as many times<br />

as necessary.<br />

Click Add to open the Add New License Key dialog box.<br />


10.3.1 Add New License Key dialog box

The Add New License Key dialog box allows you to add a new license string.

You cannot use the Deployment Manager functionality without a license. As
well as the information that defines the validity of a license, a license
string contains information about the number of target systems that can be
deployed with this license. Multiple Deployment Manager licenses can be
installed; in this case the numbers of deployable targets of all valid
Deployment Manager licenses are added together. Deployment Manager allows
you to deploy as many servers as the sum of target licenses over all valid
Deployment Manager licenses permits.
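
As an illustration of this arithmetic, the following sketch sums the target
counts of all valid licenses (the License structure and its fields are
hypothetical, not part of the product API):

    from dataclasses import dataclass

    @dataclass
    class License:
        key: str           # the license string (hypothetical field)
        valid: bool        # validity derived from the key (hypothetical)
        target_count: int  # targets covered by this key (hypothetical)

    def deployable_targets(licenses: list[License]) -> int:
        # Sum the target counts over all valid licenses only.
        return sum(lic.target_count for lic in licenses if lic.valid)

    # Two valid licenses for 10 and 25 targets allow 35 deployable servers.
    print(deployable_targets([License("A", True, 10), License("B", True, 25)]))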

You can enter a license key during Deployment Manager installation.
If you do not enter a valid license key, an evaluation key is created.

To open the dialog box, click Add in the Licenses Installed step of the
Settings wizard.

Figure 126: Add New License Key dialog box

License Key
    Enter the new license string that you received from your provider. The
    license you specify will be added to the list of licenses.



10.4 Licenses Used step (Settings wizard)

Licenses Used is the next step in the Settings wizard. Each server in the
list occupies one license. In this step you can release licenses from a
selected server.

Figure 127: Licenses Used step

Installed Licenses
    Displays the number of installed licenses.


Used Licenses
    Displays the number of occupied licenses. Whenever a server is
    deployed, it occupies a license. If a server is already using a target
    license, it does not occupy a new target license when it is deployed
    again. A server that occupies a license can be deployed as many times
    as necessary.

To release an occupied target license, select a server in the list and click
Release License.

10.5 Global Options step (Settings wizard)

Global Options is the final step in the Settings wizard. In this step you
can specify global settings for Deployment Manager.

Figure 128: Global Options step



GUI Options

Messages update interval (s)
    Refresh interval in which messages are read from ServerView Deployment
    Manager.

Trace Level (Frontend only)
    For diagnostic purposes you can specify one of the following trace
    levels:

    No Tracing: No trace output will be generated.
    Severe: Severe (not recoverable) errors only.
    Info: Recoverable errors and important information.
    Fine: Debug output of important information, most of it reflecting
    user actions.
    Calltrace: System-level debug output.
    Finer: Detailed system-level debug output, parameters sent to and data
    parsed by the ServerView Deployment Manager service.
    Data: Original data as received from the ServerView Deployment Manager
    service.



11 High-Availability support

As of version 4.0 you can use Deployment Manager in a high-availability
configuration. This means that

- The location of the databases (for the Deployment Manager and Deployment
  Services packages) is configurable. You can use an external filer as
  storage for the databases. The two databases for the two packages can be
  on different filers. You cannot use a mapped network drive.

- Deployment Manager can be installed on two servers which use the same
  databases. The ServerView Deployment Manager and Deployment Service
  services are not started on the servers - it is up to the cluster
  software to start the services on the server it chooses as the active
  one. Only one instance should run at a time.

- The ServerView Deployment Manager service and the Deployment Service
  service can detect the abnormal shutdown of the service (crash) and
  write a message to the event log.

- After a crash, the ServerView Deployment Manager service and the
  Deployment Service service can recover by setting the status of the jobs
  that were interrupted by the crash to Error (see the sketch after this
  list).

- ACID support
  In databases, ACID stands for Atomicity, Consistency, Isolation, and
  Durability. These are considered to be the key transaction processing
  features of a database management system. Without them, the integrity of
  the database cannot be guaranteed.
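
The crash recovery described above can be pictured with a small sketch. The
actual job store of Deployment Manager is not documented here, so the SQLite
database, table and column names below are hypothetical:

    import sqlite3

    def recover_after_crash(db_path: str) -> int:
        """Mark jobs that were still running at crash time as Error.

        Hypothetical schema: a 'jobs' table with a 'status' column.
        """
        con = sqlite3.connect(db_path)
        with con:  # one transaction: all interrupted jobs updated, or none
            cur = con.execute(
                "UPDATE jobs SET status = 'Error' WHERE status = 'Running'"
            )
        con.close()
        return cur.rowcount  # number of jobs interrupted by the crash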

Requirements

Both servers must have the same host name and IP address. If cluster
software is installed, this requirement should be ensured by the cluster
software. Otherwise it can be met by making sure that the two servers are
never up and running at the same time.


11.1 Installation

11.1.1 ServerView Operations Manager installation

Deployment Manager must have access to an Operations Manager installation on
the local server or on a remote server. The Deployment Manager package
installer only offers the ServerView Operations Manager Location window if
Full installation on first server has been selected in the High Availability
window. This window is not displayed if a Limited installation on second
server is installed.

The parameters given in the ServerView Operations Manager Location window
are added to the Deployment Manager database that is stored on a filer. In
the failover scenario, the second server therefore uses the same Operations
Manager location as the first server on which the Operations Manager
location was entered.

For the configuration of the Operations Manager location, see section
"Installing the Deployment Manager package" on page 68.

If both servers are members of a Windows cluster, there are three entries in
the Operations Manager database - one for the cluster (with the cluster
"virtual" IP address) and two for the servers. You should select the cluster
and not the currently active server when you add a new deployment server.
You may additionally add both servers to the list of deployment servers, but
such a deployment server might not be accessible, depending on which host is
the currently active one and has the ServerView Deployment Manager service
started.

Using two local Operations Manager installations

If a local Operations Manager installation is to be used, Operations Manager
must be installed on both servers, and the ServerView databases on both
servers should be kept identical.

Using one remote Operations Manager installation

In this case Operations Manager must be installed on one remote server only,
and the servers must only be added to the ServerView database once.



11.1.2 Deployment Manager installation

During the installation of the Deployment Manager package the High
Availability window is displayed:

Figure 129: High Availability window - Support High Availability

Support High Availability
    Select the option Yes.


Installation Order

Full installation on first server
    If you select this option, the installation is similar to a standard
    installation without high availability, but with the following
    differences:

    - Specify a location for the Deployment Manager database that is
      accessible to the second server. The default value offered is only
      appropriate for a standard installation without high availability
      and must be changed to a directory in a mounted LUN (Logical Unit
      Number) on a filer. A mapped network drive cannot be used.

    - The startup type of the ServerView Deployment Manager service is set
      to Manual and the service is not started automatically after
      installation. It is up to the cluster software to start the service
      on the system it chooses as the active one.

Limited installation on second server
    As a prerequisite, a Full installation on first server must already
    have been performed.

    - Make sure that the location of the Deployment Manager database
      points to an already existing database on a filer. If the database
      is not found by the installer, an error message is displayed and the
      installation will fail. The database is not created if this option
      is selected.

    - The startup type of the ServerView Deployment Manager service is set
      to Manual. It is up to the cluster software to start the service on
      the system it chooses as the active one.

11.1.3 Deployment Services installation

During the installation of the Deployment Services package the Destination
Folder window is displayed:



Figure 130: Destination Folder window - Support High Availability

Support High Availability
    Select the option Yes.


Installation order

Full installation on first server
    If you select this option, the installation is similar to a standard
    installation without high availability, but with the following
    differences:

    - Specify a location for the Deployment Service database that is
      accessible to the second server. The default value offered is only
      appropriate for a standard installation without high availability
      and must be changed to a directory in a mounted LUN on a filer. A
      mapped network drive cannot be used.

    - The startup type of the Deployment Service service is set to Manual
      and the service is not started automatically after installation. It
      is up to the cluster software to start the service on the system it
      chooses as the active one.

Limited installation on second server
    A Full installation on first server must already have been performed.

    - Make sure that the location of the Deployment Service database
      points to an already existing database on a filer. If the database
      is not found by the installer, an error message is displayed and the
      installation will fail. The database is not created if this option
      is selected.

    - The startup type of the Deployment Service service is set to Manual.
      It is up to the cluster software to start the service on the system
      it chooses as the active one.



11.2 Hints

11.2.1 User Accounts

During installation of the Deployment Manager and Deployment Services
packages, the user accounts for the ServerView Deployment Manager service
and the Deployment Service service must be specified. These user accounts
are used to access the databases on the filer. The following combinations
are recommended:

- During the installation the user specifies a local account with the same
  name and password on both servers.

- During the installation on both servers the user specifies the same
  domain account.

11.2.2 Repositories

Repositories should not be located on one of the two servers, but in a
directory in a mounted LUN on a filer or on a mapped network drive. When
adding a repository, a UNC path name that is stored in the shared Deployment
Manager database must be specified. Make sure that this UNC path is
reachable from both servers.
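
A quick way to verify that precondition is to try listing the UNC path from
each of the two servers, for example with a sketch like this (the share name
below is an example):

    import os

    def unc_reachable(unc_path: str) -> bool:
        """Return True if the UNC path can be listed from this server."""
        try:
            os.listdir(unc_path)
            return True
        except OSError:
            return False

    # Example repository path on a filer (hypothetical share name)
    print(unc_reachable(r"\\filer01\repositories"))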

11.2.3 Actions Needed When No Cluster Software is Available

Neither server has the ServerView Deployment Manager service or the
Deployment Service service started after installation. It is assumed that
cluster software starts the services on the server that it chooses as the
active one.

If there is no cluster software available, you must start the services on
one of the servers (the one that you choose as the active one).
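
For example, the services could be started via the Windows service control
manager, as in this sketch (the service names are placeholders; substitute
the actual names from your installation):

    import subprocess

    # Placeholder service names - look up the actual names in services.msc.
    SERVICES = ["ServerViewDeploymentManager", "DeploymentService"]

    for name in SERVICES:
        # 'sc start' asks the Windows service control manager to start
        # the named service; check=True raises an error if that fails.
        subprocess.run(["sc", "start", name], check=True)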


11.3 Failover scenarios

11.3.1 Deployment Manager and Deployment Services Packages are Installed on
the Same Server

A typical failover scenario is:

- The first server crashes (more precisely: the server that has been
  selected as the active one by the cluster software crashes - it might as
  well be the server installed as the second server); the ServerView
  Deployment Manager service and Deployment Service service are not
  stopped correctly.

- The cluster software starts the ServerView Deployment Manager service
  and Deployment Service service on the second server.

- The ServerView Deployment Manager service and Deployment Service service
  on the second server detect that a crash has occurred.

- The ServerView Deployment Manager service and Deployment Service service
  add event log entries.

- The Deployment Service service cancels all running jobs and marks the
  jobs with the status Error in the Deployment Service database.

- The ServerView Deployment Manager service tries to cancel all running
  jobs. This does not work (because the Deployment Service service already
  canceled them), so the ServerView Deployment Manager service marks the
  jobs with the status Error in the Deployment Manager database.

11.3.2 Deployment Manager and Deployment Services Packages are Installed on
Different Servers

In the following,

- first server Mgr/second server Mgr denote the servers where the
  Deployment Manager package is installed.

- first server Depl/second server Depl denote the servers where the
  Deployment Services package is installed.

The first server with the Deployment Manager package crashes

A typical failover scenario is:

- The first server Mgr crashes; the ServerView Deployment Manager service
  is not stopped correctly.

- The cluster software starts the ServerView Deployment Manager service on
  the second server Mgr.

- The ServerView Deployment Manager service on the second server Mgr
  detects that a crash has occurred.

- The ServerView Deployment Manager service adds an event log entry.

- The ServerView Deployment Manager service tries to cancel all running
  jobs. This should work (because the Deployment Service service did not
  crash) and the ServerView Deployment Manager service finally sets the
  status Canceled in the Deployment Manager database. If canceling the
  jobs does not work, the ServerView Deployment Manager service marks the
  jobs with the status Error in the Deployment Manager database.

The first server with the Deployment Services package crashes

A typical failover scenario is:

- The first server Depl crashes; the Deployment Service service is not
  stopped correctly.

- The cluster software starts the Deployment Service service on the second
  server Depl.

- The Deployment Service service on the second server Depl detects that a
  crash has occurred.

- The Deployment Service service adds an event log entry.

- The Deployment Service service cancels all running jobs and marks the
  jobs with the status Error in the database.


- The ServerView Deployment Manager service running on the first server
  Mgr finally retrieves the status from the Deployment Service service
  (reads status Error from the Deployment Service database) and sets the
  status Error in the Deployment Manager database.



12 Cloning deployment process

The following description of the cloning deployment process assumes that the
deployment platform Caldera DOS is used. Most of the description is valid
for the deployment platform WinPE MDP as well.

A cloning session consists of several phases and offers different
alternatives to be used for each phase:

1. Power control, i.e. remote PXE boot of the target server

   - Initiated by RemCtrl.dll (default for ServerView Suite)
   - Initiated by SNMP commands accessing the management blade or
     ServerView agents
   - Initiated by IPMI commands via Kalypso BMC
   - Initiated manually

2. System preparation

   - Via DOS agent at cloning (used by Haribote for server blades)
     Concerns unattended RAID preparation of the blade controller only.

   - Via Installation Manager based on WinPE PXE boot
     - SCU Server Management settings in SM-BIOS
     - Unattended RAID preparation for all PRIMERGY servers

   - Boot of system preparation images created by yourself (as used for
     MS ADS (Automated Deployment Services))
     All actions applicable on a DOS or MiniLinux platform boot are
     possible. You will need to obtain appropriate tools yourself to make
     the system preparations on the target server.

   - Manually by the administrator
     - via Installation Manager in expert mode locally
     - via BIOS extensions of the PCI controller in the server boot phase
     - via Remote Management remote console


3. Clone of an image

   All physical clients of a deployment group are cloned with the assigned
   image. Image-related actions can only be activated on logical groups.

   If a server was removed from one logical group and is to be moved to
   another logical group, the Status of Cloning reflects whether the newly
   assigned image has already been cloned or not. If not, and Deployment
   Manager is closed and restarted, the server will not be reassigned to
   the old group because of the new image reference, but will be
   identified as a cloned client of the new group.

4. Post-preparation phase

   - Individualization of a clone via
     - LAN parameters
     - host name
     - Microsoft Windows system ID: SID

   - Start of customer scripts
     - ServicePack/Quickfix update
     - Driver update
     - Install applications/services
     - Configuration of operating system or application, e.g. Windows
       scripting

These phases are described in more detail in the following sections.

The figure below shows an overview of the phases during a cloning process.



Figure 131: Deployment phases - overview

As an example, these are the basic deployment steps for a blade server
system:

1. The blade server chassis is powered on and the management blades start
   discovering the hardware environment (number of server blades, switch
   blades, and status of the redundant second management blade).

2. On request, Operations Manager searches the LAN to find all present
   management blades in one segment.

3. Deployment Manager retrieves the list of management blades found from
   Operations Manager and requests a list of system information directly
   from the management blade about each server blade installed in a blade
   chassis.

4. Deployment Manager offers the administrator a physical server list
   based on the management blade information to create logical server
   groups based on the system information received.

5. The administrator may change these logical groups by adding or changing
   logical-group-specific parameters, and finally he/she activates a
   deployment process on a logical group.

6. Deployment Manager creates a list of related server blades and their
   MAC addresses and generates a cloning job for the cloning engine.

7. The cloning engine identifies each physical server blade on the LAN if
   required. It prepares LAN switch configurations for PXE access and
   initiates a Unicast or Multicast cloning process on each client.

8. The cloning engine contacts the management blade via RemCtrl.dll to
   initiate a PXE boot of each related server blade.

9. The server blade PXE BIOS contacts the deployment PXE server to receive
   the first DOS boot image and starts that image containing the cloning
   agent.

10. The cloning agent prepares the server blade hardware, such as a RAID
    array, and downloads via TFTP a second master image directly onto the
    prepared storage volume, which is reachable via an Int13h BIOS call
    with the logical device ID 0.

11. The final clone is patched for individual operating system parameters
    such as IP address, host name and system ID (Windows only).

12. The cloning agent initiates a normal reboot of its current server
    blade.

13. The final operating system boots:

    a. In the case of Linux, individualization is done by activating the
       new LAN settings at run level 0.

    b. In the case of Windows, a RunOnce script is started to activate
       Microsoft system preparation and possibly a customer script for
       additional configuration steps.

With RemoteDeploy V3.0 this is done by controlling the post-preparation
phase using a temporarily installed agent.

12.1 Power control via remote control

The remote control API (RemCtrl.dll) is used to initiate requests to the
target server, especially for power control and detection of servers. The
service processors such as the management blade (SNMP) and Kalypso (IPMI)
are accessed via RemCtrl.dll.

For PRIMERGY servers which support none of these service APIs, a
corresponding dialog is displayed on the Deployment Manager front-end so
that the user can initiate tasks manually instead of remotely. The advantage
of this is that the program using RemCtrl.dll always uses the same API.

By using RemCtrl.dll, the ServerView Suite provides a common API for power
control of servers regardless of their managed server type. The different
types of access are managed inside RemCtrl.dll, but the user of the API must
provide the appropriate parameters used to enter each BMC API.

12.1.1 Management blade power control

The management blade requires the following parameters to access the power
control functions:

- Management type = MMB
- IP address or host name of the BMC (used by RemCtrl.dll)
- SlotID
- SNMP community string
- Timeout for accessing the BMC API
- Number of retries for accessing the BMC

For accessing the BMC by IP address or host name, Deployment Manager must be
installed. (The ServerView Deployment Manager service is used to determine
the MAC address via its Deployment Manager database.) If you use the MAC
address for the same function, the ServerView Deployment Manager service is
not necessary. If you use the MMB control, PXE boot mode can be initialized
and ACPI PowerOff is supported.
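
The parameter set listed above could be grouped as in the following sketch.
RemCtrl.dll itself is a native API; the Python structure and the default
values here are purely illustrative:

    from dataclasses import dataclass

    @dataclass
    class MmbPowerControlParams:
        """Parameters for management blade power control (illustrative)."""
        management_type: str = "MMB"
        bmc_address: str = ""          # IP address or host name of the BMC
        slot_id: int = 0               # slot of the server blade in the chassis
        snmp_community: str = "public" # example community string
        timeout_s: int = 30            # timeout for the BMC API (example value)
        retries: int = 3               # retries to access the BMC (example value)

    params = MmbPowerControlParams(bmc_address="192.0.2.10", slot_id=3)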

12.2 System preparation phase

The actual cloning process is independent of the server configuration. It
just requires a server to have the following attributes:

- The server is in PXE boot mode or can be booted in PXE mode remotely.

- A DOS image with a maximum size of 1.44 MB can be booted via PXE.

- One or more BIOS devices are available based on Int13h (which of them is
  used for cloning is selected by its logical device ID; the default is
  Log-ID 0).

- The deployment server is available over the whole cloning cycle via LAN.

A system preparation phase occurs immediately before the cloning to set up,
for example, a RAID array on the target server.

Deployment Manager provides the following alternatives:

- Manual preparation
- Automatic preparation as part of a DOS session
- Preparation based on WinPE
- Customer system preparation image



12.2.1 Manual preparation

Within Deployment Manager nothing is done for system preparation; Deployment
Manager continues directly with the generic cloning job. The administrator
must make sure that the attributes mentioned in section "System preparation
phase" on page 354 are implemented and that a logical Int13h device is
reachable via the BIOS on DOS.

A typical method of RAID configuration is to use the BIOS extensions offered
by each RAID controller using a key combination in the BIOS boot phase.

If remote PXE boot is not applicable, the BIOS boot device table must have
the PXE boot device configured statically as the first boot device. In this
mode, the PXE server can control whether a PXE boot will be performed or
whether a PXE timeout will occur and the boot process will continue with the
next boot device in the list (which should be the hard disk containing the
final cloning image).

Server Management BIOS settings or Server Management configurations usually
done by the SCU of Installation Manager must also be made manually (if
required for the operating system; this is not required for Deployment
Manager).

12.2.2 Automatic preparation as part of a DOS session

This is the standard preparation used for blade servers only, as provided by
the Haribote cloning engine.

The DOS session for cloning a blade server is usually split into two boot
phases in the case of a RAID configuration:

1. Booting the cloning agent on DOS via PXE.

2. Starting a vendor DOS tool for detection of storage devices and
   initiating the defined RAID configuration.

3. Initiating a second PXE reboot which boots the DOS cloning agent again.
   Now the system BIOS can support access to the new RAID array based on
   an Int13h device call. The cloning process will start on this prepared
   logical device.

4. A final normal reboot finalizes the cloning session by entering the
   post-cloning phase.

The RAID preparation is the only system preparation done by the Haribote
cloning engine in Deployment Manager. This method is not used for non-blade
servers. The ServerView Deployment Manager service differentiates between
blade and non-blade servers.

12.2.3 Preparation based on WinPE

For system preparation of all PRIMERGY servers, the preparation modules of
Installation Manager are used.

The ServerView Deployment Manager service calls the Installation Manager
Extension via an API as described in section "Software architecture" on
page 46.

The Installation Manager module stack for remote installation is extended by
a separate module management controlled by the Installation Manager
Extension and the Installation Manager manager. These modules can be used as
individual functions via the Installation Manager Extension API.

Deployment Manager uses the following functions:

- Boot server with WinPE by PXE

- Get system information of server
  This output contains all information on the server physics collected by
  the Installation Manager hardware detection module.

- Set/Get RAID configuration
  This allows complete unattended RAID configuration for all RAID
  controllers used by PRIMERGY servers.

If a bare server is to be entered into the server list, the Installation
Manager Extension API is used as follows:

- The Add Server dialog offers to collect additional server information.
- The ServerStart agent is started and WinPE is booted via PXE.
- GetSystemInfo is called to get the server parameters.
- The ServerStart agent is stopped and the server is powered off.
- A new entry is created in the ServerView server list with the retrieved
  parameters.

This allows the following cloning process:

Deployment Manager
    - define RAID configuration
    - define cloning parameters and deployment group

Installation Manager
    - start ServerStart agent and boot WinPE via PXE
    - configure RAID parameters
    - stop ServerStart agent and WinPE, release PXE service

Deployment Manager
    - enter cloning job via the DB API
    - prepare cloning job image
    - initialize PXE service for the DOS cloning agent boot image
    - set PXE mode via remote control
    - start PXE reboot via remote control
      (check whether the ServerView agent is available: if yes, initiate a
      reboot via the ServerView agent; if no, initiate a shutdown followed
      by a power-on)

SCW (System Cast Wizard)
    - boot PXE DOS image
    - perform cloning

12.2.4 Customer system preparation image

You can prepare your own system preparation phase if you want a particular
server type to be supported in a particular way (it need not be a PRIMERGY
server).

For that purpose a generic system-preparation image PXE boot method is
provided before starting the generic cloning phase.

The image must meet the following requirements:

- The size of the image is less than 1.44 MB.

- The image is capable of being booted from and used as a local floppy
  disk copy. This means that all the contents of a floppy disk that can be
  booted from the target server can also be used as a PXE image.

At the end of the system preparation process, a normal reboot is initiated.
If the boot device table in the system BIOS has a PXE-capable LAN card as
the first device, a PXE boot is automatically initiated instead. Any
operating system can be used for the system-preparation image. The PXE boot
works independently of the operating system.
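
The size requirement, and a common heuristic for bootability (the 0x55 0xAA
signature at the end of the first sector), can be checked mechanically
before registering an image. This is only a sketch; the real boot test still
has to be done on the target hardware:

    FLOPPY_144_BYTES = 1_474_560  # capacity of a 1.44 MB floppy image

    def check_prep_image(path: str) -> list:
        """Heuristic checks for a custom system-preparation PXE image."""
        problems = []
        with open(path, "rb") as f:
            data = f.read(FLOPPY_144_BYTES + 1)
        if len(data) > FLOPPY_144_BYTES:
            problems.append("image is larger than 1.44 MB")
        # A bootable floppy sector conventionally ends with 0x55 0xAA.
        if len(data) < 512 or data[510:512] != b"\x55\xaa":
            problems.append("no boot signature in the first sector")
        return problems

    print(check_prep_image("prep_image.img"))  # example file name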



Finally, to continue with a cloning process automatically, the following
process is implemented within Deployment Manager:

Deployment Manager frontend
    - define cloning parameters and target servers

Deployment Manager
    - request PXE boot image path

Installation Manager
    - start ServerStart agent on the server side without booting WinPE
    - boot the given system preparation image via PXE
    - check the bootstrap boot status and return when the PXE request from
      the selected server was received
    - stop ServerStart agent, release the PXE service only

Deployment Manager
    - finalize the cloning job definition: without system preparation, no
      PXE boot, just PXE server initialization
    - enter cloning job via the DB API
    - prepare cloning job image
    - initialize PXE service for the DOS cloning agent boot image
    - set PXE mode via remote control, but do not initiate a PXE boot; the
      reboot is initiated by the preparation image process

SCW (System Cast Wizard)
    - boot cloning agent DOS image via PXE
    - perform cloning


The critical path in this scenario is the timing. The time between booting
the system preparation image via PXE and the next PXE request from this
server, initiated by a reboot after finalizing the system preparation, must
be at least 30 seconds. If not, a PXE boot timeout will occur. This can be
resolved by simply rebooting this target server in PXE mode. This is
especially the case if more than four sessions are activated in the Haribote
cloning engine.

Within a 1 Gbit environment a higher session value is possible by changing a
special registry key. This value controls the number of parallel sessions
for Unicast cloning sessions as well.

You can start a Unicast cloning job of, for example, 10 servers in one job,
but only four servers are cloned in parallel at a time.

With Unicast mode each server is a session; with Multicast each job is a
session.

12.3 Supported storage devices

12.3.1 SCSI/IDE drives

SCSI and IDE drives are handled by the system BIOS by default as Int13h
devices and do not require special tools for modification.

If the SCSI/IDE drive is mapped as the logical Int13h device, cloning will
work without any system preparation. Manual mode should be used as the
system preparation method.

12.3.2 RAID devices

It is possible to create an image on a different RAID level than is used for
the subsequent cloning task. The RAID controller chip set must support a
BIOS configuration extension for the manual system preparation method. This
extension can be used on a reference server with a connected local console
or via Remote Management.

For blade servers only, the RAID functionality is prepared as part of the
cloning process on DOS via the Haribote cloning engine directly.

For non-blade servers, unattended RAID configuration is performed by the
Installation Manager Extension, which boots WinPE on the target system. With
this method, all PRIMERGY servers and their RAID controllers are
automatically supported as declared for the currently installed Installation
Manager release.



After creation of a RAID array, an additional reboot must usually be
initiated to activate the RAID array and make the new volume accessible to
the cloning agent via BIOS support.

This second reboot is again a PXE boot and reloads the same DOS image as
before.

12.3.3 FC and iSCSI Devices

Since RemoteDeploy V3.0, generic storage devices are supported which are
visible as Int13h DOS devices and are detected by the system BIOS.

Within the image creation or deployment configuration session of the
Deployment Manager Web user interface, you can define the logical ID of such
an Int13h DOS device. This is necessary if more than one storage adapter has
enabled Int13h BIOS support, or one adapter provides more than one bootable
volume. FC and iSCSI adapters basically behave in the same way as SCSI or
RAID controllers in detecting and creating a boot device list of their
attached storage devices.

Once this storage device list is visible on DOS as an Int13h device (the
adapter BIOS must be enabled for the last boot), Deployment Manager can
perform image creation and mass cloning or mass remote installation on/from
such devices generically.

Based on this method, it is possible to clone on nearly all storage devices
found as DOS Int13h devices.

Especially for FC and iSCSI device usage, the default UNDI LAN adapter no
longer works for Broadcom NICs and must be replaced by the NDIS DOS driver,
which is also provided with the Deployment Manager software. How to do this
is described in section "Adding NDIS Driver to Deployment Manager" on page
363.

12.3.4 Partitioning and File System Formatting

Partitioning is required to store the operating system master image in a
partition. The sector algorithm used needs this partition orientation to
decompress the used sectors to the right place.

If an image is created in raw mode, all format information is stored inside
the image as part of the data packages. No pre-formatting is required for
cloning.

If an image was created with file system optimization, the target volume
will be pre-formatted to offer data block organization on the storage volume
to put the used data sectors in the right place. This method requires a much
smaller image size than an image created in raw mode. The cloning images are
created on a partition basis even if this is offered on a disk basis in the
Deployment Manager GUI.

The following file system formats are supported, depending on the operating
system used:

File system                              DOS    Windows    Linux    ESX
FAT 16                                    P
FAT 32                                            P
NTFS4                                             P
NTFS5                                             P
NTFS5+                                            P
Ext2                                                         P        P
Ext3                                                         P        P
Ext4                                                         P
Reiser file system (v3.5/3.6 on                              P 1)
SuSE SLES 9/10)

1) A specific package must be installed on the reference system before image
creation if Reiser is used as the root file system. This package is provided
with the Deployment Manager software in the directory
LINUX_support\ReiserFS_Cloning_Support.

Basically the file system analysis is done independently of the operating
system used. But with regard to the operating system types supported for
individualization, there are limits imposed by the operating system (e.g.
W2k3 does not support FAT16).

Unknown or unsupported file systems can always be cloned in raw mode.


12.3.5 Multi-boot Operating System Partitioning

A hard disk with multiple partitions containing different operating system
instances can be used if you install a boot loader in the master boot record
(MBR) of this hard disk.

For non-raw-mode image cloning, the MBR is created by the cloning agent;
therefore no multi-boot loader is installed and supported.

This kind of installation can be cloned in raw mode only, but without
individualization after cloning.

12.3.6 Adding NDIS Driver to Deployment Manager

With RemoteDeploy V3.0, NDIS and UNDI LAN protocol drivers are used by the
DOS clone agent.

The following types of NIC vendor are currently supported:

- Intel pro1000 family by NDIS driver on all LAN ports (up to 4)

- Other (such as Broadcom)
  - by Broadcom NDIS family driver for all known PRIMERGY servers released
    at Deployment Manager release time; supports all LAN ports (up to 4)
  - by generic Intel UNDI driver on all LAN ports (up to 4)
  - by customer NDIS driver (by default, only the first LAN port is
    supported)

To add an additional NDIS driver or to switch Broadcom to NDIS, the
following changes are required in the Deployment Manager installation
folder:

1. Copy all driver files (typically *.DOS only) to ...\Program
   Files\Fujitsu\ServerView Suite\DeploymentService\tftp\agent\dos\boot on
   the deployment server.

2. Modify the existing DetNic.inf file by adding a new entry with the PCI
   vendor and device ID in the same format as the existing entries.

3. Completely reboot the deployment server once.
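
Steps 1 and 2 could look like the following sketch. The target path is taken
from step 1; the source directory is an example, and the DetNic.inf edit
itself is left manual here, because the new entry must mirror the format of
the existing entries in your file:

    import glob
    import shutil

    BOOT_DIR = (r"C:\Program Files\Fujitsu\ServerView Suite"
                r"\DeploymentService\tftp\agent\dos\boot")

    # Step 1: copy all driver files (typically *.DOS) into the boot directory.
    for driver in glob.glob(r"D:\ndis_driver\*.DOS"):  # source is an example
        shutil.copy(driver, BOOT_DIR)

    # Step 2: DetNic.inf needs a new entry with your NIC's PCI vendor and
    # device ID, in the same format as the existing entries. Print the file
    # so the existing format can be copied; the edit itself stays manual.
    with open(BOOT_DIR + r"\DetNic.inf") as inf:
        print(inf.read())

    # Step 3: reboot the deployment server once so the change takes effect.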


For Broadcom NICs, a sample is provided with the Deployment Manager software
in the directory setup\tftp\agent\dos\boot\AdditionalDrivers.

For Broadcom NICs, two NDIS drivers are already configured. These NDIS
drivers are stored in the B57.dos and bxnd20x.dos files in the ...\Program
Files\Fujitsu\ServerView Suite\DeploymentService\tftp\agent\dos\boot
directory on the deployment server. Modifying these files is not
recommended.

Bear in mind that, typically, NDIS drivers support only the first LAN port
by default. You can change this by creating a protocol.ini file with
vendor-specific settings.

Please check your NDIS driver documentation for details of LAN port
selection, assignment and bindings.

12.4 Image creation

The reference image is generated on a reference server with certain static
machine-related parameters:

- IP address
- Host name
- Windows SecureID

These parameters can be changed during the cloning process to adapt to each
target client.

The ServerView Deployment Manager service delivers the final values for each
client inside the cloning job description based on the deployment table
settings. The image must be modified directly after it is cloned to the
target client storage volume.

In figure "Deployment phases - overview" in section "Cloning deployment
process" on page 349, the flow chart of a cloning process describes this
procedure in block "C" for a Windows cloning. The file clcomp.dat, created
before image creation, is now patched with the final individualization
parameters, and the next RunOnce process to be started will use these
parameters in a system preparation session.



For Linux cloning, the parameters are stored directly in the original
operating system configuration files and are directly valid at the next
boot. Therefore no post-cloning phase is required as for Windows.

The procedure is always the same, regardless of whether Unicast or Multicast
is used.

Additional dynamic parameters per client are:

- Administrator account
- Administrator password

These parameters are available in the deployment table.

12.5 Tag file handling

A tag file can be used to detect whether a running system is still being
modified by the deployment process. The tag file is stored in a specific
directory. This directory might also be used to store log files. The names
and the handling will be part of the official specification and will also be
communicated to customers.

The tag file can be used by third-party software (applications or services,
for example) to detect whether a cloned system is still being modified by
the deployment process.

12.5.1 Directory and tag file

Operating System    Directory                         Tag File
Windows             %Systemdrive%\DeploymentInfo\     DeploymentMode
Linux               /etc                              .DeploymentMode

Deployment Manager log files on the target Linux system are stored in the
/var/log/remotedeploy directory.
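
Third-party software could poll for the tag file as in this minimal sketch,
using the directories and file names from the table above:

    import os
    import platform

    def deployment_in_progress() -> bool:
        """True while the deployment process is still modifying this system."""
        if platform.system() == "Windows":
            tag = os.path.join(os.environ["SystemDrive"] + os.sep,
                               "DeploymentInfo", "DeploymentMode")
        else:  # Linux
            tag = "/etc/.DeploymentMode"
        return os.path.exists(tag)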


12.5.2 Creating a tag file on Windows systems

The tag file will be created during image creation at the beginning of the
preparation of the reference system. If the %Systemdrive%\DeploymentInfo
directory does not exist, it will be created. If the directory exists, any
existing log files that are known as Haribote log files will be removed;
other log files are not removed. If the DeploymentMode file already exists,
it is simply retained; this does not cause an error.

12.5.3 Creating a tag file on Linux systems

The tag file will be created at the end of the image cloning while the DOS
cloning agent is running. The DOS cloning agent creates the tag file in the
/etc directory. If the tag file already exists, this does not cause a
problem.

12.5.4 Removing the tag file

The tag file will be removed at the end of the Haribote post-preparation. If
a customer script was specified for a cloning job, the tag file will be
removed just before the customer script is executed. If an error occurs
during the post-preparation, the tag file will not be removed.

12.6 PXE protocol

The following description is based on the Intel PXE specification from 1999,
release 2.1.

Basically, a DHCP server is a must for using PXE boot functionality in the
current LAN segment. (A "DHCP proxy helper" functionality can be enabled in
the router and works as well.)

In principle the PXE protocol works as follows:

1. The deployment server initiates a power-on of a particular server.

2. Assuming the BIOS parameters are set to PXE boot, after power-on the
   BIOS ignores all alternative existing boot devices and starts the PXE
   LAN boot extension.

3. The PXE BIOS issues a DHCP request to a DHCP server via broadcast.

4. The DHCP server offers an IP address and additional LAN parameters such
   as the boot (deployment) server IP address (if it is configured by the
   administrator).

5. A boot service broadcast is initiated:

   - If the boot server IP address is provided, the PXE BIOS can contact
     the boot server directly anywhere in the LAN (performing a check on
     port 4011).

   - If the boot server IP address is not provided, a broadcast on port 67
     is sent which has to bypass a switch (e.g. with virtual LAN software)
     or hub, depending on where the server is connected to the LAN.

6. The boot server offers a packet with the name of the boot image.

7. The PXE BIOS initiates an MTFTP or TFTP transfer to receive the boot
   image file.

8. The boot image is copied to address 07c0h and started in floppy
   emulation mode.

This first boot image (normally based on DOS), with a maximum size of 2.88
MB (Deployment Manager uses only 1.44 MB), is used to prepare the storage
devices so that the second operating system image can be stored from the
deployment server onto the target drive.

A RAID array may have to be configured beforehand, and an additional reboot
may have to be initiated, again using PXE, to activate the created RAID
array (depending on the RAID controller type).

A second PXE boot can be avoided if an already initialized RAID array with
the right RAID level is detected.

For security purposes, the cloning image must be decompressed and decrypted.
Decompression also means that only used hard disk sectors are transferred
and must be placed at the right position on the drive. Decryption is used to
ensure that the image was not sent by an intruder.
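
Steps 7 and 8 of the protocol rely on plain TFTP (RFC 1350). The following
sketch shows what a minimal TFTP read request looks like; it is purely
illustrative, since the real transfer is performed by the PXE BIOS, and
MTFTP adds multicast on top:

    import socket
    import struct

    def tftp_read(server: str, filename: str, port: int = 69) -> bytes:
        """Fetch a file via a TFTP read request (RRQ), octet mode."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(5.0)
        # RRQ packet: opcode 1, filename, NUL, mode, NUL
        sock.sendto(b"\x00\x01" + filename.encode() + b"\x00octet\x00",
                    (server, port))
        data, last_block = b"", 0
        while True:
            packet, addr = sock.recvfrom(4 + 512)
            opcode, block = struct.unpack("!HH", packet[:4])
            if opcode != 3:                      # 3 = DATA
                raise IOError("unexpected TFTP packet")
            if block == last_block + 1:          # next expected block
                data += packet[4:]
                last_block = block
            sock.sendto(struct.pack("!HH", 4, block), addr)  # 4 = ACK
            if len(packet) - 4 < 512:            # short block ends transfer
                return data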


12.7 LAN traffic and deployment methods

Because operating system images can be very large (up to several GB),
different deployment methods are used to preserve bandwidth on the network
for other processes.

12.7.1 Unicast and Multicast

With a Unicast IP transfer, only one target system can receive a packet at
any one time. The packet header includes the MAC address of only one
recipient. If individual data is to be distributed per recipient, this is
the preferred method.

The first DOS boot image containing the dynamic parameters of each client
(IP address, host name) is distributed in this way.

The disadvantage is that if, for example, 100 clients are to be cloned, this
DOS image must be created 100 times, each with its individual parameters for
cloning its target client.

If a RAID1 is created, and a second reboot with the same DOS image is
required to activate the RAID array, these 100 DOS images must be
transferred again, each with a size of up to 1.44 MB. This takes a few
minutes to prepare.

Multicast is the typical method for cloning many servers over the LAN using
the same image each time.

The server sends only one image, packet by packet, over the LAN to the
clients using Multicast IP packets.

The clients are assigned via an IP Multicast address. Each client assigned
via such a temporary address catches this kind of broadcast and stores the
operating system image in its memory or on disk, concatenating the user
data.

Since each Multicast client receives the same image, dynamic parameters such
as IP address, host name and SecureID are modified on the client side.

The multicast protocol used by <strong>Deployment</strong> <strong>Manager</strong> uses an acknowledgement<br />

package for each transferred data package from each listening<br />

368 <strong>Deployment</strong> <strong>Manager</strong>


client. The slowest one controls the speed of the process. If more than two<br />

retries to a certain client are necessary for a data package, this client will be<br />

phased out and the protocol continues without it. That means, if one of the target<br />

servers is connected by a 10 Mbit connection, the whole Multicast protocol<br />

runs at 10 Mbit/s only, irrespective of which speed is possible with the<br />

other members of that Multicast group. With Unicast it is different on account<br />

of a peer-to-peer connection between each server and client.<br />

With Unicast mode each server is a session; with Multicast each job is a session.<br />
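The following minimal Python sketch illustrates the acknowledged multicast scheme described above. The function names, callbacks and the retry limit are illustrative assumptions drawn from this description, not the actual Deployment Manager implementation:

    MAX_RETRIES = 2    # more than two retries -> client is phased out

    def send_image(packets, clients, multicast_send, wait_for_acks):
        active = set(clients)
        for seq, payload in enumerate(packets):
            pending, retries = set(active), 0
            while pending:
                multicast_send(seq, payload)            # one packet to the whole group
                pending -= wait_for_acks(seq, pending)  # slowest client paces the job
                if pending:
                    retries += 1
                    if retries > MAX_RETRIES:
                        active -= pending               # continue without stragglers
                        pending = set()
        return active    # clients that received the complete image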

12.7.2 Switch, Hub and Bridge Configuration

The PXE protocol searches for the PXE server and the DHCP server via broadcast on port 67. If these servers are placed behind bridges, hubs or switches with activated virtual LAN software, these devices must be configured port by port to forward these broadcasts (see the ip helper-address configuration in the example in section 13.5). Such devices usually provide configuration interfaces, accessible via vendor-specific configuration tools or via a programming interface.



13 Appendix - network techniques

13.1 MAC address handling

The MAC (Media Access Control) address is a hardware address that uniquely identifies each node of a network. If a server is already being managed by Operations Manager, the MAC address is automatically read from the ServerView Operations Manager database.

There are several ways to find out the MAC address of a server:

- You can unplug the server and read off the imprinted MAC address from the MAC/iSCSI address label on the rear side of the server.
- You can start the server and read off the MAC address on the information screen of the BIOS.

There are several ways to find out the MAC address of a server blade:

- If you have already configured access via the management blade, you can use the management blade's Web interface (http://<management blade address>; default user: root; default password: root).
- You can also unplug the server blade and read off the imprinted MAC address from the MAC/iSCSI address label of the server blade.
- You can start the server blade and read off the MAC address on the information screen of the BIOS.

13.2 PXE

PXE is short for Pre-Boot Execution Environment.

PXE is a boot mode of the LAN adapter. It does not become active until the system BIOS activates the LAN adapter as boot device during system boot and jumps to it. For this to happen, the LAN device must be set to the highest priority in the system BIOS boot device table. No jumpers are required for this.


PXE is a mandatory element of the WfM (Wired for Management) specification. To be considered compliant, PXE must be supported by both the computer's BIOS and its NIC.

PXE Boot Algorithm

2-3 The boot process on the client side starts with a PXE broadcast to the DHCP server to receive a temporary IP address (mandatory).

4-5 A similar broadcast discovers the PXE boot server on port 67 or 4011 (depending on the information issued by the DHCP server), requesting a boot image name.

6-8 If the required information has been provided, a TFTP session is started to receive the boot image from the PXE server. The image size must not exceed 1.44 MB (floppy disk emulation mode). This boot image is copied to memory address 07C0h and started by the BIOS.

9 As long as neither the operating system kernel nor the kernel drivers have been started, any LAN access is performed using the PXE BIOS for further TFTP sessions.
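Steps 6-8 rely on TFTP, a deliberately simple UDP protocol. The following minimal Python sketch builds a standard TFTP read request as defined in RFC 1350 and fetches the first data block; the server address and file name are illustrative assumptions, not Deployment Manager defaults:

    import socket

    def tftp_read_request(server, filename, mode=b"octet"):
        # RRQ packet: opcode 1, file name, 0 byte, transfer mode, 0 byte
        pkt = b"\x00\x01" + filename + b"\x00" + mode + b"\x00"
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(5)
        s.sendto(pkt, (server, 69))       # TFTP servers listen on UDP port 69
        data, _ = s.recvfrom(516)         # first DATA block: 4-byte header + up to 512 bytes
        return data

    # Usage (hypothetical deployment server and boot image name):
    # block = tftp_read_request("192.168.1.10", b"bootimg.bin")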



13.3 DHCP

DHCP (Dynamic Host Configuration Protocol) is a protocol for assigning dynamic IP addresses to devices in a network. With dynamic addressing, a device can have a different IP address every time it connects to the network. In some systems, the device's IP address can even change while it is still connected. DHCP also supports a mix of static (reserved) and dynamic IP addresses.
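For illustration, a DHCPDISCOVER message as defined in RFC 2131 can be built in a few lines. The following Python sketch stops short of sending the packet (which requires binding UDP port 68); the MAC address is an illustrative assumption:

    import os, struct

    def dhcp_discover(mac):
        xid = os.urandom(4)                          # random transaction ID
        pkt = struct.pack("!BBBB", 1, 1, 6, 0)       # op=BOOTREQUEST, Ethernet, hlen=6, hops=0
        pkt += xid + struct.pack("!HH", 0, 0x8000)   # secs=0, broadcast flag set
        pkt += b"\x00" * 16                          # ciaddr/yiaddr/siaddr/giaddr all empty
        pkt += mac + b"\x00" * 10 + b"\x00" * 192    # chaddr (padded to 16), sname, file
        pkt += b"\x63\x82\x53\x63"                   # DHCP magic cookie
        pkt += b"\x35\x01\x01\xff"                   # option 53 = DISCOVER, then end option
        return pkt

    # pkt = dhcp_discover(b"\x00\x19\x99\xaa\xbb\xcc")   # would be sent to 255.255.255.255:67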

Further information on DHCP can be obtained at the following internet addresses:

- Red Hat Linux 9: Red Hat Linux Customization Guide
  http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/customguide/ch-dhcp.html
- DHCP.org - Resources for DHCP (contains further links)
  http://www.dhcp.org/
- ntfaq.com - Frequently asked questions concerning DHCP
  http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=774
- DHCP Handbook
  http://www.dhcp-handbook.com/dhcp_faq.html

13.4 VLAN (Virtual Local Area Network)

The following VLAN description is based on blade servers.

A virtual LAN is a network of computers that behave as if they were connected to the same wire even though they may actually be physically located on different segments of a LAN. VLANs are configured through software rather than hardware, which makes them very flexible. One of the biggest advantages of VLANs is that when a computer is physically moved to another location, it can stay on the same VLAN without any hardware reconfiguration.


VLAN Configuration

The PXE client running on the system does not support VLANs and sends only untagged frames.

VLAN requirements

- The external switch in the LAN must support VLANs.
- The driver for the server network connectors (NICs) must allow integration into several VLANs.
- The image from the PXE server must already contain a driver which is pre-configured for VLANs.

Example of a VLAN Configuration

- VLAN IDs must be assigned for all segments on the segment switch, except for the deploy segment.
  - I.e. data traffic toward the deploy segment is transmitted untagged.
  - If the PXE client sends an untagged frame, the switch blade forwards it to all three source ports. The segment switch then transfers the untagged frame only to the untagged port (connected to the deploy segment).
  - After the operating system has been booted on the server blade, it identifies several separate segments due to the VLAN configuration.
- All ports in the switch blade must recognize all VLAN IDs and must at the same time be configured for untagged frames. Even though the switch blade does not perform a segmentation, without VLAN configuration it would discard all frames that carry a VLAN tag.

Summary

- PXE service data is only forwarded to the deploy segment.
- The VLAN configuration provides the splitting of segments after the boot process.
- Data exchange among the segments is only possible via a router.
- Full redundancy can be configured for both network controllers.
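Tagged and untagged frames differ only by a four-byte 802.1Q tag inserted into the Ethernet header, which is why a PXE client that sends only untagged frames depends on an untagged port. The following minimal Python sketch shows where the tag sits; addresses and payload are illustrative assumptions:

    import struct

    def tag_frame(dst, src, vlan_id, ethertype, payload):
        # 802.1Q tag = TPID 0x8100 + TCI (priority 0, VLAN ID in the low 12 bits)
        tag = struct.pack("!HH", 0x8100, vlan_id & 0x0FFF)
        return dst + src + tag + struct.pack("!H", ethertype) + payload

    dst, src = b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01"    # broadcast, locally administered MAC
    untagged = dst + src + struct.pack("!H", 0x0800) + b"payload"
    tagged = tag_frame(dst, src, 10, 0x0800, b"payload")   # same frame, member of VLAN 10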

13.5 Example: Deployment over VLAN

The following example shows the deployment configuration of a BX300 blade server (containing one switch blade and four server blades) over VLAN.

Configuration / Preparation

Configuration:

Make sure that an image is installed on all server blades.

Component           IP address        Netmask
Client              30.90.1.40        255.0.0.0
Cisco switch        30.90.1.30        255.0.0.0
Switch blade 1      30.90.1.21        255.0.0.0
Switch blade 2      removed           255.0.0.0
Server blade 1      DHCP: 10.0.0.0    255.0.0.0
Server blade 2      DHCP: 10.0.0.0    255.0.0.0
Server blade 3      DHCP: 20.0.0.0    255.0.0.0
Server blade 4      DHCP: 20.0.0.0    255.0.0.0
Management blade    30.90.1.20        255.0.0.0

Connect the components, see the figure above.



Deploying images into different VLANs

Requirements:

- Two VLANs which are not connected to each other. An image is to be deployed into these VLANs from a third VLAN which is able to communicate with the other two.
- For the whole network you must have one deployment server, one DHCP server and one WINS server running.

Perform the following steps:

1. Creating VLANs on the switch blades
   a. Assign ports 1, 2 and 13 to VLAN 10 untagged and remove them from VLAN 1 or any other VLANs.
   b. Set the PVID for ports 1 and 2 to 10 only.
   c. Assign ports 3, 4 and 13 to VLAN 20 untagged and remove them from VLAN 1 or any other VLANs. Leave port 13 in VLAN 10 as well.
   d. Set the PVID for ports 3 and 4 to 20 only.
   e. Assign port 13 to VLAN 30 untagged.
   f. Create an 802.1Q trunk on port 13 and connect it to Cisco port 2. (Set the frame type to tagged in the egress rules (VLAN port configuration).)
   g. Assign VLAN 30 the IP address 30.90.1.21 for switch management (via the CLI).

2. Creating VLANs on the Cisco switch
   a. Create an 802.1Q trunk on port 2, which is connected to the switch blade.
   b. Assign port 1, which is connected to the client, to VLAN 30 untagged.
   c. Assign VLAN 30 the IP address 30.90.1.30 for switch management.


3. Configuring the client
   a. Install a WINS server on the client and configure it.
   b. Under Control Panel - Add/Remove Programs - MS Components - Network Services, add WINS. No further configuration is necessary for WINS.
   c. Configure the DHCP server so that there is a range for each subnet:

      Subnet     Range               WINS (044)    Gateway (003)
      VLAN 10    10.90.1.1 to .10    30.90.1.40    10.90.1.30
      VLAN 20    20.90.1.1 to .10    30.90.1.40    20.90.1.30
      VLAN 30    30.90.1.1 to .10    30.90.1.40    30.90.1.30

   d. Set up the MMB with a new IP address (30.x.x.x).



   e. Set Cisco ports 3 and 4 to static access VLAN 30 and 10 Mb/s full or half duplex (depending on the MMB setting).
   f. Try to ping the server blades in VLAN 10 and VLAN 20 from the client. Also ping switch blade 1, the management blade and the Cisco switch. What IP address do they have? What is the result?

      Component           IP address    Result of ping
      Server blade 1
      Server blade 2
      Server blade 3
      Server blade 4
      Switch blade
      Management blade
      Cisco switch

   g. Is your result okay? Why?

4. Configuring routing on the Cisco switch (part 1)
   a. Create Vlan 10 and Vlan 20 under Vlan - Vlan - Configure Vlans.
   b. Assign each VLAN an IP address under Administration - IP Addresses.
   c. Remove the IP address for Vlan1.
   d. Click Apply.
   e. Enable IP routing under Device - IP-Routing - Protocols. For the 3550, set the check mark to enable IP routing.
   f. Click Apply. For the 3750, select enable/disable under Device - IP-Routing - Protocols.
   g. Try to ping the server blades in VLAN 10 and VLAN 20 from the client. Also ping switch blade 1, the management blade and the Cisco switch. What IP address do they have? What is the result?

      Component           IP address    Result of ping
      Server blade 1
      Server blade 2
      Server blade 3
      Server blade 4
      Switch blade
      Management blade
      Cisco switch

   h. Is your result okay? Why?

   i. Enable the IP-Helper function.
      You must configure a relay device when a switch sends broadcast packets that need to be responded to by a host on a different LAN. Examples of broadcast packets that the switch might send are DHCP, DNS and, in some cases, TFTP packets. You must configure this relay device to forward received broadcast packets on an interface to the destination host. If the relay device is a Cisco router, enable IP routing (ip routing global configuration command) and configure helper addresses using the ip helper-address interface configuration command.

      Cisco CLI:

      en
      conf t
      int vlan 10
      ! 30.90.1.40 = the machine running the deployment, DHCP and WINS servers
      ip helper-address 30.90.1.40

      Do the same for VLAN 20 with the same address.

   j. Enable forwarding for the UDP ports:

      Cisco CLI (global configuration):

      en
      conf t
      ip forward-protocol udp (portnumber)

      Do this for the ports bootps 67, bootpc 68 and TFTP 69.

5. IP address for server blades
   a. Execute ipconfig /release and ipconfig /renew in the DOS window on each server blade.
   b. Then execute ipconfig /all and fill out the following lines:

      Component         IP address    Gateway    WINS    Ping
      Server blade 1
      Server blade 2
      Server blade 3
      Server blade 4

   c. Try to ping the server blades in VLAN 10 and VLAN 20 from the client. Fill out the Ping column above.
   d. Try to ping from a server blade in VLAN 10 to the server blade in VLAN 20.
      What is the result? ___________________________________

6. Configuring routing on the Cisco switch (part 2)
   In order to define secure routing, you must create an access list:
   a. Create an access list:

      conf t
      ! Block traffic between VLAN 10 and VLAN 20 in both directions,
      ! but permit everything else (e.g. traffic to and from VLAN 30).
      access-list 100 deny ip 10.0.0.0 0.255.255.255 20.0.0.0 0.255.255.255
      access-list 100 permit ip any any
      access-list 110 deny ip 20.0.0.0 0.255.255.255 10.0.0.0 0.255.255.255
      access-list 110 permit ip any any
      interface VLAN10
      ip access-group 100 in
      interface VLAN20
      ip access-group 110 in

   b. Try to ping from a server blade in VLAN 10 to the server blade in VLAN 20.
      What is the result? ___________________________________
   c. Try to ping the server blades in VLAN 10 and VLAN 20 from the client.
      What is the result? ___________________________________

   d. Enable multicast routing:

      en
      Password: xxx
      cisco1#conf t
      cisco1(config)#ip multicast-routing distributed    (on Cisco 3750)
      cisco1(config)#ip multicast-routing                (on Cisco 3550)

      Enable PIM dense mode on each VLAN interface:

      cisco1(config)#int vlan 10
      cisco1(config-if)#ip pim dense-mode
      cisco1(config-if)#exit
      cisco1(config)#int vlan 20
      cisco1(config-if)#ip pim dense-mode
      cisco1(config-if)#exit
      cisco1(config)#int vlan 30
      cisco1(config-if)#ip pim dense-mode
      cisco1(config-if)#exit

   e. Check IGMP snooping (it should be enabled by default):

      cisco1#sh ip igmp snooping
      vlan 1
      ----------
      IGMP snooping is globally enabled
      IGMP snooping is enabled on this Vlan
      IGMP snooping immediate-leave is disabled on this Vlan
      IGMP snooping mrouter learn mode is pim-dvmrp on this Vlan
      IGMP snooping is running in IGMP_ONLY mode on this Vlan
      vlan 10
      ----------
      IGMP snooping is globally enabled
      IGMP snooping is enabled on this Vlan
      IGMP snooping immediate-leave is disabled on this Vlan
      IGMP snooping mrouter learn mode is pim-dvmrp on this Vlan
      IGMP snooping is running in IGMP_ONLY mode on this Vlan
      vlan 20
      ----------
      IGMP snooping is globally enabled
      IGMP snooping is enabled on this Vlan
      IGMP snooping immediate-leave is disabled on this Vlan
      IGMP snooping mrouter learn mode is pim-dvmrp on this Vlan
      IGMP snooping is running in IGMP_ONLY mode on this Vlan
      vlan 30
      ----------
      IGMP snooping is globally enabled
      IGMP snooping is enabled on this Vlan
      IGMP snooping immediate-leave is disabled on this Vlan
      IGMP snooping mrouter learn mode is pim-dvmrp on this Vlan
      IGMP snooping is running in IGMP_ONLY mode on this Vlan

Deploying

Now the entire configuration is ready for deploying:

1. First try to save the image from server blade 1.
2. Once the save process is complete, try to deploy this new image on server blades 2 - 4.
