OFFICIAL MICROSOFT LEARNING PRODUCT
10775A
Administering Microsoft® SQL Server® 2012 Database
Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

The names of manufacturers, products, or URLs are provided for informational purposes only, and Microsoft makes no representations or warranties, either expressed, implied, or statutory, regarding these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links may be provided to third-party sites. Such sites are not under the control of Microsoft, and Microsoft is not responsible for the contents of any linked site, any link contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission received from any linked site. Microsoft is providing these links to you only as a convenience, and the inclusion of any link does not imply endorsement by Microsoft of the site or the products contained therein.

© 2012 Microsoft Corporation. All rights reserved.

Microsoft and the trademarks listed at http://www.microsoft.com/about/legal/en/us/IntellectualProperty/Trademarks/EN-US.aspx are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.

Product Number: 10775A
Part Number: X18-29125
Released: 05/2012
MICROSOFT LICENSE TERMS
OFFICIAL MICROSOFT LEARNING PRODUCTS
MICROSOFT OFFICIAL COURSE Pre-Release and Final Release Versions

These license terms are an agreement between Microsoft Corporation and you. Please read them. They apply to the Licensed Content named above, which includes the media on which you received it, if any. These license terms also apply to any updates, supplements, internet-based services and support services for the Licensed Content, unless other terms accompany those items. If so, those terms apply.

BY DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS. IF YOU DO NOT ACCEPT THEM, DO NOT DOWNLOAD OR USE THE LICENSED CONTENT.

If you comply with these license terms, you have the rights below.
1. DEFINITIONS.

a. “Authorized Learning Center” means a Microsoft Learning Competency Member, Microsoft IT Academy Program Member, or such other entity as Microsoft may designate from time to time.

b. “Authorized Training Session” means the Microsoft-authorized instructor-led training class using only MOC Courses that is conducted by an MCT at or through an Authorized Learning Center.

c. “Classroom Device” means one (1) dedicated, secure computer that you own or control, that meets or exceeds the hardware level specified for the particular MOC Course, located at your training facilities or primary business location.

d. “End User” means an individual who is (i) duly enrolled in an Authorized Training Session or Private Training Session, (ii) an employee of an MPN Member, or (iii) a Microsoft full-time employee.

e. “Licensed Content” means the MOC Course and any other content accompanying this agreement. Licensed Content may include (i) Trainer Content, (ii) sample code, and (iii) associated media.

f. “Microsoft Certified Trainer” or “MCT” means an individual who (i) is engaged to teach a training session to End Users on behalf of an Authorized Learning Center or MPN Member, (ii) is currently certified as a Microsoft Certified Trainer under the Microsoft Certification Program, and (iii) holds a Microsoft Certification in the technology that is the subject of the training session.

g. “Microsoft IT Academy Member” means a current, active member of the Microsoft IT Academy Program.

h. “Microsoft Learning Competency Member” means a Microsoft Partner Network Program Member in good standing that currently holds the Learning Competency status.

i. “Microsoft Official Course” or “MOC Course” means the Official Microsoft Learning Product instructor-led courseware that educates IT professionals or developers on Microsoft technologies.

j. “Microsoft Partner Network Member” or “MPN Member” means a silver- or gold-level Microsoft Partner Network program member in good standing.

k. “Personal Device” means one (1) device, workstation or other digital electronic device that you personally own or control and that meets or exceeds the hardware level specified for the particular MOC Course.

l. “Private Training Session” means the instructor-led training classes provided by MPN Members for corporate customers to teach a predefined learning objective. These classes are not advertised or promoted to the general public, and class attendance is restricted to individuals employed by or contracted by the corporate customer.

m. “Trainer Content” means the trainer version of the MOC Course and additional content designated solely for trainers to use to teach a training session using a MOC Course. Trainer Content may include Microsoft PowerPoint presentations, instructor notes, a lab setup guide, demonstration guides, a beta feedback form and a trainer preparation guide for the MOC Course. To clarify, Trainer Content does not include virtual hard disks or virtual machines.
2. INSTALLATION AND USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one-copy-per-user basis, such that you must acquire a license for each individual that accesses or uses the Licensed Content.

2.1 Below are four separate sets of installation and use rights. Only one set of rights applies to you.

a. If you are an Authorized Learning Center:

i. If the Licensed Content is in digital format, for each license you acquire you may either:

1. install one (1) copy of the Licensed Content in the form provided to you on a dedicated, secure server located on your premises where the Authorized Training Session is held, for access and use by one (1) End User attending the Authorized Training Session, or by one (1) MCT teaching the Authorized Training Session, or

2. install one (1) copy of the Licensed Content in the form provided to you on one (1) Classroom Device, for access and use by one (1) End User attending the Authorized Training Session, or by one (1) MCT teaching the Authorized Training Session.

ii. You agree that:

1. you will acquire a license for each End User and MCT that accesses the Licensed Content,

2. each End User and MCT will be presented with a copy of this agreement, and each individual will agree that their use of the Licensed Content will be subject to these license terms prior to their accessing the Licensed Content. Each individual will be required to denote their acceptance of the EULA in a manner that is enforceable under local law prior to their accessing the Licensed Content,

3. for all Authorized Training Sessions, you will only use qualified MCTs who hold the applicable competency to teach the particular MOC Course that is the subject of the training session,

4. you will not alter or remove any copyright or other protective notices contained in the Licensed Content,

5. you will remove and irretrievably delete all Licensed Content from all Classroom Devices and servers at the end of the Authorized Training Session,

6. you will only provide access to the Licensed Content to End Users and MCTs,

7. you will only provide access to the Trainer Content to MCTs, and

8. any Licensed Content installed for use during a training session will be installed in accordance with the applicable classroom set-up guide.
b. If you are an MPN Member:

i. If the Licensed Content is in digital format, for each license you acquire you may either:

1. install one (1) copy of the Licensed Content in the form provided to you on (A) one (1) Classroom Device, or (B) one (1) dedicated, secure server located at your premises where the training session is held, for use by one (1) of your employees attending a training session provided by you, or by one (1) MCT that is teaching the training session, or

2. install one (1) copy of the Licensed Content in the form provided to you on one (1) Classroom Device for use by one (1) End User attending a Private Training Session, or one (1) MCT that is teaching the Private Training Session.

ii. You agree that:

1. you will acquire a license for each End User and MCT that accesses the Licensed Content,

2. each End User and MCT will be presented with a copy of this agreement, and each individual will agree that their use of the Licensed Content will be subject to these license terms prior to their accessing the Licensed Content. Each individual will be required to denote their acceptance of the EULA in a manner that is enforceable under local law prior to their accessing the Licensed Content,

3. for all training sessions, you will only use qualified MCTs who hold the applicable competency to teach the particular MOC Course that is the subject of the training session,

4. you will not alter or remove any copyright or other protective notices contained in the Licensed Content,

5. you will remove and irretrievably delete all Licensed Content from all Classroom Devices and servers at the end of each training session,

6. you will only provide access to the Licensed Content to End Users and MCTs,

7. you will only provide access to the Trainer Content to MCTs, and

8. any Licensed Content installed for use during a training session will be installed in accordance with the applicable classroom set-up guide.
c. If you are an End User:

You may use the Licensed Content solely for your personal training use. If the Licensed Content is in digital format, for each license you acquire you may (i) install one (1) copy of the Licensed Content in the form provided to you on one (1) Personal Device and install another copy on another Personal Device as a backup copy, which may be used only to reinstall the Licensed Content; or (ii) print one (1) copy of the Licensed Content. You may not install or use a copy of the Licensed Content on a device you do not own or control.
d. If you are an MCT:

i. For each license you acquire, you may use the Licensed Content solely to prepare and deliver an Authorized Training Session or Private Training Session. For each license you acquire, you may install and use one (1) copy of the Licensed Content in the form provided to you on one (1) Personal Device and install one (1) additional copy on another Personal Device as a backup copy, which may be used only to reinstall the Licensed Content. You may not install or use a copy of the Licensed Content on a device you do not own or control.

ii. Use of Instructional Components in Trainer Content. You may customize, in accordance with the most recent version of the MCT Agreement, those portions of the Trainer Content that are logically associated with instruction of a training session. If you elect to exercise the foregoing rights, you agree (a) that any of these customizations will only be used for providing a training session, and (b) that any customizations will comply with the terms and conditions for Modified Training Sessions and Supplemental Materials in the most recent version of the MCT agreement and with this agreement. For clarity, any use of “customize” refers only to changing the order of slides and content and/or not using all the slides or content; it does not mean changing or modifying any slide or content.
2.2 Separation of Components. The Licensed Content components are licensed as a single unit, and you may not separate the components and install them on different devices.

2.3 Reproduction/Redistribution of Licensed Content. Except as expressly provided in the applicable installation and use rights above, you may not reproduce or distribute the Licensed Content or any portion thereof (including any permitted modifications) to any third parties without the express written permission of Microsoft.

2.4 Third Party Programs. The Licensed Content may contain third party programs or services. These license terms will apply to your use of those third party programs or services, unless other terms accompany those programs and services.

2.5 Additional Terms. Some Licensed Content may contain components with additional terms, conditions, and licenses regarding its use. Any non-conflicting terms in those conditions and licenses also apply to that respective component and supplement the terms described in this agreement.
3. PRE-RELEASE VERSIONS. If the Licensed Content is a pre-release (“beta”) version, then, in addition to the other provisions in this agreement, these terms also apply:

a. Pre-Release Licensed Content. This Licensed Content is a pre-release version. It may not contain the same information and/or work the way a final version of the Licensed Content will. We may change it for the final version. We also may not release a final version. Microsoft is under no obligation to provide you with any further content, including the final release version of the Licensed Content.

b. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly or through its third party designee, you give to Microsoft, without charge, the right to use, share and commercialize your feedback in any way and for any purpose. You also give to third parties, without charge, any patent rights needed for their products, technologies and services to use or interface with any specific parts of a Microsoft software, Microsoft product, or service that includes the feedback. You will not give feedback that is subject to a license that requires Microsoft to license its software, technologies, or products to third parties because we include your feedback in them. These rights survive this agreement.

c. Term. If you are an Authorized Learning Center, MCT or MPN Member, you agree to cease using all copies of the beta version of the Licensed Content upon (i) the date which Microsoft informs you is the end date for using the beta version, or (ii) sixty (60) days after the commercial release of the Licensed Content, whichever is earlier (“beta term”). Upon expiration or termination of the beta term, you will irretrievably delete and destroy all copies of the same in your possession or under your control.
4. INTERNET-BASED SERVICES. Classroom Devices located at an Authorized Learning Center’s physical location may contain virtual machines and virtual hard disks for use while attending an Authorized Training Session. You may use the software on the virtual machines and virtual hard disks on a Classroom Device solely to perform the virtual lab activities included in the MOC Course while attending the Authorized Training Session. Microsoft may provide Internet-based services with the software included with the virtual machines and virtual hard disks. It may change or cancel them at any time. If the software is a pre-release version, some of its Internet-based services may be turned on by default. The default settings in these versions of the software do not necessarily reflect how the features will be configured in the commercially released versions. If Internet-based services are included with the software, they are typically simulated for demonstration purposes in the software and no transmission over the Internet takes place. However, should the software be configured to transmit over the Internet, the following terms apply:

a. Consent for Internet-Based Services. The software features described below connect to Microsoft or service provider computer systems over the Internet. In some cases, you will not receive a separate notice when they connect. You may switch off these features or not use them. By using these features, you consent to the transmission of this information. Microsoft does not use the information to identify or contact you.

b. Computer Information. The following features use Internet protocols, which send to the appropriate systems computer information, such as your Internet protocol address, the type of operating system, browser and name and version of the software you are using, and the language code of the device where you installed the software. Microsoft uses this information to make the Internet-based services available to you.
• Accelerators. When you click on or move your mouse over an Accelerator, the title and full web address or URL of the current webpage, as well as standard computer information, and any content you have selected, might be sent to the service provider. If you use an Accelerator provided by Microsoft, the information sent is subject to the Microsoft Online Privacy Statement, which is available at go.microsoft.com/fwlink/?linkid=31493. If you use an Accelerator provided by a third party, use of the information sent will be subject to the third party’s privacy practices.

• Automatic Updates. This software contains an Automatic Update feature that is on by default. For more information about this feature, including instructions for turning it off, see go.microsoft.com/fwlink/?LinkId=178857. You may turn off this feature while the software is running (“opt out”). Unless you expressly opt out of this feature, this feature will (a) connect to Microsoft or service provider computer systems over the Internet, (b) use Internet protocols to send to the appropriate systems standard computer information, such as your computer’s Internet protocol address, the type of operating system, browser and name and version of the software you are using, and the language code of the device where you installed the software, and (c) automatically download and install, or prompt you to download and/or install, current Updates to the software. In some cases, you will not receive a separate notice before this feature takes effect. By installing the software, you consent to the transmission of standard computer information and the automatic downloading and installation of updates.
• Auto Root Update. The Auto Root Update feature updates the list of trusted certificate authorities. You can switch off the Auto Root Update feature.
• Customer Experience Improvement Program (CEIP), Error and Usage Reporting; Error Reports. This software uses CEIP and Error and Usage Reporting components, enabled by default, that automatically send to Microsoft information about your hardware and how you use this software. This software also automatically sends error reports to Microsoft that describe which software components had errors and may also include memory dumps. You may choose not to use these software components. For more information please go to .
• Digital Certificates. The software uses digital certificates. These digital certificates confirm the identity of Internet users sending X.509 standard encrypted information. They also can be used to digitally sign files and macros, to verify the integrity and origin of the file contents. The software retrieves certificates and updates certificate revocation lists. These security features operate only when you use the Internet.

• Extension Manager. The Extension Manager can retrieve other software through the internet from the Visual Studio Gallery website. To provide this other software, the Extension Manager sends to Microsoft the name and version of the software you are using and the language code of the device where you installed the software. This other software is provided by third parties to Visual Studio Gallery. It is licensed to users under terms provided by the third parties, not from Microsoft. Read the Visual Studio Gallery terms of use for more information.
• IPv6 Network Address Translation (NAT) Traversal service (Teredo). This feature helps existing home Internet gateway devices transition to IPv6. IPv6 is a next generation Internet protocol. It helps enable end-to-end connectivity often needed by peer-to-peer applications. To do so, each time you start up the software the Teredo client service will attempt to locate a public Teredo Internet service. It does so by sending a query over the Internet. This query only transfers standard Domain Name Service information to determine if your computer is connected to the Internet and can locate a public Teredo service. If you

· use an application that needs IPv6 connectivity or
· configure your firewall to always enable IPv6 connectivity

by default standard Internet Protocol information will be sent to the Teredo service at Microsoft at regular intervals. No other information is sent to Microsoft. You can change this default to use non-Microsoft servers. You can also switch off this feature using a command line utility named “netsh”.
• Malicious Software Removal. During setup, if you select “Get important updates for installation”, the software may check for and remove certain malware from your device. “Malware” is malicious software. If the software runs, it will remove the Malware listed and updated at www.support.microsoft.com/?kbid=890830. During a Malware check, a report will be sent to Microsoft with specific information about Malware detected, errors, and other information about your device. This information is used to improve the software and other Microsoft products and services. No information included in these reports will be used to identify or contact you. You may disable the software’s reporting functionality by following the instructions found at www.support.microsoft.com/?kbid=890830. For more information, read the Windows Malicious Software Removal Tool privacy statement at go.microsoft.com/fwlink/?LinkId=113995.
• Microsoft Digital Rights Management. If you use the software to access content that has been protected with Microsoft Digital Rights Management (DRM), then, in order to let you play the content, the software may automatically request media usage rights from a rights server on the Internet and download and install available DRM updates. For more information, see go.microsoft.com/fwlink/?LinkId=178857.

• Microsoft Telemetry Reporting Participation. If you choose to participate in Microsoft Telemetry Reporting through a “basic” or “advanced” membership, information regarding filtered URLs, malware and other attacks on your network is sent to Microsoft. This information helps Microsoft improve the ability of Forefront Threat Management Gateway to identify attack patterns and mitigate threats. In some cases, personal information may be inadvertently sent, but Microsoft will not use the information to identify or contact you. You can switch off Telemetry Reporting. For more information on this feature, see http://go.microsoft.com/fwlink/?LinkId=130980.

• Microsoft Update Feature. To help keep the software up-to-date, from time to time, the software connects to Microsoft or service provider computer systems over the Internet. In some cases, you will not receive a separate notice when they connect. When the software does so, we check your version of the software and recommend or download updates to your devices. You may not receive notice when we download the update. You may switch off this feature.

• Network Awareness. This feature determines whether a system is connected to a network by either passive monitoring of network traffic or active DNS or HTTP queries. The query only transfers standard TCP/IP or DNS information for routing purposes. You can switch off the active query feature through a registry setting.
• Plug and Play and Plug and Play Extensions. You may connect new hardware to your device, either directly or over a network. Your device may not have the drivers needed to communicate with that hardware. If so, the update feature of the software can obtain the correct driver from Microsoft and install it on your device. An administrator can disable this update feature.

• Real Simple Syndication (“RSS”) Feed. This software’s start page contains updated content that is supplied by means of an RSS feed online from Microsoft.

• Search Suggestions Service. When you type a search query in Internet Explorer by using the Instant Search box or by typing a question mark (?) before your search term in the Address bar, you will see search suggestions as you type (if supported by your search provider). Everything you type in the Instant Search box or in the Address bar when preceded by a question mark (?) is sent to your search provider as you type it. In addition, when you press Enter or click the Search button, all the text that is in the search box or Address bar is sent to the search provider. If you use a Microsoft search provider, the information you send is subject to the Microsoft Online Privacy Statement, which is available at go.microsoft.com/fwlink/?linkid=31493. If you use a third-party search provider, use of the information sent will be subject to the third party’s privacy practices. You can turn search suggestions off at any time in Internet Explorer by using Manage Add-ons under the Tools button. For more information about the search suggestions service, see go.microsoft.com/fwlink/?linkid=128106.
• SQL Server Reporting Services Map Report Item. The software may include features that retrieve content such as maps, images and other data through the Bing Maps (or successor branded) application programming interface (the “Bing Maps APIs”). The purpose of these features is to create reports displaying data on top of maps, aerial and hybrid imagery. If these features are included, you may use them to create and view dynamic or static documents. This may be done only in conjunction with and through methods and means of access integrated in the software. You may not otherwise copy, store, archive, or create a database of the content available through the Bing Maps APIs. You may not use the following for any purpose even if they are available through the Bing Maps APIs:

• Bing Maps APIs to provide sensor based guidance/routing, or
• Any Road Traffic Data or Bird’s Eye Imagery (or associated metadata).

Your use of the Bing Maps APIs and associated content is also subject to the additional terms and conditions at http://www.microsoft.com/maps/product/terms.html.
• URL Filtering. The URL Filtering feature identifies certain types of web sites based upon predefined URL categories, and allows you to deny access to such web sites, such as known malicious sites and sites displaying inappropriate or pornographic materials. To apply URL filtering, Microsoft queries the online Microsoft Reputation Service for URL categorization. You can switch off URL filtering. For more information on this feature, see http://go.microsoft.com/fwlink/?LinkId=130980.
• Web Content Features. Features in the software can retrieve related content from Microsoft and provide it to you. To provide the content, these features send to Microsoft the type of operating system, name and version of the software you are using, type of browser and language code of the device where you run the software. Examples of these features are clip art, templates, online training, online assistance and Appshelp. You may choose not to use these web content features.

• Windows Media Digital Rights Management. Content owners use Windows Media digital rights management technology (WMDRM) to protect their intellectual property, including copyrights. This software and third party software use WMDRM to play and copy WMDRM-protected content. If the software fails to protect the content, content owners may ask Microsoft to revoke the software’s ability to use WMDRM to play or copy protected content. Revocation does not affect other content. When you download licenses for protected content, you agree that Microsoft may include a revocation list with the licenses. Content owners may require you to upgrade WMDRM to access their content. Microsoft software that includes WMDRM will ask for your consent prior to the upgrade. If you decline an upgrade, you will not be able to access content that requires the upgrade. You may switch off WMDRM features that access the Internet. When these features are off, you can still play content for which you have a valid license.

• Windows Media Player. When you use Windows Media Player, it checks with Microsoft for

· compatible online music services in your region;
· new versions of the player; and
· codecs if your device does not have the correct ones for playing content.

You can switch off this last feature. For more information, go to www.microsoft.com/windows/windowsmedia/player/11/privacy.aspx.
• Windows Rights Management Services. The software contains a feature that allows you to create content that cannot be printed, copied or sent to others without your permission. For more information, go to www.microsoft.com/rms. You may choose not to use this feature.
• Windows Time Service. This service synchronizes with time.windows.com once a week to provide your computer with the correct time. You can turn this feature off or choose your preferred time source within the Date and Time Control Panel applet. The connection uses standard NTP protocol.
• Windows Update Feature. You may connect new hardware to the device where you run the software. Your device may not have the drivers needed to communicate with that hardware. If so, the update feature of the software can obtain the correct driver from Microsoft and run it on your device. You can switch off this update feature.
c. Use of Information. Microsoft may use the device information, error reports, and malware reports to improve our software and services. We may also share it with others, such as hardware and software vendors. They may use the information to improve how their products run with Microsoft software.
d. Misuse of Internet-based Services. You may not use any Internet-based service in any way that could harm it or impair anyone else’s use of it. You may not use the service to try to gain unauthorized access to any service, data, account or network by any means.
5. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you may use the Licensed Content only as expressly permitted in this agreement. In doing so, you must comply with any technical limitations in the Licensed Content that only allow you to use it in certain ways. Except as expressly permitted in this agreement, you may not:
• install more copies of the Licensed Content on devices than the number of licenses you acquired;
• allow more individuals to access the Licensed Content than the number of licenses you acquired;
• publicly display, or make the Licensed Content available for others to access or use;
• install, sell, publish, transmit, encumber, pledge, lend, copy, adapt, link to, post, rent, lease or lend, make available or distribute the Licensed Content to any third party, except as expressly permitted by this Agreement;
• reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the Licensed Content except and only to the extent that applicable law expressly permits, despite this limitation;
• access or use any Licensed Content for which you are not providing a training session to End Users using the Licensed Content;
• access or use any Licensed Content that you have not been authorized by Microsoft to access and use; or
• transfer the Licensed Content, in whole or in part, or assign this agreement to any third party.
6. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to you in this agreement. The Licensed Content is protected by copyright and other intellectual property laws and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property rights in the Licensed Content. You may not remove or obscure any copyright, trademark or patent notices that appear on the Licensed Content or any components thereof, as delivered to you.
7. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regulations. You must comply with all domestic and international export laws and regulations that apply to the Licensed Content. These laws include restrictions on destinations, End Users and end use. For additional information, see www.microsoft.com/exporting.
8. LIMITATIONS ON SALE, RENTAL, ETC. AND CERTAIN ASSIGNMENTS. You may not sell, rent, lease, lend or sublicense the Licensed Content or any portion thereof, or transfer or assign this agreement.
9. SUPPORT SERVICES. Because the Licensed Content is “as is”, we may not provide support services for it.
10. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you fail to comply with the terms and conditions of this agreement. Upon any termination of this agreement, you agree to immediately stop all use of, and to irretrievably delete and destroy all copies of, the Licensed Content in your possession or under your control.
11. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible for the contents of any third party sites, any links contained in third party sites, or any changes or updates to third party sites. Microsoft is not responsible for webcasting or any other form of transmission received from any third party sites. Microsoft is providing these links to third party sites to you only as a convenience, and the inclusion of any link does not imply an endorsement by Microsoft of the third party site.
12. ENTIRE AGREEMENT. This agreement, and the terms for supplements, updates and support services are the entire agreement for the Licensed Content.
13. APPLICABLE LAW.
a. United States. If you acquired the Licensed Content in the United States, Washington state law governs the interpretation of this agreement and applies to claims for breach of it, regardless of conflict of laws principles. The laws of the state where you live govern all other claims, including claims under state consumer protection laws, unfair competition laws, and in tort.
b. Outside the United States. If you acquired the Licensed Content in any other country, the laws of that country apply.
14. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the laws of your country. You may also have rights with respect to the party from whom you acquired the Licensed Content. This agreement does not change your rights under the laws of your country if the laws of your country do not permit it to do so.
15. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS," "WITH ALL FAULTS," AND "AS AVAILABLE." YOU BEAR THE RISK OF USING IT. MICROSOFT CORPORATION AND ITS RESPECTIVE AFFILIATES GIVE NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS UNDER OR IN RELATION TO THE LICENSED CONTENT. YOU MAY HAVE ADDITIONAL CONSUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT CORPORATION AND ITS RESPECTIVE AFFILIATES EXCLUDE ANY IMPLIED WARRANTIES OR CONDITIONS, INCLUDING THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.
16. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. TO THE EXTENT NOT PROHIBITED BY LAW, YOU CAN RECOVER FROM MICROSOFT CORPORATION AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO USD$5.00. YOU AGREE NOT TO SEEK TO RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES FROM MICROSOFT CORPORATION AND ITS RESPECTIVE SUPPLIERS.
This limitation applies to
o anything related to the Licensed Content, services made available through the Licensed Content, or content (including code) on third party Internet sites or third-party programs; and
o claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence, or other tort to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your country may not allow the exclusion or limitation of incidental, consequential or other damages.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this agreement are provided below in French.
Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute utilisation de ce contenu sous licence est à vos seuls risques et périls. Microsoft n’accorde aucune autre garantie expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les garanties implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.
LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation pour les autres dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.
Cette limitation concerne :
• tout ce qui est relié au contenu sous licence, aux services ou au contenu (y compris le code) figurant sur des sites Internet tiers ou dans des programmes tiers ; et
• les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité stricte, de négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.
Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel dommage. Si votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages indirects, accessoires ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus ne s’applique pas à votre égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre pays si celles-ci ne le permettent pas.
Revised March 2012
Acknowledgments
Microsoft Learning would like to acknowledge and thank the following for their contribution towards developing this title. Their effort at various stages in the development has ensured that you have a good classroom experience.
Design and Development
This course was designed and developed by SolidQ. SolidQ is a global provider of consulting, mentoring and training services for Microsoft Data Management, Business Intelligence and Collaboration platforms.
Greg Low – Lead Developer
Dr Greg Low is a SQL Server MVP, an MCT, and a Microsoft Regional Director for Australia. Greg has worked with SQL Server since version 4.2 as an active mentor, consultant and trainer. Greg describes himself as a SQL Server junkie who has been involved in development since dinosaurs roamed the Earth. He has been an instructor in the Microsoft SQL Server Masters certification program for several years and was one of the first two people to achieve the SQL Server 2008 Master certification. He is the author of a number of whitepapers on the Microsoft MSDN and TechNet web sites and of several SQL Server related books. Greg is based in Melbourne, Australia.
Herbert Albert – Course Developer
Herbert Albert started his career in 1994. He works as a trainer, consultant, and author focusing on SQL Server technologies. Herbert is a mentor and the Central European CEO for SolidQ. He is based in Vienna, Austria. He has several Microsoft certifications including MCT, which he has held since 1997. Herbert is a regular speaker at conferences and co-author of the SQL Server 2012 Upgrade Technical Reference Guide and SQL Server 2005 Step-by-Step Applied Techniques. Together with Gianluca Hotz, Herbert writes a regular column at the SolidQ Journal.
Mark Hions – Technical Reviewer
Mark's passion for computing and skill as a communicator were well suited to his position as instructor at Honeywell Canada, where he started working with minicomputers, mainframes and mature students in 1984. He first met Microsoft SQL Server when it ran on OS/2, and has delivered training on every version since. An independent MCT and consultant for many years, he is a highly-rated presenter at TechEd, has designed SQL Server exams for Microsoft, and has delivered deep dive courses through the Microsoft Partner Channel. Mark is now the Principal SQL Server Instructor and Consultant at DesTech, which is the largest provider of SQL Server training in the Toronto area.
Chris Barker – Technical Reviewer
Chris Barker is an MCT working in the New Zealand market and currently employed as a staff trainer at Auldhouse, one of New Zealand’s major CPLS training centers in Wellington. Chris’ background includes programming from the early 1970s—his first program was written in assembly language and debugged in binary (literally)! While focusing his training on programming (mostly .NET) and databases (mostly Microsoft SQL Server), Chris has also been an infrastructure trainer and has both Novell and Microsoft networking qualifications.
Contents
Module 1: Introduction to SQL Server 2012 and Its Toolset
Lesson 1: Introduction to the SQL Server Platform 1-3
Lesson 2: Working with SQL Server Tools 1-14
Lesson 3: Configuring SQL Server Services 1-26
Lab 1: Introduction to SQL Server and Its Toolset 1-36
Module 2: Preparing Systems for SQL Server 2012
Lesson 1: Overview of SQL Server Architecture 2-3
Lesson 2: Planning Server Resource Requirements 2-17
Lesson 3: Pre-installation Testing for SQL Server 2-29
Lab 2: Preparing Systems for SQL Server 2-35
Module 3: Installing and Configuring SQL Server 2012
Lesson 1: Preparing to Install SQL Server 3-3
Lesson 2: Installing SQL Server 3-16
Lesson 3: Upgrading and Automating Installation 3-24
Lab 3: Installing and Configuring SQL Server 3-32
Module 4: Working with Databases
Lesson 1: Overview of SQL Server Databases 4-3
Lesson 2: Working with Files and Filegroups 4-15
Lesson 3: Moving Database Files 4-29
Lab 4: Working with Databases 4-39
Module 5: Understanding SQL Server 2012 Recovery Models
Lesson 1: Backup Strategies 5-3
Lesson 2: Understanding SQL Server Transaction Logging 5-12
Lesson 3: Planning a SQL Server Backup Strategy 5-22
Lab 5: Understanding SQL Server Recovery Models 5-32
Module 6: Backup of SQL Server 2012 Databases
Lesson 1: Backing up Databases and Transaction Logs 6-3
Lesson 2: Managing Database Backups 6-14
Lesson 3: Working with Backup Options 6-20
Lab 6: Backup of SQL Server Databases 6-26
Module 7: Restoring SQL Server 2012 Databases
Lesson 1: Understanding the Restore Process 7-3
Lesson 2: Restoring Databases 7-8
Lesson 3: Working with Point-in-time Recovery 7-19
Lesson 4: Restoring System Databases and Individual Files 7-27
Lab 7: Restoring SQL Server 2012 Databases 7-34
Module 8: Importing and Exporting Data
Lesson 1: Transferring Data To/From SQL Server 8-3
Lesson 2: Importing & Exporting Table Data 8-15
Lesson 3: Inserting Data in Bulk 8-20
Lab 8: Importing and Exporting Data 8-29
Module 9: Authenticating and Authorizing Users
Lesson 1: Authenticating Connections to SQL Server 9-3
Lesson 2: Authorizing Logins to Access Databases 9-13
Lesson 3: Authorization Across Servers 9-22
Lab 9: Authenticating and Authorizing Users 9-30
Module 10: Assigning Server and Database Roles
Lesson 1: Working with Server Roles 10-3
Lesson 2: Working with Fixed Database Roles 10-12
Lesson 3: Creating User-defined Database Roles 10-18
Lab 10: Assigning Server and Database Roles 10-26
Module 11: Authorizing Users to Access Resources
Lesson 1: Authorizing User Access to Objects 11-3
Lesson 2: Authorizing Users to Execute Code 11-12
Lesson 3: Configuring Permissions at the Schema Level 11-21
Lab 11: Authorizing Users to Access Resources 11-28
Module 12: Auditing SQL Server Environments
Lesson 1: Options for Auditing Data Access in SQL 12-3
Lesson 2: Implementing SQL Server Audit 12-12
Lesson 3: Managing SQL Server Audit 12-26
Lab 12: Auditing SQL Server Environments 12-31
Module 13: Automating SQL Server 2012 Management
Lesson 1: Automating SQL Server Management 13-3
Lesson 2: Working with SQL Server Agent 13-11
Lesson 3: Managing SQL Server Agent Jobs 13-19
Lab 13: Automating SQL Server Management 13-26
Module 14: Configuring Security for SQL Server Agent
Lesson 1: Understanding SQL Server Agent Security 14-3
Lesson 2: Configuring Credentials 14-13
Lesson 3: Configuring Proxy Accounts 14-18
Lab 14: Configuring Security for SQL Server Agent 14-24
Module 15: Monitoring SQL Server 2012 with Alerts and Notifications
Lesson 1: Configuration of Database Mail 15-3
Lesson 2: Monitoring SQL Server Errors 15-11
Lesson 3: Configuring Operators, Alerts and Notifications 15-18
Lab 15: Monitoring SQL Agent Jobs with Alerts and Notifications 15-30
Module 16: Performing Ongoing Database Maintenance
Lesson 1: Ensuring Database Integrity 16-3
Lesson 2: Maintaining Indexes 16-12
Lesson 3: Automating Routine Database Maintenance 16-26
Lab 16: Performing Ongoing Database Maintenance 16-30
Module 17: Tracing Access to SQL Server 2012
Lesson 1: Capturing Activity using SQL Server Profiler and Extended Events Profiler 17-3
Lesson 2: Improving Performance with the Database Engine Tuning Advisor 17-17
Lesson 3: Working with Tracing Options 17-25
Lab 17: Tracing Access to SQL Server 2012 17-36
Module 18: Monitoring SQL Server 2012
Lesson 1: Monitoring Activity 18-3
Lesson 2: Capturing and Managing Performance Data 18-15
Lesson 3: Analyzing Collected Performance Data 18-23
Lab 18: Monitoring SQL Server 2012 18-32
Module 19: Managing Multiple Servers
Lesson 1: Working with Multiple Servers 19-3
Lesson 2: Virtualizing SQL Server 19-9
Lesson 3: Deploying and Upgrading Data-tier Applications 19-15
Lab 19: Managing Multiple Servers 19-22
Module 20: Troubleshooting Common SQL Server 2012 Administrative Issues
Lesson 1: SQL Server Troubleshooting Methodology 20-3
Lesson 2: Resolving Service-related Issues 20-7
Lesson 3: Resolving Login and Connectivity Issues 20-13
Lesson 4: Resolving Concurrency Issues 20-17
Lab 20: Troubleshooting Common Issues 20-25
Appendix A: Core Concepts in SQL Server High Availability and Replication
Lesson 1: Core Concepts in High Availability A-3
Lesson 2: Core Concepts in Replication A-11
Appendix: Lab Answer Keys
Module 1 Lab: Introduction to SQL Server and Its Toolset L1-1
Module 2 Lab: Preparing Systems for SQL Server L2-5
Module 3 Lab: Installing and Configuring SQL Server L3-11
Module 4 Lab: Working with Databases L4-17
Module 5 Lab: Understanding SQL Server Recovery Models L5-23
Module 6 Lab: Backup of SQL Server Databases L6-27
Module 7 Lab: Restoring SQL Server 2012 Databases L7-31
Module 8 Lab: Importing and Exporting Data L8-35
Module 9 Lab: Authenticating and Authorizing Users L9-39
Module 10 Lab: Assigning Server and Database Roles L10-41
Module 11 Lab: Authorizing Users to Access Resources L11-43
Module 12 Lab: Auditing SQL Server Environments L12-45
Module 13 Lab: Automating SQL Server Management L13-49
Module 14 Lab: Configuring Security for SQL Server Agent L14-53
Module 15 Lab: Monitoring SQL Server 2012 with Alerts and Notifications L15-57
Module 16 Lab: Performing Ongoing Database Maintenance L16-63
Module 17 Lab: Tracing Access to SQL Server L17-67
Module 18 Lab: Monitoring SQL Server 2012 L18-71
Module 19 Lab: Managing Multiple Servers L19-75
Module 20 Lab: Troubleshooting Common Issues L20-79
About This Course
This section provides you with a brief description of the course, audience, required prerequisites, and course objectives.
Course Description
This five-day instructor-led course provides students with the knowledge and skills to administer Microsoft® SQL Server® 2012 databases. The course focuses on teaching individuals how to use SQL Server 2012 product features and tools related to maintaining a database.
Audience
The primary audience for this course is individuals who administer and maintain SQL Server databases. This course can also be helpful for individuals who develop applications that deliver content from SQL Server databases.
Student Prerequisites
This course requires that you meet the following prerequisites:
• Basic knowledge of the Microsoft Windows operating system and its core functionality.
• Working knowledge of Transact-SQL.
• Working knowledge of relational databases.
• Some experience with database design.
• Completed Course 10774: Querying Microsoft SQL Server 2012.
Course Objectives
After completing this course, students will be able to:
• Explain SQL Server 2012 architecture and resource requirements, and perform pre-installation checks of I/O subsystems.
• Plan, install and configure SQL Server 2012.
• Back up and restore databases.
• Use the Import and Export Wizard and explain how it relates to SSIS.
• Use BCP and BULK INSERT to import data.
• Manage security.
• Assign and configure fixed database roles, and create and assign user-defined database roles.
• Configure and assign permissions.
• Implement SQL Server 2012 Audits.
• Manage SQL Server 2012 Agent and jobs.
• Configure Database Mail, alerts and notifications.
• Maintain databases.
• Configure SQL Profiler and SQL Trace, and use the Database Engine Tuning Advisor.
• Monitor data by using Dynamic Management Views.
• Execute multi-server queries and configure a central management server.
• Deploy a data-tier application.
• Troubleshoot common issues.
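As a small taste of one of these objectives, a BULK INSERT import typically looks like the following sketch. The table name, file name and file layout shown here are illustrative assumptions only; they are not part of the course lab files.

```sql
-- Hypothetical example: bulk-load a comma-separated file into a staging table.
-- Assumes dbo.StagingOrders already exists with columns that match the file.
BULK INSERT dbo.StagingOrders
FROM 'D:\10775A_Labs\orders.csv'
WITH (
    FIELDTERMINATOR = ',',   -- column delimiter in the source file
    ROWTERMINATOR   = '\n',  -- row delimiter
    FIRSTROW        = 2      -- skip a header row
);
```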
Course Outline
This section provides an outline of the course:
Module 1: Introduction to SQL Server 2012 and Its Toolset
Module 2: Preparing Systems for SQL Server 2012
Module 3: Installing and Configuring SQL Server 2012
Module 4: Working with Databases
Module 5: Understanding SQL Server 2012 Recovery Models
Module 6: Backup of SQL Server 2012 Databases
Module 7: Restoring SQL Server 2012 Databases
Module 8: Importing and Exporting Data
Module 9: Authenticating and Authorizing Users
Module 10: Assigning Server and Database Roles
Module 11: Authorizing Users to Access Resources
Module 12: Auditing SQL Server Environments
Module 13: Automating SQL Server 2012 Management
Module 14: Configuring Security for SQL Server Agent
Module 15: Monitoring SQL Server 2012 with Alerts and Notifications
Module 16: Performing Ongoing Database Maintenance
Module 17: Tracing Access to SQL Server 2012
Module 18: Monitoring SQL Server 2012
Module 19: Managing Multiple Servers
Module 20: Troubleshooting Common SQL Server 2012 Administrative Issues
Course Materials
The following materials are included with your kit:
• Course Handbook. A succinct classroom learning guide that provides all the critical technical information in a crisp, tightly-focused format, which is just right for an effective in-class learning experience.
• Lessons: Guide you through the learning objectives and provide the key points that are critical to the success of the in-class learning experience.
• Labs: Provide a real-world, hands-on platform for you to apply the knowledge and skills learned in the module.
• Module Reviews and Takeaways: Provide improved on-the-job reference material to boost knowledge and skills retention.
• Lab Answer Keys: Provide step-by-step lab solution guidance at your fingertips when it’s needed.
• Lessons: Include detailed information for each topic, expanding on the content in the Course Handbook.
• Labs: Include complete lab exercise information and answer keys in digital form to use during lab time.
• Resources: Include well-categorized additional resources that give you immediate access to the most up-to-date premium content on TechNet, MSDN, and Microsoft Press.
• Student Course Files: Include Allfiles.exe, a self-extracting executable file that contains all the files required for the labs and demonstrations.
• Course Evaluation. At the end of the course, you will have the opportunity to complete an online evaluation to provide feedback on the course, training facility, and instructor.
To provide additional comments or feedback on the course, send e-mail to support@mscourseware.com. To inquire about the Microsoft Certification Program, send e-mail to mcphelp@microsoft.com.
Virtual Machine Environment
This section provides the information for setting up the classroom environment to support the business scenario of the course.
Virtual Machine Configuration
In this course, you will use Microsoft Hyper-V to perform the labs.
The following table shows the role of each virtual machine used in this course:

Virtual machine     Role
1077XA-MIA-DC       Domain Controller
1077XA-MIA-SQL      SQL Server VM for Modules 1 - 20

Software Configuration
The following software is installed on each VM:
• SQL Server 2012 (on the SQL Server VMs)
Course Files
There are files associated with the labs in this course. The lab files are located in the folder D:\10775A_Labs on the student computers.
Classroom Setup
Each classroom computer will have the same virtual machine configured in the same way.
Course Hardware Level
To ensure a satisfactory student experience, Microsoft Learning requires a minimum equipment configuration for trainer and student computers in all Microsoft Certified Partner for Learning Solutions (CPLS) classrooms in which Official Microsoft Learning Product courses are taught. This course requires hardware level 6.
Module 1
Introduction to SQL Server® 2012 and Its Toolset
Contents:
Lesson 1: Introduction to the SQL Server Platform 1-3
Lesson 2: Working with SQL Server Tools 1-14
Lesson 3: Configuring SQL Server Services 1-26
Lab 1: Introduction to SQL Server and Its Toolset 1-36
1-1
Module Overview
Before beginning to work with Microsoft® SQL Server® in either a development or an administration role, it is important to understand the overall SQL Server platform. In particular, it is useful to understand that SQL Server is not just a database engine; it is a complete platform for managing enterprise data. Along with a strong platform, SQL Server provides a series of tools that make the product easy to manage and a good target for application development.
Individual components of SQL Server can operate within separate security contexts. Correctly configuring SQL Server services is important where enterprises operate with a policy of least privilege.
Objectives
After completing this module, you will be able to:
• Describe the SQL Server platform.
• Work with SQL Server tools.
• Configure SQL Server services.
Lesson 1
Introduction to the SQL Server Platform
10775A: Administering Microsoft SQL Server 2012 Databases

SQL Server is a platform for developing business applications that are data focused. Rather than being a single monolithic application, SQL Server is structured as a series of components, and it is important to understand the use of each of them.
More than a single copy of SQL Server can be installed on a server. Each of these copies is called an instance and can be separately configured and managed.
SQL Server is shipped in a variety of editions, each with a different set of capabilities. It is important to understand the target business case for each of the SQL Server editions and how SQL Server has evolved through a series of versions over many years. It is a stable and robust platform.

Objectives
After completing this lesson, you will be able to:
• Describe the overall SQL Server platform.
• Explain the role of each of the components that make up the SQL Server platform.
• Describe the functionality provided by SQL Server instances.
• Explain the available SQL Server editions.
• Explain how SQL Server has evolved through a series of versions.
SQL Server Architecture

Key Points
SQL Server is an integrated and enterprise-ready platform for data management that offers a low total cost of ownership.
Question: Which other database platforms have you worked with?

Enterprise Ready
While SQL Server is much more than a relational database management system, it provides a secure, robust, and stable relational database management system. SQL Server is used to manage organizational data and to provide analysis and insights into that data.
The database engine is one of the highest-performing database engines available and regularly features at the top of industry performance benchmarks. You can review industry benchmarks and scores at http://www.tpc.org.

High Availability
Impressive performance is necessary, but not at the cost of availability. Organizations need constant access to their data, and many enterprises now require 24x7 availability. The SQL Server platform was designed with the highest levels of availability in mind. As each version of the product has been released, more capabilities have been added to minimize potential downtime.

Security
Uppermost in the minds of enterprise managers is the need to secure organizational data. Security cannot be retrofitted after an application or a product is created. SQL Server has been built from the ground up with the highest levels of security as a goal.
Scalability
Organizations need data management capabilities for systems of all sizes. SQL Server scales from the smallest needs to the largest via a series of editions with increasing capabilities.

Cost of Ownership
Many competing database management systems are expensive both to purchase and to maintain. SQL Server offers a very low total cost of ownership. SQL Server tooling (both management and development) builds on existing Windows knowledge, and most users become familiar with the tools quite quickly. Productivity with the tools is enhanced by the high degree of integration between them; for example, many of the SQL Server tools have links to launch and preconfigure other SQL Server tools.
SQL Server Components

Key Points
SQL Server is a very good relational database engine, but as a data platform it offers much more than that. SQL Server is a platform comprising many components.

Component: Purpose
Database Engine: A relational database engine based on the SQL language.
Analysis Services: An online analytical processing (OLAP) engine that works with analytic cubes.
Integration Services: A tool used to orchestrate the movement of data between SQL Server components and external systems (in both directions).
Reporting Services: A reporting engine based on web services, providing a web portal and end-user reporting tools.
Master Data Services: Tooling and a hub for managing master or reference data.
StreamInsight: A platform for building applications that process high-speed events.
Data Mining: Tooling and an inference engine for deriving knowledge and insights from existing OLAP or relational data.
Full-Text Search: Allows building sophisticated search options into applications. SQL Server 2012 adds semantic search to the existing full-text search.
PowerPivot: Allows end users, power users, and business analysts to quickly analyze large volumes of data from different locations.
Replication: Allows moving data between servers to suit data distribution needs.
Data Quality Services: Allows building or connecting to a knowledge base for data cleansing.
Power View: Allows rapid visualization of data by end users.

Question: Which components of SQL Server have you worked with?
SQL Server Instances

Key Points
It is sometimes useful to install more than a single copy of a SQL Server component on a single server. Many SQL Server components can be installed more than once as separate instances.

SQL Server Instances
The ability to install multiple instances of SQL Server components on a single server is useful in a number of situations:
• There may be a need for different administrators or security environments for sets of databases. Each instance of SQL Server is separately manageable and securable.
• Applications that an organization must support may require server configurations that are inconsistent or incompatible with the server requirements of other applications. Each instance of SQL Server is separately configurable.
• Application databases might need to be supported with different levels of service, particularly in relation to availability. SQL Server instances can be used to separate workloads with differing service level agreements (SLAs).
• Different versions or editions of SQL Server might need to be supported.
• Applications might require different server-level collations. While each database can have its own collation, an application might depend on the collation of the tempdb database when the application uses temporary objects.

Question: Why might you need to separate databases by service level agreement?

Different versions of SQL Server can often be installed side by side using multiple instances. This can assist when testing upgrade scenarios or performing upgrades.
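You can inspect instance-level settings such as collation with T-SQL. For example, the following query compares the server-level collation with the tempdb collation discussed above:

```sql
-- Compare the instance (server-level) collation with the tempdb collation.
-- An application that uses temporary objects may depend on tempdb's
-- collation matching the collation of its own database.
SELECT SERVERPROPERTY('Collation')                 AS ServerCollation,
       DATABASEPROPERTYEX('tempdb', 'Collation')   AS TempdbCollation,
       DATABASEPROPERTYEX(DB_NAME(), 'Collation')  AS CurrentDbCollation;
```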
Default and Named Instances
Prior to SQL Server 2000, only a single copy of SQL Server could be installed on a server system, and SQL Server was addressed by the name of the server. To maintain backward compatibility, this mode of connection is still supported and is known as a "default" instance.
Additional instances of SQL Server require an instance name in addition to the server name and are known as "named" instances. No default instance needs to be installed before installing named instances.
Not all components of SQL Server can be installed in more than one instance. A substantial change in SQL Server 2012 is multiple-instance support for SQL Server Integration Services.
There is no need to install SQL Server tools and utilities more than once. A single installation of the tools can manage and configure all instances.
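A named instance is addressed as ServerName\InstanceName. As a quick check (the server and instance names in the comments are placeholders), you can confirm which instance a session is connected to:

```sql
-- Connect with: sqlcmd -S MyServer\MyInstance   (a named instance)
--           or: sqlcmd -S MyServer              (the default instance)
-- Then confirm which instance the session is connected to.
-- InstanceName returns NULL when connected to the default instance.
SELECT @@SERVERNAME                    AS FullServerName,
       SERVERPROPERTY('MachineName')   AS MachineName,
       SERVERPROPERTY('InstanceName')  AS InstanceName;
```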
SQL Server Editions

Key Points
SQL Server is available in a wide variety of editions, with different price points and different levels of capability.

SQL Server Editions
Each SQL Server edition is targeted to a specific business use case, as shown in the following table:

Edition: Business Use Case
Parallel Data Warehouse: Uses massively parallel processing (MPP) to execute queries against vast amounts of data quickly. Parallel Data Warehouse systems are sold as a complete "appliance" rather than via standard software licenses.
Enterprise: Provides the highest levels of reliability for demanding workloads.
Business Intelligence: Adds business intelligence capabilities to the offerings of Standard edition.
Standard: Delivers a reliable, complete data management platform.
Express: A free edition for lightweight web and small server-based applications.
Compact: A free edition for standalone and occasionally connected mobile applications, optimized for a very small memory footprint.
Developer: Allows building, testing, and demonstrating all SQL Server functionality.
Web: Provides a secure, cost-effective, and scalable platform for public web sites and applications.
SQL Azure: Allows building and extending SQL Server applications on a cloud-based platform.
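You can check which edition and version an instance is running with the SERVERPROPERTY function, for example:

```sql
-- Report the edition and version of the connected instance.
-- EngineEdition returns a number (e.g. 2 = Standard, 3 = Enterprise,
-- 4 = Express).
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('EngineEdition')  AS EngineEdition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;  -- e.g. RTM, SP1
```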
Question: What would be a good business case example for using a cloud-based service?
SQL Server Versions

Key Points
SQL Server is a platform with a rich history of innovation achieved while maintaining strong levels of stability. SQL Server has been available for many years, yet it continues to rapidly evolve new capabilities and features.

Early Versions
The earliest versions (1.0 and 1.1) were based on the OS/2 operating system.
Versions 4.2 and later moved to the Windows operating system, initially on Windows NT.

Later Versions
Version 7.0 saw a significant rewrite of the product. Substantial advances were made in reducing the administration workload, and OLAP Services (which later became Analysis Services) was introduced.
SQL Server 2000 featured support for multiple instances and collations. It also introduced support for data mining. SQL Server Reporting Services was introduced after the product release as an add-on enhancement, along with support for 64-bit processors.
SQL Server 2005 provided another significant rewrite of many aspects of the product. Its changes included the following:
• Support for non-relational data stored and queried as XML was introduced.
• SQL Server Management Studio was released to replace several previous administrative tools.
• SQL Server Integration Services replaced a former tool known as Data Transformation Services (DTS).
• Support for objects created using the Common Language Runtime (CLR) was another key addition.
• The T-SQL language was substantially enhanced, including structured exception handling.
• Dynamic Management Views and Functions were introduced to enable detailed health monitoring, performance tuning, and troubleshooting.
• Substantial high-availability improvements were included in the product, and database mirroring was introduced.
• Support for column encryption was introduced.
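As a small illustration, a Dynamic Management View such as sys.dm_exec_requests can be queried like any other view:

```sql
-- Dynamic Management Views (DMVs) are queried like ordinary views.
-- This example lists currently executing requests with their wait status.
SELECT session_id,
       status,
       command,
       wait_type,
       total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50;   -- sessions <= 50 are typically system sessions
```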
SQL Server 2008 also provided many enhancements:
• The "SQL Server Always On" technologies were introduced to reduce potential downtime.
• Filestream support improved the handling of structured and semi-structured data.
• Spatial data types were introduced.
• Database compression and encryption technologies were added.
• Specialized date- and time-related data types were introduced, including support for time zones within datetime data.
• Full-text indexing was integrated directly within the database engine. (Previously, full-text indexing was based on interfaces to operating system-level services.)
• A policy-based management framework was introduced to assist with a move to more declarative management practices, rather than reactive practices.
• A PowerShell provider for SQL Server was introduced.
The enhancements and additions to the product in SQL Server 2008 R2 included:
• Substantial enhancements to SQL Server Reporting Services.
• The introduction of advanced analytic capabilities with PowerPivot.
• Improved multi-server management capabilities.
• Support for managing reference data, provided with the introduction of Master Data Services.
• StreamInsight, which provides the ability to query data that is arriving at high speed, before storing the data in a database.
• Data-tier applications, which assist with packaging database applications as part of application development projects.
SQL Server 2012
The enhancements and additions to the product in SQL Server 2012 included:
• Further substantial enhancements to SQL Server Reporting Services.
• Substantial enhancements to SQL Server Integration Services.
• The introduction of tabular data models into SQL Server Analysis Services.
• The migration of Business Intelligence projects into Visual Studio 2010.
• The introduction of the Always On enhancements to SQL Server high availability.
• The introduction of Data Quality Services.
• Strong enhancements to the T-SQL language, such as the addition of sequences, new error-handling capabilities, and new window functions.
• The introduction of the FileTable.
• The introduction of statistical semantic search.
• Many general tooling improvements.
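The following snippet sketches three of the T-SQL enhancements listed above; it assumes the AdventureWorks sample database used in the course labs, and the sequence name is a hypothetical example:

```sql
-- Sketch of three T-SQL enhancements introduced in SQL Server 2012.

-- 1. Sequences: a numbering object independent of any table.
CREATE SEQUENCE dbo.OrderNumbers AS int START WITH 1000 INCREMENT BY 1;
SELECT NEXT VALUE FOR dbo.OrderNumbers AS NextOrderNumber;

-- 2. New error handling: THROW re-raises the original error in a CATCH block.
BEGIN TRY
    SELECT 1 / 0;  -- forces a divide-by-zero error
END TRY
BEGIN CATCH
    THROW;         -- preserves the original error number and message
END CATCH;

-- 3. Window functions: analytic functions such as LAG.
SELECT ProductID,
       ListPrice,
       LAG(ListPrice) OVER (ORDER BY ProductID) AS PreviousListPrice
FROM Production.Product;  -- table in the AdventureWorks sample database
```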
Question: Which versions of SQL Server have you worked with?
Lesson 2
Working with SQL Server Tools

Working effectively with SQL Server requires familiarity with the tools that are used in conjunction with it. Before any tool can connect to SQL Server, it needs to make a network connection to the server. In this lesson, you will see how these connections are made, and then look at the tools that are most commonly used when working with SQL Server.

Objectives
After completing this lesson, you will be able to:
• Connect from clients and applications.
• Describe the roles of the software layers involved in connections.
• Use SQL Server Management Studio.
• Use SQL Server Data Tools.
• Use Books Online.
Connecting from Clients and Applications

Key Points
Client applications connect to endpoints. A variety of communication protocols are available for making connections. Also, users need to be identified before they are permitted to use the server.

Connectivity
The protocol that client applications use when connecting to the SQL Server relational database engine is known as Tabular Data Stream (TDS). It defines how requests are issued and how results are returned. Other components of SQL Server use alternate protocols; for example, clients of SQL Server Analysis Services communicate via the XML for Analysis (XML/A) protocol. However, in this course, you are primarily concerned with the relational database engine.
TDS is a high-level protocol that is transported by lower-level protocols. It is most commonly transported by the TCP/IP protocol or the Named Pipes protocol, or implemented over a shared memory connection.
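As a quick illustration, the endpoints that accept TDS connections can be listed from the sys.endpoints catalog view:

```sql
-- List the endpoints exposed by the instance, including the default
-- TDS endpoints for TCP/IP, named pipes, and shared memory.
SELECT name,
       protocol_desc,   -- e.g. TCP, NAMED_PIPES, SHARED_MEMORY
       type_desc,       -- e.g. TSQL
       state_desc       -- STARTED or STOPPED
FROM sys.endpoints;
```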
Authentication
For the majority of applications and organizations, data must be held securely, and access to the data is based on the identity of the user attempting to access it. The process of verifying the identity of a user (or, more formally, of any principal) is known as authentication. SQL Server supports two forms of authentication. It can store the login details for users directly within its own system databases; these logins are known as SQL logins. Alternately, it can be configured to trust a Windows authenticator (such as Active Directory). In that case, a Windows user can be granted access to the server, either directly or via his or her Windows group memberships.
When a connection is made, the user is connected to a specific database, known as their "default" database.
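The two forms of authentication correspond to two forms of the CREATE LOGIN statement. As a sketch (the login, domain, and password names below are hypothetical examples):

```sql
-- A SQL login: credentials stored in SQL Server's own system databases.
-- The login name and password here are hypothetical examples.
CREATE LOGIN SalesAppUser
    WITH PASSWORD = 'Pa$$w0rdExample!',
         DEFAULT_DATABASE = AdventureWorks;  -- the user's "default" database

-- A Windows login: SQL Server trusts the Windows authenticator,
-- so no password is stored by SQL Server. The domain\user is hypothetical.
CREATE LOGIN [ADVENTUREWORKS\Anna.Larsen] FROM WINDOWS
    WITH DEFAULT_DATABASE = AdventureWorks;
```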
Software Layers for Connections

Key Points
Connections to SQL Server are made through a series of software layers. It is important to understand how each of these layers interacts. This knowledge will assist you when you need to perform configuration or troubleshooting.

Client Libraries
Client applications use programming libraries to simplify their access to databases such as SQL Server. Open Database Connectivity (ODBC) is a commonly used library. It operates as a translation layer that shields the application from some details of the underlying database engine. By changing the ODBC configuration, an application could be altered to work with a different database engine, without the need for application changes. Java Database Connectivity (JDBC) is the Java-based equivalent of ODBC.
OLEDB originally stood for Object Linking and Embedding for Databases; however, that meaning is no longer very relevant. OLEDB is a library that does not translate commands: when an application sends a SQL command, OLEDB passes it to the database server without modification.
The SQL Server Native Client (SNAC) is a software layer that encapsulates commands issued by libraries such as OLEDB, ODBC, and JDBC into commands that can be understood by SQL Server, and encapsulates results returned by SQL Server ready for consumption by these libraries. This primarily involves wrapping the commands and results in the TDS protocol.
Network Libraries
SQL Server exposes endpoints that client applications can connect to. The endpoint is used to pass commands and data to and from the database engine.
SNAC connects to these endpoints via network libraries such as TCP/IP or Named Pipes. For client applications that are executing on the same computer as the SQL Server service, a special "shared memory" network connection is also available.

SQL Server Software Layers
SQL Server receives commands via endpoints and sends results to clients via endpoints. Clients interact with the relational engine, which in turn utilizes the storage engine to manage the storage of databases. The SQL Server Operating System (SQLOS) is a software layer that provides a layer of abstraction between the relational engine and the available server resources.
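The SQLOS layer is visible through the sys.dm_os_* family of Dynamic Management Views. For example:

```sql
-- SQLOS exposes its view of server resources through sys.dm_os_* DMVs.
-- This query shows the CPU and memory resources SQLOS detected at startup.
SELECT cpu_count,           -- logical CPUs visible to SQLOS
       scheduler_count,     -- SQLOS schedulers created for them
       physical_memory_kb,  -- physical memory detected
       max_workers_count    -- maximum worker threads
FROM sys.dm_os_sys_info;
```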
SQL Server Management Studio

Key Points
SQL Server Management Studio (SSMS) is the primary tool supplied by Microsoft for interacting with SQL Server services.

SQL Server Management Studio
SSMS is an integrated environment that has been created within the Microsoft Visual Studio platform shell, and it shares many common features with Visual Studio.
SSMS is used to execute queries and return results, but it is also capable of helping users to analyze queries. It offers rich editors for a variety of document types (.sql files, .xml files, and so on). When working with .sql files, SSMS provides IntelliSense to assist with writing queries.
While all SQL Server relational database management tasks can be performed using the T-SQL language, many users prefer graphical administration tools because they are typically easier to use than T-SQL commands. SSMS provides graphical interfaces for configuring databases and servers.
SSMS is capable of connecting to a variety of SQL Server services, including the database engine, Analysis Services, Integration Services, Reporting Services, and SQL Server Compact Edition.
Demonstration 2A: Using SQL Server Management Studio

Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, ensure that Server type is set to Database Engine.
4. In the Server name text box, type (local).
5. In the Authentication drop-down list, select Windows Authentication, and click Connect.
6. If required, from the View menu, click Object Explorer.
7. In Object Explorer, expand Databases, expand AdventureWorks, and expand Tables. Review the database objects.
8. Right-click the AdventureWorks database and choose New Query.
9. Type the query shown in the snippet below:

SELECT * FROM Production.Product ORDER BY ProductID;

10. Note the use of IntelliSense while entering it, and then click Execute on the toolbar. Note how the results are returned.
11. From the File menu, click Save SQLQuery1.sql. Note that this saves the query to a file. In the Save File As window, click Cancel.
12. In the Results tab, right-click the cell for ProductID 1 (first row, first cell) and click Save Results As.... In the File name text box, type Demonstration2AResults and click Save. Note that this saves the query results to a file.
13. From the Query menu, click Display Estimated Execution Plan. Note that SSMS is capable of more than simply executing queries.
14. From the Tools menu, click Options.
15. In the Options pane, expand Query Results, expand SQL Server, and expand General. Review the available configuration options and click Cancel.
16. From the File menu, click Close. In the Microsoft SQL Server Management Studio window, click No.
17. In the File menu, click Open, and click Project/Solution.
18. In the Open Project window, open the project D:\10775A_Labs\10775A_02_PRJ\10775A_02_PRJ.ssmssln.
19. From the View menu, click Solution Explorer. Note the contents of Solution Explorer. SQL Server projects have been supplied for each module of the course and contain demonstration steps and suggested lab solutions, along with any required setup/shutdown code for the module.
20. In Solution Explorer, click the X to close it.
21. In Object Explorer, from the Connect toolbar icon, note the other SQL Server components that connections can be made to:
• Database Engine, Analysis Services, Integration Services, Reporting Services
22. From the File menu, click New, and click Database Engine Query to open a new connection.
23. In the Connect to Database Engine window, type (local) in the Server name text box.
24. In the Authentication drop-down list, select Windows Authentication, and click Connect.
25. In the Available Databases drop-down list, click the tempdb database. Note that this changes the database that the query is executed against.
26. Right-click in the query window, click Connection, and click Change Connection.... Note that this reconnects the query to another instance of SQL Server. In the Connect to Database Engine window, click Cancel.
27. From the View menu, click Registered Servers.
28. In the Registered Servers window, expand Database Engine, right-click Local Server Groups, and click New Server Group....
29. In the New Server Group Properties window, type Dev Servers in the Group name text box and click OK.
30. Right-click Dev Servers and click New Server Registration....
31. In the New Server Registration window, in the Server name drop-down list, type (local) and click Save.
32. Right-click Dev Servers and click New Server Registration....
33. In the New Server Registration window, in the Server name drop-down list, type .\MKTG and click Save.
34. In the Registered Servers window, right-click the Dev Servers group and choose New Query.
35. Type the query as shown in the snippet below and click the Execute toolbar icon:

SELECT @@version;

36. Close SQL Server Management Studio.
37. Click No in the SQL Server Management Studio window.

Question: When would displaying an estimated execution plan be helpful?
SQL Server Data Tools

Key Points
The SQL Server platform comprises a number of components. Projects for several of the Business Intelligence-related components are created and modified using SQL Server Data Tools (SSDT).

SQL Server Data Tools
SSDT is a series of project templates that have been added to the Microsoft Visual Studio® 2010 environment. The templates allow the creation and editing of projects for Analysis Services, Integration Services, and Reporting Services.
Visual Studio does not need to be installed before SQL Server. If an existing installation of Visual Studio 2010 is present, SQL Server installation will add the project templates to that environment. If no existing Visual Studio 2010 installation is present, SQL Server installation will first install the "partner" edition of Visual Studio 2010 and then add the required project templates. The partner edition of Visual Studio 2010 is essentially an empty Visual Studio shell with a template for a "blank solution".
Demonstration 2B: Using SQL Server Data Tools

Demonstration Steps
1. If Demonstration 2A was not performed, revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Data Tools. From the File menu, click New Project. Note the available project templates (and whether other languages are installed).
3. In the Templates pane, click Report Server Project, and click OK.
4. In Solution Explorer, right-click Reports and click Add New Report.
5. In the Report Wizard window, click Next.
6. In the Select the Data Source window, click Edit.
7. In the Connection Properties window, type (local) for the Server name, select AdventureWorks in the Connect to a database drop-down list, and click OK.
8. In the Select the Data Source window, click Next. In the Design the Query window, in the Query string text box, type the following query as shown in the snippet below, and click Next.

SELECT ProductID, Name, Color, Size FROM Production.Product ORDER BY ProductID;

9. In the Select the Report Type window, click Next.
10. In the Design the Table window, click Details four times, and click Finish>>|.
11. In the Completing the Wizard window, click Finish.
12. In the Report1.rdl [Design]* tab, click Preview and note the report that is rendered.
13. Click the Design tab, and then from the File menu, click Exit. If prompted, do not save the changes.

Question: Can you suggest a situation where the ability to schedule the execution of a report would be useful?
Books Online

Key Points

Books Online (BOL) is the primary reference for SQL Server. It can be installed offline (for use when disconnected from the Internet) and can also be used online directly from the Microsoft MSDN web site (via an Internet connection). The most up-to-date version is always the online version. It is also the most flexible version because it enables easy comparison between the documentation for different SQL Server versions. In SQL Server 2012, the Help Library Manager can be used to switch between online and offline modes for Books Online, and can also be used to install updates for the offline version.

Books Online

BOL should be regarded as the primary technical reference for SQL Server.

A common mistake when installing BOL locally on a SQL Server installation is to neglect to update BOL regularly. To avoid excessive download sizes, BOL is not included in SQL Server service pack and cumulative update packages. BOL is updated regularly, and a regular check should be made for updates.

For most T-SQL commands, many users will find the supplied examples easier to follow than the formal syntax definition. Note that when viewing the reference page for a statement, the formal syntax is shown at the top of the page and the examples are usually at the bottom of the page.

BOL is available for all supported versions of SQL Server. It is important to make sure you are working with the pages designed for the version of SQL Server that you are working with. Many pages in BOL provide links to related pages from other versions of the product.
Demonstration 2C: Using Books Online

Demonstration Steps

1. If Demonstration 2A was not performed, revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click Documentation & Community, and click SQL Server Documentation.
3. Maximize the Microsoft Help Viewer window and note the basic navigation options available.
4. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
5. In the Connect to Server window, ensure that Server type is set to Database Engine.
6. In the Server name text box, type (local).
7. In the Authentication drop-down list, select Windows Authentication, and click Connect.
8. From the File menu, click New, and click Query with Current Connection.
9. In the SQLQuery1.sql tab, type the query shown in the snippet below, and click the Execute toolbar icon:

SELECT SUBSTRING('test string',2,7);

10. Click the name of the SUBSTRING function, and then press the F1 key to open the BOL topic for SUBSTRING.
11. Note the content of the page and scroll to the bottom to see the examples, then close the Microsoft Help Viewer window.
12. Close SQL Server Management Studio without saving any changes.
13. If your host system has Internet access available, open Internet Explorer® on the host system, browse to the SQL Server Books Online page at http://go.microsoft.com/fwlink/?LinkID=233780, and note the available online options.
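The query in step 9 uses T-SQL's SUBSTRING(value, start, length), whose start position is 1-based. As an illustration only (not part of the demonstration, and the helper name is invented), the same semantics for valid starting positions can be sketched in Python:

```python
def tsql_substring(value: str, start: int, length: int) -> str:
    """Mimic T-SQL SUBSTRING(value, start, length) for start >= 1.

    T-SQL string positions are 1-based; Python slices are 0-based.
    Illustrative sketch only -- not part of SQL Server.
    """
    begin = start - 1  # convert 1-based T-SQL position to 0-based index
    return value[begin:begin + length]

# Matches: SELECT SUBSTRING('test string', 2, 7);
print(tsql_substring('test string', 2, 7))  # → est str
```

Running the T-SQL query in step 9 returns the same seven characters, starting at the second character of the string.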
Lesson 3
Configuring SQL Server Services

Each SQL Server service can be configured individually. The ability to configure services individually assists organizations that aim to minimize the permissions assigned to service accounts, as part of a policy of least-privilege execution. SQL Server Configuration Manager is used to configure services, including the accounts that the services operate under and the network libraries used by the SQL Server services.

SQL Server also ships with a variety of tools and utilities. It is important to know what each of these tools and utilities is used for.

Objectives

After completing this lesson, you will be able to:

• Use SQL Server Configuration Manager.
• Configure SQL Server services.
• Configure network ports and listeners.
• Create server aliases.
• Use other SQL Server tools.
SQL Server Configuration Manager

Key Points

SQL Server Configuration Manager (SSCM) is used to configure SQL Server services, to configure the network libraries exposed by SQL Server services, and to configure how client connections are made to SQL Server.

SQL Server Configuration Manager

SSCM can be used for three distinct purposes:

• Managing Services – Each service can be controlled (started or stopped) and configured.
• Managing Server Protocols – It is possible to configure the endpoints that are exposed by the SQL Server services. This includes the protocols and ports used.
• Managing Client Protocols – When client applications (such as SSMS) are installed on a server, there is a need to configure how connections from those tools are made to SQL Server. SSCM can be used to configure the required protocols and to create aliases for the servers to simplify connectivity.

Question: Why would a server system need to have a client configuration node?
SQL Server Services

Key Points

SQL Server Configuration Manager can be used to configure the individual services that are provided by SQL Server. Many components provided by SQL Server are implemented as operating system services. The components of SQL Server that you choose during installation determine which of the SQL Server services are installed.

SQL Server Services

These services operate within a specific Windows identity. If there is a need to alter the assigned identity for a service, SSCM should be used to make this change. A common error is to use the Services applet in the server's administrative tools to change the service identity. While this applet will change the identity for the service, it will not update the other permissions and access control lists that are required for the service to operate correctly. When service identities are modified from within SSCM, the required permissions and access control lists are also modified.

Each service has a start mode. This mode can be set to Automatic, Manual, or Disabled. Services that are set to the Automatic start mode are started automatically when the operating system starts. Services that are set to the Manual start mode can be started manually. Services that are set to the Disabled start mode cannot be started.

Instances

Many SQL Server components are instance-aware and can be installed more than once on a single server. When SSCM lists each service, it shows the associated instance of SQL Server in parentheses after the name of the service.

In the example shown in the slide, there is a single instance of the database engine installed. MSSQLSERVER is the name allocated to the default instance of the SQL Server database engine.
Network Ports and Listeners

Key Points

SQL Server Configuration Manager can be used to configure both server and client protocols and ports.

Network Ports and Listeners

SSCM provides two sets of network configurations. Each network endpoint that is exposed by an instance of SQL Server can be configured. This includes determining which network libraries are enabled and, for each library, the configuration of that network library. Typically, this will involve settings such as protocol port numbers. You should discuss the required network protocol configuration of SQL Server with your network administrator.

Many protocols provide multiple levels of configuration. For example, the configuration for the TCP/IP protocol allows for different settings on each configured IP address if required, or a general set of configurations that is applied to all IP addresses.

Client Configurations

Every computer that has the SQL Server Native Client (SNAC) installed needs the ability to configure how that library will access SQL Server services.

SNAC is installed on the server as well as on client systems. When SSMS is installed on the server, it uses the SNAC library to make connections to the SQL Server services that are on the same system. The client configuration nodes within SSCM can be used to configure how those connections are made. Note that two sets of client configurations are provided and that they only apply to the computer where they are configured. One set is used for 32-bit applications; the other set is used for 64-bit applications. SSMS is a 32-bit application, even when SQL Server is installed as a 64-bit application.
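When a fixed TCP port is configured (as in the lab later in this module), clients that bypass the SQL Browser service must name the port explicitly. SNAC-based clients accept a "protocol:server,port" address form. A small illustrative sketch in Python (the helper name and defaults are assumptions, not a SQL Server API):

```python
from typing import Optional

def sql_server_address(server: str, port: Optional[int] = None,
                       protocol: str = "tcp") -> str:
    """Build the server-address part of a SQL Server connection string.

    SNAC-based clients accept a "protocol:server,port" form, for example
    "tcp:devserver.adventureworks.com,51550". The helper name and defaults
    are illustrative assumptions, not part of any SQL Server API.
    """
    address = f"{protocol}:{server}"
    if port is not None:
        address += f",{port}"  # an explicit port avoids a SQL Browser lookup
    return address

print(sql_server_address("devserver.adventureworks.com", 51550))
# → tcp:devserver.adventureworks.com,51550
```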
Creating Server Aliases

Key Points

Connecting to a SQL Server service can involve multiple settings, such as server address, protocol, and port. To make this easier for client applications, and to provide a level of redirection, aliases can be created for servers.

Aliases

Hard-coding connection details for a specific server, protocol, and port within an application is not desirable, as these might need to change over time.

A server alias can be created and associated with a server, protocol, and port (if required). Client applications can then connect to the alias without being concerned about how those connections are made.

Each client system that utilizes SNAC (including the server itself) can have one or more aliases configured. Aliases for 32-bit applications are configured independently of the aliases for 64-bit applications.

In the example shown in the slide, the alias "AdvDev" has been created for the server "devserver.adventureworks.com", using the TCP/IP protocol with a custom TCP port of 51550. The client then only needs to connect to the name "AdvDev".
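Conceptually, an alias is just a client-side mapping from a friendly name to the full connection details. A toy sketch of that lookup (Python; the data structure and function are illustrative only, not how SSCM actually stores aliases):

```python
# Client-side alias table mapping a friendly name to connection details.
# Values mirror the slide's example; the structure is an illustrative
# assumption, not how SSCM records aliases.
ALIASES = {
    "AdvDev": ("devserver.adventureworks.com", "tcp", 51550),
}

def resolve_alias(name):
    """Return the (server, protocol, port) registered for an alias name."""
    return ALIASES[name]

server, protocol, port = resolve_alias("AdvDev")
print(f"{protocol}:{server},{port}")  # → tcp:devserver.adventureworks.com,51550
```

If the development server moves or changes port, only the alias entry changes; the client application keeps connecting to "AdvDev".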
Other SQL Server Tools

Key Points

SQL Server provides a rich set of tools and utilities to make working with the product easier.

The most commonly used tools, and their purposes, are:

• SQL Server Profiler – Traces activity from client applications to SQL Server. Supports both the database engine and Analysis Services.
• Database Engine Tuning Advisor – Designs indexes and statistics to improve database performance, based on analysis of trace workloads.
• Master Data Services Configuration Manager – Configures and manages SQL Server Master Data Services.
• Reporting Services Configuration Manager – Configures and manages SQL Server Reporting Services.
• Data Quality Services Client – Configures and manages Data Quality Services knowledge bases and projects.
• SQL Server Error and Usage Reporting – Configures the level of automated reporting back to the SQL Server product team about errors that occur and about usage of different aspects of the product.
• PowerShell Provider – Allows configuring and querying SQL Server by using PowerShell.
• SQL Server Management Objects (SMO) – Provides a detailed .NET-based library for working with management aspects of SQL Server directly from application code.
Demonstration 3A: Using SQL Server Profiler

Demonstration Steps

1. If Demonstration 2A was not performed, revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, ensure that Server type is set to Database Engine.
4. In the Server name text box, type (local), in the Authentication drop-down list, select Windows Authentication, and click Connect.
5. From the Tools menu, click SQL Server Profiler.
6. In the Connect to Server window, ensure that Server type is set to Database Engine.
7. In the Server name text box, type (local), in the Authentication drop-down list, select Windows Authentication, and click Connect.
8. In the Trace Properties window, click Run. Note that this starts a new trace with the default options.
9. Switch to SQL Server Management Studio, and click the New Query toolbar icon.
10. In the Query window, type the query shown in the snippet below, and click the Execute toolbar icon:

USE AdventureWorks;
GO
SELECT * FROM Person.Contact ORDER BY FirstName;
GO

11. Switch to SQL Server Profiler. Note the statement trace occurring in SQL Server Profiler, and then from the File menu, click Stop Trace.
12. In the Results grid, click individual statements to see the detail shown in the lower pane.
13. Close SQL Server Management Studio and SQL Server Profiler without saving any changes.

Question: What could you use captured trace files for?
Lab 1: Introduction to SQL Server and Its Toolset

Lab Setup

For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:

• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.

Lab Scenario

AdventureWorks is a global manufacturer, wholesaler, and retailer of cycle products. The owners of the company have decided to start a new direct marketing arm of the company. It has been created as a new company named Proseware, Inc. Even though it has been set up as a separate company, it will receive some IT-related services from the existing AdventureWorks company and will be provided with a subset of the corporate AdventureWorks data. The existing AdventureWorks company SQL Server platform has been moved to a new server that is capable of supporting both the existing workload and the workload from the new company. In this lab, you will ensure that the additional instance of SQL Server has been configured appropriately and make a number of additional required configuration changes.

Exercise 1: Verify SQL Server Component Installation

A new instance of SQL Server has been installed by the IT department at AdventureWorks. It will be used by the new direct marketing company. The SQL Server named instance is called MKTG. In the first exercise, you need to verify that the required SQL Server components have been installed.
The main tasks for this exercise are as follows:

1. Check that the Database Engine and Reporting Services have been installed for the MKTG instance.
2. Note the services that are installed for the default instance.
3. Ensure that all required services, including SQL Server Agent, are started and set to autostart for both instances.

Task 1: Check that the Database Engine and Reporting Services have been installed for the MKTG instance

• Open SQL Server Configuration Manager.
• Check the installed list of services for the MKTG instance and ensure that the database engine and Reporting Services have been installed for the MKTG instance.

Task 2: Note the services that are installed for the default instance

• Note the list of services that are installed for the default instance.

Task 3: Ensure that all required services including SQL Server Agent are started and set to autostart for both instances

• Ensure that all the MKTG services are started and set to autostart. (Ignore the Full Text Filter Daemon at this time.)
• Ensure that all the services for the default instance are set to autostart. (Ignore the Full Text Filter Daemon at this time.)

Results: After this exercise, you have checked that the required SQL Server services are installed, started, and configured to autostart.
Exercise 2: Alter Service Accounts for New Instance

Scenario

The SQL Server services for the MKTG instance have been configured to execute under the AdventureWorks\SQLService service account. In this exercise, you will configure the services to execute under the AdventureWorks\PWService service account.

The main tasks for this exercise are as follows:

1. Change the service account for the MKTG database engine.
2. Change the service account for the MKTG SQL Server Agent.

Task 1: Change the service account for the MKTG database engine

• Change the service account for the MKTG database engine service to AdventureWorks\PWService using the properties page for the service.

Task 2: Change the service account for the MKTG SQL Server Agent

• Change the service account for the MKTG SQL Server Agent service to AdventureWorks\PWService using the properties page for the service, and then restart the service.

Results: After this exercise, you have configured the service accounts for the MKTG instance.
Exercise 3: Enable Named Pipes Protocol for Both Instances

Scenario

Client applications that are installed on the server will connect to the database engine using the named pipes protocol. In this exercise, you will ensure that the named pipes protocol is enabled for both database engine instances.

The main tasks for this exercise are as follows:

1. Enable the named pipes protocol for the default instance.
2. Enable the named pipes protocol for the MKTG instance.
3. Restart the database engine services for both instances.

Task 1: Enable the named pipes protocol for the default instance

• If necessary, enable the named pipes protocol for the default database engine instance using the Protocols window.

Task 2: Enable the named pipes protocol for the MKTG instance

• If necessary, enable the named pipes protocol for the MKTG database engine instance using the Protocols window.

Task 3: Restart both database engine services

• If necessary, restart the default database engine instance.
• If necessary, restart the MKTG database engine instance.
• Check to ensure that both instances have been restarted successfully.

Results: After this exercise, you should have ensured that the named pipes protocol is enabled for both database engine instances.
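For reference, local named-pipe connections follow a fixed naming pattern: the default instance listens on \\.\pipe\sql\query, and a named instance such as MKTG on \\.\pipe\MSSQL$MKTG\sql\query. A sketch of that pattern (Python, for illustration; the helper name is hypothetical):

```python
from typing import Optional

def sql_server_pipe_name(instance: Optional[str] = None) -> str:
    r"""Build the default local named-pipe name for a SQL Server instance.

    Default instance: \\.\pipe\sql\query
    Named instance:   \\.\pipe\MSSQL$<instance>\sql\query
    The helper name is hypothetical; the patterns above are SQL Server's
    default pipe names.
    """
    if instance is None:
        return r"\\.\pipe\sql\query"
    return r"\\.\pipe\MSSQL$" + instance + r"\sql\query"

print(sql_server_pipe_name("MKTG"))  # → \\.\pipe\MSSQL$MKTG\sql\query
```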
Exercise 4: Create an Alias for AdvDev

Scenario

One badly-written monitoring application has been installed on the server. Unfortunately, it requires a hard-coded server name of AdvDev. In this exercise, you will configure and test an alias for AdvDev that points to the MKTG instance, using the named pipes protocol. Both 32-bit and 64-bit aliases should be configured. You will use SQL Server Management Studio to test the alias once it has been configured.

The main tasks for this exercise are as follows:

1. Create a 32-bit alias (AdvDev) for the MKTG instance.
2. Create a 64-bit alias (AdvDev) for the MKTG instance.
3. Use SQL Server Management Studio to connect to the alias to ensure it works as expected.

Task 1: Create a 32-bit alias (AdvDev) for the MKTG instance

• Create a 32-bit alias for the MKTG instance. Call the alias AdvDev and connect via named pipes. Use the server name ".\MKTG".
Task 2: Create a 64-bit alias (AdvDev) for the MKTG instance

• Create a 64-bit alias for the MKTG instance. Call the alias AdvDev and connect via named pipes. Use the server name ".\MKTG".

Task 3: Use SQL Server Management Studio to connect to the alias to ensure it works as expected

• Open SQL Server Management Studio.
• Connect to the AdvDev alias.

Results: After this exercise, you should have created and tested an alias for the named instance.
Challenge Exercise 5: Ensure SQL Browser Is Disabled and Configure a Fixed TCP/IP Port (Only if time permits)

Scenario

Client applications will need to connect to the MKTG database engine instance via the TCP/IP protocol. Because their connections will need to traverse a firewall, the port used for connections cannot be configured as a dynamic port: the port number must not change. Corporate policy at AdventureWorks is that named instances should be accessed via fixed TCP ports and that the SQL Browser service should be disabled. In this exercise, you will make configuration changes to comply with these requirements. A firewall exception has already been created for port 51550, for use with the MKTG database engine instance.

The main tasks for this exercise are as follows:

1. Configure the TCP port for the MKTG database engine instance to 51550.
2. Disable the SQL Browser service.

Task 1: Configure the TCP port for the MKTG database engine instance to 51550

• Using the property page for the TCP/IP server protocol, configure the use of the fixed port 51550. (Make sure that you clear the dynamic port.)
• Restart the MKTG database engine instance.
• Ensure that the MKTG database engine instance has been restarted successfully.

Task 2: Disable the SQL Browser service

• Stop the SQL Browser service.
• Set the Start Mode for the SQL Browser service to Disabled.

Results: After this exercise, you will have configured a fixed TCP port for the MKTG database engine instance and disabled the SQL Browser service.
Module Review and Takeaways

Review Questions

1. What is the difference between a SQL Server version and an edition?
2. What is the purpose of SQL Server Data Tools?
3. Does Visual Studio need to be installed before SSDT?

Best Practices

1. Ensure that Developer edition licenses are not used in production environments.
2. Develop using the least privileges possible, to avoid accidentally building applications that will not run for standard users.
3. If using an offline version of Books Online, ensure it is kept up to date.
4. Ensure that service accounts are provisioned with the least workable permissions.
Module 2
Preparing Systems for SQL Server 2012

Contents:

Lesson 1: Overview of SQL Server Architecture
Lesson 2: Planning Server Resource Requirements
Lesson 3: Pre-installation Testing for SQL Server
Lab 2: Preparing Systems for SQL Server
Module Overview

Before you start to deploy Microsoft® SQL Server®, it is important to plan appropriately for the deployment. As with almost any type of project, the installation work is always easier once good planning has occurred.

To understand the requirements for SQL Server, you first need to gain an understanding of its architecture. In this module, you will see how SQL Server is structured and the requirements it places on the underlying server platform.

Rather than implementing new hardware and software in a server, installing SQL Server, and then hoping that all will work as expected, a better approach is to test the server platform before deploying SQL Server. In this module, you will also see how to use tools that allow you to pre-stress the server platform to find out whether it really is capable of supporting SQL Server applications. An approach that involves pre-testing can help to avoid the need to sort out server platform issues once SQL Server is deployed into production.

Objectives

After completing this module, you will be able to:

• Describe the SQL Server architecture.
• Plan for server resource requirements.
• Conduct pre-installation stress testing for SQL Server.
Lesson 1
Overview of SQL Server Architecture

Before you start to look at the resources that SQL Server requires from the underlying server platform, you need to gain an understanding of how SQL Server functions, so that you can understand why each of the resource requirements exists. To be able to read SQL Server documentation, you also need to become familiar with some of the terminology used to describe how the product functions.

The most important resources that SQL Server utilizes from a server platform are CPU, memory, and I/O. In this lesson, you will see how SQL Server is structured and how it uses each of these resources.

Objectives

After completing this lesson, you will be able to:

• Describe the SQL Server architecture.
• Explain CPU usage by SQL Server.
• Describe the role of parallelism.
• Explain how 32-bit and 64-bit servers differ with respect to SQL Server.
• Describe how SQL Server uses memory.
• Describe the difference between physical and logical I/O.
SQL Server Architecture

Key Points

SQL Server is constructed from many small components that work together. Three general categories of components exist within the database engine and are structured as layers: Query Execution (also referred to as the Relational Engine), the Storage Engine, and SQL OS.

Query Execution Layer

As well as managing the query optimization process, the Query Execution layer manages connections and security. A series of sub-components help it to work out how to execute your queries:

• The Parser checks that you have followed the rules of the T-SQL language and outputs a syntax tree, which is a simplified representation of the queries to be executed. The syntax tree describes what you want to achieve in your queries.

• The Algebrizer converts the syntax tree into a relational algebra tree, where operations are represented by logic objects rather than words. The aim of this phase is to take the list of what you want to achieve and convert it to a logical series of operations that represent the work that needs to be performed.

• The Query Optimizer then considers the different ways that your queries could be executed and finds an acceptable plan, based on the costs of each operation. The costs are based on the required operation and the volume of data that needs to be processed, which is estimated from the distribution statistics. For example, the Query Optimizer considers the data that needs to be retrieved from a table and the indexes that are available on the table, to decide how to access the data in the table. It is important to realize that the Query Optimizer does not always look for the lowest cost plan. In some situations, finding the lowest cost plan might take too long. Instead, the Query Optimizer finds a plan that it considers satisfactory. The Query Optimizer also manages a query plan cache so that it can avoid the overhead of performing all this work when a similar query is received for execution.
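The plan cache mentioned above can be inspected directly through the dynamic management views. As a sketch (it assumes you have VIEW SERVER STATE permission on the instance), the following query lists the five most frequently reused plans together with the text of the statements that produced them:

```sql
-- Top five most-reused plans in the plan cache,
-- together with the text of the originating statements.
SELECT TOP (5)
       cp.usecounts,   -- how often the cached plan has been reused
       cp.objtype,     -- e.g. Adhoc, Prepared, Proc
       st.text         -- the originating T-SQL text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```

High usecounts values indicate that the Query Optimizer is successfully avoiding repeated compilation work.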
Storage Engine Layer

The Storage Engine layer manages the data that is held within databases. Its main responsibilities are to manage how data is stored both on disk and in memory, to manage how data is cached for reuse, and to maintain the consistency of data through locking and transactions. The main sub-components of the Storage Engine layer are as follows:

• The Access Methods component manages how data is accessed. For example, the Access Methods component works with scans, seeks, and lookups.

• The Page Cache manages the storage of cached copies of data pages. Caching of data pages is used to minimize the time it takes to access them. The Page Cache places data pages into memory so that they are present when needed for query execution.

• The Locking and Transaction Management components work together to maintain the consistency of your data, including the maintenance of transactional integrity with the help of the database log file.
SQL OS Layer

SQL OS is the layer of SQL Server that provides operating system functionality to the SQL Server components. All SQL Server components use programming interfaces provided by SQL OS to access memory, to schedule tasks, or to perform I/O.

The abstraction layer provided by SQL OS avoids the need for resource-related code to be present throughout the SQL Server database engine code. The most important functions provided by this layer are memory management and scheduling. These two aspects are discussed in more detail later in this lesson.

Question: Why does SQL Server need to optimize queries?

Question: What do you imagine that cost-based optimization means, and can you imagine other ways that queries could be optimized (apart from cost-based)?
CPU Usage by SQL Server

Key Points

All work that is performed within the Windows® operating system is based on the execution of threads. Windows uses preemptive scheduling of threads to maintain control. Rather than requiring threads to voluntarily give up control of the CPU, preemptive systems use a clock to interrupt threads when their allocated share of CPU time has elapsed. Threads are considered to have encountered a context switch when they are preempted.

SQL Server Threads

SQL Server retrieves threads from Windows. A SQL Server configuration setting (max worker threads) determines how many threads will be retrieved. SQL Server has its own internal scheduling system, separate from the scheduling that is performed by the operating system. Instead of using Windows threads directly, SQL Server creates a pool of worker threads that are mapped to Windows threads whenever work needs to be performed.

Whenever a SQL Server component needs to execute code, the component creates a task, which represents the unit of work to be done. For example, if you send a batch of T-SQL commands to the server, it is likely that your batch of commands will be executed within a task.

When a task is created, it is assigned the next available worker thread that is not in use. If no worker threads are available, SQL Server will try to retrieve another Windows thread, up to the point that the max worker threads configuration limit is reached. At that point, the new task would need to wait for a worker thread.

All tasks are scheduled by the SQL Server scheduler until they complete.
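The internal schedulers and their worker threads can be observed through a dynamic management view. A minimal sketch (again assuming VIEW SERVER STATE permission):

```sql
-- One row per scheduler; VISIBLE ONLINE schedulers are the ones
-- available for scheduling user tasks.
SELECT scheduler_id,
       cpu_id,
       status,
       current_tasks_count,    -- tasks currently assigned to this scheduler
       runnable_tasks_count,   -- tasks waiting for a share of CPU time
       current_workers_count   -- worker threads owned by this scheduler
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';
```

A consistently high runnable_tasks_count suggests tasks are queuing for CPU rather than for external resources.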
Affinity Mask

Schedulers can be enabled and disabled by setting the CPU affinity mask on the instance. The affinity mask is a configurable bitmap that determines which CPUs on the host system should be used by SQL Server, and it can be changed without rebooting. By default, SQL Server assumes that it can use all CPUs on the host system. While it is possible to configure the affinity mask bitmap directly by using sp_configure, use the properties window for the server instance in SSMS to modify processor affinity.
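For completeness, a sketch of the sp_configure approach follows. The bitmap value 3 (binary 11) is purely illustrative; it would restrict SQL Server to CPUs 0 and 1. As noted above, the SSMS properties window is the recommended way to change this setting.

```sql
-- Illustrative only: restrict SQL Server to CPUs 0 and 1 (bitmap 0x3).
-- Prefer the server properties window in SSMS for this change.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 3;
RECONFIGURE;
```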
Note Whenever the term CPU is used here in relation to SQL Server internal architecture, it refers to any logical CPU, regardless of whether core or hyperthreading CPUs are being used. Note also that SQL Server licensing is currently based on physical CPU sockets rather than on logical CPUs. If affinity settings are used to violate SQL Server licensing limits, the system detects this at startup and logs details about the violation in the event log.
Waiting for Resources

One concept that might be difficult to grasp at first is that most SQL Server tasks spend most of their time waiting for something external to happen. Most of the time, they are waiting for I/O or for the release of locks, but they can also wait for other system resources.

When a task needs to wait for a resource, it is placed on a waiting list until the resource is available. When the resource becomes available, the task is signaled that it can continue; however, it still needs to wait for another share of CPU time. This scheduling of tasks onto CPUs is a function of SQL OS.

SQL Server keeps detailed internal records of how long tasks spend waiting and of the types of resources they are waiting for. You can see these details by querying the following system views:

sys.dm_os_waiting_tasks
sys.dm_os_wait_stats

The use of these views will be described in Module 18.
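As a preview of what those views provide, the following sketch ranks the cumulative wait types on the instance by total wait time:

```sql
-- Cumulative wait statistics since the last restart (or last reset),
-- ordered by total wait time.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,         -- total time spent waiting, including signal time
       signal_wait_time_ms   -- time spent waiting for CPU after being signaled
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```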
Question: Why would analyzing waits be an important instrument for monitoring SQL Server performance?
Parallelism

Key Points

SQL Server normally executes queries using sequential plans on a single task. To reduce overall execution time, SQL Server can decide to distribute a query over several tasks so that it can execute them in parallel.

Parallel Execution Plans

Parallel execution involves the overhead of synchronizing and monitoring the tasks. Because of this overhead, SQL Server only considers parallel plans for expensive queries, where the advantages outweigh the additional overhead.

The Query Optimizer decides whether a parallel plan should be used, based on two aspects of the query and of the system configuration:

• A value called the Maximum Degree of Parallelism (MAXDOP) determines a limit for how many CPUs can be used to execute a query.

• Another value called the Cost Threshold for Parallelism determines the cost that a query must meet before a parallel query plan will even be considered.

Even if a query is expensive enough for a parallel plan to be considered, SQL Server might still decide to use a sequential plan that is lower in overall cost.
Controlling Parallelism

The Query Optimizer creates the parallel plan but is not involved in deciding the MAXDOP value. The MAXDOP value can be configured at the server level and can also be overridden at the query level via a query hint. Even if the Query Optimizer creates a parallel query plan, the Execution Engine might decide to use only a single CPU, based on the resources available when it is time to execute the query.

Note In earlier versions of SQL Server, it was common to disable parallel queries on systems that were primarily used for transaction processing. This limitation was implemented by setting the server-level MAXDOP value to 1. In current versions of SQL Server, this is no longer generally considered a good practice.
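The two levels of control can be sketched as follows. The configuration values shown are illustrative, not recommendations, and Sales.OrderDetails is a hypothetical table used only to show the query hint syntax:

```sql
-- Server-wide settings (illustrative values only):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;

-- Per-query override via a query hint
-- (Sales.OrderDetails is a hypothetical table):
SELECT COUNT(*)
FROM Sales.OrderDetails
OPTION (MAXDOP 1);
```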
Question: Why would parallel plans involve an overhead?
32 bit vs. 64 bit Servers

Key Points

Virtual Address Space (VAS) is the total amount of memory that an application such as SQL Server can possibly refer to in Windows. The VAS depends upon the configuration of the server.

32 bit Systems

32 bit systems have a VAS of 4GB. By default, half the address space (2GB) is reserved for the system and is known as the kernel mode address space. The other half of the VAS (2GB) is available for applications to use and is known as the user mode address space. It is possible to change this proportion by using a /3GB switch in the boot.ini file of the operating system (on earlier operating systems) or by using the bcdedit program to edit the Boot Configuration Data store (on more recent operating systems). Once the /3GB switch has been configured, 1GB of VAS is allocated for the kernel and 3GB of the VAS is allocated for applications.

Note More fine-grained control of the allocated space can be achieved by using the /USERVA switch instead of the /3GB switch. The /USERVA switch allows the configuration of any value between 2GB and 3GB for user applications.
64 bit Systems

Database systems tend to work with large amounts of data and need large amounts of memory so that they can serve large numbers of concurrent users. SQL Server needs large amounts of memory for caching queries and data pages. A limit of 2GB to 3GB of VAS is a major constraint on most current systems. As is the case with most database engines, SQL Server is best installed on a 64 bit version of Windows instead of a 32 bit version.

It is best to install a 64 bit version of SQL Server on a 64 bit version of Windows. Full 64 bit systems offer a VAS of 8TB.

It is possible (but less desirable) to install a 32 bit version of SQL Server on a 64 bit version of Windows. This configuration provides a full 4GB of VAS for 32 bit applications and is based on the Windows on Windows (WOW64) emulation technology.
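To confirm which edition and platform an installed instance is actually running, you can query the server properties; a minimal sketch:

```sql
-- @@VERSION includes the platform, e.g. "(X64)" for a 64 bit instance.
SELECT SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('ProductVersion') AS product_version,
       @@VERSION                        AS version_string;
```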
64 bit System Limitations

Not all current systems can be implemented as 64 bit systems. The most common limitation is the availability of data providers. If special data providers are needed to connect SQL Server to other systems (particularly through the linked servers feature), it is important to check that the required data providers are available in 64 bit versions. For example, the JET engine provider is currently only available as a 32 bit provider.

Question: Why is data caching so important for database engines like SQL Server?
Overview of SQL Server Memory

Key Points

The main memory object of SQL Server is known as the Buffer Pool. The Buffer Pool is divided into 8KB pages, the same size as data pages within a database. The Buffer Pool comprises three sections:

• Free Pages: Pages that are not yet used and are kept to satisfy new memory requests.

• Stolen Pages: Pages that are used (formally referred to as stolen) by other SQL Server components such as the Query Optimizer and the Storage Engine.

• Data Cache: Pages that are used for caching database data pages. All data operations are performed in the Data Cache. If a query needs to read data from a specific data page, the data page is brought into the Data Cache first. Also, data modification is only ever performed in memory; modifications are never performed directly on the data files. Changes that are made to pages cause the pages to be considered dirty, and the dirty pages are written to the database by a background process known as Checkpoint.

The Data Cache implements a least recently used (LRU) algorithm to determine candidate pages to be dropped from the cache as space is needed, after they have been flushed to disk (if necessary) by the Checkpoint process. The process that performs the task of dropping pages is known as the Lazy Writer.

The Lazy Writer performs two core functions. By removing pages from the Buffer Cache, the Lazy Writer attempts to keep sufficient free space in the Buffer Cache for SQL Server to operate. The Lazy Writer also monitors the overall size of the Buffer Cache to avoid taking too much memory from the Windows operating system.
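The current contents of the Data Cache can be examined per database. A sketch (assuming VIEW SERVER STATE permission; on a busy instance this view can return many rows):

```sql
-- Number of 8KB data pages currently cached per database,
-- and the equivalent size in MB.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS cached_pages,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;
```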
SQL Server Memory and Windows Memory

The memory manager within SQL OS allocates and controls the memory used by SQL Server. It does this by checking the available memory on the Windows system and calculating a target memory value: the maximum memory that SQL Server can use at that point in time without causing a memory shortage at the Windows operating system level. SQL Server is designed to respond to signals from the Windows operating system that indicate a change in memory needs.

As long as SQL Server stays within the target memory, it requests additional memory from Windows when needed. If the target memory value is reached, the memory manager answers memory requests from components by freeing up the memory of other components. This can involve evicting pages from caches.

The target memory value can be controlled via the Min and Max Server Memory options.
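Those options are server configuration settings; a sketch of setting them with sp_configure follows. The values shown are illustrative only; appropriate limits depend on the total memory of the server and on the other services running on it:

```sql
-- Illustrative values only: reserve between 1GB and 6GB for the instance.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;
```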
Question: What is the purpose of an LRU algorithm?
Physical vs. Logical I/O

Key Points

A logical I/O occurs whenever a task retrieves a data page from the Buffer Cache. When the requested page is not present in the Buffer Cache, it must first be read into the Buffer Cache from the database. This database read operation is known as a physical I/O.

Physical I/O Minimization

From an overall system performance point of view, two I/O aspects must be minimized:

• You need to minimize the number of physical I/O operations.
• You need to minimize the time taken by each I/O operation that is still required.

Minimizing the number of physical I/O operations is accomplished by:

• Providing enough memory for the data cache.
• Optimizing the physical and logical database layout, including indexes.
• Optimizing queries to request as few I/O operations as possible.

One of the major goals of query optimization is to reduce the number of logical I/O operations. A side effect of this is a reduction in the number of physical I/O operations.

Note Logical I/O counts can be difficult to interpret, as certain operations can cause the counts to be artificially inflated due to multiple accesses to the same page. In general, however, lower counts are better than higher counts.
Monitoring Query I/O Operations

Both logical and physical I/O operation counts can be seen for each query by setting the following connection option:

SET STATISTICS IO ON;

The overall physical I/O operations that are occurring on the system can be seen by querying the sys.dm_io_virtual_file_stats system function. The values returned by this function are cumulative from the point that the system was last restarted.
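A sketch of using that function, joined to the file catalog so that each row is labeled with its database and physical file name:

```sql
-- Cumulative I/O per database file since the last restart, including
-- stall times, which indicate how long tasks waited on I/O.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```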
Question: Why should the logical and not the physical I/O be optimized when optimizing queries?
Demonstration 1A: CPU and Memory Configurations in SSMS

Demonstration Steps

1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.

2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and then click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_02_PRJ\10775A_02_PRJ.ssmssln and click Open.

3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.

4. Open the 11 – Demonstration 1A.sql script file.

5. Follow the instructions contained within the comments of the script file.
Lesson 2

Planning Server Resource Requirements

Now that you have an understanding of how SQL Server executes queries, it is time to plan appropriate resources to support the installation of SQL Server on a server platform.

Installing SQL Server will be much easier if you have already planned the installation appropriately.

Objectives

After completing this lesson, you will be able to:

• Describe the planning process.
• Plan CPU requirements.
• Plan memory requirements.
• Plan network requirements.
• Plan storage and I/O requirements.
Introduction to Planning Server Resource Requirements

Key Points

Planning server resource requirements is not an easy task. There is no simple formula that allows you to calculate resource requirements based on measures such as database size and the number of connections, even though you may see references to such formulas at times. Some application vendors provide sizing tools that can provide a starting point, but even these tools need to be used with caution.

Application Provider

As a first step, it is always good to speak with the provider of the application that needs to be installed on the server. If the provider is an Independent Software Vendor (ISV), the provider will often have reference installations available that can be used as a starting point for sizing resource requirements. In particular, case studies are often useful, as most include documentation of the server platforms that they were performed on.
Simulated Production Testing

The second phase of resource planning should include the testing of different configurations. For success in this phase, it is important to have defined the goals that need to be achieved. These goals should be defined at a business level, such as "How long should it take to save a new order?", rather than at a system level, such as "How long should an individual I/O operation take?".

Part of this testing phase should also include testing for predicted future growth. Do not just test the systems in the proposed initial configuration. Test how the systems perform with a database populated to the size that the database is likely to reach during the life of the server platform.
Monitoring Production

The planning process should not end when the system is installed. It is important to establish a process of ongoing monitoring and corrective actions, to make sure that the stated goals are still being met.

Question: Why is it important to perform tests for capacity planning?
Discussion: Previous Exposure to Resource Planning

Key Points

Resource planning is an important part of new installations.

Question: What is your experience with planning of new systems?

Question: How successful was the planning?
Planning CPU Requirements

Key Points

CPU utilization largely depends upon the types of queries that are running on the system. Processor planning is often considered relatively straightforward, in that few system architectures provide fine-grained control of the available processor resources. Testing with realistic workloads is the best option.

Note Many technical whitepapers suggest that a sustained 50 percent CPU load is acceptable. The experience of the course authors is that average utilizations above 30 percent lead to very sluggish systems when combined with common workloads. Peaks that do not last for long are acceptable, regardless of the target percentage.
Increasing the number of available CPUs will provide SQL Server with better options for creating parallel query plans. Even without parallel query plans, SQL Server workloads tend to make good use of multiple processors when working with simple query workloads from a large number of concurrent users. Parallel query plans are particularly useful when large amounts of data are being scanned within large data warehouses.

Try to ensure that your server is dedicated to SQL Server whenever possible. Most servers that are running production workloads on SQL Server should have no other significant services running on the same system. This particularly applies to other server applications such as Microsoft Exchange Server.
NUMA

Many new systems are based on Non-Uniform Memory Access (NUMA) architectures. In a traditional symmetric multiprocessing (SMP) system, all CPUs and memory are bound to a single system bus. The bus can become a bottleneck when additional CPUs are added. On a NUMA-based system, each set of CPUs has its own bus, complete with local memory. In some systems, the local bus might also include separate I/O channels. These CPU sets are called NUMA nodes. Each NUMA node can access the memory of other nodes, but access to its own local memory is much faster. The best performance is achieved if the CPUs mostly access their own local memory. Windows and SQL Server are both NUMA aware and try to take advantage of this.

Optimal NUMA configuration is highly dependent on the hardware. Special settings in the system BIOS might be needed to achieve optimal performance. It is crucial to check with the hardware vendor for the optimal configuration for SQL Server on the specific NUMA-based hardware.
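The NUMA layout that SQL OS has detected can be inspected from within SQL Server; a sketch (assuming VIEW SERVER STATE permission):

```sql
-- One row per NUMA node known to SQL OS; on non-NUMA hardware,
-- a single node (plus an internal node for the DAC) is reported.
SELECT node_id,
       node_state_desc,
       memory_node_id,
       online_scheduler_count,
       active_worker_count
FROM sys.dm_os_nodes;
```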
Planning Memory Requirements

Key Points

You saw earlier in the module how dependent SQL Server is on available memory when trying to achieve high performance. Caches are used to reduce I/O and CPU usage, and to store query plans so that the cost of compiling queries to produce query plans can be avoided when queries are reused.

SQL Server also uses memory while queries are being executed. Each user that is executing a query needs a separate memory space, known as an execution context, that holds the local memory-based data for that user.

A lack of memory can appear to be a lack of other resources. For example, a lack of memory could cause the caching mechanisms to work ineffectively. This could result in high CPU and I/O load, as the system constantly recompiles queries that have been evicted from the Plan Cache, or constantly re-reads data from the database after the data pages have been evicted from the Data Cache. This means that a lack of available memory could appear, on the surface, to be a lack of CPU or I/O capacity.
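When investigating a suspected memory shortage, a useful starting point is the operating system memory picture as SQL Server sees it. A minimal sketch:

```sql
-- Physical memory as seen by the operating system, plus a state
-- description indicating whether Windows considers memory to be low.
SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;
```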
Planning Memory Requirements

Planning the memory required on a system involves planning not only for the SQL Server instance, but also for other services running on the system. Software vendors can often supply guidelines about the memory requirements of their products. Every system behaves differently, even when running the same application, so the experiences of similar installations can only be used as guidelines.

As mentioned earlier in the lesson, the system must be tested in a realistic scenario, with all the other services running that will be installed on the production system.

Note Some external applications might run only from time to time but request a lot of memory when they are first started. SQL Server Integration Services and anti-virus scanning programs are common examples of these.

Question: What are some of the potential symptoms of memory shortage?
Planning Network Requirements

Key Points

The required network throughput needs to be planned and tested. As mentioned for other resources, testing the requirements in a simulated production environment is important. For many servers, a single network adapter will not provide sufficient throughput, so you should consider the use of multiple network adapters.

Network Adapter Throughput

When reviewing the specifications of network adapters, keep in mind that most low-level network protocols (such as the Ethernet protocol) work on a basis where collisions can occur but are detected and recovered from. This means that these networks work well when lightly loaded but quickly become unusable as loads increase, because collisions become frequent and more time is spent recovering from collisions than performing network I/O. As an example, most Ethernet-based networks start to saturate at around 10% of their stated speed.
Other Services and Applications

Besides planning and testing the network throughput needed by the application servers and clients, it is also important to plan for other operations that need the network on the system. These operations are often forgotten while planning and evaluating new systems, but operations such as performing backups to network shares or moving backup devices to backup storage produce high volumes of network I/O. This can slow down network connections to application servers and clients. If this slowdown becomes severe, client applications can start to suffer timeouts and failures.

Therefore, such operations must also be planned for and tested. Allocating a dedicated network connection for such administrative purposes is an option worth considering.

Other SQL Server features, such as database mirroring and replication, might also need dedicated connections to perform effectively.

Note Database mirroring and replication are advanced features outside the scope of this course.

Question: Why might backups interrupt or slow down user workloads?
Planning Storage and I/O Requirements

Key Points

Planning and testing the storage is one of the most important tasks during the planning phase. Current I/O systems are very complex in nature, and planning and testing will often involve a team of people, each with a specific skillset. The performance of SQL Server is tightly coupled to the performance of the I/O subsystem that it is using.

Determining Requirements

In the first phase of planning, the requirements of the application must be determined, including the I/O patterns that need to be satisfied. These I/O patterns include the frequency and size of the reads and writes sent by the application. As a general rule, OLTP systems produce a high number of random I/O operations on the data files and sequential write operations on database log files. By comparison, data warehouse based applications tend to generate large scans on data files.
Storage Styles

The second planning phase involves determining the style of storage to be used. With direct attached storage (DAS), it is easier to get good, predictable performance. On storage area network (SAN) systems, more work is often required to get good performance, but SAN storage typically provides a wide variety of management capabilities.

One particular challenge for SQL Server administrators is that SAN administrators are generally more concerned with the disk space that is allocated to applications than with the performance requirements of individual files. Rather than trying to discuss file layouts with a SAN administrator, try to concentrate on discussing your performance requirements for specific files. Leave the decisions about how to achieve those goals to the SAN administrator. That is, focus in these discussions on what is needed rather than on how it can be achieved.
RAID Systems
On SAN-based systems, you will not often be concerned with the RAID levels in use. If you have specified the required performance on a per-file basis, the SAN administrator will select appropriate RAID levels and physical disk layouts to achieve it.
For DAS storage, become familiar with the different RAID levels. While other RAID levels exist, RAID levels 1, 5, and 10 are the most common on SQL Server systems.
Number of Spindles
For most current systems, the number of drives (or spindles, even though the term is now somewhat dated) matters more than the size of the disks. It is easy to find a single large disk that will hold a substantial database, but a single large disk will often not be able to provide sufficient I/O operations per second (IOPS) or enough data throughput (MB/sec) to be workable. Solid state drive (SSD) based systems are quickly changing the available options in this area.
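To make the tradeoff concrete, here is a minimal sketch of sizing by IOPS rather than by capacity. The per-disk figure and workload target are illustrative assumptions, not vendor specifications or requirements from this course.

```python
import math

def spindles_needed(required_iops: int, iops_per_disk: int) -> int:
    """Number of physical drives needed to satisfy an IOPS target."""
    return math.ceil(required_iops / iops_per_disk)

# A 500 GB database would fit comfortably on one large disk, but if the
# workload needs 4,000 random IOPS and a single conventional drive delivers
# roughly 180 IOPS, capacity alone is not the constraint:
print(spindles_needed(4000, 180))  # 23 drives to meet the IOPS target
```

The same arithmetic explains why SSD-based storage changes the picture: with tens of thousands of IOPS per device, the spindle count stops being the limiting factor.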
Drive Caching
Read caching within disk drives is not particularly useful because SQL Server already manages its own caching, and it is unlikely that SQL Server will re-read a page that it has recently written unless the system is low on memory. Write caches can substantially improve SQL Server I/O performance, but make sure that the hardware cache guarantees writes, even after a system failure. Many drive write caches cannot survive failures and can lead to database corruption.
Note The placement of files on disk systems will be discussed in a later module.
Question: Why is it better to spread a database over several disks rather than one large disk?
Lesson 3
Pre-installation Testing for SQL Server
A common problem in database server deployments is that the server platform is delivered, installed, and immediately commissioned as soon as SQL Server is installed on it.
Many subtle problems can cause errors on server systems, particularly problems related to the I/O subsystems. These I/O subsystems are often very complex, and the interaction between their components can be hard to evaluate.
A preferred option is to perform pre-installation stress testing of the systems, using activity similar to that generated by SQL Server. Tools such as SQLIOSIM can generate representative I/O loads without the need to actually run SQL Server applications. In this lesson, you will see how to use SQLIOSIM for general testing and how to use another utility, SQLIO, for specific types of I/O testing.
Objectives
After completing this lesson, you will be able to:
• Perform pre-installation testing.
• Perform pre-checks of I/O subsystems.
• Work with SQLIOSIM.
• Work with SQLIO.
Overview of Pre-installation Testing
Key Points
Any testing that you perform prior to commissioning a new system should resemble the final production usage as closely as possible. In particular, ensure that database sizes and workloads are typical of what is expected over the life of the system.
Planning is not a one-off activity. After the first round of tests is complete, the results need to be fed back into another round of planning.
Documentation
One key aspect that is often lacking in many organizations is documentation of the planning and testing processes. When you need to work out, some time in the future, whether something about the system has changed, you will be glad that you documented the process and the details of every test case that you ran.
Question: Why is it important to document and archive the tests run on the system?
Perform Pre-checks of I/O Subsystems
Key Points
As with other aspects of pre-installation testing, it is important that the testing of the I/O subsystem is representative of the expected load. In particular, make sure that the types of I/O operations you are expecting are part of your tests.
Online transaction processing (OLTP) systems tend to have a large number of small random reads and writes. Online analytical processing (OLAP) systems tend to have larger and more sequential reads and writes than OLTP systems.
Do not test with single files. Make sure that the number of files you use in testing is similar to your production configuration.
Determining Saturation Points
For most resources, the saturation point is apparent when the time taken to retrieve results starts to climb while the amount of work being achieved stays flat.
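As an illustrative sketch of that pattern, the saturation point can be picked out of load-test samples programmatically. The sample figures and the thresholds (under 5% throughput gain, over 50% latency jump) are assumptions for illustration, not values from this course.

```python
def saturation_point(samples):
    """Return the offered load at which throughput stops growing while
    latency climbs sharply, or None if no saturation is seen."""
    for prev, curr in zip(samples, samples[1:]):
        throughput_flat = curr["mb_per_sec"] < prev["mb_per_sec"] * 1.05
        latency_climbing = curr["latency_ms"] > prev["latency_ms"] * 1.5
        if throughput_flat and latency_climbing:
            return curr["load"]
    return None

# Hypothetical results from stepping up the number of outstanding I/Os:
samples = [
    {"load": 2,  "mb_per_sec": 100, "latency_ms": 5},
    {"load": 4,  "mb_per_sec": 190, "latency_ms": 6},
    {"load": 8,  "mb_per_sec": 260, "latency_ms": 9},
    {"load": 16, "mb_per_sec": 265, "latency_ms": 30},  # flat throughput, latency spike
]
print(saturation_point(samples))  # 16
```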
Question: Why is it better to spread a database over several disks compared to one bigger disk?
Introducing SQLIOSIM
Key Points
SQLIOSIM is an unsupported utility that can be downloaded from http://download.microsoft.com. It is designed to simulate the activity generated by SQL Server without the need to have SQL Server installed, which makes it a good tool for pre-testing server systems that are targets for running SQL Server.
SQLIOSIM is a standalone tool that can be copied onto the system and executed; it does not need to be installed on the target system via an installer. It has both GUI and command-line execution options.
Note SQLIOSIM replaces a number of previous tools with names such as SQLStress and SQLIOStress.
While SQLIOSIM is useful for stress testing systems, it is not useful for general performance testing. The tasks that it performs vary between executions of the utility, so no attempt should be made to directly compare the output of multiple executions, particularly in terms of timing.
SQLIO is designed for repeatable performance testing and should be used instead of SQLIOSIM for that purpose.
Introducing SQLIO
Key Points
SQLIO is another unsupported utility that can be downloaded from http://download.microsoft.com. Unlike SQLIOSIM, which is used for non-repeatable stress testing, SQLIO is designed to create entirely repeatable I/O patterns. A configuration file determines the types of I/O operations to be tested, and SQLIO then tests those types of operations specifically.
A common way to use the two tools together is to use SQLIOSIM to find issues with certain types of I/O operations, and then use SQLIO to generate those specific problematic I/O types while attempting to resolve the issues. SQLIO is also a stand-alone tool that does not require SQL Server to be installed on the system; it is advisable to perform these tests before SQL Server is installed. SQLIO only checks one I/O type at a time, which makes interpreting the results the most important task.
Question: What types of I/O will SQL Server mainly produce against the data files of a typical OLTP system?
Demonstration 3A: Using SQLIOSIM and SQLIO
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_02_PRJ\10775A_02_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 31 – Demonstration 3A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lab 2: Preparing Systems for SQL Server
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. From the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_02_PRJ\10775A_02_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
You have reviewed the additional instance of SQL Server. A system administrator at AdventureWorks has expressed concern that the existing server may not have enough memory or I/O capacity to support this new SQL Server instance and is reviewing a new I/O subsystem. As the database administrator, you need to review the available server memory and the memory allocated to each of the existing SQL Server instances. You also need to ensure that the I/O subsystem of the new server is capable of running SQL Server and the required workload correctly.
Supporting Documentation
Required Memory Configuration
• 1.5GB reserved for the operating system.
• 60% of the remaining memory as the maximum value for the AdventureWorks server instance.
• 40% of the remaining memory as the maximum value for the Proseware server instance.
• Configure minimum memory as zero for both instances.
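The arithmetic behind these requirements can be sketched as follows. The 8 GB server total is an assumed size for illustration; the 1.5 GB reservation and the 60/40 split come from the requirements above.

```python
def apportion_memory(total_mb, os_reserved_mb=1536, shares=(0.60, 0.40)):
    """Split the memory remaining after the OS reservation into
    per-instance 'max server memory' values (in MB)."""
    remaining = total_mb - os_reserved_mb
    return [int(remaining * share) for share in shares]

# On an assumed 8 GB (8192 MB) server: 6656 MB remains after the OS
# reservation, split 60/40 between the two instances.
aw_max, proseware_max = apportion_memory(8192)
print(aw_max, proseware_max)  # 3993 2662
```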
Required SQLIOSIM Configuration
• Drive D with a 100MB data file that grows by 20MB increments to a 200MB maximum size.
• Drive L with a 50MB log file that grows by 10MB increments to a 100MB maximum size.
• Cycle Duration (sec) set to 60 seconds.
• Delete Files at Shutdown should be selected.
Required SQLIO Tests
• Drive to be tested is D.
• Test 64KB sequential reads for 60 seconds.
• Test 8KB random writes for 60 seconds.
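As a sketch, the two tests above might be expressed on the command line as follows. The test file name, size, thread count in param.txt, and the -o8 queue depth are assumptions; the drive, block sizes, access patterns, and 60-second durations come from the requirements above.

```
REM Assumed contents of param.txt (file path, threads, mask, file size in MB):
REM   D:\sqlio_test.dat 2 0x0 1024

REM 64KB sequential reads for 60 seconds
sqlio -kR -s60 -fsequential -b64 -o8 -LS -Fparam.txt

REM 8KB random writes for 60 seconds
sqlio -kW -s60 -frandom -b8 -o8 -LS -Fparam.txt
```

Here -kR/-kW select reads or writes, -s sets the duration in seconds, -f the access pattern, -b the block size in KB, -o the number of outstanding I/Os, and -LS requests latency statistics.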
Exercise 1: Adjust Memory Configuration
Scenario
The Adventure Works Marketing server has an existing default Microsoft SQL Server 2012 instance installed and the new MKTG instance. You need to check the total memory available on the server and how much memory has been allocated to each of the two existing SQL Server instances. You should then decide whether the memory allocation is appropriate and, if not, make the required changes to the memory configuration.
The main tasks for this exercise are as follows:
1. Check total server memory.
2. Check the memory allocated to the default instance.
3. Check the memory allocated to the MKTG instance.
4. Decide if the memory allocation is appropriate. If not, make the required changes to the memory configuration.
Task 1: Check total server memory
• Retrieve the installed memory (RAM) value from the properties of the computer.
Task 2: Check the memory allocated to the default instance
• Using the properties of the AdventureWorks server instance in SSMS, retrieve the minimum and maximum server memory settings.
Task 3: Check the memory allocated to the MKTG instance
• Using the properties of the Proseware server instance in SSMS, retrieve the minimum and maximum server memory settings.
Task 4: Decide if the memory allocation is appropriate. If not, make the required changes to the memory configuration
• Review the Required Memory Configuration in the Supporting Documentation.
• Alter the memory configuration for both SQL Server instances as per the requirements. You will need to work out how much memory should be used for all SQL Server instances and apportion the memory based on the requirements in the Supporting Documentation.
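The memory settings can be changed through the Memory page of the server properties dialog, or with sp_configure. A sketch for one instance follows; the 3993 MB value assumes an 8 GB server and the 60% AdventureWorks share from the Supporting Documentation, so substitute the values you calculated (and the 40% value on the Proseware instance).

```sql
-- 'max server memory' is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 3993;  -- assumed 8 GB server, 60% share
EXEC sp_configure 'min server memory (MB)', 0;
RECONFIGURE;
```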
Note While reducing the max server memory might require restarting SQL Server, there is no need to restart the servers at this point in the exercise.
Results: After this exercise, you should have configured the memory for the SQL Server instances.
Exercise 2: Pre-installation Stress Testing
Scenario
After you have reviewed the memory allocated on the server, you need to test whether the new I/O subsystem is capable of running SQL Server successfully. In this exercise, you will use the SQLIOSIM utility to stress test the I/O subsystem.
The main tasks for this exercise are as follows:
1. Configure SQLIOSIM.
2. Execute SQLIOSIM.
3. Review the results from executing SQLIOSIM.
Task 1: Configure SQLIOSIM
• Install SQLIOSIM from the file D:\10775A_Labs\10775A_02_PRJ\sqliosimx64.exe (make sure that you use the Run as administrator option).
• Configure SQLIOSIM as per the requirements in the Supporting Documentation.
Task 2: Execute SQLIOSIM
• Execute SQLIOSIM based upon the configured parameters.
Task 3: Review the results from executing SQLIOSIM
• If any errors are returned in red, review the errors.
• Locate the final summary for each of the drives and note the average I/O duration in milliseconds.
Results: After this exercise, you will have run a stress test using SQLIOSIM.
Challenge Exercise 3: Check Specific I/O Operations (Only if time permits)
Scenario
You have identified a specific list of I/O workloads and plan to test their performance on the new I/O subsystem. In this exercise, you will run the SQLIO utility to verify the I/O workloads.
The main tasks for this exercise are as follows:
1. Install the SQLIO Utility.
2. Configure and Execute the SQLIO Utility.
Task 1: Install the SQLIO Utility
• Install the SQLIO Utility from the file D:\10775A_Labs\10775A_02_PRJ\SQLIO.msi (choose to install for all users).
Task 2: Configure and Execute the SQLIO Utility
• Review the Supporting Documentation for details of the SQLIO tests to be performed.
• Configure the file C:\Program Files (x86)\SQLIO\param.txt as per the requirements in the Supporting Documentation.
• Execute the sqlio.exe program from within a command window to test the I/O types requested in the Supporting Documentation. For each test, record the IOPS and throughput achieved. Also note the minimum, maximum, and average latency for each test.
Results: After this exercise, you should have executed the SQLIO utility to test specific I/O characteristics.
Module Review and Takeaways
Review Questions
1. What is the main reason that SQL Server needs a large amount of memory?
2. Why is pre-installation planning and testing important?
Best Practices
1. Understand the architecture of SQL Server.
2. Plan memory, CPU, network, and I/O requirements for the specific system.
3. Test these requirements against the available hardware.
Module 3
Installing and Configuring SQL Server 2012
Contents:
Lesson 1: Preparing to Install SQL Server 3-3
Lesson 2: Installing SQL Server 3-16
Lesson 3: Upgrading and Automating Installation 3-24
Lab 3: Installing and Configuring SQL Server 3-32
Module Overview
In the previous module, you saw how to stress test your hardware systems to ensure that they can run Microsoft® SQL Server®. In this module, you will see the steps involved in installing SQL Server. As with almost all projects, good planning is critical. You will see how to prepare for an installation and then look at how to perform the installation.
In larger enterprises, there is a common need to install many SQL Server instances while making sure that the installations are performed in a consistent way. You will see options for automating the installation of SQL Server, which can also be useful for silent installations of the product, where SQL Server needs to be installed in the background to support a software application.
Objectives
After completing this module, you will be able to:
• Prepare to install SQL Server.
• Install SQL Server.
• Upgrade and automate the installation of SQL Server.
Lesson 1
Preparing to Install SQL Server
Before starting the installation process for SQL Server, it is important to work out how each of the requirements for a successful installation can be met. In this lesson, you will consider the specific requirements that SQL Server places on the hardware and software platforms on which it runs, determine where database files should be placed, and see how to configure service accounts. The aim for service accounts is to make sure that they have enough privileges to operate while minimizing the allocation of any other privileges. Applications might also require specific configuration of collations. You will investigate how collations operate and how the choices you make when configuring a server collation can affect your later usage of the system.
Objectives
After completing this lesson, you will be able to:
• Describe the general hardware requirements for SQL Server.
• Explain the memory requirements for SQL Server.
• Describe the operating system requirements for SQL Server.
• Describe other software that is required to support SQL Server.
• Determine where to place database files.
• Determine appropriate permissions and privileges for SQL Server service accounts.
• Configure collations.
Hardware Requirements - General
Key Points
In earlier versions of SQL Server, it was necessary to focus on minimum requirements for processor, disk space, and memory. Today, discussing minimum processor speeds and disk space requirements for the SQL Server components is largely pointless: even the slowest processor in a new laptop is fast enough to meet the minimum requirements for SQL Server.
Processors
In enterprise environments, the number of processors is now a much more significant issue. While it might seem desirable to add as many CPUs as possible, there is a tradeoff between the number of CPUs and license costs. Furthermore, not all computer architectures support the addition of CPUs; adding CPU resources might therefore require architectural upgrades to computer systems, not just the additional CPUs.
While some components in earlier versions of SQL Server supported Itanium processors, SQL Server 2012 does not support Itanium processors.
Disk
The hardware requirements for SQL Server list the disk space required to install the product. These values are usually not significant, however, because the size of user databases generally dwarfs the space occupied by the product.
Disk subsystem performance, however, is critical. A "typical" SQL Server system today is I/O bound if it is configured and working correctly. Note that a bottleneck is not, in itself, a bad thing: any computer system with work to perform will have a bottleneck somewhere. If a component other than the I/O subsystem is the bottleneck, there is usually an underlying issue to resolve. It could be a lack of memory or something more subtle, such as a recompilation issue (that is, a situation where SQL Server is constantly recompiling code). Memory requirements for SQL Server are discussed in the next topic.
Virtualization
There is strong resistance to SQL Server virtualization in the marketplace. While this resistance is rapidly decreasing, much of it is outdated and misguided, often being based on assumptions about virtualization of the entire I/O subsystem. While virtualizing the server can provide a good outcome, virtualizing the entire I/O subsystem would rarely be a good solution when working with SQL Server. Virtualization of SQL Server will be discussed in a later module.
Question: If you need to continue to support an older SQL Server version, what would be a good method of supporting it?
Hardware Requirements - Memory
Key Points
The availability of large amounts of memory for SQL Server to use is now one of the most important factors when sizing systems.
While SQL Server will operate in relatively small amounts of memory, memory configuration challenges tend to relate to the maximum values, not to the minimum values. For example, the Express Edition of SQL Server will not utilize more than 1GB of memory, regardless of how much memory is physically installed in the system.
64 bit and 32 bit Systems and Memory
The majority of servers being installed today are 64 bit servers, which have a single address space that can directly access large amounts of memory.
The biggest challenge with 32 bit servers is that memory outside the 4GB "visible" address space (that is, the memory that can be directly accessed) is accessed via Address Windowing Extensions (AWE).
Note While earlier versions of SQL Server allowed the use of AWE-based memory for the caching of data pages, SQL Server 2012 no longer supports the use of AWE-based memory.
Question: Can you suggest a reason why a 32 bit server might still need to be implemented?
Software Requirements – Operating Systems
Key Points
The list in the slide shows a general summary of the operating systems that are supported for SQL Server. While a version of SQL Server can be installed on many different operating systems, not all editions of SQL Server can be installed on each operating system.
Client Systems
Even though it is possible to install versions of SQL Server on client operating systems, such as Windows 7® (SP1) and Windows Vista® (SP2), the product is really designed for use on server operating systems such as the Windows Server® series.
Version Requirements
Many higher-end editions of SQL Server also require higher-end editions of Windows. Consult Books Online for a precise list of supported versions and editions.
32 bit applications can be installed on 64 bit operating systems via the Windows on Windows (WOW) emulation system provided on 64 bit operating systems. It is possible to install 32 bit versions of SQL Server on 64 bit operating systems, but installing 64 bit versions of SQL Server on 64 bit operating systems is preferred.
The installation of SQL Server on Windows Server Core is now supported.
Domain Controllers
It is strongly recommended that you avoid installing SQL Server on a domain controller. If you attempt to install SQL Server on a domain controller, the installation is not blocked, but the following limitations apply:
• You cannot run SQL Server services on a domain controller under a local service account or a network service account.
• After SQL Server is installed on a computer, you cannot change the computer from a domain member to a domain controller. You must uninstall SQL Server before you change the host computer to a domain controller.
• After SQL Server is installed on a computer, you cannot change the computer from a domain controller to a domain member. You must uninstall SQL Server before you change the host computer to a domain member.
• SQL Server failover cluster instances are not supported where cluster nodes are domain controllers.
• SQL Server Setup cannot create security groups or provision SQL Server service accounts on a read-only domain controller. In this scenario, Setup will fail.
Software Requirements – General
Key Points
In earlier versions, the installer for SQL Server would preinstall most requirements as part of the installation process. This is no longer the case; the .NET Framework and Windows PowerShell need to be preinstalled before running setup. The installer for SQL Server will install the SQL Server Native Client (SNAC) and the SQL Server setup support files. However, to minimize the installation time for SQL Server, particularly in busy production environments, it is useful to preinstall these components during any available planned downtime. Components such as the .NET Framework often require a reboot after installation, so preinstalling them can further reduce downtime during installations or upgrades.
General Software Requirements
The SQL Server installer is based on Windows Installer 4.5 technology. You should consider installing Windows Installer 4.5 prior to the installation of SQL Server, to minimize SQL Server installation time.
Several components of SQL Server require the Internet Explorer® browser. These components include the Microsoft Management Console (MMC) add-in, SQL Server Management Studio (SSMS), Business Intelligence Development Studio (BIDS), the Report Designer in BIDS, and any use of HTML Help.
SQL Server communications can be based on the Shared Memory, Named Pipes, or TCP/IP protocols. The VIA protocol is no longer supported.
Determining File Placement
Key Points
The placement of database files is discussed further in Module 4, along with details on the use of filegroups, but before installing SQL Server it is useful to understand the core concepts related to file placement. Filegroups are names that are given to sets of files.
Primary and Secondary Data Files
In general, if you have tables that are frequently used together, you should put them on separate filegroups and physical drives. You may need to alter this recommendation if the files are not similar in size.
While one large disk might be capable of holding your data, it can only perform a single I/O operation at a time. Because of this, you need to consider not only how much disk space is available, but also how many physical drives (or "spindles") are needed to achieve the number of I/O operations per second required by your applications.
tempdb
The tempdb database is used to hold temporary objects in a SQL Server system. Place the tempdb database on a fast I/O subsystem to ensure good performance, and stripe it across multiple disks for better performance. If possible, make sure that the tempdb database is located on separate disks from user databases. While tempdb can be located with the data in many situations, larger systems that make heavy use of tempdb should consider putting it on a separate set of disks to achieve extra performance.
In general, you should match the number of tempdb files to the number of processors in the system (up to a reasonable limit of about eight), as this avoids contention issues with system structures in the files.
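The file-count guidance above can be applied with ALTER DATABASE. The following is only a sketch: it assumes a four-processor server whose tempdb currently has a single data file, and the drive letter, file names, and sizes are hypothetical examples, not recommendations.

```sql
-- Sketch: bring tempdb up to four equally sized data files on a
-- 4-processor server. The T: drive, names, and sizes are illustrative.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 1GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
              SIZE = 1GB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf',
              SIZE = 1GB, FILEGROWTH = 256MB);
```

Keeping the files the same size encourages SQL Server's proportional-fill behavior to spread activity evenly across them.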
Transaction Logs
Create the transaction log on a physically separate disk or RAID array. Transaction logs are written sequentially; therefore, using a separate, dedicated disk allows the disk heads to stay in place for the next write operation. For this reason, smaller systems will do well by using a single mirrored disk for the transaction log. A single mirrored physical disk should support up to approximately 1,000 transactions per second, depending on the speed of the disk itself. Systems requiring more than that should stripe the transaction log across a RAID 10 array for maximum performance.
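These placement decisions are expressed when a database is created. A minimal sketch follows; the database name, drive letters (D: for data, L: for a dedicated log drive), paths, and sizes are all hypothetical examples.

```sql
-- Sketch: data files on D:, log on a separate dedicated L: drive.
-- Database name, paths, and sizes are hypothetical examples.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\Data\SalesDB.mdf',
     SIZE = 10GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = SalesDB_log, FILENAME = 'L:\Logs\SalesDB.ldf',
     SIZE = 2GB, FILEGROWTH = 512MB);
```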
Question: How would you plan drives and file placement for your organization?
Service Account Requirements
Key Points
There is no single rule for how best to configure service accounts for use with SQL Server, but there is a general set of principles that should typically be followed.
Principle of Least Privilege
The overarching principle is that service accounts should not be allocated any more capability than they require. While it is easy to make each service account a domain administrator account, it is not sensible to do this from a security perspective. If you create an account with low privilege, the SQL Server installer will automatically configure the account with the permissions and privileges needed by the service. SQL Server does this by creating groups, assigning the required permissions to the groups, and then adding the service accounts as members of those groups.
Domain vs. Local Accounts
In most cases, you should use a domain account. There are circumstances, though, where a local account might be more appropriate, such as situations where you wish to specifically limit the account's access to the local computer.
If you do decide to use a local account rather than a domain account, it is important to realize that the Local Service account is not the same as the Local System account. The Local Service account is configured with the same permissions as other authenticated users, whereas the Local System account has administrative privileges and should generally not be used as the service account for any SQL Server services.
Working with Collations
Key Points
Microsoft SQL Server 2012 supports many collations. A collation encodes the rules governing the proper use of characters for either a language, such as Greek or Polish, or an alphabet, such as Latin1_General (the alphabet used by Western European languages).
Server Collation Selection
When you install SQL Server, you designate a collation and select sort order rules. The term collation refers to a set of rules that determines how data is compared and sorted. Character data is sorted by using rules that define the correct sequence of characters. You can specify sensitivity to case, accent marks, kana character types, and character width when sorting data.
For example, should the word "AdventureWorks" be sorted as though it were identical to the word "adventureworks"? For a slightly more subtle example, should the word "café" be sorted as though it were identical to the word "cafe"?
A particular impact of an incorrect collation setting occurs when working with temporary objects that are created in the tempdb database. If an application database is configured with a different collation from the server, care must be taken with the use of temporary objects such as temporary tables. In Demonstration 1A, you will see the effect that collations can have on the ability to execute queries involving temporary objects.
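One way this issue commonly surfaces is sketched below. The table dbo.Customers and the column names are hypothetical, and whether the error actually occurs depends on the server and database collations in use.

```sql
-- Temp tables inherit tempdb's (that is, the server's) collation, so
-- comparing their columns with columns from a differently collated user
-- database can raise a "Cannot resolve the collation conflict" error.
CREATE TABLE #Staging (CustomerName nvarchar(50));   -- uses tempdb's collation

SELECT c.CustomerName
FROM dbo.Customers AS c                              -- hypothetical user table
JOIN #Staging AS s
    ON s.CustomerName = c.CustomerName;              -- potential collation conflict

-- One common workaround: give the temp table column the current
-- database's collation explicitly.
CREATE TABLE #Staging2
    (CustomerName nvarchar(50) COLLATE DATABASE_DEFAULT);
```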
Changing the server collation after installation is not a simple process. Each system structure that has been sorted (perhaps as part of indexes) needs to be rebuilt, and all system databases need to be recreated.
While it is possible to change the default collation for a user database after it has been created, the existing columns and data in its tables are not affected. The default collation setting only affects new objects that are created.
SQL Server supports two types of collations: Windows collations and SQL Server collations.
Windows Collations
When you designate Windows collations, the operating system defines a set of rules for storing and sorting character data that is based on the rules for an associated Windows locale. Windows collation rules specify which alphabet or language is used when dictionary sorting is applied, and the code page used to store non-Unicode character data.
SQL Server Collations
When you designate SQL Server collations, SQL Server matches the attributes of common combinations of code page number and sort order that may have been specified in earlier versions of SQL Server. SQL Server collations control the code page used for storing non-Unicode data, and the sort rules for both Unicode and non-Unicode data.
Each SQL Server collation specifies three properties:
• The sort order to use for Unicode data types (such as nchar, nvarchar, and nvarchar(max)).
• The sort order to use for non-Unicode character data types (char, varchar, and varchar(max)).
• The code page used to store non-Unicode character data.
SQL Server collations are maintained primarily for backward compatibility. When you are designing new applications, choose the appropriate Windows collation.
SQL Server 2012 also introduces some new collations that support supplementary characters. These are known as SC collations and support a much wider range of characters. The SC collations are not supported as server collations, but they can be used within databases. You can identify an SC collation by its name, which ends with an _SC suffix.
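The available collations, including the SC collations, can be listed from the server itself. A small query sketch:

```sql
-- List collations that support supplementary characters (names ending
-- in _SC). The [_] escapes the underscore wildcard in the LIKE pattern.
SELECT name, description
FROM sys.fn_helpcollations()
WHERE name LIKE N'%[_]SC';
```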
Question: What is the difference between an accent-sensitive collation and an accent-insensitive collation?
Demonstration 1A: Using Collations
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and then click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_03_PRJ\10775A_03_PRJ.ssmssln and click Open.
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
4. Open the 11 – Demonstration 1A.sql script file.
5. Follow the instructions contained within the comments of the script file.
Lesson 2
Installing SQL Server
Once the decisions have been made about how SQL Server should be configured, you can proceed to installing SQL Server. In this lesson, you will see the phases that installation proceeds through, and how SQL Server checks your system for compatibility using a tool known as the System Configuration Checker.
For most users, the Setup program will report that everything was installed as expected. For the rare situations where this does not occur, you will also learn how to carry out post-installation checks and troubleshooting.
Objectives
After completing this lesson, you will be able to:
• Describe the phases of the SQL Server installation process.
• Explain the role of the System Configuration Checker.
• Conduct post-installation checks.
Overview of the Installation Process
Key Points
You can install the required components of SQL Server 2012 by running the SQL Server 2012 Setup program. The exceptions are StreamInsight and Master Data Services, which ship with separate installer programs and must be installed separately.
The main SQL Server 2012 installation process consists of two core phases: the component update and the SQL Setup MSI package. Understanding the installation process will help you plan and perform SQL Server 2012 installations.
Component Update
During the component update phase of installation, the SQL Server 2012 Setup program checks for the following components:
• Windows Installer 4.5
• The .NET Framework
• SQL Server Native Client (sqlncli.msi)
• Windows PowerShell
• Other SQL Server Setup support files
SQL Setup MSI
After the component update phase, Windows Installer is used to install the SQL Setup MSI package. During this phase, Setup performs the following tasks:
• Determines the installation type (default instance or named instance).
• Analyzes the computer using the System Configuration Checker.
• Determines the features to be installed (including the need for automatic updates if required) and performs the appropriate installation.
The Instance ID field controls the names of the directories where the features will be installed. By default it is the name of the instance, but it can be changed. Changing to a named instance does not automatically change the Instance ID field if you have already typed in an Instance ID.
Server Configuration
The BUILTIN\Administrators group no longer receives administrative control in SQL Server, as it did in earlier versions. It is often acceptable, however, to add BUILTIN\Administrators as a group that should have administrative permissions, which grants those permissions to all administrators on the computer. Analysis Services is configured in a similar way.
A common request from users is for the ability to block access to SQL Server for computer administrators. While avoiding the inclusion of the BUILTIN\Administrators group can prevent casual attempts to access the server's contents, you should not assume that computer administrators will be unable to access SQL Server if they are determined to do so.
During the server configuration phase, you also have the opportunity to set a number of system configuration options, such as whether or not the FILESTREAM feature should be enabled, and the default folders for use with user databases.
System Configuration Checker
Key Points
As part of SQL Server Setup, the System Configuration Checker (SCC) scans the computer where SQL Server will be installed. The SCC checks for conditions that would prevent a successful SQL Server installation. Before Setup starts running the SQL Server Installation Wizard, the SCC retrieves the status of each check item, compares the result with required conditions, and provides guidance for removing blocking issues. If a result would not block the installation of SQL Server but would limit the capabilities of the product once installed, you are warned about the situation.
Software Requirements
In the software requirements phase, many aspects of the software configuration are checked, including whether the operating system present is supported for the version and edition of SQL Server being installed. The operating system service pack level is also checked, and any unsupported operating system will block Setup from proceeding.
Also as part of this phase, the SCC checks for the presence of the Windows Management Instrumentation (WMI) service. The WMI service must be available; a failed check on this item will also block Setup.
Hardware Requirements
The SCC will warn the user, but will not block Setup, if the minimum or recommended RAM check is not met. Memory requirements are for SQL Server only, and do not reflect the additional memory requirements of the operating system.
The SCC will likewise warn the user, but will not block Setup, if the minimum or recommended processor speed check is not met.
Security Requirements
The user running Setup must have administrator privileges on the computer on which SQL Server is being installed.
System State Requirements
SQL Server Setup cannot run if files required by Setup are locked by other services, processes, or applications. A failed check on this item will block Setup.
Report
After completing these checks, the SCC generates a report that can be viewed or saved. This report includes information about any issues that will prevent installation and recommends solutions. It also includes warnings and recommendations, such as recommended hotfixes or security configurations, for issues that will not prevent installation but might cause problems. In most scenarios, you should resolve these issues and run Setup again, rather than attempt to resolve them after installation is complete.
Post-installation Checks
Key Points
Once SQL Server has been installed, the most important check is to make sure that all SQL Server services are running, by using the SQL Server Services node in SQL Server Configuration Manager.
Note: SQL Server services have names that differ slightly from their displayed names. You can view the actual service name for a service by looking on the properties page for the service.
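On SQL Server 2012 you can also confirm the service state from within the instance itself. A small sketch (this requires the VIEW SERVER STATE permission):

```sql
-- List the SQL Server services on this machine, the accounts they run
-- under, their startup type, and whether they are currently running.
SELECT servicename, service_account, startup_type_desc, status_desc
FROM sys.dm_server_services;
```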
Administrators typically do not need to check the contents of the SQL Server Setup log files after installation, as the installer program will indicate any errors that occur and will attempt to reverse any of the SQL Server setup that has been completed to that point.
Note: When errors occur during the SQL Server Setup phase, the installation of the SQL Server Native Client and the Setup support components is not reversed.
The setup log files are located here:
%programfiles%\Microsoft SQL Server\110\Setup Bootstrap\Log
Administrators typically only need to look at the setup log files in two situations:
• When trying to work out why Setup is failing, if not enough error information is provided by the installer.
• When working with Microsoft Product Support to troubleshoot a setup failure.
The log files contain details for each of the core phases. Books Online describes what is contained in each section of the log files.
Question: If you discover after installation that you have used an incorrect or inappropriate service account for SQL Server, which tool do you use to correct the account?
Demonstration 2A: Using System Configuration Checker
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the Virtual Machines list in Hyper-V Manager, right-click the 10775A-MIA-SQL1 virtual machine and click Settings.
3. In the Settings for 10775A-MIA-SQL1 window, in the Hardware list, expand IDE Controller 1, and click DVD Drive.
4. In the DVD Drive properties pane, click Image file, and click Browse.
5. Navigate to the file C:\Program Files\Microsoft Learning\1077XA\Drives\10775A-MIA-SQL1\Virtual Hard Disks\SQLFULL_ENU.iso and click Open.
6. In the Settings for 10775A-MIA-SQL1 window, click OK.
7. In the Virtual Machine window, in the AutoPlay window (which should now have appeared), click Run SETUP.EXE and wait for SQL Server Setup to start.
8. In the SQL Server Installation Center window, click System Configuration Checker in the list of available options on the Planning tab.
9. In the Setup Support Rules window, note the list of rules that have been checked.
10. Click the warning in the Status column for the Microsoft .NET Application Security rule.
11. In the Rule Check Result window, read the details of the warning and click OK.
12. In the Setup Support Rules window, click OK.
Lesson 3
Upgrading and Automating Installation
Rather than being installed fresh, the product often needs to be upgraded from earlier versions. In this lesson, you will see the benefits and limitations of the available methods for performing upgrades.
Not every installation of SQL Server is performed individually by an administrator. In larger enterprises, there is a need to install many instances of SQL Server in a very consistent manner. Unattended installation options are provided for SQL Server and can be used to satisfy the requirements of this scenario. Unattended installations can also be performed in a silent mode, which allows the installation of SQL Server to be performed during the installation of other applications.
Objectives
After completing this lesson, you will be able to:
• Upgrade SQL Server.
• Apply SQL Server hotfixes, cumulative updates, and service packs.
• Perform unattended installations.
Upgrading SQL Server
Key Points
There are two basic ways that SQL Server upgrades can be performed. There is no single preferred way to perform upgrades: each method has benefits and limitations, and each is appropriate in certain circumstances.
In-place Upgrades
In-place upgrades occur when the installed version of SQL Server is directly replaced by a new version. This is an easier and highly automated method of upgrading, but it is riskier: if an upgrade fails, it is much harder to return to the previous operating state. This risk cannot be ignored by most customers.
Note: When you are considering risk, remember that it may not be the SQL Server upgrade itself that fails. Even if the upgrade works as expected, the application may fail to operate as expected on the new version of SQL Server, and the need to recover the situation quickly will be just as important.
In-place upgrades have the added advantage of minimizing the need for additional hardware resources, and they avoid the need to redirect client applications that are configured to work with the existing server.
Side-by-side Upgrades
Side-by-side upgrades are a safer alternative, as the original system is left in place and can be quickly returned to production should an upgrade issue arise. However, side-by-side upgrades involve more work and more hardware resources.
To perform a side-by-side upgrade, you will need enough hardware resources to provide for both the original system and the new system. Two common risks associated with side-by-side upgrades relate to the time taken to copy all the user databases to a new location and the space required to hold these copies.
While most side-by-side upgrades are performed on separate servers, it is possible to install both versions of SQL Server on the same server during a side-by-side upgrade. However, side-by-side upgrades of versions with the same major build number (for example, SQL Server 2008 R2 and SQL Server 2008) on the same server are a special case. Because the major version number is identical, separate versions of the shared components cannot co-exist on the same server; the shared components will be upgraded.
Not all versions of SQL Server are supported when installed side-by-side. Consult Books Online for a matrix of versions that are supported when installed together.
Hybrid Options
It is also possible to combine elements of an in-place upgrade with elements of a side-by-side upgrade. For example, rather than copying all the user databases, after installing the new version of SQL Server beside the old version and migrating all the server objects such as logins, you could detach the user databases from the old server instance and reattach them to the new server instance.
Note: Once user databases have been attached to a newer version of SQL Server, they cannot be reattached to an older version, even if the database compatibility settings have not been upgraded. This is a risk that needs to be considered when using a hybrid approach.
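The detach/attach step described above can be sketched as follows; the database name and file paths are hypothetical examples.

```sql
-- On the old instance: detach the user database.
EXEC sp_detach_db @dbname = N'SalesDB';

-- On the new instance: attach the existing files. After this succeeds,
-- the database can no longer be reattached to the older version.
CREATE DATABASE SalesDB
ON (FILENAME = 'D:\Data\SalesDB.mdf'),
   (FILENAME = 'L:\Logs\SalesDB.ldf')
FOR ATTACH;
```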
SQL Server Servicing
Key Points
As with all software products, issues can be encountered with SQL Server over time. The product group is very responsive in fixing issues that are identified. If you have issues that you wish to notify Microsoft of, or suggestions for how the product might be improved, visit http://connect.microsoft.com.
The simplest way to keep SQL Server up to date is to enable automatic updates from the Microsoft Update service. Larger organizations, or those with strong change-control processes, should exercise caution in applying automatic updates. Such updates should usually be applied to test or staging environments before being applied to production environments.
SQL Server updates are released in several ways:
• Hotfixes (also known as QFE or Quick Fix Engineering fixes) are released to address urgent customer concerns. Because of the tight time constraints, only limited testing can be performed on these fixes, so they should only be applied to systems that are known to be experiencing the issues that they address.
• Cumulative Updates (CUs) are periodic roll-up releases of hotfixes that have received further testing as a group.
• Service Packs (SPs) are periodic releases on which full regression testing has been performed. Microsoft recommends applying service packs to all systems after appropriate levels of organizational testing.
• SQL Server 2008 R2 and later can also have product service packs slipstreamed into the installation process, avoiding the need to apply service packs after installation.
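To decide which updates apply to a given instance, you first need its current build and servicing level, which can be queried directly:

```sql
-- Returns the build number (11.0.xxxx for SQL Server 2012), the
-- servicing level (RTM, SP1, ...), and the edition of this instance.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;
```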
Unattended Installation
Key Points
In many organizations, script files for standard software builds are created by senior IT administrators and used to ensure consistent installations throughout the organization.
Unattended installations can help with the deployment of multiple identical installations of SQL Server across an enterprise. They can also provide for the delegation of the installation to another person.
Unattended Installation Methods
One option for performing an unattended installation of SQL Server 2012 is to create an .ini file containing the required setup information and pass it as a parameter to setup.exe at a command prompt. The alternative is to pass all the required SQL Server setup details as parameters to the setup.exe program, rather than placing them in an .ini file. You can use either the .ini file or the separate parameters, but a combination of both is not permitted.
In both examples on the slide, the second method is used. The first example shows a typical installation command, and the second example shows how an upgrade could be performed using the same method.
/q Switch
The "/q" switch shown in the examples specifies "quiet mode": no user interface is provided. An alternative switch, "/qs", specifies "quiet simple" mode, in which the installation runs and shows progress in the UI but does not accept any input.
Creating an .ini File
An .ini file for unattended installation can be created by using any text editor, such as Notepad. The SQL Server installation program creates a file called ConfigurationFile.ini in a folder that is named based on the installation date and time, under the folder C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log. You can use this as a starting point for creating your own .ini file. The .ini file is composed of a single [Options] section containing multiple parameters, each relating to a different feature or configuration setting.
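As a rough illustration only, such a file might look like the sketch below. The account names, feature list, and collation are invented examples, not a recommended configuration; start from the ConfigurationFile.ini that Setup generates.

```ini
; Illustrative sketch of an unattended-install file. All values are
; examples; generate a real file by running Setup once and copying it.
[OPTIONS]
ACTION="Install"
FEATURES=SQLENGINE,SSMS
INSTANCENAME="MSSQLSERVER"
SQLSVCACCOUNT="DOMAIN\SqlService"
SQLSYSADMINACCOUNTS="DOMAIN\DBAdmins"
SQLCOLLATION="Latin1_General_CI_AS"
TCPENABLED="1"
QUIETSIMPLE="True"
IACCEPTSQLSERVERLICENSETERMS="True"
```

The file is then supplied at a command prompt, for example: setup.exe /ConfigurationFile=ConfigurationFile.ini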
Demonstration 3A: Creating an Unattended Installation File
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and then click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_03_PRJ\10775A_03_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 31 – ConfigurationFile.ini file.
3. Review the configuration file; in particular, note the values of the following properties:
• INSTANCEID
• ACTION
• FEATURES
• QUIET
• QUIETSIMPLE
• INSTALLSHAREDDIR
• INSTANCEDIR
• INSTANCENAME
• AGTSVCSTARTUPTYPE
• SQLCOLLATION
• SQLSVCACCOUNT
• SQLSYSADMINACCOUNTS
• TCPENABLED
Lab 3: Installing and Configuring SQL Server
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. On the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_03_PRJ\10775A_03_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
8. On the host system, in the Virtual Machines list in Hyper-V Manager, right-click the 10775A-MIA-SQL1 virtual machine and click Settings.
9. In the Settings for 10775A-MIA-SQL1 window, in the Hardware list, expand IDE Controller 1, and click DVD Drive.
10. In the DVD Drive properties pane, click Image file, and click Browse.
11. Navigate to the file C:\Program Files\Microsoft Learning\10775\Drives\10775A-MIA-SQL1\Virtual Hard Disks\SQLFULL_ENU.iso and click Open.
12. In the Settings for 10775A-MIA-SQL1 window, click OK.
Lab Scenario
The development group within the company has ordered a new server for the work they need to do on the Proseware system. Unfortunately, the new server will not arrive for a few weeks and the development group cannot wait that long to start work.
The server that was provisioned by the IT Support department already has two instances of SQL Server installed. The support team has determined that this server will be able to support an additional instance of SQL Server on a temporary basis, until the server for the development group arrives.
You need to install the new instance of SQL Server and, if you have time, you should configure the memory of all three instances to balance their memory demands, and you should create a new alias for the instance that you install.
Supporting Documentation
Required SQL Server Instance Configuration
Instance Name: MKTGDEV
Features: Database Engine only (excluding Full Text and Replication)
Data File Folder: D:\MKTGDEV for user databases and tempdb
Log File Folder: L:\MKTGDEV for user databases and tempdb
Service Accounts: AdventureWorks\PWService for all services
Startup: Both SQL Server and SQL Server Agent should start automatically
Server Collation: SQL_Latin1_General_CP1_CI_AS
Authentication Mode: Mixed
Administrative User: AdventureWorks\Administrator
Filestream Support: Disabled
Note that Pa$$w0rd is used for all passwords in the course.
Required Memory Configuration (Used in Exercise 4 only)
• 1.0 GB reserved for the operating system.
• 40% of the remaining memory as the maximum value for the AdventureWorks server instance.
• 30% of the remaining memory as the maximum value for the Proseware server instance.
• 30% of the remaining memory as the maximum value for the PWDev server instance.
• Configure minimum memory as zero for all instances.
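As a worked sketch of the calculation and the configuration command, assume the server has 16 GB of physical RAM (an assumption; check the actual value in the lab before calculating). The memory limits are applied per instance with sp_configure:

```sql
-- Assumed total RAM: 16 GB. Remaining after the 1 GB OS reserve:
--   15 GB = 15360 MB
--   AdventureWorks: 40% of 15360 = 6144 MB
--   Proseware:      30% of 15360 = 4608 MB
--   PWDev:          30% of 15360 = 4608 MB
-- Run against each instance, substituting that instance's value.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 6144;  -- AdventureWorks instance
EXEC sys.sp_configure 'min server memory (MB)', 0;
RECONFIGURE;
```

The 'max server memory (MB)' and 'min server memory (MB)' options are the standard sp_configure settings for this task; only the calculated values differ per instance.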
Exercise 1: Review Installation Requirements
Scenario
You will review the supporting documentation that describes the required configuration for the new instance. You will also create the required folders to hold the data and log files for the instance.
The main tasks for this exercise are as follows:
1. Review the supporting documentation prior to installation.
2. Create the folders that are required for the data and log files.
Task 1: Review the supporting documentation prior to installation
• Review the supplied requirements in the supporting documentation for the exercise.
Task 2: Create the folders that are required for the data and log files
• Based on the supplied requirements, create the folders that are required for the data and log files of the new SQL Server instance.
Results: After this exercise, you should have read the requirements and created the two folders that are required.
Exercise 2: Install the SQL Server Instance
Scenario
In this exercise, you need to perform an installation of another instance of SQL Server on the existing server, based on the supplied specifications. In the setup steps for this lab, you mounted an ISO file that contains an image of SQL Server that will be used to install the instance. The prerequisites have already been installed because there are existing instances of SQL Server on the server.
The main task for this exercise is as follows:
1. Based on the requirements reviewed in Exercise 1, install another instance of SQL Server.
Task 1: Based on the requirements reviewed in Exercise 1, install another instance of SQL Server
• Install another instance of SQL Server based on the requirements in Exercise 1.
Note On the Server Configuration page, you should configure the service account name and password, the startup type for SQL Server Agent, and the collation. On the Database Engine Configuration page, you should configure Mixed Mode, set the sa password, click Add Current User, and review the settings on the Data Directories and FILESTREAM tabs.
Results: After this exercise, you should have installed another SQL Server instance.
Exercise 3: Perform Post-installation Setup and Checks
Scenario
You need to make sure that the services for the new instance are running, and you need to create a new alias for the instance you have just installed. Once the alias is created, you should connect to the instance using SQL Server Management Studio to make sure that the instance works.
The main tasks for this exercise are as follows:
1. Check that the services for the new SQL Server instance are running.
2. Configure both 32-bit and 64-bit aliases for the new instance.
3. Connect to the new instance using SSMS.
Task 1: Check that the services for the new SQL Server instance are running
• Using SQL Server Configuration Manager, make sure that the newly installed services are running.
• Make sure that the named pipes protocol is enabled for the new instance.
Task 2: Configure both 32-bit and 64-bit aliases for the new instance
• Configure a 32-bit alias called PWDev for the new instance using named pipes.
• Configure a 64-bit alias called PWDev for the new instance using named pipes.
Task 3: Connect to the new instance using SSMS
• Start SQL Server Management Studio and connect to the new instance to make sure it is working. Make the connection using the PWDev alias.
Results: After this exercise, you should have checked that the services are running, created a new alias, and connected using SSMS.
Challenge Exercise 4: Configure Server Memory (Only if time permits)
Scenario
There are now three SQL Server instances installed on the Marketing server at AdventureWorks. In this exercise, you need to configure the amount of memory that is allocated to each instance.
The main tasks for this exercise are as follows:
1. Review the current memory available on the server.
2. Determine an appropriate memory allocation for each instance.
3. Configure each instance appropriately.
Task 1: Review the current memory available on the server
• Review the current memory available on the server.
Task 2: Determine an appropriate memory allocation for each instance
• Review the required memory proportions as specified in the supporting documentation.
• Determine an appropriate memory allocation for each instance.
Task 3: Configure each instance appropriately
• Configure each instance based on the values you have calculated.
Results: After this exercise, you should have modified the memory allocation for each SQL Server instance.
Module Review and Takeaways
Review Questions
1. Why is the choice of collation for a server so important, when you can choose individual database collations anyway?
2. Should all SQL Server services use a single domain-based service account?
Best Practices
1. Use domain-based accounts for service accounts.
2. Configure service accounts with the least privilege that still lets them operate.
3. Use SQL Server Configuration Manager to change service accounts, as it ensures that the correct permissions and ACLs are configured.
Module 4
Working with Databases
Contents:
Lesson 1: Overview of SQL Server Databases 4-3
Lesson 2: Working with Files and Filegroups 4-15
Lesson 3: Moving Database Files 4-29
Lab 4: Working with Databases 4-39
Module Overview
One of the most important roles for database administrators working with Microsoft® SQL Server® is the management of databases. It is important to know how data is stored in databases, how to create databases, and how to move databases either within a server or between servers.
When databases become larger, there is often a need to spread the data from a database across different volumes, rather than storing it on a single large disk volume. This allocation of data is configured using filegroups and is used to address both performance and ongoing management needs within databases.
Objectives
After completing this module, you will be able to:
• Describe the role and structure of SQL Server databases.
• Work with files and filegroups.
• Move database files within servers and between servers.
Lesson 1
Overview of SQL Server Databases
Before you begin to create databases, you need to learn how data is stored in databases, the different types of files that SQL Server can use, where those files should be placed, and how to plan for ongoing file growth. This will make sure that you configure databases optimally.
In this lesson, you will also explore the system databases that are supplied with SQL Server. One system database in particular, tempdb, has very specific configuration requirements because its performance has the potential to affect the performance of all applications that use the server.
Objectives
After completing this lesson, you will be able to:
• Describe how data is stored in SQL Server.
• Explain the different types of files that SQL Server can use.
• Determine appropriate file placement and the number of files for SQL Server databases.
• Ensure sufficient file capacity and allow for ongoing growth.
• Explain the role of each of the system databases supplied with SQL Server.
• Configure tempdb.
How Data Is Stored in SQL Server
Key Points
The data stored by SQL Server databases is contained within a set of files allocated for the database to use. There are three types of files used by SQL Server: primary data files, secondary data files, and transaction log files.
Primary Data Files
The primary data file is the starting point of the database. Every database has a single primary data file. As well as holding data in the same way that other database files do, the primary data file holds pointers to the other files in the database. Primary data files typically use the file extension .mdf. While the use of this file extension is not mandatory, using .mdf as the extension for primary data files is highly recommended.
Secondary Data Files
Secondary data files are optional, user-defined, additional data files that can be used to spread the data across more files for performance and/or maintenance reasons. Secondary files can be used to spread data across multiple disks by putting each file on a different disk drive. Additionally, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow. The recommended extension for secondary data files is .ndf.
Data File Pages
Pages in a SQL Server data file are numbered sequentially, starting with zero for the first page in the file. Each file in a database has a unique file ID number. To uniquely identify a page in a database, both the file ID and the page number are required. Each page is 8 KB in size. After allowing for the header information that is needed on each page, a region of 8096 bytes remains for holding data. Data rows can hold fixed-length and variable-length column values. All fixed-length columns of a data row need to fit on a single page, within an 8060-byte limit. Data pages only hold data from a single database object, such as a table or an index.
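Because storage is managed in 8-KB pages, the catalog views also report file sizes in pages. A quick way to see this, runnable in any database:

```sql
-- The size column of sys.database_files is expressed in 8-KB pages,
-- so multiplying by 8 gives KB, and dividing by 1024 gives MB.
SELECT name,
       size            AS size_in_pages,
       size * 8 / 1024 AS size_in_mb
FROM sys.database_files;
```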
Extents
Groups of 8 contiguous pages are referred to as an extent. SQL Server uses extents to simplify the management of data pages. There are two types of extents:
• Uniform extents: All pages within the extent contain data from only one object.
• Mixed extents: The pages of the extent can hold data from different objects.
The first allocation for an object is at the page level, and always comes from a mixed extent. If they are free, other pages from the same mixed extent will be allocated to the object as needed. Once the object has grown bigger than its first extent, all future allocations are from uniform extents.
In both primary and secondary data files, a small number of pages are allocated to tracking the usage of extents within the file.
Log Files
Log files hold information that is used to recover the database when necessary. There must be at least one log file for each database. The recommended extension for log files is .ldf.
All transactions are written to the log file using the write-ahead logging (WAL) mechanism, to ensure the integrity of the database in case of a failure and to support rollbacks of transactions.
When data pages need to be changed, they are fetched into memory and changed there. The log records describing the changes are written to the transaction log in a synchronous manner. Some time later, during a background process known as a "checkpoint", the modified "dirty" pages are written to the database files. For this reason, the records contained in the transaction log are critical to the ability of SQL Server to recover the database to a known committed state. Transaction logs are discussed in detail in later modules in this course.
Note The log file is also used by other features within SQL Server such as transactional replication, database mirroring, and change data capture. These are advanced topics that are out of scope for this course.
Question: In what scenarios would secondary data files be useful?
Determining File Placement and Number of Files
Key Points
It is important to isolate log and data files, for both performance and recovery reasons. This isolation needs to be at the physical disk level.
Access Patterns
The access patterns of log and data files are very different. Access to log files consists primarily of sequential, synchronous writes, with occasional random disk access. Access to data files is predominantly asynchronous random disk access. A single physical drive system does not tend to provide good response times when these types of data access are combined.
Recovery
While disk subsystems are increasing in reliability, failures do still occur. If a SQL Server data file is lost, the database can be restored from a backup and the transaction log reapplied to recover the database to a recent point in time. If a SQL Server log file is lost, the database can be forced to recover from the data files, with the possibility of some data loss or inconsistency in the database. But if both the data and log files are on a single disk subsystem that is lost, the recovery options usually involve restoring the database from an earlier backup and losing all transactions since that time. Isolating data and log files can help to avoid the worst impacts of drive subsystem failures.
Logical vs. Physical Separation
It is best practice to perform this separation at the physical level. With storage area network (SAN) systems, it is easy to configure storage so that it appears to be separated when the separation is only logical; the same underlying physical storage is being used. This is generally a poor design choice.
Data File Management
Ideally, all data files that are defined for a database should be the same size, because data is spread evenly across all available data files. The main performance advantages from doing this come when the files are spread over different storage locations.
A number of management advantages are gained from allocating multiple data files:
• Files, and the portions of the data they hold, can be moved individually at a later time.
• If a single database file is being restored separately, the recovery time can be minimized. This can be useful where only part of the data was corrupt.
• Splitting a database across multiple data files can increase parallelism in the I/O channel.
• If a database exceeds the maximum size for a single Windows file, secondary data files allow the database to continue to grow.
Number of Log Files
Unlike the way that SQL Server writes to data files, the SQL Server database engine only writes to a single log file at any point in time. Additional log files are only used when space is not available in the active log file.
Question: Why is it important to separate data and log files on the physical level?
Ensuring Sufficient File Capacity
Key Points
It is important to consider capacity planning. To perform capacity planning, you should estimate the maximum size of the database, indexes, transaction log, and tempdb through a predicted growth period. For most sites, you should aim to create database files that are large enough to handle the data expected to be stored in them over a twelve-month period.
Realistic Planning
Performance and capacity testing for SQL Server is usually accomplished through load testing of the actual application(s) that will use the SQL Server instance, rather than by trying to estimate the requirements.
Autogrowth vs. Planned Growth
SQL Server can automatically expand a database according to growth parameters that were defined when the database files were created. While the options for autogrowth should be enabled to prevent downtime when unexpected growth occurs, it is important to avoid the need for SQL Server to ever autogrow the files. Instead, you should monitor file growth over time and ensure that files are large enough for many months or years.
Many administrators are concerned that larger database files will somehow increase the time it takes to perform backups. The size of a SQL Server backup is not related directly to the size of the database files, because only the used portions of the database are backed up.
One significant issue that arises with autogrowth is the tradeoff related to the size of the growth increments. If a large increment is specified, a significant delay can be experienced in the execution of the T-SQL statement that triggered the growth. If too small an increment is specified, the file system can become very fragmented and database performance can suffer, because the data files have been allocated in small chunks all over the disk subsystem.
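Pre-sizing files and choosing a sensible fixed growth increment are both done with ALTER DATABASE. A minimal sketch — the database name, logical file name, and sizes here are assumptions for illustration:

```sql
-- Pre-size the data file for expected growth and set a fixed-size
-- growth increment, so autogrowth is a safety net rather than the
-- normal growth mechanism.
ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data,
             SIZE = 20GB,
             FILEGROWTH = 1GB,
             MAXSIZE = 100GB);
```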
Note The instant file initialization (IFI) option can reduce the time taken for autogrow operations on data files. It does not apply to log files. IFI will be discussed in the next lesson.
Log File Growth Planning
If the transaction log is not set up to expand automatically, it can run out of space when certain types of activity occur in the database. For example, performing large-scale bulk operations, such as bulk import or index creation, can cause the transaction log to fill rapidly.
In addition to expanding the size of the transaction log, it is possible to truncate a log file. Truncating the log purges the file of inactive, committed transactions and allows the SQL Server Database Engine to reuse this unused part of the transaction log.
Note How and when the log is truncated depends on the recovery model of the database and will be discussed later.
Question: When would it be appropriate to preset a maximum size for the database and restrict file growth?
System Databases Supplied with SQL Server
Key Points
There are five system databases that are created during installation: master, msdb, model, tempdb, and resource. These databases contain metadata and cannot be dropped.
master
The master database contains all system-wide information. Anything that is defined at the server instance level is typically stored in the master database.
If the master database is damaged or corrupted, SQL Server will not start, so it is very important to back up the master database on a regular basis.
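A regular backup of master is a single statement; the target path below is an assumption:

```sql
-- Back up the master database; schedule this as a regular job.
BACKUP DATABASE master
TO DISK = 'D:\Backups\master.bak';
```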
msdb
The msdb database holds information for SQL Server Agent: jobs, operators, and alerts are stored in the msdb database. It is also important to back up the msdb database regularly, to ensure that jobs, schedules, and the history for backups, restores, and maintenance plans are not lost. In earlier versions of SQL Server, SQL Server Integration Services (SSIS) packages were often stored in the msdb database as well. In SQL Server 2012, these should be stored in the dedicated SSIS catalog database instead.
model
The model database is the template on which all user databases are based. Any new database that is created uses the model database as a template, and any objects created in the model database will be present in all new databases that are created on the server instance. Many sites never modify the model database. Note that even though the model database does not seem overly important, SQL Server will not start if the model database is not present.
tempdb
The tempdb database holds temporary data. This database is recreated every time SQL Server starts, so there is no need to perform a backup of it. In fact, there is no option to back up the tempdb database. The tempdb database is discussed further in the next topic.
resource
The resource database is a read-only hidden database that contains system objects that are mapped to the sys schema in every database. This database also holds all system stored procedures, system views, and system functions. In SQL Server versions before SQL Server 2005, these objects were defined in the master database.
Question: Suggest an example of objects that you might want to create in the model database so that they are already present in all newly-created databases on the server instance.
Overview of tempdb
Key Points
The tempdb database consists of internal objects, the row version store, and user objects. The performance of the tempdb database is critical to the overall performance of most SQL Server installations.
Internal Objects
Internal objects are objects that are used by SQL Server for its own operations. Internal objects include work tables for cursor or spool operations, temporary large object (LOB) storage, work files for hash join or hash aggregate operations, and intermediate sort results.
Note Working with internal objects is an advanced concept outside the scope of this course.
Row Version Store
Transactions that are associated with snapshot-related transaction isolation levels can cause alternate versions of rows to be briefly maintained in a special row version store within tempdb. Row versions can also be produced by other features such as online index rebuilds, Multiple Active Result Sets (MARS), and triggers.
Note Transaction isolation levels are discussed in course 10776A: Developing Microsoft SQL Server 2012 Databases.
User Objects
Most objects that reside in the tempdb database are user-generated and consist of temporary tables, table variables, the result sets of multi-statement table-valued functions, and other temporary row sets.
Size of tempdb
Because tempdb is used for so many purposes, it is difficult to predict its required size in advance. Appropriate sizes for tempdb should be carefully tested and monitored under realistic workloads for new installations.
Running out of disk space in the tempdb database can cause significant disruptions in the SQL Server production environment and can prevent applications that are running from completing their operations.
You can use the sys.dm_db_file_space_usage dynamic management view to monitor the disk space that is used by the tempdb files. Additionally, to monitor the page allocation or deallocation activity in tempdb at the session or task level, you can use the sys.dm_db_session_space_usage and sys.dm_db_task_space_usage dynamic management views.
By default, the tempdb database automatically grows as space is required, because the MAXSIZE of the files is set to UNLIMITED. Therefore, tempdb can continue growing until space on the disk that contains tempdb is exhausted.
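For example, the free space remaining in tempdb can be monitored with the first of the views mentioned above:

```sql
-- Free space in tempdb, summed across all of its files.
-- unallocated_extent_page_count is reported in 8-KB pages.
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS free_space_mb
FROM tempdb.sys.dm_db_file_space_usage;
```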
Multiple Files
Increasing the number of tempdb data files can improve the overall performance of SQL Server systems, both by overcoming I/O restrictions and by reducing latch contention on the allocation pages of tempdb. Do not, however, create too many files. A common guideline is 0.25 to 1 data file per processor core, with the ratio decreasing as the number of cores increases; the optimal configuration must be identified by testing with realistic workloads.
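Additional tempdb files are added with ALTER DATABASE. A minimal sketch — the file name, path, and sizes are assumptions:

```sql
-- Add a second data file to tempdb, sized the same as the existing file
-- so that SQL Server's proportional-fill allocation stays even.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 1GB,
          FILEGROWTH = 512MB);
```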
Question: Why could a memory bottleneck lead to higher tempdb usage?
Demonstration 1A: Working with tempdb
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_04_PRJ\10775A_04_PRJ.ssmssln, and click Open.
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
4. Open the 11 – Demonstration 1A.sql script file.
5. Follow the instructions contained within the comments of the script file.
Lesson 2
Working with Files and Filegroups
Creating databases is a core competency for database administrators working with <strong>SQL</strong> Server. As well as<br />
understanding how databases are created, you need to be aware of the impact of file initialization options<br />
and know how to alter existing databases. Altering databases will mostly involve expanding the space<br />
available to the databases but on rare occasions, you may need to shrink the space occupied by database<br />
files.<br />
For larger databases, filegroups provide improved manageability. You should be aware of how filegroups<br />
are configured.<br />
Objectives
After completing this lesson, you will be able to:
• Create user databases.
• Explain the role of instant file initialization.
• Alter databases.
• Expand and shrink database files.
• Work with filegroups.
Creating User Databases
Key Points
Databases can be created either with the GUI in SSMS or via the CREATE DATABASE command in T-SQL. The CREATE DATABASE command offers more flexible options, but the GUI is easier to use. This topic concentrates on the CREATE DATABASE command, but the discussion applies equally to the options chosen in the GUI within SSMS.
CREATE DATABASE
Database names must be unique within an instance of SQL Server and comply with the rules for identifiers. A database name is of data type "sysname", which is currently defined as nvarchar(128). This means that a database name can contain up to 128 characters, each drawn from the double-byte Unicode character set. While database names can be quite long, you will find that long names become awkward to work with.
Data Files
As discussed earlier in this module, a database must have a single primary data file and at least one log file. The ON and LOG ON clauses of the CREATE DATABASE command specify the required files.
In the example shown in the slide, the database named Branch is being created. It comprises two files: a primary data file located at D:\Data\Branch.mdf and a log file located at L:\Logs\Branch.ldf.
Note Each file includes a logical file name as well as a physical file path. Logical file names must be unique within each database. Operations within SQL Server refer to the logical file name.
The primary data file has an initial file size of 100MB and a maximum file size of 500MB. It will grow by 20% of its current size whenever autogrowth needs to occur.
The log file has an initial file size of 20MB and no limit on maximum file size. Each time it needs to autogrow, it will grow by a fixed 10MB allocation.
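The database described above could be created with the following statement, reconstructed from the file properties given (the logical file names Branch_dat and Branch_log are assumptions, as the slide text does not state them):

```sql
CREATE DATABASE Branch
ON ( NAME = Branch_dat,               -- logical name (assumed)
     FILENAME = 'D:\Data\Branch.mdf', -- primary data file
     SIZE = 100MB,
     MAXSIZE = 500MB,
     FILEGROWTH = 20% )               -- grow by 20% of current size
LOG ON
   ( NAME = Branch_log,               -- logical name (assumed)
     FILENAME = 'L:\Logs\Branch.ldf',
     SIZE = 20MB,
     MAXSIZE = UNLIMITED,
     FILEGROWTH = 10MB );             -- grow by a fixed 10MB each time
```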
Collations and Default Values
A specific collation can be allocated at the database level if required. If not specified, the collation defaults to the collation that was specified for the server instance during SQL Server installation. Keeping individual databases in the same collation as the server is considered a best practice.
While it is possible to create a database by providing only the database name, this creates a database based on the model database, with data and log files in the default locations. This is unlikely to represent a best practice.
Question: What is the logical file name?
Configuring Database Options
Key Points
Each database has a set of options that can be configured. These options are unique to each database and do not affect other databases. All database options are taken from the model database when you create a database and can be changed by using the SET clause of the ALTER DATABASE statement or via the Properties page for each database in SSMS.
Categories of Options
There are several categories of database options:
• Auto options - Control certain automatic behaviors. As a general guideline, Auto Close and Auto Shrink should be turned off on most systems, but Auto Create Statistics and Auto Update Statistics should be turned on.
• Cursor options - Control cursor behavior and scope. In general, the use of cursors when working with SQL Server is not recommended apart from particular applications such as utilities. Cursors are not discussed further in this course, but it should be noted that the overuse of cursors is a common cause of performance issues.
• Database availability options - Control whether the database is online or offline, who can connect to the database, and whether the database is in read-only mode.
• Maintenance and recovery options such as:
• Recovery Model - Database recovery models will be discussed in Module 5.
• Page Verify - Early versions of SQL Server offered an option called Torn Page Detection. This option caused SQL Server to write a small bitmap across each disk drive sector within a database page. There are 512 bytes per sector, so there are 16 sectors per 8KB database page. This was a fairly crude yet reasonably effective way to detect a situation where only some of the sectors required to write a page were in fact written. In SQL Server 2005, a new CHECKSUM verification option was added. The use of this option causes SQL Server to calculate and add a checksum to each page as it is written and to recheck the checksum whenever a page is retrieved from disk.
Note Page checksums are only added the next time that any page is written. Enabling the option does not cause every page in the database to be rewritten with a checksum.
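The options above are set with the SET clause of ALTER DATABASE. A minimal sketch applying the guidelines from this topic, assuming a database named SalesDB (a hypothetical name):

```sql
-- Apply the recommended settings discussed above to a hypothetical database.
ALTER DATABASE SalesDB SET AUTO_CLOSE OFF;
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;
ALTER DATABASE SalesDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;  -- preferred over TORN_PAGE_DETECTION
```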
Instant File Initialization
Key Points
Part of the security provided by the Windows operating system is that a user is not given raw access to disk space that was previously used by another user without that disk space first being overwritten.
Instant File Initialization
The time taken to overwrite the space allocated to data files for use with SQL Server can be substantial. When a 200GB file is allocated, 200GB of data needs to be written.
The main risk associated with not overwriting the disk space is that an administrator could read the previous contents of the disk. If this risk is considered minimal, SQL Server can take advantage of instant file initialization (IFI) to avoid the time taken to clear existing data.
SQL Server can request uninitialized space from the operating system, but only when it has sufficient rights to do so. If the Perform Volume Maintenance Tasks right has been assigned to the SQL Server service account, SQL Server can use instant file initialization. This option was introduced in SQL Server 2005 and only affects data files.
Another advantage of this option is that it decreases the time it takes to restore databases, as the disk space for the restored database does not need to be overwritten before it is reused.
Demonstration 2A: Creating Databases
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_04_PRJ\10775A_04_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 21 – Demonstration 2A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Altering Databases
Key Points
When databases are in operation, they may need to be modified. The most common requirement is to add additional space, either by expanding existing files or by adding additional files.
ALTER DATABASE
Expanding files and adding files can both be done through the ALTER DATABASE statement or via the GUI in SSMS.
Another option that you might need to implement is to drop a file. SQL Server will not allow you to drop a file that is currently in use within the database. Dropping a file has to be done in two steps: first the file has to be emptied using the EMPTYFILE option of DBCC SHRINKFILE, and then it can be removed using the ALTER DATABASE command.
In Demonstration 2B, you will see examples of each of these operations.
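The two-step file removal described above can be sketched as follows. SalesDB and the logical file name SalesDB_dat_2 are hypothetical:

```sql
-- Step 1: move the file's contents onto the other files in the same filegroup.
DBCC SHRINKFILE (N'SalesDB_dat_2', EMPTYFILE);
-- Step 2: remove the now-empty file from the database.
ALTER DATABASE SalesDB REMOVE FILE SalesDB_dat_2;
```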
Expanding and Shrinking Database Files
Key Points
By default, SQL Server automatically expands a database according to growth parameters defined when the database files were created. You can also manually expand a database by allocating additional space to an existing database file or by creating a new file. You may have to expand the data or transaction log space if the existing files are becoming full.
If a database has already exhausted the space allocated to it and it cannot grow a data file automatically, error 1105 is raised. (The equivalent error number for the inability to grow a transaction log file is 9002.) This can happen if the database is not set to grow automatically or if there is not enough disk space on the hard drive.
Adding Files
One option for expanding the size of a database is to add files to it. You can add files using either the GUI in SSMS or the ALTER DATABASE … ADD FILE command in T-SQL. You will see an example that shows the addition of a file in Demonstration 2B.
Growing Files
When expanding a database, you must increase its size by at least 1MB. Ideally, any file size increase should be much larger than this; increases of 100MB or more are common.
When a database is expanded, the new space is immediately made available to either the data or transaction log file, depending on which file was expanded. When you expand a database, you should specify the maximum size to which the file is permitted to grow. This prevents the file from growing until disk space is exhausted. To specify a maximum size for the file, use the MAXSIZE parameter of the ALTER DATABASE statement, or use the Restrict filegrowth (MB) option when you use the Properties dialog box in SQL Server Management Studio to expand the database.
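Growing a file manually with a growth cap can be sketched as follows; the database name, logical file name, and sizes are illustrative only:

```sql
-- Expand an existing file and restrict its further growth.
ALTER DATABASE SalesDB
MODIFY FILE ( NAME = SalesDB_dat,
              SIZE = 500MB,       -- new size; must be larger than the current size
              MAXSIZE = 2000MB ); -- cap autogrowth so disk space is not exhausted
```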
Transaction Log
If the transaction log is not set up to expand automatically, it can run out of space when certain types of activity occur in the database. In addition to expanding the size of the transaction log, the log file can be truncated. Truncating the log purges the file of inactive, committed transactions and allows the SQL Server Database Engine to reuse this unused part of the transaction log. If there are active transactions, the log file might not be able to be truncated, and expanding the log file might be the only available option.
Shrinking a Database
Each file within a database can be reduced in size by removing unused pages. Although the Database Engine will reuse space effectively, there are times when a file no longer needs to be as large as it once was. Shrinking the file may then become necessary, but shrinking should be considered a rarely used option. Both data and transaction log files can be reduced, or shrunk. The database files can be shrunk manually, either as a group or individually, or the database can be set to shrink automatically at specified intervals.
Methods for Shrinking
You can shrink a database or specific database files using the DBCC SHRINKDATABASE and DBCC SHRINKFILE commands. DBCC SHRINKFILE is preferred, as it provides much more control over the operation than DBCC SHRINKDATABASE.
Note Shrinking a file usually involves moving pages within the file, which can take a long time.
Regular shrinking of files tends to lead to regrowth of files. For this reason, even though SQL Server provides an option to automatically shrink databases, it should only be used in rare situations: for most databases, enabling this option will cause substantial fragmentation issues on the disk subsystem. It is best practice to perform any form of shrink operation only if absolutely needed.
TRUNCATEONLY
One additional available option is the TRUNCATEONLY option of DBCC SHRINKFILE, which releases all free space at the end of the file to the operating system but does not perform any page movement inside the file. The data file is shrunk only to the last allocated extent. This option often does not shrink the file as effectively, but it is less likely to cause substantial fragmentation and is a much faster operation.
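Both forms of DBCC SHRINKFILE can be sketched as follows; the logical file name and target size are hypothetical:

```sql
-- Shrink a data file toward a 200MB target by moving pages within the file.
DBCC SHRINKFILE (N'SalesDB_dat', 200);

-- Alternatively, release only the free space at the end of the file
-- (faster, no page movement, shrinks only to the last allocated extent).
DBCC SHRINKFILE (N'SalesDB_dat', TRUNCATEONLY);
```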
Question: What are the two basic strategies for expanding a database manually?
Demonstration 2B: Altering Databases
Demonstration Steps
1. If Demonstration 2A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_04_PRJ\10775A_04_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
• Open the 21 – Demonstration 2A.sql script file and follow the instructions contained within it.
2. Open the 22 – Demonstration 2B.sql script file.
3. Follow the instructions contained within the comments of the script file.
Working with Filegroups
Key Points
Filegroups are named collections of data files and are used to simplify data placement and administrative tasks such as backup and restore operations. Using files and filegroups may improve database performance, because it lets a database be created across multiple disks, multiple disk controllers, or RAID systems.
File Placement
Files and filegroups enable data placement, because a table or an index can be created in a specific filegroup. When using a statement such as CREATE TABLE, a parameter can be supplied to specify the filegroup that the object should be created on. This can improve performance, because the I/O for specific tables can be directed at specific disks. For example, a heavily used table can be put on a single file in one filegroup, located on one disk, and the other less heavily accessed tables in the database can be put on the other files in another filegroup, located on other disks.
Note File placement is an advanced topic. If it is not planned and tested carefully, it is also possible to decrease system performance by creating new bottlenecks, because I/O loads are forced onto specific filegroups and files.
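Placing a table on a specific filegroup can be sketched as follows; the table and the USERDATA filegroup are hypothetical names, and the filegroup must already exist in the database:

```sql
-- Direct a heavily used table's I/O to the files of a specific filegroup.
CREATE TABLE dbo.OrderDetails
( OrderID   int NOT NULL,
  ProductID int NOT NULL,
  Quantity  int NOT NULL
) ON USERDATA;  -- hypothetical filegroup; omit ON to use the default filegroup
```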
Proportional Fill
Filegroups use a proportional fill strategy across all the files within each filegroup. As data is written to the filegroup, the SQL Server Database Engine writes an amount of data proportional to the free space in each file within the filegroup, instead of writing all the data to the first file until it is full. As soon as all the files in a filegroup are full, the Database Engine automatically expands one file at a time in a round-robin manner to allow for more data, provided that the database is set to grow automatically.
Filegroups and Other Features
Filegroups can be used in combination with partitioned tables and indexes, which are advanced topics out of scope of this course.
One filegroup is designated as the default filegroup. When objects are created without specifying a filegroup, the default filegroup is used. Unless you change the default filegroup, the PRIMARY filegroup will be the default.
Filegroups can also be created as read-only filegroups. Read-only filegroups provide the following benefits:
• They can be compressed (using NTFS compression).
• During recovery, you do not need to apply logs to recover a read-only filegroup.
• The data is protected from accidental modifications.
• Archive data can be stored on cheaper storage.
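Changing the default filegroup and marking an archive filegroup read-only can be sketched as follows; the database and filegroup names are hypothetical:

```sql
-- Make USERDATA the default filegroup, so new objects are created there
-- unless a filegroup is specified explicitly.
ALTER DATABASE SalesDB MODIFY FILEGROUP USERDATA DEFAULT;

-- Mark an archive filegroup read-only; no other users may be connected
-- to the database while this property is changed.
ALTER DATABASE SalesDB MODIFY FILEGROUP ARCHIVE READ_ONLY;
```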
Question: What is the advantage of storing archive data on separate filegroups?
Demonstration 2C: Filegroups
Demonstration Steps
1. If Demonstration 2A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_04_PRJ\10775A_04_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
• Open and execute the 21 – Demonstration 2A.sql script file.
2. Open the 23 – Demonstration 2C.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lesson 3
Moving Database Files
There is often a need to move databases by moving database files. Both user databases and system databases can be moved, but each requires a slightly different approach. In this lesson, you will learn how to move both types of database.
Particular caution needs to be used when moving system databases, as it is possible to stop the operation of SQL Server if these operations are not performed properly.
You may also need to copy a database. SSMS provides a Copy Database Wizard for this purpose. You will learn about the capabilities of this wizard.
Objectives
After completing this lesson, you will be able to:
• Describe detach and attach functionality.
• Move user database files.
• Move system database files.
• Explain the functions offered by the Copy Database Wizard.
Overview of Detach and Attach
Key Points
Databases can be detached using SSMS or sp_detach_db. Detaching a database does not remove the data from the data files or remove the data files themselves. All that is removed are the metadata entries for the database. The detached database will no longer appear in the list of databases shown in SSMS or reported by the sys.databases system view. The detached data files can be moved or copied to another instance and reattached.
UPDATE STATISTICS
SQL Server maintains a set of statistics on the distribution of data within tables and indexes. As part of the detach process, an option is provided to perform an UPDATE STATISTICS operation on table and index statistics prior to the detach. While this might be useful if the database is to be reattached as a read-only database, in general it is not a good option to use while detaching a database.
Detachable Databases
Not all databases can be detached. Databases that are replicated, mirrored, or in a suspect state cannot be detached.
Note Replicated and mirrored databases are advanced topics out of scope for this course.
A more common problem that makes a database unable to be detached is that connections are open to the database at the time the detach operation is attempted. All connections must be dropped before detaching the database. SQL Server Management Studio offers an option to force connections to be dropped during this operation.
Attaching Databases
SSMS offers an option to attach databases. Databases can also be attached using the CREATE DATABASE … FOR ATTACH command.
Note You may find many references to the sp_attach_db and sp_attach_single_file_db commands. These are older procedures that have been replaced by the FOR ATTACH option of the CREATE DATABASE command. Note also that there is no equivalent replacement for the sp_detach_db procedure.
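The detach and attach sequence can be sketched as follows; SalesDB and the file paths are hypothetical, and @skipchecks = 'true' skips the statistics update discussed above:

```sql
-- On the source instance: detach, skipping the statistics update.
EXEC sp_detach_db @dbname = N'SalesDB', @skipchecks = N'true';

-- ...move or copy the files, then on the target instance:
CREATE DATABASE SalesDB
ON ( FILENAME = 'D:\Data\SalesDB.mdf' ),
   ( FILENAME = 'L:\Logs\SalesDB.ldf' )
FOR ATTACH;
```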
A common problem when databases are reattached is that database users can become "orphaned". You will see how to deal with this problem in a later module.
Question: Why might updating all statistics not be a good option before detaching a database?
Moving User Database Files
Key Points
Moving the files of a database requires the database to be taken offline, so moving a database should be performed in a maintenance window.
Options for Moving Databases
In SSMS, the only option available for moving database files is to detach the database, move the data files to their new location, and then attach them back into the instance. While detach and attach can be used to move user database files, the ALTER DATABASE statement can also be used to move database files and is preferred, unless the database is being moved to another server instance.
The process of moving user database files requires the use of the logical names of the database files. You can see the logical names of existing files by executing the following code:
SELECT * FROM sys.database_files;
The use of detach and attach for moving user database files across instances will be shown in the next demonstration.
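Within the same instance, the preferred ALTER DATABASE approach can be sketched as follows; the database name, logical file name, and paths are hypothetical:

```sql
-- 1. Update the metadata with the new path (the file is not moved yet).
ALTER DATABASE SalesDB
MODIFY FILE ( NAME = SalesDB_dat, FILENAME = 'E:\NewData\SalesDB.mdf' );

-- 2. Take the database offline.
ALTER DATABASE SalesDB SET OFFLINE;

-- 3. Move the physical file to E:\NewData using the operating system.

-- 4. Bring the database back online.
ALTER DATABASE SalesDB SET ONLINE;
```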
Question: Why is ALTER DATABASE preferred over detaching and attaching the database?
Demonstration 3A: Detach and Attach
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_04_PRJ\10775A_04_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 31 – Demonstration 3A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Moving System Database Files
Key Points
All system databases except the resource database can be moved to new locations to help balance I/O load. The process for moving the master database is different from the process for the other databases.
Moving the master Database
The steps involved in moving the master database are as follows:
1. Open SQL Server Configuration Manager.
2. In the SQL Server Services node, right-click the instance of SQL Server, choose Properties, and click the Startup Parameters tab.
3. Edit the Startup Parameters values to point to the planned location for the master database data (-d parameter) and log (-l parameter) files.
4. Stop the instance of SQL Server.
5. Move the master.mdf and mastlog.ldf files to the new location.
6. Restart the instance of SQL Server.
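After step 3, the edited startup parameters might look like the following fragment; the E:\SystemData paths are illustrative assumptions only:

```
-dE:\SystemData\master.mdf
-lE:\SystemData\mastlog.ldf
```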
Moving Other System Databases
Other system databases (except the resource database, which cannot be moved) are moved by this process:
1. For each file to be moved, execute ALTER DATABASE … MODIFY FILE as for user databases.
2. Stop the instance of SQL Server.
3. Move the files to the new location.
4. Restart the instance of SQL Server.
In the next demonstration, you will see how to move the tempdb database and how to increase the number of files that it uses.
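For tempdb specifically, step 1 can be sketched as follows, using the D:\MKTG and L:\MKTG folders from the lab scenario (tempdev and templog are tempdb's default logical file names):

```sql
-- Point tempdb's files at the new folders; tempdb is recreated on the new
-- paths when the instance restarts, and the old files can then be deleted.
ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, FILENAME = 'D:\MKTG\tempdb.mdf' );
ALTER DATABASE tempdb
MODIFY FILE ( NAME = templog, FILENAME = 'L:\MKTG\templog.ldf' );
```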
Question: What is the biggest concern when performing these tasks?
Copying Databases
Key Points
There are several ways to copy databases to other locations, including to other instances:
• You can detach a database, copy the files to new locations, and reattach both the original and new database files.
• You can back up a database and then restore it using the same database name (on a different server instance) or using another database name (on the same server instance).
• You can use the Copy Database Wizard.
The main disadvantage of detach and attach, and of backup and restore, is that only the database is copied; the DBA needs to take care of all dependent objects such as logins, jobs, user-defined error messages, and so on.
Restoring a database on another instance has the advantage that backups should be created regularly anyway and can therefore be restored easily without affecting the source system. Performing the restoration is itself a good test of the current backup strategy. Also, the source database stays online during the whole operation. However, restoring a database still leaves the problem of recreating dependent objects.
Copy Database Wizard
The Copy Database Wizard is a good way to work around this restriction. It provides an easy-to-use wizard that can move or copy the database with all dependent objects, without the need for additional scripting. In addition, it is possible to schedule the copy operation.
The wizard provides two methods for copying or moving the database. It can be configured to use detach and attach, which is the fastest option but has the downside that the source database needs to be offline while the detach/copy/attach is occurring. The second method uses the SQL Server Management Objects (SMO) programming library methods to create the objects and transfer the data, which is slower but has the advantage that the source database can be kept online while copying.
Running the Copy Database Wizard requires sysadmin privileges on both instances, and a network connection must be present.
Question: What might be the main disadvantage of copying a database using detach and attach?
Demonstration 3B: Moving and Reconfiguring tempdb
Demonstration Steps
1. If Demonstration 1A was not performed:
a. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
b. In the virtual machine, open the 10775A_04_PRJ SQL Server script project within SQL Server Management Studio.
c. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 32 – Demonstration 3B.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lab 4: Working with Databases
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_04_PRJ\10775A_04_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
Now that the Proseware instance of SQL Server has been installed and configured on the server, a number of additional database configurations need to be performed. As the database administrator, you need to perform these configuration changes.
You need to create a new database on the server, based on requirements from an application vendor's specifications. A client has sent you a database that needs to be installed on the Proseware instance. Instead of sending you a backup, they have sent a detached database and log file. You need to attach the database to the Proseware instance.
A consultant has also provided recommendations regarding tempdb configuration that you need to review and implement if appropriate.
Supporting Documentation
tempdb Size Requirement (For Exercise 1)
File | Size (in MB)
Data | 30
Log | 10
RateTracking <strong>Database</strong> Specification<br />
Item Requirement<br />
Database Name RateTracking
Primary Data File Logical name = RateTracking_dat<br />
File name = RateTracking.mdf<br />
Folder = D:\MKTG<br />
Initial size = 10MB<br />
Maximum file size = 100MB<br />
Autogrowth amount = 10MB<br />
Filegroup = PRIMARY<br />
Log File Logical name = RateTracking_log<br />
File name = RateTracking.ldf<br />
Folder = L:\MKTG<br />
Initial size = 20MB<br />
Maximum file size = unlimited<br />
Autogrowth amount = 20MB<br />
Filegroup = Not Applicable<br />
Secondary Data File #1 Logical name = RateTracking_dat_1<br />
File name = RateTracking_1.ndf<br />
Folder = D:\MKTG<br />
Initial size = 20MB<br />
Maximum file size = 100MB<br />
Autogrowth amount = 10MB<br />
Filegroup = USERDATA<br />
Secondary Data File #2 Logical name = RateTracking_dat_2<br />
File name = RateTracking_2.ndf<br />
Folder = D:\MKTG<br />
Initial size = 20MB<br />
Maximum file size = 100MB<br />
Autogrowth amount = 10MB<br />
Filegroup = USERDATA
(continued)<br />
Item Requirement<br />
Secondary Data File #3 Logical name = RateTracking_dat_3<br />
File name = RateTracking_3.ndf<br />
Folder = D:\MKTG<br />
Initial size = 200MB<br />
Maximum file size = 500MB<br />
Autogrowth amount = 50MB<br />
Filegroup = ARCHIVE<br />
Secondary Data File #4 Logical name = RateTracking_dat_4<br />
File name = RateTracking_4.ndf<br />
Folder = D:\MKTG<br />
Initial size = 200MB<br />
Maximum file size = 500MB<br />
Autogrowth amount = 50MB<br />
Filegroup = ARCHIVE<br />
Default Filegroup USERDATA<br />
tempdb Requirements From The Consultant (For Exercise 4)<br />
1. Move the tempdb primary data file to the folder D:\MKTG.<br />
2. Move the tempdb log file to the folder L:\MKTG.<br />
3. Add three additional files to tempdb as per the following table:<br />
Filename Requirements<br />
Secondary Data File #1 Logical name = tempdev2<br />
File name = tempdb_file2.ndf<br />
Folder = D:\MKTG<br />
Initial size = 20MB<br />
Maximum file size = unlimited<br />
Autogrowth amount = 10MB<br />
Secondary Data File #2 Logical name = tempdev3<br />
File name = tempdb_file3.ndf<br />
Folder = D:\MKTG<br />
Initial size = 20MB<br />
Maximum file size = unlimited<br />
Autogrowth amount = 10MB<br />
Secondary Data File #3 Logical name = tempdev4<br />
File name = tempdb_file4.ndf<br />
Folder = D:\MKTG<br />
Initial size = 20MB<br />
Maximum file size = unlimited<br />
Autogrowth amount = 10MB<br />
Exercise 1: Adjust tempdb Configuration<br />
Scenario<br />
You will adjust the current configuration of the tempdb database.<br />
The main tasks for this exercise are as follows:<br />
1. Adjust the size of tempdb.<br />
2. Check that the tempdb size is still correct after a restart.<br />
Task 1: Adjust the size of tempdb<br />
• Review the requirement for tempdb size in the Supporting Documentation.<br />
• Adjust the tempdb size based on the requirement in the supporting documentation.<br />
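The adjustment can be sketched in T-SQL. This is one possible approach, not the lab answer key; it assumes the default tempdb logical file names (tempdev and templog) and the sizes from the Supporting Documentation (data 30 MB, log 10 MB):

```sql
-- Sketch only: assumes default tempdb logical file names.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 30MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10MB);
```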
Task 2: Check that the tempdb size is still correct after a restart<br />
• Restart the Proseware instance of SQL Server by using SQL Server Configuration Manager.
• Check that tempdb is still the correct size.<br />
Results: After this exercise, you should have inspected and configured the tempdb database.<br />
Exercise 2: Create the RateTracking <strong>Database</strong><br />
Scenario<br />
You will create a new database named RateTracking as per a supplied set of specifications.<br />
The main tasks for this exercise are as follows:<br />
1. Create the database.<br />
2. Create the required filegroups and files.<br />
3. Change the default filegroup for the database.<br />
Task 1: Create the database<br />
• Review the supplied requirements in the supporting documentation for the exercise.<br />
• Create a new RateTracking database based on the requirements.<br />
Task 2: Create the required filegroups and files<br />
• Review the supplied requirements in the supporting documentation for the required files and<br />
filegroups.<br />
• Create the required filegroups and files.<br />
Task 3: Change the default filegroup for the database<br />
• Review the supplied requirements in the supporting documentation for the default filegroup.<br />
• Modify the default filegroup.<br />
Results: After this exercise, you should have created a new RateTracking database with multiple<br />
filegroups.
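The specification in the Supporting Documentation could be met with T-SQL along these lines. This is a sketch rather than the lab answer key; the file paths and sizes are taken from the specification tables above:

```sql
CREATE DATABASE RateTracking
ON PRIMARY
    (NAME = RateTracking_dat,   FILENAME = 'D:\MKTG\RateTracking.mdf',
     SIZE = 10MB,  MAXSIZE = 100MB, FILEGROWTH = 10MB),
FILEGROUP USERDATA
    (NAME = RateTracking_dat_1, FILENAME = 'D:\MKTG\RateTracking_1.ndf',
     SIZE = 20MB,  MAXSIZE = 100MB, FILEGROWTH = 10MB),
    (NAME = RateTracking_dat_2, FILENAME = 'D:\MKTG\RateTracking_2.ndf',
     SIZE = 20MB,  MAXSIZE = 100MB, FILEGROWTH = 10MB),
FILEGROUP ARCHIVE
    (NAME = RateTracking_dat_3, FILENAME = 'D:\MKTG\RateTracking_3.ndf',
     SIZE = 200MB, MAXSIZE = 500MB, FILEGROWTH = 50MB),
    (NAME = RateTracking_dat_4, FILENAME = 'D:\MKTG\RateTracking_4.ndf',
     SIZE = 200MB, MAXSIZE = 500MB, FILEGROWTH = 50MB)
LOG ON
    (NAME = RateTracking_log,   FILENAME = 'L:\MKTG\RateTracking.ldf',
     SIZE = 20MB,  MAXSIZE = UNLIMITED, FILEGROWTH = 20MB);
GO
-- Make USERDATA the default filegroup, as the specification requires.
ALTER DATABASE RateTracking MODIFY FILEGROUP USERDATA DEFAULT;
```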
Exercise 3: Attach the OldProspects <strong>Database</strong><br />
Scenario<br />
A client has sent you a database that needs to be installed on the Proseware instance. Instead of sending<br />
you a backup, they have sent a detached database and log file. You need to attach the database to the<br />
Proseware instance.<br />
The main tasks for this exercise are as follows:<br />
1. Copy the database files.<br />
2. Attach the database to the Proseware instance.
Task 1: Copy the database files<br />
• Copy the files to new folders as per the table below:<br />
Filename Source Folder Destination Folder<br />
OldProspects.mdf D:\10775A_Labs\10775A_04_PRJ D:\MKTG<br />
OldProspects.ldf D:\10775A_Labs\10775A_04_PRJ L:\MKTG<br />
Task 2: Attach the database to the Proseware instance
• Attach the OldProspects database to the Proseware instance.
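One way to attach the copied files is CREATE DATABASE ... FOR ATTACH (the older sp_attach_db procedure is deprecated). A sketch, using the destination paths from the table above:

```sql
CREATE DATABASE OldProspects
ON (FILENAME = 'D:\MKTG\OldProspects.mdf'),
   (FILENAME = 'L:\MKTG\OldProspects.ldf')
FOR ATTACH;
```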
Results: After this exercise, you should have attached the client's database to the Proseware instance.
Challenge Exercise 4: Add Multiple Files to tempdb (Only if time permits)<br />
Scenario<br />
A consultant has also provided recommendations regarding tempdb configuration that you need to<br />
review and implement.<br />
The main tasks for this exercise are as follows:<br />
1. Review the tempdb file requirements.<br />
2. Move existing files.<br />
3. Add new files.<br />
4. Restart the server and check file locations.<br />
Task 1: Review the tempdb file requirements<br />
• In the Supporting Documentation, review the tempdb Requirements From The Consultant section.
Task 2: Move existing files<br />
• Move the existing tempdb data and log files to the required locations as specified in the supporting<br />
documentation.<br />
Task 3: Add new files<br />
• Add the additional tempdb files as required in the supporting documentation.
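Both moving and adding tempdb files is done with ALTER DATABASE; file moves take effect only after the instance restarts. A sketch covering the two moves and the first additional file (the remaining files follow the same pattern; the default logical names tempdev and templog are assumed):

```sql
-- Move the existing primary data and log files (takes effect at restart).
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, FILENAME = 'D:\MKTG\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE
    (NAME = templog, FILENAME = 'L:\MKTG\templog.ldf');
-- Add the first of the three additional files from the consultant's table.
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'D:\MKTG\tempdb_file2.ndf',
     SIZE = 20MB, MAXSIZE = UNLIMITED, FILEGROWTH = 10MB);
```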
Task 4: Restart the server and check file locations<br />
• Restart the Proseware instance of SQL Server.
• View the properties of the tempdb database and ensure that the list of files matches the requirements.
Results: After this exercise, you should have reconfigured tempdb as per the requirements.
Module Review and Takeaways<br />
Review Questions<br />
1. Why is it typically sufficient to have one log file in a database?<br />
2. Why should no data other than temporary data be stored in the tempdb system database?
3. What operations can be performed online on database files?<br />
Best Practices
1. Plan and test your file layout carefully.<br />
2. Separate data and log files on the physical level.<br />
3. Keep the data files of a database at the same size.<br />
4. Create the database with an appropriate initial size so that it does not have to be expanded too often.
5. Shrink files only if absolutely necessary.
Module 5<br />
Understanding <strong>SQL</strong> Server <strong>2012</strong> Recovery Models<br />
Contents:<br />
Lesson 1: Backup Strategies 5-3<br />
Lesson 2: Understanding <strong>SQL</strong> Server Transaction Logging 5-12<br />
Lesson 3: Planning a <strong>SQL</strong> Server Backup Strategy 5-22<br />
Lab 5: Understanding <strong>SQL</strong> Server Recovery Models 5-32<br />
5-2 Understanding SQL Server 2012 Recovery Models
Module Overview<br />
One of the most important aspects of a database administrator's role is ensuring that organizational data is backed up reliably, so that it is possible to recover the data if (or perhaps when) a failure occurs.
Even though the computing industry has known about the need for reliable backup strategies for decades, and has discussed that need at great length, tragic stories of data loss are still commonplace. A further problem is that even when the strategies that have been put in place work as designed, the outcomes still regularly fail to meet the organization's operational requirements.
In this module, you will consider how to create a strategy that is aligned with organizational needs and<br />
see how the transaction logging capabilities of <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> can help you to achieve an<br />
appropriate outcome.<br />
Objectives<br />
After completing this module, you will be able to:
• Describe the critical concepts surrounding backup strategies.<br />
• Explain the transaction logging capabilities within the <strong>SQL</strong> Server database engine.<br />
• Plan a <strong>SQL</strong> Server backup strategy.
Lesson 1<br />
Backup Strategies<br />
This lesson is the first in a series of lessons that provide you with the knowledge you need to plan and<br />
implement an appropriate backup strategy for your organizational data that is stored in <strong>SQL</strong> Server.<br />
After discussing your existing experiences with backup strategies, you will consider the key criteria that<br />
should be part of a backup strategy. You will also consider how long backups should be retained, how the<br />
backups should be tested, and what types of media the backups should be held upon.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Determine an appropriate backup strategy.<br />
• Choose appropriate backup media.<br />
• Design a retention policy for backups.
Discussion: Previous Experience with Backup Strategies<br />
Key Points<br />
In this discussion, members of your class will share their experiences with backup strategies. In particular,<br />
consider (but do not limit yourself to) the following questions:<br />
• What types of backup have you used?<br />
• How often do you perform backups?<br />
• Who is responsible for planning and executing a backup strategy?<br />
• How often are your backups tested?<br />
• Do you use third party tools for backups?<br />
• What type of backup media do you use?
Determining an Appropriate Backup Strategy<br />
Key Points<br />
<strong>SQL</strong> Server provides a variety of backup types. No single backup type will usually be sufficient for the<br />
needs of an organization when forming a backup strategy. More commonly, a combination of backup<br />
types is required to achieve an appropriate outcome. Later in this module, and in the next module, you<br />
will learn about the specific types of backups that <strong>SQL</strong> Server provides.<br />
Key Criteria<br />
When designing a backup strategy, there is always a tradeoff between the level of safety that is<br />
guaranteed and the cost of the solution. If you ask any business about how much data they can afford to<br />
lose, you will almost certainly be told that they cannot afford to lose any data, in any circumstances. Yet,<br />
while zero data loss is an admirable goal, it is not an affordable or realistic goal. For this reason, there are<br />
two objectives that need to be established when discussing a backup strategy: a recovery time objective<br />
(RTO) and a recovery point objective (RPO). Part of the strategy might also involve the retrieval of data<br />
from other locations where copies of the data are stored.<br />
Recovery Time Objective<br />
There is little point in having perfectly recoverable data if the time taken to recover the data is too long. A<br />
backup strategy needs to have a recovery time objective (RTO).<br />
For example, consider the backup requirements of a major online bank. If the bank was unable to access<br />
any of the data in their systems, how long could the bank tolerate this situation?<br />
Now imagine that the bank was making full copies of all its data, yet a full restore of the data would take<br />
two weeks to complete. What impact would a two week outage have on the bank? The more important<br />
question is how long would an interruption to data access need to be, before the bank ceased to be<br />
viable as a bank?
The key message with RTO is that a plan that involves quick recovery with a small data loss might be more<br />
palatable to an organization than a plan that reduces data loss but takes much longer to implement.<br />
Another key issue is that the time taken to restore data might also involve finding the correct backup<br />
media, finding a person with the authority to perform the restore, finding documentation related to the<br />
restore, and so on.<br />
Recovery Point Objective<br />
Once a system has been recovered, hopefully in a timely manner, the next important question relates to<br />
how much data will have been lost. This is represented by the recovery point objective (RPO).<br />
For example, while a small business might conclude that restoring a backup from the previous night, with<br />
the associated loss of up to a day's work is an acceptable risk tradeoff, a large business might see the<br />
situation very differently. It is common for large corporations to plan for zero committed data loss. This<br />
means that work that was committed to the database must be recovered but that it might be acceptable<br />
to lose work that was in process at the time a failure occurred.<br />
Mapping to Business Strategy<br />
The most important aspect of creating a backup strategy is that the backup strategy must be designed in<br />
response to the business requirements and strategy. The backup strategy also needs to be communicated<br />
to the appropriate stakeholders within the organization. It is important to make sure that the expectations<br />
of the business users are managed, in line with the agreed strategy.<br />
Organizations often deploy large numbers of databases. The RPO and RTO for each database might be<br />
different. This means that database administrators will often need to work with different backup strategies<br />
for different databases that they are managing. Most large organizations have a method of categorizing<br />
the databases and applications in terms of importance to the core functions of the organization.<br />
The business requirements will determine all aspects of the backup strategy, including how frequently<br />
backups need to occur, how much data is to be backed up each time, the type of media that the backups<br />
will be held upon, and the retention and archival plans for the media.<br />
Question: What would be the likely RPO and RTO requirements for the most important<br />
databases in your organization?
Choosing Appropriate Backup Media<br />
Key Points<br />
A backup set contains the backup from a single, successful backup operation performed by <strong>SQL</strong> Server.<br />
One or more backup sets are written to a media set, which is represented by a file at the operating system<br />
or device level.<br />
It is important to realize that, because a single file (or media set) can contain more than one backup, when<br />
it is time to restore a backup, you need to ensure that you are restoring the intended backup from within<br />
the file.<br />
Physical Backup Devices<br />
<strong>SQL</strong> Server supports the creation of backups to disk files. UNC file paths are supported for disk locations<br />
so that backups can be written to network file shares.<br />
Earlier versions supported writing backups directly to tape but that option is now deprecated and should<br />
not be used for new development. It is generally considered better practice to write a backup to disk first,<br />
and to later copy the disk backup to tape if required.<br />
The backups that <strong>SQL</strong> Server creates are encoded in a format known as Microsoft Tape Format (MTF)<br />
which is a common format that is used by other Microsoft products in addition to <strong>SQL</strong> Server. This means<br />
that <strong>SQL</strong> Server backups can be combined with other backups, such as operating system backups, on the<br />
same media sets.<br />
Note Compressed <strong>SQL</strong> Server backups cannot share media with other types of backup.
Logical Backup Devices<br />
It is possible to specify the operating system file that a backup should be written to directly in the BACKUP command, or in the GUI within SSMS. However, SQL Server also allows a degree of indirection through the creation of logical backup devices. A logical backup device is an optional user-defined name that refers to a specific output location.
By using logical backup devices, an application can be designed to always send backups to the logical<br />
backup device, instead of to a specific physical location.<br />
For example, an HR application could be designed to create backups on a logical backup device named<br />
HRBackupDevice. A database administrator could then later determine where the backups should be<br />
physically sent. The administrator could later decide the name of a file that should hold the backups from<br />
the HR application. No changes would need to be made to the HR application to accommodate the<br />
change in backup file location, as the application would always back up to the same logical backup<br />
device.<br />
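The HR example above could be implemented as follows. This is a sketch: the device name comes from the example, while the database name and the physical path are illustrative assumptions:

```sql
-- Create a logical backup device that maps to a physical file.
EXEC sp_addumpdevice 'disk', 'HRBackupDevice', 'D:\Backups\HR.bak';
-- The application backs up to the logical name, not the file path,
-- so the physical location can change without changing the application.
BACKUP DATABASE HR TO HRBackupDevice;
```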
Backup Mirroring and Striping<br />
A single backup can target more than one backup device. Up to 64 devices are supported for a single<br />
media set. If more than one backup device is used, the backups can be mirrored or striped.<br />
With a mirrored backup (only available in Enterprise edition), the same backup data is written to each<br />
backup device concurrently. This option provides for redundancy of the physical backup device. Only one<br />
of the devices needs to be present during a restore process.<br />
Note While mirrored backups help provide fault tolerance regarding media failure after<br />
the backup completes, mirrored backups are actually a fault-intolerant option during the<br />
backup process. If <strong>SQL</strong> Server cannot write to one of the mirrored devices, the entire backup<br />
fails.<br />
Striping of backups causes a single backup to be written across a set of backup devices. Each backup<br />
device receives only part of the backup. All backup devices need to be present when a restore of the data<br />
is required.<br />
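Both options can be sketched in T-SQL. The database name and file paths here are illustrative, and MIRROR TO requires Enterprise edition:

```sql
-- Striped backup: the backup is split across two devices;
-- both files are needed to restore.
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\Sales_stripe1.bak',
   DISK = 'E:\Backups\Sales_stripe2.bak';

-- Mirrored backup: identical copies are written to each device;
-- either file alone is sufficient to restore.
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\Sales.bak'
MIRROR TO DISK = 'E:\Backups\Sales_mirror.bak'
WITH FORMAT;
```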
Required Privileges<br />
The ability to back up a <strong>SQL</strong> Server database requires certain permissions that are discussed in Modules 10<br />
and 11.<br />
Question: What might be the purpose of striping backups to more than one backup device<br />
on a disk?
Determining a Retention and Testing Policy for Backups<br />
Key Points<br />
A backup strategy must include plans for retention of backups, and for the locations that the media or<br />
backups should be retained in.<br />
It is too common for organizations to regularly perform backup operations, but when the time comes to<br />
restore the backups, a restore is not possible. Most of these problems would be alleviated by a good<br />
retention and testing plan.<br />
The sections below describe the most common problems seen in practice, together with an appropriate avoidance measure for each:
Insufficient Copies of Backups<br />
Your organization will be dependent on the quality of backups should the need arise to restore the<br />
backups. The more copies of backups that you hold, and the more pieces of media that are holding all the<br />
required data, the better the chance you have of being able to recover.<br />
The worst offence is generally regarded as creating a backup over your most recent backup. If the system<br />
fails during the backup, you will often then have lost both your data and your backup.<br />
As a good example of this, <strong>Database</strong> Administrator A recently posted on a Microsoft <strong>SQL</strong> Server forum,<br />
seeking help. Her last backup was one year ago and she had inadvertently performed a restore operation<br />
instead of a backup operation using this one year old file. There was little that anyone could do to assist at<br />
that point.<br />
Avoidance strategy: Multiple copies of backups.
Insufficient Data on the Backups<br />
Company A performed regular backups, yet no testing of recovery was ever made and the first time that a<br />
real recovery was attempted, it was discovered that not all files that needed to be backed up were in fact<br />
being backed up.<br />
Avoidance strategy: Regular testing that all required data can be reconstructed from backups.
Unreadable Backups<br />
Company B performed regular backups but did not test them. When recovery was attempted, none of the<br />
backups were readable. This is often caused by hardware failures but it can be caused by inappropriate<br />
storage of media.<br />
Avoidance strategy: Regular backup recovery testing.<br />
Unavailable Hardware<br />
Company C purchased a special tape drive to perform their backups. When the time came to restore the<br />
backups, that special device no longer worked and no other device within the organization could read the<br />
backups, even if the backups were valid.<br />
Avoidance strategy: Regular backup recovery testing.<br />
Old Hardware<br />
Company D performed regular backups and retained their backups for an appropriate retention period.<br />
When the time came to restore the backups, the company no longer possessed equipment that was<br />
capable of restoring the backups.<br />
Avoidance strategy: Regular backup recovery testing, combined with recovery and backup onto current<br />
devices.<br />
Misaligned Hardware<br />
Company E performed regular backups and even tested that they could perform restore operations from<br />
the backups. However, because they tested the restores on the same device that performed the backups,<br />
they did not realize that the device was misaligned and that it was the only device that could read those<br />
backups. At the time that a restore was needed, the device that the backups were performed on had<br />
failed.<br />
Avoidance strategy: Regular backup recovery testing on a separate system and separate physical device.<br />
General Considerations<br />
When a backup strategy calls for multiple types of backups to be performed, it is important to work out<br />
the combination of backups that will be required when a restore is required.<br />
Organizations might need to fulfill legal or compliance requirements regarding the retention of backups.<br />
In most cases, full database backups are kept for a longer period of time than other backup types.<br />
Checking the consistency of databases by using DBCC CHECKDB is a crucial part of database<br />
maintenance, and is discussed later in the course.
As well as deciding how long backups need to be kept, you will need to determine where they are kept.<br />
Part of the RTO needs to consider how long it takes to obtain the physical backup media if it needs to be<br />
restored.<br />
You also need to make sure that backups are complete. Are all files that are needed to recover the system<br />
(including external operating system files) being backed up?
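Part of a testing plan can be automated. For example, RESTORE VERIFYONLY checks that a backup file is readable and complete, although it is not a substitute for performing real test restores on separate hardware (the path here is illustrative):

```sql
-- Verify the backup without restoring it.
RESTORE VERIFYONLY FROM DISK = 'D:\Backups\Sales.bak';
```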
Lesson 2<br />
Understanding <strong>SQL</strong> Server Transaction Logging<br />
The transaction log is the key to the consistency of the <strong>SQL</strong> Server database engine. There are many<br />
configurations that can be made to the server that affect how the transaction log will operate. It is<br />
important for you to learn how the transaction log operates and how to configure the transaction log to<br />
meet your organization's recovery requirements. The first critical configuration that you need to consider is the
the recovery model that is selected for each database. The second critical configuration is the space<br />
required for transaction log activity. Making good choices for both of these configurations will<br />
significantly increase the chance of your systems being able to meet database recovery requirements.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Explain how the <strong>SQL</strong> Server transaction log operates.<br />
• Configure database recovery models.<br />
• Implement capacity planning for transaction logs.<br />
• Configure checkpoint options.
Overview of <strong>SQL</strong> Server Transaction Logs<br />
Key Points<br />
Two of the common requirements for transaction management in database management systems are<br />
atomicity and durability of transactions. Atomicity requires that an entire transaction is committed or that<br />
no work at all is committed. Durability requires that once a transaction is committed, it will survive
system restarts, including those caused by system failures. <strong>SQL</strong> Server uses the transaction log to ensure<br />
both the atomicity and durability of transactions.<br />
The transaction log can be used to roll back transactions that have been partially completed, to ensure<br />
that transactions are not left in a partially completed state. Rolling back a transaction could occur because<br />
of a request from a user or client application (such as the execution of a ROLLBACK TRANSACTION<br />
statement), or because a transaction may have been partially completed at the time of a system failure.<br />
Write Ahead Logging<br />
When <strong>SQL</strong> Server needs to modify the data in a database page, <strong>SQL</strong> Server checks whether or not the<br />
page is present in the buffer cache. If the page is not present, it must first be read into the buffer cache.<br />
<strong>SQL</strong> Server then modifies the page in memory and then writes redo and undo information to the<br />
transaction log. While this write is occurring, the "dirty" page in memory is locked using a simple short-term locking mechanism called a latch, until the write to the transaction log is completed. Later, a
background checkpoint process flushes the dirty pages to the database. This process is known as Write<br />
Ahead Logging (WAL) as all log records are written to the log before affected dirty pages are written to<br />
the data files or the transaction is committed.<br />
The WAL protocol ensures that the database can always be set to a consistent state after a failure. This<br />
recovery process will be discussed in detail in Module 7, but the effect of the process is that transactions<br />
that were committed before the failure occurred are applied to the database and those transactions that<br />
were "in flight" at the time of the failure, where work is partially complete, are undone.<br />
Writing all changes to the log file in advance also makes it possible to rollback transactions if requested.
Transaction Log File Structure<br />
Key Points<br />
<strong>SQL</strong> Server needs to make sure that enough information is available in the transaction logs to process<br />
rollback requests from users or applications, and to recover the database. <strong>SQL</strong> Server also needs to keep<br />
sufficient log file entries to satisfy the needs of other <strong>SQL</strong> Server features that need to access the database<br />
engine, such as replication, and change data capture.<br />
Transaction Log Structure and Virtual Log Files<br />
Transaction logs are written in chronological order in a circular way. Because the transaction log might<br />
need to grow, rather than being structured as a single log file, internally the log file is divided into a set of<br />
virtual log files (VLFs). Virtual log files have no fixed size, and there is no fixed number of virtual log files<br />
for a physical log file. The <strong>Database</strong> Engine chooses the size of the virtual log files dynamically while it is<br />
creating or extending log files, based upon the size of the growth that is occurring, as per the values in<br />
the following table:<br />
Growth Increment                       Number of VLFs
64 MB or less                          4
More than 64 MB, up to 1 GB            8
More than 1 GB                         16
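You can inspect the VLFs of the current database's log with the undocumented but widely used DBCC LOGINFO command; each row returned represents one virtual log file:

```sql
-- One row per virtual log file; the Status column shows
-- whether each VLF is active (2) or reusable (0).
DBCC LOGINFO;
```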
While overwriting the previous contents of the log file, if a point is reached where it would be necessary<br />
to overwrite data that must be retained, <strong>SQL</strong> Server needs to grow the size of the log file. If the log file<br />
has not been configured to allow automatic growth or if the disk volume that the log file is contained on<br />
is full, <strong>SQL</strong> Server will fail the transaction and return an error indicating that the log file is full.<br />
When it is possible to grow the log file, <strong>SQL</strong> Server will allocate new virtual log files, depending upon the<br />
autogrowth size increment that has been chosen in the configuration of the log file.<br />
Note Instant File Initialization (IFI) cannot be used with transaction log files. This means<br />
that transactions can be blocked while the log file growth occurs.<br />
Log File Truncation<br />
There are two ways that a transaction log file can be truncated. Transaction logs are normally truncated as<br />
part of the process of backing up the transaction log. Alternatively, the database may use a recovery model under which the contents of the log file are truncated automatically.
Each time the log file is truncated, only data up to the start of the oldest active transaction can be<br />
truncated. Entries in the log file are logically ordered by a value known as the Log Sequence Number<br />
(LSN). The starting point of the oldest active transaction in the log file is referred to as the MinLSN value.<br />
Log entries with an LSN greater than or equal to the MinLSN value cannot be truncated because they<br />
might be needed for recovery. At each truncation, data can be removed only up to the start of the virtual<br />
log file that contains the lowest of three values: the start of the last checkpoint operation, the MinLSN,<br />
and the start of the oldest transaction that has not yet been replicated (relevant only when using replication).<br />
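To see how a transaction log is divided into VLFs, and which of them are still active, you can run DBCC LOGINFO. This command is undocumented but widely used for exactly this purpose; interpret the output with that caveat in mind:<br />

```sql
-- Returns one row per VLF in the current database's transaction log.
-- A Status value of 2 marks a VLF that is still active and cannot be
-- overwritten or truncated; a Status of 0 marks a reusable VLF.
DBCC LOGINFO;
```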
Note Replication is beyond the scope of this course but it is important to be aware that<br />
the configuration and state of replicated data can affect transaction log truncation.<br />
Question: Why is the write performance to the log file so important for the transaction<br />
performance of a database?
5-16 Understanding SQL Server 2012 Recovery Models<br />
Working with Recovery Models<br />
Key Points<br />
SQL Server has three database recovery models. All three allow a database to be recovered after a disaster,<br />
but they differ in how much work can be lost and in the administrative overhead involved. The database<br />
administrator needs to consider these differences when selecting a model for a database.<br />
Choosing the appropriate recovery model is an important part of your recovery strategy. The recovery<br />
model that you select for your database will determine many factors including:<br />
• Maintenance processing overhead.<br />
• Exposure to potential loss.<br />
• Which backup types are available to you.<br />
When choosing a recovery model for your database, you will need to consider the size of the database,<br />
the potential maintenance overhead, and the level of acceptable risk with regards to potential data loss.<br />
Simple Recovery Model<br />
The simple recovery model minimizes administrative overhead for the transaction log, because the<br />
transaction log is not backed up. The simple recovery model risks significant work-loss exposure if the<br />
database is damaged. If the database is lost, data is recoverable only to the point of the most recent<br />
backup. Therefore, under the simple recovery model, the backup intervals should be short<br />
enough to prevent the loss of significant amounts of data. However, the intervals should be long enough<br />
to keep the backup overhead from affecting production work. The inclusion of differential backups into a<br />
backup strategy based on the simple recovery model can help reduce the overhead.
In earlier versions of <strong>SQL</strong> Server, simple recovery model was referred to as "truncate log on checkpoint".<br />
The name was changed to provide a focus on the recovery options rather than on the process involved in<br />
implementing the option. Each time a checkpoint process occurs, <strong>SQL</strong> Server will automatically truncate<br />
the transaction log up to the end of the VLF before the VLF that contains the MinLSN value. This means<br />
that the only role that the transaction log ever plays is the provision of active transaction log data during<br />
the recovery of a database.<br />
Full Recovery Model<br />
The full recovery model is the standard choice for databases where durability of transactions is necessary,<br />
and it is the default recovery model when SQL Server is installed. The recovery model for new databases is<br />
taken from the recovery model of the model database, which can be changed at sites that wish to use a<br />
different default recovery model.<br />
With full recovery model, log backups are required. This recovery model fully logs all transactions and<br />
retains the transaction log records until after they are backed up. The full recovery model allows a<br />
database to be recovered to the point of failure, assuming that the tail of the log can be backed up after<br />
the failure. The full recovery model also supports an option to restore individual data pages or to restore<br />
to a specific point in time.<br />
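Both checking and changing the recovery model can be done with Transact-SQL. The database name in the second statement below is a hypothetical example:<br />

```sql
-- Inspect the recovery model of every database on the instance.
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a hypothetical database to the full recovery model.
ALTER DATABASE SalesDB SET RECOVERY FULL;
```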
Bulk-logged Recovery Model<br />
This recovery model can reduce the transaction logging requirements for many bulk operations. It is<br />
intended solely as an adjunct to the full recovery model. For example, while executing certain large-scale<br />
bulk operations such as bulk import or index creation, a database can be switched temporarily to the<br />
bulk-logged recovery model. This temporary switch can increase performance, because only extent<br />
allocations are logged, and can reduce log space consumption.<br />
Transaction log backups are still required when using bulk-logged recovery model. Like the full recovery<br />
model, the bulk-logged recovery model retains transaction log records until after they are backed up. The<br />
tradeoffs are bigger log backups and increased work-loss exposure because the bulk-logged recovery<br />
model does not support point-in-time recovery.<br />
Note One potentially surprising outcome is that the log backups can often be larger than<br />
the transaction logs. This is because <strong>SQL</strong> Server retrieves the modified extents from the data<br />
files while performing a log backup for minimally-logged data.<br />
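A typical pattern, sketched here with hypothetical database, table, and path names, is to switch to bulk-logged only for the duration of the bulk operation, switch back, and then take a log backup straight away:<br />

```sql
-- All names and paths below are placeholders for illustration.
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- Perform the large-scale bulk operation, for example a bulk import.
BULK INSERT dbo.SalesStaging FROM 'C:\Loads\sales.csv';

ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Back up the log immediately so point-in-time recovery is available again
-- for transactions from this point forward.
BACKUP LOG SalesDB TO DISK = 'R:\Backups\SalesDB_log.bak';
```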
Question: What type of organization might find the simple recovery model to be adequate?
Capacity Planning for Transaction Logs<br />
Key Points<br />
The recovery model that you choose for a database will have a big impact on how large the log file for<br />
the database will need to be.<br />
In the simple recovery model, SQL Server truncates the log after each checkpoint. In the full and bulk-logged<br />
recovery models, the log is truncated after each log backup, to ensure that an unbroken chain of<br />
backup log files exists. There is a common misconception that a full database backup breaks this chain<br />
of log file backups, but this is not true.<br />
Determining Log File Size<br />
It is very difficult to calculate the size requirements for log files. As with planning other aspects of <strong>SQL</strong><br />
Server, monitoring during realistic testing is the best indicator.<br />
There is another common misconception that the log file of a database in simple recovery model will not<br />
grow. This is also not the case. In simple recovery model, the transaction log needs to be large enough to<br />
hold all details from the oldest active transaction. Large transactions or long running transactions can<br />
cause the log file to need additional space.<br />
One of the most common questions asked by users on <strong>SQL</strong> Server forums is about how to truncate a log<br />
file. As an example, a user might mention that they have a 20GB database but a 300GB log file. This<br />
situation is typically caused by creating databases in full recovery model and failing to ever manage the<br />
database logs. Because of this lack of management, the log files will simply continue to grow in size. Often<br />
the user will have also attempted (unsuccessfully) to truncate the transaction log. Log backups followed<br />
by a log shrink would have corrected the situation.
Inability to Truncate a Log<br />
Other <strong>SQL</strong> Server features can also prevent you from truncating log files. For example, database mirroring,<br />
transactional replication, and change data capture can all affect the ability for the database engine to<br />
truncate log files.<br />
Note <strong>Database</strong> mirroring, transactional replication, and change data capture are out of<br />
scope for this course.<br />
If you are wondering why a transaction log cannot be truncated, executing the following query can assist:<br />
SELECT name, log_reuse_wait_desc FROM sys.databases;<br />
The values that can be returned in the log_reuse_wait column, with a matching description in the<br />
log_reuse_wait_desc column, are:<br />
0 = Nothing<br />
1 = Checkpoint<br />
2 = Log backup<br />
3 = Active backup or restore<br />
4 = Active transaction<br />
5 = <strong>Database</strong> mirroring<br />
6 = Replication<br />
7 = <strong>Database</strong> snapshot creation<br />
8 = Log Scan<br />
9 = Other (transient)<br />
After resolving the reason that is shown, perform a log backup (if you are using the full or bulk-logged<br />
recovery model) to truncate the log file, and then use DBCC SHRINKFILE to reduce the physical size of the<br />
log file.<br />
Note If the log file does not reduce in size when using DBCC SHRINKFILE as part of the<br />
steps above, the active part of the log file must have been at the end of the log file at that<br />
point in time.<br />
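The sequence described above might look like the following; the database name, logical file name, path, and target size are examples only:<br />

```sql
-- 1. Back up the log so that inactive VLFs can be truncated.
BACKUP LOG SalesDB TO DISK = 'R:\Backups\SalesDB_log.bak';

-- 2. Shrink the physical log file. The second argument is the
--    target size in MB; the logical file name is a placeholder.
USE SalesDB;
DBCC SHRINKFILE (SalesDB_log, 1024);
```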
Question: What would be the most common reason for excessive log file growth?
Working with Checkpoint Options<br />
Key Points<br />
<strong>SQL</strong> Server has four types of checkpoint operation:<br />
• Automatic is issued automatically in the background to meet the upper time limit suggested by the<br />
recovery interval server configuration option. Automatic checkpoints run to completion. Automatic<br />
checkpoints are throttled based on the number of outstanding writes and whether the <strong>Database</strong><br />
Engine detects an increase in write latency above 20 milliseconds.<br />
• Indirect is issued in the background to meet a user-specified target recovery time for a given<br />
database. The default target recovery time is 0, which causes automatic checkpoint settings to be<br />
used on the database. If you have used ALTER DATABASE to set TARGET_RECOVERY_TIME to a value<br />
greater than zero, this value is used rather than the recovery interval specified for the server instance.<br />
• Manual is issued when you execute a Transact-<strong>SQL</strong> CHECKPOINT command. The manual checkpoint<br />
occurs in the current database for your connection. By default, manual checkpoints run to<br />
completion. The optional checkpoint duration parameter specifies a requested amount of time, in<br />
seconds, for the checkpoint to complete.<br />
• Internal is issued by various server operations such as backup and database snapshot creation to<br />
guarantee that disk images match the current state of the log.<br />
You can control the target duration of a checkpoint operation by executing the CHECKPOINT statement<br />
as shown:<br />
CHECKPOINT 5;
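Indirect checkpoints are enabled per database by setting a nonzero target recovery time; for example (the database name is hypothetical):<br />

```sql
-- Request that crash recovery for this database complete
-- within roughly 60 seconds, enabling indirect checkpoints.
ALTER DATABASE SalesDB SET TARGET_RECOVERY_TIME = 60 SECONDS;
```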
Demonstration 2A: Logs and Full Recovery<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_05_PRJ\10775A_05_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 21 – Demonstration 2A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lesson 3<br />
Planning a <strong>SQL</strong> Server Backup Strategy<br />
Now that you have an understanding of <strong>SQL</strong> Server transaction logs and database recovery models, it is<br />
time to consider the types of backups that are available with <strong>SQL</strong> Server.<br />
In addition to learning in general about all the available types of backup, it is important to learn about the<br />
three most common types of backups in greater detail. These types are full database backups, differential<br />
backups, and transaction log backups.<br />
To effectively plan a backup strategy, you need to align your chosen combination of backup types to the<br />
business recovery requirements. Most organizations will need to use a combination of backup types rather<br />
than relying solely on a single type of backup.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Detail the available Microsoft <strong>SQL</strong> Server backup types.<br />
• Describe full database backups.<br />
• Describe differential backups.<br />
• Describe transaction log backups.
Overview of Microsoft <strong>SQL</strong> Server Backup Types<br />
Key Points<br />
Before exploring any of the backup types in detail, it is important to be familiar with all the backup types<br />
that are available in <strong>SQL</strong> Server. Not all backup types are available for all database recovery models. For<br />
example, transaction log backups cannot be made for a database that is in simple recovery model.<br />
Full Backups<br />
A full backup of a database includes the data files and the active part of the transaction log. The first step<br />
in the backup is that a CHECKPOINT operation is performed. The active part of the transaction log<br />
includes all details from the oldest active transaction forward. A full backup represents the database at the<br />
time that the data reading phase of the backup was completed and serves as your baseline in the event of<br />
a system failure. Full backups do not truncate the transaction log.<br />
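As a simple sketch (the database name and path are hypothetical), a full backup is taken with the BACKUP DATABASE statement:<br />

```sql
-- WITH INIT overwrites any existing backup sets in the target file;
-- omit it to append to the existing media set instead.
BACKUP DATABASE SalesDB
TO DISK = 'R:\Backups\SalesDB_full.bak'
WITH INIT;
```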
Differential Backups<br />
A differential backup is used to save the data that has been changed since the last full backup. Differential<br />
backups are based on the data file contents rather than on log file contents and contain extents that have<br />
been modified since the last full database backup. Differential backups are generally faster to restore than<br />
transaction log backups but they have fewer options available. For example, point in time recovery is not<br />
available unless differential backups are also combined with log file backups.<br />
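A differential backup is requested with the WITH DIFFERENTIAL option; the names below are illustrative:<br />

```sql
-- Backs up only the extents modified since the last full backup
-- of this hypothetical database.
BACKUP DATABASE SalesDB
TO DISK = 'R:\Backups\SalesDB_diff.bak'
WITH DIFFERENTIAL;
```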
Partial Backups<br />
A partial backup is similar to a full backup, but a partial backup does not contain all of the filegroups.<br />
Partial backups contain all the data in the primary filegroup, every read/write filegroup, and any specified<br />
read-only files. A partial backup of a read-only database contains only the primary filegroup.<br />
Note Working with partial backups is an advanced topic that is out of scope for this<br />
course.
Transaction Log Backups<br />
Transaction log backups record any database changes by backing up the log records from the transaction<br />
log. Point in time recovery is possible with transaction log backups and they are generally much smaller<br />
than full database backups. The smaller size of transaction log backups means they can be run much more<br />
frequently. After the transaction log is backed up, the log records that have been backed up and that are<br />
not in the currently active portion of the transaction log are truncated. Transaction log backups are not<br />
available in the simple recovery model.<br />
Tail-log Backups<br />
A transaction log backup that is taken just before a restore operation is called a tail-log backup. Typically,<br />
tail-log backups are taken after a disk failure that affects data files only. From <strong>SQL</strong> Server 2005 onwards,<br />
<strong>SQL</strong> Server has required that you take a tail-log backup before it will allow you to restore a database, to<br />
protect against inadvertent data loss.<br />
Tail-log backups are often also possible even when the data files from the database are no longer<br />
accessible.<br />
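If the data files are lost but the log file itself is intact, a tail-log backup can often still be taken by using the NO_TRUNCATE option (a sketch with hypothetical names):<br />

```sql
-- NO_TRUNCATE allows the log backup to proceed even when the
-- database's data files are damaged or missing.
BACKUP LOG SalesDB
TO DISK = 'R:\Backups\SalesDB_tail.bak'
WITH NO_TRUNCATE;
```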
File or Filegroup Backups<br />
If performing a full database backup on very large databases is not practical, you can perform database<br />
file or filegroup backups.<br />
Note Working with file and filegroup backups is an advanced topic that is out of scope for<br />
this course.<br />
Copy-only Backups<br />
SQL Server 2005 and later versions support the creation of copy-only backups. Unlike other backups, a<br />
copy-only backup does not impact the overall backup and restore procedures for the database. Copy-only<br />
backups can be used to create a copy of the backup to take offsite to a safe location. Copy-only backups<br />
are also useful when performing some online restore operations. All recovery models support copy-only<br />
data backups.<br />
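A copy-only backup is requested with the COPY_ONLY option (names and paths here are examples only):<br />

```sql
-- COPY_ONLY leaves the differential base and the log backup chain
-- untouched, so the regular backup schedule is unaffected.
BACKUP DATABASE SalesDB
TO DISK = 'R:\Backups\SalesDB_copy.bak'
WITH COPY_ONLY;
```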
Question: What type of database would benefit from partial backups?
Full <strong>Database</strong> Backup Strategies<br />
Key Points<br />
A full database backup strategy involves regular full backups to preserve the database. If a failure occurs,<br />
the database can be restored to the state of the last full backup.<br />
Full <strong>Database</strong> Backups<br />
A full database backup backs up the whole database and also backs up the portion of the transaction log<br />
that covers changes that occurred while reading the data pages.<br />
Full database backups represent a copy of the database as of the time the data-reading phase of the<br />
backup finished, not as of the time that the backup started. Backups can be taken while the system is<br />
being used. At the end of the backup, <strong>SQL</strong> Server writes transaction log entries that cover the period<br />
during which the backup was occurring into the backup.<br />
Common Scenarios<br />
For a small database that can be backed up quickly, the best practice is to use full database backups.<br />
However, as a database becomes larger, full backups take more time to complete and require more<br />
storage space. Therefore, for a large database, you might want to supplement full database backups with<br />
other forms of backup.<br />
Under the simple recovery model, after each backup, the database is exposed to potential work loss if a<br />
disaster was to occur. The work-loss exposure increases with each update until the next full backup, when<br />
the work-loss exposure returns to zero and a new cycle of work-loss exposure begins.
Scenarios that might be appropriate for using a full database backup strategy include:<br />
• Test systems.<br />
• Data warehouses where the data could be recovered from a source system and where the data in the<br />
data warehouse does not change regularly.<br />
• Systems where the data can be recovered from other sources.<br />
Example on the Slide<br />
In the example shown on the slide, a full backup is performed on Sunday, Monday, and Tuesday. This<br />
means that during the day on Monday, up to a full day of data is exposed to risk until the backup is<br />
performed. The same amount of exposure happens on Tuesday. After the Tuesday backup is carried out,<br />
the risk increases every day until the next Sunday backup is carried out.<br />
Question: Might a small database be a good candidate for a full database backup strategy?
Transaction Log Backup Strategies<br />
Key Points<br />
A backup strategy that involves transaction log backups must be combined with a full database strategy<br />
or a strategy that combines the use of full and differential database backups.<br />
Transaction Log Backups<br />
Transaction log backups save all data since the last log backup. Rather than reading database pages,<br />
transaction log backups are based on reading data from the transaction log. A backup strategy based on<br />
transaction log backups is appropriate for databases with frequent modifications.<br />
When it is necessary to recover a database, the latest full database backup needs to be restored, along<br />
with the most recent differential backup (if one has been performed). After the database has been<br />
restored, transaction logs that have been backed up since that time are also then restored, in order.<br />
Because the restore works on a transactional basis, it is possible to restore a database to a specific point in<br />
time, within the transactions stored in the log backup.<br />
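The restore sequence described above, ending with a point-in-time stop, can be sketched as follows; all names, paths, and the timestamp are hypothetical:<br />

```sql
-- Restore the most recent full backup without recovering the database.
RESTORE DATABASE SalesDB FROM DISK = 'R:\Backups\SalesDB_full.bak'
    WITH NORECOVERY;

-- Restore intermediate log backups in order, still without recovery.
RESTORE LOG SalesDB FROM DISK = 'R:\Backups\SalesDB_log1.bak'
    WITH NORECOVERY;

-- Stop at a specific point in time within the final log backup,
-- then bring the database online.
RESTORE LOG SalesDB FROM DISK = 'R:\Backups\SalesDB_log2.bak'
    WITH STOPAT = '2012-06-18 10:30:00', RECOVERY;
```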
In addition to providing capabilities that let you restore the transactions that have been backed up, a<br />
transaction log backup truncates the transaction log. This enables VLFs in the transaction log to be reused.<br />
If you do not back up the log frequently enough, the log files can fill up.<br />
Example on the Slide<br />
In the example shown on the slide, nightly full database backups are supplemented by periodic<br />
transaction log backups during the day. If the system fails, recovery could be made to the time of the last<br />
transaction log backup. If, however, only the database data files failed, and a tail-log backup could be<br />
performed, no committed data loss would occur.
Combinations of Backup Types<br />
Transaction log backups are typically much smaller than other backups, especially when they are<br />
performed regularly. Potential data loss can be minimized by a backup strategy that is based on<br />
transaction log backups in combination with other backup types.<br />
As log backups typically take longer to restore than other types of backup, it is often advisable to<br />
combine transaction log backups with periodic differential backups. During a recovery, only the<br />
transaction log backups that were taken after the last differential backup need to be restored.<br />
Question: On the slide, which transactions would be contained in the first log backup on<br />
Monday morning?
Differential Backup Strategies<br />
Key Points<br />
Differential backups are a good way to reduce potential work loss and to reduce maintenance overhead.<br />
When the proportion of a database that is changed between backup intervals is much smaller than the<br />
entire size of the database, a differential backup strategy might be useful. However, if you have a very<br />
small database then differential backups may not save much time.<br />
Differential Backups<br />
From the time that a full backup occurs, <strong>SQL</strong> Server maintains a map of extents that have been modified.<br />
In a differential backup, <strong>SQL</strong> Server backs up those extents that have been changed. It is important to<br />
realize though, that after the differential backup is performed, <strong>SQL</strong> Server does not clear that map of<br />
modified extents. The map is only cleared when full backups occur. This means that a second differential<br />
backup performed on a database will include all changes since the last full backup, not just those changes<br />
since the last differential backup.<br />
Useful Scenarios for Differential Backups<br />
Because they only save the data that has been changed since the last full database backup, differential<br />
backups are typically much faster and occupy less disk space than transaction log backups for the same<br />
period of time.<br />
Differential database backups are especially useful when a subset of a database is modified more<br />
frequently than the remainder of the database. In these situations, differential database backups enable<br />
you to back up frequently without the overhead of full database backups.
Example on the Slide<br />
In the example on the slide, a full database backup is taken at midnight on Sunday night (early Monday<br />
morning). Differential backups are then taken at midnight each other night of the week. The differential<br />
backup taken on Monday night would include all data changed during Monday. The differential backup<br />
taken on Tuesday night would include all data changed on Monday and Tuesday. The differential backup<br />
taken on Friday night would include all data that changed on Monday, Tuesday, Wednesday, Thursday,<br />
and Friday. This means that differential backups can substantially grow in size between each full backup<br />
interval.<br />
Combinations of Backups<br />
Differential backups must be combined with other forms of backup. As a differential backup saves all data<br />
changed since the last full backup was taken, a differential backup cannot be taken unless a full backup<br />
has been taken.<br />
Another important aspect to consider is that when a recovery is needed, multiple backups need to be<br />
restored to bring the system back online, rather than a single backup. This increases the risk exposure for<br />
an organization and must be considered when planning a backup strategy.<br />
Differential backups can also be used in combination with both full backups and transaction log backups.<br />
Question: How could you estimate the size of a differential backup?
Discussion: Meeting Business Recovery Requirements<br />
Question: Imagine that you need to create a backup strategy for an online store front.<br />
Before you could design the strategy, what questions would you need to ask?
Lab 5: Understanding <strong>SQL</strong> Server Recovery Models<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_05_PRJ\10775A_05_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You need to implement a database recovery strategy. The business unit from Proseware, Inc. has<br />
provided you with the availability needs for the databases on the new Proseware <strong>SQL</strong> Server instance. You<br />
need to plan how you will meet the requirements and then implement your strategy.
If you have time, there is another issue that your manager would like you to work on. There is another<br />
instance of <strong>SQL</strong> Server installed for supporting Customer Service operations. There is concern that existing<br />
databases on the CustomerService server instance are configured inappropriately and have invalid backup<br />
strategies, based on their RPO and RTO requirements. In this exercise, you need to review the database<br />
recovery models and backup strategies for the databases on the CustomerService instance and provide<br />
recommended changes.<br />
Supporting Documentation<br />
Business <strong>Database</strong> Continuity Requirements for <strong>Database</strong>s on the Proseware Server<br />
Instance (for Exercises 1 and 2)<br />
Recovery Time Objectives<br />
1. The MarketDev database must never be unavailable for longer than eight hours.<br />
2. The Research database must never be unavailable for longer than two hours.<br />
Recovery Point Objectives<br />
1. When the MarketDev database is recovered from a failure, no more than 30 minutes of transactions<br />
may be lost.<br />
2. When the Research database is recovered from a failure, all transactions that were completed up to<br />
the end of the previous weekday must be recovered.<br />
Projected Characteristics<br />
Characteristic Estimated Value<br />
MarketDev database size: 20GB<br />
Research database size: 200MB<br />
Total backup throughput: 100MB per minute<br />
Total restore throughput: 80MB per minute<br />
Average rate of change to the MarketDev database during office hours: 1GB per hour<br />
Average rate of change to the Research database during office hours: 10MB per hour<br />
Percentage of the MarketDev database changed each day (average): 1.2%<br />
Percentage of the Research database changed each day (average): 80%<br />
Office hours (no full database backups permitted during these hours): 8am to 6pm
Business Database Continuity Requirements for Databases on the CustomerService<br />
Server Instance (for Exercise 3)<br />
Recovery Time Objectives<br />
1. The CreditControl database must never be unavailable for longer than two hours.<br />
2. The PotentialIssue database must never be unavailable for longer than one hour.<br />
Recovery Point Objectives<br />
1. When the CreditControl database is recovered from a failure, no more than five minutes of<br />
transactions may be lost.<br />
2. When the PotentialIssue database is recovered from a failure, no more than 30 minutes of<br />
transactions may be lost.<br />
Projected Characteristics<br />
Characteristic Estimated Value<br />
CreditControl database size: 20GB<br />
PotentialIssue database size (at the start of each week after archiving activity is complete): 200MB<br />
Total backup throughput: 100MB per minute<br />
Total restore throughput: 80MB per minute<br />
Average rate of change to the CreditControl database during office hours: 500MB per hour<br />
Average rate of change to the PotentialIssue database (constant all week long, 24 hours per day): 10MB per hour<br />
Percentage of the CreditControl database changed each day (average): 60%<br />
Percentage of the PotentialIssue database changed each day (average): 50%<br />
Office hours (no full database backups permitted during these hours): 8am to 7pm
Existing Backup Strategy For CreditControl <strong>Database</strong><br />
Recovery Model: Full<br />
Type of Backup Schedule<br />
• Full • Saturday at 6AM,<br />
Wednesday at 6AM<br />
• Differential • Sunday at 10PM,<br />
Monday at 10PM,<br />
Tuesday at 10PM,<br />
Thursday at 10PM,<br />
Friday at 10PM<br />
• Log • Every 60 minutes on the hour<br />
Existing Backup Strategy For PotentialIssue <strong>Database</strong><br />
Recovery Model: Full<br />
Type of Backup Schedule<br />
• Full • Sunday at 10PM<br />
• Log • Every 15 minutes starting at 10 minutes past the hour<br />
Exercise 1: Plan a Backup Strategy<br />
Scenario<br />
You need to plan a backup strategy for the two databases on the new Proseware instance. You have been<br />
provided with RPO (recovery point objectives) and RTO (recovery time objectives) for both databases, as<br />
part of a business continuity statement.<br />
The main tasks for this exercise are as follows:<br />
1. Review the business requirements.<br />
2. Determine an appropriate backup strategy for each database.<br />
Task 1: Review the business requirements<br />
• Review the supplied business requirements in the supporting documentation for the exercise.<br />
Task 2: Determine an appropriate backup strategy for each database<br />
• Determine an appropriate backup strategy for each database.<br />
• For the MarketDev database:<br />
• Which recovery model should be used?
5-36 Understanding SQL Server 2012 Recovery Models
• Complete the following table for the backup schedule:<br />
Type of Backup Schedule<br />
• •<br />
• •<br />
• •<br />
• For the Research database:<br />
• Which recovery model should be used?<br />
• Complete the following table for the backup schedule:<br />
Type of Backup Schedule<br />
• •<br />
• •<br />
• •<br />
Results: After this exercise, you should have created a plan to back up two databases.
Exercise 2: Configure Recovery Models<br />
Scenario<br />
You have reviewed the database recovery models and identified that the current database recovery<br />
models do not meet the availability needs of the business. In this exercise, you need to set the recovery<br />
models for the databases that do not meet the requirements.<br />
The main task for this exercise is as follows:<br />
1. Review and adjust the current database recovery models.<br />
Task 1: Review and adjust the current database recovery models<br />
• Review the recovery models that you decided were required in Exercise 1, check whether or not the<br />
existing recovery models for the MarketDev and Research databases match your recommendations. If<br />
not, change the recovery models as per your recommendations.<br />
Results: After this exercise, you should have reviewed and modified the database<br />
recovery models where required.
Challenge Exercise 3: Review Recovery Models and Strategy (Only if time<br />
permits)<br />
Scenario<br />
There is another instance of <strong>SQL</strong> Server installed for supporting Customer Service operations. There is<br />
concern that existing databases on the CustomerService server instance are configured inappropriately<br />
and have invalid backup strategies, based on their RPO and RTO requirements. In this exercise, you need<br />
to review the database recovery models and backup strategies for the databases on the CustomerService<br />
instance and provide recommended changes.<br />
The main tasks for this exercise are as follows:<br />
1. Review the RPO and RTO requirements for the databases.<br />
2. Review the existing recovery models and backup strategies.<br />
3. Indicate whether or not the strategy would be successful.<br />
Task 1: Review the RPO and RTO requirements for the databases<br />
• The supporting documentation includes details of the business continuity requirements for the<br />
databases. You need to review this documentation.<br />
Task 2: Review the existing recovery models and backup strategies<br />
• The supporting documentation also includes details of the backup strategy for the databases. You<br />
need to review this documentation.<br />
Task 3: Indicate whether or not the strategy would be successful
• You need to assess whether or not the current backup strategy and recovery model configuration is<br />
capable of supporting the business continuity requirements. If not, explain why it would not work.<br />
Results: After this exercise, you should have assessed the strategy.
Module Review and Takeaways<br />
Review Questions<br />
1. What are the unique features of transaction log restores?<br />
2. When might a full database backup strategy be adequate?<br />
3. What might prevent transaction log truncation?<br />
Best Practices<br />
1. Plan your backup strategy carefully.<br />
2. Plan the backup strategy in conjunction with the business needs.<br />
3. Choose the appropriate database recovery model.<br />
4. Plan your transaction log size based on the transaction log backup frequency.<br />
5. Consider differential backups to speed up recovery.
Module 6<br />
Backup of <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s<br />
Contents:<br />
Lesson 1: Backing up <strong>Database</strong>s and Transaction Logs 6-3<br />
Lesson 2: Managing <strong>Database</strong> Backups 6-14<br />
Lesson 3: Working with Backup Options 6-20<br />
Lab 6: Backup of <strong>SQL</strong> Server <strong>Database</strong>s 6-26<br />
6-2 Backup of SQL Server 2012 Databases
Module Overview<br />
Ensuring reliable backups of corporate data is one of the most important roles for database<br />
administrators. You have seen that <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> provides many types of backups. In this<br />
module, you will explore most of these backup types in more depth, and learn to implement the backups.<br />
Apart from learning to perform full database backups, differential database backups, and transaction log<br />
backups, you will also see how to apply options that affect the way that the backups work. Automating<br />
and scheduling backups will be covered in later modules.<br />
Objectives<br />
After completing this module, you will be able to:
• Back up databases and transaction logs.<br />
• Manage database backups.<br />
• Work with more advanced backup options.
Lesson 1<br />
Backing up <strong>Database</strong>s and Transaction Logs<br />
In the previous module, you saw how to plan a backup strategy for a <strong>SQL</strong> Server system. This lesson shows<br />
how to perform the most common forms of <strong>SQL</strong> Server backup: full database backups, differential<br />
database backups, and transaction log backups.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Perform a full database backup.<br />
• Work with backup sets.<br />
• Use backup compression.<br />
• Perform differential backups.<br />
• Perform transaction log backups.
Performing a Full <strong>Database</strong> Backup<br />
Key Points<br />
Full database backups can be made using the BACKUP DATABASE command in T-<strong>SQL</strong> or using the GUI in<br />
SSMS. A full database backup saves all the data pages in the database, and also saves the active portion of<br />
the transaction log.<br />
Example on the Slide<br />
In the example on the slide, a full database backup of the AdventureWorks database is made to the disk file L:\SQLBackups\AW.bak. The INIT option included in the command instructs SQL Server to create the file if it does not already exist, and to overwrite it if it does.
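The slide command itself is not reproduced on this page, so the following is a reconstruction based on the description above (the database name, path, and INIT option all come from the text):

```sql
-- Reconstruction of the slide example: a full database backup of
-- AdventureWorks, overwriting the backup file if it already exists.
BACKUP DATABASE AdventureWorks
TO DISK = 'L:\SQLBackups\AW.bak'
WITH INIT;
```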
The default initialization option, NOINIT, tells <strong>SQL</strong> Server to create the file if it does not already exist, and<br />
to append to the file if it already contains a <strong>SQL</strong> Server backup. The BACKUP DATABASE command<br />
includes many other options. The most important options will be discussed throughout this module.<br />
Backup Timing<br />
An important consideration when making a backup is to understand the timing associated with the<br />
contents of the backup. The database may be in use while the backup is occurring.<br />
For example, if a backup starts at 10PM and finishes at 1AM, does the backup contain a copy of the<br />
database as it was at 10PM, a copy of the database as it was at 1AM, or a copy of the database from a<br />
time between the start and finish?<br />
In early versions of <strong>SQL</strong> Server, the backup process wrote data pages to the backup device in sequence.<br />
However, if a user needed to modify a data page, <strong>SQL</strong> Server pushed that data page to the beginning of<br />
the backup page queue, and made the user wait for the page to be written to the backup device. In those<br />
versions of <strong>SQL</strong> Server, the backup made was a copy of the database at the time the backup was started.<br />
To reduce the impact on users, later versions of SQL Server write all data pages to the backup device in sequence, but use the transaction log to track any pages that are modified while the backup is occurring.
<strong>SQL</strong> Server then writes the relevant portion of the transaction log to the end of the backup. This process<br />
makes the backups slightly larger than in earlier versions, particularly if heavy update activities are<br />
happening at the same time as the backup. This altered process also means that the backup contains a<br />
copy of the database as at a time just prior to the completion of the backup, not as at the time the<br />
backup was started.<br />
Question: What happens when you do not specify either INIT or NOINIT and a backup<br />
already exists on the backup device?
Working with Backup Sets<br />
Key Points<br />
Users are often confused by the fact that more than a single <strong>SQL</strong> Server backup can be contained within a<br />
single operating system file. The most common error related to this is to restore the first backup from a<br />
file, while assuming that it is the latest backup in the file.<br />
A single backup is called a backup set, and is written to a media set, which itself can contain up to 64<br />
backup devices. A backup device can be a disk or tape device. Tape devices must be locally attached and<br />
backups written to tape can be combined with Windows backups.<br />
Note Tape drives that are mapped across a network cannot be used directly with SQL Server backup. Further, the ability for SQL Server to back up directly to tape is deprecated and will be removed in a future version of SQL Server.
Disk-based devices are the most commonly used. If a media set spans several backup devices, the backups<br />
will be striped across the devices.<br />
Note No parity device is used while striping. If two backup devices are used together,<br />
each receives half the backup. Both must also be present when attempting to restore the<br />
backup.<br />
Every backup operation to a media set must write to the same number and same types of backup devices.<br />
The Enterprise Edition of <strong>SQL</strong> Server also supports mirroring of media sets to improve the probability of<br />
being able to restore the backup. The same backup image is written to multiple locations.
Media sets and the backup devices are created the first time a backup is attempted on them. Media sets<br />
can also be named at the time of creation.<br />
Backups created on a single non-mirrored device or a set of mirrored devices in a media set are referred<br />
to as a media family. The number of backup devices used for the media set determines the number of<br />
media families in a media set. For example, if a media set uses two non-mirrored backup devices, the<br />
media set contains two media families.<br />
FORMAT Option<br />
<strong>SQL</strong> Server has been designed to minimize the chance of inadvertent data loss.<br />
As an example, consider a full database backup that has been written to two files, using the command:<br />
BACKUP DATABASE AdventureWorks<br />
TO DISK = 'D:\<strong>SQL</strong>Backups\AW_1.bak',<br />
DISK = 'L:\<strong>SQL</strong>Backups\AW_2.bak'<br />
WITH INIT;<br />
The two disk files that are listed make up a media set. The data from the backup is striped across the two files. Another backup could be made at a later time, to the same media set, with a command such as the following:
BACKUP DATABASE AdventureWorks<br />
TO DISK = 'D:\<strong>SQL</strong>Backups\AW_1.bak',<br />
DISK = 'L:\<strong>SQL</strong>Backups\AW_2.bak'<br />
WITH NOINIT;<br />
The data from the second backup would again be striped across the two files and the header of the media<br />
set updated to indicate that it now contains the two backups.<br />
However, if a user then tries to create another backup with a command such as:<br />
BACKUP DATABASE AdventureWorksDW<br />
TO DISK = 'D:\<strong>SQL</strong>Backups\AW_1.bak';<br />
<strong>SQL</strong> Server would return an error. Before the member of the media set could be overwritten, the FORMAT<br />
option would need to be added to the WITH clause in the backup command:<br />
BACKUP DATABASE AdventureWorksDW<br />
TO DISK = 'D:\<strong>SQL</strong>Backups\AW_1.bak'<br />
WITH FORMAT, INIT;<br />
Use the FORMAT option to overwrite the contents of a backup file and split up the media set, but use it very carefully: formatting one backup file of a media set renders the entire media set unusable.
Question: What advantage could striping backups to more than one backup device on a<br />
disk provide?
Using Backup Compression<br />
Key Points<br />
A number of compression-related technologies were introduced into <strong>SQL</strong> Server in <strong>SQL</strong> Server 2008.<br />
Backup compression trades off some CPU performance against a potentially large reduction in the size of<br />
a backup and increased backup and restore performance. Backup compression can be configured as a<br />
server option or as part of a T-<strong>SQL</strong> BACKUP command as shown:<br />
BACKUP DATABASE AdventureWorksDW<br />
TO DISK = 'D:\<strong>SQL</strong>Backups\AW_1.bak'<br />
WITH FORMAT, COMPRESSION;<br />
Performance Impact of Compressed Backups<br />
Because a compressed backup is smaller than an uncompressed backup of the same data, compressing a backup typically reduces the amount of device I/O required and decreases the duration of backups significantly.
Any form of compression tends to increase CPU usage, and the additional CPU resources that are<br />
consumed by the compression process could adversely impact concurrent operations on systems that are<br />
CPU bound. Most current <strong>SQL</strong> Server systems are I/O bound, rather than CPU bound. The benefit received<br />
from the reduction in I/O usually outweighs the increase in CPU requirements by a significant factor.<br />
Note In systems where CPU load is a concern, it is also possible to create a low-priority<br />
session for creating compressed backups by limiting CPU usage with the <strong>SQL</strong> Server<br />
Resource Governor. The use of Resource Governor is an advanced topic that is out of scope<br />
for this course.
Recovery Time<br />
While a reduction in the time taken to perform backups is beneficial, backups are usually performed while<br />
the system is being used. However, compression benefits not only the backup process but also the restore<br />
process, and can significantly improve the ability to meet RTO requirements.<br />
Compression Percentages<br />
The degree of compression that is achieved depends entirely upon how compressible the data in the database is. Some data compresses well; other data does not. A reduction in I/O and backup size of 30 to 50 percent is not uncommon in typical business systems.
Note While backup compression can be used on a database that has been encrypted<br />
using Transparent <strong>Database</strong> Encryption (TDE), the compression rate will be minimal. TDE is<br />
an advanced topic that is out of scope for this course.<br />
Restrictions on Backup Compression<br />
The following restrictions apply to compressed backups:<br />
• Compressed and uncompressed backups cannot co-exist in a media set.<br />
• Versions of SQL Server earlier than SQL Server 2008 cannot read compressed backups, but all editions of SQL Server 2008 and later can restore them, even though lower editions cannot create compressed backups.
• Windows-based backups cannot share a media set with compressed <strong>SQL</strong> Server backups.<br />
• The default setting for backup compression can be set by the server configuration option ‘backup<br />
compression default’.<br />
• <strong>SQL</strong> Server 2008 R2 introduced the creation of compressed backups to the Standard Edition of <strong>SQL</strong><br />
Server.<br />
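As a sketch of the server configuration option mentioned above, the 'backup compression default' setting can be changed with sp_configure; the value 1 shown here makes compression the default for backups that do not specify otherwise:

```sql
-- Sketch: make backup compression the server-wide default.
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```

Individual BACKUP commands can still override the default with the COMPRESSION or NO_COMPRESSION options.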
Question: Why would both backup and restore time generally decrease when backup<br />
compression is used?
Performing Differential Backups<br />
Key Points<br />
While full database backups are ideal, there often is not enough time to perform them.
For situations where a relatively small percentage of the database is modified, compared to the overall<br />
database size, differential backups are a good option to consider.<br />
Differential Backups<br />
You can perform a differential backup using <strong>SQL</strong> Server Management Studio, or by adding the<br />
DIFFERENTIAL option to the BACKUP DATABASE T-<strong>SQL</strong> command.<br />
<strong>SQL</strong> Server maintains a map of modified extents called the differential bitmap page. One page is<br />
maintained for every 4GB section of every data file. Each time a full database backup is created, <strong>SQL</strong><br />
Server clears the map. As the data in the data files is modified, <strong>SQL</strong> Server updates this map. A differential<br />
backup saves all extents that have been modified since the last full database backup, not only those<br />
modified since the last differential backup.<br />
Note You cannot create a differential database backup unless a full database backup has<br />
been taken first.<br />
A differential backup also saves the active portion of the transaction log, in exactly the same way that a<br />
full database backup does.<br />
The syntax for the differential backup is identical to the syntax for full database backups, apart from the<br />
addition of the DIFFERENTIAL option. All other options that are available for full database backups are<br />
also available for differential backups.<br />
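A minimal sketch of the syntax described above, with the database name and path following the earlier examples (only the DIFFERENTIAL option is new):

```sql
-- Differential backup: saves all extents modified since the last
-- full database backup, plus the active portion of the log.
BACKUP DATABASE AdventureWorks
TO DISK = 'L:\SQLBackups\AW_Diff.bak'
WITH DIFFERENTIAL, INIT;
```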
Question: Does a differential backup truncate the transaction log?
Performing Transaction Log Backups<br />
Key Points<br />
You can perform a Transaction Log Backup using <strong>SQL</strong> Server Management Studio or by using the BACKUP<br />
LOG T-<strong>SQL</strong> statement. Before a transaction log backup can be performed, the database must be in either<br />
full or bulk-logged recovery model. In addition, a transaction log backup can only occur when a full<br />
database backup has been taken at some time prior.<br />
A transaction log backup does not save any data pages from the database, except when the database is<br />
set to bulk logged recovery model. A transaction log backup finds the MaxLSN of the last successful<br />
transaction log backup, and saves all log entries beyond that point to the current MaxLSN. The transaction<br />
log is then truncated as far as is possible. Log records from the oldest active transaction onward must be retained, in case the database needs to be recovered after a failure.
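A minimal sketch of the BACKUP LOG statement described above (the file name and path are illustrative):

```sql
-- Transaction log backup: saves log records written since the last
-- log backup, then truncates the inactive portion of the log.
BACKUP LOG AdventureWorks
TO DISK = 'L:\SQLBackups\AW_Log.bak'
WITH NOINIT;
```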
Log Record Chains<br />
Before a database can be restored using transaction log backups, an unbroken chain of log records must<br />
be available since the last full database backup to the desired point of restoration. If the chain is broken, it<br />
is only possible to restore up to the point where the backup chain was broken.<br />
For example, imagine a scenario where a database is created, and at a later time, a full backup of the<br />
database is performed. At that point, the database could be recovered. If the recovery model of the<br />
database was then changed to simple, and subsequently changed back to full, a break in the log file chain<br />
would have occurred. Even though a previous full database backup had occurred, the database could only<br />
be recovered up to the point where the last transaction log backup was made (if any) prior to the change<br />
to simple recovery model.<br />
After switching from simple to full recovery model, a full database backup needs to be performed to<br />
create a starting point for transaction log backups.
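The recovery model scenario above can be sketched as a sequence of commands (database name and paths are illustrative):

```sql
-- Switching to simple recovery breaks the log backup chain.
ALTER DATABASE AdventureWorks SET RECOVERY SIMPLE;

-- ... while in simple recovery, log backups are not possible ...

ALTER DATABASE AdventureWorks SET RECOVERY FULL;

-- A new full backup is required to restart the log chain.
BACKUP DATABASE AdventureWorks
TO DISK = 'L:\SQLBackups\AW.bak' WITH INIT;

-- Log backups are now possible again.
BACKUP LOG AdventureWorks
TO DISK = 'L:\SQLBackups\AW_Log.bak' WITH NOINIT;
```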
Log Truncation<br />
While the default action is to truncate the transaction log as a result of a transaction log backup, this<br />
truncation is not performed if the COPY_ONLY option is used. The COPY_ONLY option is discussed in<br />
Lesson 3 of this module.
Demonstration 1A: Backing up <strong>Database</strong>s<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_06_PRJ\10775A_06_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lesson 2<br />
Managing <strong>Database</strong> Backups<br />
In Module 5, several example scenarios were presented where users performed backups but at the time<br />
the backups needed to be restored, the restore was not possible. The most basic check that should be<br />
performed on a backup is to verify that it is readable. The option to verify a backup and other options for<br />
ensuring backup integrity are discussed in this lesson.<br />
It is also important to know how to find information about backups that have been performed. <strong>SQL</strong> Server<br />
keeps a history of backup operations in the msdb database. In this lesson, you will see how to query the<br />
tables that hold this information, and also see how to retrieve the header information from backup<br />
devices.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe options for ensuring backup integrity.<br />
• View backup information.
Options for Ensuring Backup Integrity<br />
Key Points<br />
A great deal of effort can be expended in performing backups. This effort can be entirely wasted if the<br />
backups that are produced are not usable when the time comes to restore them. <strong>SQL</strong> Server includes<br />
several options to help avoid an inability to restore backups.<br />
Mirrored Media Sets<br />
A mirrored media set is a copy of the backup media set optionally created in parallel during the backup<br />
operation, on the Enterprise Edition of <strong>SQL</strong> Server.<br />
A mirrored media set consists of two to four device mirrors; each mirror contains the entire media set.<br />
Each mirror must be configured with the same number of backup devices and the backup devices must<br />
be of the same device type.<br />
Mirroring a media set increases availability based on the assumption that it is better to have multiple<br />
copies of a backup, rather than a single copy. However, it is important to realize that mirroring a media set exposes your system to a higher risk of hardware failure: a failure of any of the backup devices causes the entire backup operation to fail.
A mirrored backup set is created by using the MIRROR TO option as shown in the following command:<br />
BACKUP DATABASE AdventureWorksDW<br />
TO DISK = 'D:\<strong>SQL</strong>Backups\AW.bak'<br />
MIRROR TO DISK = 'L:\<strong>SQL</strong>Backups\AW_M.bak'<br />
WITH FORMAT, INIT;
WITH CHECKSUM Option<br />
<strong>SQL</strong> Server 2005 introduced the option to perform a checksum operation over an entire backup stream.<br />
This option consumes slightly more CPU resources than backups without the calculation of a checksum.<br />
The WITH CHECKSUM option validates the page-level information (checksum or torn page if either is<br />
present) as well as the one checksum for the backup stream. The checksum value is written to the end of<br />
the backup and can be checked during restore operations or during backup verification operations made<br />
with the RESTORE VERIFYONLY command.<br />
A backup checksum is enabled by using the CHECKSUM option as shown in the following command:<br />
BACKUP DATABASE AdventureWorksDW<br />
TO DISK = 'D:\<strong>SQL</strong>Backups\AW.bak'<br />
WITH CHECKSUM;<br />
Note The COMPRESSION option also enables the CHECKSUM option automatically.<br />
Backup Verification<br />
For backup verification, a RESTORE VERIFYONLY command exists that checks the backup for validity. It<br />
performs the following tests:<br />
• Backup set is complete.<br />
• All volumes are readable.<br />
• Page identifiers are correct (to the same level as if it were about to write the data).<br />
• Checksum is valid (if present on the media).<br />
• Sufficient space exists on destination devices.<br />
The checksum value can only be validated if the backup was performed with the WITH CHECKSUM<br />
option. Without the CHECKSUM option during backup, the verification options only check the metadata<br />
and not the actual backup data.<br />
Verification can also be performed through an option in the backup database task in SSMS, and as part of<br />
<strong>SQL</strong> Server Maintenance plans.<br />
Note Consider verifying backups on a different system from the one where the backup was performed, to eliminate the situation where a backup is only readable on the source hardware.
Note Make sure that you create your backups on different disks than the ones holding<br />
your database files. Avoid ever overwriting your most recent backup.<br />
Question: Can you guarantee that a database could be recovered, if a backup of the<br />
database can be verified?
Viewing Backup Information<br />
Key Points<br />
<strong>SQL</strong> Server tracks all backup activity in a set of tables in the msdb database:<br />
• backupfile<br />
• backupfilegroup<br />
• backupmediafamily<br />
• backupmediaset<br />
• backupset<br />
These tables can be queried to retrieve information about backups that have been performed. In<br />
Demonstration 2A, you will see how to perform these queries.<br />
SSMS also has options to retrieve details of backup operations on databases and logs. In Demonstration<br />
2A, you will also see an example report that is launched from within SSMS and that shows relevant backup<br />
information.<br />
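As an illustration of querying the msdb history tables listed above, the following joins backupset to backupmediafamily to list recent backups and the devices they were written to:

```sql
-- Most recent backups recorded in msdb, newest first.
SELECT bs.database_name,
       bs.type,                      -- D = full, I = differential, L = log
       bs.backup_start_date,
       bs.backup_finish_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
ORDER BY bs.backup_finish_date DESC;
```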
Deleting Backup History<br />
Backup history can be deleted using system stored procedures. Consider the following command:<br />
EXEC sp_delete_backuphistory @oldest_date = '20090101';<br />
This command deletes all history prior to the date provided. The date provided is the oldest date to keep.<br />
Also consider the following command:<br />
EXEC sp_delete_database_backuphistory @database_name = 'Market';
This command deletes the history for a database named Market.<br />
If a database is restored onto another server, the backup information is not restored with the database, as<br />
it is held in the msdb database of the original system.<br />
Retrieving Backup Metadata<br />
Information about a specific media set is available by executing the RESTORE command with the<br />
following options:<br />
Command Description<br />
RESTORE LABELONLY Returns information about the backup media on a<br />
specified backup device<br />
RESTORE HEADERONLY Returns all the backup header information for all<br />
backup sets on a particular backup device<br />
RESTORE FILELISTONLY Returns a list of data and log files contained in a<br />
backup set<br />
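For example, to see every backup set contained in a single backup device (the path is illustrative):

```sql
-- Returns one row of header information per backup set in the file.
RESTORE HEADERONLY
FROM DISK = 'D:\SQLBackups\AW.bak';
```

This is the safest way to confirm which backup within a file is the most recent before restoring it.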
Question: In what situations would <strong>SQL</strong> Server not have complete information on the<br />
backups of a database stored in msdb?
Demonstration 2A: Viewing Backup History<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_06_PRJ\10775A_06_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 21 – Demonstration 2A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lesson 3<br />
Working with Backup Options<br />
Now that you have seen the most common options for backing up databases and transaction logs, it is important to consider some general aspects of creating backups, along with some of the less commonly used, but useful, backup options.
Objectives<br />
After completing this lesson, you will be able to:<br />
• Highlight backup considerations.<br />
• Perform copy-only backups.<br />
• Perform tail-log backups.
Backup Considerations<br />
Key Points<br />
<strong>SQL</strong> Server backups can be created while other users are working with the system. Other users might,<br />
however, be impacted by the I/O load placed on the system by the backup operation. <strong>SQL</strong> Server also<br />
places some limitations on the types of commands that can be executed while a backup is being<br />
performed. For example, the ALTER DATABASE command cannot be used with the ADD FILE or REMOVE FILE<br />
options, and shrinking a database is not permitted during a backup.<br />
The BACKUP command cannot be included in either an explicit or an implicit transaction. You cannot<br />
ROLLBACK a BACKUP.<br />
Databases can only be backed up while they are online, but it is still possible to perform a backup of the<br />
transaction log when a database is damaged, assuming the log file itself is still intact. This is a key reason<br />
why it is important to place data and log files on separate physical media.<br />
VSS and VDI<br />
The Windows Volume Shadow Copy Service (VSS) and the Virtual Device Interface (VDI) programming<br />
interface are available for use with SQL Server. The main purpose of these interfaces is to allow third-party<br />
backup tools to work with SQL Server.<br />
In very large systems, it is common to need to perform disk-to-disk imaging while the system is in<br />
operation, because standard SQL Server backups might take too long to be effective. The VDI programming<br />
interface allows an application to freeze <strong>SQL</strong> Server operations momentarily while a consistent snapshot<br />
of the database files is created. This form of snapshot is commonly used in geographically distributed SAN<br />
replication systems.
Copy-only Backups<br />
Key Points<br />
A copy-only backup is a <strong>SQL</strong> Server backup that is independent of the sequence of conventional <strong>SQL</strong><br />
Server backups. Usually, taking a backup changes the database and affects how later backups are<br />
restored.<br />
There may, however, be a need to take a backup for a special purpose without affecting the overall<br />
backup and restore procedures for the database.<br />
Copy-only backups can be made of either the database or of the transaction logs. Restoring a copy-only<br />
full backup is the same as restoring any full backup.<br />
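A copy-only full backup can be created by adding the COPY_ONLY option to the BACKUP command. The database name and file path below are drawn from the lab environment used in this course and are only illustrative:

```sql
-- Copy-only full backup: does not reset the differential base,
-- so later differential backups are unaffected by this backup.
BACKUP DATABASE MarketDev
TO DISK = N'L:\SQLBackups\MarketDev_Copy.BAK'
WITH COPY_ONLY, INIT;
```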
Question: Can you suggest a scenario where you might use a Copy-only backup?
Tail-log Backups<br />
Key Points<br />
<strong>SQL</strong> Server 2005 and later versions require that you take a tail-log backup before you start a restore over<br />
an existing database. This requirement is intended to avoid a common data loss scenario where a<br />
database has not been fully backed up before a restore of the database occurs. The requirement ensures<br />
that, by default, all transactions must have been written to at least one backup before they can be<br />
overwritten. The tail-log backup prevents work loss and keeps the log chain intact.<br />
WITH NORECOVERY<br />
The WITH NORECOVERY option is normally applied to restore operations and is discussed in detail in<br />
Module 7. However, users are often surprised to see that the command to create a tail-log backup also<br />
has a WITH NORECOVERY option. This option backs up the transaction log and then immediately changes<br />
the database into a recovering state. The most common reason for the use of this option is in conjunction<br />
with log shipping, when a server is changing roles from a primary server to a secondary server.<br />
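A tail-log backup that also places the database into a restoring state can be sketched as follows; the database name and path are illustrative assumptions:

```sql
-- Back up the tail of the log, then leave the database in the RESTORING
-- state so that no further transactions can be written to it.
BACKUP LOG MarketDev
TO DISK = N'L:\SQLBackups\MarketDev_Tail.BAK'
WITH NORECOVERY;
```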
Note Log Shipping is an advanced option that is out of scope for this course.<br />
Tail-log Backups<br />
When you are recovering a database to the point of a failure, the tail-log backup is often the last backup<br />
of interest in the recovery plan. It is a standard type of transaction log backup. If you cannot back up the<br />
tail of the log, you can only recover a database to the end of the last backup that was created before the<br />
failure.<br />
Not all restore scenarios require a tail-log backup. You do not need to have a tail-log backup if the<br />
recovery point is contained in an earlier log backup, or if you are moving or replacing (overwriting) the<br />
database and do not need to restore it to a point in time after the most recent backup.
Use the CONTINUE_AFTER_ERROR option if you are backing up the tail of a damaged database.<br />
If you are unable to back up the tail of the log using the NO_TRUNCATE option when the database is<br />
damaged, you can attempt a tail-log backup by specifying CONTINUE_AFTER_ERROR instead of<br />
NO_TRUNCATE.<br />
The NO_TRUNCATE option is equivalent to using the COPY_ONLY and CONTINUE_AFTER_ERROR options<br />
together. It should be attempted for damaged databases, because it causes the database engine to attempt<br />
the backup regardless of the state of the database. This means that a backup taken using the<br />
NO_TRUNCATE option might have incomplete metadata. Without the NO_TRUNCATE option, the<br />
database must be online.<br />
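For a damaged database, a tail-log backup attempt might look like the following sketch (the database name and path are illustrative):

```sql
-- Attempt a tail-log backup even though the data files are damaged.
-- NO_TRUNCATE is equivalent to COPY_ONLY plus CONTINUE_AFTER_ERROR.
BACKUP LOG MarketDev
TO DISK = N'L:\SQLBackups\MarketDev_Tail.BAK'
WITH NO_TRUNCATE;
```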
Question: What is the biggest advantage of being able to perform tail-log backups even<br />
when data files are damaged?
Demonstration 3A: Tail-log Backup<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_06_PRJ\10775A_06_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lab 6: Backup of <strong>SQL</strong> Server <strong>Database</strong>s<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_06_PRJ\10775A_06_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You have reviewed and updated the recovery models. As the database administrator, you need to<br />
implement a database backup strategy. You have been provided with details of the required backup<br />
strategy for a number of databases on a <strong>SQL</strong> Server instance. You need to complete the required<br />
backups.
Exercise 1: Investigate Backup Compression<br />
Scenario<br />
The size of the database backups has been increasing with the number of orders being placed. You want<br />
to investigate the amount of space that can be saved by using backup compression. In this exercise, you<br />
need to investigate the effectiveness of backup compression when backing up the MarketDev database<br />
on the Proseware instance. You will perform a full database backup with backup compression disabled.<br />
You will then perform another full database backup with backup compression enabled. You will compare<br />
the backup files produced to identify the possible space savings.<br />
The main tasks for this exercise are as follows:<br />
1. Create a database backup without compression.<br />
2. Create a database backup with compression.<br />
3. Compare the file sizes created.<br />
Task 1: Create a database backup without compression<br />
• Using Windows Explorer, create a new folder L:\<strong>SQL</strong>Backups.<br />
• Perform a full backup of the MarketDev database with compression disabled to the file<br />
L:\<strong>SQL</strong>Backups\MarketDev_Full_Uncompressed.BAK.<br />
Task 2: Create a database backup with compression<br />
• Perform a full backup of the MarketDev database with compression enabled to the file<br />
L:\<strong>SQL</strong>Backups\MarketDev_Full_Compressed.BAK.<br />
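If you prefer T-SQL over the SSMS user interface for these two tasks, the backups could be sketched like this, using the file names given in the tasks above:

```sql
-- Full backup with compression explicitly disabled.
BACKUP DATABASE MarketDev
TO DISK = N'L:\SQLBackups\MarketDev_Full_Uncompressed.BAK'
WITH NO_COMPRESSION, INIT;

-- Full backup with compression enabled, for comparison.
BACKUP DATABASE MarketDev
TO DISK = N'L:\SQLBackups\MarketDev_Full_Compressed.BAK'
WITH COMPRESSION, INIT;
```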
Task 3: Compare the file sizes created<br />
• Calculate the space savings provided by compression as:<br />
Space savings (%) = (Uncompressed size − Compressed size) × 100 / Uncompressed size<br />
Results: After this exercise, you have calculated the space saved by using backup compression on the<br />
MarketDev database.<br />
Exercise 2: Transaction Log Backup<br />
Scenario<br />
Part of the ongoing management of the MarketDev database is a series of transaction log backups to<br />
provide point-in-time recovery. In this exercise, you need to back up the transaction log.<br />
The main tasks for this exercise are as follows:<br />
1. Execute a script to introduce workload to the MarketDev database.<br />
2. Back up the transaction log on the MarketDev database.<br />
Task 1: Execute a script to introduce workload to the MarketDev database<br />
• Open and execute the script file 61 – Workload File.sql from Solution Explorer.
Task 2: Back up the transaction log on the MarketDev database<br />
• Back up the transaction log of the MarketDev database to the file<br />
L:\SQLBackups\MarketDev_Log_Compressed.BAK. Use backup compression during the backup.<br />
Results: After this exercise, you should have completed a transaction log backup.<br />
Exercise 3: Differential Backup<br />
Scenario<br />
There is a concern that the data volumes in the MarketDev database will become so large that daily full<br />
backups will not be possible. In this exercise, you need to perform a differential backup to help manage<br />
the size of the database backups.<br />
The main tasks for this exercise are as follows:<br />
1. Execute a script to introduce workload to the MarketDev database.<br />
2. Create a differential backup of the MarketDev database.<br />
Task 1: Execute a script to introduce workload to the MarketDev database<br />
• Open and execute the script file 61 – Workload File.sql from Solution Explorer.<br />
Task 2: Create a differential backup of the MarketDev database<br />
• Create a differential backup of the MarketDev database to the file<br />
L:\<strong>SQL</strong>Backups\MarketDev_Differential_Compressed.BAK. Use backup compression during the backup.<br />
• Using Windows Explorer, note the size of the Differential backup compared to the Full backup.<br />
Task 3: Execute a script to introduce workload to the MarketDev database<br />
• Open and execute the script file 61 – Workload File.sql from Solution Explorer.<br />
Task 4: Append a differential backup to the previous differential backup file<br />
• Append a differential backup of the MarketDev database to the file<br />
L:\<strong>SQL</strong>Backups\MarketDev_Differential_Compressed.BAK.<br />
• Using Windows Explorer, note that the size of the Differential backup has increased. The file now<br />
contains two backups.<br />
Results: After this exercise, you should have completed two differential backups.<br />
Exercise 4: Copy-only Backup<br />
Scenario<br />
Another team periodically needs a temporary copy of the MarketDev database. It is important that these<br />
copies do not interfere with the backup strategy that is being used. In this exercise, you need to perform a<br />
copy-only backup and verify the backup.
The main task for this exercise is as follows:<br />
1. Create a copy-only backup of the MarketDev database, ensuring that you choose to verify the backup.<br />
Task 1: Create a copy-only backup of the MarketDev database, ensuring that you choose<br />
to verify the backup<br />
• Create a copy-only backup of the MarketDev database to the file<br />
L:\<strong>SQL</strong>Backups\MarketDev_Copy_Compressed.BAK. Make sure you choose to:<br />
• Verify the backup while creating it.<br />
• Use backup compression.<br />
• Create a new media set called MarketDev Copy Backup.<br />
• For the media set description use MarketDev Copy Backup for Integration Team.<br />
Results: After this exercise, you should have completed a copy-only backup.<br />
Challenge Exercise 5: Partial Backup (Only if time permits)<br />
Scenario<br />
On the Proseware instance, there is a database called RateTracking that has two filegroups. The ARCHIVE<br />
filegroup is set to read-only and both the default filegroup USERDATA and the PRIMARY filegroup are<br />
read-write. In this exercise, you need to back up the read-write filegroups only, using T-<strong>SQL</strong> commands.<br />
The main task for this exercise is as follows:<br />
1. Perform a backup of the read-write filegroups on the RateTracking database.<br />
Task 1: Perform a backup of the read-write filegroups on the RateTracking database<br />
• Perform a backup of the read-write filegroups (USERDATA and PRIMARY) on the RateTracking<br />
database. Write the backup to the file L:\<strong>SQL</strong>Backups\RateTracking_ReadWrite.BAK. Use the<br />
CHECKSUM and INIT options.<br />
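One possible T-SQL sketch for this task, using the READ_WRITE_FILEGROUPS option to perform a partial backup of the primary and all read-write filegroups:

```sql
-- Partial backup: PRIMARY and USERDATA are included; the read-only
-- ARCHIVE filegroup is skipped.
BACKUP DATABASE RateTracking READ_WRITE_FILEGROUPS
TO DISK = N'L:\SQLBackups\RateTracking_ReadWrite.BAK'
WITH CHECKSUM, INIT;
```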
Results: After this exercise, you should have completed a partial backup.
Module Review and Takeaways<br />
Review Questions<br />
1. Which backup types can be performed in simple recovery model?<br />
2. How can backup information be read?<br />
Best Practices<br />
1. Consider using CHECKSUM to create a checksum over your backup files.<br />
2. Use backup compression to increase backup and restore performance and save storage space.<br />
3. Consider mirroring your backups to increase safety.<br />
4. Check whether differential backups can speed up your restore process in the full recovery model.<br />
5. Use COPY_ONLY for out of sequence backups.
Module 7<br />
Restoring <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s<br />
Contents:<br />
Lesson 1: Understanding the Restore Process 7-3<br />
Lesson 2: Restoring <strong>Database</strong>s 7-8<br />
Lesson 3: Working with Point-in-time recovery 7-19<br />
Lesson 4: Restoring System <strong>Database</strong>s and Individual Files 7-27<br />
Lab 7: Restoring <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s 7-34<br />
7-2 Restoring <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s<br />
Module Overview<br />
In the previous module, you saw how to create backups of <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> <strong>2012</strong> databases. A<br />
backup strategy might involve many different types of backup. This means that it is important for you to<br />
understand the process required when restoring databases. Often when database restores are required, an<br />
urgent situation exists. Unfortunately, database administrators often make errors of judgment when<br />
placed into urgent recovery situations. The first rule when dealing with a bad situation should always be<br />
to "do no further harm". In urgent situations, it is more important than ever to have a clear plan for how<br />
to proceed. A good understanding of the process required and a good plan can help avoid making the<br />
situation worse.<br />
Not all database restores are related to system failures. With most system failure situations, there is a need<br />
to return the system to as close as possible to the state that it was in prior to the failure. Some failures are<br />
related to human errors. In those cases, you may wish to recover the system to a point prior to the failure.<br />
The point-in-time recovery features of <strong>SQL</strong> Server <strong>2012</strong> can help you to achieve this.<br />
User databases are more likely to be affected by system failures than system databases as user databases<br />
are typically much larger than system databases. However, system databases can be affected by failures,<br />
and special care needs to be taken when recovering system databases. In particular, you need to<br />
understand how each system database should be recovered as not all system databases can be recovered<br />
using the same process.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Understand the restore process.<br />
• Restore databases.<br />
• Work with Point-in-time Recovery.<br />
• Restore system databases and individual files.
Lesson 1<br />
Understanding the Restore Process<br />
You have seen that there are many types of backup that can be created with <strong>SQL</strong> Server <strong>2012</strong>. Similarly,<br />
there are different types of restore processes that can be required.<br />
It was mentioned earlier that when it is time to recover a database, a good plan is required to avoid<br />
causing further damage. Once the preliminary step of attempting to create a tail-log backup has been<br />
carried out, the most important decision is which database backups need to be restored, and in which<br />
order.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe the different types of restores.<br />
• Decide which backups to restore and in which order.
Types of Restores<br />
Key Points<br />
Restoring a database in <strong>SQL</strong> Server <strong>2012</strong> is a two-step process. The first step involves restoring data pages<br />
from one or more backups. Once the data pages have been restored, the database is potentially in an<br />
inconsistent state. To correct this situation, in the second step of the restore process, the available details<br />
from the transaction log are used to recover the database. The restore scenarios available for a database<br />
depend on the recovery model of the database and the edition of <strong>SQL</strong> Server.<br />
Complete <strong>Database</strong> Restore in Simple Recovery model<br />
The most basic restore strategy for <strong>SQL</strong> Server databases is to restore and recover a full database backup.<br />
If a differential backup of the database is available, the latest differential backup could be restored after<br />
the restore of the full database backup but before the recovery process for the database.<br />
In most cases where the simple recovery model is used, no differential backups are performed. In that<br />
case, only the last full database backup is restored, and the database is returned to the state it was in<br />
just prior to the completion of that full database backup.<br />
Complete <strong>Database</strong> Restore in Full Recovery model<br />
The most common restore strategy requires full or bulk-logged recovery model and involves restoring full,<br />
differential (if present), and log backups.<br />
To restore to the last available point in time:<br />
1. Try to perform a tail-log backup. This step might or might not be possible, depending upon the type<br />
of failure that has occurred.<br />
2. Restore the last full database backup.
3. Restore the most recent differential backup if a differential backup had been created.<br />
4. Restore all transaction log backups performed after the point of the most recent differential backup<br />
(or full backup if no differential backup was created) in the same sequence as they were created. If a<br />
tail-log backup was successfully created, it would be restored as the most recent transaction log<br />
backup.<br />
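The four steps above can be sketched in T-SQL as follows; all file names are illustrative assumptions:

```sql
-- 1. Attempt a tail-log backup before overwriting the database.
BACKUP LOG MarketDev
TO DISK = N'L:\SQLBackups\MarketDev_Tail.BAK' WITH NORECOVERY;

-- 2. Restore the last full database backup without recovering.
RESTORE DATABASE MarketDev
FROM DISK = N'L:\SQLBackups\MarketDev_Full.BAK' WITH NORECOVERY;

-- 3. Restore the most recent differential backup, if one exists.
RESTORE DATABASE MarketDev
FROM DISK = N'L:\SQLBackups\MarketDev_Diff.BAK' WITH NORECOVERY;

-- 4. Restore each later log backup in sequence, recovering only on
--    the last one (the tail-log backup).
RESTORE LOG MarketDev
FROM DISK = N'L:\SQLBackups\MarketDev_Log1.BAK' WITH NORECOVERY;
RESTORE LOG MarketDev
FROM DISK = N'L:\SQLBackups\MarketDev_Tail.BAK' WITH RECOVERY;
```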
While the aim of the restore would normally be to recover the database to the latest point in time<br />
possible, options do exist to restore the database to earlier points in time. These options will be discussed<br />
in Lesson 3 of this module.<br />
Note Even if differential backups and transaction log backups were created, you can<br />
choose not to apply them, if you wish to return the database state to an earlier point in<br />
time.<br />
System <strong>Database</strong> Restore<br />
Restoring system databases is possible but requires special processes to avoid further issues from<br />
occurring. For example, if a master database is left in an inconsistent state, <strong>SQL</strong> Server will refuse to start<br />
until the master database is recovered. The recovery of system databases will be discussed in Lesson 4 of<br />
this module.<br />
File Restore<br />
If individual files in a database have become corrupted or have been lost, the ability to restore individual<br />
files has the potential to substantially reduce the overall time to recover the database. The recovery of<br />
individual files is only supported for read-only files when operating in simple recovery model, but can be<br />
used for read-write files when using the bulk-logged or full recovery models. The recovery of individual<br />
files uses a process that is similar to the complete database restore process and will be discussed in Lesson<br />
4 of this module.<br />
Online Restore<br />
Online restore involves restoring data while the database is online. This is the default option for File, Page,<br />
and Piecemeal restores. In <strong>SQL</strong> Server <strong>2012</strong>, online restore is only available in the Enterprise edition.<br />
Piecemeal Restore<br />
A piecemeal restore is used to restore and recover the database in stages, based on filegroups, rather than<br />
restoring the entire database at a single time. The first filegroup that must be restored is the primary<br />
filegroup. In <strong>SQL</strong> Server <strong>2012</strong>, piecemeal restore is only available in the Enterprise edition.<br />
Page Restore<br />
Another advanced option is the ability to restore an individual data page. If an individual data page is<br />
corrupt, users will usually see either an 823 error or an 824 error when they execute a query that tries to<br />
access the page. An online page restore could be used to try to recover the page. Once the restore has<br />
commenced, if a user query tries to access the page, the error that the user would see is error 829, which<br />
indicates "page is restoring". If the page restore is successful, user queries that access the page would<br />
again return results as expected. Page restores are supported under full and bulk-logged recovery models<br />
but are not supported under simple recovery model. In <strong>SQL</strong> Server <strong>2012</strong>, online page restore is only<br />
available in the Enterprise edition. Offline page restore is available in lower editions.
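A single-page restore can be sketched as follows; the page address ('1:57' means file ID 1, page ID 57) and the file names are hypothetical:

```sql
-- Restore one damaged page from a full backup, then bring the page
-- up to date by applying later log backups.
RESTORE DATABASE MarketDev
PAGE = '1:57'
FROM DISK = N'L:\SQLBackups\MarketDev_Full.BAK'
WITH NORECOVERY;
-- Apply subsequent log backups WITH NORECOVERY, then finish the
-- sequence WITH RECOVERY to bring the page back online.
```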
Preparations for Restoring Backups<br />
Key Points<br />
In critical situations, users often make inappropriate choices about the actions that should be taken. It is<br />
important to avoid any action that will make the situation worse than necessary. Before restoring any<br />
database, it is important to attempt a tail-log backup, unless you are intending to replace the current<br />
state of the database. The tail-log backup can often be performed, even when damage has occurred to<br />
the data files of the database. The tail-log backup is critical when you need to restore the database to the<br />
latest point in time possible.<br />
Identifying Backups to Restore<br />
The recovery of any database depends upon restoring the correct backups in the correct order. The<br />
normal process for restoring a database is:<br />
1. Restore the latest full database backup as a base to work from. (If only individual files are damaged or<br />
missing, you may be able to restore only those files).<br />
2. If differential backups have been created, only the latest differential backup is needed. (Differential<br />
backups save all database extents that have been modified since the last full database backup.<br />
Differential backups are not incremental in nature.)<br />
3. If transaction log backups have been created, all transaction log backups since the last differential<br />
backup are required. You also need to include the tail-log backup created at the start of the restore<br />
process, if the tail-log backup was successful. (This step does not apply to databases in simple<br />
recovery model).
Discussion: Determining Required Backups to Restore<br />
Discussion<br />
The example on the slide describes the backup schedule for an organization.<br />
Question: If a failure occurs at Thursday at 10:30AM, what is the restore process that should<br />
be undertaken?
Lesson 2<br />
Restoring <strong>Database</strong>s<br />
It is important to understand the phases that <strong>SQL</strong> Server <strong>2012</strong> uses when restoring a database. Once a<br />
decision has been made to restore a database, you need to know how to implement the restore process.<br />
The restore process might involve both database and transaction log backups. When multiple backups<br />
need to be restored in a single process, you need to control the point at which the recovery of the<br />
database occurs. If database recovery occurs too early in the process, you will not be able to complete the<br />
entire restore process.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe the different phases of the restore process.<br />
• Restore a database from a full database backup or a differential backup.<br />
• Restore a transaction log backup.<br />
• Control database recovery by using the WITH RECOVERY option.<br />
• Allow read-only access to a recovering database by using the WITH STANDBY option.
Phases of the Restore Process<br />
Key Points<br />
The restore of a <strong>SQL</strong> Server <strong>2012</strong> database passes through three phases: Data Copy, Redo, and Undo. The<br />
combination of the Redo and the Undo phases is commonly referred to as the recovery of a database.<br />
Data Copy<br />
The data copy phase is typically the longest phase in a database restore. The data files from the database<br />
need to be recovered from the backups. Before any data pages are restored, the header of the backup is<br />
read and <strong>SQL</strong> Server recreates the required data and log files. If instant file initialization (IFI) has not been<br />
enabled by granting rights to the <strong>SQL</strong> Server service account, the rewriting of the data files can take a<br />
substantial amount of time.<br />
Once the data and log files are recreated, the data files are restored from the full database backup. Data<br />
pages are retrieved from the backup in order and written to the data files.<br />
The log files need to be zeroed out before they can be used; IFI is not used for log files. This process can<br />
also take a substantial amount of time if the log files are large.<br />
If a differential backup is also being restored, <strong>SQL</strong> Server overwrites the extents in the data files with the<br />
ones that are contained in the differential backup.<br />
Redo Phase<br />
Details from the transaction log are then retrieved. In simple recovery model, these details would only be<br />
retrieved from either the full database backup or the differential backup, if a differential backup is also<br />
being restored. In full or bulk-logged recovery model, these log file details will be supplemented by the<br />
contents of any transaction log backups that were taken after the full and differential database backups.<br />
In the redo phase, <strong>SQL</strong> Server rolls into the database pages all changes that are contained within the<br />
transaction log details, up to the recovery point. The recovery point is typically the latest time for which<br />
transactions are contained in the log.
Undo Phase<br />
Note that the transaction log details will likely include details of transactions that were not committed at<br />
the recovery point, which is typically the time of the failure. In the undo phase, <strong>SQL</strong> Server rolls back any<br />
uncommitted transactions.<br />
Because the action of the undo phase involves rolling back uncommitted transactions and placing the<br />
database online, subsequent backups cannot be restored.<br />
During the undo phase, the Enterprise edition of <strong>SQL</strong> Server <strong>2012</strong> will allow the database to come online<br />
and will allow users to begin to access the database. This capability is referred to as the fast recovery<br />
feature. Queries that attempt to access data that is still being undone are blocked until the undo phase is<br />
complete. This could potentially cause transactions to time out.<br />
Question: Why does <strong>SQL</strong> Server need to redo and undo transactions when only restoring a<br />
full database backup?
WITH RECOVERY Option<br />
Key Points<br />
In general, a database cannot be brought online until it has been recovered. The one exception to this is<br />
the fast recovery option that was mentioned in the last topic. The fast recovery option allows users to<br />
access the database while the undo phase is continuing.<br />
Recovery Events
Note that recovery does not only occur during the execution of RESTORE commands. If a database is taken offline and then placed back into an ONLINE state, recovery of the database will occur. The same recovery process occurs when SQL Server 2012 restarts.
Note Other events that lead to database recovery include clustering or database mirroring failovers. Failover clustering and database mirroring are advanced topics that are out of scope for this course. These events are listed here for completeness.
The recovery process in SQL Server is critical to maintaining transactional integrity, which requires that all committed transactions are recorded in the database and that all uncommitted transactions are rolled back.
Recovery Options
Each RESTORE command includes an option to specify WITH RECOVERY or WITH NORECOVERY. The WITH RECOVERY option is the default action and does not need to be specified.
It is important to choose the correct option (WITH RECOVERY or WITH NORECOVERY) when executing a RESTORE command. The process is straightforward in most cases: all restores in a sequence must be performed WITH NORECOVERY except the last, which must be performed WITH RECOVERY.
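A typical restore sequence following this rule might look like the following sketch. The database name and backup paths are assumed examples, not values from this course:

```sql
-- Sketch only; database and backup file names are assumed examples.
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_Full.bak'
WITH NORECOVERY;

RESTORE LOG AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_Log.bak'
WITH RECOVERY;  -- only the last restore in the sequence specifies RECOVERY
```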
There is no way to restore additional backups after a restore WITH RECOVERY. If a restore has been performed WITH RECOVERY by accident, the restore sequence has to be restarted.
If the last backup of a set was inadvertently also restored WITH NORECOVERY, the database can be forced to recover by executing the following command:
RESTORE LOG databasename WITH RECOVERY;
Question: Why is it not possible to restore additional backups to a recovered database?
Restoring a Database
Key Points
You can restore a database by using either the GUI in SSMS or the RESTORE DATABASE command in T-SQL.
If a single database backup is being restored, the WITH RECOVERY option can be used, as no later backups need to be restored. You can also omit the WITH RECOVERY option as it is the default for the RESTORE DATABASE command.
Differential Restore
The command for restoring a differential backup is identical to the command for restoring a full database backup. Differential backups might be appended to the same file as the full database backup. In that case, you need to specify the file from the media set that you need to restore.
Consider the following command:
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\SQLBackups\AW.bak'
WITH FILE = 1, NORECOVERY;
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\SQLBackups\AW.bak'
WITH FILE = 3, RECOVERY;
In this example, the database AdventureWorks is restored from the first file in the media set. The media set is stored in the operating system file D:\SQLBackups\AW.bak. In this case, the second file that was contained in the media set was the first differential backup that was performed on the database. The third file in the media set was the second differential backup that was performed on the database. Because the second differential backup was the latest differential backup that was performed, the second RESTORE command in the example shows how to restore the latest differential backup from that media set.
WITH REPLACE
SQL Server 2012 will not allow you to restore a database backup over an existing database if you have not performed a tail-log backup on the database. If you attempt to do this using SQL Server Management Studio, SQL Server 2012 will provide a warning and will automatically attempt to create a tail-log backup for you first. If you need to perform the restore operation and you do not have a tail-log backup, you must specify the WITH REPLACE option.
Note The WITH REPLACE option needs to be used with caution as it can lead to data loss.
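A restore that deliberately overwrites an existing database without a tail-log backup might look like the following sketch. The database name and backup path are assumed examples:

```sql
-- Assumed example names. WITH REPLACE overwrites the existing database even
-- though no tail-log backup has been taken, so any unbacked-up work is lost.
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_Full.bak'
WITH REPLACE;
```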
WITH MOVE
When you restore a database from another server, you might need to place the database files in different locations than those that are recorded in the backup from the original server. You might also need to do this if you are copying a database by a process of backup and restore. The WITH MOVE option allows you to specify new file locations. Consider the following command:
RESTORE DATABASE Spatial
FROM DISK = 'D:\SQLBackups\Spatial.bak'
WITH MOVE 'Spatial_Data' TO 'D:\MKTG\Spatial.mdf',
MOVE 'Spatial_Log' TO 'L:\MKTG\Spatial.ldf';
In the example shown, the database named Spatial is being restored from another server. In addition to specifying the source location for the media set, the command specifies new locations for each database file. Note that the MOVE option requires the logical file name, rather than the original physical file path.
Restoring a Transaction Log
Key Points
You can restore the transaction logs for a database by using either the GUI in SSMS or the RESTORE LOG command in T-SQL. All log backups apart from the last should be restored WITH NORECOVERY. The last log backup (which is often the tail-log backup) is then restored WITH RECOVERY.
Transaction Log Restores
All transaction logs created after the last full or differential backup must be restored in chronological order with no break in the chain of backups. A break in the chain of transaction logs will cause the restore process to fail. The restore process cannot be continued after a failure and would need to be restarted.
While the database is in recovering mode, Object Explorer shows it with the words "(Restoring…)" after the name of the database, as shown in the slide example.
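A log restore chain following these rules might be sketched as follows. The database name, paths, and file numbers are assumed examples:

```sql
-- Sketch of a log restore chain; names, paths, and file numbers are assumed.
RESTORE LOG AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_Log.bak'
WITH FILE = 1, NORECOVERY;

RESTORE LOG AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_Log.bak'
WITH FILE = 2, NORECOVERY;

-- The final log backup (often the tail-log backup) is restored WITH RECOVERY.
RESTORE LOG AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_TailLog.bak'
WITH RECOVERY;
```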
Question: Why is it faster to restore differential and log backups instead of restoring all log backups since the last full database backup?
WITH STANDBY Option
Key Points
SQL Server 2012 provides the ability to view the contents of a database that has not been recovered, by using the WITH STANDBY option instead of the WITH NORECOVERY option.
Further transaction log backups can be applied to a database that has been restored WITH STANDBY. There are two common reasons for using the WITH STANDBY option.
WITH STANDBY and Log Shipping
One widely used high availability feature maintains a standby server that can be brought online quickly. This feature is called log shipping. The basic operation of log shipping is to automate the process of backing up log files on one computer, copying the log files to another computer, and restoring the log files on that other computer. If the restores were performed WITH NORECOVERY, the database on the second server would be nearly up to date yet unusable. The database cannot be recovered to allow for read-only use, because more transaction logs would need to be restored at a later time.
Note Log shipping is an advanced topic beyond the scope of this course, but an introduction to log shipping is provided in Appendix A.
WITH STANDBY was designed to help in this situation, by performing a modified version of recovery that copies the transactions that would have been deleted by the undo phase to an operating system file. When the next transaction log restore operation is required, SQL Server automatically reapplies the transactions from that file to the log before continuing with the log restore.
WITH STANDBY for Inspection
Imagine that a user error has caused the inadvertent deletion of some data. You might not be aware of when the error occurred, so when restoring the database, you might not know which log backup contains the deletion. You can use the WITH STANDBY option on each log restore and inspect the state of the database after each restore operation.
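Each step of this inspection process might look like the following sketch. The database name, backup path, and undo file path are assumed examples:

```sql
-- Sketch of one inspection step; names and paths are assumed examples.
-- STANDBY names an undo file that holds the transactions backed out so that
-- further log backups can still be applied.
RESTORE LOG AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_Log1.bak'
WITH STANDBY = 'D:\SQLBackups\AW_Undo.bak';

-- The database is now readable; check whether the deleted data is still
-- present, then either restore the next log backup or recover the database.
```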
Question: What would be a reason to provide read-only access to a database?
Question: What would be a limitation of the WITH STANDBY option when used to permit reporting on the second standby database in a log shipping environment?
Demonstration 2A: Restoring Databases
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_07_PRJ\10775A_07_PRJ.ssmssln and click Open.
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer. Note: The setup script for this module is intended to throw an error regarding missing files; this is normal.
4. Open the 21 – Demonstration 2A.sql script file.
5. Follow the instructions contained within the comments of the script file.
Lesson 3
Working with Point-in-time Recovery
In the previous lesson, you saw how to recover a database to the latest possible point in time. However, there are occasions when there is a need to recover the database to an earlier point in time. You have seen that you could stop the restore process after any of the backups are restored and initiate the recovery of the database. While stopping the restore process after a restore provides a coarse level of control over the recovery point, SQL Server 2012 provides additional options that allow for more fine-grained control over the recovery point.
Objectives
After completing this lesson, you will be able to:
• Describe point-in-time recovery.
• Implement the STOPAT restore option.
• Explain the complexities surrounding the synchronization of multiple databases.
• Implement the STOPATMARK restore option.
Overview of Point-in-time Recovery
Key Points
Recovery of a database to the most recent point in time is the most commonly requested option. However, there can also be a need to restore a database to an earlier point in time.
SQL Server 2012 allows the restore of a database to stop at a specified point in time and to then commence recovery. The point in time can be specified in two different ways. A datetime value can specify the exact time for the recovery point. Keep in mind that computer times tend to be approximate and that computer systems can perform a large amount of work in a short period of time. A datetime value might then not be precise enough for specifying a recovery point.
The other option provided by SQL Server 2012 is to specify a named transaction (referred to as a transaction log mark) as the recovery point.
For either of these options to work, the database needs to be in the full or bulk-logged recovery model. SQL Server can only stop at points in the transaction log chain that were logged while the database was in the full recovery model. If a database changes from the full recovery model to the bulk-logged recovery model to process bulk transactions, and is then changed back to the full recovery model, the recovery point cannot be in the time that the database was in the bulk-logged recovery model. If you attempt to specify a recovery point during which the database was in the bulk-logged recovery model, the restore will fail and an error will be returned.
Question: What other restore option might be useful if the point in time is not known exactly and no mark was set?
STOPAT Option
Key Points
The STOPAT option is used to specify a recovery point that is based on a datetime value. Because the DBA might not know in advance which transaction log backup contains the time at which the recovery needs to stop, the syntax of the RESTORE LOG command allows the RECOVERY option to be specified for each log restore command in the sequence.
Consider the following restore sequence:
RESTORE DATABASE database_name FROM full_backup
WITH NORECOVERY;
RESTORE DATABASE database_name FROM differential_backup
WITH NORECOVERY;
RESTORE LOG database_name FROM first_log_backup
WITH STOPAT = time, RECOVERY;
… (additional log backups could be restored here)
RESTORE LOG database_name FROM final_log_backup
WITH STOPAT = time, RECOVERY;
Note that the RECOVERY option is specified on each of the RESTORE LOG commands, not just on the last command.
The behavior of the STOPAT and RECOVERY options is as follows:
• If the specified time is earlier than the first time in the transaction log backup, the restore command fails and returns an error.
• If the specified time is contained within the period covered by the transaction log backup, the restore command recovers the database at that time.
• If the specified time is later than the last time contained in the transaction log backup, the restore command restores the logs, sends a warning message, and the database is not recovered so that additional transaction log backups can be applied.
With the behavior described above, the database is recovered up to the requested point, even when STOPAT and RECOVERY are both specified on every restore, as long as the requested point is not earlier than the start of the restore sequence.
Question: Why might you need to recover a database to a specific point in time?
Discussion: Synchronizing Recovery of Multiple Databases
Discussion
An application might use data in more than a single database, including data in multiple SQL Server instances.
Question: Do you use any multi-database applications?
Question: What problems might occur when the databases need to be restored?
Question: Why might restoring up to a point in time not be sufficient?
STOPATMARK Option
Key Points
If more precise control over the recovery point is required, the STOPATMARK option can be useful. The GUI in SSMS has no options for working with transaction log marks during restore operations. This process must be carried out using T-SQL.
Marking a Transaction
If you know in advance that you might need to recover to the point of a specific operation, you can place a mark in the transaction log to record that precise location. Consider the following command:
BEGIN TRAN UpdPrc WITH MARK 'Start of nightly update process';
In this command, a transaction is commenced, but the transaction is also given the name UpdPrc, and a transaction mark with the same name as the transaction is created. The value after the WITH MARK clause is only a description and is not used in the processing of the transaction mark.
If you do not know the name of a transaction that was marked, you can query the dbo.logmarkhistory table in the msdb database.
Mark-Related Options
The STOPATMARK option is similar to the STOPAT option for the RESTORE command. SQL Server will stop at the named transaction mark and include the named transaction in the redo phase.
If you wish to exclude the transaction (that is, restore everything up to the beginning of the named transaction), you can specify the STOPBEFOREMARK option instead.
If the transaction mark is not found in the transaction log backup that is being restored, the restore completes and the database is not recovered, so that other transaction log backups can be restored.
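Restoring up to a mark might look like the following sketch, using the UpdPrc mark from the earlier example; the database name and backup path are assumed:

```sql
-- Sketch only; database and backup names are assumed examples.
-- Restores up to and including the marked transaction named UpdPrc.
RESTORE LOG AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_Log.bak'
WITH STOPATMARK = 'UpdPrc', RECOVERY;

-- If the mark name is not known, it can be looked up in msdb:
SELECT * FROM msdb.dbo.logmarkhistory;
```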
Multi-database Applications
The main use for the STOPATMARK feature is to restore an entire set of databases to a mutually consistent state at some earlier point in time. If you need to perform a backup of multiple databases so that they can be recovered to a consistent point, consider marking all the transaction logs before commencing the backups.
Question: Why might STOPAT not be a good choice for synchronizing the restore of several databases, and why might STOPATMARK be preferred?
Demonstration 3A: Using STOPATMARK
Demonstration Steps
1. If Demonstration 2A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_07_PRJ\10775A_07_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer. Note: The setup script for this module is intended to throw an error regarding missing files; this is normal.
2. Open and execute the 31 – Demonstration 3A.sql script file from within Solution Explorer.
3. Follow the instructions contained within the comments of the script file.
Lesson 4
Restoring System Databases and Individual Files
User databases are restored much more often than system databases; user databases are typically much larger and are often spread across many devices, which makes them more exposed to failure. However, failures can affect system databases, and they may need to be restored. Restoring a system database is not identical to restoring user databases.
The master database is the most critical database for a SQL Server system, and recovery of the master database involves more steps than the recovery of other databases.
Rather than restoring entire databases, situations can arise where only a single file or filegroup needs to be restored. This can speed up the overall restore process.
Objectives
After completing this lesson, you will be able to:
• Recover system databases.
• Restore the master database.
• Restore a file or filegroup from a backup.
Recovering System Databases
Key Points
The recovery process for all system databases is not identical. Each system database has specific recovery requirements:
master
The master database holds all system-level configurations. SQL Server requires the master database before a SQL Server instance can run at all. A special procedure needs to be followed to restore the master database. The process for restoring the master database is discussed in the next topic.
model
The model database is the template for all databases that are created on the instance of SQL Server. When the model database is corrupt, the instance of SQL Server cannot start. This means that a normal restore command cannot be used to recover the model database if it becomes corrupted. In the case of a corrupt model database, the instance must be started with the -T3608 trace flag as a command-line parameter. This trace flag starts only the master database. Once SQL Server is running, the model database can be restored using the normal RESTORE DATABASE command.
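The restore step after starting the instance with the trace flag might look like the following sketch; the backup path is an assumed example:

```sql
-- Sketch only; the backup path is an assumed example. Run this after starting
-- the instance with the -T3608 trace flag as a command-line parameter, which
-- starts only the master database.
RESTORE DATABASE model
FROM DISK = 'D:\SQLBackups\model.bak'
WITH REPLACE;
```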
msdb
The msdb database is used by SQL Server Agent for scheduling alerts and jobs, and for recording details of operators. The msdb database also contains history tables, such as the history tables that record details of backup and restore operations. If the msdb database becomes corrupt, SQL Server Agent will not start. msdb can be restored like a user database, using the RESTORE DATABASE command, and then the SQL Server Agent service can be restarted.
resource
The resource database is a read-only database that contains copies of all system objects that ship with Microsoft SQL Server 2012. No backup operations can be performed on this database, and it is a hidden database. It can, however, be corrupted by failures in areas such as I/O subsystems or memory. If the resource database is corrupt, it can be restored by a file-level restore in Windows or by running the setup program for SQL Server.
tempdb
The tempdb database is a workspace for holding temporary or intermediate result sets. This database is re-created every time an instance of SQL Server is started. When the server instance is shut down, any data in tempdb is deleted permanently. No backup operations can be performed on the tempdb database, but because it is re-created with every restart of the instance, a restart is enough to recover tempdb in case of corruption.
Restoring the master Database
Key Points
The master database is integral to the operation of SQL Server. Because SQL Server will not start if the master database is missing or corrupt, you cannot execute a standard RESTORE DATABASE command to restore the master database in that situation.
Recovering the master Database
Before starting this process, some version of a master database must exist so that the SQL Server instance will start at all. If your master database becomes corrupted, a temporary master database must be created first. This temporary master database doesn't need to have the correct configuration, as it will only be used to start up the instance. The correct master database will be restored afterwards using the process described on the slide.
There are three ways to obtain a temporary master database. You can use the SQL Server setup program to rebuild the system databases, either from the location that you installed SQL Server from, or by running the setup program found at: Microsoft SQL Server\110\Setup\Bootstrap\SQL11\setup.exe. (This path is approximate and may change in future.)
Note The setup program will overwrite all system databases, and all will need to be restored later.
Another way to obtain a temporary master database is to use a file-level backup of the master database files to restore the master database. This file-level backup must have been taken when the master database was not in use, that is, when SQL Server was not running, or by using the VSS service.
Note Copying the master database from another instance is not supported. The VSS service is out of scope for this course.
The final option is to locate a master.mdf database from the Templates folder located in the MSSQL\Binn folder for each instance.
Once a temporary master database has been put in place, use the following procedure to recover the correct master database:
• Start the server instance in single-user mode. (SQL Server Configuration Manager can be used to start SQL Server in single-user mode by using the -m startup option.)
• Use a RESTORE DATABASE statement to restore a full database backup of master. (In single-user mode, it is recommended that you enter the RESTORE DATABASE statement using the sqlcmd utility.)
• After the master database is restored, the instance of SQL Server will shut down and terminate your sqlcmd connection.
• Remove the single-user startup parameter.
• Restart SQL Server.
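The restore step of the procedure above might look like the following sketch; the backup path is an assumed example:

```sql
-- Sketch only; the backup path is an assumed example. Run this from the
-- sqlcmd utility while the instance is started in single-user mode (-m).
-- When the restore completes, the instance shuts down automatically.
RESTORE DATABASE master
FROM DISK = 'D:\SQLBackups\master.bak'
WITH REPLACE;
```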
Restoring a File or Filegroup from a Backup
Key Points
Restoring individual files or filegroups from backups, instead of restoring entire databases, can often be a much faster option when corruption has occurred or when files or filegroups are missing.
Note While it is possible to back up only files or filegroups, there is no need to have performed file or filegroup backups before restoring individual files or filegroups. SQL Server can extract specific database files out of a full backup or a differential backup.
Restore Process
1. Create a tail-log backup of the active transaction log. (If you cannot do this because the log has been damaged, you must restore the whole database or restore to an earlier point in time.)
2. Restore each damaged file from the most recent file backup of that file.
3. Restore the most recent differential file backup, if any, for each restored file.
4. Restore transaction log backups in sequence, starting with the backup that covers the oldest of the restored files and ending with the tail-log backup created in step 1.
5. Recover the database.
You must restore the transaction log backups that were created after the file backups to bring the database back to a consistent state. The transaction log backups can be rolled forward quickly, because only the changes that apply to the restored files or filegroups are applied. Undamaged files are not copied and then rolled forward. However, the whole chain of log backups still needs to be processed.
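The core of this process might be sketched as follows. The database name, logical file name, and backup paths are assumed examples:

```sql
-- Sketch of a file-level restore; database, logical file, and backup names
-- are assumed examples. The damaged file is extracted from a full backup.
RESTORE DATABASE AdventureWorks
FILE = 'AdventureWorks_Data2'
FROM DISK = 'D:\SQLBackups\AW.bak'
WITH NORECOVERY;

-- Apply the chain of log backups, ending with the tail-log backup.
RESTORE LOG AdventureWorks
FROM DISK = 'D:\SQLBackups\AW_TailLog.bak'
WITH RECOVERY;
```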
Demonstration 4A: Restoring a File
Demonstration Steps
1. If Demonstration 2A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_07_PRJ\10775A_07_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer. Note: The setup script for this module is intended to throw an error regarding missing files; this is normal.
2. Open and execute the 41 – Demonstration 4A.sql script file from within Solution Explorer.
3. Follow the instructions contained within the comments of the script file.
Lab 7: Restoring SQL Server 2012 Databases
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_07_PRJ\10775A_07_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Note The setup script for this module is intended to throw an error regarding missing files. This is normal.
Lab Scenario
You have been provided with a series of backups taken from a database on another server that you need to restore to the Proseware, Inc. server with the database name MarketYields. The backup file includes a number of full, differential, and log backups. You need to identify the backups contained within the file, determine which backups need to be restored, and perform the restore operations. When you restore the database, you need to ensure that it is left as a warm standby, as additional log backups may be applied at a later date.
If you have time, you should test the standby operation.
Exercise 1: Determine a Restore Strategy
Scenario
You need to restore a database backup from another instance to the Proseware instance. You have been provided with a backup file containing multiple full, differential, and log backups. In this exercise, you need to determine which backups are contained within the file, which of them need to be restored, and in which order.
The main tasks for this exercise are as follows:
1. Review the backups contained within the backup file.
2. Determine how the restore should be performed.
Task 1: Review the backups contained within the backup file
• Use the HEADERONLY option of the RESTORE command to identify the backups that are contained within the file D:\MSSQLSERVER\MarketYields.bak.
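As a hint for the task above, the backup sets held in a backup file can be listed with a statement along the following lines (a sketch; the file path comes from the lab, and RESTORE HEADERONLY returns one row per backup in the file):

```sql
-- List every backup set contained in the backup file;
-- columns such as BackupType and BackupStartDate help
-- to determine the restore order.
RESTORE HEADERONLY
FROM DISK = 'D:\MSSQLSERVER\MarketYields.bak';
```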
Task 2: Determine how the restore should be performed
• Determine which backups need to be restored and in which order.
Results: After this exercise, you should have identified the backups that need to be restored.
Exercise 2: Restore the Database
Scenario
You have determined which backups need to be restored. You now need to restore the MarketYields database to the Proseware instance from the backups that you decided upon. You will leave the database in STANDBY mode.
The main task for this exercise is as follows:
1. Restore the database.
Task 1: Restore the database
• Using SSMS, restore the MarketYields database using the backups that you determined were needed in Exercise 1. Make sure you use the STANDBY option with a standby file name of L:\MKTG\Log_Standby.bak. You will need to move the mdf file to the folder D:\MKTG and the ldf file to the folder L:\MKTG.
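In T-SQL, the first step of such a restore could look like the following sketch. The logical file names (MarketYields and MarketYields_log) are assumptions for illustration; the actual names are reported by RESTORE FILELISTONLY:

```sql
-- Restore the full backup, relocating the data and log files and
-- leaving the database in STANDBY mode so that further log backups
-- can still be applied.
RESTORE DATABASE MarketYields
FROM DISK = 'D:\MSSQLSERVER\MarketYields.bak'
WITH FILE = 1,
     MOVE 'MarketYields' TO 'D:\MKTG\MarketYields.mdf',         -- assumed logical name
     MOVE 'MarketYields_log' TO 'L:\MKTG\MarketYields_log.ldf', -- assumed logical name
     STANDBY = 'L:\MKTG\Log_Standby.bak';
```

Any differential and log backups from the same file would then be restored with further RESTORE statements, each specifying the appropriate FILE number and the same STANDBY option.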
• In Object Explorer, refresh the list of databases and check the status of the MarketYields database on the Proseware instance.
Results: After this exercise, you should have restored the database in STANDBY mode.
Challenge Exercise 3: Using STANDBY Mode (Only if time permits)
Scenario
In this exercise, you will ensure that STANDBY mode works as expected. You will access the database and then restore another log file to make sure the database can continue to be restored.
The main tasks for this exercise are as follows:
1. Execute queries against the STANDBY database to ensure it is accessible.
2. Restore another log file, leaving the database in STANDBY mode.
Task 1: Execute queries against the STANDBY database to ensure it is accessible
• Open a query window against the MarketYields database on the Proseware instance.
• Select a count of the rows in the LogData table.
• Close the query window.
Task 2: Restore another log file, leaving the database in STANDBY mode
• Restore the log file D:\MSSQLSERVER\MarketYields_log.bak. Ensure you leave the database in STANDBY mode.
• In Object Explorer, refresh the list of databases and check the status of the MarketYields database on the Proseware instance.
Results: After this exercise, you should have tested the STANDBY capability.
Module Review and Takeaways
Review Questions
1. What are the three phases of the restore process?
2. What is always performed before a database starts up and goes ONLINE?
Best Practices
1. Don’t forget to back up the tail of the log before starting a restore sequence.
2. Use differential restores to speed up the restore process where they are available.
3. Use file-level restores to speed up restores when not all database files are corrupt.
4. Perform regular backups of the master, msdb, and model system databases.
5. Create a disaster recovery plan for your SQL Server and test restoring databases regularly.
Module 8
Importing and Exporting Data
Contents:
Lesson 1: Transferring Data To/From SQL Server 8-3
Lesson 2: Importing & Exporting Table Data 8-15
Lesson 3: Inserting Data in Bulk 8-20
Lab 8: Importing and Exporting Data 8-29
Module Overview
While a great deal of the data that resides in a Microsoft® SQL Server® system may be entered directly by users running application programs, there is also a need to move data between SQL Server and other locations.
SQL Server provides a set of tools that can be used to transfer data in and out of SQL Server. Some of these tools, such as the bcp utility and SQL Server Integration Services, are external to the database engine; other tools, such as the BULK INSERT statement and the OPENROWSET function, are implemented within the database engine. In this module, you will briefly explore each of these tools.
On occasion, large amounts of data need to be imported into SQL Server. While the default settings in SQL Server can be used while importing the data, higher performance can be achieved by exerting control over how constraints, triggers, and indexes are used during the import process.
Objectives
After completing this module, you will be able to:
• Transfer data to and from SQL Server.
• Import and export table data.
• Insert data in bulk and optimize the bulk insert process.
Lesson 1
Transferring Data To/From SQL Server
The first step in learning to transfer data in and out of SQL Server is to become familiar with the processes involved and with the tools that SQL Server provides to implement data transfer.
When large amounts of data need to be inserted into SQL Server tables, the default settings for constraints, triggers, and indexes are not likely to provide the best performance possible. You may achieve higher performance by controlling when the checks that are made by constraints are carried out and by controlling when the index pages for a table are updated.
Objectives
After completing this lesson, you will be able to:
• Explain core data transfer concepts.
• Describe the tools that SQL Server provides for data transfer.
• Improve the performance of data transfers.
• Disable and rebuild indexes.
• Disable and re-enable constraints.
Overview of Data Transfer
Key Points
Not all data can be entered row by row by database users. Often, data needs to be imported from external data sources such as other database servers or operating system files. Users also often request that data from database tables be exported to operating system files. In earlier modules, you have seen how collations can cause issues when misconfigured. Correcting the collation of a database also often requires exporting and re-importing the data from the database.
Data Transfer Steps
Although not all data transfer requirements are identical, there is a standard process that most data transfer tasks follow. The main steps of this process are:
• Extracting data from a given data source.
• Transforming the data in some way to make it suitable for the target system.
• Loading the data into the target system.
Together, these three steps are commonly referred to as an Extract, Transform, Load (ETL) process. Tools that implement these processes are commonly referred to as ETL tools.
Note: In some situations, an Extract, Load, Transform (ELT) process might be more appropriate. For example, you may decide to perform data transformations once the data has been loaded into the database engine, rather than before it is loaded.
Extracting Data
While there are other options, extracting data typically involves executing queries on a source system to retrieve the data, or opening and reading operating system files. Another option involves querying views provided by the source system.
During the extraction process, there are two common aims:
• Avoid excessive impact on the source system. For example, do not read entire tables of data when you only need to read selected rows. Also, do not continually re-read the same data, and avoid the execution of statements that block users of the source system in any way.
• Ensure consistency of the data extraction. For example, do not include one row from the source system more than once in the output of the extraction.
Transforming Data
The transformation phase of an ETL process will generally involve several steps, such as the following:
• Data might need to be cleansed. For example, you might need to remove erroneous data or provide default values for missing columns.
• Lookups might need to be performed. For example, the input data might include the name of a customer, but the database might need an ID for the customer.
• Data might need to be aggregated. For example, the input data might include every transaction that occurred on a given day, but the database might need only daily summary values.
• Data might need to be de-aggregated. This is often referred to as data allocation. For example, the input data might include quarterly budgets, but the database might need daily budgets.
In addition to these common operations, data might need to be restructured in some way. One common requirement is that the data might need to be pivoted or unpivoted. For example, the input data might reside in a table such as the following:

Object  Attribute  Value
1       FirstName  David
1       Age        64
1       Gender     M
1       LastName   Pelton
2       FirstName  Erin
2       Age        47
2       Gender     F
2       LastName   Hagens

The database might, however, require the same data in this format:

PersonID  FirstName  Age  Gender  LastName
1         David      64   M       Pelton
2         Erin       47   F       Hagens

This transformation is an example of pivoting data, where rows become columns or columns become rows.
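The pivot shown above can be sketched in T-SQL. The source table name dbo.PersonAttributes is an assumption for illustration; the column names come from the tables above:

```sql
-- Pivot one row per (Object, Attribute) pair into one row per person.
-- MAX() selects the single value present for each attribute.
SELECT [Object] AS PersonID,
       MAX(CASE WHEN Attribute = 'FirstName' THEN Value END) AS FirstName,
       MAX(CASE WHEN Attribute = 'Age'       THEN Value END) AS Age,
       MAX(CASE WHEN Attribute = 'Gender'    THEN Value END) AS Gender,
       MAX(CASE WHEN Attribute = 'LastName'  THEN Value END) AS LastName
FROM dbo.PersonAttributes
GROUP BY [Object];
```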
Loading Data
Once data is in an appropriate format, it can be loaded into the target system. Instead of performing row-by-row insert operations, special options for loading data in bulk might be used. In addition, temporary configuration changes may be made to improve the performance of the load operation.
Question: What other types of aggregation might need to be performed on data during the transformation phase?
Available Tools for Data Transfer
Key Points
SQL Server provides a set of tools for performing data transfer tasks. It is important to understand where to use each of the tools.
Bulk Copy Program (bcp)
The Bulk Copy Program (bcp) can be used to import large numbers of new rows from an operating system data file into a SQL Server table, or to export data from a SQL Server table to an operating system file. Although the bcp utility can be used with the queryout option, which specifies a T-SQL query whose results are to be exported, the normal use of bcp does not require T-SQL knowledge.
BULK INSERT
The BULK INSERT statement is a T-SQL command that is used to import data directly from an operating system data file into a database table. The BULK INSERT statement differs from bcp in a number of ways. First, the BULK INSERT statement is executed from within T-SQL, whereas the bcp utility is a command-line utility. Also, while the bcp utility can be used for both import and export, the BULK INSERT statement can only be used for data import.
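A minimal BULK INSERT statement might look like the following. The table name, file path, and delimiters are assumptions for illustration:

```sql
-- Import a comma-delimited file directly into an existing table,
-- skipping the header row of the file.
BULK INSERT Sales.Orders
FROM 'D:\ImportData\orders.csv'
WITH (FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n',
      FIRSTROW = 2);
```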
OPENROWSET (BULK)
OPENROWSET is a table-valued function that is used to connect to and retrieve data from OLE-DB data sources. Full details of how to connect to the data source need to be provided as parameters to the OPENROWSET function. OPENROWSET can be used to connect to other types of database engines.
SQL Server provides a special OLE-DB provider called BULK that can be used with the OPENROWSET function. The BULK provider allows the import of entire documents from the file system.
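For example, the BULK provider can load a whole file as a single value. The table and file names below are assumptions for illustration:

```sql
-- Read an entire document as a single binary value (SINGLE_BLOB)
-- and insert it into a varbinary(max) column of an existing table.
INSERT INTO dbo.Documents (DocumentBody)
SELECT doc.BulkColumn
FROM OPENROWSET(BULK 'D:\ImportData\contract.docx', SINGLE_BLOB) AS doc;
```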
Import/Export Wizard
SQL Server Integration Services (SSIS) is an ETL tool that is supplied with SQL Server. SSIS is capable of connecting to a wide variety of data sources and destinations and of performing complex transformations on data. SSIS provides many tasks and transformations out of the box and can also be extended by the use of custom .NET components and scripts. SQL Server also provides the Import/Export Wizard, which is a simple method of creating SSIS packages without the need to use the SSIS design tools.
XML Bulk Load
The XML Bulk Load provider can be used to import XML data as a binary stream within a T-SQL statement. The data can be inserted directly into a column in an existing row of a database table.
Question: When would you choose SSIS over bcp?
Improving the Performance of Data Transfers
Key Points
If constraints, indexes, and triggers are enabled on the tables that are the targets of data transfers, the data values need to be checked for every row that is imported. This constant checking can substantially slow down SQL Server data transfers.
Disabling Constraints, Indexes, and Triggers
Rather than checking each value that is imported, or updating each index for every row that is inserted, higher overall performance can often be achieved by disabling the checking or index updating until all the data is loaded, and then performing that work once at the end of the import process.
For example, consider a FOREIGN KEY constraint that is used to ensure that the relevant customer does in fact exist whenever a customer order is inserted into the database. While this reference could be checked for each customer order, consider that a customer might have thousands of customer orders. Instead of checking each value as it is inserted, the customer reference could be checked as a single lookup after the overall import process, to cover all customer orders that refer to that customer.
Only CHECK and FOREIGN KEY constraints can be disabled. The process for disabling and re-enabling constraints will be discussed later in this lesson.
Similar to the way that avoiding lookups for FOREIGN KEY constraints during data import can improve performance, avoiding constant updating of indexes can also improve performance. In many cases, rebuilding the indexes after the import process is complete will be much faster than updating the indexes as the rows are imported. The exception is when the table already holds many more rows than are being imported.
Triggers are special code modules that execute when data is modified. It is important to decide whether the processing that the triggers perform would be better performed in bulk after the import, rather than as each insert occurs.
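Triggers can be switched off and on around a bulk import with the DISABLE TRIGGER and ENABLE TRIGGER statements. The trigger and table names below are assumptions for illustration:

```sql
-- Suspend trigger execution during the import, then re-enable it.
DISABLE TRIGGER trg_OrderAudit ON Sales.Orders;
-- ... perform the bulk import here ...
ENABLE TRIGGER trg_OrderAudit ON Sales.Orders;
```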
Control the Locking Behavior
By default, SQL Server manages the granularity of the locks it acquires during the execution of commands. SQL Server starts with row-level locking and only tries to escalate when a significant number of rows are locked within a table. Managing large numbers of locks occupies resources that could otherwise be used to minimize the execution time of queries. As the data in tables that are the target of bulk-import operations is normally only accessed by the process that is importing the data, the advantage of row-level locking is often not present. For this reason, it may be advisable to lock the entire table by using a TABLOCK query hint during the import process.
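For example, a BULK INSERT statement (table name and path assumed for illustration) can request a table-level lock for the duration of the import:

```sql
-- Take a single table-level lock instead of many row locks;
-- TABLOCK is also one of the prerequisites for minimal logging.
BULK INSERT Sales.Orders
FROM 'D:\ImportData\orders.csv'
WITH (TABLOCK);
```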
Use Minimal Logging Whenever Possible
The operation of the transaction log was discussed in Module 5. Minimal logging is a special mode of operation that can provide substantial performance improvements for operations such as bulk imports. As well as making the operations faster, minimal logging helps avoid excessive log growth during large import operations.
Not all commands can use minimal logging. While not an exhaustive list, the items below indicate the types of restrictions that must be met for minimal logging to be applied:
• The table is not being replicated.
• Table locking is specified (using TABLOCK).
• If the table has no clustered index but has one or more nonclustered indexes, data pages are always minimally logged. How index pages are logged, however, depends on whether the table is empty:
  • If the table is empty, index pages are minimally logged.
  • If the table is non-empty, index pages are fully logged.
• If the table has a clustered index and is empty, both data and index pages are minimally logged.
• If the table has a clustered index and is non-empty, data pages and index pages are both fully logged regardless of the recovery model.
Note: Index types, including clustered and nonclustered indexes, are discussed in course 10776A: Developing Microsoft SQL Server 2012 Databases.
Question: What would the main problem with the transaction log be if full logging occurs during a bulk-import operation?
Disabling & Rebuilding Indexes
Key Points
Prior to SQL Server 2005, indexes needed to be dropped to prevent them from being updated as the data in the table was updated. The problem with dropping an index is that when you need to put the index back in place by recreating it, you need to know exactly how the index was configured.
Disabling an Index
Since SQL Server 2005, an option exists to disable an index. Rather than totally dropping the index details from the database, disabling an index leaves the metadata about the index in place and stops the index from being updated. Queries that are executed by users will not use disabled indexes.
The major advantage of disabling an index instead of dropping it is that the index can be put back into operation by a rebuild operation. When you rebuild an index, you do not need to know details of how it is configured. This makes it much easier to create administrative scripts that stop indexes from being updated while large import or update operations are taking place, and that put the indexes back into operation after those operations have completed.
Note: One special type of index, known as a clustered index, relates to how the table is structured, rather than being a separate index that speeds up the location of rows within the table. If a clustered index is disabled, the table becomes unusable until the index is rebuilt.
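The disable/rebuild pattern can be sketched as follows. The index and table names are assumptions for illustration:

```sql
-- Stop the nonclustered index from being maintained during the import.
ALTER INDEX IX_Orders_CustomerID ON Sales.Orders DISABLE;
-- ... perform the bulk import here ...
-- Rebuild the index; no knowledge of its original definition is needed.
ALTER INDEX IX_Orders_CustomerID ON Sales.Orders REBUILD;
```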
Question: What is the main advantage of disabling and enabling indexes, compared to dropping and recreating an index, during bulk imports?
Disabling & Enabling Constraints
Key Points
PRIMARY KEY constraints define the column or columns that uniquely identify each row in a table. UNIQUE constraints ensure that a column or columns do not contain duplicate values. SQL Server creates indexes to help it enforce these constraints.
Disabling PRIMARY KEY or UNIQUE Constraints
To disable a PRIMARY KEY or UNIQUE constraint, you need to disable the index that is associated with the constraint. This is typically only useful with nonclustered PRIMARY KEY constraints. When the constraint is re-enabled, the associated indexes are rebuilt automatically. If duplicate values are found during the rebuild, the re-enabling of the constraint will fail. For this reason, if you disable these constraints while importing data, you need to be sure that the data being imported will not violate the rules that the constraints enforce.
Note: If a table has a primary key enforced with a clustered index, disabling the index associated with the constraint would prevent access to any data in the table.
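Because the constraint is enforced by its index, disabling and re-enabling it uses ALTER INDEX. The constraint and table names below are assumptions for illustration:

```sql
-- Disable the nonclustered index that enforces the PRIMARY KEY constraint.
ALTER INDEX PK_Orders ON Sales.Orders DISABLE;
-- ... perform the bulk import here ...
-- Rebuilding the index re-enables the constraint; the rebuild fails
-- if duplicate key values were imported.
ALTER INDEX PK_Orders ON Sales.Orders REBUILD;
```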
FOREIGN KEY constraints are used to make sure that entities in one table that are referred to by entities in another table actually exist. For example, a supplier must exist before a purchase order can be entered for that supplier. FOREIGN KEY constraints use PRIMARY KEY or UNIQUE constraints while checking the references. If you disable the PRIMARY KEY or UNIQUE constraint that a FOREIGN KEY reference points to, the FOREIGN KEY constraint will also automatically be disabled. However, when you re-enable the PRIMARY KEY or UNIQUE constraint, FOREIGN KEY references that use these constraints will not automatically be re-enabled.
Disabling FOREIGN KEY and CHECK Constraints
CHECK constraints are used to limit the values that can be contained in a column, or the relationship between the values in multiple columns of a table.
Note: Constraints are described in course 10776A: Developing Microsoft SQL Server 2012 Databases.
Both FOREIGN KEY and CHECK constraints can be disabled and enabled using the CHECK and NOCHECK options of the ALTER TABLE statement. For example, consider the following code sample that disables a CHECK constraint named SalaryCap on a table called Person.Salary:

ALTER TABLE Person.Salary NOCHECK CONSTRAINT SalaryCap;

The following code is used to re-enable the constraint:

ALTER TABLE Person.Salary CHECK CONSTRAINT SalaryCap;

Question: Why do referencing FOREIGN KEY constraints get disabled when the referenced PRIMARY KEY or UNIQUE constraints get disabled?
Demonstration 1A: Disabling & Enabling Constraints
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ.ssmssln and click Open.
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
4. Open the 11 – Demonstration 1A.sql script file.
5. Follow the instructions contained within the comments of the script file.
Lesson 2
Importing & Exporting Table Data
SQL Server Integration Services (SSIS) is a powerful ETL tool that can be used in conjunction with SQL Server. SSIS is capable of performing complex transformations on data from one or many sources and loading that data into one or many destinations.
While SSIS is a very capable and complex tool, configuring SSIS for simple import and export processes has been made much easier because SQL Server also provides the Import/Export Wizard. This wizard presents simple configuration dialogs to users and creates an SSIS package based on the selections that the user has made.
Objectives
After completing this lesson, you will be able to:
• Describe SQL Server Integration Services.
• Use the SQL Server Import/Export Wizard.
Overview of SQL Server Integration Services
Key Points
SQL Server Integration Services (SSIS) is an ETL tool that is provided with SQL Server, in Standard and higher editions. SSIS allows the definition of complex data flows and transformations. The main purpose of SSIS is to create reusable and easily deployable packages that perform data transfers.
Packages
The result of building a set of tasks and transformations in SSIS is referred to as a package. Packages are also the unit of deployment for SSIS and contain a number of objects: Data Sources, Data Destinations, a Control Flow, and Data Flows.
Each package has a single Control Flow that contains a set of tasks that need to be executed. The workflow that needs to be followed when executing the tasks is defined by a set of precedence constraints. If data needs to be moved within an SSIS package, a Data Flow task is added to the set of tasks in the Control Flow. Each Data Flow task is individually configured to specify where data should come from, how it should be transformed, and where it should be sent.
One goal of SSIS is to perform all data transformation steps of the ETL process in a single operation without the need to stage data before transforming it.
Packages are built using SQL Server Data Tools (SSDT). The SQL Server Import/Export Wizard also creates SSIS packages without the need for users to work with SSDT.
Question: When would it be useful to use SSIS instead of other data transfer options?
Demonstration 2A: Working with SSIS
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, open the 10775A_08_PRJ SQL Server script project within SQL Server Management Studio.
• Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 21 – Demonstration 2A.sql script file.
3. Follow the instructions contained within the comments of the script file.
SQL Server Import/Export Wizard
Key Points
The SQL Server Import and Export Wizard can copy data to and from any data source for which a managed .NET Framework data provider or a native OLE-DB provider is available.
The wizard has some limitations, but can be used with SQL Server, flat files, Microsoft Office Access®, Microsoft Office Excel®, and a wide variety of other database engines.
Although it leverages SQL Server Integration Services, the SQL Server Import and Export Wizard provides minimal transformation capabilities. Except for setting the name, the data type, and the data type properties of columns in new destination tables and files, the SQL Server Import and Export Wizard supports no column-level transformations.
Question: If additional transformations are needed beyond what the Import/Export Wizard provides, how could these be created?
Demonstration 2B: Using the Import/Export Wizard
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 22 – Demonstration 2B.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lesson 3
Inserting Data in Bulk
There are three common options for inserting data in bulk with SQL Server. The bcp utility can be used from the command line to perform both import and export operations. BULK INSERT is a T-SQL command that can be used to import data in bulk, and OPENROWSET is a table-valued function that retrieves data from a variety of sources and returns it as a table that can be queried. OPENROWSET can be used in conjunction with an INSERT…SELECT statement to insert data in bulk.
Objectives
After completing this lesson, you will be able to:
• Use the bcp utility.
• Use the BULK INSERT statement.
• Use the OPENROWSET function.
bcp Utility
Key Points
10775A: Administering Microsoft SQL Server 2012 Databases 8-21
The bcp utility is used to bulk copy data between an instance of Microsoft SQL Server 2012 and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of T-SQL.

Format Files
To import data into a table, you must either use a format file created for that table or provide that information to bcp interactively. Two types of format files are supported. Current versions of SQL Server use XML-based format files but are able to work with the older text-based format files.
You can use the bcp utility to create a format file that bcp itself can later consume. In the first example on the slide, the bcp utility is being used to create a format file based on the column layout of the Adv.Sales.Currency table.
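The slide example itself is not reproduced in this text. A command of the following shape would match the description above (connection details omitted; the table and file names follow the example described in the text):

```
bcp Adv.Sales.Currency format nul -T -c -x -f Cur.xml
```

The `nul` placeholder is used because the format option produces no data file; only the format file named by -f is written.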
The main parameters that have been specified have the following meanings:

Parameter  Description
format     Create a format file.
-T         Integrated security is used to connect to the server.
-c         Character data type is used for the export. The character data type provides the highest compatibility between different types of systems. The alternative option -n would use the SQL Server native format, which is more compact but can only be used for import/export between SQL Server systems.
-f         The name of the format file.
-x         The format file should be created as an XML file.

In this example, bcp would connect to the default instance on the local server. If it is necessary to connect to another instance or another server, the -S parameter can be used to supply a server name or a server name and an instance name.
Exporting Data
In the second example on the slide, bcp is being used to export the current contents of the Adv.Sales.Currency table to the file Cur.dat. The one parameter that differs from the previous example is the "out" parameter, which is used to specify the output file name.
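Based on that description, the export command would look something like this (a sketch; it assumes the same local default instance and integrated security as the first example):

```
bcp Adv.Sales.Currency out Cur.dat -T -c
```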
Importing Data
In the third example on the slide, bcp is being used to import the contents of the file Cur.dat into the tempdb.Sales.Currency2 table. Two further uses of parameters are of interest in this example. The "in" parameter is being used to name the file that should be read. The "-f" parameter is being used to specify the format file Cur.xml so that SQL Server understands the format of the input file.
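A command of the following shape would match that description (again assuming the local default instance and integrated security):

```
bcp tempdb.Sales.Currency2 in Cur.dat -T -f Cur.xml
```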
Question: How could you improve the import speed of a bcp operation?
Demonstration 3A: Working with bcp
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 31 – Demonstration 3A.sql script file.
3. Follow the instructions contained within the comments of the script file.
BULK INSERT Statement
Key Points
BULK INSERT loads data from a data file into a table. This functionality is similar to that provided by the "in" option of the bcp command; however, the data file is read by the SQL Server process, not by an external utility. The BULK INSERT statement executes within a T-SQL batch. Because the data files are opened by a SQL Server process, data is not copied between the client process and SQL Server processes. By comparison, the bcp utility runs in a separate process, which produces a higher load on the server when run on the same system.

Constraints and Triggers
The BULK INSERT statement offers the CHECK_CONSTRAINTS and FIRE_TRIGGERS options that can be used to tell SQL Server to check constraints and fire triggers. Unlike the bcp utility, the default operation of the BULK INSERT statement is to not check CHECK and FOREIGN KEY constraints, or to fire triggers on the target table during import operations.
Also unlike bcp, the BULK INSERT statement can be executed from within a user-defined transaction, which gives the ability to group BULK INSERT with other operations in a single transaction. Care must be taken, however, to ensure that the size of the data batches that are imported within a single transaction is not excessive, or significant log file growth might occur, even when the database is in the simple recovery model.
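As an illustration of these options, a statement of the following shape (the table and file paths are hypothetical) imports a file in batches of 10,000 rows while still checking constraints and firing triggers:

```sql
BULK INSERT Sales.Currency2
FROM 'D:\Data\Cur.dat'
WITH (
    FORMATFILE = 'D:\Data\Cur.xml',  -- same format file type as used with bcp
    BATCHSIZE = 10000,               -- commit every 10,000 rows
    CHECK_CONSTRAINTS,               -- not checked by default
    FIRE_TRIGGERS                    -- not fired by default
);
```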
Question: How does the BULK INSERT statement differ from bcp?
Demonstration 3B: Working with BULK INSERT
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 32 – Demonstration 3B.sql script file.
3. Follow the instructions contained within the comments of the script file.
Question: Why does the first message show 199 and messages after that show 200?
OPENROWSET Function
Key Points
OPENROWSET can be used to access data using an OLE-DB provider. For OLE-DB providers to be usable in OPENROWSET, the system configuration option "Ad Hoc Distributed Queries" must be enabled and a registry entry for the OLE-DB provider called DisallowAdhocAccess must be explicitly set to 0. This registry key is typically located here:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Providers\MSDASQL
When these options are not set, the default behavior does not allow the ad hoc access that is required by the OPENROWSET function when working with external OLE-DB providers.
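Enabling the configuration option is done with sp_configure; because it is an advanced option, "show advanced options" must be enabled first:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
```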
BULK Provider
Since SQL Server 2005, a special OLE-DB provider called BULK has been provided which allows the specification of operating system files that will be returned. The same format files that are used with bcp and BULK INSERT can be used with this provider.
In addition to the import of typical data rows, the BULK provider offers three special options that allow it to read an entire file's contents into a single column of a table. These special options are:

Option        Description
SINGLE_CLOB   Reads an entire single-byte character-based file as a single value of data type varchar(max).
SINGLE_NCLOB  Reads an entire double-byte character-based file as a single value of data type nvarchar(max).
SINGLE_BLOB   Reads an entire binary file as a single value of data type varbinary(max).

For example, consider the following code sample:

INSERT INTO Sales.Documents (FileName, FileType, Document)
SELECT 'JanuarySales.txt' AS FileName,
       '.txt' AS FileType,
       *
FROM OPENROWSET(BULK N'K:\JanuarySales.txt', SINGLE_BLOB) AS Document;

In this code sample, the file K:\JanuarySales.txt is being inserted into the Document column of the Sales.Documents table, along with its file name and file type.
Two key advantages of OPENROWSET compared to bcp are that it can be used in a query with a WHERE clause (to filter the rows that are loaded), and that it can be used in a SELECT statement that is not necessarily associated with an INSERT statement.
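For example, a query of the following shape (the file, format file, and column names are hypothetical) returns only the filtered rows without inserting anything:

```sql
SELECT c.CurrencyCode, c.CurrencyName
FROM OPENROWSET(BULK 'D:\Data\Cur.dat',
                FORMATFILE = 'D:\Data\Cur.xml') AS c
WHERE c.CurrencyCode = N'USD';  -- filter applied before any insert
```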
Question: When will it make sense to use OPENROWSET instead of bcp or BULK INSERT?
Demonstration 3C: Working with OPENROWSET
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 33 – Demonstration 3C.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lab 8: Importing and Exporting Data
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
Proseware regularly receives updates of currencies and exchange rates from an external provider. One of these files is provided as an Excel spreadsheet; the other is provided as a comma-delimited text file. You need to import both of these files into tables that will be used by the Direct Marketing team within Proseware.
Periodically the Marketing team requires a list of prospects that have not been contacted within the last month. You need to create and test a package that will extract this information to a file for them.
You are concerned about the import performance for the exchange rate file and you are considering disabling constraints and indexes on the exchange rate table during the import process. If you have time, you will test the difference in import performance.

Supporting Documentation
Exercise 1

Item            Description
Output table    DirectMarketing.Currency
Output columns  CurrencyID int not null
                CurrencyCode nvarchar(3) not null
                CurrencyName nvarchar(50) not null
Input file      D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ\Currency.xls

Exercise 1: Import the Excel Spreadsheet
Scenario
You need to load a file of currency codes and names from an Excel spreadsheet. In this exercise, you will use the Import Wizard to perform the data load.
The main task for this exercise is as follows:
1. Import the data using the Import Wizard.
Task 1: Import the data using the Import Wizard
• Import the spreadsheet Currency.xls into a table in the MarketDev database called DirectMarketing.Currency. If the table already exists, delete the table first. Refer to the Supporting Documentation for the file location and output table format.
• Query the DirectMarketing.Currency table to see the data that was loaded.
Results: After this exercise, you should have imported the DirectMarketing.Currency table.
Exercise 2: Import the CSV File
Scenario
You have also been provided with a comma-delimited file of exchange rates. You need to import these exchange rates into the existing DirectMarketing.ExchangeRate table. The table should be truncated before the data is loaded.
The main task for this exercise is as follows:
1. Import the CSV file.
Note: Make sure that you record how long the command takes to execute. Warning: it will take several minutes to complete. Use this time to prepare for the next exercise.
Task 1: Import the CSV file
• Truncate the DirectMarketing.ExchangeRate table.
• Review the ExchangeRates.xml format file in Solution Explorer.
• Using the BULK INSERT T-SQL command, import the ExchangeRates.csv file into the table DirectMarketing.ExchangeRate. The ExchangeRates.csv file can be found in the following location: D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ\ExchangeRates.csv.
• Use ExchangeRates.xml as the format file and a batch size of 10,000, and use the option to skip the first row as it contains headings.
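One possible shape for the statement (the location of the format file within the project folder is an assumption; adjust it to where the project stores ExchangeRates.xml):

```sql
TRUNCATE TABLE DirectMarketing.ExchangeRate;

BULK INSERT DirectMarketing.ExchangeRate
FROM 'D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ\ExchangeRates.csv'
WITH (
    FORMATFILE = 'D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ\ExchangeRates.xml',
    BATCHSIZE = 10000,
    FIRSTROW = 2   -- skip the heading row
);
```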
Results: After this exercise, you should have imported the ExchangeRate table using the T-SQL BULK INSERT statement.
Exercise 3: Create and Test an Extraction Package
Scenario
Periodically the Marketing team requires a list of prospects that have not been contacted within the last month. You need to create and test a package that will extract this information to a file for them.
The main task for this exercise is as follows:
1. Create and test an extraction package.
Task 1: Create and test an extraction package
• Using the Export Wizard, export the Marketing.Prospect table to a text file in the following location: D:\MKTG\ProspectsToContact.csv. Column names should be included in the first row. The extraction query should be as shown in the snippet below:

SELECT ProspectID, FirstName, LastName, CellPhoneNumber,
       WorkPhoneNumber, EmailAddress, LatestContact
FROM Marketing.Prospect
WHERE LatestContact < DATEADD(MONTH, -1, SYSDATETIME())
   OR LatestContact IS NULL
ORDER BY ProspectID;
Note: Save the SSIS package that is created by the Export Wizard to SQL Server in the package root location.
Results: After this exercise, you should have created and tested an extraction package.

Challenge Exercise 4: Compare Loading Performance (Only if time permits)
Scenario
You are concerned about the import performance for the exchange rate file and you are considering disabling constraints and indexes on the exchange rate table during the import process. If you have time, you will test the difference in import performance.
The main task for this exercise is as follows:
1. Re-execute the load with indexes disabled.
Task 1: Re-execute the load with indexes disabled
• Alter your script from Exercise 2 to disable any non-clustered indexes on the DirectMarketing.ExchangeRate table before loading the data and to rebuild the indexes after the load completes.
• Execute your modified script and compare the duration to the value recorded in Exercise 2.
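The disable/rebuild pattern can be sketched as follows; the index name shown is hypothetical, so substitute the actual non-clustered index names on the table:

```sql
ALTER INDEX IX_ExchangeRate_Currency ON DirectMarketing.ExchangeRate DISABLE;

-- ... run the BULK INSERT from Exercise 2 here ...

ALTER INDEX IX_ExchangeRate_Currency ON DirectMarketing.ExchangeRate REBUILD;
```

Note that disabling a clustered index would make the table inaccessible, which is why only the non-clustered indexes are disabled here.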
Results: After this exercise, you should have compared the load performance with indexes disabled.
Module Review and Takeaways
Review Questions
1. When would you use SSIS instead of other data transfer utilities?
2. Why are minimally logged operations faster than fully logged operations?
Best Practices related to a particular technology area in this module
1. Choose the right tool for bulk imports.
2. Use SSIS for complex transformations.
3. Use bcp or BULK INSERT for fast imports and exports.
4. Use OPENROWSET when data needs to be filtered before it gets inserted.
5. Try to achieve minimal logging to speed up data import.
Module 9
Authenticating and Authorizing Users
Contents:
Lesson 1: Authenticating Connections to SQL Server 9-3
Lesson 2: Authorizing Logins to Access Databases 9-13
Lesson 3: Authorization Across Servers 9-22
Lab 9: Authenticating and Authorizing Users 9-30
9-2 Authenticating and Authorizing Users

Module Overview
Securing Microsoft® SQL Server® can be viewed as a series of steps involving four areas: the platform, authentication, objects (including data), and applications that access the system. Well-planned security is important for protecting your organization's sensitive information.
In this module you will be introduced to securing SQL Server. You will learn how SQL Server authenticates users and how it authorizes those users to access databases. Not all resources reside on a single server. In this module, you will also see how distributed authorization is configured; that is, where more than one SQL Server system is involved.
Objectives
After completing this module, you will be able to:
• Describe how SQL Server authenticates connections.
• Describe how logins are authorized to access databases.
• Explain the requirements for authorization across servers.
Lesson 1
Authenticating Connections to SQL Server
The first layer of security within SQL Server is authentication of users. Before any other security settings can be examined, SQL Server needs to verify the identity of the user.
Authentication is the process of identifying users and of verifying their identity. In this lesson, you will learn about the options available for authenticating users, about the available types of logins to the server, and about how policies are managed in relation to SQL Server logins.
Objectives
After completing this lesson, you will be able to:
• Describe SQL Server security.
• Implement SQL Server authentication options.
• Provide access to SQL Server for Windows® users and groups.
• Provide access to SQL Server for other users.
Overview of SQL Server Security
Key Points
A user can gain permission to connect to SQL Server in one of four ways:
• They can be a user that SQL Server manages directly. These users become known as SQL Server logins.
• They can be a user that Windows has authenticated and that SQL Server has been configured to allow to connect. These users become known as Windows logins.
• They can be a Windows user that is a member of a Windows group whose members SQL Server has been configured to allow to connect. These users also become known as Windows logins.
• They can be a database user that is assigned a password directly, where the user is not associated with a login.
Note: It is also possible for logins to be created from certificates and keys, but the creation of these is an advanced topic outside the scope of this course.

Logins vs. Database Users
The distinction between logins and database users is important. The term "login" is applied to a principal (or process) that has been granted access to SQL Server via one of the first three methods mentioned above.
Note: Having access to the server does not indicate (in itself) that a login has any access to user databases on the server.
Logins can be granted permission to access one or more databases. A mapping can exist between a login (either a Windows login or a SQL Server login) and a "database user" in a particular database. Database users often have the same name as the logins that they are mapped to, but the names can be different. A login can even be mapped to a different user name in each database.
Note: Keeping login and database user names the same is considered a best practice.

Permissions
Even once a login is granted access to a database by the creation of a database user, the database user will need to be granted permissions within the database before they can access securable objects such as tables, views, functions, and stored procedures.
Note: Some logins have permissions across all databases because they are added to roles at the server level, such as the sysadmin role, but these are exceptions to the general situation.
There are two ways that database users can be granted permission to access securable objects. The database users can be granted permissions on the objects directly. Alternatively, roles can be created within the database. Roles are granted permissions on the securable objects and the database users are added as members of the roles. Database users inherit all permissions that are associated with any roles that they are members of, along with any permissions that have been directly assigned to them.
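As a sketch of the role-based approach (the role, table, and user names are illustrative):

```sql
CREATE ROLE SalesReaders;                        -- create a database role
GRANT SELECT ON Sales.Currency TO SalesReaders;  -- grant permission to the role
ALTER ROLE SalesReaders ADD MEMBER James;        -- the user inherits the permission
```

The ALTER ROLE … ADD MEMBER syntax was introduced in SQL Server 2012; earlier versions used the sp_addrolemember procedure.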
Question: Apart from Windows users, what other types of users might want to connect to SQL Server?
SQL Server Authentication Options
Key Points
Authentication is the process of verifying that an identity is valid. If the identity is a user, is the user who they claim to be? There are two basic ways this can occur.
• SQL Server trusts the Windows operating system to verify the identity of a Windows login. Windows might use different methods such as password checking, biometric checks (for example, fingerprint scanning), or certificates to validate a user's identity. It might even use a combination of such methods. In SQL Server, Windows logins are often referred to as "trusted logins".
• SQL Server can directly verify the identity of a SQL Server login by checking that they know the password associated with that login.
Question: If you call your bank by phone, how do they verify your identity before speaking to you about your account details?

Server Configuration
SQL Server can be configured in one of two modes:
• Windows Authentication mode.
• SQL Server and Windows Authentication mode.
Windows Authentication mode was formerly known as Integrated Mode. In Windows Authentication mode, only users that the Windows operating system has authenticated are permitted to connect to the server.
In SQL Server and Windows Authentication mode, both users that have been authenticated by the Windows operating system and users that SQL Server has directly authenticated are permitted to connect to the server. This mode is often called Mixed Mode.
The choice between these two modes is made at the SQL Server instance level during the installation of SQL Server and can easily be changed. The instance needs to be restarted after changing the configuration.
Note: The configuration can be made using the GUI tooling in SSMS, but it is in fact only a single registry key that is being changed. This registry key could be configured via a group policy within Windows.
If SQL Server and Windows Authentication mode is enabled, a SQL Server login called "sa" is then active. It is important to set an appropriate (and complex) password for the "sa" login and record it somewhere secure.
When SQL Server is installed in Windows Authentication mode only, the "sa" account is disabled by default. If SQL Server is installed in Mixed Mode, then "sa" is enabled by default. Changing the server's authentication mode does not change the enabled/disabled state of the "sa" login.
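The current mode can be inspected from T-SQL via the SERVERPROPERTY function; a return value of 1 indicates Windows Authentication mode only, and 0 indicates Mixed Mode:

```sql
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- Disable the sa login explicitly if it is not required:
ALTER LOGIN sa DISABLE;
```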
Protocols for Authentication
Windows authentication is typically performed via the Kerberos protocol. The Kerberos protocol is supported with SQL Server over the TCP/IP, Named Pipes, and Shared Memory network protocols. (Support for Kerberos over Named Pipes and Shared Memory was introduced in SQL Server 2008; SQL Server 2005 supported Kerberos on the TCP/IP network protocol only.)
Managing Windows Logins
Key Points
Logins can be created directly using T-SQL code or via the GUI provided in SSMS. While the GUI option can be used, creating logins is a common operation, and if many logins need to be created, you will find it much faster, more repeatable, and more accurate to use a script.

CREATE LOGIN
Logins can be created in two ways. To create a login in SSMS, expand the Security node at the server instance level and then right-click Logins to choose New Login.
Alternatively, logins can be created using the CREATE LOGIN statement as shown in the slide example.
Note: Windows user and group names must be enclosed within square brackets as shown in the slide example, primarily because they contain a backslash character.
Windows logins can be created for individual users or for Windows groups. The first example in the slide shows the creation of a login for an individual user. Note that a default database and a default language are assigned.
When a default database is not assigned directly, the master database will be assigned by SQL Server. When a default language is not assigned directly, the default language of the server instance will be assigned by SQL Server.

Windows Groups
The second example in the slide is for a Windows group. Members of the group AdventureWorks\Salespeople will be permitted to connect to the server without the need for individual logins.
Note: Windows users and groups can both refer to local or domain users and groups.
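The slide examples are not reproduced in this text; statements of the following shape (the principal names and defaults are illustrative) match the descriptions above:

```sql
-- A login for an individual Windows user, with explicit defaults
CREATE LOGIN [AdventureWorks\James]
FROM WINDOWS
WITH DEFAULT_DATABASE = MarketDev, DEFAULT_LANGUAGE = us_english;

-- A login for a Windows group
CREATE LOGIN [AdventureWorks\Salespeople] FROM WINDOWS;
```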
Removing Logins
Logins are removed with the DROP LOGIN statement. Note that you cannot drop a login while the user is currently logged in. Should you need to do this, you will need to locate their session ID in the SSMS Activity Monitor (by viewing the list of processes) and kill the session first. Avoid deleting logins where the database users associated with the logins own objects within databases. (Object ownership is discussed in Module 11.)
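For example (the login name is illustrative, and the session ID would come from Activity Monitor):

```sql
-- Terminate an active session for the login first, if necessary
KILL 52;

DROP LOGIN [AdventureWorks\James];
```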
Question: Why would you create logins based on groups in preference to logins based on users?
Managing SQL Server Logins and Policies
Key Points
SQL Server logins are created for individual identities and are created using the same tools as Windows logins.

Security of SQL Server Logins
For many years, the use of SQL Server logins was considered a poor security practice. There were several reasons for this, including:
• Unencrypted authentication.
• Lack of account policy.
The concern with encryption was that during the authentication phase, the traffic on the network was not encrypted. This meant that an attacker that "sniffed" network packets would have been able to detect enough information to access the system via the SQL Server login.
The concern with account policy was that even though the SQL Server system might have been part of a domain that implemented a detailed account policy, the policy did not apply to SQL Server logins. Both of these issues were addressed in SQL Server 2005.

Encrypted Authentication
The upgraded SQL Server Native Access Client (SNAC) that was provided with SQL Server 2005 was enhanced to provide encrypted authentication. If SQL Server did not have a Secure Sockets Layer (SSL) certificate installed by an administrator, SQL Server would generate and self-sign a certificate to be used for encrypting authentication traffic.
Note: Encrypted authentication only applied to clients running the SQL Server 2005 version of SNAC or later. If an earlier client that did not understand encrypted authentication tried to connect, by default SQL Server would allow this. There is a configurable property for each supported protocol in SQL Server Configuration Manager that can be used to disallow unencrypted authentication from down-level clients if this is a concern.

Account Policy
The implementation of account policy is based on the ability to read policy details from the operating system. Windows Server 2003 and later operating systems introduced an API that allowed applications to read details of account policy.
SQL Server is supported on some operating systems (such as Windows XP) that do not support this API. For those operating systems, account policy is replaced by a basic set of password complexity rules.
The full application of account policy is not always desirable. For example, some applications use fixed logins to connect to the server. Often, these applications do not support regular changing of login passwords. In these cases, it is common to disable policy checking for those logins.
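A SQL Server login with policy checking disabled could be created as follows (the login name and password are illustrative):

```sql
CREATE LOGIN MarketingApp
WITH PASSWORD = 'Pa$$w0rd!2012',
     CHECK_POLICY = OFF,       -- do not apply the Windows account policy
     CHECK_EXPIRATION = OFF;   -- do not expire the password
```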
Password Changing and Login Expiry<br />
Passwords can be reset using the GUI in SSMS or via the ALTER LOGIN statement. If a login is not being<br />
used for a period of time, it can be disabled and later re-enabled. If there is any chance that a login will<br />
be needed again in the future, it is better to disable it than to drop it. Disabling a login<br />
is achieved by executing the ALTER LOGIN statement as shown in the code below:<br />
ALTER LOGIN James DISABLE;<br />
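The companion operations mentioned above can be sketched the same way (the login name and password are illustrative):<br />

```sql
-- Reset the password for a login
ALTER LOGIN James WITH PASSWORD = 'N3wPa$$w0rd';

-- Re-enable a login that was previously disabled
ALTER LOGIN James ENABLE;
```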
Question: Can you suggest a type of account policy that Windows provides?
Demonstration 1A: Authenticating Logons and Logon Tokens<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_09_PRJ\10775A_09_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lesson 2<br />
Authorizing Logins to Access <strong>Database</strong>s<br />
Once the identity of a login has been verified, it is necessary to provide that login with access to the<br />
databases that the login needs to work with. This access is provided by granting the login access to a<br />
database and then by providing the login with sufficient permissions within the database to perform the<br />
necessary work.<br />
In this lesson, you will see how to grant access to a database to a login and see details of special forms of<br />
access to databases.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe the <strong>SQL</strong> Server authorization process.<br />
• Grant access to databases.<br />
• Manage dbo and guest access.<br />
• Configure users with passwords for partially contained databases.
Authorization Overview<br />
Key Points<br />
A very common error when discussing security is to confuse the concepts of authentication and<br />
authorization. You can think of authentication as proving who you are and you can think of authorization<br />
as determining what you are allowed to do.<br />
Formal Terminology<br />
It is important to be familiar with common security-related formal terminology as shown in the following<br />
table:<br />
Term Description<br />
Principal An entity that can request access to a resource<br />
Securable A resource that can be secured or an object that you can control access to<br />
Authentication Ensuring that the principal is who they say they are<br />
Authorization Providing controlled access to a securable, for a principal. That is,<br />
determining the permissions that a principal has on a securable<br />
Role A container for principals that can be used to assign permissions indirectly<br />
A role is a name that is given to a group of principals. Roles are created to make it easier to grant<br />
permissions to many principals that need similar levels of access. Roles in <strong>SQL</strong> Server are similar to groups<br />
within the Windows operating system.<br />
Permissions are controlled by the GRANT, REVOKE, and DENY commands and will be discussed in<br />
Module 11.<br />
Question: Would you imagine that a login is a principal or a securable?
Granting Access to <strong>Database</strong>s<br />
Key Points<br />
Logins are granted access to databases through the creation of database users. A database user is a<br />
principal within a database that is mapped to a login at the server.<br />
<strong>Database</strong> users can be created through the SSMS GUI or via T-<strong>SQL</strong> commands. To create a new database<br />
user via the GUI, expand the relevant database, expand the Security node, right-click the Users node and<br />
choose the option for New User.<br />
Note There are special types of database users that are created directly from certificates<br />
and are not associated with logins. The creation of these is an advanced topic beyond the<br />
scope of this course.<br />
CREATE USER / DROP USER<br />
Once a login exists, it can be linked to a new database user with the CREATE USER statement of T-<strong>SQL</strong>.<br />
The DROP USER statement is used to remove these users.<br />
Consider the examples shown on the slide:<br />
• The first example creates a database user named SecureUser for an existing <strong>SQL</strong> Server login also<br />
named SecureUser.<br />
• The second example creates a database user named Student for a Windows login named<br />
AdventureWorks\Student.
• The third example creates a database user HRApp for a <strong>SQL</strong> Server login named HRUser. Note that<br />
the database user has been given a different name than the name of the login.<br />
Note The names of Windows logins (either users or groups) must be enclosed in square<br />
brackets.
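The three examples described above (which appear on the slide rather than in the text) can be sketched as:<br />

```sql
-- A database user for a SQL Server login of the same name
CREATE USER SecureUser FOR LOGIN SecureUser;

-- A database user for a Windows login (note the square brackets)
CREATE USER Student FOR LOGIN [AdventureWorks\Student];

-- A database user whose name differs from the login name
CREATE USER HRApp FOR LOGIN HRUser;

-- Removing a database user
DROP USER HRApp;
```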
Managing dbo and guest Access<br />
Key Points<br />
Each <strong>SQL</strong> Server database includes two special database users: dbo and guest.<br />
dbo<br />
dbo is a special user that has implied permissions to perform all activities in the database. Any member of<br />
the sysadmin fixed server role (including the sa login) who uses a database is mapped to the special<br />
database user called dbo inside each database. The dbo database user cannot be dropped and is always<br />
present in every database.<br />
<strong>Database</strong> Ownership<br />
Like other objects in <strong>SQL</strong> Server, databases also have owners. The owner of a database is also mapped to<br />
the dbo user. The database owner can be modified using the ALTER AUTHORIZATION statement, as<br />
shown in the following code:<br />
ALTER AUTHORIZATION ON DATABASE::MarketDev<br />
TO [ADVENTUREWORKS\Administrator];<br />
Any object that is created by any member of the sysadmin fixed server role will also automatically have<br />
dbo as its owner. Owners of objects have full access to the objects and do not require explicit permissions<br />
before they can perform operations on the objects.
guest<br />
The guest user account allows logins that are not mapped to a database user in a particular database to<br />
gain access to that database. Login accounts assume the identity of the guest user when the following<br />
conditions are met:<br />
• The login has access to <strong>SQL</strong> Server but does not have access to the database through its own<br />
database user mapping.<br />
• The guest account has been enabled.<br />
The guest account can be added to a database to allow anyone with a valid login to access the database.<br />
The guest user is automatically a member of the public role. (Roles are discussed in the next<br />
module.)<br />
A guest user works as follows:<br />
• <strong>SQL</strong> Server checks to see whether the login is mapped to a database user in the database that<br />
the login is trying to access. If so, <strong>SQL</strong> Server grants the login access to the database as the database<br />
user.<br />
• <strong>SQL</strong> Server checks to see whether a guest database user is enabled. If so, the login is granted access<br />
to the database as guest. If the guest account does not exist or is not enabled, <strong>SQL</strong> Server denies<br />
access to the database.<br />
The guest user cannot be dropped but you can prevent it from accessing a database by executing the<br />
following command:<br />
REVOKE CONNECT FROM guest;<br />
The guest account can be enabled by executing the following command:<br />
GRANT CONNECT TO guest;<br />
Note The guest user is used to provide access to the master, msdb, and tempdb<br />
databases. You should not attempt to revoke guest access to these databases.<br />
Question: What is the guest user useful for?
Demonstration 2A: Authorizing Logins and User Tokens<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_09_PRJ\10775A_09_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
• Open and execute the 11 – Demonstration 1A.sql script file from within Solution Explorer.<br />
2. Open the 21 – Demonstration 2A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Creating Users for a Specific <strong>Database</strong><br />
Key Points<br />
Before <strong>SQL</strong> Server <strong>2012</strong>, for a user to connect to a database, a login first needed to be created for them in<br />
the master database on the server.<br />
Users can now be created within a database without any entries being made in the master database for<br />
those users. A database must be partially contained before it can contain users without logins.<br />
Containment is a property of a database.<br />
There are three types of users that can be authenticated by the database rather than by the server:<br />
• A Windows user can be authenticated at the database level based upon their Windows user<br />
credentials.<br />
• Members of a Windows group can be authenticated at the database level.<br />
• <strong>SQL</strong> Server users can be assigned passwords and authenticated at the database level, without the<br />
need for a <strong>SQL</strong> Server login.<br />
Users that are authenticated by the database cannot access other databases except as guest users.
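As a sketch, creating users authenticated at the database level might look like the following (the database name, user names, and password are illustrative; the instance must also have the 'contained database authentication' server option enabled):<br />

```sql
-- Make the database partially contained (assumes the server option
-- 'contained database authentication' has been enabled via sp_configure)
ALTER DATABASE MarketDev SET CONTAINMENT = PARTIAL;
GO
USE MarketDev;
GO
-- A SQL user authenticated by the database itself; no server login required
CREATE USER AppUser WITH PASSWORD = 'Pa$$w0rd';

-- A Windows user authenticated at the database level
CREATE USER [ADVENTUREWORKS\Student];
```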
Demonstration 2B: Configuring Users with Passwords<br />
Demonstration Steps<br />
1. If Demonstration 2A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_09_PRJ\10775A_09_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
• Open and execute the 21 – Demonstration 2A.sql script file from within Solution Explorer.<br />
2. Open the 22 – Demonstration 2B.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lesson 3<br />
Authorization Across Servers<br />
Not all resources reside within a single server. It is common to need to access resources on other servers.<br />
There are two common issues that arise in relation to working across servers. One is referred to as the<br />
"double-hop" problem and it relates to the difference between impersonation and delegation. In this<br />
lesson, these concepts will be explained. The second common problem is referred to as the "mismatched<br />
SID" problem, which occurs when restoring or attaching a database from another server that was using <strong>SQL</strong><br />
Server logins. You will also see approaches for dealing with this issue in this lesson.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe the typical "Double-Hop" problem.<br />
• Explain the difference between impersonation and delegation.<br />
• Work with security IDs.
Typical "Double-Hop" Problem<br />
Key Points<br />
There is a very common issue that arises when working across multiple servers. The problem occurs as<br />
shown on the slide:<br />
1. A user starts an application. This might involve starting a web application from a corporate web<br />
server. The application either logs the user onto another Windows identity or, more commonly,<br />
impersonates the user's identity.<br />
2. At this point, the application can perform tasks as though it was the user performing the tasks<br />
directly. The web server process might be executing as a low-privilege account but it is performing<br />
functions as though it was the Windows user. For example, the user would be able to access business<br />
functions within the application, based on the user's identity. A bank manager might be able to<br />
access the "Delete Bank Account" functionality while a bank teller might be denied access to do this.<br />
3. The application needs to connect to a database to retrieve business data. If the database server is<br />
installed on the same server as the web server, access to the database server is made using the same<br />
impersonation details. Within the <strong>SQL</strong> Server, the user would still have the correct Windows identity.<br />
The user would have permissions that have been assigned directly to the user and permissions that<br />
have been assigned to roles that the user is a member of.<br />
4. If, however, the database server is residing on another server, a problem occurs. When the identity of<br />
the user in <strong>SQL</strong> Server is checked, it is found to be the identity of the web server service, such as the<br />
low-privilege account, instead of the identity of the original user.<br />
This is a very typical "double-hop" problem caused by the default action of the Windows operating<br />
system that permits impersonation but not delegation. <strong>Database</strong> administrators are often asked why this<br />
is occurring. A significant security hole would be opened if this type of access was permitted between<br />
servers by default.
What often causes confusion is that the application may have worked as expected in a development<br />
environment where the web server and the database server were on the same machine, but then fails to<br />
work as expected once the application is deployed to a production environment where this is no longer<br />
the case.<br />
In the next topic, you will see how impersonation and delegation differ.<br />
Question: If you have seen this problem with a web server, what was the user account that<br />
often appears to be connecting to <strong>SQL</strong> Server instead of the user?
Impersonation vs. Delegation<br />
Key Points<br />
By default, when a user connects to a Windows server, the user is impersonated on that server. That does<br />
not, however, give a process on the server the right to impersonate that user across the network. This<br />
impersonation across a network is known as delegation. This can apply to any Windows server that is<br />
connecting to a separate <strong>SQL</strong> Server and it also applies to one <strong>SQL</strong> Server system connecting to another<br />
<strong>SQL</strong> Server system.<br />
Delegation Requirements<br />
To illustrate the requirements for delegation between two <strong>SQL</strong> Server systems, consider the following<br />
scenario:<br />
• A user logs on to a client computer that connects to a server that is running an instance of <strong>SQL</strong><br />
Server, <strong>SQL</strong>SERVER1.<br />
• The user wants to run a distributed query against a database on another server, <strong>SQL</strong>SERVER2.<br />
• This scenario, in which one computer connects to another computer to connect to a third computer,<br />
is an example of a "double-hop".<br />
Each server or computer that is involved in delegation needs to be configured appropriately.<br />
Requirements for the Client<br />
• The Windows authenticated login of the user must have access permissions to <strong>SQL</strong>SERVER1 and<br />
<strong>SQL</strong>SERVER2.<br />
• The user Active Directory property "Account is sensitive and cannot be delegated" must not be<br />
selected.<br />
• The client computer must be using TCP/IP or named pipes network connectivity.
Requirements for the First/Middle Server (<strong>SQL</strong>SERVER1)<br />
• The server must have a Service Principal Name (SPN) registered by the domain administrator.<br />
• The account under which <strong>SQL</strong> Server is running must be trusted for delegation.<br />
• The server must be using TCP/IP or named pipes network connectivity.<br />
• The second server, <strong>SQL</strong>SERVER2, must be added as a linked server. This can be done by executing the<br />
sp_addlinkedserver stored procedure or by configurations within SSMS. For example:<br />
EXEC sp_addlinkedserver '<strong>SQL</strong>SERVER2', N'<strong>SQL</strong> Server';<br />
• The linked server logins must be configured for self-mapping. This can be done by executing the<br />
sp_addlinkedsrvlogin stored procedure. For example:<br />
EXEC sp_addlinkedsrvlogin '<strong>SQL</strong>SERVER2', 'true';<br />
Requirements for the Second Server (<strong>SQL</strong>SERVER2)<br />
• If using TCP/IP network connectivity, the server must have an SPN registered by the domain<br />
administrator.<br />
• The server must be using TCP/IP or named pipes network connectivity.
Working with Mismatched Security IDs<br />
Key Points<br />
Another very common issue that relates to the use of multiple servers is referred to as the "mismatched<br />
SIDs" problem.<br />
Mismatched SIDs<br />
When a <strong>SQL</strong> Server login is created, the login is allocated both a name and a Security ID (SID). When a<br />
database user is created for the login, details of both the name and the SID for the login are entered into<br />
the database.<br />
If the database is then backed up and restored onto another server, the database user entry is still present<br />
within the database but there is no login on the server that matches it.<br />
<strong>Database</strong> administrators often then create the new login and map it as a user in the database and they<br />
find that this fails. When the login is created, it might have the same name and even the same password<br />
as the original login on the other server, but by default <strong>SQL</strong> Server will allocate it a new SID.<br />
The new login cannot be added to the database as a user. This can be a source of frustration because the<br />
error that is returned explains that the database user already exists, yet a check on the list of database<br />
user mappings for this login in SSMS will not show this entry. This is a good example of a situation where<br />
an understanding of T-<strong>SQL</strong> coding is helpful in <strong>SQL</strong> Server administration, rather than just an<br />
understanding of how to use the GUI in SSMS.
Resolving Mismatched SIDs<br />
In earlier versions of <strong>SQL</strong> Server, the option provided to deal with this situation was a system stored<br />
procedure sp_change_users_login.<br />
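A sketch of that (now deprecated) approach, with illustrative user and login names:<br />

```sql
-- Remap the orphaned database user to the new login (deprecated method)
EXEC sp_change_users_login 'Update_One', 'dbuser', 'loginname';
```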
Starting with Service Pack 2 of <strong>SQL</strong> Server 2005, a new alternative was provided:<br />
ALTER USER dbuser WITH LOGIN = loginname;<br />
The problem with either of these methods is that they "fix" the SID of the database user to match the SID<br />
of the login. The next time the database is restored (as often happens), the same problem occurs again.<br />
Avoiding the Issue<br />
A better way of dealing with mismatched SIDs is to avoid the problem in the first place.<br />
The CREATE LOGIN statement has a WITH SID option. If you supply the SID from the original server while<br />
creating the login on the second server, you will avoid the problem occurring at all. The sp_help_revlogin<br />
stored procedure is another option that can be used to help with scripting <strong>SQL</strong> Server logins, and it<br />
includes the value of the SID. Logins that are created using the scripts generated by this procedure will<br />
also not suffer from the mismatched SIDs problem. Details of the sp_help_revlogin procedure (along with<br />
the source code of the procedure) are provided on the support.microsoft.com web site.<br />
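A sketch of the WITH SID approach (the login name, password, and SID value are illustrative placeholders):<br />

```sql
-- On the original server: retrieve the login's SID
SELECT name, sid
FROM sys.server_principals
WHERE name = 'AppLogin';

-- On the second server: create the login with the same SID
CREATE LOGIN AppLogin
WITH PASSWORD = 'Pa$$w0rd',
     SID = 0x241C11948AEEB749B0D22646DB1A19F2;  -- placeholder value
```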
Note While <strong>SQL</strong> Server Integration Services includes a task for transferring logins, the tool<br />
disables the logins and assigns them a random password.<br />
You will see an example of the mismatched SIDs problem in Demonstration 3A.
Demonstration 3A: Working with Mismatched SIDs<br />
Demonstration Steps<br />
1. If Demonstration 1A or 2A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_09_PRJ\10775A_09_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the script files<br />
00 – Setup.sql, 11 – Demonstration 1A.sql, and 21 – Demonstration 2A.sql from within<br />
Solution Explorer.<br />
2. Open the 31 – Demonstration 3A.sql script file and follow the instructions contained within the<br />
comments of the script file.
Lab 9: Authenticating and Authorizing Users<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_09_PRJ\10775A_09_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You need to configure the security for the Marketing database prior to the business accessing the system.<br />
You need to configure security so that organizational users are able to connect to <strong>SQL</strong> Server but are only<br />
able to access resources that they are permitted to access. Most users will connect using their Windows group<br />
credentials. Some users, however, will need to use individual Windows logins. An application requires the<br />
use of a <strong>SQL</strong> Server login.
If you have time, there is a problem with the LanguageDetails database that you should try to solve.<br />
Note The changes you make will later be migrated to the production environment. You<br />
should use T-<strong>SQL</strong> commands to implement the required changes.<br />
Supporting Documentation<br />
Existing Windows User and Group Structure<br />
ITSupport SalesPeople CreditManagement HumanResources CorporateManagers<br />
David.Alexander X X<br />
Jeff.Hay X<br />
Palle.Petersen X<br />
Terry.Adams X<br />
Darren.Parker X X<br />
Mike.Ray X<br />
April.Reagan X<br />
Jamie.Reding X<br />
Darcy.Jayne X<br />
Naoki.Sato X<br />
Bjorn.Rettig X X<br />
Don.Richardson X<br />
Wendy.Kahn X<br />
Neil.Black X X<br />
Madeleine.Kelly X
Pre-existing Security Configuration<br />
• The <strong>SQL</strong> Login PromoteApp has been created.<br />
Security Requirements<br />
Note: this list of security requirements applies to several modules. For this module, you only need to<br />
consider those requirements that can be satisfied by topics covered in this module and the assigned tasks<br />
in the lab instructions.<br />
1. The senior DBA Jeff Hay should have full access to and control of the entire Proseware server instance.<br />
2. All ITSupport group members should have full access to and control of the MarketDev database.<br />
3. Proseware uses an application called DBMonitor from Trey Research. This application requires a <strong>SQL</strong><br />
login called DBMonitorApp, which requires the ability to read but not update all objects in the<br />
MarketDev database. It does not require access to other databases.<br />
4. All CorporateManagers group members perform periodic Strength, Weakness, Opportunity, and<br />
Threat (SWOT) analysis. For this they need to be able to both read and update rows in the<br />
DirectMarketing.Competitor table.<br />
5. All SalesPeople group members should be able to read data from all tables in the DirectMarketing<br />
schema, except April Reagan, who is a junior work experience student.<br />
6. Only ITSupport group members and members of the CreditManagement group should be able to<br />
update the Marketing.CampaignBalance table directly.<br />
7. Within the company, members of the SalesPeople group, the CreditManagement group, and the<br />
CorporateManagers group are referred to as sales team members.<br />
8. All sales team members should be able to read rows in the Marketing.CampaignBalance table.<br />
9. All sales team members should be able to read rows in the DirectMarketing.Competitor table.<br />
10. The Sales Manager should be able to read and update the Marketing.SalesTerritory table.<br />
11. All HumanResources group members should be able to read and update rows in the<br />
Marketing.SalesPerson table.<br />
12. The Sales Manager should be able to execute the Marketing.MoveCampaignBalance stored<br />
procedure.<br />
13. All sales team members should be able to execute all stored procedures in the DirectMarketing<br />
schema.<br />
Exercise 1: Create Logins<br />
Scenario<br />
You have been provided with the security requirements for the MarketDev database. In this exercise you<br />
need to create individual Windows logins, Windows group logins, and <strong>SQL</strong> logins that are required to<br />
implement the security requirements.<br />
The main tasks for this exercise are as follows:<br />
1. Review the requirements.<br />
2. Create the required logins.
Task 1: Review the requirements<br />
• Review the supplied security requirements in the supporting documentation.<br />
Task 2: Create the required logins<br />
• Create the logins that you have determined are required for the system. This will include Windows<br />
logins, Windows group logins, and <strong>SQL</strong> logins.<br />
Results: After this exercise, you have created the required Windows and <strong>SQL</strong> logins.<br />
Exercise 2: Correct an Application Login Issue<br />
Scenario<br />
The Adventure Works IT department has implemented a new web application called Promote. The<br />
Promote application requires a <strong>SQL</strong> login called PromoteApp. The <strong>SQL</strong> login needs to operate with a fixed<br />
password. The application has been operating for some time but has now stopped working. It appears the<br />
application is unable to log on to the Proseware server. You need to reset the password for the<br />
PromoteApp user and disable policy checking for the login.<br />
The main task for this exercise is as follows:<br />
1. Correct an Application Login Issue.<br />
Task 1: Correct an application login issue<br />
• Reset the password for the PromoteApp <strong>SQL</strong> login to “Pa$$w0rd”.<br />
• Disable policy checking for the application login.<br />
Results: After this exercise, you have corrected an application login issue.<br />
Exercise 3: Create <strong>Database</strong> Users<br />
Scenario<br />
You have created the required logins for the Proseware server as per the security requirements that you<br />
have been supplied. You need to create database users for those logins in the MarketDev database.<br />
The main tasks for this exercise are as follows:<br />
1. Review the requirements.<br />
2. Create the required database users.<br />
Task 1: Review the requirements<br />
• Review the supplied security requirements in the supporting documentation.<br />
Task 2: Create the required database users<br />
• Create the database users that you have determined are required for the MarketDev database.<br />
Results: After this exercise, you should have created the required database users.
Challenge Exercise 4: Correct Access to Restored Database (Only if time permits)
Scenario
A junior DBA has been trying to restore the LanguageDetails database and grant access to a SQL login named LDUser. He was able to restore the database and to create the login, but he has been unable to create the database user. He suspects that something in the existing database is preventing this, as he can create and assign other SQL logins without issue. You need to restore the LanguageDetails database, create the LDUser login, create the LDUser database user, and test that the user can access the database.
The main task for this exercise is as follows:
1. Correct access to a restored database.
Task 1: Correct access to a restored database
• Restore the LanguageDetails database from the file D:\10775A_Labs\10775A_09_PRJ\LanguageDetails.bak to the Proseware server instance.
• Create the login LDUser with policy checking disabled and a password of “Pa$$w0rd”.
• Correct access to the LanguageDetails database for the LDUser database user.
• Test that the LDUser login can access the database and can select the rows from the dbo.Language table.
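A common cause of this symptom is an orphaned user: the restored database already contains an LDUser user whose security identifier (SID) does not match the newly created login. If that turns out to be the blocker, one possible remedy (a hedged sketch, not necessarily the intended lab solution) is to remap the existing user to the login:

```sql
USE LanguageDetails;
GO
-- Remap the existing (orphaned) database user to the server login
-- of the same name, aligning their SIDs.
ALTER USER LDUser WITH LOGIN = LDUser;
```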
Results: After this exercise, you should have resolved the situation.
Module Review and Takeaways
Review Questions
1. How does SQL Server take advantage of Windows password policy?
2. What account policy is applied on Windows XP?
Best Practices
1. Minimize the number of SQL Server logins.
2. Ensure that expiry dates are applied to logins that are created for temporary purposes.
3. Disable logins rather than dropping them if there is any chance that they will be needed again.
4. Configure Kerberos delegation when a Windows user identity needs to be passed between systems.
Module 10
Assigning Server and Database Roles
Contents:
Lesson 1: Working with Server Roles 10-3
Lesson 2: Working with Fixed Database Roles 10-12
Lesson 3: Creating User-defined Database Roles 10-18
Lab 10: Assigning Server and Database Roles 10-26
Module Overview
Once a login has been authenticated by the server and mapped to a database user, you need to assign permissions to the login or database user. Permissions can be assigned directly to the login or database user, and in the next module you will see how this is done. However, where a set of permissions potentially applies to multiple users, roles can be used to provide the permissions instead.
SQL Server® has a set of fixed roles at both the server and database levels, and user-defined roles can also be created at both levels. The fixed roles have a specified set of permissions, whereas user-defined roles have a user-defined set of permissions applied to them. A login or database user is assigned the permissions associated with any role that the login or database user is a member of.
Objectives
After completing this module, you will be able to:
• Work with server roles.
• Work with fixed database roles.
• Create user-defined database roles.
Lesson 1
Working with Server Roles
The first type of role that you will investigate is the server role. Server roles have permissions that span the entire server instance. While server roles are very powerful, they should be used sparingly.
The most powerful server role is the sysadmin role. You should be cautious about assigning logins to this role, as its members have complete access to the entire server.
In this lesson, you will also investigate the public role, which can be used to assign instance-level permissions to all logins.
Objectives
After completing this lesson, you will be able to:
• Describe server-scoped permissions.
• Describe permissions that might typically be assigned at the server level.
• Explain the use of fixed server roles.
• Explain the purpose of the public server role.
• Configure user-defined server roles.
Server-scoped Permissions
Key Points
Permissions that are allocated at the server level are very powerful. They apply to the entire server and all databases on the server. For this reason, you should avoid allocating server-scoped permissions and try to assign more specific permissions instead.
Fixed server-scoped roles have been part of SQL Server for many versions and are mostly maintained for backward compatibility.
Slide Examples
On this slide, two examples are provided of granting server-level permissions. The first example shows how to grant a specific permission: the Windows login AdventureWorks\Holly is being granted the permission to alter the login HRApp.
The second example shows how to use the ANY option. ANY can be applied to many object types, including objects that are scoped to the server level. The ANY option allows permissions to be assigned on a whole class of objects without the need to create large numbers of individual permissions. Note that as well as granting AdventureWorks\Holly the ability to alter any existing database, this permission grant would also apply to any future databases that are created on the server.
The full syntax of the GRANT statement is complex. More details on the GRANT statement will be provided in Module 11.
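The slide images are not reproduced here, but based on the description above the two grants would look something like the following sketch (the exact statements on the slide may differ). Server-scoped grants such as these must be issued while the current database is master.

```sql
-- Grant a specific server-level permission: allow Holly to alter one login.
GRANT ALTER ON LOGIN::HRApp TO [AdventureWorks\Holly];

-- Grant using the ANY option: applies to all current and future databases.
GRANT ALTER ANY DATABASE TO [AdventureWorks\Holly];
```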
Assignment Using the GUI
SQL Server securables are any objects that can be protected via permissions.
Note If no securable is shown on the tab in SSMS, do not assume that no permissions are assigned to any securable. In most cases, the securable that needs to be viewed or changed must be added manually.
Typical Server-scoped Permissions
Key Points
On this slide, you can see a list of server-scoped permissions that are commonly assigned. As mentioned, very few permissions are typically assigned at the server level.
These permissions cannot be assigned unless the current database is the master database; otherwise, an error is returned.
The sys.server_permissions view is used to query the currently assigned server-scoped permissions. You will see an example of it being used in Demonstration 1A.
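Ahead of the demonstration, a typical query against sys.server_permissions might look like the following sketch, joining to sys.server_principals to resolve grantee names:

```sql
-- List server-scoped permissions together with the principals that hold them.
SELECT pr.name AS grantee,
       pe.class_desc,
       pe.permission_name,
       pe.state_desc
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS pr
    ON pe.grantee_principal_id = pr.principal_id
ORDER BY pr.name, pe.permission_name;
```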
Overview of Fixed Server Roles
Key Points
On this slide, you can see the list of fixed server roles, a general description of what they are used for, and a list of the major permissions that are associated with each of the roles.
Note The list of permissions shown is not an exhaustive list. For full details, consult Books Online.
These server-level roles are referred to as fixed server roles because you cannot change the permissions associated with these roles.
Populating Fixed Server Roles
You can add logins into server-level roles. One behavior that might not be expected is that each member of a fixed server role can add other logins to that same role.
Roles vs. Permissions
Being assigned the same permissions as those that are assigned to a role is not the same as being a member of the role.
For example, granting CONTROL SERVER permission to a login is not the same as making the login a member of the sysadmin fixed server role. sysadmin members are automatically mapped to the dbo user in a database. This does not happen for logins that have only been granted CONTROL SERVER permission.
Note The securityadmin role is almost functionally equivalent to the sysadmin role, so membership of it should be granted with caution.
To change the membership of fixed server roles, use the ALTER SERVER ROLE command. You will see an example of this in Demonstration 1A.
Question: Why would the securityadmin role be powerful?
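The ALTER SERVER ROLE syntax referred to above is sketched below; the login name is a placeholder:

```sql
-- Add a login to a fixed server role (SQL Server 2012 syntax).
ALTER SERVER ROLE serveradmin ADD MEMBER [AdventureWorks\Holly];

-- Remove the login from the role again.
ALTER SERVER ROLE serveradmin DROP MEMBER [AdventureWorks\Holly];
```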
public Server Role
Key Points
public is a special role that is also scoped to the server level. It is not considered a fixed server role because it is possible to alter the permissions that are assigned to it. All logins are automatically members of the public role, so this is another role to which permissions should be assigned with caution.
VIEW ANY DATABASE
Many SQL Server tools, utilities, and applications assume that the list of databases can be viewed, even if the user viewing the list does not have permission to perform any actions on or in a database. This is achieved by the public role holding the VIEW ANY DATABASE permission. While it is possible to prevent users from seeing databases that they do not have access to, be cautious about doing so. SSMS, in particular, becomes very slow to use if you do this, because it then needs to check your permissions within every database. One possible use case for revoking this permission is hosted databases, where the provision of a database for a user should not enable the user to view all the other databases on the server. Test the behavior and performance of your database tooling and applications before deciding to change this.
Connect Permissions
By default, the public role has been granted CONNECT permission on the endpoints for the Shared Memory, TCP, Named Pipes, and VIA protocols. This allows users to connect to the server using any of these protocols.
User-defined Server Roles
Key Points
New to SQL Server 2012 is the ability to create user-defined roles at the server level.
In general, you should avoid using fixed server roles, as they tend to provide users with more permissions than they require for their assigned tasks. User-defined server roles allow you to configure the specific set of server-level permissions that is required for members of the role.
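As an illustrative sketch (the role name and permission here are invented, not taken from the course labs), a user-defined server role is created with CREATE SERVER ROLE and then granted only the permissions its members need:

```sql
-- Create a narrowly scoped server role for login administration.
CREATE SERVER ROLE LoginManagers;

-- Grant only the permissions the role's members require.
GRANT ALTER ANY LOGIN TO LoginManagers;

-- Add a member, using the same syntax as for fixed server roles.
ALTER SERVER ROLE LoginManagers ADD MEMBER [AdventureWorks\Holly];
```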
Demonstration 1A: Assigning Server Roles
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_10_PRJ\10775A_10_PRJ.ssmssln and click Open.
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
4. Open the 11 – Demonstration 1A.sql script file.
5. Follow the instructions contained within the comments of the script file.
Lesson 2
Working with Fixed Database Roles
In addition to the roles provided at the server level, SQL Server provides a set of fixed roles at the database level. These fixed database roles are similar in concept to the fixed server roles, but they relate to access to database objects or to the database itself, rather than access to all databases on the server.
Objectives
After completing this lesson, you will be able to:
• Explain the available database-scoped permissions.
• Describe fixed database roles.
• Assign users to roles.
• Describe the concept of the database owner.
Database-scoped Permissions
Key Points
There are three ways that permissions can be assigned at the database level:
• Fixed database roles are very similar to fixed server roles, apart from the objects that they apply to and the fact that their scope is the database rather than the server.
• Similar to server roles, it is possible to create user-defined database roles and to assign a user-defined set of permissions to each role.
• Database-scoped permissions can be individually assigned.
Try to avoid assigning fixed database roles, as they usually provide more capabilities than most users require. Assign much more specific permissions instead of fixed database role membership.
Slide Example
In the first example on the slide, the database user HRManager is being granted permission to create tables within the database. It is important to understand that several permissions might be required to perform an action; in this example, HRManager would also require ALTER SCHEMA permission to successfully create tables.
In the second example on the slide, the database user James is being granted permission to view the definitions of objects within the database. This permission grant is common for users who need to produce documentation but who are not permitted to alter the design of the database.
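Based on the description above, the two slide examples would look something like the following sketch (the schema name in the second statement is an assumption added for completeness, not taken from the slide):

```sql
-- Allow HRManager to create tables in the database.
GRANT CREATE TABLE TO HRManager;
-- CREATE TABLE alone is not enough; the user also needs ALTER on the
-- target schema (HumanResources is a placeholder name here).
GRANT ALTER ON SCHEMA::HumanResources TO HRManager;

-- Allow James to view object definitions, for documentation purposes.
GRANT VIEW DEFINITION TO James;
```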
Overview of Fixed Database Roles
Key Points
The slide shows the available fixed database roles and a description of the purpose that each is intended to fill.
In practice, fixed database roles should be assigned very selectively. Instead of assigning fixed database roles, consider assigning much more specific permissions.
For example, instead of assigning the db_datareader role to a user, consider assigning SELECT permission on the objects that the user needs to be able to SELECT. This avoids the situation where tables added to the database later are automatically readable by the user, simply because the role membership already covers them. Even if the user needs permission to SELECT all objects in the database, it would be preferable to assign SELECT permission at the database level than to make the user a member of the db_datareader role. One reason for this is that it is easier to review the permission assignments using views such as sys.database_permissions.
Similar to fixed server roles, fixed database roles exist largely for backward compatibility.
Note Similar to the securityadmin fixed server role, adding users to the db_securityadmin role should be done with caution, as membership is almost functionally equivalent to the dbo user.
Assigning Users to Roles
Key Points
Users can be assigned to roles either by using the GUI in SSMS or by using T-SQL commands.
GUI
To assign a role to a user using SSMS, expand the server, expand the Security node, expand the Logins node, then right-click the login and click Properties. In the Properties window for the login, click the User Mapping tab to see the list of databases that the login has been mapped to. As you select each database in the upper pane, a list of database roles appears in the lower pane. You can then select the roles that the login should be assigned to.
You can also assign role membership from the perspective of the role. To assign or remove users from a role using SSMS, expand the relevant database, expand the Security node, expand the Roles node, expand the Database Roles node, then right-click the relevant role and click Properties. From the Properties screen for the role, you can add and delete users of that database.
Managing Role Membership via T-SQL
In the example on the slide, you can see the use of the ALTER ROLE statement. The user James is being added to the db_datareader fixed database role.
The DROP MEMBER clause of the ALTER ROLE statement removes members from roles.
Question: Can you think of an example of a command that could be executed by a member of the db_ddladmin role?
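The slide example described above, together with its DROP MEMBER counterpart, would be along these lines:

```sql
-- Add the database user James to a fixed database role.
ALTER ROLE db_datareader ADD MEMBER James;

-- Remove the user from the role again.
ALTER ROLE db_datareader DROP MEMBER James;
```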
Database Owner
Key Points
It was mentioned in the last module that there are two special users: dbo and guest. Guest was described in the last module, along with an introduction to dbo.
dbo
The dbo user has implied permission to perform all activities in the database. The "sa" login and any member of the sysadmin fixed server role that uses a database are automatically mapped to the dbo user.
Like other objects in SQL Server, databases have owners. The owner of the database is also mapped to the dbo user.
Note Schema-scoped objects are automatically owned by the owner of the schema, regardless of who creates them. Non-schema-scoped objects are automatically owned by the database principal that created them. For any principal with db_owner rights, the owner would be dbo.
dbo cannot be deleted and is present in every database. Equivalence to dbo is special in that, once SQL Server finds that a user is mapped to dbo, no other permission checks are made within the database. This means that you cannot later DENY a permission within the database to a user with dbo equivalence.
Demonstration 2A: Managing Roles and Users
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_10_PRJ\10775A_10_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
• Open and execute the 11 – Demonstration 1A.sql script file from within Solution Explorer.
2. Open the 21 – Demonstration 2A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lesson 3
Creating User-defined Database Roles
In the first two lessons in this module, you have seen how to work with both fixed and user-defined server roles and with fixed database roles. Similar to user-defined server roles, SQL Server provides you with an option to create user-defined database roles. You should prefer user-defined database roles over fixed database roles in most situations. The appropriate design of user-defined database roles is critical when designing a security architecture for your database.
Objectives
After completing this lesson, you will be able to:
• Work with user-defined database roles.
• Apply roles in common scenarios.
• Define application roles.
Working with User-defined Database Roles
Key Points
User-defined database roles are created using the CREATE ROLE statement.
In the example shown on the slide, a new role called MarketingReaders is being created. Note that, like other objects in SQL Server, roles have owners, and owners of roles have full control of them. In the example on the slide, dbo is being assigned as the owner of the MarketingReaders role.
Roles and Permissions
Once user-defined database roles are created, they are assigned permissions in the same way that permissions are assigned to database users.
In the example shown on the slide, SELECT permission is being granted on the Marketing schema to members of the MarketingReaders role. From that point on, any users added to the MarketingReaders role will have permission to SELECT from any object that is contained within the Marketing schema. (The assignment of permissions on objects, including schemas, is covered in the next module.)
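Pulling the two slide examples together, the statements would look something like this sketch (the member name in the last statement is a placeholder):

```sql
-- Create the role, with dbo as its owner.
CREATE ROLE MarketingReaders AUTHORIZATION dbo;

-- Grant SELECT on everything in the Marketing schema to role members.
GRANT SELECT ON SCHEMA::Marketing TO MarketingReaders;

-- Any user added to the role now inherits those permissions.
ALTER ROLE MarketingReaders ADD MEMBER James;
```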
Applying Roles in Common Scenarios
Key Points
It is important to apply a prescriptive process when applying roles in typical business applications:
1. Start by defining any administrative roles at the server or database level and determining an appropriate dbo user. There should be few users who are equivalent to dbo.
2. Consider the types of access that each user needs. This may involve considering how the requirements overlap with their Windows group membership. Where a number of users (including group users) need common permissions, define those permission groups as roles.
3. If all users need a set of permissions, consider the use of the public role within the database. For example, formatting functions would be good examples of code that any user could potentially be permitted to execute.
4. For the remaining permission groups, create appropriate roles and assign the permissions to those roles.
5. Add the users that need the groups of permissions to the roles that provide those permissions.
Note that unlike Windows domain groups, which are typically named after the users that are members of them (such as Salespeople), roles should be named based on the permissions that they provide. They are more like Windows local groups in this regard.
Testing for Role Membership in T-SQL Code
While you may decide whether or not to allow members of a role to execute T-SQL code (such as a stored procedure), it is also possible to place limits within the code module itself.
For example, you may decide that BankManagers and BankTellers can both transfer funds, but that BankTellers have a lower limit on the transfer amounts.
The IS_SRVROLEMEMBER function tests for server role membership, and the IS_MEMBER function tests for database role membership. IS_MEMBER can also test for Windows group membership.
In the code fragment shown on the slide, an operation is being rolled back if the user is not a member of the BankManagers role.
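The slide fragment is not reproduced here, but a check of that kind would be along the following lines (a sketch; the error text is illustrative). Note that IS_MEMBER returns NULL when the role does not exist, so the result is normalized before the comparison:

```sql
-- Inside a stored procedure: roll back unless the caller is a BankManager.
IF ISNULL(IS_MEMBER('BankManagers'), 0) = 0   -- NULL => role not found
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Only members of BankManagers may perform this operation.', 16, 1);
    RETURN;
END;
```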
Demonstration 3A: Working with User-defined Database Roles
Demonstration Steps
1. If Demonstration 1A or 2A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_10_PRJ\10775A_10_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the script files 00 – Setup.sql, 11 – Demonstration 1A.sql, and 21 – Demonstration 2A.sql from within Solution Explorer.
2. Open the 31 – Demonstration 3A.sql script file.
3. Follow the instructions contained within the comments of the script file to execute each T-SQL batch contained in the file.
Defining Application Roles
Key Points
Application roles are used to enable permissions for users only when they are running particular applications.
For example, you might want a user to be able to update rows in a table while using an application, but not want the same user to be able to open the table in SSMS and edit the rows, or to connect to the table from Microsoft Excel.
Application Roles
Application roles are one solution to this scenario. Application roles contain no members; they are a special type of role that is assigned permissions.
A user will typically connect to an application using a low-privilege account. The application then calls the sp_setapprole system stored procedure to "enter" the application role. At that point, the permissions of the user for the existing connection are replaced by those of the application role.
Note The permissions of the application role are not added to the permissions of the user; they replace the permissions of the user.
Reverting from an Application Role
In versions of SQL Server prior to SQL Server 2005, there was no way to revert to the original security context. From SQL Server 2005 onwards, the sp_setapprole system stored procedure can create a cookie. The application can store the cookie and later pass it back to the sp_unsetapprole system stored procedure to revert to the original security context.
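The enter/revert pattern described above can be sketched as follows (the role name and password are placeholders; creating the application role itself is shown for context):

```sql
-- One-off setup: create the application role (names are examples only).
CREATE APPLICATION ROLE SalesApp
    WITH PASSWORD = 'Pa$$w0rd', DEFAULT_SCHEMA = dbo;
GO
-- At run time, the application enters the role and keeps the cookie...
DECLARE @cookie varbinary(8000);
EXEC sys.sp_setapprole
    @rolename = 'SalesApp',
    @password = 'Pa$$w0rd',
    @fCreateCookie = 1,
    @cookie = @cookie OUTPUT;

-- ...performs its work under the role's permissions...

-- ...and then reverts to the original security context.
EXEC sys.sp_unsetapprole @cookie;
```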
Password and Encryption
To stop other applications and users from entering the application role, each application role is assigned a password, and the application must provide the password when calling sp_setapprole.
To avoid exposing the password to network packet sniffing, an option is provided to send it in an encrypted form so that it is not visible in network traces. Be cautious about depending on this behavior entirely, as the encryption method used is not particularly strong.
Demonstration 3B: Working with Application Roles
Demonstration Steps
1. If Demonstration 1A, 2A, or 3A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_10_PRJ\10775A_10_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the script files 00 – Setup.sql, 11 – Demonstration 1A.sql, 21 – Demonstration 2A.sql, and 31 – Demonstration 3A.sql from within Solution Explorer.
2. Open the 32 – Demonstration 3B.sql script file.
3. Follow the instructions contained within the comments of the script file to execute each T-SQL batch contained in the file.
Lab 10: Assigning Server and Database Roles
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_10_PRJ\10775A_10_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
You have created the SQL Server logins and database users. You now need to assign the logins and users to the required roles, based upon the security requirements for the MarketDev database. You should assign the minimum level of access that will allow each user to perform their job. This will require a combination of server roles, fixed database roles, and user-defined database roles.
Do not be concerned with object and schema permissions, as these will be assigned in Module 11, but you do need to consider the role requirements that will be needed at that time.
Note The changes you make will later be migrated to the production environment. You should use T-SQL commands to implement the required changes.
Supporting Documentation
Existing Windows User and Group Structure
ITSupport SalesPeople CreditManagement HumanResources CorporateManagers
David.Alexander X X
Jeff.Hay X
Palle.Petersen X
Terry.Adams X
Darren.Parker X X
Mike.Ray X
April.Reagan X
Jamie.Reding X
Darcy.Jayne X
Naoki.Sato X
Bjorn.Rettig X X
Don.Richardson X
Wendy.Kahn X
Neil.Black X X
Madeleine.Kelly X
Pre-existing Security Configuration
• The following Windows group logins and database users have been created:
• AdventureWorks\ITSupport
• AdventureWorks\SalesPeople
• AdventureWorks\CreditManagement
• AdventureWorks\HumanResources
• AdventureWorks\CorporateManagers
• The following Windows logins and database users have been created:
• AdventureWorks\Jeff.Hay
• AdventureWorks\April.Reagan
• AdventureWorks\Darren.Parker
• The following SQL logins have been created:
• PromoteApp
• DBMonitorApp
Security Requirements
1. The senior DBA, Jeff Hay, should have full access to and control of the entire Proseware server instance.
2. All ITSupport group members should have full access to and control of the MarketDev database.
3. Proseware uses an application called DBMonitor from Trey Research. This application requires a SQL login called DBMonitorApp, which needs the ability to read, but not update, all objects in the MarketDev database.
4. All CorporateManagers group members perform periodic Strength, Weakness, Opportunity, and Threat (SWOT) analysis. For this, they need to be able to both read and update rows in the DirectMarketing.Competitor table.
5. All SalesPeople group members should be able to read data from all tables in the DirectMarketing schema, except April Reagan, who is a junior work experience student.
6. Only ITSupport group members and members of the CreditManagement group should be able to update the Marketing.CampaignBalance table directly.
7. Within the company, members of the SalesPeople group, the CreditManagement group, and the CorporateManagers group are referred to as sales team members.
8. All sales team members should be able to read rows in the Marketing.CampaignBalance table.
9. All sales team members should be able to read rows in the DirectMarketing.Competitor table.
10. The Sales Manager should be able to read and update the Marketing.SalesTerritory table.
11. All HumanResources group members should be able to read and update rows in the Marketing.SalesPerson table.
12. The Sales Manager should be able to execute the Marketing.MoveCampaignBalance stored procedure.
13. All sales team members should be able to execute all stored procedures in the DirectMarketing schema.
Exercise 1: Assign Server Roles
Scenario
You need to implement any server roles that are needed to support the supplied security requirements.
The main tasks for this exercise are as follows:
1. Review the requirements.
2. Assign any required server roles.
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation.
Task 2: Assign any required server roles
• Assign any server roles that are required to support the security requirements for the MarketDev database.
Results: After this exercise, you should have assigned any required server roles.
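As a rough sketch only (not the complete lab answer, which you should work out from all thirteen requirements), requirement 1 could be met with the SQL Server 2012 ALTER SERVER ROLE syntax:

```sql
-- Sketch: requirement 1 gives the senior DBA full control
-- of the entire Proseware instance.
ALTER SERVER ROLE sysadmin ADD MEMBER [AdventureWorks\Jeff.Hay];
```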
Exercise 2: Assign Fixed Database Roles
Scenario
You have been provided with a set of requirements detailing the access that each login needs to the MarketDev database. Some of these requirements might be met by fixed database roles, but it is important not to grant permissions that are not specifically required. If you decide that user-defined database roles are needed, you will create them in the next exercise.
The main tasks for this exercise are as follows:
1. Review the requirements.
2. Assign any required fixed database roles.
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation.
Task 2: Assign any required fixed database roles
• Assign any fixed database roles that are required to support the security requirements for the MarketDev database.
Results: After this exercise, you should have assigned fixed database roles as required.
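For illustration only (a sketch, not the definitive lab answer), requirements 2 and 3 map naturally onto fixed database roles:

```sql
USE MarketDev;

-- Requirement 2: ITSupport members get full control of the database.
ALTER ROLE db_owner ADD MEMBER [AdventureWorks\ITSupport];

-- Requirement 3: DBMonitorApp can read, but not update, all objects.
ALTER ROLE db_datareader ADD MEMBER DBMonitorApp;
```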
Exercise 3: Create and Assign User-defined Database Roles
Scenario
You have been provided with a set of requirements detailing the access that each login needs to the MarketDev database. In Exercise 2, you assigned fixed database role membership. Other requirements might be best supported by user-defined database roles. In this exercise, you will create and assign the required user-defined database roles.
The main tasks for this exercise are as follows:
1. Review the requirements.
2. Create and assign any required user-defined database roles.
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation.
Task 2: Create and assign any required user-defined database roles
• Create and assign any user-defined database roles that are required to support the security requirements for the MarketDev database.
Results: After this exercise, you should have created and assigned user-defined database roles as required.
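As a sketch of the pattern (the role name is an assumption; this is not the complete lab answer), requirement 7 suggests a user-defined role grouping the sales team:

```sql
USE MarketDev;

-- Requirement 7: sales team = SalesPeople + CreditManagement + CorporateManagers.
CREATE ROLE SalesTeam;
ALTER ROLE SalesTeam ADD MEMBER [AdventureWorks\SalesPeople];
ALTER ROLE SalesTeam ADD MEMBER [AdventureWorks\CreditManagement];
ALTER ROLE SalesTeam ADD MEMBER [AdventureWorks\CorporateManagers];

-- Requirements 8 and 9: sales team members can read these tables.
GRANT SELECT ON Marketing.CampaignBalance TO SalesTeam;
GRANT SELECT ON DirectMarketing.Competitor TO SalesTeam;
```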
Challenge Exercise 4: Check Role Assignments (Only if time permits)
Scenario
You have created logins and database users, assigned server and database roles, and created and assigned user-defined database roles. It is important to check that the role assignments are operating as expected. In this exercise, you will use the sys.login_token and sys.user_token system views to check the available tokens for Darren Parker.
The main task for this exercise is as follows:
1. Check the role assignments for Darren Parker.
Task 1: Check the role assignments for Darren Parker
• Using the EXECUTE AS statement, change your security context to the login AdventureWorks\Darren.Parker.
• Query the sys.login_token and sys.user_token system views to check the available tokens for Darren Parker.
• Change your security context back by using the REVERT command.
Results: After this exercise, you should have tested the role assignments for Darren Parker.
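The steps above can be sketched in T-SQL as follows:

```sql
-- Impersonate Darren Parker's login, inspect the security tokens, then revert.
EXECUTE AS LOGIN = 'AdventureWorks\Darren.Parker';

SELECT * FROM sys.login_token;  -- server-level tokens (login, server roles)
SELECT * FROM sys.user_token;   -- database-level tokens (user, database roles)

REVERT;
```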
Module Review and Takeaways
Review Questions
1. Is it possible to create new database roles in SQL Server 2012?
2. Which function allows you to determine in T-SQL code whether or not a user is a member of a Windows group?
Best Practices
Avoid granting more permissions than are necessary. It is very common to see SQL Server systems where excessive permissions have been granted. Application installers often assume the need for much higher permission levels than are actually necessary. Users should push back on vendors who do this. Even better, make appropriate security and permissions configuration a criterion for vendors to meet.
Module 11
Authorizing Users to Access Resources
Contents:
Lesson 1: Authorizing User Access to Objects
Lesson 2: Authorizing Users to Execute Code
Lesson 3: Configuring Permissions at the Schema Level
Lab 11: Authorizing Users to Access Resources
Module Overview
In the previous two modules, you have seen how Microsoft® SQL Server® security is organized and how sets of permissions can be assigned at the server and database levels via fixed server roles, user-defined server roles, fixed database roles, user-defined database roles, and application roles.
The final step in authorizing users to access SQL Server resources is authorizing users and roles to access server and database objects. In this module, you will see how these object permissions are managed. As well as access permissions on database objects, SQL Server provides the ability to determine which users are allowed to execute code, such as stored procedures and functions.
In many cases, these permissions and the permissions on the database objects are best configured at the schema level rather than at the level of the individual object. Schema-based permission grants can simplify your security architecture. You will explore the granting of permissions at the schema level in the final lesson of this module.
Objectives
After completing this module, you will be able to:
• Authorize user access to objects.
• Authorize users to execute code.
• Configure permissions at the schema level.
Lesson 1
Authorizing User Access to Objects
Before moving on to managing permissions on code, you need to consider how permissions are managed on database objects. SQL Server has a fine-grained security model that allows you to grant users the minimum permissions that will allow them to do their work. In particular, permissions can be granted at the column level, not just at the table and view level. You will also see how you can delegate the work of granting permissions to other users.
Objectives
After completing this lesson, you will be able to:
• Explain the role of principals.
• Explain the role of securables.
• Use the GRANT, REVOKE, and DENY commands.
• Secure tables and views.
• Implement column-level security.
• Delegate the ability to assign permissions by using WITH GRANT OPTION.
What Are Principals?
Key Points
In the previous two modules, you have seen a number of security principals.
Principals are entities that can request and be granted access to SQL Server resources. Like other components of the SQL Server authorization model, principals can be arranged in a hierarchy. This slide summarizes the principals that have been discussed and places them into their appropriate locations within the hierarchy.
At the Windows® level, principals include users and groups. These users and groups can be domain-based if the server is part of a Windows domain. Local accounts can be used on servers, whether the server is a member of a domain or not.
At the SQL Server level, logins can be created for users that either are not Windows users or are part of non-trusted Windows environments, such as users in other Windows domains where no trust relationship is in place with the domain containing the SQL Server system. Also at the SQL Server level, fixed and user-defined server roles are a form of principal that contains other principals.
At the database level, you have seen database users, fixed and user-defined database roles, and application roles.
Every principal has two numeric IDs associated with it: a principal ID and a security identifier (SID).
What Are Securables?
Key Points
Securables are the resources to which the SQL Server Database Engine authorization system regulates access. Some securables can be contained within others, creating nested hierarchies called scopes that can themselves be secured. The securable scopes are server, database, and schema.
It is important to understand the different securable scopes in SQL Server to plan your security model.
Question: Can you suggest a reason why a Login is a securable? What types of permissions would be needed on a Login?
GRANT, REVOKE, DENY
Key Points
Permissions are managed via the GRANT, DENY, and REVOKE T-SQL commands. Most (but not all) permissions can also be managed via the GUI in SQL Server Management Studio (SSMS).
GRANT and REVOKE
A user that has not been granted a permission is unable to perform the action related to that permission. For example, users have no permission to SELECT data from tables if they have not been granted that permission at some level. Some other database engines consider this an implicit form of denial.
The GRANT command is used to assign permissions to database users. The REVOKE command is used to remove those same permissions.
DENY
ANSI SQL does not provide a DENY command. If a user does not have permission to perform an action, they cannot perform the action.
What is different about Windows-based systems is group membership. ANSI SQL has no concept of groups. In SQL Server, a Windows user can receive permissions directly, but can also receive permissions through membership in Windows groups or through membership in roles.
The DENY command allows you to deny a permission to a user that has been granted that permission through membership in a group or role. This is very similar to how deny permissions work in Windows. For a Windows example, consider that you could decide that all members of the Salespeople group can access a color printer, except Holly (who is a member of Salespeople) because she causes problems with it. You grant access to the Salespeople group and then deny access to Holly.
SQL Server and DENY
SQL Server works the same way. You could grant SELECT permission on a table to every salesperson but deny Holly access to that table.
As with Windows, you should use DENY sparingly. A need to DENY many permissions tends to be considered a "code smell" that indicates a potential problem with your security design.
What does tend to confuse new users is that the REVOKE command is also used to remove a DENY, not just to remove a GRANT. This means that REVOKE could cause a user to have access that they did not have before the command was issued. For example, if you revoke the DENY permission from Holly, she would then be able to access the table.
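The Holly scenario can be expressed in T-SQL like this (the table and principal names are illustrative, assuming a Salespeople role or group-mapped user and a user named Holly):

```sql
-- All salespeople can read the table...
GRANT SELECT ON Marketing.Salesperson TO Salespeople;

-- ...except Holly, whose group-derived permission is overridden.
DENY SELECT ON Marketing.Salesperson TO Holly;

-- Removing the DENY later restores Holly's access via the group grant.
REVOKE SELECT ON Marketing.Salesperson FROM Holly;
```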
Question: If a user cannot perform an action without permission, why is there any need to DENY a permission?
Securing Tables and Views
Key Points
The permissions to access data that apply to tables and views are SELECT, INSERT, UPDATE, DELETE, and REFERENCES.
In the example shown on the slide, SELECT permission on the Marketing.Salesperson object (which is likely to be a table or view) is being granted to the HRApp user within the MarketDev database.
Optional Components of GRANT
Note that two forms of the command are shown. While the full syntax involves OBJECT:: as a prefix, this prefix is optional. In the second example, the same GRANT is shown without the OBJECT:: prefix.
It is not necessary to specify the schema for the table or view, but doing so is highly recommended. If the schema name is not specified, the default schema for the user granting the permission is used. If the object is not found in the user's default schema, the dbo schema is used instead.
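The two forms described above look like this; both statements grant the same permission:

```sql
-- Full form, with the optional OBJECT:: prefix:
GRANT SELECT ON OBJECT::Marketing.Salesperson TO HRApp;

-- Abbreviated form, without the prefix:
GRANT SELECT ON Marketing.Salesperson TO HRApp;
```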
REFERENCES
While the meaning of the SELECT, INSERT, UPDATE, and DELETE permissions will likely be obvious to you, the meaning and purpose of the REFERENCES permission might not be. The REFERENCES permission is necessary before a foreign key relationship can specify the object as a target, and is only required if no other permissions exist on the object.
Question: Why would there be a need for a permission to reference a table in a foreign key relationship?
Column-level Security
Key Points
While they are not implemented as often as table- or view-level permissions, column-level permissions can also be assigned. This provides an even finer grain of security control than is provided by controlling access to tables and views.
You do not need to execute separate GRANT statements for every column that you wish to assign permissions on. Where a set of columns needs to be controlled in the same way, a list of columns can be provided in a single GRANT statement. In the first example shown on the slide, SELECT permission on the Marketing.Salesperson table is being granted to James, but access to the entire table is not being granted. Only access to the SalespersonID and EmailAlias columns is permitted.
Table-level DENY and Column-level GRANT
There is an anomaly in the SQL Server security model that you need to be aware of.
A table-level DENY does not take precedence over a column-level GRANT. This is acknowledged as an inconsistency, but it needed to be preserved for backward compatibility. There is a plan to remove this inconsistency in a future version. Do not depend upon it in new development work.
This anomaly is demonstrated in the second example on the slide. Holly is denied permission to SELECT from the Salesperson table but is then granted permission to SELECT specific columns in the table. The result (probably an unexpected one) is that Holly would still be permitted to SELECT from those columns in the table.
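The two slide examples can be sketched as follows:

```sql
-- Example 1: James may read only the listed columns, not the whole table.
GRANT SELECT (SalespersonID, EmailAlias) ON Marketing.Salesperson TO James;

-- Example 2 (the anomaly): the column-level GRANT still works for Holly,
-- even though a table-level DENY is in place.
DENY SELECT ON Marketing.Salesperson TO Holly;
GRANT SELECT (SalespersonID, EmailAlias) ON Marketing.Salesperson TO Holly;
```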
WITH GRANT OPTION
Key Points
When a principal is granted a permission, it is also possible to grant the principal the right to re-grant the permission to other principals. This further right is assigned by use of the WITH GRANT OPTION clause. This mechanism allows you to delegate responsibility for managing permissions, but it needs to be used with caution. In general, WITH GRANT OPTION should be avoided.
In the first example on the slide, James is granted permission to update the Marketing.Salesperson table. In addition, James is granted the right to grant this same permission to other database users.
CASCADE
The challenge with the WITH GRANT OPTION clause comes when you need to REVOKE or DENY the permission that was granted to James using WITH GRANT OPTION. You do not know which other users James has already granted the permission to.
When revoking or denying a permission that was granted WITH GRANT OPTION, the CASCADE option revokes or denies the permissions that James had granted as well.
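A sketch of the slide example and the matching cleanup:

```sql
-- James can update the table and re-grant that permission to others.
GRANT UPDATE ON Marketing.Salesperson TO James WITH GRANT OPTION;

-- Removing it later must use CASCADE, because James may have re-granted it.
REVOKE UPDATE ON Marketing.Salesperson FROM James CASCADE;
```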
Demonstration 1A: Authorizing User Access to Objects
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and then click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_11_PRJ\10775A_11_PRJ.ssmssln, and click Open.
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
4. Open the 11 – Demonstration 1A.sql script file.
5. Follow the instructions contained within the comments of the script file.
Lesson 2
Authorizing Users to Execute Code
In addition to providing you with control over who accesses data in your database or the objects in your server, SQL Server allows you to control which users can execute your code. Appropriate security control of code execution is an important aspect of your security architecture.
In this lesson, you will see how to manage the security of stored procedures and functions. You will also learn how to manage security for code in .NET managed code assemblies that are used with SQL Server CLR integration. Finally, you will see how ownership chains affect the security relationship between code and database objects.
Objectives
After completing this lesson, you will be able to:
• Secure stored procedures.
• Secure user-defined functions.
• Secure managed code.
• Manage ownership chains.
Securing Stored Procedures
Key Points
By default, users cannot execute stored procedures that you (or any user) create. Users need EXECUTE permission before they can execute a stored procedure. They may also need permissions to access the objects that the stored procedure uses. You will see more about this issue later in the lesson.
In the example shown on the slide, the database user Mod11User is being granted permission to execute the stored procedure Reports.GetProductColors.
Managing Stored Procedures
Two other permissions are related to the management of stored procedures:
• The ALTER permission allows a user to change the definition of a stored procedure.
• The VIEW DEFINITION permission was added in SQL Server 2005. In earlier versions, a user needed ALTER permission on a stored procedure before they could view its definition. This represented an unnecessary permission grant for users involved in documenting systems. The VIEW DEFINITION permission was introduced to remove the need for such a high-level permission when only documentation access was needed.
Note: You cannot use SSMS to grant permissions on system stored procedures. SSMS can be used to grant permissions on other stored procedures.
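The slide example, together with the two management permissions just described, can be sketched as:

```sql
-- Allow Mod11User to execute the procedure (the slide example).
GRANT EXECUTE ON Reports.GetProductColors TO Mod11User;

-- Allow changing the procedure's definition.
GRANT ALTER ON Reports.GetProductColors TO Mod11User;

-- Allow viewing the definition without the ability to change it.
GRANT VIEW DEFINITION ON Reports.GetProductColors TO Mod11User;
```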
Securing User-defined Functions
Key Points
User-defined functions (UDFs) also require permissions before they can be used.
• Scalar UDFs return a single value. Users accessing these functions require EXECUTE permission on the UDF.
• Table-valued functions (TVFs) return a table of results rather than a single value. Accessing a TVF requires SELECT permission rather than EXECUTE permission, similar to the permissions on a table.
While it is uncommon to directly update a table-valued function, it is possible to assign INSERT, UPDATE, and DELETE permissions on one form of TVF, known as an inline TVF, because this particular form of TVF can be updated in some cases.
REFERENCES
The REFERENCES permission is required for functions that are:
• Used in CHECK constraints.
• Used to calculate values for DEFAULT constraints.
• Used to calculate values for computed columns.
public Role and Functions
Functions often provide very basic, low-risk capabilities within systems. Because of this, it is fairly common practice to grant permissions on basic functions that are contained in a database to the public role of the database. This allows any user within the database to use those functions without the need for further permission grants.
Note: While this is common for basic functions, it is rarely done (or even appropriate) for stored procedures.
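These permissions can be sketched as follows (the function names are hypothetical):

```sql
-- Scalar UDF: callers need EXECUTE permission.
GRANT EXECUTE ON dbo.CalculateAge TO Mod11User;

-- Table-valued UDF: callers need SELECT permission, as with a table.
GRANT SELECT ON dbo.GetSalesByYear TO Mod11User;

-- Common practice for basic, low-risk functions: grant to the public role.
GRANT EXECUTE ON dbo.CalculateAge TO public;
```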
Securing Managed Code
Key Points
Managed code is .NET code that is provided in assemblies. While assemblies are contained within DLL or EXE files, only assemblies contained within DLL files can be loaded into SQL Server via SQL Server CLR integration.
After an assembly is catalogued, the procedures, functions, and other managed code objects that are contained within the assembly are also catalogued. These objects then appear as standard objects within SQL Server, and the standard T-SQL object permissions also apply. For example, a user requires EXECUTE permission on a stored procedure, whether it is written in managed code or in T-SQL.
Permission Sets
No matter what .NET code is included in an assembly, the actions it is allowed to take are determined by the permission set it is catalogued under.
The SAFE permission set strictly limits the actions that the assembly can perform. It is the default permission set.
The EXTERNAL_ACCESS permission set is needed to access local and network resources, environment variables, and the registry. EXTERNAL_ACCESS is even necessary for accessing the same SQL Server instance if a connection is made through a network interface. This permission set is not necessary for direct internal connections from the managed code to SQL Server, because a separate direct access path (called a context connection) is provided for access to the local instance without using a network interface.
The UNSAFE permission set relaxes many standard controls over code and should be avoided.
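The permission set is specified when the assembly is catalogued; a minimal sketch, assuming a hypothetical assembly name and file path:

```sql
-- SAFE is the default and most restrictive permission set.
CREATE ASSEMBLY SalesHelpers
FROM 'D:\Assemblies\SalesHelpers.dll'
WITH PERMISSION_SET = SAFE;
```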
Recommendations
In general, SQL Server DBAs should find the use of SAFE assemblies acceptable; they should require some discussion before EXTERNAL_ACCESS assemblies are used; and they should need particularly solid justification (which should be rare) for any UNSAFE assemblies. UNSAFE is the permission set with the most capabilities.
Configuration
The EXTERNAL_ACCESS and UNSAFE permission sets also require additional setup. Simply specifying the EXTERNAL_ACCESS permission set when executing the CREATE ASSEMBLY statement is not sufficient. Either the database needs to be flagged as TRUSTWORTHY (which is easy but not recommended), or an asymmetric key needs to be created from the assembly file in the master database, a login created that maps to the key, and the login granted EXTERNAL ACCESS ASSEMBLY permission on the assembly.
Note: This last option is an advanced topic that is beyond the scope of the course and is mentioned only for completeness.
Question: Which permission set should rarely be allowed?
Managing Ownership Chains
Key Points
All database objects have owners. Schema-scoped objects are owned by the schema owner, and the principal_id (owner) property for new objects is NULL by default. An object with a NULL principal_id inherits its ownership from the schema that contains it. The best practice is to have all objects owned by the schema owner.
When an object such as a stored procedure references another object, an ownership chain is established. An unbroken ownership chain exists when each object in the chain has the same owner. When an unbroken ownership chain exists, access is permitted to the underlying objects when access is permitted to the top-level objects.
Ever since SQL Server 2005 introduced the concept of schemas, it has been widely misconstrued that SQL Server objects no longer have owners. This is not true; objects still have owners.
Having the same owner for all objects in a schema (which itself also has an owner) keeps permission management easier, but it is important to understand that ownership chain problems can still occur, even though they are much less common now.
Slide Example
Ownership chaining applies to stored procedures, views, and functions. The slide shows an example of how ownership chaining applies to views or stored procedures.
1. John has no permissions on the table owned by Nupur.
2. Nupur creates a view that accesses the table and grants John permission to access the view. Access is granted because Nupur is the owner of both the top-level object and the underlying object (that is, her table).
3. Nupur then creates a view that accesses a table that is owned by Tim. Even if Nupur has permission to access the table, and grants John permission to use the view, John will be denied access. This is because of the broken chain of ownership from the top-level object to the underlying object.
4. However, if John is given permissions directly on the underlying table owned by Tim, he can then access the view that Nupur created to access that table.
The problem with step 4 is that one of the main reasons for creating views or stored procedures is to avoid the need for users to have permissions on the underlying objects.
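Steps 1 and 2 of the slide example might look like this (the schema, object, and column names are illustrative, assuming a Nupur schema owned by Nupur):

```sql
-- Nupur owns both the table and the view, so the ownership chain is unbroken.
CREATE VIEW Nupur.SalespersonSummary
AS
SELECT SalespersonID, EmailAlias FROM Nupur.Salesperson;
GO

-- John can use the view without any permission on the underlying table.
GRANT SELECT ON Nupur.SalespersonSummary TO John;
```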
Demonstration 2A: Authorizing Users to Execute Code
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and then click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_11_PRJ\10775A_11_PRJ.ssmssln, and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
2. Open the 21 – Demonstration 2A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lesson 3
Configuring Permissions at the Schema Level
SQL Server 2005 introduced a change to how schemas are used. Since that version, schemas have been used as containers for objects such as tables, views, and stored procedures. Schemas can be particularly helpful in providing a level of organization and structure when large numbers of objects are present in a database.
Security permissions can also be assigned at the schema level rather than individually on the objects contained within the schemas. Doing this can greatly simplify the design of system security requirements.
Objectives
After completing this lesson, you will be able to:
• Describe user-schema separation.
• Describe the role of object name resolution.
• Grant permissions at the schema level.
Overview of User-schema Separation
Key Points
Schemas are used to contain objects and to provide a security boundary for the assignment of permissions.
Schemas
In SQL Server, schemas are essentially used as containers for objects, somewhat like a folder is used to hold files at the operating system level. Since their change of behavior in SQL Server 2005, schemas can be used to contain objects such as tables, stored procedures, functions, types, and views. Schemas are created with the CREATE SCHEMA statement, and schemas form part of the multi-part naming convention for objects. In SQL Server, an object is formally referred to by:
Server.Database.Schema.Object
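For example, a schema and a contained object might be created like this (the names are illustrative):

```sql
-- Create a schema owned by dbo, then create a table inside it.
CREATE SCHEMA Reports AUTHORIZATION dbo;
GO
CREATE TABLE Reports.MonthlySales
(
    SalesMonth date NOT NULL,
    TotalSales money NOT NULL
);
```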
Security Boundary
Schemas can be used to simplify the assignment of permissions. An example of applying permissions at the schema level would be to assign the EXECUTE permission on a schema to a user. The user could then execute all stored procedures within the schema. This simplifies the granting of permissions because there is no need to set up individual permissions on each stored procedure.
Upgrading Older Applications<br />
If you are upgrading applications from <strong>SQL</strong> Server 2000 and earlier versions, it is important to understand<br />
that the naming convention changed when schemas were introduced. Previously, names were of the form:<br />
Server.<strong>Database</strong>.Owner.Object
Objects still have owners but the owner's name does not form a part of the multi-part naming convention<br />
from <strong>SQL</strong> Server 2005 onwards. When upgrading databases from earlier versions, <strong>SQL</strong> Server will<br />
automatically create a schema with the same name as existing object owners, so that applications that use<br />
multi-part names will continue to work.<br />
Each user can be assigned a default schema that is used when a user refers to an object without specifying<br />
a schema name.<br />
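For example, a default schema can be assigned when a user is created, or changed later (the user and schema names below are hypothetical, and the first statement assumes a matching login exists):

```sql
-- Create a user whose unqualified object names resolve to Marketing first:
CREATE USER Mod11User FOR LOGIN Mod11User WITH DEFAULT_SCHEMA = Marketing;

-- Or change the default schema of an existing user:
ALTER USER Mod11User WITH DEFAULT_SCHEMA = Marketing;
```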
Built-in Schemas<br />
The dbo and guest users were discussed in the previous module; each has an associated schema of the same<br />
name. The sys and INFORMATION_SCHEMA schemas are reserved for system objects. You cannot create<br />
objects in the sys and INFORMATION_SCHEMA schemas, and you cannot drop those schemas.
Object Name Resolution<br />
Key Points<br />
It is important to use at least two-part names when referring to objects in <strong>SQL</strong> Server code such as stored<br />
procedures, functions, and views.<br />
Object Name Resolution<br />
When object names are referred to in the code, <strong>SQL</strong> Server must determine which underlying objects are<br />
being referred to. For example, consider the following statement:<br />
SELECT ProductID, Name, Size FROM Product;<br />
More than one Product table could exist in separate schemas within the database. When single-part names<br />
are used, SQL Server must then determine which Product table is being referred to.<br />
Most users are assigned a default schema, but not all. Default schemas are not assigned to users based on<br />
certificates, but they do apply to users created from standard Windows and SQL Server logins. SQL Server<br />
2012 introduced the ability to assign a default schema to a Windows group. Users without an assigned<br />
default schema use the dbo schema as their default.<br />
Note Creating users from certificates is an advanced topic that is beyond the scope of this<br />
course but is mentioned here for completeness.
Locating Objects<br />
When locating an object, <strong>SQL</strong> Server will first check the user's default schema. If the object is not found,<br />
<strong>SQL</strong> Server will then check the dbo schema to try to locate the object.<br />
It is important to include schema names when referring to objects instead of depending upon schema<br />
name resolution, such as in this modified version of the previous statement:<br />
SELECT ProductID, Name, Size FROM Production.Product;<br />
Apart from rare situations, using multi-part names leads to more reliable code that does not depend upon<br />
default schema settings.
Granting Permissions at the Schema Level<br />
Key Points<br />
Instead of assigning individual permissions on tables, views, and stored procedures, permissions can be<br />
granted at the schema level.<br />
Slide Example<br />
In the first example on the slide, EXECUTE permission on the Marketing schema is granted to Mod11User.<br />
This means that Mod11User could then execute all stored procedures and scalar functions within the<br />
schema.<br />
In the second example on the slide, SELECT permission on the DirectMarketing schema is granted to<br />
Mod11User. This means that Mod11User could then select from all tables, views, and table-valued<br />
functions in the schema.<br />
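The two slide examples correspond to statements of the following form (a sketch reconstructed from the description above):

```sql
-- EXECUTE on the Marketing schema: Mod11User can run all stored
-- procedures and scalar functions in that schema.
GRANT EXECUTE ON SCHEMA::Marketing TO Mod11User;

-- SELECT on the DirectMarketing schema: Mod11User can select from all
-- tables, views, and table-valued functions in that schema.
GRANT SELECT ON SCHEMA::DirectMarketing TO Mod11User;
```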
Question: Why would granting permissions at the schema level be easier to manage?
Demonstration 3A: Configuring Permissions at the Schema Level<br />
Demonstration Steps<br />
1. If Demonstration 1A or 2A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_11_PRJ\10775A_11_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql and<br />
21 – Demonstration 2A.sql script files from within Solution Explorer.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.<br />
Question: The user has EXECUTE at the schema level and DENY at the procedure level.<br />
Should execution be permitted?
Lab 11: Authorizing Users to Access Resources<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_11_PRJ\10775A_11_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You have created the <strong>SQL</strong> Server logins and <strong>Database</strong> users and assigned them to appropriate roles. You<br />
now need to grant permissions to the database users and roles so that users can access the resources they<br />
need within the MarketDev database, based on the supplied security requirements.
Supporting Documentation<br />
Existing Windows User and Group Structure<br />
ITSupport SalesPeople CreditManagement HumanResources CorporateManagers<br />
David.Alexander X X<br />
Jeff.Hay X<br />
Palle.Petersen X<br />
Terry.Adams X<br />
Darren.Parker X X<br />
Mike.Ray X<br />
April.Reagan X<br />
Jamie.Reding X<br />
Darcy.Jayne X<br />
Naoki.Sato X<br />
Bjorn.Rettig X X<br />
Don.Richardson X<br />
Wendy.Kahn X<br />
Neil.Black X X<br />
Madeleine.Kelly X
Pre-existing Security Configuration<br />
• The following Windows group logins and database users have been created:<br />
• AdventureWorks\ITSupport<br />
• AdventureWorks\SalesPeople<br />
• AdventureWorks\CreditManagement<br />
• AdventureWorks\HumanResources<br />
• AdventureWorks\CorporateManagers<br />
• The following Windows logins and database users have been created:<br />
• AdventureWorks\Jeff.Hay<br />
• AdventureWorks\April.Reagan<br />
• AdventureWorks\Darren.Parker<br />
• The following <strong>SQL</strong> logins have been created:<br />
• PromoteApp<br />
• DBMonitorApp<br />
• The following server role assignment has been made:<br />
• Jeff Hay sysadmin<br />
• The following fixed database roles member assignments have been made:<br />
• AdventureWorks\ITSupport db_owner<br />
• DBMonitorApp db_datareader<br />
• The following user-defined database roles member assignments have been made:<br />
• AdventureWorks\SalesPeople SalesTeam<br />
• AdventureWorks\CreditManagement SalesTeam<br />
• AdventureWorks\CorporateManagers SalesTeam<br />
• AdventureWorks\Darren.Parker SalesManagers<br />
Security Requirements<br />
1. The senior DBA Jeff Hay should have full access to and control of the entire Proseware server instance.<br />
2. All ITSupport group members should have full access to and control of the MarketDev database.<br />
3. Proseware uses an application called DBMonitor from Trey Research. This application requires a <strong>SQL</strong><br />
login called DBMonitorApp, which requires the ability to read but not update all objects in the<br />
MarketDev database.<br />
4. All CorporateManagers group members perform periodic Strength, Weakness, Opportunity, and<br />
Threat (SWOT) analysis. For this they need to be able to both read and update rows in the<br />
DirectMarketing.Competitor table.
5. All SalesPeople group members should be able to read data from all tables in the DirectMarketing<br />
schema, except April Reagan who is a junior work experience student.<br />
6. Only ITSupport group members and members of the CreditManagement group should be able to<br />
update the Marketing.CampaignBalance table directly.<br />
7. Within the company, members of the SalesPeople group, the CreditManagement group, and the<br />
CorporateManagers group are referred to as sales team members.<br />
8. All sales team members should be able to read rows in the Marketing.CampaignBalance table.<br />
9. All sales team members should be able to read rows in the DirectMarketing.Competitor table.<br />
10. The Sales Manager should be able to read and update the Marketing.SalesTerritory table.<br />
11. All HumanResources group members should be able to read and update rows in the<br />
Marketing.SalesPerson table.<br />
12. The Sales Manager should be able to execute the Marketing.MoveCampaignBalance stored<br />
procedure.<br />
13. All sales team members should be able to execute all stored procedures in the DirectMarketing<br />
schema.<br />
Exercise 1: Assign Schema-level Permissions<br />
Scenario<br />
You have been supplied with a list of security requirements. Some of these requirements can be met using<br />
permissions assigned at the schema level. Even though it is easy to grant substantial permissions at the<br />
schema level, you should be careful to only grant permissions that are required.<br />
The main tasks for this exercise are as follows:<br />
1. Review the security requirements that have been updated from the previous module.<br />
2. Assign the required permissions.<br />
Task 1: Review the security requirements that have been updated from the previous<br />
module<br />
• Review the supplied requirements in the supporting documentation for the exercise.<br />
• Determine the permissions that should be assigned at the schema level.<br />
Task 2: Assign the required permissions<br />
• Assign the required permissions at the schema level.<br />
Results: After this exercise, you should have assigned the required schema-level permissions.
Exercise 2: Assign Object-level Permissions<br />
Scenario<br />
You have been supplied with a list of security requirements. Some of these requirements need to be met<br />
using permissions assigned at the object level. In this exercise, you will assign the required object-level<br />
permissions.<br />
The main tasks for this exercise are as follows:<br />
1. Review the security requirements.<br />
2. Assign the required permissions.<br />
Task 1: Review the security requirements<br />
• Review the supplied requirements in the supporting documentation for the exercise.<br />
• Determine the permissions that should be assigned at the object level. This would include permissions<br />
on tables, views, stored procedures, and functions where required.<br />
Task 2: Assign the required permissions<br />
• Assign the required permissions at the object level.<br />
Results: After this exercise, you should have assigned the required object-level permissions.<br />
Challenge Exercise 3: Test Permissions (Only if time permits)<br />
Scenario<br />
You need to test some of your permission assignments. In particular, you need to test that salespeople can<br />
select rows from the Marketing.CampaignBalance table. However, you will also need to test that the work<br />
experience student April Reagan cannot select rows from that table even though she is a member of the<br />
SalesPeople group.<br />
The main task for this exercise is as follows:<br />
1. Design and execute a test.<br />
Task 1: Design and execute a test<br />
• Design and execute a test to show that Darcy Jayne can select rows from the<br />
Marketing.CampaignBalance table.<br />
• Design and execute a test to show that April Reagan cannot select rows from the<br />
Marketing.CampaignBalance table.<br />
Results: After this exercise, you should have tested the required permissions.
Module Review and Takeaways<br />
Review Questions<br />
1. What permission needs to be assigned to a function before it can be used in a CHECK constraint?<br />
2. What permission should be assigned to a schema to allow a user to read the data in all the tables,<br />
views, and table-valued functions?<br />
Best Practices<br />
1. Always assign the least possible privileges that users need.<br />
2. Test code as a standard user instead of testing as an administrator.<br />
3. Use EXECUTE AS and REVERT for quick testing of user permissions.
Module 12<br />
Auditing <strong>SQL</strong> Server Environments<br />
Contents:<br />
Lesson 1: Options for Auditing Data Access in <strong>SQL</strong> 12-3<br />
Lesson 2: Implementing <strong>SQL</strong> Server Audit 12-12<br />
Lesson 3: Managing <strong>SQL</strong> Server Audit 12-26<br />
Lab 12: Auditing <strong>SQL</strong> Server Environments 12-31<br />
12-2 Auditing <strong>SQL</strong> Server Environments<br />
Module Overview<br />
One of the most important aspects of configuring security for <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> systems is<br />
ensuring that auditing and compliance requirements are met. Organizations may need to meet a variety<br />
of compliance goals. Choosing the appropriate configuration for <strong>SQL</strong> Server will often be a key<br />
component in meeting those goals.<br />
SQL Server 2008 introduced the SQL Server Audit feature in the Enterprise edition. (Some audit features<br />
are part of all editions of SQL Server 2012.) The audit feature provides the ability to perform types of<br />
auditing that were not possible in earlier versions of the product. In this module, you will see the available<br />
options for auditing within SQL Server, see how to implement the SQL Server Audit feature, and learn to<br />
manage that feature.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Describe the options for auditing data access in <strong>SQL</strong> Server.<br />
• Implement <strong>SQL</strong> Server Audit.<br />
• Manage <strong>SQL</strong> Server Audit.
Lesson 1<br />
Options for Auditing Data Access in <strong>SQL</strong> Server<br />
Prior to SQL Server 2008, a variety of auditing options were available in SQL Server. These options are still<br />
available in SQL Server 2012 and may form part of your auditing strategy. In general, no single feature<br />
meets all possible auditing requirements, and a combination of features often needs to be used. In this<br />
lesson, you will learn about the options available.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe the need for auditing.<br />
• Use C2 audit mode.<br />
• Describe the Common Criteria Audit Option.<br />
• Configure triggers for auditing.<br />
• Describe the use of <strong>SQL</strong> trace for auditing.
Discussion: Auditing Data Access<br />
Key Points<br />
Question: Why is auditing required?<br />
Question: What methods have you used for auditing?<br />
Question: What are the limitations of the methods you have used?<br />
Question: Which standards that require auditing does your organization need to comply<br />
with?
Using C2 Audit Mode<br />
Key Points<br />
C2 refers to a set of security policies that define how a secure system operates. Certification applies to a<br />
particular installation, including the hardware, software, and the environment that the system operates in.<br />
Products do not become C2 certified. Sites become C2 certified.<br />
C2 Certification<br />
The Windows NT® operating system (server and workstation editions) was first listed on the National Security<br />
Agency (NSA) Evaluated Products List (EPL) in 1995. This acknowledged that those products could be<br />
configured in a way that would enable sites using them to become certified. Windows systems have a long<br />
history of being configurable in a compliant manner.<br />
C2 is one of a series of security ratings, involving A, B, C, and D level secure products. These ratings were<br />
published by the National Computer Security Center (NCSC) in a document called the Trusted Computer<br />
System Evaluation Criteria (TCSEC). This document was commonly referred to in the industry as the<br />
"orange book".<br />
Note As a curious side-note, the "orange book" was part of the "rainbow series".<br />
The security policy in C2 is known as Discretionary Access Control. Users of the system:<br />
• Own objects.<br />
• Have control over the protection of the objects they own.<br />
• Are accountable for all their access-related actions.<br />
By today’s standards, the C2 requirements are relatively easy for sites to attain.
<strong>SQL</strong> Server and C2<br />
SQL Server can be configured to meet C2 requirements. The system configuration option ‘c2 audit mode’<br />
can be enabled using the sp_configure system stored procedure. While easy to configure, this option is<br />
now rarely used and is rarely appropriate. Enabling the C2 audit option can have a negative<br />
performance impact on the server through the generation of large volumes of event information.<br />
Most customers that configure this option do so without realizing what they are enabling and eventually<br />
(sometimes sooner rather than later) end up running out of disk space.<br />
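For reference, the option is enabled as follows. It is an advanced option and requires a restart of the instance before it takes effect; given the drawbacks described above, this is shown for illustration only.

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'c2 audit mode', 1;
RECONFIGURE;
-- The instance must be restarted for C2 audit mode to take effect.
```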
For practical purposes, C2 has now been superseded by Common Criteria compliance, which is described<br />
in the next topic.
Common Criteria Audit Option<br />
Key Points<br />
C2 ratings were U.S. based. Common Criteria is an international standard that was ratified by more than<br />
twenty nations in 1999 and has superseded the C2 rating as a requirement in most standards.<br />
It is maintained on an ongoing basis by more than twenty countries and was adopted by the International<br />
Organization for Standardization (ISO) as standard ISO/IEC 15408.<br />
‘common criteria compliance enabled’ Option<br />
<strong>SQL</strong> Server provides a server option ‘common criteria compliance enabled’ that can be set using the<br />
sp_configure system stored procedure. It is available in the Enterprise edition for production use. (It is also<br />
available in the Developer and Evaluation editions for non-production use.) In addition to enabling the<br />
common criteria compliance enabled option, you must also download and run a script that finishes<br />
configuring SQL Server to comply with Common Criteria Evaluation Assurance Level 4+ (EAL4+). You can<br />
download this script from the Microsoft <strong>SQL</strong> Server Common Criteria Web site.<br />
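Enabling the option follows the usual sp_configure pattern; it is an advanced option, and a restart of the instance is required before it takes effect.

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'common criteria compliance enabled', 1;
RECONFIGURE;
-- Restart the instance, then run the downloaded EAL4+ script.
```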
When this option is enabled, three changes occur to how <strong>SQL</strong> Server operates:<br />
• Residual Information Protection (RIP): memory is always overwritten with a known bit pattern before<br />
being reused.<br />
• Ability to view login statistics: auditing of logins is automatically enabled.<br />
• Column GRANT does not override table DENY: this changes the behavior of the permission system.
The implementation of RIP increases security but will negatively impact the performance of the system.<br />
Question: Why is there a need to change the behavior in which a GRANT at the column level<br />
overrides a DENY at the table level?
Using Triggers for Auditing<br />
Key Points<br />
Triggers can play an important role in auditing. Prior to <strong>SQL</strong> Server 2008, many actions could only be<br />
audited via triggers.<br />
<strong>SQL</strong> Server 2005 SP2 introduced logon triggers. These allowed tracking more details of logons and also<br />
allowed rolling back logons based on business or administrative logic.<br />
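A minimal sketch of a logon trigger follows; the login name and the business rule (no connections outside office hours) are hypothetical. Rolling back inside a logon trigger denies the connection attempt.

```sql
CREATE TRIGGER DenyAfterHoursLogon
ON ALL SERVER
FOR LOGON
AS
BEGIN
    -- Deny the hypothetical ReportUser login outside 08:00-18:59.
    IF ORIGINAL_LOGIN() = N'ReportUser'
       AND DATEPART(hour, SYSDATETIME()) NOT BETWEEN 8 AND 18
        ROLLBACK;
END;
```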
Triggers are not perfect though:<br />
• Performance of the system is impacted by triggers. Prior to <strong>SQL</strong> Server 2005, the inserted and deleted<br />
virtual tables in triggers were implemented in a similar manner to views over the transaction log. They<br />
did not offer high performance. From SQL Server 2005 onwards, the internal structure of these<br />
tables changed. They are now based on a row version table that resides in the tempdb<br />
database. Triggers that use these tables operate much more quickly than in prior versions but there<br />
can be a significant impact on the performance of the tempdb database that needs to be considered.<br />
• Triggers can be disabled. This is a significant issue for auditing.<br />
• Users were requesting SELECT triggers. They did not only want to track data modifications. Many<br />
users in high security environments wanted to see not only the commands that were executed to<br />
retrieve data but also the data that was retrieved.<br />
• Triggers have a nesting limit of thirty-two levels beyond which they do not work.<br />
• Recursive triggers can be disabled.<br />
• Only limited ability to control trigger firing order is provided. Auditing would normally need to be the<br />
last trigger that fires to make sure that it captures all the changes made by other triggers.<br />
Question: What would you imagine that the term "recursive trigger" might refer to?
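As a sketch of the technique, a DML audit trigger copies details of each modification from the inserted virtual table into an audit table (the Sales tables below are hypothetical):

```sql
CREATE TRIGGER Sales.tr_Audit_OrderUpdate
ON Sales.Orders
AFTER UPDATE
AS
BEGIN
    -- Record who changed which rows, and when.
    INSERT Sales.OrderAudit (OrderID, ChangedBy, ChangedAt)
    SELECT i.OrderID, ORIGINAL_LOGIN(), SYSDATETIME()
    FROM inserted AS i;
END;
```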
Using <strong>SQL</strong> Trace for Auditing<br />
Key Points<br />
Many users have attempted to use <strong>SQL</strong> Server Profiler for auditing because it allows tracing commands<br />
that are sent to <strong>SQL</strong> Server and tracing the errors that are returned. <strong>SQL</strong> Server Profiler can have a<br />
significantly negative performance impact when it is run interactively on production systems.<br />
<strong>SQL</strong> Trace is a set of system stored procedures that are utilized by <strong>SQL</strong> Server Profiler. Executing these<br />
procedures to manage tracing offers a much more lightweight method of tracing, particularly when the<br />
events are well-filtered.<br />
<strong>SQL</strong> Trace can then have a role in auditing. Because it has the ability to capture commands that are sent<br />
to the server, it can be used to audit those commands.<br />
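A minimal server-side trace sketch using these procedures follows; the file path is illustrative, and event 10 with column 1 corresponds to the RPC:Completed event and its TextData column.

```sql
DECLARE @TraceID int, @MaxFileSizeMB bigint = 100, @On bit = 1;

-- Create a stopped trace that writes to a file, add one event/column,
-- then start it (status 1 = start, 0 = stop, 2 = close and delete).
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\AuditTrace', @MaxFileSizeMB;
EXEC sp_trace_setevent @TraceID, 10, 1, @On;
EXEC sp_trace_setstatus @TraceID, 1;
```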
Since SQL Server 2005, SQL Trace has used a server-side tracing mechanism that guarantees that no events are<br />
lost, as long as there is space available on the disk and no write errors occur. If the disk fills or write<br />
errors occur, the trace stops. <strong>SQL</strong> Server continues unless c2 audit mode is also enabled. The possibility of<br />
missing events needs to be considered when evaluating the use of <strong>SQL</strong> Trace for auditing purposes.
Demonstration 1A: Using DML Triggers for Auditing<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_12_PRJ\10775A_12_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.<br />
Question: Can DML triggers be used to audit the reading of data in a table?
Lesson 2<br />
Implementing <strong>SQL</strong> Server Audit<br />
<strong>SQL</strong> Server 2008 introduced the <strong>SQL</strong> Server Audit feature. It was based on a new eventing engine called<br />
Extended Events. In this lesson, you will learn about the core terminology used by Extended Events and<br />
how <strong>SQL</strong> Server Audit has been created as a specific package within Extended Events.<br />
Preparing <strong>SQL</strong> Server Audit for use requires the configuration of a number of objects. In this lesson, each<br />
of these objects is introduced, along with details of how they are configured. Finally, you will see the<br />
dynamic management views (DMVs) and system views that have been introduced to support the <strong>SQL</strong><br />
Server Audit feature.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe the Extended Events infrastructure.<br />
• Describe <strong>SQL</strong> Server Audit.<br />
• Configure <strong>SQL</strong> Server Audit.<br />
• Detail the roles of audit actions and action groups.<br />
• Define audit targets.<br />
• Create audits.<br />
• Create server audit specifications.<br />
• Create database audit specifications.<br />
• Use audit-related DMVs and system views.
Introduction to Extended Events<br />
Key Points<br />
A wide variety of events occurs within the <strong>SQL</strong> Server database engine. For example, a user could execute<br />
a query, the database engine could need to request additional memory, or permissions could be checked.<br />
<strong>SQL</strong> Server 2008 introduced a new feature called Extended Events that allows you to define actions that<br />
should be taken when events occur. As <strong>SQL</strong> Server executes its internal code, it checks to see if an external<br />
user has defined an action that should be taken at that point in the code. If an action is defined, an event<br />
is fired and details of the event are sent to a target location. Targets can be operating system files,<br />
memory-based ring buffers or Windows event logs.<br />
Extended Events is considered to be a lightweight eventing engine as it has very little performance impact<br />
on the database engine that it is monitoring. Extended events can be used for many purposes that <strong>SQL</strong><br />
Trace is currently used for.<br />
Extended Events is important here because SQL Server Audit is based on the Extended Events infrastructure. The<br />
eventing engine that is provided by Extended Events is not tied to particular types of events. The engine is<br />
written in such a way that it can process any type of event.<br />
Configurations of Extended Events are shipped in .exe or .dll files that are called "packages". Packages are<br />
the unit of deployment and installation for Extended Events. A package is a container for all objects that<br />
are part of a particular Extended Events configuration. <strong>SQL</strong> Server Audit is a special package within<br />
Extended Events. You cannot change how it is internally configured. You can change other packages.
Extended Events uses specific terminology for describing the objects that it uses:<br />
• Events: points of interest during the execution of code.<br />
• Targets: places that the trace details are sent to (such as files).<br />
• Actions: responses that can be made to events (for example, one type of action captures execution<br />
plans to include in the trace).<br />
• Types: definitions of the objects that Extended Events works with.<br />
• Predicates: dynamic filters that are applied to the event capture.<br />
• Maps: mappings of values to strings (for example, the mapping of codes to descriptions).
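The terms can be tied together in one illustrative event session using the SQL Server 2012 syntax (the session name and file path are hypothetical):

```sql
-- Event: sql_statement_completed; action: capture the statement text;
-- predicate: only statements longer than one second (duration is in
-- microseconds); target: an event file on disk.
CREATE EVENT SESSION LongStatements ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.sql_text)
    WHERE duration > 1000000
)
ADD TARGET package0.event_file (SET filename = N'C:\Traces\LongStatements.xel');
GO
ALTER EVENT SESSION LongStatements ON SERVER STATE = START;
```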
Introduction to <strong>SQL</strong> Server Audit<br />
Key Points<br />
SQL Server Audit was introduced in SQL Server 2008 to address compliance issues. The Enterprise edition<br />
of SQL Server 2012 provides full functionality; other editions provide basic functionality and are<br />
limited to defining server audit specifications.<br />
<strong>SQL</strong> Server Audit<br />
It is important to be aware that <strong>SQL</strong> Server Audit is the name of the feature and the name of one of the<br />
objects that are part of the feature.<br />
An audit is a definition of where the results of the auditing process are sent. This might seem counterintuitive<br />
at first, given that the name sounds like an action that you perform, not a location for the results<br />
of the action. An audit is created at the instance level and multiple audits can be created per instance.<br />
The results of an audit are sent to a target.<br />
Note The term "target" has the same meaning for <strong>SQL</strong> Server Audit as it does for<br />
Extended Events.<br />
Audit Specifications<br />
Server and database audit specifications determine the actions to audit. There are predefined sets of<br />
actions called "action groups". The use of these action groups avoids the need to configure large numbers<br />
of individual audit actions.
Configuring <strong>SQL</strong> Server Audit<br />
Key Points<br />
Configuring <strong>SQL</strong> Server Audit is a multi-step process:<br />
Step Description<br />
Creating an audit Determines how the results will be processed. For example,<br />
when configuring an audit, you will decide what to do if the<br />
disk space runs out. You will also decide how long <strong>SQL</strong><br />
Server can buffer audit results before writing them to the<br />
target.<br />
Defining the target Determines where the output will be sent.<br />
Creating an audit specification Determines the actions to be audited. These actions can be<br />
at the server or database level.<br />
Enabling the audit and audit<br />
specification<br />
Is the step where the objects are enabled. (Both audits<br />
and audit specifications are created in a disabled state<br />
by default.)<br />
Reading the output events Relates to extracting the output details from the audits.<br />
Several options exist for reading the output events after they are captured:<br />
• Windows event/log file viewers allow reading event log details<br />
• The sys.fn_get_audit_file function returns file-based output as a table that can be queried in T-<strong>SQL</strong>.
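The second option above can be sketched in T-SQL. This is a minimal example, assuming a hypothetical audit folder of C:\Audit\; substitute the FILEPATH configured for your audit.

```sql
-- Minimal sketch: read file-based audit output as a table.
-- The folder C:\Audit\ is an assumption; use your audit's configured FILEPATH.
SELECT event_time,
       action_id,
       succeeded,
       session_server_principal_name,
       statement
FROM sys.fn_get_audit_file('C:\Audit\*', DEFAULT, DEFAULT)
ORDER BY event_time;
```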
Audit Actions and Action Groups<br />
Key Points<br />
Actions are the events that occur that are of interest to the audit. Actions can occur at three levels: server,<br />
database, and audit.<br />
Action Groups<br />
To avoid the need for many individual actions, predefined action groups are provided. These groups<br />
make the setup and management of audits easier.<br />
Examples of action groups are:<br />
• BACKUP_RESTORE_GROUP<br />
• DATABASE_MIRRORING_LOGIN_GROUP<br />
• DATABASE_OBJECT_ACCESS_GROUP<br />
• DBCC_GROUP<br />
• FAILED_LOGIN_GROUP<br />
• LOGIN_CHANGE_PASSWORD_GROUP<br />
Note that a state change of any audit is always audited. This cannot be disabled.<br />
<strong>SQL</strong> Server <strong>2012</strong> introduced a new group called USER_DEFINED_AUDIT_GROUP. Applications can cause<br />
audit events to be written to that group by calling the sp_audit_write system stored procedure.<br />
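As a sketch, an application could write a custom audit event like this. The event id and message are illustrative, and an enabled audit specification containing USER_DEFINED_AUDIT_GROUP is assumed.

```sql
-- Assumes an enabled audit specification containing USER_DEFINED_AUDIT_GROUP.
EXEC sys.sp_audit_write
    @user_defined_event_id = 1000,   -- application-defined id (illustrative)
    @succeeded = 1,                  -- 1 = the audited action succeeded
    @user_defined_information = N'Month-end processing started';
```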
Question: Why would no option exist for disabling the auditing of audit changes?
Defining Audit Targets<br />
Key Points<br />
In the current version, audit output can be sent to one of three targets.<br />
• Results can be sent to a file. File output provides the highest performance and is the easiest option to<br />
configure.<br />
• Results can be sent to the Windows application event log. Avoid sending too much detail to this log<br />
as network administrators tend to dislike applications that write too much content to any of the event<br />
logs.<br />
Note Be cautious about using the Windows application event log as an output target for<br />
sensitive information as any authenticated user can read the contents of that log.<br />
• Results can be sent to the Windows security event log. The security event log is a secure output<br />
option but requires the <strong>SQL</strong> Server service account to be added to the "Generate Security Audits"<br />
policy before it can be used.<br />
Note If it is important for <strong>SQL</strong> Server administrators to have access to the contents of the<br />
audit, consider whether the use of the security event log is appropriate.<br />
Question: Why would many <strong>SQL</strong> Server DBAs have difficulty working with audit entries in<br />
the Windows security event log?
Creating Audits<br />
Key Points<br />
When you create an audit, you make decisions about how <strong>SQL</strong> Server will process the results that are sent<br />
to the audit target. Audits can be created using the GUI in SSMS or via the CREATE SERVER AUDIT<br />
command in T-<strong>SQL</strong>.<br />
Audit Configuration Options<br />
The name of an audit will often relate to what the audit will contain, the date and time when<br />
the audit was created, or a combination of both.<br />
After configuring a name, configuring a queue delay is particularly important. The queue delay indicates<br />
(in milliseconds) how long <strong>SQL</strong> Server can buffer the audit results before flushing them to the target.<br />
Note The value chosen for queue delay is a trade-off between security and performance.<br />
If a server failure occurs, results that are in the buffer and not yet flushed to the target can<br />
be lost. A value of zero for queue delay will cause synchronous writes as events occur. This<br />
avoids the chance of losing events on failure but can impact performance significantly.<br />
For serious production auditing, the option to shut down the server on audit failure should be selected.<br />
<strong>SQL</strong> Server <strong>2012</strong> introduced a new option to fail the operation that fired the audit, rather than shutting<br />
down the entire server instance.<br />
Note If the shutdown option is chosen, <strong>SQL</strong> Server may fail to initiate if auditing cannot<br />
function. In the next lesson, you will see how to deal with this situation.
You need to choose a target for the output of your audit. The available audit targets were discussed in a<br />
previous topic.<br />
Note On Windows® XP, the security event log is not available as a destination.<br />
Note Each audit can be the target of at most one server audit specification and one<br />
database audit specification.<br />
Question: Why is it recommended to select the option to shut down server on audit failure?
Creating Server Audit Specifications<br />
Key Points<br />
Creating a server audit specification can be performed with the GUI or T-<strong>SQL</strong>. Server audit specifications<br />
are created in a disabled state by default. Audit objects, including audit specifications, are usually left<br />
disabled until all audit objects have been created.<br />
Server Audit Specification<br />
A server audit specification details the actions to be audited. You can choose either action groups or<br />
individual actions and objects. In the example shown in the slide, a server audit specification is being<br />
created using the CREATE SERVER AUDIT SPECIFICATION statement. The configuration of the same<br />
specification using the GUI is also shown.<br />
The name of the specification is FailedLoginSpec and the data collected from the specification will be sent<br />
to the Audit-<strong>2012</strong>1222-171544 audit. The action group to be audited is the FAILED_LOGIN_GROUP.<br />
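Based on the description above, the slide example could be reconstructed along these lines. This is a sketch; it assumes the audit already exists.

```sql
-- Reconstruction of the example described above (sketch only).
CREATE SERVER AUDIT SPECIFICATION FailedLoginSpec
FOR SERVER AUDIT [Audit-20121222-171544]
ADD (FAILED_LOGIN_GROUP)
WITH (STATE = ON);   -- or leave disabled until all audit objects are created
```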
Question: Why would enabling logging of failed logins have potential risks to availability?
Creating <strong>Database</strong> Audit Specifications<br />
Key Points<br />
Creating a database audit specification can also be performed with the GUI or T-<strong>SQL</strong>. <strong>Database</strong> audit<br />
specifications are also created in a disabled state by default. <strong>Database</strong> audit specifications can only be<br />
created in Enterprise edition.<br />
<strong>Database</strong> Audit Specification<br />
A database audit specification details the actions to be audited. You can choose either action groups or<br />
individual actions and objects.<br />
In the example shown in the slide, a database audit specification is being created using the CREATE<br />
DATABASE AUDIT SPECIFICATION statement. The configuration of the same specification using the GUI is<br />
also shown.<br />
The name of the specification is BackupRestoreSpec and the data collected from the specification will be<br />
sent to the Audit-<strong>2012</strong>1222-171544 audit. The action group to be audited is the<br />
BACKUP_RESTORE_GROUP.
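Similarly, the database audit specification described above might be created as follows. This is a sketch; it must run in the context of the database to be audited, and the audit must already exist.

```sql
-- Reconstruction of the example described above (sketch only).
-- Execute in the database that is to be audited.
CREATE DATABASE AUDIT SPECIFICATION BackupRestoreSpec
FOR SERVER AUDIT [Audit-20121222-171544]
ADD (BACKUP_RESTORE_GROUP)
WITH (STATE = ON);
```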
Audit-related DMVs and System Views<br />
Key Points<br />
<strong>SQL</strong> Server provides a number of dynamic management views (DMVs) and system views that can help you<br />
manage <strong>SQL</strong> Server Audit.<br />
The following DMVs and system views are available:<br />
DMV/View Description<br />
sys.dm_server_audit_status Returns a row for each server audit indicating the<br />
current state of the audit<br />
sys.dm_audit_actions Returns a row for every audit action that can be<br />
reported in the audit log and every action group<br />
that can be configured as part of an audit<br />
sys.dm_audit_class_type_map Returns a table that maps the class types to class<br />
descriptions<br />
sys.server_audits Contains one row for each <strong>SQL</strong> Server audit in a<br />
server instance<br />
sys.server_file_audits Contains extended information about the file<br />
audit type in a <strong>SQL</strong> Server audit<br />
sys.server_audit_specifications Contains information about the server audit<br />
specifications in a <strong>SQL</strong> Server audit
sys.server_audit_specification_details Contains information about the server audit<br />
specification details (actions) in a <strong>SQL</strong> Server audit<br />
sys.database_audit_specifications Contains information about the database audit<br />
specifications in a <strong>SQL</strong> Server audit<br />
sys.database_audit_specification_details Contains information about the database audit<br />
specification details (actions) in a <strong>SQL</strong> Server audit<br />
A number of the system views will be used in the upcoming demonstrations.
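For example, a quick health check might join the catalog view to the DMV. This is a sketch using only the views listed above.

```sql
-- Show each audit with its current runtime state and output file path.
SELECT a.name,
       s.status_desc,
       s.audit_file_path
FROM sys.server_audits AS a
JOIN sys.dm_server_audit_status AS s
    ON a.audit_id = s.audit_id;
```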
Demonstration 2A: Using <strong>SQL</strong> Server Audit<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_12_PRJ\10775A_12_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 21 – Demonstration 2A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.<br />
Question: What are the three possible event targets for <strong>SQL</strong> Server Audit?
Lesson 3<br />
Managing <strong>SQL</strong> Server Audit<br />
It is important to be able to retrieve the results from the audits and to understand a few aspects of<br />
ongoing management of <strong>SQL</strong> Server Audit. In particular, you will investigate issues related to migrating<br />
databases between servers and see how to restart servers if <strong>SQL</strong> Server refuses to start due to an audit<br />
failure.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Retrieve audits.<br />
• Work with the audit record structure.<br />
• Identify potential <strong>SQL</strong> Server audit issues.
Retrieving Audits<br />
Key Points<br />
No special configuration is needed to view audits sent to event logs. SSMS provides a log reader for these<br />
targets.<br />
Retrieving File Output<br />
For logs that are sent to a file, <strong>SQL</strong> Server provides a function that returns the contents of the file-based<br />
logs as a table that you can query with T-<strong>SQL</strong>.<br />
Note The filename that you provide to the FILEPATH parameter when creating a server<br />
audit is actually the name of a folder.<br />
The folder that contains the audit logs often contains multiple audit files. The sys.fn_get_audit_file<br />
function is used to retrieve those files. It takes three parameters: the file_pattern, the initial_file_name, and<br />
the audit_record_offset. The file_pattern provided can be in one of three formats:<br />
Format Description<br />
\* Collects all audit files in the specified<br />
location<br />
\LoginsAudit_{GUID} Collects all audit files that have the specified<br />
name and GUID pair<br />
\LoginsAudit_{GUID}_00_29384.sqlaudit Collects a specific audit file
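The first format in the table might be used as follows. The folder path is an assumption, and {GUID} stands for the actual GUID of the audit as in the table above.

```sql
-- Collect all audit files in the folder (assumed path):
SELECT *
FROM sys.fn_get_audit_file('C:\Audit\AuditLog\*', DEFAULT, DEFAULT);
```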
Working with the Audit Record Structure<br />
Key Points<br />
The audit record structure is detailed in Books Online under the topic "sys.fn_get_audit_file<br />
(Transact-<strong>SQL</strong>)".<br />
Audit records must be able to be stored in system event logs as well as in files. Because of this<br />
requirement, the record format is limited in size by the rules of those event logging systems.<br />
Character fields are split into 4000-character chunks, and the chunks are spread across a number of<br />
entries.<br />
This means that a single event can generate multiple audit entries. A sequence_number column is provided<br />
to indicate the order of multiple row entries.
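A query can use this column to keep split entries in order. This is a sketch with an assumed file path.

```sql
-- Reassemble split audit records: the chunks of one event are ordered
-- by sequence_number (the folder path C:\Audit\ is an assumption).
SELECT event_time,
       sequence_number,
       statement
FROM sys.fn_get_audit_file('C:\Audit\*', DEFAULT, DEFAULT)
ORDER BY event_time, sequence_number;
```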
Potential <strong>SQL</strong> Server Audit Issues<br />
Key Points<br />
There are a number of potential issues to consider with <strong>SQL</strong> Server audit.<br />
• Each audit is identified by a GUID. When a database is restored or attached on a server, an attempt is<br />
made to match the GUID in the database with the GUID of the audit on the server. If no match<br />
occurs, auditing will not work until the situation is corrected by executing the CREATE SERVER AUDIT<br />
command to set the appropriate GUID.<br />
• If databases are attached to editions of <strong>SQL</strong> Server that do not support the same level of audit<br />
capability, the attach works but the audit is ignored.<br />
• Mirrored servers introduce an issue similar to mismatched GUIDs. The mirror partner must have a<br />
server audit with the same GUID. You can create this by using the CREATE SERVER AUDIT command<br />
and supplying the GUID value to match the value on the primary server.<br />
• In general, the performance impact of audit writes must be considered. If disk space fills up, <strong>SQL</strong><br />
Server may not start. If so, you may need to force entry to it via a single-user startup with the -f<br />
startup parameter.<br />
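For the mismatched-GUID cases above, a matching audit can be created by supplying the GUID explicitly. This is a sketch; the audit name, folder path, and GUID value are all illustrative assumptions.

```sql
-- Sketch: recreate an audit with a specific GUID on a mirror partner
-- or restore target. All identifiers shown are illustrative.
CREATE SERVER AUDIT MirrorPartnerAudit
TO FILE ( FILEPATH = 'C:\Audit\' )
WITH ( AUDIT_GUID = '98CD9A14-BE3A-4ECB-A2B3-1C3E2708D0C2' );
```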
Question: Why would audits be identified by a GUID as well as a name?
Demonstration 3A: Viewing the Output of a File-based Audit<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_12_PRJ\10775A_12_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.<br />
Question: Why are there two entries in the audit log?
Lab 12: Auditing <strong>SQL</strong> Server Environments<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_12_PRJ\10775A_12_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You have authorized users to access the Proseware instance. Your Compliance Department has provided<br />
you with details of the auditing requirements for both the Proseware server instance and for the<br />
MarketDev database. The auditing requirements include the need to audit the activities against tables in<br />
the MarketDev database that contain sensitive information. In this lab, you will implement a strategy to<br />
enable appropriate auditing.<br />
If you have sufficient time, you need to test the audit strategy and write a query to extract audit records.
Supporting Documentation<br />
Audit Requirements from the Compliance Department<br />
1. All audit records should be written to the folder C:\Audit\AuditLog.<br />
2. Audit records should be written as quickly as possible; however, the loss of up to two seconds of audit<br />
records is the maximum permitted in the event of a failure.<br />
3. The server instance should not continue to operate if auditing is not occurring.<br />
4. The name of the audit should be Proseware Compliance Audit.<br />
5. There is no limit to the number of audit files that may be created; however, each audit file should be<br />
limited to 1 GB in size.<br />
6. At the server level, the following items need to be audited:<br />
• Failed login attempts<br />
• Changes to the membership of server roles<br />
• Any changes to server logins (principals)<br />
• Any changes to passwords<br />
7. At the MarketDev database level, the following items need to be audited:<br />
• Changes to the membership of database roles<br />
• Backups and restores of the database<br />
• Changes to any permissions within the database<br />
• Any changes to database users (principals)<br />
• Any change of database ownership<br />
• Any updates to the Marketing.CampaignBalance table<br />
• Any executions of the Marketing.MoveCampaignBalance stored procedure<br />
Exercise 1: Determine Audit Configuration and Create Audit<br />
Scenario<br />
You need to determine the configuration of a server audit, based on the business security requirements. In<br />
this exercise, you will create the required Server Audit.<br />
The main tasks for this exercise are as follows:<br />
1. Review the requirements.<br />
2. Create the server audit.<br />
Task 1: Review the requirements<br />
• Review the supplied requirements in the supporting documentation for the exercise.
Task 2: Create the server audit<br />
• Determine the configuration of the required server audit.<br />
• Create the server audit using <strong>SQL</strong> Server Management Studio.<br />
• Enable the server audit.<br />
Results: After this exercise, you should have created the required server audit.<br />
Exercise 2: Create Server Audit Specifications<br />
Scenario<br />
You need to determine which of the business requirements can be met via server audit specifications. You<br />
will then determine the required Server Audit Specifications and create them.<br />
The main tasks for this exercise are as follows:<br />
1. Review the requirements.<br />
2. Create the server audit specifications.<br />
Task 1: Review the requirements<br />
• Review the supplied requirements in the supporting documentation for the exercise.<br />
Task 2: Create the server audit specifications<br />
• Determine the required server audit specifications.<br />
• Create the server audit specifications using <strong>SQL</strong> Server Management Studio.<br />
• Enable the server audit specifications.<br />
Results: After this exercise, you should have created the required server audit specification.<br />
Exercise 3: Create <strong>Database</strong> Audit Specifications<br />
Scenario<br />
Some of the audit requirements will require database audit specifications to be created. In this exercise,<br />
you will determine which of the audit requirements could be met by database audit specifications. You<br />
will then create those database audit specifications.<br />
The main tasks for this exercise are as follows:<br />
1. Review the requirements.<br />
2. Create the database audit specifications.<br />
Task 1: Review the requirements<br />
• Review the requirements.
Task 2: Create the database audit specifications<br />
• Determine the required database audit specifications.<br />
• Create any required database audit specifications using <strong>SQL</strong> Server Management Studio.<br />
• Enable any database audit specifications that you created.<br />
Results: After this exercise, you should have created the required database audit specifications.<br />
Challenge Exercise 4: Test Audit Functionality (Only if time permits)<br />
Scenario<br />
You need to check that the auditing you have set up is functioning as expected. You will execute a test<br />
workload script and then review the captured audit using both the GUI in SSMS and T-<strong>SQL</strong>.<br />
The main tasks for this exercise are as follows:<br />
1. Execute the workload script.<br />
2. Review the captured audit details.<br />
Task 1: Execute the workload script<br />
• From Solution Explorer open and execute the workload script 81 – Lab Exercise 4a.sql.<br />
Task 2: Review the captured audit details<br />
• Review the captured audit details using the View Audit Logs option in <strong>SQL</strong> Server Management<br />
Studio. (This is a right-click option from the Server Audit).<br />
• Write a query to retrieve the audit log details using T-<strong>SQL</strong>.<br />
Results: After this exercise, you should have checked that the auditing works as expected.
Module Review and Takeaways<br />
Review Questions<br />
1. What are the three targets for <strong>SQL</strong> Server audits?<br />
2. When common criteria compliance is enabled in <strong>SQL</strong> Server, what changes about column-level<br />
permissions?<br />
3. You may wish to audit actions by a DBA. How would you know if the DBA stopped the audit while<br />
performing covert actions?<br />
Best Practices<br />
1. Choose the option to shut down <strong>SQL</strong> Server on audit failure. There is usually no point in setting up<br />
auditing and then having situations where events can occur but are not audited. This is particularly<br />
important in higher-security environments.<br />
2. Make sure that file audits are placed on drives with large amounts of free disk space and make sure<br />
that the available disk space is monitored on a regular basis.
Module 13<br />
Automating <strong>SQL</strong> Server <strong>2012</strong> Management<br />
Contents:<br />
Lesson 1: Automating <strong>SQL</strong> Server Management 13-3<br />
Lesson 2: Working with <strong>SQL</strong> Server Agent 13-11<br />
Lesson 3: Managing <strong>SQL</strong> Server Agent Jobs 13-19<br />
Lab 13: Automating <strong>SQL</strong> Server Management 13-26<br />
13-2 Automating <strong>SQL</strong> Server <strong>2012</strong> Management<br />
Module Overview<br />
The tools provided with <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> make administration easy when compared with other<br />
database engines. Even when tasks are easy to perform though, it is common to need to repeat a task<br />
many times. Efficient database administrators learn to automate repetitive tasks. Automating tasks can<br />
help avoid situations where an administrator forgets to execute a task at the required time. Perhaps more<br />
importantly, automating tasks helps to ensure that they are performed consistently<br />
each time they are executed.<br />
<strong>SQL</strong> Server Agent is the service that is provided in all editions of <strong>SQL</strong> Server <strong>2012</strong> (except <strong>SQL</strong> Server<br />
Express Edition) that is responsible for the automation of tasks. A set of tasks that need to be performed is<br />
referred to as a <strong>SQL</strong> Server Agent job. It is important to learn how to create and manage these jobs.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Automate <strong>SQL</strong> Server Management.<br />
• Work with <strong>SQL</strong> Server Agent.<br />
• Manage <strong>SQL</strong> Server Agent jobs.
Lesson 1<br />
Automating <strong>SQL</strong> Server Management<br />
There are many benefits that can be gained from the automation of <strong>SQL</strong> Server management. Most of the<br />
benefits center on the reliable, consistent execution of routine management tasks. <strong>SQL</strong> Server is a flexible<br />
platform that provides a number of ways to automate management but the most important tool for<br />
automation of management is <strong>SQL</strong> Server Agent. All database administrators who work with <strong>SQL</strong><br />
Server need to be very familiar with the configuration and ongoing management of <strong>SQL</strong> Server Agent.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Explain the benefits of automating <strong>SQL</strong> Server management.<br />
• Describe the available options for automating <strong>SQL</strong> Server management and the framework that is<br />
provided with <strong>SQL</strong> Server Agent.<br />
• Describe <strong>SQL</strong> Server Agent.
Benefits of Automating <strong>SQL</strong> Server Management<br />
Key Points<br />
All efficient database administrators automate their routine administrative tasks. Some of the benefits that<br />
can be gained from the automation of <strong>SQL</strong> Server management are as follows:<br />
Reduced Administrative Load<br />
Unfortunately, some administrators that work with <strong>SQL</strong> Server, Windows®, and other tools, see their roles<br />
in terms of a constant stream of repetitive administrative tasks. For example, a Windows administrator at a<br />
University department might receive regular requests to create a large number of user accounts. The<br />
administrator might be happy to create each of these accounts one by one, using the standard tooling. A<br />
more efficient administrator would learn to write a script to create users and execute the script instead of<br />
manually creating the users.<br />
The same sort of situation occurs with routine tasks in <strong>SQL</strong> Server. While these tasks can be performed<br />
individually or manually, efficient database administrators do not do this. They automate all their routine<br />
and repetitive tasks. Automation removes the repetitive workload from the administrators and allows the<br />
administrators to manage larger numbers of systems or to perform higher-value tasks for the<br />
organization.<br />
Reliable Execution of Routine Tasks<br />
When routine tasks are performed manually, there is always a chance that a vital task might be<br />
overlooked. For example, a database administrator could forget to perform database backups.<br />
Automation allows administrators to focus on exceptions that occur during the routine tasks rather than<br />
focusing on the execution of the tasks.
Consistent Execution of Routine Tasks<br />
Another problem that can occur when routine tasks are performed manually is that the tasks may not be<br />
performed the same way each time. Imagine a situation where a database administrator is required to<br />
archive some data from a set of production tables, into a set of history tables every Monday morning. The<br />
new tables need to have the same name as the original tables with a suffix that includes the current date.<br />
While the administrator might remember to perform this task every Monday morning, what is the<br />
likelihood that one of the following errors would occur?<br />
• Copy the wrong tables<br />
• Copy only some of the tables<br />
• Forget what the correct date is when creating the suffix<br />
• Format the date in the suffix incorrectly<br />
• Copy data into the wrong archive table<br />
Anyone who has been involved in ongoing administration of systems would tell you that these and other<br />
problems would occur from time to time, even when the tasks are executed by experienced and reliable<br />
administrators. Automating routine tasks can assist greatly in making sure that they are performed<br />
consistently each time they are executed.<br />
Proactive Management<br />
Once routine tasks are automated, it is easy for a situation to arise where the routine execution of the tasks fails<br />
but no administrator ever notices that the task is failing. For example, there are many tragic tales on the<br />
<strong>SQL</strong> Server community support forums from administrators that automated the backup of their databases,<br />
and where they did not notice that the backups had been failing for a long time, until they needed one of<br />
the backups.<br />
As well as automating your routine tasks, you need to ensure that you create notifications that tell you<br />
when the tasks fail, even if you cannot imagine a situation where the tasks could fail. For example, you<br />
may have created a backup strategy that creates database backups in a given folder. The job may run<br />
reliably for years until another administrator inadvertently deletes or renames the target folder. You need<br />
to know as soon as this problem occurs so that you can rectify the situation.<br />
A more proactive administrator will try to detect potential problems before they occur. For example,<br />
rather than receiving a notification that a job failed because a disk was full, an administrator might<br />
schedule regular checks of available disk space and make sure that a notification is received when<br />
available free space is starting to get too low. <strong>SQL</strong> Server provides alerts on system and performance<br />
conditions.<br />
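As a sketch of such a proactive alert, the following raises a SQL Server Agent alert when tempdb's transaction log is more than 90 percent full. It assumes a default instance, where the performance object prefix is SQLServer:; the alert name is hypothetical.

```sql
-- Alert when tempdb's log exceeds 90% full (default instance assumed).
EXEC msdb.dbo.sp_add_alert
    @name = N'tempdb log nearly full',
    @performance_condition = N'SQLServer:Databases|Percent Log Used|tempdb|>|90';
```

A notification to an operator would then be attached with sp_add_notification.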
Question: What tasks need to be automated on the systems in your organization?
13-6 Automating <strong>SQL</strong> Server <strong>2012</strong> Management<br />
Available Options for Automating <strong>SQL</strong> Server Management<br />
Key Points<br />
The primary method for automation of management, administrative, and other routine tasks when<br />
working with <strong>SQL</strong> Server <strong>2012</strong> is to use <strong>SQL</strong> Server Agent.<br />
Framework for <strong>SQL</strong> Server Agent<br />
The management framework that is supplied by <strong>SQL</strong> Server Agent is based on two core objects:<br />
• Jobs that are used to automate tasks<br />
• Alerts that are used to respond to events<br />
Jobs can be used to schedule a wide variety of task types, including tasks that are required for the<br />
implementation of other <strong>SQL</strong> Server features. For example, the Replication, Change Data Capture (CDC),<br />
Data Collection, and Policy Based Management (PBM) features of <strong>SQL</strong> Server create <strong>SQL</strong> Server Agent<br />
jobs. (Data Collection will be discussed in Module 18).<br />
Note Replication, CDC, and PBM are advanced topics that are out of scope for this course.<br />
The alerting system that is provided by <strong>SQL</strong> Server Agent is capable of responding to a wide variety of<br />
alert types, including <strong>SQL</strong> Server Error Messages, <strong>SQL</strong> Server Performance Counter Events, and Windows<br />
Management Instrumentation (WMI) Alerts.<br />
In response to an alert, an action can be configured, such as the execution of a <strong>SQL</strong> Server Agent job or<br />
sending a notification to an administrator. In SQL Server Agent, administrators who can be notified are called operators. Operators are commonly notified by SMTP-based email. (Alerts are discussed in Module 15).
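For illustration, an operator could be defined as follows (the operator name and email address are hypothetical):

```sql
-- Define an operator that alerts and jobs can notify.
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dba-team@example.com';
```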
Note that there are other <strong>SQL</strong> Server features that can be used to automate complex monitoring<br />
requirements. The Extended Events feature is an example of this but is out of the scope of this training.<br />
Question: Can you think of events that might occur on a <strong>SQL</strong> Server system that you would<br />
want to be alerted about?
Overview of <strong>SQL</strong> Server Agent<br />
Key Points<br />
As mentioned earlier in this module, <strong>SQL</strong> Server Agent is the component of <strong>SQL</strong> Server that is responsible<br />
for automating <strong>SQL</strong> Server administrative tasks. <strong>SQL</strong> Server Agent runs as a Windows service.<br />
Starting <strong>SQL</strong> Server Agent<br />
For <strong>SQL</strong> Server Agent to perform its main role of executing jobs and firing alerts, <strong>SQL</strong> Server Agent needs<br />
to be running constantly. Because of this, <strong>SQL</strong> Server Agent is typically configured to start automatically<br />
when the operating system starts. Note that the default option during <strong>SQL</strong> Server installation is for <strong>SQL</strong><br />
Server Agent to be started manually. The design of the <strong>SQL</strong> Server installation process ensures that<br />
services and components are not installed or started unless they are required. This means that the default<br />
installation option needs to be changed if there is a need for <strong>SQL</strong> Server Agent to be running on your<br />
system.<br />
The start mode for <strong>SQL</strong> Server Agent is configured in the properties of the <strong>SQL</strong> Server Agent service in<br />
<strong>SQL</strong> Server Configuration Manager as shown on the slide. Note that three start modes are configurable:<br />
Start Mode Description<br />
Automatic The service will start when the operating system starts.<br />
Disabled The service will not start, even if you attempt to start it manually.<br />
Manual The service needs to be manually started.
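The current state and start mode of the Agent service can also be checked from T-SQL through the sys.dm_server_services dynamic management view (available from SQL Server 2008 R2 SP1 onward):

```sql
-- Check the SQL Server Agent service state and start mode.
SELECT servicename, startup_type_desc, status_desc, service_account
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server Agent%';
```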
In addition, you can configure the SQL Server Agent service to restart automatically if it stops unexpectedly. The automatic restart option is set on the properties page for SQL Server Agent in SSMS.
For automatic restart to work, the SQL Server Agent service account must be a member of the local Administrators group on the computer where SQL Server is installed, which is not considered a best practice. A better option is to use an external monitoring tool, such as System Center Operations Manager, to monitor the SQL Server Agent service and restart it if necessary.
Question: Why should the SQL Server Agent service always be configured to start automatically?
Demonstration 1A: Working with <strong>SQL</strong> Server Agent<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_13_PRJ\10775A_13_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lesson 2<br />
Working with <strong>SQL</strong> Server Agent<br />
You have seen that <strong>SQL</strong> Server Agent is the primary tool for automating tasks within <strong>SQL</strong> Server. <strong>Database</strong><br />
administrators need to be proficient at creating and configuring <strong>SQL</strong> Server Agent jobs. Jobs can be<br />
created to implement a variety of different types of task and can be categorized for ease of management.<br />
Creating a job involves creating a series of steps that the job will execute, along with the workflow that<br />
determines which steps should be executed, and in which order.<br />
Once the steps that a job needs to take have been determined, you need to determine when the job will<br />
be executed. Most <strong>SQL</strong> Server Agent jobs are run on defined schedules. <strong>SQL</strong> Server provides the ability to<br />
create a flexible set of schedules that can be shared between jobs.<br />
It is important to learn to script jobs that have been created. The scripting of jobs allows the jobs to be<br />
quickly recreated if a failure occurs, and allows the jobs to be recreated in other environments. For<br />
example, the jobs may have been created in a test environment but need to be executed in a production<br />
environment.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Define jobs, job types and job categories.<br />
• Create job steps.<br />
• Schedule jobs for execution.<br />
• Script jobs.
Defining Jobs, Job Step Types and Job Categories<br />
Key Points<br />
SQL Server Agent jobs are composed of a series of operations that are performed in order. In most jobs, the steps are performed sequentially, but an administrator can exercise control over the order of the steps.
By configuring the action to occur on the success and failure of each job step, a workflow can be created<br />
that determines the overall logic flow of the job.<br />
Job Step Types<br />
Note that every job step has an associated type that defines the kind of operation to run. The most<br />
commonly used types are:<br />
• Executing a command line script, batch of commands, or application.<br />
• Executing a T-<strong>SQL</strong> statement.<br />
• Executing a Windows PowerShell® script.<br />
• Executing a <strong>SQL</strong> Server Integration Services Package<br />
• Executing Analysis Services commands and queries.<br />
Note While the ability to execute ActiveX® scripts has been retained for backwards<br />
compatibility, this option is deprecated and should not be used for new development.<br />
Note Other specialized job step types are used by features of <strong>SQL</strong> Server such as<br />
Replication. Replication is an advanced topic that is out of scope for this course.
Job Schedules<br />
One or more schedules can be defined for every job. Schedules are typically recurring, but other schedule types provide for one-time execution or for execution when SQL Server Agent first starts.
You may need to create multiple schedules for a job when the required recurrence pattern is complex and cannot be accommodated within a single job schedule.
While a schedule can also be shared between many jobs, it is important to avoid having too many jobs starting at the same time.
Creating Jobs<br />
You can use <strong>SQL</strong> Server Management Studio to create jobs or you can execute the sp_add_job system<br />
stored procedure. Creating a job requires the execution of a number of additional system stored<br />
procedures, to add steps and schedules to the job.<br />
The job definition is stored in the msdb database, along with all of <strong>SQL</strong> Server Agent configuration.<br />
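As a sketch, creating a simple job in T-SQL involves chaining several of these system stored procedures. The job name, database name, and backup path below are hypothetical:

```sql
USE msdb;

-- 1. Create the job shell.
EXEC dbo.sp_add_job
    @job_name = N'Nightly demo backup';

-- 2. Add a T-SQL step to it.
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly demo backup',
    @step_name = N'Back up database',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE AdventureWorks TO DISK = N''D:\Backups\AW.bak'';';

-- 3. Schedule it to run daily at 01:00.
EXEC dbo.sp_add_jobschedule
    @job_name = N'Nightly demo backup',
    @name = N'Nightly at 01:00',
    @freq_type = 4,            -- daily
    @freq_interval = 1,        -- every 1 day
    @active_start_time = 010000;

-- 4. Target the local server so the job will actually run.
EXEC dbo.sp_add_jobserver
    @job_name = N'Nightly demo backup';
```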
Job Categories<br />
Jobs can be placed into categories. SQL Server provides a number of built-in categories, and you can also add your own.
Job categories can be useful when you need to perform actions that are associated with jobs in a specific<br />
category. For example, it would be possible to create a job category called "<strong>SQL</strong> Server 2005 Policy Check"<br />
and to write a PowerShell script to execute all the jobs in that category against your <strong>SQL</strong> Server 2005<br />
servers.<br />
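A custom category of this kind could be created and assigned as follows (the category and job names are hypothetical):

```sql
-- Create a custom local job category.
EXEC msdb.dbo.sp_add_category
    @class = N'JOB',
    @type = N'LOCAL',
    @name = N'SQL Server 2005 Policy Check';

-- Move an existing job into the new category.
EXEC msdb.dbo.sp_update_job
    @job_name = N'Check page verification setting',
    @category_name = N'SQL Server 2005 Policy Check';
```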
Note A detailed discussion on the use of PowerShell is an advanced topic that is out of<br />
scope for this course.<br />
Note It is also important to consider the security account that each type of job step<br />
requires for execution. Module 14 discusses security for <strong>SQL</strong> Server Agent.<br />
Question: Can you think of a use for job categories?
Creating Job Steps<br />
Key Points<br />
Use <strong>SQL</strong> Server Management Studio or execute the sp_add_jobstep system stored procedure to define<br />
each job step that is required to automate a task. Only one execution type can be defined for each job<br />
step, but each step of a job can have a different type.<br />
Job Step Workflow<br />
Every step has an outcome that defines whether the step succeeded or failed, and each step carries "On Success" and "On Failure" options.
By default, SQL Server Agent advances to the next job step when a step succeeds and stops the job when a step fails. However, each job step can be configured to continue with any other step defined in the job, on either success or failure, which allows a custom workflow to be defined.
In the Advanced properties of each job step, an action can be configured for both success and failure.
By configuring the action to occur on the success and failure of each job step, a workflow can be created<br />
that determines the overall logic flow of the job. Note that as well as each job step having a defined<br />
outcome, the overall job reports an outcome. This means that even though some job steps might succeed,<br />
the overall job might still report failure.<br />
Retrying Job Steps<br />
You can specify the number of times that <strong>SQL</strong> Server should attempt to retry execution of a job step if the<br />
step fails. You also can specify the retry intervals (in minutes). For example, if the job step requires a<br />
connection to a remote server, you could define several retry attempts in case the connection fails.<br />
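Retry behavior is part of the job step definition. As a sketch (job, step, and server names are hypothetical):

```sql
-- Job step that retries up to 3 times, 5 minutes apart, if it fails.
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Copy data from remote server',
    @step_name = N'Pull rows over linked server',
    @subsystem = N'TSQL',
    @command = N'INSERT INTO dbo.LocalCopy SELECT * FROM REMOTESVR.Sales.dbo.Orders;',
    @retry_attempts = 3,    -- number of retries after a failure
    @retry_interval = 5;    -- minutes between retries
```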
Question: Which operations should not be grouped together in a job?
Scheduling Jobs for Execution<br />
Key Points<br />
Schedules are used to start jobs at requested times. One or more schedules can be associated with a job. Schedules are assigned names and can be shared across multiple jobs. Apart from standard recurring schedules, a number of special schedule types are also defined:
• One time execution.<br />
• Start automatically when <strong>SQL</strong> Server Agent starts.<br />
• Start whenever the CPU becomes idle.<br />
Even though a job may have multiple schedules, SQL Server Agent limits the job to a single concurrent execution. If you try to run a job manually while it is already running on a schedule, SQL Server Agent refuses the request. Likewise, if a job is still running when it is scheduled to run again, SQL Server Agent does not start it a second time.
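A shared schedule is created once and then attached to each job that needs it. As a sketch (the schedule and job names are hypothetical):

```sql
-- Create a named schedule that runs every hour.
EXEC msdb.dbo.sp_add_schedule
    @schedule_name = N'Hourly',
    @freq_type = 4,             -- daily
    @freq_interval = 1,         -- every 1 day
    @freq_subday_type = 8,      -- units of hours within the day
    @freq_subday_interval = 1;  -- every 1 hour

-- Attach the shared schedule to an existing job.
EXEC msdb.dbo.sp_attach_schedule
    @job_name = N'Hourly log backup',
    @schedule_name = N'Hourly';
```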
Question: What could be changed if the database in the example above does not need hourly backups during the weekend?
Scripting Jobs<br />
Key Points<br />
Jobs are typically first created in SSMS because several system stored procedures need to be executed to define even a single job. However, it is important that existing jobs be scripted in T-SQL for the following reasons:
• Scripts of jobs can be used in documentation and can be archived into source code control systems.<br />
• Jobs can easily be recreated after a failure if necessary, if scripts of the jobs have been created.<br />
• There is also a common requirement to create a job in one environment (such as a test environment) and deploy it to another (such as a production environment). The ability to script jobs makes such deployments easier.
• Scripts of jobs could be used when performing side by side upgrades of <strong>SQL</strong> Server systems.<br />
Note More than one job can be selected in Object Explorer Details when scripting.<br />
Other more advanced options are available for scripting jobs, such as the use of <strong>SQL</strong> Server Management<br />
Objects (SMO). SMO can be used in conjunction with .NET programming in languages such as Microsoft<br />
Visual Basic® or C#, and can be used in conjunction with PowerShell.<br />
Question: In which scenarios might it be useful to script more than one job at a time?
Demonstration 2A: Scripting Jobs<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_13_PRJ\10775A_13_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 21 – Demonstration 2A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lesson 3<br />
Managing <strong>SQL</strong> Server Agent Jobs<br />
While the automation of routine administrative and other tasks is important, it is equally important to<br />
ensure that those tasks continue to execute as expected. <strong>SQL</strong> Server provides detail regarding previous<br />
and failed executions by maintaining history in tables that are contained in the msdb database. This lesson<br />
will show you how to query the history tables and provide you with an approach for troubleshooting jobs<br />
that fail or that do not perform as expected.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• View job history.<br />
• Query <strong>SQL</strong> Server Agent-related system tables and views.<br />
• Troubleshoot failed jobs.
Viewing Job History<br />
Key Points<br />
<strong>SQL</strong> Server Agent keeps track of job outcomes in system tables in the msdb database. As well as recording<br />
the outcome of entire jobs, <strong>SQL</strong> Server Agent records the outcome of each job step.<br />
You can choose to write job outcomes to the Windows Application log and to the <strong>SQL</strong> Server log by<br />
setting notification properties for each job. The history tables in msdb are always written regardless of the<br />
configuration for log output.<br />
Viewing Job History<br />
Each job and each job step has an outcome. If a job fails, the failing job step outcome needs to be<br />
reviewed to see the reason why the job step failed. The history for each job can be viewed in SSMS but<br />
can also be retrieved programmatically by directly querying the system tables. Writing queries to retrieve<br />
job history will be shown in the next topic in this lesson.<br />
The most recent 1000 entries for job history are retained by default but the retention period for job<br />
history entries can be configured based on either age or the total size of the job history data, using the<br />
property window for the <strong>SQL</strong> Server Agent service.
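History can also be pruned manually with the documented msdb.dbo.sp_purge_jobhistory procedure. For example, to remove all history older than three months (the cutoff is illustrative):

```sql
-- Delete job history entries older than three months.
DECLARE @cutoff datetime = DATEADD(MONTH, -3, GETDATE());
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;
```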
Object Explorer in SSMS also provides the Job Activity Monitor. The Job Activity Monitor offers a view of currently executing jobs, along with the results of the previous execution and the scheduled time for the next execution of each job.
Question: How would a corrupt msdb database affect <strong>SQL</strong> Server Agent?
Querying <strong>SQL</strong> Server Agent-related System Tables and Views<br />
Key Points<br />
As mentioned earlier in this module, information about the configuration of <strong>SQL</strong> Server Agent and its<br />
objects such as jobs, alerts, schedules and operators is written to system tables in the msdb database.<br />
These objects are contained in the dbo schema and can be directly queried from there.<br />
Job history is written to the dbo.sysjobhistory table and a list of jobs is written to the dbo.sysjobs table.<br />
Example Query<br />
In the example shown on the slide, the date and time that a job was last run, and the outcome are<br />
queried from the dbo.sysjobhistory table in msdb. The query joins to the dbo.sysjobs table to retrieve the<br />
name of the job.<br />
Note that the WHERE clause specifies a step_id of zero. Job steps begin at one, not zero, but an entry in<br />
the dbo.sysjobhistory table is made with a job step_id of zero to record the overall outcome of the job.<br />
The outcome of individual job steps can be obtained by querying step_id values greater than zero.<br />
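A query along the lines described would look like the following sketch. Note that agent_datetime, which combines the integer run_date and run_time columns into a datetime, is a helper function present in msdb but not formally documented:

```sql
-- Most recent overall outcome for each job run.
SELECT j.name AS job_name,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS run_datetime,
       h.run_status                  -- 0 = failed, 1 = succeeded, 3 = canceled
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.step_id = 0                  -- overall job outcome, not individual steps
ORDER BY run_datetime DESC;
```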
Question: Why would querying the job history tables be important?
Troubleshooting Failed Jobs<br />
Key Points<br />
Jobs do not always execute as expected and sometimes they will fail to execute at all. It is important to<br />
follow a consistent process when attempting to work out why a job is failing.<br />
There are four basic steps that need to be followed when troubleshooting jobs: checking <strong>SQL</strong> Server<br />
Agent status, reviewing job history, checking job execution, and checking access to dependencies.<br />
Checking <strong>SQL</strong> Server Agent Status<br />
If <strong>SQL</strong> Server Agent is not running, no jobs will be executed. Make sure the service is set to start<br />
automatically and attempt to start the service manually. If the service still will not start, check the<br />
following:<br />
• Make sure that the service account for the service is valid, that the password for the account has not<br />
changed, and that the account is not locked out. If any of these items are incorrect, the service will<br />
not start but details about the problem will be written to the System event log on the computer.<br />
• Check that the msdb database is online. If the msdb database is corrupt, suspect or offline, <strong>SQL</strong> Server<br />
Agent will not start.<br />
Review Job History<br />
Review the job outcome to identify the last step that ran. If the job failed because a job step failed, which is the most common situation, the error for the step cannot be seen at the job level. You then need to review the outcome of the individual job step that failed.
Checking Job Execution<br />
If <strong>SQL</strong> Server Agent is running but an individual job will not execute, check the following items:<br />
• Make sure that the job is enabled. Disabled jobs will not run.<br />
• Make sure that the job is scheduled. Perhaps the schedule is incorrect or the time for the next<br />
scheduled execution has not occurred yet.<br />
• Make sure that the schedule is enabled. Both jobs and schedules can be disabled. A job will not run<br />
on a schedule that is disabled.<br />
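The enabled state of each job and its schedules can be checked together with a query against the msdb system tables described earlier in this module:

```sql
-- List jobs with their schedules and both enabled flags.
SELECT j.name AS job_name,
       j.enabled AS job_enabled,
       s.name AS schedule_name,
       s.enabled AS schedule_enabled
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysjobschedules AS js ON js.job_id = j.job_id
LEFT JOIN msdb.dbo.sysschedules AS s ON s.schedule_id = js.schedule_id;
```

A NULL schedule_name indicates a job with no schedule at all, which is another reason a job may never run.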
Check Access to Dependencies<br />
Verify that all dependent objects such as databases, files, and procedures are available. In Demonstration<br />
2A, you saw a situation where the job on the second <strong>SQL</strong> Server instance did not run because the objects<br />
required to run the job were not present.<br />
Jobs often run in a different security context from the user who created them. Incorrect security settings are a common cause of job execution failures. The security context for job steps is discussed in more detail in Module 14.
Question: When migrating a job from test to production, what else would be required apart<br />
from moving the job itself?
Demonstration 3A: Viewing Job History and Resolving Failed Jobs<br />
Demonstration Steps<br />
1. If Demonstration 2A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_13_PRJ\10775A_13_PRJ.ssmssln and click Open.<br />
• Open and execute the 00 – Setup.sql script file from within Solution Explorer.<br />
• From the View menu, click Solution Explorer. Open the script file 21 – Demonstration 2A.sql<br />
and follow the steps in the script file.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lab 13: Automating <strong>SQL</strong> Server Management<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_13_PRJ\10775A_13_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
There are a number of routine tasks to be performed on the Proseware instance. Previously these tasks<br />
have been performed manually and the lack of consistency in performing these tasks has caused issues for<br />
the organization. On the new instance, you need to automate these tasks using <strong>SQL</strong> Server Agent.<br />
There is also a report about an existing <strong>SQL</strong> Server Agent job that is not performing as expected. If you<br />
have time, you need to resolve the issues with the job.
Exercise 1: Create a Data Extraction Job<br />
Scenario<br />
In Module 8 you created an SSIS data extraction package. The extraction process identifies prospects that<br />
have not been contacted recently. The output of this extraction process is used for planning marketing<br />
activities during the week. You need to create a job to execute the SSIS package.<br />
The main tasks for this exercise are as follows:<br />
1. Create the required job.<br />
2. Test that the job executes without error.<br />
Task 1: Create the required job<br />
• Create the required job. Call the job “Extract Uncontacted Prospects”. The job needs to execute the<br />
SSIS package “Weekly Extract of Prospects to Contact” which is located on the Proseware server<br />
instance.<br />
Task 2: Test that the job executes without error<br />
• Using Object Explorer, start the job and make sure it executes correctly.<br />
Results: After this exercise, you should have created the data extraction job.<br />
Exercise 2: Schedule the Data Extraction Job<br />
Scenario<br />
You have created a job to perform the extraction of prospects that need to be contacted. The information<br />
provided by this job is used two times during the week. A meeting is held at 9AM each Monday morning<br />
to plan the marketing activities for the week. A second planning meeting is held at 7PM on Tuesday<br />
evenings. You need to make sure that an updated list is available shortly before each meeting. You should<br />
schedule the extraction job to run each Monday at 8:30AM and each Tuesday at 6:30PM.<br />
The main task for this exercise is as follows:<br />
1. Schedule the data extraction job.<br />
Task 1: Schedule the data extraction job<br />
• Create a new job schedule for each Monday at 8.30AM.<br />
• Create a new job schedule for each Tuesday at 6.30PM.<br />
• Assign the schedule to the data extraction job.<br />
Results: After this exercise, you should have applied multiple schedules to the data extraction job.
Challenge Exercise 3: Troubleshoot a Failing Job (Only if time permits)<br />
Scenario<br />
On the Proseware server, a new <strong>SQL</strong> Server Agent job called Extract Long Page Loads was recently<br />
created. The job retrieves details of web pages that took a long time to load from the Marketing.WebLog<br />
table for further analysis by the web development team. The job is intended to run every Monday at 6AM.<br />
The job was implemented last week but no data retrieval appears to have occurred this week. You need to<br />
investigate and correct any issues with this job.<br />
The main task for this exercise is as follows:<br />
1. Troubleshoot the failing job.<br />
Task 1: Troubleshoot the failing job<br />
• Review the job history for the failing job and identify the cause of the failure.<br />
• Correct the problem that is preventing the job from executing successfully.<br />
• Test that the job now runs.<br />
• Ensure that the job is correctly scheduled.<br />
Results: After this exercise, you should have resolved the issues with the failing job.
Module Review and Takeaways<br />
Review Questions<br />
1. What functions do you currently perform manually that could be placed in a job?<br />
2. How long is the job history kept in msdb?<br />
Best Practices<br />
1. Use SQL Server Agent jobs to schedule routine tasks.
2. Create custom categories to group your jobs.
3. Script your jobs for deployment to other servers.
4. Use job history to review job and job step outcomes.
5. Use the Job Activity Monitor to monitor jobs in real time.
Module 14<br />
Configuring Security for <strong>SQL</strong> Server Agent<br />
Contents:<br />
Lesson 1: Understanding <strong>SQL</strong> Server Agent Security 14-3<br />
Lesson 2: Configuring Credentials 14-13<br />
Lesson 3: Configuring Proxy Accounts 14-18<br />
Lab 14: Configuring Security for <strong>SQL</strong> Server Agent 14-24<br />
14-2 Configuring Security for <strong>SQL</strong> Server Agent<br />
Module Overview<br />
In earlier modules, you have seen the need to minimize the permissions that are granted to users, so that<br />
the users have only the permissions that they need to perform their tasks. The same logic applies to the<br />
granting of permissions to <strong>SQL</strong> Server Agent. While it is easy to execute all jobs in the context of the <strong>SQL</strong><br />
Server Agent service account, and to configure that account as an administrative account, a poor security<br />
environment would result from doing this. It is important to understand how to create a minimal privilege<br />
security environment for jobs that run in <strong>SQL</strong> Server Agent.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Explain <strong>SQL</strong> Server Agent security.<br />
• Configure credentials.<br />
• Configure Proxy accounts.
Lesson 1<br />
Understanding <strong>SQL</strong> Server Agent Security<br />
<strong>SQL</strong> Server Agent can be called upon to execute a wide variety of tasks. Many of these tasks are<br />
administrative in nature, but many others are performed on behalf of users. The need to execute such a<br />
wide variety of task types leads to the need for a flexible security configuration.<br />
Jobs need to be able to access many types of objects. As well as objects that reside inside <strong>SQL</strong> Server, jobs<br />
often need to access external resources such as operating system files and folders. These operating system<br />
(and other) dependencies also require a configurable and layered security model, to avoid the need to<br />
grant too many permissions to the <strong>SQL</strong> Server Agent service account.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Explain <strong>SQL</strong> Server Agent security.<br />
• Describe <strong>SQL</strong> Server Agent roles.<br />
• Assign security contexts to <strong>SQL</strong> Server Agent job steps.<br />
• Troubleshoot <strong>SQL</strong> Server Agent security.
Overview of <strong>SQL</strong> Server Agent Security<br />
Key Points<br />
Like all services, the <strong>SQL</strong> Server Agent service has an identity within the <strong>Microsoft®</strong> Windows® operating<br />
system. The service startup account defines the Windows account in which the <strong>SQL</strong> Server Agent runs. The<br />
account that is used defines the permissions that the <strong>SQL</strong> Server Agent service has when accessing<br />
network resources.<br />
Agent Service Account<br />
For the <strong>SQL</strong> Server Agent service, you can use the built-in Local System or Network Service account.<br />
Preferably, however, a dedicated Windows domain account should be used.<br />
Note The Local System account option is provided for backward compatibility only and<br />
the Network Service is also not recommended for security reasons. Network Service has<br />
more capabilities than are required for the service account. An account with only the<br />
required permissions should be created and used instead.<br />
A Windows domain account should be used and should be configured with the least possible privileges<br />
that will still allow operation. During the installation of <strong>SQL</strong> Server, a local group is created with a name in<br />
the following format:<br />
<strong>SQL</strong>Server<strong>SQL</strong>AgentUser$ComputerName$InstanceName<br />
This group is granted all the access privileges needed by the <strong>SQL</strong> Server Agent account. Note that this<br />
only includes the bare minimum permissions that the account needs for <strong>SQL</strong> Server Agent to function.<br />
When <strong>SQL</strong> Server Agent needs to access other resources in job steps, additional permissions are necessary.
When <strong>SQL</strong> Server Configuration Manager is used to assign an account to the service (which is the<br />
preferred and supported method for changing service accounts), <strong>SQL</strong> Server Configuration Manager<br />
places the account into the correct group. No additional special permission grants need to be made.<br />
Note The <strong>SQL</strong> Server Agent account cannot use <strong>SQL</strong> Server Authentication for its<br />
connection to the <strong>SQL</strong> Server database engine.<br />
<strong>SQL</strong> Server Agent jobs are executed in the context of the service account by default. The alternative is to<br />
create Proxy Accounts that will be used for job execution. Proxy Accounts will be described later in this<br />
module.<br />
Question: What would cause a <strong>SQL</strong> Server Agent service account to need sysadmin<br />
privileges on the <strong>SQL</strong> Server instance?
<strong>SQL</strong> Server Agent Roles<br />
Key Points<br />
By default, only members of the sysadmin fixed server role can administer <strong>SQL</strong> Server Agent. As the <strong>SQL</strong><br />
Server Agent job system can perform tasks that require access not only to <strong>SQL</strong> Server, but potentially also<br />
access to Windows and other network resources, it is important to control who can administer it. It is not<br />
considered good practice to make a login a member of the sysadmin fixed server role, if the only reason<br />
for their membership of that role is to administer <strong>SQL</strong> Server Agent.<br />
Fixed <strong>Database</strong> Roles for <strong>SQL</strong> Server Agent<br />
Fixed database roles in the msdb database are used to control access for non-sysadmin users. <strong>SQL</strong> Server<br />
<strong>2012</strong> includes fixed database roles for working with <strong>SQL</strong> Server Agent. The available roles, listed in order<br />
of increasing capability, are as follows:<br />
• <strong>SQL</strong>AgentUserRole<br />
• <strong>SQL</strong>AgentReaderRole<br />
• <strong>SQL</strong>AgentOperatorRole<br />
When users who are not members of one of these roles are connected to <strong>SQL</strong> <strong>Server®</strong> in SSMS, the <strong>SQL</strong><br />
Server Agent node in Object Explorer is not visible. A user must be a member of at least one of these fixed<br />
database roles or they must be a member of the sysadmin fixed server role, before they can use <strong>SQL</strong><br />
Server Agent.
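Membership of these roles is granted in the msdb database like membership of any other database role. As a hedged sketch (the login name PromoteApp is illustrative, borrowed from the lab later in this module):

```sql
-- Sketch: give a non-sysadmin login limited SQL Server Agent access
-- by adding its msdb database user to one of the Agent fixed roles.
-- The login name PromoteApp is illustrative.
USE msdb;
GO

CREATE USER PromoteApp FOR LOGIN PromoteApp;
ALTER ROLE SQLAgentReaderRole ADD MEMBER PromoteApp;
```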
<strong>SQL</strong>AgentUserRole<br />
<strong>SQL</strong>AgentUserRole is the least privileged of the <strong>SQL</strong> Server Agent fixed database roles. Members of<br />
<strong>SQL</strong>AgentUserRole have permissions only on the local jobs and job schedules that they own. They cannot<br />
run multi-server jobs (master and target server jobs), and they cannot change job ownership to gain<br />
access to jobs that they do not already own. <strong>SQL</strong>AgentUserRole members can also view a list of available<br />
proxies in the Job Step Properties dialog box of SSMS.<br />
<strong>SQL</strong>AgentReaderRole<br />
<strong>SQL</strong>AgentReaderRole includes all the <strong>SQL</strong>AgentUserRole permissions as well as permissions to view the list<br />
of available multiserver jobs, their properties, and their history. Members of this role can also view the list<br />
of all available jobs and job schedules and their properties, not just the jobs and job schedules that they<br />
own. <strong>SQL</strong>AgentReaderRole members cannot change job ownership to gain access to jobs that they do not<br />
already own.<br />
<strong>SQL</strong>AgentOperatorRole<br />
<strong>SQL</strong>AgentOperatorRole is the most privileged of the <strong>SQL</strong> Server Agent fixed database roles. It includes all<br />
the permissions of <strong>SQL</strong>AgentUserRole and <strong>SQL</strong>AgentReaderRole. Members of this role can also view<br />
properties for operators and proxies, and enumerate the available proxies and alerts on the server.<br />
<strong>SQL</strong>AgentOperatorRole members have additional permissions on local jobs and schedules. They can<br />
execute, stop, or start all local jobs, and they can delete the job history for any local job on the server.<br />
They can also enable or disable all local jobs and schedules on the server. To enable or disable local jobs<br />
or schedules, members of this role must use the sp_update_job and sp_update_schedule stored<br />
procedures, specifying the job name or schedule identifier parameter and the enabled parameter. If any<br />
other parameters are specified, execution of these stored procedures fails. <strong>SQL</strong>AgentOperatorRole<br />
members cannot change job ownership to gain access to jobs that they do not already own.<br />
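For example, a member of <strong>SQL</strong>AgentOperatorRole could disable a local job as follows; supplying any parameters other than the job identifier and the enabled parameter would cause the call to fail. (The job name is illustrative, taken from an earlier exercise.)

```sql
-- Disable a local job. SQLAgentOperatorRole members may supply only
-- the job name (or schedule identifier) and @enabled; specifying any
-- other parameter makes the procedure call fail for these members.
EXEC msdb.dbo.sp_update_job
    @job_name = N'Extract Long Page Loads',
    @enabled = 0;
```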
Question: Why should you be careful when granting these fixed database roles to non-sysadmin users?
Discussion: <strong>SQL</strong> Server Agent Job Dependencies<br />
Discussion Topics<br />
Question: What <strong>SQL</strong> Server resources would <strong>SQL</strong> Server Agent Jobs potentially depend<br />
upon?<br />
Question: What resources outside of <strong>SQL</strong> Server might <strong>SQL</strong> Server Agent jobs depend upon?<br />
Question: What identity is needed for accessing the external resources?
Assigning Security Contexts to Agent Job Steps<br />
Key Points<br />
Each job step can be assigned a security context. Job steps that execute T-<strong>SQL</strong> code need to be<br />
considered separately from other types of job steps.<br />
T-<strong>SQL</strong> Job Steps<br />
T-<strong>SQL</strong> job steps do not use <strong>SQL</strong> Server Agent proxies. When a T-<strong>SQL</strong> job step is executed, <strong>SQL</strong> Server<br />
Agent impersonates the owner of the job, except where the owner of the job is a member of the sysadmin<br />
fixed server role. In that case, the job step runs in the security context of the <strong>SQL</strong> Server Agent service<br />
account, unless the sysadmin member chooses to have the step impersonate another user.<br />
Other Job Step Types<br />
For job step types that are not T-<strong>SQL</strong> based, a different security model is used. For members of the<br />
sysadmin fixed server role, by default, other job step types still use the <strong>SQL</strong> Server Agent service account<br />
to execute job steps. Because many different types of job steps can be executed within <strong>SQL</strong> Server Agent,<br />
it is undesirable to execute them using this account. To provide for tighter control, a proxy system was<br />
introduced.
Proxy Accounts<br />
A proxy account is used to associate a job step with a Windows identity, via an object called a Credential.<br />
Proxy accounts can be created for all available subsystems, except for T-<strong>SQL</strong> job steps. The use of proxy<br />
accounts means that different Windows identities can be used to perform the different tasks required in<br />
jobs and provides tighter security by avoiding the need for a single account to have all the permissions<br />
required to execute all jobs.<br />
Credentials are discussed in Lesson 2 of this module and Proxy Accounts are discussed in Lesson 3.<br />
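As a sketch of how a job step is associated with a proxy, the @proxy_name parameter of sp_add_jobstep names the proxy that the step runs under. The job name, command, UNC paths, and proxy name below are placeholders:

```sql
-- Sketch: add a CmdExec job step that runs under a proxy account
-- rather than the SQL Server Agent service account.
-- The job name, UNC paths, and proxy name are placeholders.
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Extract Uncontacted Prospects',
    @step_name = N'Copy extract file',
    @subsystem = N'CmdExec',
    @command = N'copy \\fileserver\extracts\prospects.csv \\fileserver\archive\',
    @proxy_name = N'ExtractionProxy';
```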
Question: Why should a proxy account be used, even when the owner of the step is a<br />
member of the sysadmin fixed server role?
<strong>SQL</strong> Server Agent Security Troubleshooting<br />
Key Points<br />
When <strong>SQL</strong> Server Agent jobs are not running as expected, security issues are a common cause of<br />
problems. To troubleshoot <strong>SQL</strong> Server Agent jobs, you should follow these steps:<br />
1. Make sure that the job is in fact running. Look in the job history and check to see when the job<br />
has run. For each failure of a job that is indicated by a red X (as shown in the example on the slide),<br />
expand the job and find the job steps that are failing. The failing job steps will also have a red X icon.<br />
2. Check the security account. When you click on the job step that is failing, at the bottom right hand<br />
side of the window, there is an indication of the security context that the job step ran under. Check<br />
the group membership for the account to make sure that the account should have all required<br />
permissions.<br />
3. Check the tasks that the job step needs to perform. This includes any T-<strong>SQL</strong> objects that need to be<br />
accessed and any Windows files or resources that need to be accessed.<br />
4. Check for each failing step, that the account that is being used to execute the step is able to access<br />
the resources that you have determined as necessary for the step.<br />
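The history consulted in the steps above can also be queried directly from msdb; a sketch (the job name is illustrative):

```sql
-- Inspect recent step outcomes for a job.
-- run_status 0 indicates a failed step, 1 a successful one.
SELECT j.name, h.step_id, h.step_name, h.run_status, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j
    ON h.job_id = j.job_id
WHERE j.name = N'Extract Uncontacted Prospects'
ORDER BY h.run_date DESC, h.run_time DESC;
```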
Note Another very common problem that occurs is that job steps might specify local file<br />
paths instead of UNC paths. In general, jobs should use UNC paths to ensure they are<br />
sufficiently portable. That is, a job should be able to be migrated to another server if<br />
required.<br />
Question: What might be the cause of a job that runs perfectly well on a test system but<br />
then fails when run on a production system?
Demonstration 1A: Assigning a Security Context to Job Steps<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_14_PRJ\10775A_14_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file to execute each T-<strong>SQL</strong> batch<br />
contained in the file.
Lesson 2<br />
Configuring Credentials<br />
For <strong>SQL</strong> Server job steps to be able to access resources outside of <strong>SQL</strong> Server, the job steps must be<br />
executed in the security context of a Windows identity that has permission to access the required<br />
resources. Windows identities are separate from <strong>SQL</strong> Server identities, even though <strong>SQL</strong> Server can utilize<br />
Windows logins and groups. For a job step to be able to use a separate Windows identity, the job step<br />
needs to be able to logon as that identity and to be able to logon, the Windows user name and password<br />
needs to be stored somewhere. Credentials are <strong>SQL</strong> Server objects that are used to store Windows user<br />
names and passwords.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe credentials.<br />
• Configure credentials.<br />
• Manage credentials.
Overview of Credentials<br />
Key Points<br />
A credential is a <strong>SQL</strong> Server object that contains the authentication information required to connect to a<br />
resource outside <strong>SQL</strong> Server. Most credentials contain a Windows user name and password.<br />
Credentials<br />
A <strong>SQL</strong> Server Agent proxy that is used for job execution maps to a credential in <strong>SQL</strong> Server. In Lesson 3, you<br />
will see how to map a Proxy account to a credential.<br />
<strong>SQL</strong> Server automatically creates some credentials that are associated with specific endpoints. These are<br />
called system credentials and have names that are prefixed with two hash signs (##).<br />
Question: How does <strong>SQL</strong> Server access resources outside <strong>SQL</strong> Server, when the user is<br />
connected using a Windows login?
Configuring Credentials<br />
Key Points<br />
Credentials can be created using the T-<strong>SQL</strong> CREATE CREDENTIAL statement or by using the GUI in SSMS.<br />
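For example, a credential might be created as follows; the Windows account name and password are placeholders (the credential name Agent_Export matches the example used later in this lesson):

```sql
-- Create a credential that stores a Windows identity and its secret.
-- The domain, user name, and password shown are placeholders.
CREATE CREDENTIAL Agent_Export
WITH IDENTITY = N'ADVENTUREWORKS\ExportUser',
     SECRET = N'Pa$$w0rd';
```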
Configuring Credentials<br />
The password for a credential is called a secret and is strongly encrypted and stored in the master<br />
database. When <strong>SQL</strong> Server first needs to perform any type of encryption, the <strong>SQL</strong> Server service<br />
generates a service master encryption key. The service master key is also used to protect the master keys<br />
for each database. (Not all databases have master keys).<br />
Often an organization will have a policy that requires encryption keys to be replaced on a regular basis. If<br />
the service master key is regenerated, the secrets that are stored for credentials are automatically<br />
decrypted and re-encrypted using the new service master key.<br />
Note Encryption in <strong>SQL</strong> Server is an advanced topic that is out of scope for this course.<br />
Note also that the encryption of secrets for credentials by an Extensible Key Management<br />
(EKM) Provider is also supported but also out of scope for this course.
Managing Credentials<br />
Key Points<br />
<strong>SQL</strong> Server provides the sys.credentials system view, which returns catalog information about existing<br />
credentials. Consider the following query:<br />
SELECT * FROM sys.credentials;<br />
When executed, this query returns one row for each credential, including the credential_id, name, and the<br />
identity that the credential maps to.<br />
Modifying Credentials<br />
The password for a Windows account could change over time. You can update a credential with new<br />
values by using the ALTER CREDENTIAL statement. In the example on the slide, notice how the<br />
Agent_Export credential is being updated. Both the user name and password (that is, the secret) are<br />
supplied in the ALTER CREDENTIAL statement. The ALTER CREDENTIAL command always updates both<br />
the identity and the secret.<br />
Credentials are removed by the DROP CREDENTIAL statement.<br />
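Both statements can be sketched as follows; the account name and passwords are placeholders:

```sql
-- ALTER CREDENTIAL always updates both the identity and the secret,
-- so both must be supplied even if only the password has changed.
ALTER CREDENTIAL Agent_Export
WITH IDENTITY = N'ADVENTUREWORKS\ExportUser',
     SECRET = N'NewPa$$w0rd';

-- Remove a credential that is no longer required.
DROP CREDENTIAL Agent_Export;
```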
Question: What happens when the Windows user password that a credential maps to<br />
changes or expires?
Demonstration 2A: Configuring Credentials<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_14_PRJ\10775A_14_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 21 – Demonstration 2A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lesson 3<br />
Configuring Proxy Accounts<br />
You saw in the last lesson that Credentials are used in <strong>SQL</strong> Server to store identities that are external to<br />
<strong>SQL</strong> Server. Mostly, these are Windows identities and Credentials are used to store their Windows user<br />
names and passwords. To enable a job step in a <strong>SQL</strong> Server Agent job to use a Credential, the job step is<br />
mapped to the Credential using a Proxy Account. There is a set of built-in Proxy Accounts and you can<br />
create Proxy Accounts manually.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe Proxy Accounts.<br />
• Work with built-in Proxy Accounts.<br />
• Manage Proxy Accounts.
Overview of Proxy Accounts<br />
Key Points<br />
A <strong>SQL</strong> Server Proxy Account is used to define the security context that is used for a job step. A Proxy<br />
Account is typically used to provide <strong>SQL</strong> Server Agent with access to the security credentials for a<br />
Microsoft Windows user. Each Proxy Account can be associated with one or more subsystems.<br />
A job step that uses the Proxy Account can access the specified subsystems by using the security context<br />
of the Windows user. Before <strong>SQL</strong> Server Agent runs a job step that uses a Proxy Account, <strong>SQL</strong> Server<br />
Agent impersonates the credentials defined in the Proxy Account, and then runs the job step by using<br />
that security context.<br />
Note The Windows user that is specified in the credential must have the "Log on as a<br />
batch job" right on the computer on which <strong>SQL</strong> Server is running.<br />
The creation of a Proxy Account does not change existing permissions for the Windows account that is<br />
specified in the credential. For example, you can create a Proxy Account for a Windows account that does<br />
not have permission to connect to an instance of <strong>SQL</strong> Server. Job steps that use that Proxy Account would<br />
be unable to connect to <strong>SQL</strong> Server.<br />
Note that a user must have permission to use a Proxy Account before they can specify the Proxy Account<br />
in a job step. By default, only members of the sysadmin fixed server role have permission to access all<br />
Proxy Accounts.
Permissions to access a Proxy Account can be granted to three types of security principals:<br />
• <strong>SQL</strong> Server logins<br />
• Server roles<br />
• Roles within the msdb database.<br />
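Granting such access can be sketched in T-<strong>SQL</strong>; the proxy, credential, and login names below mirror those used in the lab for this module and are illustrative here:

```sql
-- Sketch: create a proxy from an existing credential, make it active
-- in the SSIS subsystem, and let a specific login use it in job steps.
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'ExtractionProxy',
    @credential_name = N'ExtractIdentity',
    @enabled = 1;

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'ExtractionProxy',
    @subsystem_name = N'SSIS';

EXEC msdb.dbo.sp_grant_login_to_proxy
    @proxy_name = N'ExtractionProxy',
    @login_name = N'PromoteApp';
```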
Question: When should Proxy Accounts be used?
Working with Built-in Proxy Accounts<br />
Key Points<br />
<strong>SQL</strong> Server Proxy Accounts are utilized by subsystems. A subsystem is a predefined object that represents<br />
a set of functionality available within <strong>SQL</strong> Server. Each Proxy Account can be associated with more than<br />
one subsystem.<br />
Subsystems assist in providing security control because they segment the functions that are available to a<br />
Proxy Account. Earlier in this module, you saw that each job step runs in the context of a Proxy Account,<br />
except for T-<strong>SQL</strong> job steps. T-<strong>SQL</strong> job steps use the EXECUTE AS command to set the security context.<br />
<strong>SQL</strong> Server Agent checks subsystem access for a Proxy Account each and every time that a job step runs. If<br />
the security environment has changed and the Proxy Account no longer has access to the subsystem, the<br />
job step fails.<br />
Question: Why should Proxy Accounts not be assigned to all subsystems as a general rule?
Managing Proxy Accounts<br />
Key Points<br />
The configuration for <strong>SQL</strong> Server Agent is stored in the msdb database. Proxy Accounts are part of the<br />
<strong>SQL</strong> Server Agent, so the configuration for the Proxy Accounts is also stored in the msdb database.<br />
Details of the current Proxy Account configuration can be obtained through a set of system views that are<br />
shown in the following table:<br />
System View Description<br />
dbo.sysproxies Returns one row per proxy defined in <strong>SQL</strong> Server Agent<br />
dbo.sysproxylogin Returns which <strong>SQL</strong> Server logins are associated with each <strong>SQL</strong> Server<br />
Agent Proxy Account. Note that no entry for members of the<br />
sysadmin role is stored or returned<br />
dbo.sysproxysubsystem Returns which <strong>SQL</strong> Server Agent subsystems are defined for each<br />
Proxy Account<br />
dbo.syssubsystems Returns information about all available <strong>SQL</strong> Server Agent proxy<br />
subsystems<br />
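These views can be joined to list the subsystems in which each proxy is active. A sketch, assuming the msdb view dbo.sysproxysubsystem that links proxies to subsystems:

```sql
-- List each proxy together with the subsystems it is active in.
SELECT p.name AS proxy_name, s.subsystem
FROM msdb.dbo.sysproxies AS p
JOIN msdb.dbo.sysproxysubsystem AS ps
    ON p.proxy_id = ps.proxy_id
JOIN msdb.dbo.syssubsystems AS s
    ON ps.subsystem_id = s.subsystem_id
ORDER BY p.name;
```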
You saw earlier that Credentials can be viewed via the sys.credentials system view. Credentials are stored<br />
in the master database, not in the msdb database.<br />
Question: Why would Credentials be stored in the master database instead of the msdb<br />
database?
Demonstration 3A: Configuring Proxy Accounts<br />
Demonstration Steps<br />
1. If Demonstration 2A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_14_PRJ\10775A_14_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
• Open the 21 – Demonstration 2A.sql script file from within Solution Explorer and follow the<br />
instructions contained within the file.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lab 14: Configuring Security for <strong>SQL</strong> Server Agent<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_14_PRJ\10775A_14_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You have deployed a job that extracts details of prospects that have not been contacted recently. You<br />
have also scheduled the job to run before each of the two marketing planning meetings that occur each<br />
week. The marketing team has deployed new functionality in the Promote application to improve the<br />
planning processes. Rather than having the job scheduled, it is necessary for the Promote application to<br />
execute the job on demand.
The Promote application connects as a <strong>SQL</strong> login called PromoteApp. One of the other DBAs, Terry Adams,<br />
has attempted to configure <strong>SQL</strong> Server so that the PromoteApp login can execute the job. However, he is<br />
unable to resolve why the job still will not run. In this lab, you need to troubleshoot and resolve the<br />
problem.<br />
Supporting Documentation<br />
Actions that have already been taken by Terry Adams<br />
1. Created a database user for the PromoteApp login in the msdb database.<br />
2. Granted the PromoteApp database user permission to execute the msdb.dbo.sp_start_job stored<br />
procedure.<br />
3. Added the PromoteApp database user to the <strong>SQL</strong>AgentOperatorRole database role.<br />
4. Modified the Extract Uncontacted Prospects job to set the PromoteApp login as the owner of the job.<br />
5. Created a Windows user called ExtractUser with a password of Pa$$w0rd.<br />
6. Added the Windows user ExtractUser to the db_ssisoperator role within the msdb database.<br />
Exercise 1: Troubleshoot Job Execution Failure<br />
Scenario<br />
You need to review the actions that Terry Adams has already taken, and then review the history log for the<br />
failing job. You need to determine why the job is failing.<br />
The main task for this exercise is as follows:<br />
1. Troubleshoot job execution failure.<br />
Task 1: Troubleshoot job execution failure<br />
• Review the previous actions taken by Terry Adams as detailed in the supporting documentation for<br />
the exercise.<br />
• View the history log for the Extract Uncontacted Prospects job.<br />
• Determine from the history the reason that the job is failing.<br />
Results: After this exercise, you should have determined the reason that the job is failing.<br />
Exercise 2: Resolve the Security Issue<br />
Scenario<br />
You have determined that a proxy account is required for the correct execution of the failing job step. You<br />
need to create and assign the proxy account, and then test to see if all issues have been resolved.<br />
The main tasks for this exercise are as follows:<br />
1. Create and assign proxy account.<br />
2. Test to see if all problems have been resolved.
14-26 Configuring Security for <strong>SQL</strong> Server Agent<br />
Task 1: Create and assign proxy account<br />
• Using <strong>SQL</strong> Server Management Studio, create a <strong>SQL</strong> Server credential called ExtractIdentity that is<br />
associated with the Windows user 1077XA-MIA-<strong>SQL</strong>\ExtractUser and with a password of Pa$$w0rd.<br />
• Create a <strong>SQL</strong> Server proxy account called ExtractionProxy that is associated with the ExtractIdentity<br />
credential and which is active in the <strong>SQL</strong> Server Integration Services Package subsystem. Ensure that<br />
you grant permission to the PromoteApp login to use this credential.<br />
• Assign the proxy account to the Extract Uncontacted Prospects job.<br />
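The credential and proxy described in Task 1 can also be created in T-SQL; a sketch (the Windows account name follows the lab environment):<br />

```sql
-- Create the credential that maps to the Windows identity
CREATE CREDENTIAL ExtractIdentity
    WITH IDENTITY = '1077XA-MIA-SQL\ExtractUser',
         SECRET = 'Pa$$w0rd';

-- Create the proxy, activate it for the SSIS subsystem,
-- and allow the PromoteApp login to use it
EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'ExtractionProxy',
    @credential_name = N'ExtractIdentity',
    @enabled = 1;
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'ExtractionProxy',
    @subsystem_name = N'SSIS';
EXEC msdb.dbo.sp_grant_login_to_proxy
    @login_name = N'PromoteApp',
    @proxy_name = N'ExtractionProxy';
```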
Task 2: Test to see if all problems have been resolved<br />
• Attempt to execute the Extract Uncontacted Prospects job.<br />
• If the job fails, continue to Exercise 3 if you have time.<br />
Results: After this exercise, you should have corrected a security issue with a job.<br />
Challenge Exercise 3: Perform Further Troubleshooting (Only if time<br />
permits)<br />
Scenario<br />
After creating and assigning a proxy account to the job, the initial problem where <strong>SQL</strong> Server refused to<br />
execute job steps without a proxy has been resolved. However, the job still does not operate successfully.<br />
You should attempt to resolve the final issue.<br />
The main task for this exercise is as follows:<br />
1. Perform further troubleshooting.<br />
Task 1: Perform further troubleshooting<br />
• Locate and resolve further issues that are preventing the job from running successfully.<br />
• Test that the job now runs successfully.<br />
Results: After this exercise, you should have identified and resolved the remaining issues.
Module Review and Takeaways<br />
Review Questions<br />
1. What account types can be used to start the <strong>SQL</strong> Server Agent service?<br />
2. What can credentials be used for?<br />
10775A: <strong>Administering</strong> Microsoft <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s 14-27<br />
Best Practices<br />
1. Use a Windows domain user account for the <strong>SQL</strong> Server Agent service.<br />
2. Use an account with the least privileges necessary.<br />
3. Create proxy accounts with the least permissions required for job execution.
Module 15<br />
Monitoring <strong>SQL</strong> Server <strong>2012</strong> with Alerts and Notifications<br />
Contents:<br />
Lesson 1: Configuring <strong>Database</strong> Mail 15-3<br />
Lesson 2: Monitoring <strong>SQL</strong> Server Errors 15-11<br />
Lesson 3: Configuring Operators, Alerts and Notifications 15-18<br />
Lab 15: Monitoring <strong>SQL</strong> Agent Jobs with Alerts and Notifications 15-30<br />
15-2 Monitoring <strong>SQL</strong> Server <strong>2012</strong> with Alerts and Notifications<br />
Module Overview<br />
Many database administrators work in a reactive mode where they respond when users complain that<br />
errors or problems are occurring. It is important to try to move from a reactive mode of operation to a<br />
more proactive mode.<br />
One key aspect of managing <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> in a proactive manner is to make sure that you are<br />
aware of events that occur in the server, as they happen. At first you might consider that this would still be<br />
a reactive approach. However, there are many types of issues that can arise that are not directly apparent<br />
to users of the database applications. <strong>SQL</strong> Server logs a wealth of information about issues and you can<br />
configure <strong>SQL</strong> Server to advise you automatically when these issues occur via alerts and notifications.<br />
The most common way that <strong>SQL</strong> Server database administrators receive details of events of interest is via<br />
email. <strong>SQL</strong> Server can be configured to send mail via an existing SMTP mail server.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Configure database mail.<br />
• Monitor <strong>SQL</strong> Server errors.<br />
• Configure operators, alerts and notifications.
Lesson 1<br />
Configuring <strong>Database</strong> Mail<br />
<strong>SQL</strong> Server needs to be able to advise administrators when issues arise that require their attention.<br />
Electronic mail (email) is the most commonly used mechanism for notifications from <strong>SQL</strong> Server. The<br />
<strong>Database</strong> Mail feature of <strong>SQL</strong> Server is used to connect to an existing SMTP server when <strong>SQL</strong> Server<br />
needs to send email.<br />
<strong>SQL</strong> Server can be configured with multiple email profiles and configured to control which users can<br />
utilize the email features of the product. It is important to be able to track and trace emails that have<br />
been sent. <strong>SQL</strong> Server allows you to configure a policy for the retention of emails.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe database mail.<br />
• Configure database mail profiles.<br />
• Configure database mail security.<br />
• Configure database mail retention.
Overview of <strong>Database</strong> Mail<br />
Key Points<br />
<strong>Database</strong> Mail sends email through a Simple Mail Transport Protocol (SMTP) server. There must be an<br />
available SMTP server on the network that accepts the mail.<br />
Configuring <strong>Database</strong> Mail<br />
To enable and configure <strong>Database</strong> Mail accounts and profiles, use the <strong>Database</strong> Mail Configuration<br />
Wizard.<br />
While the configuration details for <strong>Database</strong> Mail are stored in the msdb database along with all other<br />
<strong>SQL</strong> Server Agent configuration data, <strong>SQL</strong> Server Agent caches profile information in memory, so that it is<br />
possible for <strong>SQL</strong> Server Agent to send email in situations where the <strong>SQL</strong> Server database engine is no<br />
longer available.<br />
<strong>Database</strong> Mail can be used to send email as part of a <strong>SQL</strong> Server Agent job, in response to an alert being<br />
raised, or on behalf of a user by the execution of the sp_send_dbmail system stored procedure.<br />
<strong>SQL</strong> Mail<br />
Note that for backwards compatibility, earlier versions of <strong>SQL</strong> Server included a feature called <strong>SQL</strong> Mail.<br />
<strong>SQL</strong> Mail was a Messaging Application Programming Interface (MAPI)–based email feature that you could<br />
use to configure <strong>SQL</strong> Server to send and receive email via Microsoft Exchange Server or other MAPI-based<br />
email servers. <strong>SQL</strong> Mail is not supported on <strong>SQL</strong> Server <strong>2012</strong>.
SMTP Relay<br />
Most SMTP servers today are configured, by default, to deny all email relay. A server that is configured to<br />
permit relay is willing to accept email from another server even though the target for the email is not in<br />
the mail server's domain. The mail server then forwards the email to its final destination.<br />
The refusal to relay email is important for avoiding spam-related issues. Email servers that do not have this<br />
protection are called "open relay" servers and are a target for misuse. Many companies maintain blacklists of<br />
email servers that regularly send spam. By relaying email via other servers, spammers make it appear<br />
that those other servers are sending the spam, rather than the server that actually originated it. Because the<br />
most common configuration for mail servers is to deny all relaying activity, the SMTP server must be<br />
configured to permit the relay of emails from <strong>SQL</strong> Server if necessary.<br />
Question: Why must mail administrators be included in discussions, when planning a<br />
database mail configuration?
<strong>Database</strong> Mail Profiles<br />
Key Points<br />
A <strong>Database</strong> Mail profile is a collection of <strong>Database</strong> Mail accounts. At least one <strong>Database</strong> Mail account is<br />
required. If more than one <strong>Database</strong> Mail account is defined for a profile, the accounts are used in a<br />
predefined order when attempting to send mail. The redundancy provided by the use of multiple<br />
accounts in a profile helps to improve overall reliability.<br />
Profiles can be private or public. Private profiles are strictly controlled and are only available to specified<br />
users or roles. By comparison, public profiles can be used by any user or role that has been given<br />
permission to use the profile.<br />
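Profile access is managed through stored procedures in the msdb database. For example, granting a user access to a profile and making it that user's default might look like the following (the profile and user names are illustrative):<br />

```sql
-- Grant the msdb user PromoteApp access to a profile
-- and make it that user's default profile
EXEC msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = N'Proseware Administrator',
    @principal_name = N'PromoteApp',
    @is_default = 1;
```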
Multiple Profiles<br />
It is possible to create multiple configurations by the use of different profiles. For example, a profile can be<br />
created to send mail to an internal SMTP server, using an internal email address, for mails sent by <strong>SQL</strong><br />
Server Agent. A second profile could be created for use by a database application that needs to send<br />
external email notifications to customers.<br />
Each database user might have access to several profiles. If no profile is specified when sending a mail, the<br />
default profile will be used. If both private and public profiles exist, precedence is given to a private<br />
default profile over a public default profile. If no default profile is specified or if a non-default profile<br />
should be used, the profile name must be specified as a parameter to the sp_send_dbmail system stored<br />
procedure as shown in the following code:<br />
EXEC msdb.dbo.sp_send_dbmail<br />
@profile_name = 'Proseware Administrator',<br />
@recipients = 'admin@AdventureWorks.com',<br />
@body = 'Daily backup completed successfully.',<br />
@subject = 'Daily backup status';<br />
Question: If a user has access to several profiles, which profile is used when no profile is<br />
specified?
<strong>Database</strong> Mail Security<br />
Key Points<br />
The choice of service account for the <strong>SQL</strong> Server service is important when configuring <strong>Database</strong> Mail. If<br />
<strong>SQL</strong> Server is configured to run as the Local Service account, it does not have permission to make<br />
outgoing network connections. In this case, <strong>Database</strong> Mail cannot contact an email server located on a<br />
different computer.<br />
<strong>Database</strong> Mail Stored Procedures<br />
To minimize the security surface of <strong>SQL</strong> Server, the system extended stored procedures that are used for<br />
<strong>Database</strong> Mail are disabled by default. When you run the <strong>Database</strong> Mail Configuration Wizard, the<br />
procedures are enabled for you. If you wish to configure <strong>Database</strong> Mail manually, you can enable the<br />
<strong>Database</strong> Mail system extended stored procedures by setting the sp_configure option "<strong>Database</strong> Mail XPs"<br />
to the value 1.<br />
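For manual configuration, the option can be set as follows (it is an advanced option, so "show advanced options" must be enabled first):<br />

```sql
-- Enable the Database Mail extended stored procedures
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
```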
Security and Attachment Limitations<br />
Not all <strong>SQL</strong> Server users are permitted to send emails. The ability to send emails is limited to members of<br />
the database role called <strong>Database</strong>MailUserRole in the msdb database. Members of the sysadmin fixed<br />
server role can also send database mail.<br />
You can also limit both the types and size of attachments that can be included in emails that are sent by<br />
<strong>Database</strong> Mail. This limitation can be configured using the <strong>Database</strong> Mail Configuration Wizard or by<br />
calling the dbo.sysmail_configure_sp system stored procedure in the msdb database.<br />
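For example, the attachment limits could be set as follows (the values shown are illustrative):<br />

```sql
-- Limit attachments to roughly 1 MB and block executable file types
EXEC msdb.dbo.sysmail_configure_sp 'MaxFileSize', '1000000';
EXEC msdb.dbo.sysmail_configure_sp 'ProhibitedExtensions', 'exe,dll,vbs,js';
```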
Question: Why can't database mail be used with a remote SMTP server when using the Local<br />
Service account for the database engine?
<strong>Database</strong> Mail Logs and Retention<br />
Key Points<br />
<strong>SQL</strong> Server logs messages in internal tables in the msdb database. Log messages can be viewed by<br />
querying the dbo.sysmail_log table. The level of logging that is carried out by <strong>SQL</strong> Server can be<br />
configured to one of following three levels:<br />
The available logging levels are:<br />
• Normal: only errors are logged.<br />
• Extended: errors, warnings, and informational messages are logged.<br />
• Verbose: as per Extended, plus success messages and a number of internal messages.<br />
You can configure the Logging Level parameter by using the Configure System Parameters dialog box of<br />
the <strong>Database</strong> Mail Configuration Wizard, or by calling the dbo.sysmail_configure_sp stored procedure in<br />
the msdb database.<br />
Verbose level should only be used for troubleshooting purposes as it can generate a large volume of log<br />
entries.
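For example, to set the logging level with the stored procedure:<br />

```sql
-- 1 = Normal, 2 = Extended (the default), 3 = Verbose
EXEC msdb.dbo.sysmail_configure_sp 'LoggingLevel', '2';
```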
<strong>Database</strong> Mail Tables and Views<br />
Internal tables in the msdb database are used to hold the email messages and the attachments that are<br />
sent from <strong>Database</strong> Mail, together with the current status of each message. <strong>Database</strong> Mail updates these<br />
tables as each message is processed.<br />
You can track the delivery status of an individual message by viewing information in the following views:<br />
• dbo.sysmail_allitems<br />
• dbo.sysmail_sentitems<br />
• dbo.sysmail_unsentitems<br />
• dbo.sysmail_faileditems<br />
To see details of email attachments, query the dbo.sysmail_mailattachments view.<br />
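Failed messages can be investigated by joining the failed-items view to the <strong>Database</strong> Mail event log; a sketch:<br />

```sql
-- Show failed messages together with the reason for failure
SELECT f.mailitem_id, f.recipients, f.subject,
       f.send_request_date, l.description
FROM msdb.dbo.sysmail_faileditems AS f
JOIN msdb.dbo.sysmail_event_log AS l
    ON f.mailitem_id = l.mailitem_id
ORDER BY f.send_request_date DESC;
```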
Retention Policy<br />
<strong>Database</strong> Mail retains outgoing messages and their attachments in the msdb database. This means that<br />
there is a need to plan a retention policy for email messages and log entries. If the volume of <strong>Database</strong><br />
Mail messages and related attachments is high, plan for substantial growth of the msdb database.<br />
Periodically delete messages to regain space and to comply with your organization's document retention<br />
policies. For example, the example in the slide shows how to delete messages, attachments, and log<br />
entries that are more than one month old. You could schedule these commands to be executed<br />
periodically by creating a <strong>SQL</strong> Server Agent job.
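A cleanup of this kind might be sketched as follows, deleting mail items (and their attachments) and log entries more than one month old:<br />

```sql
-- Delete mail items, their attachments, and log entries
-- that are more than one month old
DECLARE @CutoffDate datetime = DATEADD(MONTH, -1, GETDATE());
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before = @CutoffDate;
EXEC msdb.dbo.sysmail_delete_log_sp @logged_before = @CutoffDate;
```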
Demonstration 1A: Configuring <strong>Database</strong> Mail<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_15_PRJ\10775A_15_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lesson 2<br />
Monitoring <strong>SQL</strong> Server Errors<br />
It is important to understand the core aspects of errors as they apply to <strong>SQL</strong> Server. In particular, you need<br />
to consider:<br />
• The nature of errors.<br />
• The locations where errors can occur when T-<strong>SQL</strong> code is being executed.<br />
• The data that is returned by errors.<br />
• The severities that errors can exhibit.<br />
Severe <strong>SQL</strong> Server errors are recorded in the <strong>SQL</strong> Server Error Log. It is important to know how to<br />
configure the log.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe what an error is.<br />
• Describe error severity levels.<br />
• Configure the <strong>SQL</strong> Server error log.
What Is in an Error?<br />
Key Points<br />
An error is itself an object and has properties, as shown in the table on the slide.<br />
Error Attributes<br />
It might not be immediately obvious that a <strong>SQL</strong> Server error (sometimes called an exception) is itself an<br />
object. Errors return a number of useful properties (or attributes).<br />
Error numbers are helpful when trying to locate information about a specific error, particularly when<br />
searching online.<br />
You can view the list of system-supplied error messages by querying the sys.messages catalog view:<br />
SELECT * FROM sys.messages<br />
ORDER BY message_id, language_id;<br />
When executed, this query returns one row for each message in each language.<br />
Note that there are multiple messages with the same message_id. Error messages are localizable and can<br />
be returned in a number of languages. A language_id of 1033 indicates the English version of a<br />
message.<br />
Severity indicates how serious the error is. It is described further in the next topic.
State is defined by the author of the code that raised the error. For example, if you were writing a stored<br />
procedure that could raise an error for a missing customer and there were five places in the code that this<br />
message could occur, you could assign a different state to each of the places where the message was<br />
raised. This would help later to troubleshoot the error.<br />
Procedure name is the name of the stored procedure that the error occurred in, and Line Number is the<br />
location within that procedure. In practice, line numbers are not very helpful and are not always applicable.<br />
Question: In which language is an error raised?
Error Severity<br />
Key Points<br />
The severity of an error indicates the type of problem encountered by <strong>SQL</strong> Server. Low severity values are<br />
informational messages and do not indicate true errors. Error severities occur in ranges.<br />
Values from 0 to 10<br />
Values from 0 to 9 are purely informational messages. When queries that raise these messages are executed in <strong>SQL</strong><br />
Server Management Studio, the information is returned but no error status information is provided. For<br />
example, consider the following code executed against the AdventureWorks database:<br />
SELECT COUNT(Color) FROM Production.Product;<br />
When executed, it returns a count as expected. However, if you look on the Messages tab in <strong>SQL</strong> Server<br />
Management Studio, you will see the following:<br />
Warning: Null value is eliminated by an aggregate or other SET operation.<br />
(1 row(s) affected)<br />
No error actually occurred; <strong>SQL</strong> Server is simply warning you that it ignored NULL values when<br />
counting the rows, and no error status information is returned.<br />
Severity 10 is the highest of the informational levels.<br />
Values from 11 to 16<br />
Values from 11 to 16 are considered errors that the user can correct. Typically they are used for errors<br />
where <strong>SQL</strong> Server assumes that the statement being executed was in error.
Here are a few examples of these errors:<br />
• Severity 11: an object does not exist<br />
• Severity 13: a transaction deadlock<br />
• Severity 14: errors such as permission denied<br />
• Severity 15: syntax errors<br />
Values from 17 to 19<br />
Values from 17 to 19 are considered serious software errors that the user cannot correct. For example,<br />
severity 17 indicates that <strong>SQL</strong> Server has run out of resources (memory, disk space, locks, etc.).<br />
Values above 19<br />
Values above 19 tend to be very serious errors that normally involve errors with either the hardware or<br />
<strong>SQL</strong> Server itself. It is common to ensure that all errors above 19 are logged and alerts generated on<br />
them.<br />
Question: In which of the error number ranges shown on the slide, would you expect to see<br />
a syntax error?
Configuring the <strong>SQL</strong> Server Error Log<br />
Key Points<br />
Important messages (particularly those that would be considered as severe error messages) are logged to<br />
both the Windows Application Event Log and <strong>SQL</strong> Server Error Log. The sys.messages view shows the<br />
available error messages and indicates which messages will be logged by default. You can control the<br />
logging behavior of individual messages by using the sp_altermessage system stored procedure.<br />
The <strong>SQL</strong> Server Error Log is located by default at:<br />
Program Files\Microsoft <strong>SQL</strong> Server\MS<strong>SQL</strong>11.[instance_name]\MS<strong>SQL</strong>\LOG\ERRORLOG<br />
The log files are named ERRORLOG.n where n is the log file number. The log files are text files and can be<br />
viewed using any text editor or by using the Log Viewer provided in SSMS.<br />
By default, <strong>SQL</strong> Server retains backups of the previous six logs and gives the most recent log backup the<br />
extension .1, the second most recent the extension .2, and so on. The current error log has no extension.<br />
The number of log files that should be retained can be configured in SSMS using the right-click Configure<br />
option from the <strong>SQL</strong> Server Logs node in Object Explorer.<br />
Recycling Log Files<br />
The log file cycles with every restart of the <strong>SQL</strong> Server instance. On occasions, you might want to remove<br />
excessively large log files. By using the system stored procedure sp_cycle_errorlog, you can close the<br />
existing log file and open a new log file on demand. If there is a regular need to recycle the log file, you<br />
could create a <strong>SQL</strong> Server Agent job to execute the sp_cycle_errorlog system stored procedure on a<br />
schedule. Cycling the log helps to prevent the current error log from becoming too large.
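Such an Agent job could be sketched as follows (the job and schedule names are illustrative):<br />

```sql
-- Weekly SQL Server Agent job that recycles the error log
EXEC msdb.dbo.sp_add_job
    @job_name = N'Cycle Error Log';
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Cycle Error Log',
    @step_name = N'Recycle',
    @subsystem = N'TSQL',
    @command = N'EXEC sys.sp_cycle_errorlog;';
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'Cycle Error Log',
    @name = N'Weekly - Sunday',
    @freq_type = 8,               -- weekly
    @freq_interval = 1,           -- Sunday
    @freq_recurrence_factor = 1,  -- every week
    @active_start_time = 0;       -- midnight
EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Cycle Error Log';
```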
Demonstration 2A: Cycling the Error Log<br />
Demonstration Setup<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_15_PRJ\10775A_15_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 21 – Demonstration 2A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lesson 3<br />
Configuring Operators, Alerts and Notifications<br />
Earlier in this module, you have seen that it is important for <strong>SQL</strong> Server to be able to send messages to an<br />
administrator when events that need administrative attention occur.<br />
Many <strong>SQL</strong> Server systems will have a number of administrators. <strong>SQL</strong> Server Agent allows you to configure<br />
Operators that are associated with one or more administrators, and to determine when and how each<br />
operator should be contacted.<br />
<strong>SQL</strong> Server can also detect many situations that might be of interest to administrators. You can configure<br />
Alerts that are based on <strong>SQL</strong> Server errors or on system events such as low disk space availability. <strong>SQL</strong><br />
Server can then be configured to notify you of these situations.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe the role of Operators in <strong>SQL</strong> Server Agent.<br />
• Implement <strong>SQL</strong> Server alerts.<br />
• Create alerts.<br />
• Configure actions that need to occur in response to alerts.<br />
• Troubleshoot alerts and notifications.
<strong>SQL</strong> Server Agent Operator Overview<br />
Key Points<br />
An Operator in <strong>SQL</strong> Server Agent is an alias for a person or a group of people that can receive electronic<br />
notifications when jobs complete or when alerts are raised.<br />
Note Operators do not need to be Windows logins, <strong>SQL</strong> Server logins, or database users.<br />
For example, you could create an operator that is a reference to a pager address.<br />
<strong>SQL</strong> Server Agent jobs can be configured to send messages when a job completes, when a job completes<br />
successfully, or when a job fails.<br />
You can define new operators using either SSMS or the dbo.sp_add_operator system stored procedure.<br />
Once an operator is defined, the definition for the operator is visible through the dbo.sysoperators system<br />
table in the msdb database.<br />
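For example, an operator with a group email address could be defined as follows (the name and address are illustrative):<br />

```sql
-- Create an operator that notifies a team mailbox
EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dba.team@adventure-works.com';
```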
Contacting An Operator<br />
You can configure three types of contact methods for each operator:<br />
• Email: SMTP email address that notifications should be sent to. It is desirable to use group email<br />
addresses rather than individual email addresses where possible. It is possible to list multiple email<br />
addresses by separating them with a semicolon.<br />
• Pager Email: SMTP email address that a message is sent to during specified times (and days) during a<br />
week.
• Net Send address: Messenger address that a message is sent to.<br />
Note The use of Net Send for notifications is deprecated and should not be used for new<br />
development as it will be removed in a future version of <strong>SQL</strong> Server. The Net Send option is<br />
not useful as it depends upon the Messenger service in Microsoft Windows®. That service is<br />
generally disabled on current systems.<br />
A fail-safe operator can be defined to respond to an alert when pager notifications to other operators fail<br />
because of time limitations that have been configured. For example, if all operators are off duty when an<br />
alert is fired, the fail-safe operator will be contacted.<br />
Existing Active Directory® users and groups can be used as operator groups if they are mail enabled<br />
groups.
Demonstration 3A: Configuring <strong>SQL</strong> Server Agent Operators<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_15_PRJ\10775A_15_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Overview of <strong>SQL</strong> Server Alerts<br />
Key Points<br />
There are many events that can occur in a <strong>SQL</strong> Server system that are of interest to administrators. An<br />
Alert is a <strong>SQL</strong> Server object that defines a condition that requires attention and a response that should be<br />
taken when the event occurs. You can define alerts to execute a job or to notify an operator when a<br />
particular event occurs or even when a performance threshold is exceeded.<br />
<strong>SQL</strong> Server Alerts<br />
Events are generated by <strong>SQL</strong> Server and entered into the Windows Application Event Log. On startup, <strong>SQL</strong><br />
Server Agent registers itself as a callback service with the Windows Application Event log. This means that<br />
<strong>SQL</strong> Server Agent will be directly notified by the application log when events of interest occur. This<br />
callback mechanism operates efficiently as it means that <strong>SQL</strong> Server Agent does not need to continuously<br />
read (or more formally "poll") the application log to find events of interest.<br />
When <strong>SQL</strong> Server Agent is notified of a logged event, it compares the event to the alerts that have been<br />
defined. When <strong>SQL</strong> Server Agent finds a match, it fires an alert, which is an automated response to an<br />
event.<br />
Note The error message must be written to the Windows Application Log to be used for<br />
<strong>SQL</strong> Server Agent Alerts.
Alerts Actions (Responses)<br />
You can create alerts to respond to individual error numbers or to respond to all errors of a specific<br />
severity level. You can define the alert for all databases or for a specific database. You can define the time<br />
delay between responses.<br />
Note It is considered good practice to configure notifications for all error messages with<br />
Severity Level 19 and above.<br />
System Events
In addition to monitoring SQL Server events, SQL Server Agent can also monitor conditions that can be detected through Windows Management Instrumentation (WMI) events. The WQL queries that are written to retrieve the performance data are executed a few times each minute, so it can take a few seconds for these alerts to fire.
Performance condition alerts can also be configured on any of the performance counters that SQL Server exposes.
Question: What events are you familiar with that should have a configured alert?
15-24 Monitoring SQL Server 2012 with Alerts and Notifications
Create an Alert
Key Points
Alerts are created by using the GUI in SSMS or by calling the dbo.sp_add_alert system stored procedure. When defining an alert, you can also specify a SQL Server Agent job that should be started when the alert occurs. In the example on the slide, the job ID of a SQL Server Agent job that has already been created has been added to the definition of the alert. The job will be started when the alert fires.
The action that SQL Server Agent takes in response to the event or performance condition may include contacting an operator.
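As a minimal sketch (the alert and job names here are illustrative, not part of the course files), an alert with an associated job can be created as follows:

```sql
-- Fire an alert for any severity 17 error and start an existing
-- SQL Server Agent job in response. Both names are hypothetical.
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 17 - Insufficient Resources',
    @severity = 17,
    @delay_between_responses = 60,   -- seconds between repeated responses
    @job_name = N'Capture Diagnostic Data';
```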
Logged Events
You have seen that alerts will only fire for SQL Server errors if the error messages are written to the Microsoft Windows Application Event Log. In general, errors with severity levels from 19 to 25 are automatically written to the Application Log, but this is not always the case. To check which messages are automatically written to the log, query the is_event_logged column in the sys.messages catalog view.
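For example, the logged messages can be listed with a query such as the following (1033 is the language ID for English):

```sql
-- List the messages that SQL Server writes to the Application Log
-- automatically, and that can therefore fire alerts without changes.
SELECT message_id, severity, text
FROM sys.messages
WHERE language_id = 1033
  AND is_event_logged = 1
ORDER BY severity, message_id;
```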
Most events with severity levels less than 19 will only trigger alerts if you have used one of the following options:
• Modified the error message by using the dbo.sp_altermessage system stored procedure to make the error message a logged message.
• Raised the error in code by using the RAISERROR WITH LOG option.
• Used the xp_logevent system extended stored procedure to force entries to be written to the log.
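The three options can be sketched as follows; the message number 50001, the error number 60001, and the message text are illustrative values only:

```sql
-- 1. Make an existing user-defined message a logged message.
EXEC dbo.sp_altermessage 50001, 'WITH_LOG', 'true';

-- 2. Raise an ad hoc error and force it into the Application Log.
RAISERROR (N'Nightly load failed.', 16, 1) WITH LOG;

-- 3. Write an entry directly to the log (user-defined error
--    numbers must be greater than 50000).
EXEC master.dbo.xp_logevent 60001, N'Nightly load failed.', 'ERROR';
```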
Question: What type of alert would be needed to monitor free space in the file system?
Configuring Alert Actions
Key Points
When an alert fires, there are two actions that can be configured in response:
• Execute a job
• Notify operators
Execute a Job
The execution of a SQL Server Agent job can be configured as a response to an alert. Only one job can be started. However, if you need to start multiple jobs when an alert occurs, create a new job that executes the other jobs, and then configure the new job to respond to the alert.
The job to be executed can be configured when first creating the alert by using dbo.sp_add_alert, or by calling the dbo.sp_update_alert system stored procedure after the alert has been created.
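For example, a job can be attached to an alert after the fact with a call such as the following (both names are illustrative):

```sql
-- Attach an existing job to an existing alert.
EXEC msdb.dbo.sp_update_alert
    @name = N'Severity 17 - Insufficient Resources',
    @job_name = N'Capture Diagnostic Data';
```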
Notify Operators
Unlike the configuration of a job to run as part of the configuration of an alert, the list of operators to be notified when an alert fires is defined by using the dbo.sp_add_notification system stored procedure.
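A sketch of such a call, using illustrative alert and operator names:

```sql
-- The notification method is a bitmask: 1 = e-mail, 2 = pager,
-- 4 = net send; 3 therefore means e-mail and pager.
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 17 - Insufficient Resources',
    @operator_name = N'Jeff Hay',
    @notification_method = 3;
```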
When sending messages to operators about alerts, it is important to provide the operator with sufficient context about the alert so that they can determine the appropriate action to take. Tokens can be included in messages to add detail. The special tokens available for working with alerts are shown in the following table:

Token   Description
A-DBN   Database name
A-SVR   Server name
A-ERR   Error number
A-SEV   Error severity
A-MSG   Error message

Note that, for security reasons, this feature is disabled by default; it can be enabled in the properties of SQL Server Agent.
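As a sketch, a T-SQL job step that runs in response to an alert can reference the tokens through the escape macros (this assumes token replacement has been enabled as described above):

```sql
-- At run time, SQL Server Agent substitutes each token with the
-- details of the alert that started the job.
PRINT N'Error $(ESCAPE_SQUOTE(A-ERR)) (severity $(ESCAPE_SQUOTE(A-SEV))) '
    + N'in database $(ESCAPE_SQUOTE(A-DBN)) on server $(ESCAPE_SQUOTE(A-SVR)): '
    + N'$(ESCAPE_SQUOTE(A-MSG))';
```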
Question: If notifications should be sent to a pager email address, what else should be configured?
Troubleshooting Alerts and Notifications
Key Points
When troubleshooting alerts and notifications, use the following process to identify the issue:

Step: Ensure that SQL Server Agent is running.
Description: The Application Log will only send messages to SQL Server Agent while the Agent is running. The Application Log does not hold a queue of notifications to be made at a later time.

Step: Check that the error message is written to the Application Log.
Description: For SQL Server event alerts, check that the error message is written to the Application Log, and also make sure that the Application Log is configured with sufficient size to hold all event log details.

Step: Ensure that the alert is enabled.
Description: Alerts can be enabled or disabled, and will not fire when disabled.

Step: Check that the alert was raised.
Description: If the alert does not appear to have been raised, make sure that the setting for the delay between responses is not set to too high a value.
Step: If the alert was raised but no action was taken.
Description: Check that the job configured to respond to the alert functions as expected. For operator notifications, check that Database Mail is working and that the SMTP server configuration is correct. Test the Database Mail profile that is being used to send notifications by manually sending mail from the profile used by SQL Server Agent.
Question: Why might an error message not be written to the Application Log?
Demonstration 3B: Configuring Alerts and Notifications
Demonstration Steps
1. If Demonstration 1A or 3A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and then click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_15_PRJ\10775A_15_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
• Open the script file 11 – Demonstration 1A.sql and follow the instructions in that file.
• Open the script file 31 – Demonstration 3A.sql and follow the instructions in that file.
2. Open the 32 – Demonstration 3B.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lab 15: Monitoring SQL Agent Jobs with Alerts and Notifications
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_15_PRJ\10775A_15_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
You have configured automated management tasks using SQL Server Agent and have configured security for those tasks. You now need to configure alerts and notifications for your Proseware system. The IT Support team at AdventureWorks has a defined escalation policy for SQL Server systems. As Proseware is part of the group of companies owned by AdventureWorks, you need to implement the relevant parts of this policy.
The IT Support team has supplied you with details from the policy that they have determined are needed for your Proseware server. For some automated tasks, notifications need to be sent every time the tasks are completed, whether or not the tasks work as expected. For other tasks, notifications only need to be sent if the tasks fail for some reason.
Notifications at AdventureWorks are pager-based. You need to configure Database Mail within SQL Server so that SQL Server Agent can send notification emails to the pager system. There are two on-call DBAs allocated to your system from the AdventureWorks IT Support team. You need to configure these staff members as operators based on their current on-call work schedules and also configure a fail-safe operator for any time period where no team member is working.
If you have enough time, you should also configure SQL Server to alert you if severe errors occur on the server.
Supporting Documentation
Database Mail Configuration Parameters
Profile Name: Proseware SQL Server Agent Profile

Main Account:
  Account Name: Proseware Administrator
  E-mail Address: prosewaresqladmin@adventureworks.com
  Display Name: Proseware SQL Server Administrator
  Reply E-mail: prosewaresqladmin@adventureworks.com
  Server Name: mailserver.adventureworks.com

Fail-safe Account:
  Account Name: AdventureWorks Administrator
  E-mail Address: adventureworkssqladmin@adventureworks.com
  Display Name: AdventureWorks SQL Server Administrator
  Reply E-mail: adventureworkssqladmin@adventureworks.com
  Server Name: mailserver.adventureworks.com

Public Profiles: Configure Proseware SQL Agent Profile as public and as default.
Private Profiles: Configure the profile as the default profile for the SQL Server Agent service account (AdventureWorks\PWService).
Maximum E-mail File Size: 4 MB
On-call DBA Operator Requirements
• Senior DBA Jeff Hay is on call via pager jeff.hay.pager@adventureworks.com 24 hours per day, seven days per week.
• DBA Palle Petersen is on call via pager palle.petersen.pager@adventureworks.com 24 hours per day, seven days per week.
• Although there should always be a DBA on call, a fail-safe pager address itsupport.pager@adventureworks.com should be configured. The IT Support operator should also be available 24 hours per day, seven days per week.
Job Notification Requirements
• Backup-related jobs must send notifications on completion, not just on failure. Notifications for backup-related jobs should be sent to Jeff Hay.
• System jobs do not need to send any notifications. System jobs are identified by the prefix sys.
• All other jobs should notify on failure only. Notifications for other jobs should be sent to Palle Petersen.
Severe Error Alerting Requirements
• Any error of severity 17 or 18 should be notified to all operators via pager.
• Error 9002 on any database should be notified to all operators via pager.
Exercise 1: Configure Database Mail
Scenario
Notifications at AdventureWorks are pager-based. You need to configure Database Mail within SQL Server so that SQL Server Agent can send notification emails to the pager system.
The main tasks for this exercise are as follows:
1. Configure Database Mail.
2. Test that Database Mail operates.
Task 1: Configure Database Mail
• Review the Database Mail configuration parameters in the supporting documentation for the exercise.
• Configure Database Mail as per the supplied parameters.
Task 2: Test that Database Mail operates
• Send a test email by using the right-click option on the Database Mail node in Object Explorer.
• From Solution Explorer, open and execute the script file 51 – Lab Exercise 1.sql to view outgoing mail items.
Results: After this exercise, you should have configured and tested Database Mail.
Exercise 2: Implement Notifications
Scenario
The IT Support team at AdventureWorks has a defined escalation policy for SQL Server systems. As Proseware is part of the group of companies owned by AdventureWorks, you need to implement the relevant parts of this policy.
The IT Support team has supplied you with details from the policy that they have determined are needed for your Proseware server. For some automated tasks, notifications need to be sent every time the tasks are completed, whether or not the tasks work as expected. For other tasks, notifications only need to be sent if the tasks fail for some reason.
Notifications at AdventureWorks are pager-based. There are two on-call DBAs allocated to your system from the AdventureWorks IT Support team. You need to configure these staff members as operators based on their current on-call work schedules and also configure a fail-safe operator for any time period where no team member is working.
The main tasks for this exercise are as follows:
1. Review the requirements.
2. Configure the required operators.
3. Configure SQL Server Agent Mail.
4. Configure and test notifications in SQL Server Agent jobs.
Task 1: Review the requirements
• Review the supplied requirements in the supporting documentation for the exercise. In particular, note any required operators.
Task 2: Configure the required operators
• Configure the operators that you determined were required in Task 1. The supporting documentation includes details of how the operators need to be configured.
Task 3: Configure SQL Server Agent Mail
• Configure SQL Server Agent to use the mail profile that you created in Exercise 1.
• Configure SQL Server Agent to use the IT Support fail-safe operator that you configured in Task 2.
Task 4: Configure and test notifications in SQL Server Agent jobs
• Configure notifications for jobs as per the requirements in the supporting documentation.
• Test the notifications by executing all non-system jobs and reviewing the mail items sent.
Results: After this exercise, you should have configured SQL Server Agent operators and job notifications.
Challenge Exercise 3: Implement Alerts (Only if time permits)
Scenario
If you have enough time, you should also configure SQL Server to alert you if severe errors occur on the server.
The main task for this exercise is as follows:
1. Configure and test alerts.
Task 1: Configure and test alerts
• Review the supporting documentation for the alerting requirements.
• Configure the required alerts.
• Execute the script 71 – Lab Exercise 3.sql to test the alerting functionality.
Note: The script will return error 9002.
Results: After this exercise, you should have configured and tested SQL Server alerts.
Module Review and Takeaways
Review Questions
1. What is an operator in SQL Server Agent terminology?
2. What is the lowest error severity that appears as an error message in SSMS?
Best Practices
1. Use Database Mail and not SQL Mail.
2. Configure different profiles for different usage scenarios.
3. Limit access to the ability to send emails from the database engine.
4. Implement a retention policy for the Database Mail log and mail auditing.
5. Create operators to send notifications about jobs and alerts.
6. Define alerts for severe error messages.
Module 16
Performing Ongoing Database Maintenance
Contents:
Lesson 1: Ensuring Database Integrity 16-3
Lesson 2: Maintaining Indexes 16-12
Lesson 3: Automating Routine Database Maintenance 16-26
Lab 16: Performing Ongoing Database Maintenance 16-30
Module Overview
The Microsoft® SQL Server® database engine is capable of running indefinitely without any ongoing maintenance. However, obtaining the best outcomes from the database engine requires a schedule of routine maintenance operations.
Database corruption is relatively rare, but one of the most important tasks in the ongoing maintenance of a database is to check that no corruption has occurred. Recovering from corruption depends upon detecting the corruption soon after it occurs.
SQL Server indexes can also continue to work without any maintenance, but they will perform better if any fragmentation that occurs within them is periodically removed.
SQL Server includes a Maintenance Plan Wizard to assist in creating SQL Server Agent jobs that perform these and other ongoing maintenance tasks.
Objectives
After completing this module, you will be able to:
• Ensure database integrity.
• Maintain indexes.
• Automate routine database maintenance.
Lesson 1
Ensuring Database Integrity
It is particularly rare for the database engine itself to cause corruption directly. However, the database engine depends upon the hardware platform that it runs on, and that platform can cause corruption. In particular, issues in the memory and I/O subsystems can lead to corruption within databases.
If you do not detect corruption soon after it has occurred, further (and significantly more complex or troublesome) issues can arise. For example, there is little point attempting to recover a corrupt database from a set of backups where every backup contains a corrupted copy of the database.
The DBCC CHECKDB command can be used to detect, and in some circumstances correct, database corruption. It is important that you are familiar with how DBCC CHECKDB is used.
Objectives
After completing this lesson, you will be able to:
• Use DBCC CHECKDB.
• Explain the most common DBCC CHECKDB options.
• Explain how to use the DBCC CHECKDB repair options.
Discussion: Ensuring Database Integrity
Discussion Topics
Question: What is database integrity?
Question: What techniques are you currently using to check and maintain database integrity?
Overview of DBCC CHECKDB
Key Points
DBCC is a utility supplied with SQL Server that provides a large number of management facilities. In earlier documentation, you may see it referred to as the Database Consistency Checker. While checking the consistency of databases by using the CHECKDB option is a primary use of DBCC, it has many other uses. In current versions of the product, it is referred to as the Database Console Commands utility, to more closely reflect the wider variety of tasks that it can be used for.
DBCC CHECKDB
The CHECKDB option in the DBCC utility makes a particularly thorough check of the structure of a database, to detect almost all forms of potential corruption. The series of functions that are contained within DBCC CHECKDB are also available as options that can be performed separately if required.
The most important of these options are shown in the following table:

DBCC CHECKALLOC: Checks the consistency of disk space allocation structures for a specified database.
DBCC CHECKTABLE: Checks the pages associated with a specified table and the pointers between pages that are associated with the table. DBCC CHECKDB executes DBCC CHECKTABLE for every table in the database.
DBCC CHECKCATALOG: Checks the database catalog by performing logical consistency checks on the metadata tables in the database. These metadata tables hold information that describes system tables, user tables, and other database objects. DBCC CHECKCATALOG does not check user tables.
DBCC CHECKDB also performs checks on other types of objects, such as the links for FILESTREAM objects, and consistency checks on Service Broker objects.
Note: FILESTREAM and Service Broker are advanced topics that are out of scope for this course.
Repair Options
Even though DBCC CHECKDB has repair options, it is not always possible to repair a database without data loss. Usually, the best option for database recovery is to restore the database. This means that the execution of DBCC CHECKDB should be synchronized with your backup retention policy, to make sure that you can always restore from an uncorrupted backup of the database and that all required log backups since that time are available.
Online Concurrent Operations
DBCC CHECKDB can take a long time to execute and consumes considerable I/O and CPU resources. For this reason, DBAs often need to run it while the database is in use.
In versions of SQL Server prior to SQL Server 2005, it was possible to receive indications of corruption where no corruption was present if DBCC CHECKDB was executed while the database was being used concurrently by other users. Since SQL Server 2005, DBCC CHECKDB operates using internal database snapshots to make sure that the utility works with a consistent view of the database. If DBCC CHECKDB reports corruption, it needs to be investigated.
If the performance requirements of the database activity that must run while DBCC CHECKDB is executing are too high, running DBCC CHECKDB against a restored backup of your database is a better (but far from ideal) option than not running DBCC CHECKDB at all.
Disk Space
The use of an internal snapshot causes DBCC CHECKDB to need additional disk space. DBCC CHECKDB creates hidden files (using NTFS alternate streams) on the same volumes where the database files are located. Sufficient free space on those volumes must be available for DBCC CHECKDB to run successfully. The amount of disk space required depends upon how much data is changed during the execution of DBCC CHECKDB.
DBCC CHECKDB also uses space in tempdb while executing. To provide an estimate of the amount of space required in tempdb, DBCC CHECKDB offers an ESTIMATEONLY option.
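For example (the database name is illustrative):

```sql
-- Report the tempdb space a full check would need, without
-- actually performing the consistency checks.
DBCC CHECKDB (N'AdventureWorks') WITH ESTIMATEONLY;
```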
Backups and DBCC CHECKDB
It is considered a good practice to run DBCC CHECKDB on a database prior to performing a backup of the database. This check helps to ensure that the backup contains a consistent version of the database.
Question: Why is it vital to run DBCC CHECKDB regularly?
DBCC CHECKDB Options
Key Points
DBCC CHECKDB provides a number of options that alter its behavior while it is executing.
• The PHYSICAL_ONLY option is often used on production systems because it substantially reduces the time taken to run DBCC CHECKDB on large databases. If you regularly use the PHYSICAL_ONLY option, you still need to periodically run the full version of the utility. How often you perform the full check depends upon specific business requirements.
• The NOINDEX option specifies that intensive checks of nonclustered indexes on user tables should not be performed. This also decreases the overall execution time, but does not affect system tables because integrity checks are always performed on system table indexes. The assumption that you are making when using the NOINDEX option is that you can rebuild the nonclustered indexes if they become corrupt.
• The EXTENDED_LOGICAL_CHECKS option can only be used when the database is at database compatibility level 100 (SQL Server 2008) or above. It performs detailed checks of the internal structure of objects such as CLR user-defined data types and spatial data types.
• The TABLOCK option requests that DBCC CHECKDB take a table lock on each table while performing consistency checks, rather than using internal database snapshots. This reduces the disk space requirements at the cost of preventing other users from updating the tables.
• The ALL_ERRORMSGS and NO_INFOMSGS options affect only the output from the command, not the operations performed by it.
• The ESTIMATEONLY option estimates the space requirements in tempdb, as discussed in the previous topic.
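As an illustrative sketch (hypothetical database name), several of these options are commonly combined for routine production checks:

```sql
-- Faster check of physical structures only, suppressing
-- informational messages so that only problems are reported.
DBCC CHECKDB (N'AdventureWorks') WITH PHYSICAL_ONLY, NO_INFOMSGS;
```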
Question: Which DBCC CHECKDB option might be used on very large production systems?
DBCC CHECKDB Repair Options
Key Points
As well as providing details of errors that have been found, the output of DBCC CHECKDB shows the repair option that would be needed to correct the problem. In the example on the slide, four consistency errors were found, and the REPAIR_ALLOW_DATA_LOSS option would be needed to repair the database.
Repair Options
DBCC CHECKDB offers two repair options. For both options, the database needs to be in single-user mode. The options are:
• REPAIR_REBUILD rebuilds indexes. This option only works with certain mild forms of corruption and does not involve data loss.
• REPAIR_ALLOW_DATA_LOSS will almost always produce data loss. It deallocates the corrupt pages and changes other pages that reference the corrupt pages. After the operation is complete, the database will be consistent, but only from a physical database integrity point of view. Significant loss of data could have occurred. Repair operations also do not consider any of the constraints that may exist on or between tables. If a repaired table is involved in one or more constraints, it is recommended that you execute DBCC CHECKCONSTRAINTS after the repair operation is complete.
The use of DBCC CHECKDB is shown in Demonstration 1A.
You should back up a database before performing any repair option. Repairing a database should be an option of last resort. When a database is corrupt, it is typically better to restore the database from a backup, after solving the cause of the corruption. Unless you find and resolve the reason for the corruption, it may well happen again soon after. Corruption in SQL Server databases is mostly caused by failures in I/O or memory subsystems.
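A minimal sketch of a last-resort repair, using an illustrative database name:

```sql
USE master;
-- Repairs require single-user mode; back up the database first.
ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'AdventureWorks', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE AdventureWorks SET MULTI_USER;
USE AdventureWorks;
DBCC CHECKCONSTRAINTS;  -- verify constraint integrity after the repair
```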
Transaction Log Corruption
If the transaction log becomes corrupt, a special option called an emergency mode repair can be attempted, but it is strongly recommended that you restore the database in that situation. An emergency mode repair should only be used when no backup is available.
Question: Why would it be preferable to restore a database rather than use REPAIR_ALLOW_DATA_LOSS?
Demonstration 1A: Using DBCC CHECKDB
Demonstration Steps
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_16_PRJ\10775A_16_PRJ.ssmssln and click Open.
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
4. Open the 11 – Demonstration 1A.sql script file.
5. Follow the instructions contained within the comments of the script file.
Lesson 2
Maintaining Indexes
Another important aspect of SQL Server that requires ongoing maintenance for optimal performance is the management of indexes. Indexes are used to speed up operations where SQL Server needs to access data in a table. Over time, indexes can become fragmented, and the performance of database applications that use the indexes will be reduced. Defragmenting or rebuilding the indexes will restore the performance of the database.
Index management options are often included in regular database maintenance plan schedules. Before learning how to set up the maintenance plans, it is important to understand more about how indexes work and how they are maintained.
Objectives
After completing this lesson, you will be able to:
• Describe how indexes affect performance.
• Describe the different types of SQL Server indexes.
• Describe how indexes become fragmented.
• Use FILLFACTOR and PAD_INDEX.
• Explain the ongoing maintenance requirements for indexes.
• Implement online index operations.
• Describe how statistics are created and used by SQL Server.
How Indexes Affect Performance
Key Points
SQL Server can access data in a table by reading all the pages of the table (known as a table scan) or by using index pages to locate the required rows.
Indexes
Whenever SQL Server needs to access data in a table, it decides whether to read all the pages of the table or whether there are one or more indexes on the table that would reduce the effort required to locate the required rows.
Queries can always be resolved by reading the underlying table data. Indexes are not required, but accessing data by reading large numbers of pages is usually considerably slower than methods that use appropriate indexes.
Indexes can help to improve searching, sorting, and join performance, but they can impact data modification performance, they require ongoing management, and they require additional disk space.
On occasion, SQL Server will create its own temporary indexes to improve query performance. However, doing so is up to the optimizer and beyond the control of the database administrator or programmer, so these temporary indexes will not be discussed in this module. Temporary indexes are only used to improve a query plan when no suitable index already exists.
In this module, you will consider standard indexes created on tables. SQL Server includes other types of indexes:
• Integrated full-text search (iFTS) uses a special type of index that provides flexible searching of text.
• Spatial indexes are used with the GEOMETRY and GEOGRAPHY data types.
• Primary and secondary XML indexes assist when querying XML data.
• Columnstore indexes are typically used in large data warehouses. The tables essentially become read-only while the columnstore indexes are in place.
Note iFTS, spatial, XML, and columnstore indexes are out of scope for this course, but iFTS, spatial, and XML indexes are described in course 10776A: Developing Microsoft SQL Server 2012 Databases, along with greater detail on standard clustered and nonclustered indexes.
Question: When might a table scan be more efficient than using an index?
Types of SQL Server Indexes
Key Points
Rather than storing rows of data as a heap, tables can be designed with an internal logical ordering. This is known as a clustered index.
Clustered Index
A table with a clustered index has a predefined order for rows within a page and for pages within the table. The order is based on a key made up of one or more columns. The key is commonly called a clustering key.
Because the rows of a table can only be in a single order, there can be only a single clustered index on a table. An Index Allocation Map entry is used to point to a clustered index, and a clustered index always has index id = 1.
There is a common misconception that pages in a clustered index are "physically stored in order". While this is possible in rare situations, it is not commonly the case. If it were true, fragmentation of clustered indexes would not exist. SQL Server tries to align physical and logical order while creating an index, but disorder can arise as data is modified.
Index and data pages are linked within a logical hierarchy and also double-linked across all pages at the same level of the hierarchy to assist when scanning across an index. For example, imagine a table with ten extents and with allocated page numbers 201 to 279, all linked in order. (Each extent contains eight pages.) If a page needs to be placed into the middle of the logical order, SQL Server finds an extent with a free page or allocates a new extent for the index. The page is logically linked into the correct position, but it could be located anywhere within the database pages.
Nonclustered Index
A nonclustered index is a type of index that does not affect the layout of the data in the table in the way that a clustered index does.
If the underlying table is a heap (that is, it has no clustered index), the leaf level of a nonclustered index contains pointers to where the data rows are stored. The pointers include a file number, a page number, and a slot number on the page.
If the underlying table has a clustered index (that is, the pages and the data are logically linked in the order of a clustering key), the leaf level of a nonclustered index contains the clustering key that is then used to seek through the pages of the clustered index to locate the desired rows.
Index Fragmentation
Key Points
Index fragmentation is the inefficient use of pages within an index. Fragmentation occurs over time as data is modified.
Index Fragmentation
For operations that read data, indexes perform best when each page of the index is as full as possible. While indexes may initially start full (or relatively full), modifications to the data in the indexes can cause the need to split index pages. Adding a new index entry to the end of an index is easy, but the process is more complicated if the entry needs to be made in the middle of an existing full index page.
Internal vs. External Fragmentation
Internal fragmentation occurs when pages are not holding as much data as they are capable of holding. This often occurs when a page is split during an insert operation and can also occur when an update operation causes a row to be moved to another page. In either situation, empty space is left within pages.
External fragmentation occurs when pages that are logically sequenced are not held in sequenced page numbers. If a new index page needs to be allocated, it would be logically inserted into the correct location in the list of pages but could well be placed at the end of the index. That means that a process that needs to read the index pages in order would need to follow pointers to locate the pages, and the process would involve accessing pages that are not sequential within the database.
Detecting Fragmentation
SQL Server provides a useful measure in the avg_fragmentation_in_percent column of the sys.dm_db_index_physical_stats dynamic management function.
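As a sketch, a query along the following lines reports fragmented indexes in the current database. The 30% threshold and the page-count filter are illustrative values, not fixed rules:

```sql
-- List indexes with significant fragmentation in the current database.
-- 'LIMITED' is the fastest scan mode of sys.dm_db_index_physical_stats.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 100   -- very small indexes rarely benefit from defragmentation
ORDER BY ips.avg_fragmentation_in_percent DESC;
```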
SQL Server Management Studio also provides details of index fragmentation in the properties page for each index, as shown in the following screenshot from the AdventureWorks database:
Question: Why does fragmentation affect performance?
FILLFACTOR and PAD_INDEX
Key Points
The FILLFACTOR and PAD_INDEX options are used to provide free space within index pages. This can improve INSERT and UPDATE performance in some situations, but often to the detriment of SELECT operations.
FILLFACTOR and PAD_INDEX
The availability of free space in an index page can have a significant effect on the performance of index update operations. If an index record must be inserted and there is no free space, a new index page must be created and the contents of the old page split across the two pages. This can affect performance if it happens too frequently.
The performance impact of page splits can be alleviated by leaving empty space on each page when creating an index, including a clustered index. This is achieved by specifying a FILLFACTOR value. FILLFACTOR defaults to 0, which means "fill 100%". Any other value (including 100) is taken as the percentage of how full each page should be. For the example in the slide, this means 70% full and 30% free space on each page.
Note The difference between the values 0 and 100 can seem confusing. While both values lead to the same outcome, 100 indicates that a specific FILLFACTOR value has been requested. The value 0 indicates that no FILLFACTOR has been specified.
FILLFACTOR only applies to leaf level pages in an index. PAD_INDEX is an option that, when enabled, causes the same free space to be allocated in the non-leaf levels of the index.
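As a sketch, both options are specified when creating (or rebuilding) an index; the table and index names here are hypothetical:

```sql
-- FILLFACTOR = 70 leaves 30% free space in each leaf page;
-- PAD_INDEX = ON applies the same fill factor to the non-leaf levels too.
CREATE NONCLUSTERED INDEX IX_Customer_LastName
ON dbo.Customer (LastName)
WITH (FILLFACTOR = 70, PAD_INDEX = ON);
```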
Question: While you could avoid many page splits by setting a FILLFACTOR of 50, what would be the downside of doing this?
Question: When would a FILLFACTOR of 100 be useful?
Question: What is the significance of applying a FILLFACTOR on a clustered index versus a non-clustered index?
Ongoing Maintenance of Indexes
Key Points
As indexes are updated during data modifications, over time the indexes can become fragmented. SQL Server provides two options for removing fragmentation from clustered and nonclustered indexes:
• Rebuild
• Reorganize
Rebuild
Rebuilding an index drops and re-creates the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When the option ALL is specified, all indexes on the table are dropped and rebuilt in a single operation. If any part of the operation fails, the entire operation is rolled back.
Because rebuilds are performed as single operations and are logged, a single rebuild operation can use a large amount of space in the transaction log. It is possible to perform the rebuild operation as a minimally-logged operation when the database is in the BULK_LOGGED or SIMPLE recovery model. A minimally-logged rebuild operation uses much less space in the transaction log and completes faster. Free space needs to be available when rebuilding indexes.
Reorganize
Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered indexes on tables by physically reordering the leaf-level pages to match the logical, left-to-right order of the leaf nodes. Reorganizing an index also compacts the index pages. The compaction is based on the existing fill factor value. It is possible to interrupt a reorganize operation without losing the work performed so far. For example, this means that on a large index, partial reorganization could be performed each day.
For heavily fragmented indexes (> 30%), rebuilding is usually the most appropriate option to use.
SQL Server maintenance plans include options to rebuild or reorganize indexes. If maintenance plans are not used, it is important to build a job that performs defragmentation of the indexes in your databases.
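Both options are exposed through the ALTER INDEX statement; the index and table names below are hypothetical:

```sql
-- REORGANIZE: lightweight, always online, and safe to interrupt.
ALTER INDEX IX_Customer_LastName ON dbo.Customer REORGANIZE;

-- REBUILD: drops and re-creates the index; more thorough but heavier.
ALTER INDEX IX_Customer_LastName ON dbo.Customer REBUILD;

-- ALL rebuilds every index on the table in a single operation.
ALTER INDEX ALL ON dbo.Customer REBUILD;
```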
Question: What is typically the best option to defragment big indexes and tables (clustered indexes)?
Online Index Operations
Key Points
For most organizations, the primary reason for purchasing the Enterprise edition of SQL Server is that it can perform index operations online, while users are accessing the database. This is very important because many organizations have no available maintenance time windows during which to perform database maintenance operations such as index rebuilds.
When performing an online index rebuild operation, SQL Server creates a temporary mapping index that tracks data changes that occur while the index rebuild operation is in progress. For consistency, SQL Server takes a very brief shared (S) lock on the object at the beginning of the operation and a very brief schema modification (SCH-M) lock at the end. During the online rebuild operation, schema locks are held to prevent metadata changes. This means that users cannot change the structure of the table using commands such as ALTER TABLE while the online index rebuild operation is in progress.
Because of the extra work that needs to be performed, online index rebuild operations are typically slower than their offline counterparts.
Note Some indexes cannot be rebuilt online, including clustered indexes with large object data or nonclustered indexes that include large object data.
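An online rebuild is requested with the ONLINE option of ALTER INDEX; the index and table names below are hypothetical:

```sql
-- ONLINE = ON requires an edition that supports online index operations
-- (Enterprise edition in SQL Server 2012).
ALTER INDEX IX_Customer_LastName ON dbo.Customer
REBUILD WITH (ONLINE = ON);
```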
Question: When would online index operations be most important?
Updating Statistics
Key Points
One of the main tasks performed by SQL Server when optimizing the queries that it needs to execute is deciding which indexes to use. SQL Server makes decisions about which indexes to use based upon statistics that it keeps about the distribution of the data in the index.
Statistics are mostly updated automatically by SQL Server, and AUTO_UPDATE_STATISTICS is enabled in all databases by default. It is recommended that you do not disable this option.
Alternatives to Auto-updating Statistics
For large tables, the AUTO_UPDATE_STATISTICS_ASYNC option instructs SQL Server to update statistics asynchronously instead of delaying query execution where it would otherwise have updated an outdated statistic that it required for query compilation.
Statistics can also be updated on demand. Executing the command UPDATE STATISTICS against a table causes all statistics on the table to be updated.
The system stored procedure sp_updatestats can be used to update all statistics in a database.
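For example (the table name here is hypothetical):

```sql
-- Update all statistics on one table:
UPDATE STATISTICS dbo.Customer;

-- Update all statistics in the current database:
EXEC sp_updatestats;
```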
Question: Why might you decide to update statistics out of hours instead of automatically?
Demonstration 2A: Maintaining Indexes
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_16_PRJ\10775A_16_PRJ.ssmssln, and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
• Open and execute the 11 – Demonstration 1A.sql script file.
2. Open the 21 – Demonstration 2A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lesson 3
Automating Routine Database Maintenance
You have now seen how to manually perform some of the common database maintenance tasks that need to be executed on a regular basis. SQL Server provides a Database Maintenance Plan Wizard that can be used to create SQL Server Agent jobs that perform the most common database maintenance tasks. While the Database Maintenance Plan Wizard makes this process easy to set up, it is important to realize that you could use the output of the wizard as a starting point for creating your own maintenance plans, or you could create plans from scratch.
Objectives
After completing this lesson, you will be able to:
• Configure SQL Server database maintenance plans.
• Monitor database maintenance plans.
Overview of SQL Server Database Maintenance Plans
Key Points
The SQL Server Maintenance Plan Wizard creates SQL Server Agent jobs that perform routine database maintenance tasks and schedules those jobs to ensure that your database is regularly backed up, performs well, and is checked for inconsistencies. The wizard creates SQL Server Integration Services packages that are executed by SQL Server Agent jobs.
You can schedule many maintenance tasks to run automatically, including:
• Backing up the database and transaction log files. Database and log backups can be retained for a specified period and then automatically deleted.
• Running SQL Server Agent jobs that perform a variety of actions.
• Compacting data files by removing empty database pages.
• Performing internal consistency checks of the data and data pages within the database to make sure that a system or software problem has not damaged data.
• Reorganizing the information on the data pages and index pages by rebuilding indexes.
• Updating index statistics to make sure the query optimizer has up-to-date information about the distribution of data values in the tables.
Note Maintenance plans can be created using one schedule for all tasks or with individual schedules for every selected task.
Question: What types of maintenance tasks should be automated?
Monitoring Database Maintenance Plans
Key Points
SQL Server database maintenance plans are implemented using SQL Server Agent jobs that run SQL Server Integration Services (SSIS) packages. Because they use SQL Server Agent jobs, the maintenance plans can be monitored using the standard Job Activity Monitor in SSMS. As with other SQL Server Agent jobs, job history is written, but maintenance plans record additional information.
Results from Maintenance Plans
The results generated by the maintenance tasks are written to the maintenance plan tables dbo.sysmaintplan_log and dbo.sysmaintplan_logdetail in the msdb database. The entries in these tables can be viewed by querying those tables directly using T-SQL or by using the Log File Viewer.
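As a sketch, recent maintenance plan executions could be listed with a query such as the following; the column list is indicative, so verify the names against the msdb schema on your instance:

```sql
-- Most recent maintenance plan executions recorded in msdb.
-- Column names are indicative; verify against your msdb schema.
SELECT TOP (20) start_time, end_time, succeeded
FROM msdb.dbo.sysmaintplan_log
ORDER BY start_time DESC;
```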
In addition, text reports can be written to the file system and can also be sent automatically to operators that have been defined in SQL Server Agent.
Note that the cleanup tasks that are part of the maintenance plans are used to implement a retention policy for backup files, job history, maintenance plan report files, and msdb database table entries.
Question: Are maintenance plan history records cleaned up automatically?
Demonstration 3A: Configuring a Database Maintenance Plan
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio. In the Connect to Server window, type Proseware and click Connect. From the File menu, click Open, click Project/Solution, navigate to D:\10775A_Labs\10775A_16_PRJ\10775A_16_PRJ.ssmssln, and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from within Solution Explorer.
• Open and execute the 11 – Demonstration 1A.sql script file.
2. Open the 31 – Demonstration 3A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lab 16: Performing Ongoing Database Maintenance
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_16_PRJ\10775A_16_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
There has been a disk failure in the I/O subsystem. The disk has been replaced but you want to check the consistency of your existing databases. You will execute DBCC CHECKDB to verify the logical and physical integrity of all databases on the Proseware instance.
You have identified fragmentation in a number of tables in the MarketDev database and you are sure that performance is decreasing as the amount of fragmentation increases. You will rebuild the indexes for any of the main database tables that are heavily fragmented.
You have also identified a degradation of performance in the application when proper index maintenance has not been performed. You want to ensure that there is an early detection of any consistency issues in the MarketDev database and that the index maintenance is automatically executed on a scheduled basis. To make sure this regular maintenance occurs, you will create a Database Maintenance plan to schedule these operations on a weekly basis.
While DBCC CHECKDB runs quite quickly, you are interested in the performance difference that might be achieved by using table locks instead of database snapshots during DBCC CHECKDB operations. If you have time, you will investigate the performance differences.
Supporting Documentation
Database Maintenance Plan Requirements
Plan Name: Proseware Weekly Maintenance
Schedule: Once per week for all tasks, at 6PM Sunday night
Tasks required:
• Check Database Integrity for all databases on the Proseware server instance
• Rebuild indexes in the MarketDev database
Notes:
• The database integrity checks should include indexes
• When indexes in the MarketDev database are rebuilt, pages in the indexes should be 90% full
• As Proseware uses an Enterprise Edition license, online index rebuilds are supported and should be used
• Reports should be written to the folder L:\MKTG
Exercise 1: Check Database Integrity Using DBCC CHECKDB
Scenario
There has been a disk failure in the I/O subsystem. The disk has been replaced but you want to check the consistency of your existing databases. You will execute DBCC CHECKDB to verify the logical and physical integrity of all databases on the Proseware instance.
The main tasks for this exercise are as follows:
1. Check the consistency of the databases on the Proseware instance.
2. Correct any issues found.
Task 1: Check the consistency of the databases on the Proseware instance
• Execute DBCC CHECKDB against all databases on the Proseware server instance. Note any databases that have errors.
Task 2: Correct any issues found
• For any databases with errors, use the DBCC option to repair while allowing data loss. (Note that this is an extreme action that should only be undertaken in emergency situations where no backups are available to be restored.)
Results: After this exercise, you should have used the DBCC CHECKDB command to check consistency on all databases on the Proseware instance and corrected any issues that were found.
Exercise 2: Correct Index Fragmentation
Scenario
You have identified fragmentation in a number of tables in the MarketDev database and you are sure that performance is decreasing as the amount of fragmentation increases. You will rebuild the indexes for any of the main database tables that are heavily fragmented.
The main tasks for this exercise are as follows:
1. Review the fragmentation of indexes in the MarketDev database to determine which indexes should be defragmented and which indexes should be rebuilt.
2. Defragment indexes as determined.
3. Rebuild indexes as determined.
Task 1: Review the fragmentation of indexes in the MarketDev database to determine which indexes should be defragmented and which indexes should be rebuilt
• Write a query using the sys.dm_db_index_physical_stats function to locate indexes that have more than 30% fragmentation.
Task 2: Defragment indexes as determined
• Write a query to defragment the indexes that you determined had fragmentation levels above 30% but below 70%.
Task 3: Rebuild indexes as determined
• Write a query to rebuild the indexes that you determined had fragmentation levels above 70%.
Results: After this exercise, you should have rebuilt or defragmented any indexes with substantial fragmentation.
Exercise 3: Create a Database Maintenance Plan
Scenario
You have also identified a degradation of performance in the application when proper index maintenance has not been performed. You want to ensure that there is an early detection of any consistency issues in the MarketDev database and that the index maintenance is automatically executed on a scheduled basis. To make sure this regular maintenance occurs, you will create a Database Maintenance plan to schedule these operations on a weekly basis.
The main task for this exercise is as follows:
1. Create the required database maintenance plan.
Task 1: Create the required database maintenance plan
• Review the requirements for the exercise in the supporting documentation.
• Create a database maintenance plan that meets the requirements.
Results: After this exercise, you should have created the required database maintenance plan.
Challenge Exercise 4: Investigate Table Lock Performance (Only if time permits)
Scenario
While DBCC CHECKDB runs quite quickly, you are interested in the performance difference that might be achieved by using table locks instead of database snapshots during DBCC CHECKDB operations. If you have time, you will investigate the performance differences.
The main tasks for this exercise are as follows:
1. Execute DBCC CHECKDB using database snapshots.
2. Execute DBCC CHECKDB using table locks.
Task 1: Execute DBCC CHECKDB using database snapshots
• Execute DBCC CHECKDB against all databases in the Proseware server instance using database snapshots (the default option).
• Record the total execution time.
Task 2: Execute DBCC CHECKDB using table locks
• Execute DBCC CHECKDB against all databases in the Proseware server instance using table locks (TABLOCK option).
• Record the total execution time.
• Compare the execution time to the time recorded in Task 1.
Results: After this exercise, you should have compared the performance of DBCC CHECKDB when using database snapshots and table locks.
Module Review and Takeaways
Review Questions
1. What regular tasks should be implemented for read-only databases?
2. What option should you consider using when running DBCC CHECKDB against large production databases?
Best Practices
1. Run DBCC CHECKDB regularly.
2. Synchronize DBCC CHECKDB with your backup strategy.
3. Consider RESTORE before repairing if corruption occurs.
4. Defragment your indexes when necessary.
5. Update statistics on a schedule if you do not want updates to occur during normal operations.
6. Use maintenance plans to implement regular tasks.
Module 17
Tracing Access to SQL Server 2012
Contents:
Lesson 1: Capturing Activity Using SQL Server Profiler and Extended Events Profiler 17-3
Lesson 2: Improving Performance with the Database Engine Tuning Advisor 17-17
Lesson 3: Working with Tracing Options 17-25
Lab 17: Tracing Access to SQL Server 2012 17-36
17-2 Tracing Access to <strong>SQL</strong> Server <strong>2012</strong><br />
Module Overview<br />
<strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> performs well with many types of <strong>SQL</strong> workloads but the performance of most<br />
<strong>SQL</strong> Server systems can be improved by a process of tuning queries and database structures. For most<br />
large organizations, performance tuning is not a task that is performed once and completed. Performance<br />
tuning typically involves a process of continuous incremental improvements.<br />
It is important to spend most of your performance tuning efforts where the effort will provide the greatest<br />
benefit. Developers will often guess how their applications will be used and try to optimize the<br />
applications based on those guesses. The ability to trace activity against a <strong>SQL</strong> Server instance removes the<br />
guesswork from this process and allows you to focus your efforts on the areas of the applications that<br />
users are actually using.<br />
<strong>SQL</strong> Server Profiler and Extended Events Profiler are used to capture traces of activity against <strong>SQL</strong> Server.<br />
The <strong>Database</strong> Engine Tuning Advisor can be used to analyze the traces captured by <strong>SQL</strong> Server Profiler<br />
and to suggest improvements that can be made, particularly in relation to indexes and statistics.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Capture activity using <strong>SQL</strong> Server Profiler and Extended Events Profiler.<br />
• Improve performance with the <strong>Database</strong> Engine Tuning Advisor.<br />
• Work with tracing options.
<strong>Administering</strong> Microsoft <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s 17-3<br />
Lesson 1<br />
Capturing Activity Using <strong>SQL</strong> Server Profiler and Extended<br />
Events Profiler<br />
<strong>SQL</strong> Server Profiler and Extended Events Profiler provide you with the ability to trace the activity that is<br />
occurring in the database engine within <strong>SQL</strong> Server. <strong>SQL</strong> Server Profiler can also trace activity against<br />
Analysis Services. (The tracing of Analysis Services activity is out of scope for this training).<br />
Traces can be used for performance tuning, for troubleshooting and diagnostic purposes, and for<br />
replaying workloads. The ability to replay a workload allows you to test the impact of performance<br />
changes against test systems or to test application workloads against newer versions of <strong>SQL</strong> Server.<br />
It is important to learn to configure these tools, to avoid creating excessive impacts from the tracing<br />
process itself.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Use <strong>SQL</strong> Server Profiler.<br />
• Describe available tracing output options.<br />
• Detail commonly used trace events.<br />
• Detail commonly used trace columns.<br />
• Filter traces.<br />
• Work with trace templates.<br />
• Use Extended Events Profiler.
Overview of <strong>SQL</strong> Server Profiler<br />
Key Points<br />
<strong>SQL</strong> Server Profiler is an important tool when tuning the performance of <strong>SQL</strong> Server queries. It captures<br />
the activity from client applications to <strong>SQL</strong> Server and stores it in a trace. These traces can then be<br />
analyzed.<br />
<strong>SQL</strong> Server Profiler<br />
<strong>SQL</strong> Server Profiler captures data when events occur. Only events that have been selected are captured. A<br />
variety of information (shown as a set of columns) is available when each event occurs. The trace created<br />
contains only the selected columns for the selected events.<br />
Rather than needing to select events and columns each time you run <strong>SQL</strong> Server Profiler, a set of existing<br />
templates are available. You can also save your own selections as a new template.<br />
The captured traces are useful when tuning the performance of an application and when diagnosing<br />
specific problems that are occurring. When using traces for diagnosing problems, log data from the<br />
Windows Performance Monitor tool can be loaded. This allows you to correlate system resource<br />
usage with the execution of queries in <strong>SQL</strong> Server.<br />
The traces can also be replayed. The ability to replay traces is useful for load testing systems or for<br />
ensuring that upgraded versions of <strong>SQL</strong> Server can be used with existing applications. <strong>SQL</strong> Server Profiler<br />
also allows you to step through queries when diagnosing problems.
<strong>SQL</strong> Trace<br />
<strong>SQL</strong> Server Profiler is a graphical tool and it is important to realize that it can have significant performance<br />
impacts on the server being traced, depending upon the options chosen. <strong>SQL</strong> Trace is a library of system<br />
stored procedures that can be used for tracing when minimizing the performance impacts of the tracing is<br />
necessary. Internally, <strong>SQL</strong> Server Profiler uses the programming interface that has been provided by <strong>SQL</strong><br />
Trace.<br />
Note The Extended Events system that was introduced in <strong>SQL</strong> Server 2008 also provides<br />
capabilities for tracing <strong>SQL</strong> Server activity and resources. The use of Extended Events for<br />
tracing activity is discussed later in this lesson. The use of Extended Events for other<br />
monitoring purposes is outside the scope of this course.<br />
Question: Where would the ability to replay a trace be useful?
Available Tracing Output Options<br />
Key Points<br />
When a <strong>SQL</strong> Server Profiler trace is active, the captured events are loaded into a graphical grid within the<br />
<strong>SQL</strong> Server Profiler user interface.<br />
In addition, <strong>SQL</strong> Server Profiler can send the captured event details to either operating system files or to<br />
database tables.<br />
Capture to Files<br />
Capturing to an operating system file is the most efficient option for <strong>SQL</strong> Server Profiler traces. When<br />
configuring file output, you need to supply a filename for the trace. The default file type for a trace file is<br />
".trc". <strong>SQL</strong> Server Profiler defaults to a maximum file size of 5 MB, which is far too small for most traces. A more<br />
realistic value on most large systems would be either 500 MB or 5,000 MB, depending upon the volume of<br />
activity that needs to be recorded.<br />
When the allocated file size is exhausted, <strong>SQL</strong> Server Profiler will open a new file based on a variation of<br />
the current file name and start writing to it, if the "Enable file rollover" option has been checked. It is<br />
considered good practice to work with a large maximum file size and to avoid the need for rollover files,<br />
unless there is a need to move the captured traces onto media such as DVDs or onto download sites that<br />
cannot work with larger files.<br />
Selecting the "Server processes trace data" option causes the <strong>SQL</strong> Server service to write the output file<br />
instead of the client system that is running <strong>SQL</strong> Server Profiler. Using this option ensures that no events are<br />
skipped, even when the server is under stress, but it can affect server performance and requires that the <strong>SQL</strong> Server service account has access to the target file path.
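The file-output options described above map directly to the parameters of the underlying <strong>SQL</strong> Trace procedures. The following is a minimal sketch; the file path is an assumption, and <strong>SQL</strong> Server appends the ".trc" extension automatically:

```sql
-- Create a server-side trace that writes to C:\Traces\ProsewareTrace.trc
-- with a 500 MB maximum file size and file rollover enabled (option 2).
-- The path is hypothetical; the SQL Server service account must be able
-- to write to it.
DECLARE @TraceID int;

EXEC sp_trace_create
    @traceid     = @TraceID OUTPUT,
    @options     = 2,                          -- TRACE_FILE_ROLLOVER
    @tracefile   = N'C:\Traces\ProsewareTrace',
    @maxfilesize = 500;                        -- in MB; the default is only 5

SELECT @TraceID AS TraceID;  -- keep this ID to add events and start the trace
```

The returned trace ID is then passed to sp_trace_setevent, sp_trace_setfilter, and sp_trace_setstatus to complete the trace definition.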
Capturing to Tables<br />
<strong>SQL</strong> Server Profiler can also capture trace data to database tables. The underlying <strong>SQL</strong> Trace<br />
programming interface does not directly support output to tables. This means that the <strong>SQL</strong> Server Profiler<br />
program needs to retrieve the event data into its graphical grid and as batches of rows are received, <strong>SQL</strong><br />
Server Profiler then writes those rows to the selected database table.<br />
Note It is very important to avoid writing trace data directly back to the <strong>SQL</strong> Server<br />
system that is being monitored, unless a very low level of activity is expected.<br />
<strong>SQL</strong> Server Profiler also provides an option for saving existing captured event data that is being displayed<br />
in the graphical grid into a database table.<br />
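A common alternative to tracing directly into a table is to capture to a file and import the file afterwards using the fn_trace_gettable function. A brief sketch, assuming a hypothetical trace file path and target table name:

```sql
-- Import a captured trace file (and any rollover files, via DEFAULT for the
-- number_files argument) into a table for analysis with T-SQL queries.
-- The path and table name are assumptions for illustration.
SELECT *
INTO dbo.ImportedTrace
FROM fn_trace_gettable(N'C:\Traces\ProsewareTrace.trc', DEFAULT);
```

This approach avoids placing table-insert load on the monitored server while the trace is running.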
Question: What might be an advantage of saving events in a <strong>SQL</strong> Server table?
Commonly Used Trace Events<br />
Key Points<br />
The information recorded in a trace is divided into categories. Categories contain events, each of which<br />
has attributes further defined by columns.<br />
Trace Categories<br />
In <strong>SQL</strong> Server Profiler, a category is a group of related event classes. Event classes consist of types of<br />
events that can be traced. The event class contains all the data columns that can be reported by an event.<br />
Events<br />
An event is defined as the occurrence of an action within an instance of the <strong>SQL</strong> Server <strong>Database</strong> Engine.<br />
Events are further defined by their attributes, which are listed in data columns.<br />
The most commonly traced events are as follows:<br />
Event Description<br />
<strong>SQL</strong>:BatchCompleted When a batch of T-<strong>SQL</strong> statements is completed, the<br />
<strong>SQL</strong>:BatchCompleted event is fired. Note that there is also an event<br />
raised when the batch is first started but the completed event<br />
contains more useful information such as details of the resources<br />
used during the execution of the batch.<br />
<strong>SQL</strong>:StmtCompleted If tracing at the <strong>SQL</strong> batch level is too coarse, it is possible to<br />
retrieve details of each individual statement that is contained<br />
within the batch.
RPC:Completed The RPC:Completed event is fired when a stored procedure finishes<br />
execution. There is a traceable event when the stored procedure<br />
starts but similar to the <strong>SQL</strong>:BatchCompleted event, the<br />
RPC:Completed event is useful as it contains details of the resources<br />
used during the execution of the stored procedure. You can see a<br />
statement by statement breakdown of resources used within the<br />
stored procedure via the SP:StmtCompleted event.<br />
Audit Login / Audit<br />
Logout<br />
You can include in your traces details of each login and logout<br />
event that occurs during the tracing activity.<br />
Deadlock Graph Unhandled deadlocks often lead to errors being passed to end users<br />
from applications. If your system is suffering from deadlocks, the<br />
Deadlock Graph event fires when deadlocks occur and captures<br />
details of what caused the deadlock. The details are captured into<br />
an XML document that can be viewed graphically within <strong>SQL</strong> Server<br />
Profiler.<br />
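Events are added to a server-side trace one event/column pair at a time with sp_trace_setevent, using the numeric IDs documented for that procedure. A minimal sketch, assuming the trace ID returned earlier by sp_trace_create:

```sql
-- Enable commonly used events and columns on an existing trace.
-- Event 12 = SQL:BatchCompleted, event 10 = RPC:Completed;
-- column 1 = TextData, 13 = Duration, 18 = CPU.
DECLARE @TraceID int = 1,   -- assumed: the ID returned by sp_trace_create
        @on bit = 1;

EXEC sp_trace_setevent @TraceID, 12, 1,  @on;   -- SQL:BatchCompleted, TextData
EXEC sp_trace_setevent @TraceID, 12, 13, @on;   -- SQL:BatchCompleted, Duration
EXEC sp_trace_setevent @TraceID, 12, 18, @on;   -- SQL:BatchCompleted, CPU
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;   -- RPC:Completed, TextData
EXEC sp_trace_setevent @TraceID, 10, 13, @on;   -- RPC:Completed, Duration
```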
Question: Why would events that are raised on completion of batches or statements often<br />
be more interesting than events that are raised when batches or statements start?
Commonly Used Trace Columns<br />
Key Points<br />
Data columns contain the attributes of events. <strong>SQL</strong> Server Profiler uses data columns in the trace output<br />
to describe events that are captured when the trace runs.<br />
<strong>SQL</strong> Server Profiler has a large set of potential columns but not every event writes values to all the<br />
possible columns. For example, in the <strong>SQL</strong>:BatchStarting event, the columns Reads, Writes, Duration, and<br />
CPU are not available because the values are not available at the time of the event. Those columns are<br />
available in the <strong>SQL</strong>:BatchCompleted event.<br />
Note The output in the <strong>SQL</strong> Server Profiler graphical grid can be grouped based<br />
on column values.<br />
You should minimize the number of columns that you capture when events occur, to help to minimize the<br />
overall size of the trace that is captured.<br />
You can also organize columns into related groups by using the Organize Columns function.
Useful Columns Often Omitted<br />
One of the more interesting columns is the TextData column. Many events do not include it by default yet<br />
the values contained in it are very useful. For example, in the RPC:Completed event, the TextData column<br />
contains the T-<strong>SQL</strong> statement that was used to execute the stored procedure.<br />
Two other useful columns are the database ID and the database name. You will often need to create a<br />
filter based on the database that you want to trace activity against. Tracing by database ID is more<br />
efficient than tracing by database name. However, trace templates that filter by database ID are less<br />
portable than those that filter by database name because a database that is restored on another server<br />
will typically have a different database ID. You can also use wildcard values within the LIKE clause of a<br />
database name filter.<br />
Question: What information would the TextData column return?
Filtering Traces<br />
Key Points<br />
Filters can be set for each of the columns that are captured in a trace. It is important to ensure that you<br />
are only capturing events of interest by limiting the events via filters. Effective use of filters helps to<br />
minimize the overall size of the captured trace, helps to avoid overwhelming the server with tracing<br />
activity, and minimizes the number of events that are contained in the trace, which reduces complexity<br />
during analysis. Smaller traces are also typically faster to analyze.<br />
Filters are only applied if an event writes a particular column. For example, if you set a filter for<br />
<strong>Database</strong>Name = AdventureWorks and define the Deadlock Graph event to be captured, all deadlock<br />
events will still be shown, because the <strong>Database</strong>Name column is not exposed by the Deadlock Graph event.<br />
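In a server-side trace, filters are set with sp_trace_setfilter using numeric column, logical-operator, and comparison-operator IDs. A hedged sketch that keeps only slow batches in one database (the trace ID and database name are assumptions):

```sql
-- Filter an existing trace: DatabaseName (column 35) LIKE 'AdventureWorks'
-- AND Duration (column 13) >= 10 seconds.
-- Logical operator 0 = AND; comparison 6 = LIKE, 4 = greater than or equal.
-- Note that Duration is expressed in microseconds at the SQL Trace level.
DECLARE @TraceID int = 1;                 -- assumed trace ID
DECLARE @TenSeconds bigint = 10000000;    -- 10 seconds in microseconds

EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'AdventureWorks';
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @TenSeconds;
```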
Question: What filter would you use to locate long running batches or statements?
Working with Trace Templates<br />
Key Points<br />
You can use <strong>SQL</strong> Server Profiler to create templates that define the event classes and data columns to<br />
include in traces. After you define and save the template, you can run a trace that records the data for<br />
each event class you selected.<br />
Templates can be generated within <strong>SQL</strong> Server Profiler by creating a trace using the graphical interface,<br />
starting and stopping the trace at least once and then using File > Save as > Trace Template.<br />
<strong>SQL</strong> Server Profiler offers predefined trace templates that allow you to easily configure the event classes<br />
that you need for specific types of traces. The Standard template, for example, helps you to create a<br />
generic trace for recording logins, logouts, batches completed, and connection information. You can use<br />
this template to run traces without modification or as a starting point for additional templates with<br />
different event configurations.<br />
Question: How do <strong>SQL</strong> Server Profiler templates help you trace activity?
Demonstration 1A: Capturing Activity Using <strong>SQL</strong> Server Profiler<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_17_PRJ\10775A_17_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.<br />
Question: Why is there only one Batch Starting event and one Batch Completed event for<br />
the workload we ran?
Working with Extended Events Profiler<br />
Key Points<br />
<strong>SQL</strong> Server <strong>2012</strong> introduces a new tool for managing sessions that capture events as they occur, based<br />
upon the Extended Events infrastructure that was introduced in <strong>SQL</strong> Server 2008. It is the same<br />
technology that underpins <strong>SQL</strong> Server Audit.<br />
Extended Events Profiler is integrated directly into <strong>SQL</strong> Server Management Studio and can be used to<br />
capture activity in the <strong>SQL</strong> Server database engine in much the same way that you can use <strong>SQL</strong> Server<br />
Profiler.<br />
Over time, Extended Events Profiler will replace <strong>SQL</strong> Server Profiler as the tool of choice for tracing activity<br />
in <strong>SQL</strong> Server. In this version, the two tools have intersecting feature sets and both will likely continue to<br />
be used.
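The sessions managed by Extended Events Profiler can also be created directly in T-SQL. The following is a minimal sketch of a session roughly equivalent to a Profiler trace of completed batches; the session name, file path, and duration threshold are assumptions:

```sql
-- Create and start an Extended Events session that captures completed
-- batches running longer than one second (duration is in microseconds)
-- to an event file.
CREATE EVENT SESSION TraceActivity ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_name)
     WHERE duration > 1000000)
ADD TARGET package0.event_file
    (SET filename = N'C:\Traces\TraceActivity.xel');
GO
ALTER EVENT SESSION TraceActivity ON SERVER STATE = START;
```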
Demonstration 1B: Capturing Activity Using Extended Events Profiler<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_17_PRJ\10775A_17_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 12 – Demonstration 1B.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lesson 2<br />
Improving Performance with the <strong>Database</strong> Engine Tuning<br />
Advisor<br />
Tuning the performance of a <strong>SQL</strong> Server system is one of the most important tasks undertaken by<br />
database administrators. As mentioned earlier in this module, performance tuning tends to be an ongoing<br />
process, not a one-time task.<br />
It is important to follow a standard methodology for performance tuning. <strong>SQL</strong> Server includes many tools<br />
that can help with performance tuning, including the <strong>Database</strong> Engine Tuning Advisor (DETA). It is<br />
important to know how to configure and use DETA.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Explain performance tuning.<br />
• Describe the available options for performance tuning.<br />
• Use <strong>Database</strong> Engine Tuning Advisor.<br />
• Describe the available DETA tuning options.
Overview of Performance Tuning<br />
Key Points<br />
Performance tuning is an iterative process of incremental improvement made on an ongoing basis.<br />
First of all, it is important to define an initial goal based on any problems that might be apparent. "It<br />
needs to be faster" is not usually a sufficiently well-defined goal. Based on the chosen goal, you then need to<br />
select the appropriate tools to monitor the system and provide the metrics that will help you reach the<br />
goal. <strong>SQL</strong> Server provides a number of tools to help with this phase. (An overview of the available tools<br />
will be provided in the next topic).<br />
In the chosen tools, you need to identify the metrics that you will use, from the list of available metrics. A<br />
baseline needs to be created via monitoring and a decision made about the changes to implement.<br />
Based on the results, a strategy is then developed to overcome the issues that have been detected. The<br />
strategy is then implemented, the results are monitored and the process begins again.<br />
Work On Causes Rather Than Symptoms<br />
A key concept when performance tuning is that you need to try to find the underlying causes of<br />
problems, rather than constantly battling against the symptoms. For example, a client might ask you to<br />
help resolve a blocking issue. It is often the case that excessive blocking occurs because of long running<br />
queries. By fixing the queries and indexing so that the queries run quickly, the blocking issues often<br />
disappear.<br />
As another common example, systems often appear to have I/O performance problems yet in many cases,<br />
this could be caused by the lack of available memory in the system.<br />
Question: When would you know that performance tuning is complete?
Available Options for Performance Tuning<br />
Key Points<br />
<strong>SQL</strong> Server provides a number of tools that can be used to perform performance monitoring and tuning.<br />
Each of them is useful in certain scenarios and you will often need to combine several tools to achieve the<br />
desired outcome.<br />
Tool Description & Location<br />
<strong>Database</strong> Engine Tuning Advisor Tool provided with <strong>SQL</strong> Server for tuning indexes and<br />
statistics. DETA is discussed in this lesson.<br />
SSMS Core management tool provided with <strong>SQL</strong> Server.<br />
Aspects that are useful in performance tuning (activity<br />
monitor, standard and custom reports) are discussed in<br />
Module 18.<br />
Dynamic Management Objects <strong>Database</strong> objects that provide insight into internal <strong>SQL</strong><br />
Server operations. Many useful DMOs are discussed in<br />
Module 18.
<strong>SQL</strong> Server Data Collection Automated system for collecting and storing<br />
performance data along with a set of standard reports.<br />
Data collection is discussed in Module 18.<br />
<strong>SQL</strong> Server Profiler Tracing and profiling tool that was discussed in the first<br />
lesson in this module.<br />
<strong>SQL</strong> Trace Programming interface for tracing <strong>SQL</strong> Server access<br />
that is discussed in the last lesson in this module.<br />
<strong>SQL</strong> Server Extended Events Light-weight eventing architecture. You have seen how<br />
to use Extended Events Profiler to capture server activity.<br />
Working directly with Extended Events (not via a GUI<br />
interface) is out of scope for this training.<br />
Distributed Relay <strong>Advanced</strong> tool for replaying workloads across a<br />
potentially distributed set of servers. Distributed Relay is<br />
out of scope for this training.<br />
Reliability and Performance<br />
Monitor<br />
Standard Windows performance tool that is discussed in<br />
Module 18.<br />
Question: Which tools might be useful for real time monitoring?
Introduction to the <strong>Database</strong> Engine Tuning Advisor<br />
Key Points<br />
The <strong>Database</strong> Engine Tuning Advisor utility analyzes the performance effects of workloads run against one<br />
or more databases. Typically, these workloads are obtained from traces captured by <strong>SQL</strong> Server Profiler or<br />
the <strong>SQL</strong> Trace facility. (<strong>SQL</strong> Trace is discussed in the next lesson.) After analyzing the effects of a<br />
workload on your databases, <strong>Database</strong> Engine Tuning Advisor provides recommendations for improving<br />
the performance of your system.<br />
<strong>Database</strong> Engine Tuning Advisor<br />
In <strong>SQL</strong> Server 2000 and earlier, a previous version of this tool was supplied. It was called the "Index Tuning<br />
Wizard". In <strong>SQL</strong> Server 2005, the name was changed as the tool evolved to provide a broader<br />
range of recommendations.<br />
<strong>Database</strong> Engine Tuning Advisor was further enhanced in <strong>SQL</strong> Server 2008 with improved workload<br />
parsing, integrated tuning, and the ability to tune multiple databases concurrently.<br />
Workloads<br />
A workload is a set of Transact-<strong>SQL</strong> statements that executes against databases that you want to tune. The<br />
workload source can be a file containing Transact-<strong>SQL</strong> statements, a trace file generated by <strong>SQL</strong> Profiler,<br />
or a table of trace information, again generated by <strong>SQL</strong> Profiler. <strong>SQL</strong> Server Management Studio also has<br />
the ability to launch <strong>Database</strong> Engine Tuning Advisor to analyze an individual statement.
Recommendations<br />
The recommendations that can be produced include suggested changes to the database such as new<br />
indexes, indexes that should be dropped, and depending on the tuning options you set, partitioning<br />
recommendations. The recommendations that are produced are provided as a set of Transact-<strong>SQL</strong><br />
statements that would implement the suggested changes. You can view the Transact-<strong>SQL</strong> and save it for<br />
later review and application, or you can choose to implement the recommended changes immediately.<br />
Be careful of applying changes to a database without detailed consideration, especially in production<br />
environments. Also, ensure that any analysis that you perform is based on appropriately sized workloads<br />
so that recommendations are not made based on partial information.<br />
Question: Why is it important to tune an entire workload rather than individual queries?
<strong>Database</strong> Engine Tuning Advisor Options<br />
Key Points<br />
DETA provides a rich set of configuration options that allow you to configure what analysis should be<br />
performed and how optimization recommendations should be made.<br />
Executing DETA on large workloads can take a long time, particularly on systems that also have large<br />
numbers of database objects. You can configure DETA to limit the time it will spend on analysis and to<br />
return the results that it has obtained up to the time limit.<br />
You can also configure which types of recommendations should be made, along with whether or not you<br />
wish to see recommendations that involve dropping existing objects.<br />
Exploratory Analysis<br />
<strong>Database</strong> administrators can also use <strong>Database</strong> Engine Tuning Advisor to perform exploratory analysis.<br />
Exploratory analysis involves a combination of manual tuning and tool-assisted tuning. To perform<br />
exploratory analysis with <strong>Database</strong> Engine Tuning Advisor, use the user-specified configuration feature.<br />
The user-specified configuration feature allows you to specify the tuning configurations for existing and<br />
hypothetical physical design structures, such as indexes, indexed views, and partitioning. The benefit of<br />
specifying hypothetical structures is that you can evaluate their effects on your databases without<br />
incurring the overhead of implementing them first.<br />
You can create an XML configuration file to specify a hypothetical configuration. The configuration can<br />
then be used for analysis. The analysis can be performed either in isolation or relative to the current<br />
configuration. This type of analysis can also be performed using a command line interface.<br />
Question: What is the disadvantage of limiting the analysis time?
Demonstration 2A: Using the <strong>Database</strong> Engine Tuning Advisor<br />
Demonstration Setup<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_17_PRJ\10775A_17_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 21 – Demonstration 2A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.<br />
Question: Should you immediately apply the recommendations made by DETA to your<br />
server?
Lesson 3<br />
Working with Tracing Options<br />
In many larger organizations, you will not be able to use the graphical <strong>SQL</strong> Server Profiler tool in<br />
production environments as it places too high a load on the systems being profiled.<br />
It is still possible, however, to use the <strong>SQL</strong> Trace programming interface that is based on system stored<br />
procedures, to create lightweight traces. It is important to know how to use these traces in place of <strong>SQL</strong><br />
Server Profiler, and how to use <strong>SQL</strong> Server Profiler to help you create <strong>SQL</strong> Trace scripts.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe <strong>SQL</strong> Trace.<br />
• Describe the differences between <strong>SQL</strong> Trace and <strong>SQL</strong> Server Profiler.<br />
• Retrieve trace output.<br />
• Replay traces.<br />
• Use the default trace.<br />
• Combine traces with Reliability and Performance Monitor logs.
Overview of <strong>SQL</strong> Trace<br />
Key Points<br />
<strong>SQL</strong> Trace is a feature of the database engine that is used to create and run traces. Traces are managed<br />
using system stored procedures. Internally, <strong>SQL</strong> Server Profiler makes calls to the <strong>SQL</strong> Trace facility in <strong>SQL</strong><br />
Server when <strong>SQL</strong> Server Profiler needs to create or manage traces.<br />
Traces run in the process of <strong>SQL</strong> Server database engine and can write events to a file or to an application<br />
using <strong>SQL</strong> Server Management Objects (SMO) objects. The information you learned about how events,<br />
columns, and filtering work in <strong>SQL</strong> Server Profiler is directly applicable to how the same objects work<br />
within <strong>SQL</strong> Trace.<br />
Implementing traces can at first appear difficult as you need to make many stored procedure calls to<br />
define and run a trace. However, the graphical interface in <strong>SQL</strong> Server Profiler can be used to create a<br />
trace and to then script the trace for use with <strong>SQL</strong> Trace. Very few changes typically need to be made to<br />
the <strong>SQL</strong> Trace script files that are created by <strong>SQL</strong> Server Profiler, such as the path to output files.
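The stored procedure calls that make up such a script follow a consistent pattern. The sketch below shows a minimal trace definition; the output path, database name, and the chosen events and columns are illustrative assumptions, not part of the course files:

```sql
-- Minimal SQL Trace definition (the output path and filter value are assumptions).
DECLARE @TraceID int, @maxfilesize bigint = 5, @on bit = 1;

-- Create the trace; SQL Server appends .trc to the file name.
EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\LightweightTrace', @maxfilesize;

-- Capture TextData (column 1), SPID (12), and Duration (13)
-- for the SQL:BatchCompleted event (event 12).
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;

-- Filter on DatabaseName (column 35): logical AND (0), comparison equals (0).
EXEC sp_trace_setfilter @TraceID, 35, 0, 0, N'MarketDev';

-- Start the trace (status 1); use 0 to stop it and 2 to close and delete it.
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;
```

Note the trace ID that is returned; it is needed later to stop and close the trace.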
<strong>SQL</strong> Trace vs. <strong>SQL</strong> Server Profiler<br />
Key Points<br />
It is important to understand the differences between <strong>SQL</strong> Trace and <strong>SQL</strong> Server Profiler and to<br />
understand where each tool should be used.<br />
• <strong>SQL</strong> Trace needs to be defined using a series of system stored procedure calls whereas <strong>SQL</strong> Server<br />
Profiler provides a graphical interface for configuration and for controlling the tracing activity.<br />
• SQL Trace runs directly inside the database engine, whereas SQL Server Profiler runs on a client system (or on the server) and communicates with the database engine by using the SQL Trace procedures.
• <strong>SQL</strong> Trace can write events to files or to applications based on SMO whereas <strong>SQL</strong> Server Profiler can<br />
write events to files or to database tables.<br />
• <strong>SQL</strong> Trace is useful for long running, performance-critical traces, or for very large traces that would<br />
significantly impact the performance of the target system whereas <strong>SQL</strong> Server Profiler is more<br />
commonly used for debugging on test systems, for performing short term analysis, or for capturing<br />
small traces.<br />
Note The option "Server processes trace data" in SQL Server Profiler is not the same as scripting a trace and starting it directly through stored procedures. The option creates two traces: one that writes directly to a file, and a second that sends the events through SMO to SQL Server Profiler.
Traces do not automatically restart after the server instance restarts. If a trace needs to be run constantly,<br />
the trace needs to be scripted, launched from within a stored procedure, and the stored procedure<br />
marked as a startup procedure.<br />
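The steps above can be sketched as follows; the procedure name, output path, and captured events are hypothetical:

```sql
USE master;
GO
-- Wrapper procedure that defines and starts the trace. The trace definition
-- itself would normally be scripted from SQL Server Profiler (abbreviated here).
CREATE PROCEDURE dbo.usp_StartAuditTrace
AS
BEGIN
    DECLARE @TraceID int, @maxfilesize bigint = 50, @on bit = 1;
    EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\AuditTrace', @maxfilesize;
    EXEC sp_trace_setevent @TraceID, 12, 1, @on;   -- SQL:BatchCompleted, TextData
    EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- SQL:BatchCompleted, Duration
    EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace
END;
GO
-- Mark the procedure as a startup procedure so the trace restarts with the instance.
EXEC sp_procoption N'dbo.usp_StartAuditTrace', 'startup', 'on';
```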
Question: Which option would have less impact on a traced system?<br />
a) <strong>SQL</strong> Trace with output to a file<br />
b) <strong>SQL</strong> Profiler with output to a table
Demonstration 3A: Configuring <strong>SQL</strong> Trace<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_17_PRJ\10775A_17_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Retrieving Trace Output<br />
Key Points<br />
Traces that are written to trace files are not easy to read directly and are somewhat difficult to parse from<br />
within an application.<br />
SQL Server provides two options that make it easier to work with the contents of trace files:
• <strong>SQL</strong> Server Profiler can be used to open trace files. Within <strong>SQL</strong> Server Profiler the trace output can be<br />
filtered and grouped for analysis. <strong>SQL</strong> Server Profiler is especially useful for working with small trace<br />
files.<br />
• Trace files can be imported into <strong>SQL</strong> Server using the fn_trace_gettable system function. Reading the<br />
files into a table is particularly useful when you need to analyze large volumes of trace data as the<br />
table can be indexed to improve the speed of common queries against the captured data. T-<strong>SQL</strong><br />
queries can then be used to analyze and filter the data.
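For example, a captured file might be imported and indexed like this (the file path and table name are assumptions):

```sql
-- Import a trace file into a table for T-SQL analysis.
-- The second argument (DEFAULT) reads all rollover files that belong to the trace.
SELECT *
INTO dbo.CapturedTrace
FROM fn_trace_gettable(N'D:\Traces\LightweightTrace.trc', DEFAULT);

-- Index a frequently filtered column to speed up common queries.
CREATE INDEX IX_CapturedTrace_Duration ON dbo.CapturedTrace (Duration);

-- Example analysis: the ten longest-running statements in the capture.
SELECT TOP (10) TextData, Duration, StartTime
FROM dbo.CapturedTrace
ORDER BY Duration DESC;
```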
Replaying Traces<br />
Key Points<br />
<strong>SQL</strong> Server Profiler provides the ability to replay trace files. The ability to replay trace files is useful for<br />
validating changes that you are considering making to a system or for testing a workload against new<br />
hardware, indexes, or physical layout changes.<br />
The replay does not need to be performed against the same system that the trace events were captured<br />
on but the system must be configured in a very similar way. This particularly applies to objects such as<br />
databases and logins.<br />
Distributed Replay<br />
<strong>SQL</strong> Server <strong>2012</strong> introduced a Distributed Replay feature for traces captured by either <strong>SQL</strong> Server Profiler<br />
or <strong>SQL</strong> Trace. Distributed Replay is not limited to replaying the workload from a single computer and<br />
provides a more scalable solution than the replay option provided by <strong>SQL</strong> Server Profiler. This capability<br />
can allow improved simulation of mission-critical workloads.<br />
Note In addition to the replay capabilities of <strong>SQL</strong> Server Profiler and the Distributed<br />
Replay Utility, a set of unsupported utilities from the <strong>SQL</strong> Server Product Support Group are<br />
available. These unsupported utilities are called the RML Utilities and are capable of<br />
replaying <strong>SQL</strong> Server workloads with advanced features such as synchronized replay<br />
through several clients. The RML utilities can also be used for analyzing trace files and are<br />
available for download from support.microsoft.com.<br />
Before you can replay captured traces, you need to ensure that all columns that are required for replay<br />
have in fact been captured. To make this easier, <strong>SQL</strong> Server Profiler provides a template called T<strong>SQL</strong><br />
Replay. You can use that template as a starting point but make sure you do not remove any columns that<br />
are included in the template.<br />
Question: When might it be a good idea to test systems using the replay functionality?
Default Trace<br />
Key Points<br />
Beginning with SQL Server 2005, a default server-side trace is started automatically whenever SQL Server starts. This trace is called the default trace and has a trace ID of 1. The default trace is a lightweight trace that keeps only up to 5 MB of data at any point in time. It captures the following events:
Object Events<br />
<strong>Database</strong> Data file auto grow, Data file auto shrink, <strong>Database</strong><br />
mirroring status change, Log file auto grow, Log file<br />
auto shrink<br />
Errors and Warnings Errorlog, Hash warning, Missing Column Statistics,<br />
Missing Join Predicate, Sort Warning<br />
Full-Text FT Crawl Aborted, FT Crawl Started, FT Crawl Stopped<br />
Objects Object Altered, Object Created, Object Deleted
Security Audit Audit Add DB user event<br />
Audit Add login to server role event<br />
Audit Add Member to DB role event<br />
Audit Add Role event<br />
Audit Add login event<br />
Audit Backup/Restore event<br />
Audit Change <strong>Database</strong> owner<br />
Audit DBCC event<br />
Audit <strong>Database</strong> Scope GDR event<br />
Audit Login Change Property event<br />
Audit Login Failed<br />
Audit Login GDR event<br />
Audit Schema Object GDR event<br />
Audit Schema Object Take Ownership<br />
Audit Server Starts and Stops<br />
Server Server Memory Change<br />
The default trace can be enabled or disabled using sp_configure. In the example shown in the slide, you<br />
can see how to re-enable the default trace if necessary.<br />
The default trace cannot be modified, and its file location cannot be changed.
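The slide example is not reproduced here, but the statements involved look like this sketch:

```sql
-- Re-enable the default trace ('default trace enabled' is an advanced option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;

-- Locate the default trace file so it can be opened in SQL Server Profiler
-- or imported with fn_trace_gettable.
SELECT id, path, max_size
FROM sys.traces
WHERE is_default = 1;
```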
Question: Should the default trace be kept enabled?
Combining Traces with Performance Monitor Logs<br />
Key Points<br />
Using <strong>SQL</strong> Server Profiler, you can open a Microsoft Windows® performance log, choose the counters<br />
you want to correlate with a trace, and display the selected performance counters alongside the trace in<br />
the <strong>SQL</strong> Server Profiler graphical user interface. When you select an event in the trace window, a vertical<br />
red bar in the System Monitor data window pane of <strong>SQL</strong> Server Profiler indicates the performance log<br />
data that correlates with the selected trace event.<br />
To correlate a trace with performance counters, open a trace file or table that contains the StartTime and EndTime data columns, and then click Import Performance Data on the SQL Server Profiler File menu. You can then open a performance log and select the System Monitor objects and counters that you want to correlate with the trace.
Question: What is the main advantage of using this feature?
Demonstration 3B: Combining Traces with Performance Monitor Logs<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_17_PRJ\10775A_17_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 32 – Demonstration 3B.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lab 17: Tracing Access to <strong>SQL</strong> Server<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_17_PRJ\10775A_17_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
The developers for the new marketing application are concerned about the performance of their queries.<br />
When the developers were testing the application they were working with small amounts of data and<br />
performance was acceptable. The developers are unsure that they have created appropriate indexes to<br />
support the application.<br />
You will use <strong>SQL</strong> Server Profiler to capture traces of application execution. You will then analyze the traces<br />
using the <strong>Database</strong> Engine Tuning Advisor.<br />
If you have time, you will configure traces using the <strong>SQL</strong> Trace system stored procedures.
Exercise 1: Capture a Trace Using <strong>SQL</strong> Server Profiler<br />
Scenario<br />
You have isolated the workload that causes the poor performance and you want to test the queries in a<br />
development environment. In this exercise, you will generate sample workload and you will capture<br />
database activity using <strong>SQL</strong> Server Profiler.<br />
The main tasks for this exercise are as follows:<br />
1. Create and start a suitable <strong>SQL</strong> Server Profiler trace.<br />
2. Execute the workload.<br />
3. Stop the trace.<br />
Task 1: Create and start a suitable <strong>SQL</strong> Server Profiler trace<br />
• Create a <strong>SQL</strong> Server Profiler trace based upon the following settings:<br />
• Tuning template<br />
• Filtered to the MarketDev database only<br />
• Rollover files disabled<br />
• Maximum trace file size is 500 MB<br />
• Start the trace.<br />
Task 2: Execute the workload<br />
• Open and execute the file 51 – Lab Exercise 1.sql which is a workload file to be analyzed.<br />
Note Ensure that you configure the workload window as per the instructions at the top of<br />
the script.<br />
Task 3: Stop the trace<br />
• When the query completes, stop the trace.<br />
• Close <strong>SQL</strong> Server Profiler.<br />
Results: After this exercise, you should have captured a workload using <strong>SQL</strong> Server Profiler.<br />
Exercise 2: Analyze a Trace Using <strong>Database</strong> Engine Tuning Advisor<br />
Scenario<br />
You want to take the results captured from the <strong>SQL</strong> Trace and identify any changes that could be made to<br />
improve performance. In this exercise you will analyze captured activity using the <strong>Database</strong> Engine Tuning<br />
Advisor.<br />
The main tasks for this exercise are as follows:<br />
1. Analyze the captured trace in <strong>Database</strong> Engine Tuning Advisor.<br />
2. Review the suggested modifications.
Task 1: Analyze the captured trace in <strong>Database</strong> Engine Tuning Advisor<br />
• Analyze the captured trace file using <strong>Database</strong> Engine Tuning Advisor.<br />
Task 2: Review the suggested modifications<br />
• Review the suggested modifications.<br />
• Close <strong>Database</strong> Engine Tuning Advisor.<br />
Results: After this exercise, you should have analyzed the trace using <strong>Database</strong><br />
Engine Tuning Advisor and reviewed the suggested modifications.<br />
Challenge Exercise 3: Configure <strong>SQL</strong> Trace (Only if time permits)<br />
Scenario<br />
You have noticed that performance is reduced when SQL Server Profiler is running on the production server. You want to capture the performance metrics while reducing the impact on the server. In this exercise, you will capture activity using the SQL Trace stored procedures to show how they can be used instead of SQL Server Profiler to lessen the impact on the server.
The main tasks for this exercise are as follows:<br />
1. Create a script that uses <strong>SQL</strong> Trace procedures to implement the same type of capture as you<br />
performed in Exercise 1 but with a different trace name.<br />
2. Test that the script works as expected by using the same workload.<br />
3. Analyze the new captured output and note if the suggested changes are identical to those suggested<br />
in Exercise 2.<br />
Task 1: Create a script that uses <strong>SQL</strong> Trace procedures to implement the same type of<br />
capture as you performed in Exercise 1 but with a different trace name<br />
• Create, start, and stop the same trace that you used in Exercise 1 but call the trace ProsewareTrace2.<br />
• Export the trace definition to a file.<br />
Task 2: Test that the script works as expected by using the same workload<br />
• Configure and start the saved trace definition file. Make sure that you note the trace ID.<br />
• Re-execute the 51 – Lab Exercise 1.sql workload file.<br />
Task 3: Analyze the new captured output and note if the suggested changes are<br />
identical to those suggested in Exercise 2<br />
• Stop the trace.<br />
• Open and review the captured trace file.<br />
Results: After this exercise, you should have captured a trace using <strong>SQL</strong> Trace.
Module Review and Takeaways<br />
Review Questions<br />
1. What is the main purpose of <strong>Database</strong> Engine Tuning Advisor?<br />
2. What can be used to test a workload after configuration changes?<br />
Best Practices<br />
1. Use <strong>SQL</strong> Server Profiler to perform short traces for debugging and other purposes.<br />
2. Use <strong>SQL</strong> Trace for large and long running traces.<br />
3. Use <strong>SQL</strong> Server Profiler to define traces and script them for <strong>SQL</strong> Trace.<br />
4. Import trace data into a database table for advanced analysis.<br />
5. Use <strong>Database</strong> Engine Tuning Advisor to analyze the database based on a workload rather than<br />
focusing on individual queries.
Module 18<br />
Monitoring <strong>SQL</strong> Server <strong>2012</strong><br />
Contents:<br />
Lesson 1: Monitoring Activity 18-3<br />
Lesson 2: Capturing and Managing Performance Data 18-15<br />
Lesson 3: Analyzing Collected Performance Data 18-23<br />
Lab 18: Monitoring <strong>SQL</strong> Server <strong>2012</strong> 18-32<br />
18-2 Monitoring <strong>SQL</strong> Server <strong>2012</strong><br />
Module Overview<br />
The <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> database engine is capable of running for very long periods without the<br />
need for administration. However, a process of ongoing monitoring of the activity that is occurring on the<br />
database server allows you to be proactive in dealing with potential issues before they arise.<br />
<strong>SQL</strong> Server provides a number of tools that can be used for monitoring current activity and for recording<br />
details of previous activity. You need to become familiar with what each of the tools does and with how to<br />
use the tools.<br />
When you record activity using monitoring tools, it is easy to become overwhelmed by the volume of<br />
output that a monitoring tool can provide. You also need to learn techniques for analyzing the output<br />
that is provided by the monitoring tools.<br />
Objectives<br />
After completing this module, you will be able to:
• Monitor current activity.<br />
• Capture and manage performance data.<br />
• Analyze collected performance data.
Lesson 1<br />
Monitoring Activity<br />
10775A: <strong>Administering</strong> Microsoft <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s 18-3<br />
Dynamic management views (DMVs) and dynamic management functions (DMFs) provide insights directly<br />
into the inner operations of the <strong>SQL</strong> Server database engine and are useful for monitoring. It is important<br />
for <strong>SQL</strong> Server database administrators to become familiar with some of the more useful DMVs and DMFs<br />
as part of a process of ongoing server monitoring.<br />
<strong>SQL</strong> Server Management Studio (SSMS) provides the Activity Monitor which can be used to investigate<br />
both current issues such as "Is one process being blocked by another process?" and recent historical issues<br />
such as "Which query has taken the most resources since the server was last restarted?" You should<br />
become familiar with the capabilities of Activity Monitor.<br />
The <strong>SQL</strong> Server processes also expose a set of performance-related objects and counters to the Windows<br />
Performance Monitor. These objects and counters allow you to monitor <strong>SQL</strong> Server as part of monitoring<br />
the entire server.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Explain Dynamic Management Views and Functions.<br />
• View activity using Dynamic Management Views.<br />
• Work with Activity Monitor in SSMS.<br />
• Work with Performance Monitor.<br />
• Work with <strong>SQL</strong> Server counters.
Overview of Dynamic Management Views and Functions<br />
Key Points<br />
In earlier versions of SQL Server, database administrators often used third-party tools to monitor the internal state of SQL Server. Most of these tools performed the monitoring by using extended stored procedures. This approach is undesirable because extended stored procedures operate within the memory space of the SQL Server process, and poorly written programs that operate in these memory regions can cause instability or crashes of SQL Server.
<strong>SQL</strong> Server 2005 and later offer dynamic management objects to provide insight into the inner operation<br />
of the database engine without the need to use extended stored procedures. Some of the objects have<br />
been created as views and are called dynamic management views (DMVs). Other objects have been<br />
created as functions and are called dynamic management functions (DMFs).<br />
Note The information exposed by DMVs and DMFs is generally not persisted in the<br />
database as is the case with catalog views. The views and functions are virtual objects that<br />
return state information. The state is cleared when the server instance is restarted.<br />
DMVs and DMFs<br />
DMVs and DMFs return server state information that can be used to monitor the health of a server<br />
instance, diagnose problems, and tune performance. There are two types of dynamic management views<br />
and functions:<br />
• Server-scoped dynamic management views and functions.<br />
• <strong>Database</strong>-scoped dynamic management views and functions.
All dynamic management views and functions exist in the sys schema and follow the naming convention<br />
dm_%. They are defined in the hidden resource database and are mapped to the other databases. The<br />
DMVs and DMFs are organized into a set of categories:<br />
Category Description<br />
sys.dm_exec_% These objects provide information about connections,<br />
sessions, requests, and query execution. For example,<br />
sys.dm_exec_sessions provides one row for every session that<br />
is currently connected to the server.<br />
sys.dm_os_% These objects provide access to <strong>SQL</strong> OS related information.<br />
For example, sys.dm_os_performance_counters provides<br />
access to <strong>SQL</strong> Server performance counters without the need<br />
to access them using operating system tools.<br />
sys.dm_tran_% These objects provide access to transaction management. For
example, sys.dm_tran_active_transactions provides details
of currently active transactions.
sys.dm_io_% These objects provide information on I/O processes. For<br />
example, sys.dm_io_virtual_file_stats provides details of I/O<br />
performance and statistics for each database file.<br />
sys.dm_db_% These objects provide database-scoped information. For<br />
example, sys.dm_db_index_usage_stats provides information<br />
about how each index in the database has been used.<br />
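Two simple queries illustrate the categories above; the specific counter name chosen is just one example:

```sql
-- sys.dm_exec_% category: one row per session currently connected to the server.
SELECT session_id, login_name, status, cpu_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

-- sys.dm_os_% category: read SQL Server performance counters without
-- leaving T-SQL or using operating system tools.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Buffer cache hit ratio';
```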
Required Permissions<br />
Querying a DMV or DMF requires SELECT permission on the object and VIEW SERVER STATE or VIEW DATABASE STATE permission, depending on whether the object is server-scoped or database-scoped. This lets you selectively restrict the access of a user or login to dynamic management views and functions. To control access for a user, first create the user in master (with any user name) and then deny that user SELECT permission on the DMVs or DMFs that you do not want them to access. After this, the user cannot select from those DMVs and DMFs, regardless of their current database context, because the DENY issued in master takes precedence.
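A sketch of that pattern, using hypothetical login and user names:

```sql
USE master;
GO
-- Hypothetical monitoring login with a matching user in master.
CREATE LOGIN MonitorLogin WITH PASSWORD = 'Pa$$w0rd1234';
CREATE USER MonitorUser FOR LOGIN MonitorLogin;

-- Allow server-scoped DMVs and DMFs in general...
GRANT VIEW SERVER STATE TO MonitorLogin;

-- ...but block one specific DMV. The DENY issued in master applies
-- regardless of which database the login later connects to.
DENY SELECT ON sys.dm_os_wait_stats TO MonitorUser;
```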
Question: Why would you want to use dynamic management views to view current activity?
Viewing Activity Using Dynamic Management Views<br />
Key Points<br />
You can see the list of available DMVs in Object Explorer in the System Views node for any database. The<br />
DMFs are shown in the Table Valued Functions node under the System Functions node that is under the<br />
master database.<br />
Note Dynamic management views and functions need to be referenced in T-<strong>SQL</strong><br />
statements by using the sys schema as a prefix. They cannot be referenced by one-part<br />
names.<br />
There are two basic types of dynamic management objects:<br />
• Objects that return real-time state information from the system<br />
• Objects that return recent historical information<br />
Objects That Return Real-Time State Information from the System<br />
Most DMVs and DMFs are designed to provide information about the current state of the system. In the<br />
example on the slide, two DMVs are being joined. The sys.dm_exec_sessions view returns one row for each<br />
current user session. The sys.dm_os_waiting_tasks view returns a row for each task that is currently waiting<br />
on a resource. By joining the two views and adding a filter, you can locate a list of user tasks that have<br />
been waiting for longer than 3000 milliseconds (3 seconds).<br />
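The join described above can be sketched as follows; the three-second threshold matches the description, while the column list is illustrative:

```sql
-- User tasks that have been waiting on a resource for more than 3 seconds.
SELECT s.session_id,
       s.login_name,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id
FROM sys.dm_os_waiting_tasks AS wt
INNER JOIN sys.dm_exec_sessions AS s
    ON wt.session_id = s.session_id
WHERE s.is_user_process = 1          -- exclude system sessions
  AND wt.wait_duration_ms > 3000;   -- waiting longer than 3 seconds
```

When blocking_session_id is non-zero, it identifies the session holding the resource the task is waiting for.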
Note Whenever a task has to wait for any resource, the task is sent to a waiting list. The<br />
task remains on that list until it receives a signal telling it that the requested resource is now<br />
available. The task is then returned to the running list, where it waits to be scheduled for<br />
execution again. This type of wait analysis is very useful when tuning system performance as<br />
it allows you to identify bottlenecks within the system.
In many cases, when a task is waiting, the cause of the wait will be some form of lock. Locks are discussed<br />
further in Module 20.<br />
Objects That Return Historical Information<br />
The second type of dynamic management object returns historical information. For example, you saw that<br />
the sys.dm_os_waiting_tasks view returned details of tasks that are currently waiting on resources. By<br />
comparison, the sys.dm_os_wait_stats view returns information about how often and how long any task<br />
had to wait for a specific wait_type since the <strong>SQL</strong> Server instance started.<br />
Another useful example of a historical function is the sys.dm_io_virtual_file_stats() function that returns<br />
information about the performance of database files.<br />
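For example, both of the following queries return cumulative values accumulated since the instance last started:

```sql
-- Cumulative wait statistics, ordered by total wait time.
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Cumulative I/O statistics for every database file (NULL, NULL = all files).
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```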
Question: What types of resources might <strong>SQL</strong> Server need to wait for?
Demonstration 1A: Viewing Activity Using Dynamic Management Views<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Working with Activity Monitor in <strong>SQL</strong> <strong>Server®</strong> Management Studio<br />
Key Points<br />
Activity Monitor is a tool in <strong>SQL</strong> Server Management Studio that shows information about processes,<br />
waits, I/O resource performance, and recent expensive queries. To start Activity Monitor, in <strong>SQL</strong> Server<br />
Management Studio, right-click the server name, and then click Activity Monitor.<br />
Activity Monitor has five sections:<br />
• The Overview section contains graphical information about processor usage, waiting tasks, database<br />
I/O, and batch requests per second.<br />
• The Processes section includes detailed information about processes, their IDs, logins, databases, and commands. This section also shows details of processes that are blocking other processes.
• The Resource Waits section shows categories of processes that are waiting for resources, and<br />
information about the wait times.<br />
• The Data File I/O section shows information about the physical database files in use, and their recent<br />
performance.<br />
• The Recent Expensive Queries section shows detailed information about the most expensive recent<br />
queries, and resources consumed by those queries. You can right-click the queries in this section to<br />
view either the query or an execution plan for the query.<br />
You can filter data by clicking column headings and choosing the parameter you want to view<br />
information for.<br />
Question: Why is it important to monitor waits?
Demonstration 1B: Working with Activity Monitor in SQL Server
Management Studio
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click
SQL Server Management Studio. In the Connect to Server window, type Proseware and click
Connect. From the File menu, click Open, click Project/Solution, navigate to
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file
from within Solution Explorer.
2. Open the 12 – Demonstration 1B.sql script file.
3. Follow the instructions contained within the comments of the script file.
Working with Performance Monitor
Key Points
Windows Performance Monitor is a tool in Microsoft Windows Server 2008 and later that brings together
several previously separate performance and monitoring tools from Windows Server 2003. By
consolidating several sources of information in one place, Windows Performance Monitor helps
administrators obtain the information needed to diagnose server performance and instability issues.
An important feature in Windows Performance Monitor is the Data Collector Set, which groups data
collectors into reusable elements for use with different performance monitoring scenarios. Once a group
of data collectors is stored as a Data Collector Set, operations such as scheduling can be applied to the
entire set through a single property change.
Windows Performance Monitor also includes default Data Collector Set templates to help system
administrators begin collecting performance data specific to a server role or monitoring scenario
immediately.
Windows Performance Monitor is the key tool for monitoring Microsoft Windows® systems. Because SQL
Server runs on the Windows operating system, it is important to monitor at the server level as well as at
the database engine level, as problems within the database engine might be caused by problems outside
the database engine.
The main focus of Windows Performance Monitor is on monitoring CPU, memory, the disk system, and
the network. After you install SQL Server, a number of SQL Server objects and counters become available
within Windows Performance Monitor.
Question: Why is Performance Monitor useful when determining performance issues?
Working with SQL Server Counters
Key Points
It is important to understand some of the basic terminology used within Windows Performance Monitor:
• The term object is used to represent a resource that can be monitored.
• Every object exposes one or more counters.
• A counter might have several instances if more than one resource of that type exists.
As an example, there is an object called Processor that consists of several counters. The Processor object
provides metrics related to the processors that are available on the server. One of the commonly used
counters for the Processor object is the "% Processor Time" counter. That counter then offers a set of
instances that represent the individual processor cores that exist on the system. In addition, a value called
_Total represents all the instances combined.
Application Objects and Counters
Many applications expose application-related statistical data to Performance Monitor. SQL Server exposes
a large number of objects and counters.
The SQL Server objects have the following naming convention:

Object Name Format    Usage
SQLServer:            Used for default instances
MSSQL$:               Used for named instances
SQLAgent$:            Used for SQL Server Agent
SQL Server also provides the same counter values through the sys.dm_os_performance_counters dynamic
management view.
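For example, the counter values that Performance Monitor displays can be read directly with a query such as the following; Batch Requests/sec is one of the standard SQL Server counters, and the object name follows the convention above (SQLServer: for a default instance, MSSQL$&lt;instance&gt;: for a named instance):

```sql
-- Reading a SQL Server performance counter from T-SQL rather than
-- from the Performance Monitor user interface.
SELECT object_name,
       counter_name,
       instance_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Batch Requests/sec';
```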
Question: Why might it be useful to query sys.dm_os_performance_counters to access SQL
Server counters rather than using Performance Monitor?
Demonstration 1C: Working with Performance Monitor
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click
SQL Server Management Studio. In the Connect to Server window, type Proseware and click
Connect. From the File menu, click Open, click Project/Solution, navigate to
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file
from within Solution Explorer.
2. Open the 13 – Demonstration 1C.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lesson 2
Capturing and Managing Performance Data
You have seen that DMVs and DMFs provide useful information about the state of the system. The values
that the DMVs and DMFs provide are not generally persisted in any way and only reside in memory while
the server is running. When the server instance is restarted, these values are reset.
When DMVs and DMFs were introduced in SQL Server 2005, it was common for users to want to persist
the values that the DMVs and DMFs provided. To this end, many users would create a database to hold
the values, and then create a job that would periodically collect and save the values.
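A minimal sketch of that pre-Data Collector approach might look like the following; the database and table names are illustrative, and the example snapshots sys.dm_os_wait_stats so the values survive an instance restart:

```sql
-- Illustrative monitoring database; the name MonitoringDB is an assumption.
USE MonitoringDB;
GO
-- Table to accumulate periodic snapshots of wait statistics.
CREATE TABLE dbo.WaitStatsHistory
(
    capture_time        datetime2     NOT NULL DEFAULT SYSDATETIME(),
    wait_type           nvarchar(60)  NOT NULL,
    waiting_tasks_count bigint        NOT NULL,
    wait_time_ms        bigint        NOT NULL
);
GO
-- This statement would be scheduled as a SQL Server Agent job step,
-- running periodically to persist the in-memory DMV values.
INSERT INTO dbo.WaitStatsHistory (wait_type, waiting_tasks_count, wait_time_ms)
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats;
```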
The Data Collector system that was introduced with SQL Server 2008 formalizes this concept by creating a
central warehouse for holding performance data, jobs for collecting and uploading the data to the
warehouse, and a set of high-quality reports that can be used to analyze the data. This lesson describes
how to set up and configure the Data Collector. Lesson 3 describes the reports that are available from the
data that has been collected by the Data Collector.
Objectives
After completing this lesson, you will be able to:
• Explain the role of the Data Collector.
• Design a topology for use with the Data Collector.
• Configure the Data Collector.
• Configure security for the Data Collector.
• Monitor the Data Collector.
Overview of Data Collector
Key Points
Earlier in this module, you have seen how useful data can be obtained from the use of DMVs and DMFs.
One disadvantage of using dynamic management objects is that most views and functions only return
real-time data, and the views and functions that return historic data are aggregations of the occurrences
over time. However, to perform effective performance tuning and monitoring, an overview over time is
needed, along with the ability to drill down to more detailed levels when investigating issues. The SQL
Server Data Collector helps you to achieve this.
Data Collection Sets
When you configure the SQL Server Data Collector, a number of System Data Collection Sets are created.
These sets define the data that needs to be collected, how often the data is uploaded to a central
repository, and how long the data is retained in that repository.
The Data Collector can collect information from several locations:
• The Data Collector can query DMVs and DMFs to retrieve detailed information about the operation of
the system.
• The Data Collector can retrieve performance counters that provide metrics about the performance of
both SQL Server and the entire server.
• The Data Collector can also capture SQL Trace events that have occurred.
In addition to the System Data Collection Sets, the SQL Server Data Collector can be extended by the
creation of user-defined Data Collection Sets. The ability to add user-defined Data Collection Sets allows
users to specify the data that they wish to collect and to use the infrastructure that is provided by the SQL
Server Data Collector to collect and centralize the data.
Management Data Warehouse and Reports
The data that is collected by the SQL Server Data Collector is stored in a central repository called a
management data warehouse. The management data warehouse is a SQL Server database that is created
by a wizard that is also used to configure data collection.
Three standard reports and a rich set of sub-reports have been provided with the Data Collector, but it is
possible to write your own reports that are based on either the data that is collected by the System Data
Collection Sets or data that is collected by user-defined Data Collection Sets.
Question: Can Data Collector be used for real-time monitoring?
Designing a Data Collector Topology
Key Points
There are three components that make up the data collection system in SQL Server:
• The Data Collector, which is made up of a set of jobs that run on the local server, collect data from
dynamic management objects, performance counters, and SQL Trace events, and upload that data to
a central repository.
• The central repository, which can consolidate data from multiple server instances.
• The rich reports that are available to analyze the data in the management data warehouse. The
reports are accessed using SSMS.
A large enterprise should consider the use of a standalone system for the management data warehouse.
There are two goals for creating a centralized management data warehouse:
• You can access reports that combine information for all server instances in your enterprise.
• You can offload the need to hold collected data, and to report on it, from the production servers.
The Data Collector has two methods for uploading captured performance data into the central warehouse.
Low-volume information is sent immediately to the warehouse. Higher-volume information is cached
locally first and then uploaded to the warehouse using SSIS.
Question: Is it possible to configure all of the Data Collector features on a single instance?
Configuring Data Collector
Key Points
Installing the data collection system in SQL Server involves two steps:
• Configuring a central management data warehouse to hold the collected data.
• Configuring each server instance to collect and upload the required data.
SQL Server provides a wizard that is used for both these tasks. The first step when running the wizard is to
create a management data warehouse. The management data warehouse can store collection data from
many SQL Server instances.
Planning Space Requirements
Sufficient disk space needs to be available to support the needs of the management data warehouse. In a
typical configuration, you should allow at least 300 MB per day for each instance that is managed.
Impact on Local Systems
The only processes that are run on the local instances are the jobs that are used to collect and upload the
data to the management data warehouse. Some data is collected very regularly and is cached locally and
later uploaded using SSIS. Other data is captured infrequently and uploaded immediately.
System Data Collection Sets are created automatically during setup. They can be enabled and disabled as
needed, and both the frequency of collection and the retention periods for collected data can be
customized for System Data Collection Sets as well as for user-defined Data Collection Sets.
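Outside the wizard, collection sets can also be inspected and adjusted with the Data Collector stored procedures in msdb. The following is a hedged sketch; the collection set ID and retention value are illustrative and would normally be looked up first:

```sql
USE msdb;
GO
-- List the configured collection sets and whether each is running.
SELECT collection_set_id, name, is_running
FROM dbo.syscollector_collection_sets;

-- Start a collection set (the ID shown is illustrative).
EXEC dbo.sp_syscollector_start_collection_set @collection_set_id = 1;

-- Shorten the retention period for the same collection set to 7 days.
EXEC dbo.sp_syscollector_update_collection_set
    @collection_set_id = 1,
    @days_until_expiration = 7;
```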
It is also important to consider the security requirements for the Data Collector. The security requirements
are discussed in the next topic.
Question: What might be interesting to include as a custom collection set?
Data Collector Security
Key Points
The SQL Server Data Collector system has two security aspects that need to be considered:
• Who can access the management data warehouse?
• Who can configure the data collection?
Database roles exist for both these purposes, as shown in the tables on the slide.
Note A user needs to be a member of the mdw_reader role to be able to access the reports
and the collected data.
As well as needing to be a member of the mdw_writer role to upload data, the jobs that collect the data
need whatever permissions are required to access the data that they are collecting.
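For example, access could be granted with statements such as the following; the database name (MDW, matching this module's lab) and the Windows account names are illustrative:

```sql
-- Run in the management data warehouse database.
USE MDW;
GO
-- Grant report and data access to an illustrative report viewer account.
CREATE USER [ADVENTUREWORKS\ReportViewer]
    FOR LOGIN [ADVENTUREWORKS\ReportViewer];
ALTER ROLE mdw_reader ADD MEMBER [ADVENTUREWORKS\ReportViewer];

-- The account that uploads collected data needs mdw_writer membership.
ALTER ROLE mdw_writer ADD MEMBER [ADVENTUREWORKS\AgentServiceAcct];
```

The ALTER ROLE … ADD MEMBER syntax shown here is the form introduced in SQL Server 2012.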
Question: If the SQL Server Agent service account does not have sufficient privileges to
access the management data warehouse to upload the data, what configuration changes
could you make to allow this?
Monitoring Data Collector
Key Points
As with other SQL Server Agent jobs, the history of the jobs that are used by the SQL Server Data Collector
to capture and upload performance data is held in the SQL Server Agent job history tables and can be
viewed using the standard SQL Server Agent job history viewer.
In addition to the standard job history, the Data Collector also logs configuration and other log
information to tables in the msdb database. The Data Collector calls stored procedures to add the log
information and also uses the SSIS logging features for the SSIS packages that it executes.
The data that is logged into the msdb database is kept with the same retention period settings as the
Data Collection Sets that it relates to. The information that is retained can be viewed through the log file
viewer or by querying the following views and functions:
• fn_syscollector_get_execution_details()
• fn_syscollector_get_execution_stats()
• syscollector_execution_log
• syscollector_execution_log_full
• syscollector_execution_stats
Three levels of logging are available and can be set by calling the sp_syscollector_update_collection_set
system stored procedure. The lowest level of logging records starts and stops of collector activity. The
next level of logging adds execution statistics and progress reports. The highest level of logging adds
detailed SSIS package logging.
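As a sketch, the logging level can be raised and the execution log inspected as follows; the collection set ID is illustrative:

```sql
USE msdb;
GO
-- Raise logging for one collection set to the most detailed level
-- (the levels correspond to the three tiers described above).
EXEC dbo.sp_syscollector_update_collection_set
    @collection_set_id = 1,
    @logging_level = 2;

-- Review the most recent Data Collector execution log entries.
SELECT TOP (20) *
FROM dbo.syscollector_execution_log
ORDER BY start_time DESC;
```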
Question: Why is it better to use the Data Collector logs than the Data Collector job history
for troubleshooting Data Collector?
Demonstration 2A: Configuring Data Collector
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click
SQL Server Management Studio. In the Connect to Server window, type Proseware and click
Connect. From the File menu, click Open, click Project/Solution, navigate to
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file
from within Solution Explorer.
2. Open the 21 – Demonstration 2A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lesson 3
Analyzing Collected Performance Data
Once performance data has been collected from a number of server instances and consolidated into a
management data warehouse, the data can then be analyzed. You can write your own custom reports
using SQL Server Reporting Services, via custom reports in SSMS, or via T-SQL queries. Most users will find
the standard reports that are supplied with SQL Server to be sufficient, without the need to write
additional reports. It is important to be familiar with the information that is contained in the standard
reports and with how to navigate within the reports.
Objectives
After completing this lesson, you will be able to:
• Explain the available Data Collector reports.
• Use the Disk Usage report.
• Use the Server Activity report.
• Use the Query Statistics report.
Overview of Data Collector Reports
Key Points
The SQL Server Data Collector provides a series of standard reports that are based on the data that is
collected from the System Data Collection Sets and that has been consolidated into a centralized
management data warehouse. The reports are accessed from within SQL Server Management Studio as a
right-click option from the management data warehouse database, as shown in the screenshot below:
The overview page for the management data warehouse reports provides links to the main report areas,
as shown in the following screenshot:
From this overview page, there are three main report areas that are linked:
• Disk Usage Summary
• Query Statistics History
• Server Activity History
Note Each report can be printed and exported to PDF or Microsoft Excel® files for further
analysis.
Question: What period would the Data Collector reports cover?
Disk Usage Report
Key Points
The disk usage reports are based on the Disk Usage System Data Collection Set. By default, the collection
set gathers disk usage data every six hours and keeps the data for two years.
Note that, by default, the Disk Usage System Data Collection Set retains data for much longer than other
System Data Collection Sets. However, as the amount of data collected by this collection set is quite small,
this should not be a concern, and the value of being able to track individual file space usage over time
warrants the use of this longer retention period.
This report also includes a number of hyperlinks that lead to a set of linked sub-reports for drilling into
the usage data.
Question: What might be a good custom report that could be created based on disk usage
data?
Demonstration 3A: Viewing the Disk Usage Report
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click
SQL Server Management Studio. In the Connect to Server window, type Proseware and click
Connect. From the File menu, click Open, click Project/Solution, navigate to
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file
from within Solution Explorer.
• Open and follow the instructions in the 21 – Demonstration 2A.sql script file.
2. Open the 31 – Demonstration 3A.sql script file.
3. Follow the instructions contained within the comments of the script file.
Server Activity Report
Key Points
The Server Activity Report is based on the Server Activity System Data Collection Set. The collection set
gathers a large number of SQL Server-related statistics, such as waits, locks, latches, and memory statistics,
that are accessed using DMVs. In addition, the collection set gathers Windows and SQL Server
performance counters to retrieve information such as CPU and memory usage from the system and from
the processes that are running on the system. The collection runs every sixty seconds and is uploaded
every fifteen minutes by default. The history is retained for fourteen days by default.
This report has a large number of linked sub-reports that provide much deeper information than is
provided on the initial summary. The initial report is a dashboard that provides an overview. If you
investigate this report, you will find that almost every item displayed is a possible drill-through point. For
example, you can click a trend line in a graph to find out the values that make up the trend.
Question: Why is it important that data collection also tracks the memory usage of other
processes running on the system?
Demonstration 3B: Viewing the Server Activity Report
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click
SQL Server Management Studio. In the Connect to Server window, type Proseware and click
Connect. From the File menu, click Open, click Project/Solution, navigate to
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file
from within Solution Explorer.
• Open and follow the instructions in the 21 – Demonstration 2A.sql script file.
2. Open the 32 – Demonstration 3B.sql script file.
3. Follow the instructions contained within the comments of the script file.
Query Statistics Report
Key Points
The Query Statistics Report is based on the Query Statistics System Data Collection Set, which retrieves
details of expensive queries. The collection set is the most intensive default collection set and, by
default, runs every ten seconds. To avoid the overhead of constant uploading, the data that is collected by
this collection set is cached in the local file system and uploaded using SSIS every fifteen minutes. The
data that is collected is retained for fourteen days by default, but this value can be extended.
SQL Server determines expensive queries based upon:
• Elapsed time
• Worker time
• Logical reads
• Logical writes
• Physical reads
• Execution count
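These cost metrics correspond to columns in the sys.dm_exec_query_stats DMV. As an illustrative query (a sketch, not the collection set's actual implementation), the most expensive cached queries by CPU time can be listed like this:

```sql
-- Top cached queries ranked by total worker (CPU) time.
-- The same DMV also exposes elapsed time, logical/physical reads,
-- logical writes, and execution count, matching the list above.
SELECT TOP (10)
    qs.total_worker_time,
    qs.total_elapsed_time,
    qs.total_logical_reads,
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```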
This report also includes a large number of linked sub-reports that can be used to drill through to higher
levels of detail. As an example, it is possible to retrieve query plans from the expensive queries that were
in memory at the time the capture was performed.
Question: What is meant by the term "expensive queries"?
Demonstration 3C: Viewing the Query Statistics Report
Demonstration Steps
1. If Demonstration 1A was not performed:
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
• In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, click
SQL Server Management Studio. In the Connect to Server window, type Proseware and click
Connect. From the File menu, click Open, click Project/Solution, navigate to
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln and click Open.
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file
from within Solution Explorer.
• Open and follow the instructions in the 21 – Demonstration 2A.sql script file.
2. Open the 33 – Demonstration 3C.sql script file.
3. Follow the instructions contained within the comments of the script file.
Lab 18: Monitoring SQL Server 2012
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must
complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click
SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln.
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query
00-Setup.sql. When the query window opens, click Execute on the toolbar.
Lab Scenario
The earlier versions of SQL Server that you have worked with did not include dynamic management
functions and views. You have recently read about these and are interested to see how they might be
used for collecting performance information.
Rather than collecting information separately for each SQL Server instance, you have decided to collect all
the performance information to a central server. This will help when dealing with issues that were not
reported to the helpdesk at the time they occurred.
Exercise 1: Investigating DMVs
Scenario
You have been provided with a series of sample queries that can be used to provide information from
Dynamic Management Views and to provide an insight into SQL Server activity. In this exercise, you will
execute these queries to gain an understanding of the potential uses for DMVs.
The main task for this exercise is as follows:
1. Investigate the use of Dynamic Management Views and Functions.
Task 1: Investigate the use of Dynamic Management Views and Functions
• Open the script file 51 - Lab Exercise 1.sql and follow the instructions contained in the script file.
Results: After this exercise, you should have investigated the use of Dynamic Management Views
and Functions.
Exercise 2: Configure Management Data Warehouse
Scenario
You want to be able to centrally collect performance information so that you can analyze all the
Adventure Works performance data at a later stage. In this exercise, you will configure a server to hold the
management data warehouse to collect the performance data.
The main task for this exercise is as follows:
1. Create a management data warehouse for central collection of performance data.
Task 1: Create a management data warehouse for central collection of performance data
• Create a management data warehouse for central collection of performance data. Create the
management data warehouse with a database name of MDW and on the Proseware instance.
Results: After this exercise, you should have created a management data warehouse.
Exercise 3: Configure Instances for Data Collection
Scenario
You want to configure each Adventure Works SQL Server instance so that it sends performance
information to a central server. In this exercise, you will configure each of the SQL Server instances to send
performance data to the newly configured data warehouse.
The main tasks for this exercise are as follows:
1. Configure data collection on each instance.
2. Configure data collection security for the AdventureWorks instance.
Task 1: Configure data collection on each instance
• Configure data collection on the Proseware instance.
• Set the target for the performance data to the management data warehouse created in Exercise 2.
• Configure data collection on the AdventureWorks instance.
Results: After this exercise, you should have configured both instances for data collection.
Challenge Exercise 4: Work with Data Collector Reports (Only if time
permits)
Scenario
You want to create a performance report that summarizes the performance information for all of the
Adventure Works SQL Server instances. In this exercise, you will review the reports provided by the data
collection system.
Note Performance data takes some time to begin to be collected. To assist with this
exercise, you have been provided with a backup of another management data warehouse
to use for testing reports.
The main tasks for this exercise are as follows:
1. Disable data collectors on both instances.
2. Restore a backup of the MDW database.
3. Review the available reports.
Task 1: Disable data collectors on both instances
• Disable data collection on the Proseware instance.
• Disable data collection on the AdventureWorks instance.
Task 2: Restore a backup of the MDW database
• Restore the MDW database from the backup file
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ\MDW.bak.
Task 3: Review the available reports
• Create a server activity report.
• Create a disk space usage report.
• Create a query statistics report.
Results: After this exercise, you should have worked with Management Data Warehouse Reports.
Module Review and Takeaways<br />
Review Questions<br />
1. Why is it better to have a central management data warehouse for data collection than local<br />
installations?<br />
2. What are Dynamic Management Objects?<br />
Best Practices<br />
1. Use Dynamic Management Objects to perform real-time monitoring and troubleshooting.<br />
2. Use Activity Monitor for easy access to the most relevant information.<br />
3. Use Performance Monitor to gather metrics for Windows and <strong>SQL</strong> Server.<br />
4. Create a central Management Data Warehouse to hold historical performance information.<br />
5. Set up data collection to gather performance data of your <strong>SQL</strong> Server Instances.
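As a simple illustration of the first practice, the following query is a sketch of real-time monitoring with one of the dynamic management views (the session_id filter is a common convention for excluding most system sessions):<br />

```sql
-- List currently executing requests and their wait state
-- by using the sys.dm_exec_requests dynamic management view.
SELECT session_id,
       status,
       command,
       wait_type,
       total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50; -- user sessions typically have IDs above 50
```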
Module 19<br />
Managing Multiple Servers<br />
Contents:<br />
Lesson 1: Working with Multiple Servers 19-3<br />
Lesson 2: Virtualizing <strong>SQL</strong> Server 19-9<br />
Lesson 3: Deploying and Upgrading Data-tier Applications 19-15<br />
Lab 19: Managing Multiple Servers 19-22<br />
Module Overview<br />
<strong>Database</strong> administrators who work with <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> are increasingly being required to<br />
manage larger numbers of servers. The need to manage multiple servers presents additional challenges.<br />
Microsoft has been making substantial investments in <strong>SQL</strong> Server tooling to assist in the management of<br />
multiple servers.<br />
While the virtualization of operating systems and applications is not new, virtualized <strong>SQL</strong> Server systems<br />
have been less common than other types of systems. However, virtualized <strong>SQL</strong> Server systems are now<br />
rapidly being deployed. It is important that you understand the core concepts involved with the<br />
virtualization of <strong>SQL</strong> Server systems, the tools that are commonly used to implement the virtualization,<br />
and how such systems are managed.<br />
In <strong>SQL</strong> Server <strong>2012</strong>, Microsoft introduced a new form of database development project known as data-tier<br />
applications. While database administrators do not necessarily need to understand how to create data-tier<br />
applications, database administrators need to know how to deploy and upgrade data-tier applications.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Work with multiple servers.<br />
• Describe options for virtualizing <strong>SQL</strong> Server.<br />
• Deploy and upgrade Data-Tier Applications.
Lesson 1<br />
Working with Multiple Servers<br />
One of the first tools that you need to become familiar with when you need to manage multiple <strong>SQL</strong><br />
Server systems is the central management server (CMS). The CMS was first introduced in <strong>SQL</strong> Server 2008<br />
and allows you to define groups of servers and to execute queries and apply policies against the defined<br />
groups of servers.<br />
It is important that you are aware of how to configure a CMS, to define server groups, and to execute<br />
queries against the groups of servers. It is also important that you are aware of the limitations related to<br />
multi-server queries.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Configure central management servers.<br />
• Execute multi-server queries.
Overview of Central Management Servers<br />
Key Points<br />
For many product versions, <strong>SQL</strong> Server Management Studio has provided the ability for users to define<br />
groups of servers in the Registered Servers tool window. The typical use for this capability was to group<br />
together servers in different environments. For example, you could create a group of development<br />
servers, a group of test servers, a group of staging servers, and a group of production servers.<br />
No actions were permitted against entire groups of servers, and a key challenge with this capability was<br />
that the definition of the server groups was stored in an XML file on the local client computer that was<br />
executing SSMS.<br />
If a user moved to a different computer, the groups of servers would no longer exist and would need to<br />
be recreated. If the list of servers in a group changed, the groups would need to be modified on every<br />
computer that was executing SSMS, if access to the groups was needed.<br />
Central Management Server<br />
A central management server is a server that is configured to maintain a centralized list of groups of<br />
servers that needs to be defined only once and that can be accessed by all authorized users. The<br />
centralized list is stored in tables within the msdb database of the CMS.<br />
Users who need to access the list of servers must be members of the ServerGroupReaderRole in the msdb<br />
database. All <strong>SQL</strong> Server editions and versions can be registered through the CMS but all registrations<br />
require the use of Windows Authentication. The server that provides the CMS needs to be <strong>SQL</strong> Server<br />
2008 or later.
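As a sketch of where this metadata is stored, the registrations can be inspected directly in msdb; the view and column names below are taken from the documented shared registered servers catalog, but verify them against your own instance:<br />

```sql
-- List the centrally stored server groups and their registered servers.
SELECT g.name AS group_name,
       s.server_name
FROM msdb.dbo.sysmanagement_shared_server_groups AS g
JOIN msdb.dbo.sysmanagement_shared_registered_servers AS s
    ON s.server_group_id = g.server_group_id;
```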
While the ability to maintain central lists is useful, <strong>SQL</strong> Server 2008 and later versions provide the ability to<br />
perform actions against entire groups of computers. By right-clicking a group in the Registered Servers<br />
tool window, users can execute T-<strong>SQL</strong> statements against all servers within the group at the same time.<br />
Users can also use Object Explorer to browse multiple servers concurrently.<br />
Note Users can also import and evaluate policies that are part of <strong>SQL</strong> Server Policy-Based<br />
Management (PBM) but PBM is an advanced topic that is out of scope for this course.<br />
Question: What is the main advantage of using a central management server instead of<br />
standard registered servers in SSMS?
Executing Multi-server Queries<br />
Key Points<br />
When you open a multi-server query window, SSMS changes the color of the status bar to indicate to you<br />
that the query is not a standard query. In addition, instead of showing a server name at the bottom of the<br />
window, SSMS shows the name of the server group. The following screenshot shows the result of<br />
executing the following command against a group containing both the AdventureWorks and Proseware<br />
servers:<br />
SELECT SERVERPROPERTY('IsFullTextInstalled') AS IsFullTextInstalled;
Note the additional column called Server Name that is included in the results but was not included in the<br />
SELECT statement. For the execution of multi-server queries, SSMS provides options that determine how<br />
the results of the queries should be presented. From the Tools menu, click Options, click Query Results,<br />
click <strong>SQL</strong> Server, and then click Multi-server Results as shown in the following screenshot:<br />
By default, SSMS will merge the results from each server into a single result set and insert a first column<br />
that contains the name of the server. Optionally, you can remove this server name, add the login name, or<br />
request that the results are not merged.<br />
Question: What happens if the user has no permissions to execute a statement on one of<br />
the servers of a multi-server query?
Demonstration 1A: Executing Multi-server Queries<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_19_PRJ\10775A_19_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 11 – Demonstration 1A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lesson 2<br />
Virtualizing <strong>SQL</strong> Server<br />
Virtualization of servers has been common for many years, particularly in relation to server consolidation<br />
efforts. <strong>SQL</strong> Server virtualization has been the exception: it has been much less common to see <strong>SQL</strong> Server<br />
systems virtualized, mostly because of concerns about the intense I/O requirements that are<br />
common with <strong>SQL</strong> Server systems. This situation is now rapidly changing and it is very important that<br />
database administrators who work with <strong>SQL</strong> Server are aware of the techniques for virtualizing <strong>SQL</strong> Server<br />
and of the advantages and disadvantages associated with <strong>SQL</strong> Server virtualization.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Discuss the advantages and disadvantages of virtualizing <strong>SQL</strong> Server.<br />
• Describe virtualization.<br />
• Describe available virtualization options.<br />
• Explain considerations for virtualizing <strong>SQL</strong> Server.<br />
• Describe System Center Virtual Machine Manager.
Discussion: Advantages and Disadvantages of Virtualizing <strong>SQL</strong> Server<br />
Discussion Points<br />
Question: Do you use virtualization of <strong>SQL</strong> Server in your environment?<br />
Question: If you are using virtualization, why are you using it?<br />
Question: What are the main advantages provided by virtualization?<br />
Question: What are the main concerns with virtualizing <strong>SQL</strong> Server?
Overview of <strong>SQL</strong> Server Virtualization<br />
Key Points<br />
Virtualization is a technology that allows you to create and manage workloads running in separate virtual<br />
machines rather than separate physical computers. Microsoft Hyper-V® is a feature of Windows <strong>Server®</strong><br />
2008 (R2) x64 and is installed as an operating system role.<br />
Hyper-V supports isolation through a concept of partitions. A Hyper-V partition is a logical unit that<br />
executes an operating system. Hardware access is performed through the root partition that is initially<br />
created as part of the operating system. None of the child partitions has direct access to the hardware.<br />
Child partitions access resources through the hypervisor that controls the whole system.<br />
Every machine must have a root partition with Windows Server running. The root partition is used to<br />
manage the system and any child partitions.<br />
The virtualization that is provided by using Hyper-V technology not only provides better use of hardware<br />
resources, but also enables easier management and high availability through features such as Live Migration,<br />
Snapshots, and many others.<br />
Live Migration allows the movement of a running child partition from one physical computer to<br />
another, while maintaining user connections throughout the process.<br />
Snapshots allow the creation of named points in time that entire virtual machines can be reverted to on<br />
demand.<br />
Question: What advantage would Live Migration provide over other options such as <strong>SQL</strong><br />
Server Failover Clustering?
Common Virtualization Scenarios<br />
Key Points<br />
There are many reasons for virtualizing a <strong>SQL</strong> Server installation:<br />
• One of the primary reasons for virtualizing <strong>SQL</strong> Server is to consolidate a number of standalone<br />
servers onto a single more powerful physical server. Consolidation can bring a number of benefits,<br />
particularly in terms of physical space and power consumption.<br />
• Another core reason for virtualizing <strong>SQL</strong> Server is to move older versions of <strong>SQL</strong> Server and older<br />
server operating systems that need to be retained for application compatibility reasons, onto newer<br />
hardware platforms that are more supportable. Many older servers cannot be repaired when they fail.<br />
Virtualizing the servers alleviates much of this issue. P2V (physical-to-virtual) tools are available<br />
to help with this migration.<br />
• Virtual servers are independent of the underlying hardware and are much easier to move to alternate<br />
hardware.<br />
• Hyper-V based virtualization supports the creation of Hyper-V clusters across multiple physical<br />
servers. <strong>SQL</strong> Server failover clusters can be constructed above the Hyper-V clusters to provide high<br />
availability options. Live Migration can be used for manual failover of virtual machines.<br />
• It is often difficult to balance the resource requirements of different applications. By separating the<br />
applications into separate virtual servers, dynamic balancing of resources between the virtual<br />
machines is possible.<br />
• Test environments are easy to create and tear down when they are based on virtualization<br />
technology. Snapshots provide the ability to roll back the effect of application installations or test<br />
executions.<br />
Question: Why would virtualization be very useful when testing applications?
Considerations for Virtualizing <strong>SQL</strong> Server<br />
Key Points<br />
Before virtualizing a <strong>SQL</strong> Server (or any) workload, you should carry out an appropriate proof of concept<br />
(POC), making sure to use typical database sizes and typical workloads.<br />
A common mistake when virtualizing <strong>SQL</strong> Server systems is to neglect to provide<br />
sufficient I/O resources for the virtual servers. There is a common misconception that a set of virtualized<br />
servers requires fewer resources than the individual servers. Pass-through disks route I/O calls to the host<br />
system for execution and add minimal overhead to the calls. Pass-through disks should be used wherever<br />
possible. Emulated disks introduce substantial overhead and should be avoided.<br />
The <strong>SQL</strong>IO utility should be used to stress test the I/O subsystem before deploying <strong>SQL</strong> Server, as should<br />
be done with a non-virtualized <strong>SQL</strong> Server system. The testing should be performed from within the<br />
virtual system that will be used to run <strong>SQL</strong> Server so that all software layers are tested. The same<br />
suggestions regarding the separation of data and log files onto separate physical drives that apply to<br />
non-virtualized servers also apply to virtualized servers.<br />
Virtualized servers tend to make more use of CPU resources than non-virtualized servers. Network<br />
intensive systems require even greater CPU resources.<br />
Hyper-V provides a set of integration components that increase the performance of the virtual system by<br />
replacing devices that are emulated in software with synthetic versions of the devices.<br />
Question: What differences would exist between the needs of a virtualized <strong>SQL</strong> Server and a<br />
non-virtualized <strong>SQL</strong> Server in terms of I/O requirements?
Overview of System Center Virtual Machine Manager<br />
Key Points<br />
<strong>SQL</strong> Server 2008 introduced the <strong>SQL</strong> Server Resource Governor to help with balancing resource<br />
requirements for multiple applications on a single server. While the use of Resource Governor can assist<br />
somewhat with CPU and memory requirements that relate to the database engine and certain types of<br />
queries, users often want to balance resources at a higher level of granularity and across other <strong>SQL</strong> Server<br />
components.<br />
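For comparison, a minimal Resource Governor configuration looks like the following sketch; the pool and workload group names and the CPU limit are illustrative only:<br />

```sql
-- Cap CPU consumption for one class of workload.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 30);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;

ALTER RESOURCE GOVERNOR RECONFIGURE;
```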
One common option is to install multiple virtual machines that are each running <strong>SQL</strong> Server and to<br />
separate the applications onto the separate virtual machines. The role of System Center Virtual Machine<br />
Manager (SCVMM) is to then balance the resources that are allocated to each of the virtual machines.<br />
SCVMM is a component of the System Center suite, along with other components such as System Center<br />
Operations Manager (SCOM) which provides an overall management framework for an organization.<br />
SCVMM is often used in conjunction with SCOM.
Lesson 3<br />
Deploying and Upgrading Data-tier Applications<br />
Creating .NET-based applications in Microsoft Visual Studio® is a well-refined process. The creation of<br />
projects to support the database-related aspects of the projects has been less well defined. Since Visual<br />
Studio 2005, it has been possible to create database projects in Visual Studio. <strong>Database</strong> projects could<br />
form part of solutions that include other types of Visual Studio projects. <strong>Database</strong> projects continue to be<br />
used for line-of-business or large mission-critical applications.<br />
Data-tier applications are another form of database project that offer many of the advantages of database<br />
projects but are much easier to create, deploy, and upgrade. While database administrators do not<br />
necessarily need to know how to create data-tier applications, it is important that database administrators<br />
know how to deploy and upgrade data-tier applications that have been provided to them by developers.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe data-tier applications.<br />
• Deploy data-tier applications.<br />
• Upgrade data-tier applications.<br />
• Extract data-tier applications.
Data-tier Application Overview<br />
Key Points<br />
Data-tier applications provide a mechanism for deploying and upgrading simple departmental<br />
applications.<br />
Problems with Previous Approaches<br />
Consider how most database applications are installed. The application itself is normally installed using a<br />
Windows setup program. The setup program asks for configuration information and then installs all<br />
necessary files, registers components, adds necessary registry entries, and so on. Upgrades also usually<br />
work in a very similar way.<br />
Now consider how the databases that are required by the applications are installed and upgraded. The<br />
majority of database projects today are installed via T-<strong>SQL</strong> scripts. Some applications provide installers<br />
that execute the T-<strong>SQL</strong> commands automatically. Script-based installation has many disadvantages.<br />
Developers have had very little support in their development environments to assist in creating the<br />
required setup scripts. The scripts often need to be edited manually. These manual processes invariably<br />
lead to errors. In addition, it is difficult for a developer to view the overall schema of the database they are<br />
working against, without building a database based on the schema. In many cases, this will mean<br />
retrieving and executing a "seed" script that creates a new database and then applying a large number of<br />
upgrade scripts to bring the database to the required level.<br />
The database administrator often needs to run installation scripts manually and often has to perform<br />
other tasks such as creating logins and other objects. If these tasks are not performed perfectly, the<br />
installation and upgrade scripts will often fail and might leave the database in an inconsistent (or even<br />
unknown) state.
Data-tier Applications<br />
Data-tier applications are designed to provide a similar experience for installing and upgrading database<br />
applications as occurs with Windows applications. The developer creates a data-tier application using<br />
Visual Studio, with all required objects, and defines policies that limit how the application can be installed.<br />
For example, a deployment policy could indicate that an application can only be installed on <strong>SQL</strong> Server<br />
versions 10.5 and above.<br />
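The version number that such a policy is evaluated against can be checked on a target instance with the SERVERPROPERTY function, as a quick sanity check before deployment:<br />

```sql
-- SQL Server 2008 R2 reports 10.50.x and SQL Server 2012 reports 11.0.x.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition') AS Edition;
```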
When the data-tier application project is built, the output is a .dacpac file that can be delivered to the<br />
database administrator. A single .dacpac file can be used for both installing and upgrading an application<br />
and is portable across different environments such as development, test, staging, and production.<br />
Installation and upgrade of data-tier applications is automated. Data-tier applications are not designed<br />
for large line-of-business applications. The intention is to make it easy to install and upgrade large<br />
numbers of simpler applications.<br />
Question: What is a disadvantage of deploying upgrades via T-<strong>SQL</strong> scripts?
Deploying Data-tier Applications<br />
Key Points<br />
Deploying a data-tier application has been designed to be very straightforward and automated. SSMS<br />
includes a data-tier application installation wizard. The wizard begins by asking for a database name and<br />
the location (file paths) for the database files.<br />
Before making any changes, the wizard checks any policies that are included. In <strong>SQL</strong> Server <strong>2012</strong>, the only<br />
policy that is available is a server selection policy that determines which versions of <strong>SQL</strong> Server the<br />
application can be installed on. Later versions of data-tier applications might include other types of policy<br />
such as a policy that details failover or high availability requirements.<br />
Before you deploy a .dacpac file as a data-tier application, make sure you trust the source of the .dacpac<br />
file. Otherwise, you should inspect the .dacpac files before deploying them. It is recommended, as with<br />
any type of application, that you install the data-tier applications on a test environment before installing<br />
them in a production environment.<br />
You can view the contents of a .dacpac file by importing the file into a data-tier application project in<br />
Visual Studio. If Visual Studio is not available, a right-click "Unpack" option is available for .dacpac files on<br />
systems where <strong>SQL</strong> Server <strong>2012</strong> and later has been installed.<br />
Question: Why is it important to review a .dacpac file before deployment?<br />
Upgrading Data-tier Applications<br />
Key Points<br />
Upgrades to data-tier applications are also automated. A right-click option on an existing deployed data-tier<br />
application in SSMS provides an option to upgrade the application. Upgrades are also wizard-based<br />
and designed to be easy to apply.<br />
In earlier versions of <strong>SQL</strong> Server, when you upgraded a data-tier application, a new database was created<br />
and all data in the database that you were upgrading was migrated across to the new database. In <strong>SQL</strong><br />
Server <strong>2012</strong>, databases are upgraded in place instead of being migrated.<br />
The disk space requirements for the upgraded database must also be considered.<br />
All aspects of data-tier applications that can be managed from within SSMS can also be managed by the<br />
use of Windows PowerShell® scripting.<br />
One key advantage of data-tier application upgrades is that when errors occur, the upgrade is rolled back.<br />
Unlike script-based upgrades, where errors often leave databases in inconsistent or unknown states,<br />
upgrades to data-tier applications either work or they do not work.<br />
Question: What is the main advantage of .dacpac-based upgrades compared to script-based<br />
upgrades?
Extracting Data-tier Applications<br />
Key Points<br />
Not all applications are started from scratch. There is a common need to further develop existing<br />
applications.<br />
SSMS provides two options for working with existing applications:<br />
• You can register an existing database as a data-tier application.<br />
• You can extract a data-tier application (via a .dacpac file) from an existing database.<br />
Note that data-tier applications do not support all <strong>SQL</strong> Server objects at this time. For example, XML<br />
schema collections and <strong>SQL</strong> CLR based objects are not supported. For this reason, not all databases can be<br />
either registered as data-tier applications or extracted to .dacpac files. When <strong>SQL</strong> Server is unable to<br />
perform a registration or extraction, the objects that are not supported are displayed within the wizard.<br />
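Registered data-tier applications can be listed from msdb; the following is a sketch, with the view and column names taken from the documented sysdac_instances catalog view:<br />

```sql
-- List the data-tier applications registered on this instance.
SELECT instance_name,
       type_name,
       type_version,
       date_created
FROM msdb.dbo.sysdac_instances;
```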
Question: What is the main purpose of extracting data-tier applications?
Demonstration 3A: Working with Data-tier Applications<br />
Demonstration Steps<br />
1. If Demonstration 1A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
• In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click<br />
<strong>SQL</strong> Server Management Studio. In the Connect to Server window, type Proseware and click<br />
Connect. From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_19_PRJ\10775A_19_PRJ.ssmssln and click Open.<br />
• From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file<br />
from within Solution Explorer.<br />
2. Open the 31 – Demonstration 3A.sql script file.<br />
3. Follow the instructions contained within the comments of the script file.
Lab 19: Managing Multiple Servers<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_19_PRJ\10775A_19_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You need to configure a solution that allows you to easily manage multiple <strong>SQL</strong> Server instances. You<br />
have noticed that on each computer that you use to connect to your <strong>SQL</strong> Server network, a different set of<br />
servers has been configured. You decide to configure a central management server to provide a<br />
consistent list of server groups. Your developers have begun using data-tier applications for some of their<br />
development. You need to deploy one of these applications to the new server, register an existing<br />
database as a data-tier application and, if time permits, upgrade a data-tier application.
Exercise 1: Configure CMS and Execute Multi-server Queries<br />
Scenario<br />
You want to create a list of <strong>SQL</strong> Server instances so that each database administrator has a consistent list<br />
of <strong>SQL</strong> Server instances in <strong>SQL</strong> Server Management studio. You want to manage this list of servers<br />
centrally. In this exercise, you will configure a central management server and execute multi-server<br />
queries.<br />
The main tasks for this exercise are as follows:<br />
1. Create a central management server.<br />
2. Create a server group within the CMS.<br />
3. Execute a command to find all databases on any core server in full recovery model.<br />
Task 1: Create a central management server<br />
• Create a central management server on the Proseware instance.<br />
Task 2: Create a server group within the CMS<br />
• Create a server group called Core Servers.<br />
• Register the Proseware instance in the Core Servers group.<br />
• Register the AdventureWorks instance in the Core Servers group.<br />
Task 3: Execute a command to find all databases on any core server in full recovery<br />
model<br />
• Open a multi-server query window.<br />
• Execute a multi-server query to find all databases in full recovery model.<br />
• Hint: Look for the value FULL in the recovery_model_desc column in the sys.databases view.<br />
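One possible form of the query, based only on the hint above (run it in a multi-server query window opened from the server group, so that SSMS adds the server name column automatically):<br />

```sql
-- Databases that are in the FULL recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE recovery_model_desc = N'FULL';
```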
Results: After this exercise, you should have created a centrally managed server group and<br />
executed a multi-server query.
Exercise 2: Deploy a Data-tier Application<br />
Scenario<br />
You want to deploy a data-tier application that has been supplied by your application development<br />
group. This data-tier application contains all of the database and instance objects used by an application.<br />
In this exercise, you will use <strong>SQL</strong> Server Management Studio to deploy a data-tier application.<br />
The main task for this exercise is as follows:<br />
1. Deploy the data-tier application using SSMS.<br />
Task 1: Deploy the data-tier application using SSMS<br />
• Deploy the data-tier application D:\10775A_Labs\10775A_19_PRJ\10775A_19_PRJ\CityCode.dacpac as<br />
database name CityCode on the Proseware instance.<br />
• Test the data-tier application by executing the script file 61 – Lab Exercise 2.sql.<br />
Results: After this exercise, you should have deployed a data-tier application.<br />
Exercise 3: Register and Extract a Data-tier Application<br />
Scenario<br />
You want to encapsulate all of the database and instance objects used by an existing application into a<br />
single package that you can email. In this exercise, you will register an existing database as a data-tier<br />
application. You will also extract a data-tier application to send to the development team.<br />
The main tasks for this exercise are as follows:<br />
1. Register the existing database as a data-tier application.<br />
2. Extract a dacpac from the database to send to the development team.<br />
Task 1: Register the existing database as a data-tier application<br />
• Register the Research database as a data-tier application.<br />
Task 2: Extract a dacpac from the database to send to the development team<br />
• Extract a dacpac from the Research database to the file D:\MKTG\Research.dacpac.<br />
Results: After this exercise, you should have registered the Research database as a<br />
data-tier application and extracted a DAC package file.
Challenge Exercise 4: Upgrade a Data-tier Application (Only if time<br />
permits)<br />
Scenario<br />
The application development group has provided you with a new version of the database schema as a<br />
data-tier application. You want to deploy this new version of the schema. In this exercise, you will upgrade<br />
the existing data-tier application.<br />
Note: Two versions of the updated data-tier application have been supplied. The first (CityCode_v2) will<br />
cause an error when you attempt to deploy it. The intention is to allow you to see that the system is not<br />
left in an unknown state. The second version (CityCode_v3) should deploy successfully.<br />
The main tasks for this exercise are as follows:<br />
1. Attempt to deploy the v2 upgrade.<br />
2. Deploy the v3 upgrade.<br />
Task 1: Attempt to deploy the v2 upgrade<br />
• Attempt to deploy the v2 upgrade from the file<br />
D:\10775A_Labs\10775A_19_PRJ\10775A_19_PRJ\CityCode_v2.dacpac.<br />
Task 2: Deploy the v3 upgrade<br />
• Deploy the v3 upgrade from the file<br />
D:\10775A_Labs\10775A_19_PRJ\10775A_19_PRJ\CityCode_v3.dacpac.<br />
Results: After this exercise, you should have upgraded the data-tier application.
Module Review and Takeaways<br />
Review Questions<br />
1. What must be considered when executing multi-server queries?<br />
2. What is the purpose of Live Migration as part of Hyper-V?<br />
Best Practices<br />
1. Use Central Management Server to maintain a central configuration of registered servers and groups.<br />
2. Use multi-server queries to run batches against several servers concurrently.<br />
3. Consider virtualization for high availability, consolidation, better hardware utilization, and test<br />
environments.<br />
4. Consider the use of data-tier applications for departmental database applications.
Module 20<br />
Troubleshooting Common <strong>SQL</strong> Server <strong>2012</strong> Administrative<br />
Issues<br />
Contents:<br />
Lesson 1: <strong>SQL</strong> Server Troubleshooting Methodology 20-3<br />
Lesson 2: Resolving Service-related Issues 20-7<br />
Lesson 3: Resolving Login and Connectivity Issues 20-13<br />
Lesson 4: Resolving Concurrency Issues 20-17<br />
Lab 20: Troubleshooting Common Issues 20-25<br />
Module Overview<br />
An important role that database administrators who work with <strong>Microsoft®</strong> <strong>SQL</strong> <strong>Server®</strong> need to fulfill is<br />
that of a troubleshooter when issues arise, particularly issues that prevent users from<br />
working.<br />
It is important to have a solid methodology for resolving issues in general, and to be familiar with the<br />
most common issues that can arise when working with <strong>SQL</strong> Server systems.<br />
Objectives<br />
After completing this module, you will be able to:<br />
• Explain <strong>SQL</strong> Server troubleshooting methodology.<br />
• Resolve service-related issues.<br />
• Resolve login and connectivity issues.<br />
• Resolve concurrency issues.
Lesson 1<br />
<strong>SQL</strong> Server Troubleshooting Methodology<br />
Before starting to try to resolve any issue, it is important to apply a logical troubleshooting methodology<br />
and to apply it in a consistent manner. Troubleshooting is often regarded as much an art as it is a science.<br />
There are a number of characteristics that are common to all good<br />
troubleshooters. You should aim to develop or emulate those characteristics to ensure that you are<br />
successful when troubleshooting issues.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Discuss characteristics of good troubleshooters.<br />
• Apply a troubleshooting methodology.
Discussion: Characteristics of Good Troubleshooters<br />
Discussion Points<br />
Question: What characteristics have you observed in people whom you consider to be good<br />
troubleshooters?<br />
Question: What characteristics do you notice in people who are poor troubleshooters?
Applying a Troubleshooting Methodology<br />
Key Points<br />
You have seen in the discussion in the last topic that a key characteristic of good troubleshooters is that<br />
they follow a clear methodology in a logical manner. There are many different methodologies that are<br />
often applied to troubleshooting but the following list describes four phases that are common to most<br />
methodologies:<br />
Investigation Phase<br />
This is a critical phase. Too many people shortcut this step and start to jump directly into finding solutions.<br />
Before you can solve any problem, you need to be very clear in your understanding of the problem that<br />
you are solving.<br />
One very important concept in this phase is that the issue needs to be defined from the point of view of<br />
the user that is affected, not from an assumed perspective of an IT person. For example, there is no point<br />
telling a user that a system is working if the user cannot use it for any reason, regardless of how your<br />
IT-based perspective might tell you that the system is working.<br />
You need to understand what works and what doesn't work. A common mistake in this phase is to assume<br />
that, when a user complains that something doesn't work, it ever worked in the first place. Make sure that there was a<br />
time when the issue didn't exist and find out when that was. Also find out about anything, no matter how<br />
unrelated it might seem at this point, that has changed since that time.<br />
Finally, you need to know how the user would decide that the issue is resolved. A common<br />
troubleshooting error is to find a problem, to assume that it is the cause of the issue, to resolve that<br />
problem, and to assume that the original issue is now resolved.
Analysis Phase<br />
In the analysis phase, you need to determine all possible causes of the issue that you are trying to resolve.<br />
At this point, it is important to avoid excluding any potential causes, no matter how unlikely you consider<br />
them to be.<br />
A brainstorming session with another person is often useful in this phase, particularly if that person is<br />
capable of constantly providing alternative viewpoints during discussions. (In many countries, this person<br />
would be described as being good at playing the Devil's advocate). The analysis phase often benefits from<br />
two types of people, one of whom has excellent technical knowledge of the product, and another that<br />
constantly requires the first person to justify their thoughts and to think both logically and laterally.<br />
Implementation Phase<br />
In the implementation phase, you need to eliminate each potential cause. This process of elimination<br />
usually returns the best results when the potential causes are eliminated in order from the most likely<br />
cause to the least likely cause.<br />
The critical aspect of the implementation phase is to make sure that your reasons for eliminating potential<br />
causes are logically valid.<br />
If you reach the end of your list of potential causes and have not yet found a solution to the issue, you<br />
need to return to the analysis phase and recheck your thinking. If you cannot find a problem in your<br />
analysis, you might even need to go back to recheck your initial assumptions in the investigation phase.<br />
Validation Phase<br />
Too many people, particularly those that are new to troubleshooting, assume that problems are resolved<br />
when they are not. Do not assume that because you have found and resolved a problem that it was the<br />
original problem that you have solved.<br />
In the investigation phase, you should have determined how the user would decide if the issue is resolved.<br />
In the validation phase, you need to apply that test to see if the issue really is resolved.<br />
Documentation<br />
After the problem is resolved, it is important to make sure that everyone in the organization learns from<br />
what happened.<br />
Question: How do you know that a problem is resolved?
Lesson 2<br />
Resolving Service-related Issues<br />
In the remainder of this module, you will see how to approach common types of issues that can arise<br />
when working with <strong>SQL</strong> Server systems. This lesson covers the first of these types of issue: problems<br />
with <strong>SQL</strong> Server services.<br />
<strong>SQL</strong> Server comprises a series of Windows services. The troubleshooting of issues with these services<br />
shares much in common with the troubleshooting of issues with other Windows services but there are<br />
some <strong>SQL</strong> Server specific considerations.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Troubleshoot service-related issues.<br />
• Use the <strong>SQL</strong> Server error log.<br />
• Use Windows event logs.
Troubleshooting Service-related Issues<br />
Key Points<br />
The most common service-related problem is that one of the <strong>SQL</strong> Server services will not start or cannot<br />
be accessed. As with all Windows services, the <strong>SQL</strong> Server services are not permitted to interact directly<br />
with the system console, so this limits the options that are available to the service to advise you of<br />
problems. Also, no user may be logged on at the console to receive notifications even if they were<br />
permitted, as services run in the background and are mostly started automatically, even before any user<br />
logs in.<br />
Windows and <strong>SQL</strong> Server Logs<br />
The most common cause of a service not being able to start is that the service account logon is failing for<br />
some reason. This could be caused by an incorrect or expired password, or the account could be locked<br />
out. Logon failures for services typically appear in the Windows system event log. More complex issues<br />
such as missing files cause entries to be written to the <strong>SQL</strong> Server logs.<br />
Check both the Windows and <strong>SQL</strong> Server logs as the first step in resolving service startup issues.<br />
Other Service-related Issues<br />
If <strong>SQL</strong> Server does start but cannot be accessed, the problem often relates to network issues that are<br />
discussed later in this module. In rare cases, the <strong>SQL</strong> Server scheduler might hang and stop accepting new<br />
connections. In this situation, and in similar situations, you should also try to connect to <strong>SQL</strong> Server using<br />
the Dedicated Administrator Connection (DAC). You might need to use the DAC to access the instance<br />
and analyze the problem. On occasion, you might need to kill individual sessions that are causing the<br />
problem.
If <strong>SQL</strong> Server will not start but the issue isn't a logon issue, check the following:<br />
• Check whether the <strong>SQL</strong> Server log files indicate that the master or model database is corrupt. If either<br />
of these databases is corrupt, follow the procedures to recover the databases that were described in<br />
Module 7.<br />
• Check whether or not the file paths to the tempdb database files are accessible. The tempdb database<br />
is recreated each time the server instance starts but the path to the database files (as configured in<br />
the master database) must exist and be accessible.<br />
• Try to start the instance using the command prompt. If starting <strong>SQL</strong> Server from a command prompt<br />
does work, check the configuration of the service and make sure that the permission requirements<br />
are met.<br />
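As a sketch of the tempdb path check described above: if you can get the instance started (for example, from the command prompt), the file locations that tempdb will be recreated at can be read from the master database:<br />
<br />
```sql
-- These paths must exist and be accessible for tempdb to be recreated at startup
SELECT name, physical_name
FROM master.sys.master_files
WHERE database_id = DB_ID(N'tempdb');
```
<br />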
Question: What should be reviewed first when a service-related issue happens with <strong>SQL</strong><br />
Server?
<strong>SQL</strong> Server Error Log<br />
Key Points<br />
The <strong>SQL</strong> Server log will often provide detailed information about startup issues. The log can be viewed<br />
using the log viewer as part of SSMS.<br />
However, if <strong>SQL</strong> Server cannot start, you cannot use the log viewer in SSMS to see the contents of the <strong>SQL</strong><br />
Server logs. You can, however, open these logs in an editor such as Notepad, as shown in the example on<br />
the slide.<br />
The <strong>SQL</strong> Server Error log was discussed in more detail in Module 15. While the latest error log usually<br />
provides the most useful information, <strong>SQL</strong> Server keeps a set of archive log files and you should review<br />
those files as well, as problems may have been occurring for some time.<br />
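When the instance is running, the error logs can also be read from a query window by using the sp_readerrorlog system stored procedure. Note that this procedure is undocumented, although widely used; the sketch below assumes the default log numbering (0 is the current log, higher numbers are older archives):<br />
<br />
```sql
EXEC sp_readerrorlog 0; -- current error log
EXEC sp_readerrorlog 1; -- most recent archived error log
```
<br />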
Question: What is the problem with the instance startup in the example shown in the slide?
Windows Event Logs<br />
Key Points<br />
Windows services are not permitted to interact directly with the console. The main place they can use to<br />
tell you about issues is the Windows system event log.<br />
In addition, as with other applications, <strong>SQL</strong> Server writes a considerable amount of information to the<br />
Windows application log over time.<br />
Both these logs should be checked periodically as not all errors that are logged in them produce<br />
symptoms that are apparent to users.<br />
Question: Which Windows log would logon failures for the <strong>SQL</strong> Server service appear in?
Demonstration 2A: Troubleshooting Service-related Issues<br />
Demonstration Steps<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_20_PRJ\10775A_20_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 21 – Demonstration 2A.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lesson 3<br />
Resolving Login and Connectivity Issues<br />
Another type of issue that database administrators who work with <strong>SQL</strong> Server are commonly called upon<br />
to resolve is connectivity and login issues. Users may have issues making connections to <strong>SQL</strong> Server in<br />
relation to network names, protocols, and ports, and they may have issues with passwords that need<br />
to be resolved. In general, you should try to create logins that are based on Windows group membership.<br />
This removes the need for you as a database administrator to deal with most password and Windows<br />
account related issues.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Troubleshoot connectivity issues.<br />
• Troubleshoot login failures.
Troubleshooting Connectivity Issues<br />
Key Points<br />
The most common issue of this type that you might need to resolve is that users cannot connect to <strong>SQL</strong><br />
Server at all.<br />
Eliminate the Network<br />
The first step in trying to resolve these issues is to determine whether or not the issue is related to<br />
network connectivity. You can often eliminate the network from consideration by attempting to connect<br />
via a shared memory connection while local to the server. If you cannot access <strong>SQL</strong> Server via a shared<br />
memory connection, you will need to troubleshoot the login or the service. Login troubleshooting is<br />
discussed in the next topic. Service troubleshooting was discussed in Lesson 2.<br />
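Once connected locally, you can confirm that the connection really is using shared memory by inspecting the sys.dm_exec_connections dynamic management view, for example:<br />
<br />
```sql
-- net_transport shows the protocol for the current session,
-- for example 'Shared memory' or 'TCP'
SELECT net_transport, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```
<br />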
Network Related Issues<br />
If the login using shared memory succeeds, the problem is almost always network related. In rare cases,<br />
you could see a problem with incompatible network libraries on the client and the server but most<br />
problems are much less subtle.<br />
Issue: Can the server name be resolved?<br />
Actions: The client system needs to be able to find the network address of the server. In TCP/IP networks,<br />
this will usually involve a DNS lookup. Make sure you can ping the server that <strong>SQL</strong> Server is installed on<br />
by name and that you see an IP address returned by the ping command.<br />
Issue: Can the network and the server be reached?<br />
Actions: While a server name might be successfully resolved to a network address, no connectivity may be<br />
possible between the client and server systems. On TCP/IP networks, make sure that you can successfully<br />
ping the server that <strong>SQL</strong> Server is installed on, assuming that it does not have the ability to echo ping<br />
commands disabled. In many cases, problems in this area will relate to incorrect subnet or default<br />
gateway/router addresses.<br />
Issue: Is the Browser service running for named instances that are not using fixed ports?<br />
Actions: If you are using named instances of <strong>SQL</strong> Server, the client systems need to be able to resolve the<br />
names of the instances to a port number. By default, this is achieved by connecting to the <strong>SQL</strong> Browser<br />
service on UDP port 1434. If the <strong>SQL</strong> Browser service is disabled, the named instances need to be<br />
configured to use fixed ports and the client systems need to be configured to access those ports,<br />
potentially by the creation of aliases. Make sure that the aliases are created for the correct network<br />
library, such as 32-bit vs. 64-bit.<br />
Issue: Is the client configured to use the right protocol and settings?<br />
Actions: Is the protocol that the client is attempting to use to connect to the server actually enabled on<br />
the server? Are the protocol settings appropriate?<br />
Issue: Is a firewall blocking connectivity?<br />
Actions: Check to make sure that there is no firewall blocking the ports that you are trying to connect<br />
over. If a firewall is blocking your access, an exception or rule will likely need to be configured in the<br />
firewall.<br />
Question: Why should a local connection be tested first?
Troubleshooting Login Failures<br />
Key Points<br />
The typical issue with this type of problem is that a user can establish a network connection to <strong>SQL</strong> Server<br />
but cannot log in. The troubleshooting actions that you need to perform depend upon the type of login<br />
that is being used:<br />
• For Windows Logins, make sure that <strong>SQL</strong> Server can connect to a Domain Controller to process the<br />
logins. To review potential issues, inspect the Windows Logs.<br />
• For <strong>SQL</strong> Server Logins, make sure that <strong>SQL</strong> Server is configured for <strong>SQL</strong> Server Authentication. <strong>SQL</strong><br />
Server Logins can be created and can also be enabled, even when <strong>SQL</strong> Server is configured for<br />
Windows Authentication only. The most common error that is returned in this situation says that a<br />
trusted connection is not available.<br />
Also when working with <strong>SQL</strong> Server logins, make sure that the supplied password is correct and that the<br />
login has not been locked out by account policy. Another common problem for <strong>SQL</strong> Server logins is that<br />
the login might expire but the application that is trying to connect to <strong>SQL</strong> Server does not understand<br />
password expiry. This situation is common with applications that were designed for previous versions of<br />
<strong>SQL</strong> Server that did not implement account policy for <strong>SQL</strong> Server logins.<br />
For both login types, make sure that the login has permission to connect to <strong>SQL</strong> Server and that<br />
permission has been granted to the login to access the database that the login is attempting to connect<br />
to. This check should include the default database for the login.<br />
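For <strong>SQL</strong> Server logins, the lockout and expiry states mentioned above can be inspected with the LOGINPROPERTY function. The login name below is a placeholder, not a name from this course:<br />
<br />
```sql
-- Replace SomeSqlLogin with the login being investigated (hypothetical name)
SELECT LOGINPROPERTY(N'SomeSqlLogin', 'IsLocked')     AS IsLocked,
       LOGINPROPERTY(N'SomeSqlLogin', 'IsExpired')    AS IsExpired,
       LOGINPROPERTY(N'SomeSqlLogin', 'IsMustChange') AS IsMustChange;
```
<br />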
If a login problem is happening with a large number of different users, check to make sure that there isn't<br />
a logon trigger that is failing. When a logon trigger prevents users from connecting, the error message<br />
that is returned to interactive users indicates that a logon trigger prevented the connection.<br />
Question: Can a <strong>SQL</strong> Server Login be created and enabled when <strong>SQL</strong> Server is configured for<br />
Windows Authentication only?
Lesson 4<br />
Resolving Concurrency Issues<br />
<strong>SQL</strong> Server attempts to provide each user with the illusion that they are the only user on the system. It is<br />
not possible for <strong>SQL</strong> Server to provide this illusion in all circumstances, and the actions of one user can<br />
affect the actions of other users.<br />
It is important to understand the core concepts surrounding concurrency and to then see how to<br />
troubleshoot the more common blocking and deadlock issues.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe core concurrency concepts.<br />
• Troubleshoot blocking issues.<br />
• Troubleshoot deadlock issues.
Core Concurrency Concepts<br />
Key Points<br />
Consider your expectations when you transfer money from one bank account to another bank account.<br />
You would expect the same amount of money to come out of one account as is added to the other<br />
account. You would also expect that the entire operation either occurs or does not occur, even if some<br />
form of failure happens during your transfer. You would not expect the funds to be removed from one<br />
account and not to be added to the other account.<br />
Further consider what you would expect to see if you viewed the state of the two accounts while the<br />
transfer was occurring. You would expect to always see the accounts in a consistent state, based on either<br />
the balances of the accounts prior to the transfer or based on the balances of the accounts after the<br />
transfer was complete. You would not expect to ever be able to see the accounts in an intermediate state.<br />
One final issue to consider is what you expect to see when you viewed the state of the accounts on the<br />
day following the successful transaction. You would expect to see that the transaction was still shown as<br />
completed.<br />
ACID Properties<br />
The expectations that you have closely relate to a set of properties that the IT industry refers to as ACID<br />
properties that are expected of transactions:<br />
• Atomicity - A transaction has a defined start and end. It runs completely or is rolled back completely.<br />
• Consistency - The database is not left in an intermediate state after the transaction. All logical<br />
constraints are enforced.<br />
• Isolation - Transactions are prevented from interfering with each other.<br />
• Durability - After a transaction is committed, the details of the transaction are persisted in the system<br />
and will survive restarts.
To achieve the isolation requirements, <strong>SQL</strong> Server uses a locking scheme. Locking is a mechanism used by<br />
the database engine to synchronize access by multiple users to the same piece of data at the same time.<br />
The aim is to maximize concurrency while maintaining the ACID properties.<br />
Note Concurrency is discussed in detail in course 10776A: Developing Microsoft <strong>SQL</strong><br />
Server <strong>2012</strong> <strong>Database</strong>s.<br />
Locking Behavior<br />
Before a transaction acquires a dependency on the current state of a data element, such as by reading or<br />
modifying the data, it must protect itself from the effects of another transaction that attempts to modify<br />
the same data. The transaction does this by requesting a lock on the data element. Locks have different<br />
modes, such as shared or exclusive. The locking mode defines the level of dependency the transaction has<br />
on the data.<br />
No transaction can be granted a lock that would conflict with the mode of a lock that has already been<br />
granted to another transaction on that same data. If a transaction requests a lock mode that conflicts with<br />
a lock that has already been granted on the same data, the database engine will block the requesting<br />
transaction until the first lock is released.<br />
For UPDATE operations, <strong>SQL</strong> Server always holds locks to the end of the transaction. For SELECT<br />
operations, it holds the lock protecting the row for a period that depends upon the transaction isolation<br />
level setting. All locks that are still held when a transaction completes are released, whether the<br />
transaction commits or rolls back.<br />
Locking is crucial for transaction processing and is normal behavior for the system. Problems only occur<br />
when locks are held for too long and other transactions are blocked for too long because of the locks that<br />
are being held.<br />
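The locks currently held or requested on an instance can be observed through the sys.dm_tran_locks dynamic management view, for example:<br />
<br />
```sql
-- request_status is GRANT for held locks and WAIT for blocked requests
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```
<br />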
Question: Why would SELECT statements acquire locks in most cases?
Troubleshooting Blocking<br />
Key Points<br />
The most common issue that occurs with blocking is that applications appear to be hung. The symptoms<br />
of this are that the application is not busy and has work that needs to be done, yet no work is occurring.<br />
This situation generally indicates that the application is blocked while waiting for a resource of some type.<br />
You saw in the last topic that locking is the act of taking and holding locks to implement concurrency<br />
control, and that blocking is what happens to one process while it waits for a resource that another<br />
process has locked. Blocking is a normal occurrence for systems and is not, in itself, an issue.<br />
Excessive blocking can be an issue.<br />
Blocking can be monitored in real-time using <strong>SQL</strong> Server Activity Monitor and through Dynamic<br />
Management Views. You will see how to perform this monitoring in Demonstration 3A.<br />
The <strong>SQL</strong> Server data collector that was discussed in Module 18 can be used to collect data about blocking<br />
scenarios including information about all the processes that are involved with the blocking. The reports<br />
that are provided with the data collector via the management data warehouse can be useful when<br />
investigating blocking issues that have happened earlier.<br />
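As a simple real-time sketch, blocked sessions and the sessions blocking them can be listed from the sys.dm_exec_requests dynamic management view:<br />
<br />
```sql
-- Sessions that are currently blocked, and the session blocking each one
SELECT session_id, blocking_session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```
<br />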
Question: What do you suspect that "excessive" blocking might refer to?
Demonstration 4A: Troubleshooting Blocking<br />
Demonstration Steps<br />
1. If Demonstration 2A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_20_PRJ\10775A_20_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 41 – Demonstration 4A.sql script file from within Solution Explorer.<br />
5. Follow the instructions contained within the comments of the script file.
Troubleshooting Deadlocks<br />
Key Points<br />
Deadlock errors are a special type of blocking error in which <strong>SQL</strong> Server needs to intervene because<br />
otherwise no locks would ever be released. The most common form of deadlock occurs when two transactions<br />
have locks on separate objects and each transaction requests a lock on the other transaction’s object. For<br />
example:<br />
• Task 1 holds a shared lock on row 1.<br />
• Task 2 holds a shared lock on row 2.<br />
• Task 1 requests an exclusive lock on row 2, but it cannot be granted until Task 2 releases the shared<br />
lock.<br />
• Task 2 requests an exclusive lock on row 1, but it cannot be granted until Task 1 releases the shared<br />
lock.<br />
• Each task must wait for the other to release the lock, which will never happen.<br />
A deadlock can occur when several long-running transactions execute concurrently in the same database.<br />
A deadlock also can occur as a result of the order in which the optimizer processes a complex query, such<br />
as a join.
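A similar lock cycle can be reproduced from two connections with exclusive locks taken by updates; the table here (dbo.DeadlockDemo) is hypothetical and not part of the course scripts:

```sql
-- Hypothetical setup:
-- CREATE TABLE dbo.DeadlockDemo (ID int PRIMARY KEY, Val int);
-- INSERT dbo.DeadlockDemo VALUES (1, 0), (2, 0);

-- Session 1:
BEGIN TRAN;
UPDATE dbo.DeadlockDemo SET Val = 1 WHERE ID = 1;    -- exclusive lock on row 1

-- Session 2 (a second connection):
-- BEGIN TRAN;
-- UPDATE dbo.DeadlockDemo SET Val = 1 WHERE ID = 2; -- exclusive lock on row 2
-- UPDATE dbo.DeadlockDemo SET Val = 2 WHERE ID = 1; -- blocks, waiting on session 1

-- Session 1 again:
UPDATE dbo.DeadlockDemo SET Val = 2 WHERE ID = 2;    -- blocks, waiting on session 2:
                                                     -- a deadlock now exists, and one
                                                     -- session receives error 1205
```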
How <strong>SQL</strong> Server Ends a Deadlock<br />
<strong>SQL</strong> Server ends a deadlock by automatically terminating one of the transactions. <strong>SQL</strong> Server does the<br />
following:<br />
• Chooses a deadlock victim. <strong>SQL</strong> Server gives priority to the process that has the highest deadlock<br />
priority. If both processes have the same deadlock priority, <strong>SQL</strong> Server rolls back the transaction that<br />
is the least costly to roll back.<br />
• Rolls back the transaction of the deadlock victim.<br />
• Notifies the deadlock victim’s application (with message number 1205).<br />
• Allows the other transaction to continue.<br />
Important In a multi-user environment, each client should check for message number<br />
1205, which indicates that the transaction was rolled back. If message 1205 is found, the<br />
application should reconnect and try the transaction again.<br />
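Within a T-SQL batch, comparable retry logic can be sketched with TRY/CATCH; the table, retry count, and deadlock priority setting below are illustrative assumptions:

```sql
SET DEADLOCK_PRIORITY LOW;   -- make this session the preferred deadlock victim

DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE dbo.DeadlockDemo SET Val = Val + 1 WHERE ID = 1;  -- hypothetical work
        COMMIT;
        SET @retries = 0;               -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205        -- this session was chosen as the victim
            SET @retries -= 1;          -- try the transaction again
        ELSE
        BEGIN
            SET @retries = 0;           -- some other error: do not retry
            THROW;                      -- re-raise it (SQL Server 2012 syntax)
        END;
    END CATCH;
END;
```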
Working with Deadlocks<br />
Deadlocks are normally not logged. The only indication that a deadlock has occurred is that an error<br />
message is returned to the client that has been selected as the victim.<br />
To monitor deadlocks, use <strong>SQL</strong> Server Profiler and/or <strong>SQL</strong> Trace. There are several deadlock events<br />
available. The event that provides the most information and is easiest to read is the Deadlock Graph event. You<br />
will see an example of a deadlock graph in Demonstration 4B.<br />
Note that there are other tools that can help with deadlocks that are out of scope for this course:<br />
• Trace flags can be set to write deadlock information to the <strong>SQL</strong> Server Error Log.<br />
• Event Notifications can be generated.<br />
• Extended events can be tracked.<br />
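For example, trace flag 1222 writes deadlock details to the error log, and the built-in system_health Extended Events session in <strong>SQL</strong> Server <strong>2012</strong> already captures deadlock reports; both snippets are generic sketches:

```sql
-- Write detailed deadlock information to the SQL Server error log for all
-- sessions (trace flag 1204 produces an older, terser format).
DBCC TRACEON (1222, -1);

-- Retrieve the raw target data of the built-in system_health session,
-- which includes xml_deadlock_report events.
SELECT CAST(t.target_data AS xml) AS target_data
FROM sys.dm_xe_session_targets AS t
JOIN sys.dm_xe_sessions AS s
    ON s.address = t.event_session_address
WHERE s.name = N'system_health'
  AND t.target_name = N'ring_buffer';
```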
Question: Have you experienced deadlocking problems in your current environment? If so,<br />
how did you determine that deadlocks were a problem, and how was it resolved?
Demonstration 4B: Troubleshooting Deadlocks<br />
Demonstration Steps<br />
1. If Demonstration 2A was not performed:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click <strong>SQL</strong><br />
Server Management Studio. In the Connect to Server window, type Proseware and click Connect.<br />
From the File menu, click Open, click Project/Solution, navigate to<br />
D:\10775A_Labs\10775A_20_PRJ\10775A_20_PRJ.ssmssln and click Open.<br />
3. From the View menu, click Solution Explorer. Open and execute the 00 – Setup.sql script file from<br />
within Solution Explorer.<br />
4. Open the 42 – Demonstration 4B.sql script file.<br />
5. Follow the instructions contained within the comments of the script file.
Lab 20: Troubleshooting Common Issues<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_20_PRJ\10775A_20_PRJ.ssmssln.<br />
7. From the View menu, click Solution Explorer. In Solution Explorer, double-click the query<br />
00-Setup.sql. When the query window opens, click Execute on the toolbar.<br />
Lab Scenario<br />
You need to be able to resolve common issues with <strong>SQL</strong> Server processes and services as they are<br />
occurring. There are five exercises, each of which creates an issue. You should attempt to troubleshoot and<br />
resolve as many of these issues as possible.
Supporting Documentation<br />
Exercise 1<br />
Users of the Promote application are complaining that the application can no longer connect to the<br />
server. The application connects using the <strong>SQL</strong> Login PromoteApp.<br />
Exercise 2<br />
A junior DBA created a backup of the production AdminDB database to send to the development team<br />
for testing. Now users of that database are complaining they can no longer connect to it.<br />
Exercise 3<br />
Users are complaining that the job Get File List in <strong>SQL</strong> Server Agent is failing to execute.<br />
Exercise 4<br />
The performance of the AdminDB database has been steadily degrading since the database was deployed.<br />
Queries that once took seconds now take minutes.<br />
Exercise 5<br />
A strange situation is occurring with the CityDetails database. Most databases work slower as more users<br />
are added. However, the CityDetails database performs the worst when only a single user connects to it.<br />
The connection time for just this database can be very long and sometimes times out. Other databases<br />
are OK.<br />
Exercises 1 - 5: Troubleshoot and Resolve <strong>SQL</strong> Server Administrative Issues<br />
Scenario<br />
A number of issues have been reported by the users of the Proseware server. You need to troubleshoot<br />
and resolve the issues.<br />
The main tasks for each exercise are as follows:<br />
1. Read the supporting documentation for the exercise.<br />
2. Troubleshoot and resolve the issue.<br />
Task 1: Read the supporting documentation for the exercise<br />
• Read the supporting documentation for the exercise.<br />
Task 2: Troubleshoot and resolve the issue<br />
• Troubleshoot the issue.<br />
• Resolve the issue.<br />
• If you have difficulty, check the solution in the LAK.<br />
Results: After these exercises, you should have investigated and resolved a number of issues.
Module Review and Takeaways<br />
Review Questions<br />
1. What can be used to monitor blocking issues?<br />
2. What happens when a deadlock occurs?<br />
Best Practices<br />
1. Monitor your system and store historical data, for example by using the Data Collector, to make<br />
troubleshooting easier.<br />
2. Clearly identify underlying problems rather than fighting symptoms.<br />
3. Apply deadlock monitoring using <strong>SQL</strong> Trace.
Course Review and Evaluation<br />
Your evaluation of this course will help Microsoft understand the quality of your learning experience.<br />
Please work with your training provider to access the course evaluation form.<br />
Microsoft will keep your answers to this survey private and confidential and will use your responses to<br />
improve your future learning experience. Your open and honest feedback is valuable and appreciated.
Appendix A<br />
Core Concepts in <strong>SQL</strong> Server High Availability and<br />
Replication<br />
Contents:<br />
Lesson 1: Core Concepts in High Availability A-3<br />
Lesson 2: Core Concepts in Replication A-11<br />
A-2 Core Concepts in <strong>SQL</strong> Server High Availability and Replication<br />
Appendix Overview<br />
As you become a more experienced database professional, you are likely to need to implement more<br />
advanced technologies that are available within the Microsoft <strong>SQL</strong> Server platform. Two core areas are<br />
high availability and replication.<br />
In this appendix, you will be introduced to the core concepts surrounding each of these technologies. In<br />
particular, you will learn when to use each of these technologies.<br />
Objectives<br />
After reading this appendix, you will be able to:<br />
• Explain the different high availability options available for Microsoft <strong>SQL</strong> Server and know when to<br />
use each.<br />
• Explain the different replication options available for Microsoft <strong>SQL</strong> Server and know when to use<br />
each.
Lesson 1<br />
Core Concepts in High Availability<br />
<strong>Administering</strong> Microsoft <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s A-3<br />
<strong>SQL</strong> Server provides a wide variety of options for ensuring the highest availability of servers and<br />
databases. It is important that you understand the core concepts involved and know how to choose<br />
between the available options.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Describe database mirroring.<br />
• Describe failover clustering.<br />
• Explain the role of AlwaysOn technologies.<br />
• Describe log shipping.<br />
• Choose appropriate use cases for each high availability option.
Discussion: Previous Experience with High Availability<br />
Key Points<br />
In this discussion, members of your class will share their experiences with high availability strategies. In<br />
particular, consider (but do not limit yourself to) the following questions:<br />
• What do you understand by the term "High Availability"?<br />
• What techniques have you used to try to achieve high availability?<br />
• How successful have these attempts been?<br />
High availability is about more than technology. It also involves defined processes, documentation, the<br />
knowledge of the people involved, and much more. If the staff involved do not know what to do when issues<br />
occur, even the best technology will not help. A good backup strategy is also part of a high availability solution.
<strong>Database</strong> Mirroring<br />
Key Points<br />
<strong>Database</strong> mirroring is a software-based solution for achieving high availability of databases that was<br />
introduced in Microsoft <strong>SQL</strong> Server 2005. The transaction log from a database on one server is<br />
continuously applied to a copy of the database residing on another server. <strong>Database</strong>s to be mirrored must<br />
use the full recovery model.<br />
Because an entirely separate server is used for the mirror partner, the chance of having a single point of<br />
failure is greatly reduced. While not generally desirable, the servers used in database mirroring could also<br />
be implemented using quite different hardware architectures. <strong>Database</strong> mirroring is, however, easier to<br />
maintain when the servers involved are similar, and when the disk layout of each server is identical.<br />
Two modes of operation are available. In high safety mode, the principal server sends update information<br />
to both its own transaction log and to the transaction log of the mirror partner, and then waits for the<br />
writes to occur on the partner's transaction log, as well as waiting for its own writes to occur. In high<br />
performance mode, the principal server sends the update information to both logs but only waits for its<br />
own log to be written. High performance mode sacrifices data safety for performance: committed transactions can be lost in a failover.<br />
When a failure occurs, the mirror partner can take over the role of the principal. This can occur in two<br />
ways. The database administrator can initiate a manual failover or an automated failover can occur.<br />
Automated failovers require the following:<br />
• High safety mode.<br />
• The mirror partner must have been synchronized with the primary at the time of the failover. (This<br />
might not be the case if the mirror partner was catching up after a failure or network issue.)<br />
• A witness server has been configured.
A witness server can be any edition of <strong>SQL</strong> Server including the Express edition, and should reside on<br />
another physical server, separate from both the principal server and the mirror partner. The witness server is<br />
used to provide a quorum with the mirror partner server, where they can agree that the principal server<br />
has failed and that a failover should occur. During a failure, if an automated failover cannot occur, it is<br />
possible to "force service" by switching the mirror partner into the principal role, even though a loss of<br />
data could occur.<br />
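As a sketch of how these pieces fit together (the server and database names are hypothetical, and each statement runs on the server indicated in the comment):

```sql
-- On the mirror server: point the mirror at the principal first.
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.contoso.com:5022';

-- On the principal server: complete the partnership and add a witness.
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.contoso.com:5022';
ALTER DATABASE SalesDB SET WITNESS = 'TCP://witness.contoso.com:5022';
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL;   -- high safety mode
-- ALTER DATABASE SalesDB SET PARTNER SAFETY OFF; -- high performance mode

-- Manual failover (run on the principal while synchronized):
-- ALTER DATABASE SalesDB SET PARTNER FAILOVER;

-- Force service on the mirror during a failure (data loss is possible):
-- ALTER DATABASE SalesDB SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```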
Note that system databases such as master, msdb, tempdb, and the model database cannot be mirrored.<br />
<strong>SQL</strong> Server 2008 improved the performance of database mirroring through the introduction of log stream<br />
compression. While only a single mirror could be provided for each database, many more databases could<br />
be concurrently mirrored.<br />
Mirroring can improve the availability of databases, including times when routine maintenance such as<br />
applying service packs is needed. The data held on the mirror partner is not accessible unless a database<br />
snapshot is created on the mirror to provide read-only access to the data.<br />
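Creating such a snapshot on the mirror might look like this (the names and file path are hypothetical):

```sql
-- Run on the mirror server: expose a read-only, point-in-time view of the
-- mirrored database through a database snapshot.
CREATE DATABASE SalesDB_Snapshot
ON (NAME = SalesDB_Data,                        -- logical data file name
    FILENAME = N'D:\Snapshots\SalesDB_Data.ss') -- sparse snapshot file
AS SNAPSHOT OF SalesDB;
```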
The biggest limitation of database mirroring is that databases often have external dependencies such as<br />
logins in the master database. Mechanisms or processes need to be put in place to ensure that these<br />
dependent objects are present on both the servers.
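One common mitigation is to create each SQL login on the mirror partner with the same SID it has on the principal, so that database users keep mapping correctly after a failover; the login name, password, and SID below are hypothetical:

```sql
-- On the principal: capture the login's security identifier (SID).
SELECT name, sid
FROM sys.sql_logins
WHERE name = N'SalesApp';

-- On the mirror partner: recreate the login with the captured SID so the
-- database user in the mirrored database is not orphaned after failover.
CREATE LOGIN SalesApp
WITH PASSWORD = N'Pa$$w0rd',
     SID = 0x2F5B7C3D0E1A4B5C8D9E0F1A2B3C4D5E;
```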
Failover Clustering<br />
Key Points<br />
<strong>SQL</strong> Server failover clustering is built on Windows Server failover clustering. It requires each server to be a<br />
Windows Failover cluster node. Failover clustering is designed to protect against the failure of a server.<br />
Clients connect to a virtual <strong>SQL</strong> Server rather than to either physical server.<br />
Unlike with database mirroring, when a failover occurs the entire instance fails over. This removes the<br />
complexity of maintaining external database dependencies because the system databases (such as the<br />
master database) are also failed over. <strong>Database</strong>s do not need to be in full recovery model.<br />
Servers involved in failover clustering should use identical hardware. A Cluster Validation Tool (CVT) is<br />
provided to check the configuration of a failover cluster before the cluster is created. Passing the tests<br />
provided by the CVT is a prerequisite for obtaining Microsoft support on the cluster configuration.<br />
Failover clustering uses a shared disk subsystem. It is important to note that only a single copy of the data<br />
is being held. This means that the disk subsystem needs to incorporate its own redundancy mechanisms.<br />
A <strong>SQL</strong> Server cluster exposes a virtual server to client systems. Regardless of which server is currently<br />
operating, the clients continue to connect to the same virtual server name or address. Each server<br />
monitors the other servers in the cluster, so that failover action can be initiated when necessary. This<br />
communication is normally carried out across a private network connection (often called a heartbeat<br />
network). The private network is kept separate from the public network connections from each server.<br />
At any point in time, each <strong>SQL</strong> Server instance is only running on a single server. When a server failover<br />
occurs, the <strong>SQL</strong> Server service for each instance on the secondary server must be restarted and databases<br />
recovered. This process can take some time, depending upon how long recovery takes for each database.
AlwaysOn Availability Groups<br />
Key Points<br />
<strong>SQL</strong> Server <strong>2012</strong> introduced the AlwaysOn technologies. One of these technologies, AlwaysOn Availability<br />
Groups, provides an enterprise-level alternative to database mirroring. AlwaysOn Availability Groups must be<br />
installed on Windows Server failover cluster nodes and improve on database mirroring in the following ways:<br />
• Applies to: database mirroring applies to a single database; an availability group applies to a group of databases.<br />
• Replicas: database mirroring supports a single mirror; an availability group supports up to four secondary replicas.<br />
• Replica usage: a mirror can be made available read-only via database snapshots, but in practice this is not very useful; each availability group replica can be made read-only and can be used for backups.<br />
• Replica operation mode: database mirroring is synchronous or asynchronous; availability group replicas can be synchronous or asynchronous, with up to two synchronous replicas supported, and combinations of synchronous and asynchronous replicas are supported.<br />
• Client connection: database mirroring provides options for automated redirection; availability groups provide easier options for automated redirection.
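A two-replica availability group might be created along these lines (the group, database, and server names are hypothetical, and the Windows Server failover cluster and mirroring endpoints must already exist):

```sql
-- Run on the instance that will host the primary replica.
CREATE AVAILABILITY GROUP SalesAG
FOR DATABASE SalesDB
REPLICA ON
    N'SQL1' WITH (
        ENDPOINT_URL      = N'TCP://sql1.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC),
    N'SQL2' WITH (
        ENDPOINT_URL      = N'TCP://sql2.contoso.com:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE     = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)); -- readable secondary

-- Then, on the instance hosting the secondary replica:
-- ALTER AVAILABILITY GROUP SalesAG JOIN;
```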
Log Shipping<br />
Key Points<br />
Log shipping is a mature and reliable technology that was introduced in <strong>SQL</strong> Server 2000. Many<br />
third-party organizations and users have also implemented their own variations on log shipping. In essence,<br />
log shipping is a simple repetitive process involving:<br />
• Backing up the transaction log on the primary server.<br />
• Copying the transaction log backup to the secondary server.<br />
• Restoring the transaction log backup on the secondary server.<br />
Each time a restore is carried out on the secondary server, it is performed without the recovery option<br />
(WITH NORECOVERY or WITH STANDBY), so that additional transaction logs can be restored later.<br />
When a failure occurs, placing the secondary server into production is a manual task. No automation is<br />
provided for doing this within the product.<br />
Log shipping allows for multiple secondary servers.
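Stripped of the built-in jobs and monitoring, one cycle of the process corresponds to statements like these (the database name and file paths are hypothetical):

```sql
-- On the primary server: back up the transaction log to a shared folder.
BACKUP LOG SalesDB
TO DISK = N'\\FileServer\LogShip\SalesDB_20120101_1200.trn';

-- (Copy the backup file to the secondary server.)

-- On the secondary server: restore without recovery so that further
-- log backups can still be applied...
RESTORE LOG SalesDB
FROM DISK = N'D:\LogShip\SalesDB_20120101_1200.trn'
WITH NORECOVERY;

-- ...or restore in standby mode to also allow read-only access.
-- RESTORE LOG SalesDB
-- FROM DISK = N'D:\LogShip\SalesDB_20120101_1200.trn'
-- WITH STANDBY = N'D:\LogShip\SalesDB.tuf';
```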
Choosing Between High Availability Options<br />
Key Points<br />
The table above provides a comparison of a number of the aspects of each high availability option, as<br />
discussed in the previous four topics. Note that a high availability solution might comprise a combination<br />
of these technologies.<br />
With Failover Clustering, even though a single copy of data is held, the shared drive subsystem should<br />
always be configured with redundancy options.<br />
With Log Shipping, read-only access to the replica data is only available when Standby Mode has been<br />
chosen.
Lesson 2<br />
Core Concepts in Replication<br />
<strong>SQL</strong> Server provides a wide variety of options for replicating data between servers. It is important that you<br />
understand the core concepts involved and know how to choose between the available options.<br />
Objectives<br />
After completing this lesson, you will be able to:<br />
• Explain the publisher/subscriber metaphor that replication is based upon.<br />
• Describe snapshot replication.<br />
• Describe transactional replication.<br />
• Describe peer-to-peer transactional replication.<br />
• Describe merge replication.<br />
• Choose between the available replication options.
<strong>SQL</strong> Server Replication Architecture<br />
Key Points<br />
While <strong>SQL</strong> Server replication can play a role in high availability (particularly in relation to peer-to-peer<br />
transactional replication), the major role that it plays is related to the distribution of data.<br />
It is important to understand the terminology used by replication.<br />
• A Publisher is an instance that is publishing data for subscribers to consume. A database needs to be<br />
configured to allow publication of data contained within it. (In the metaphor, this is the publishing<br />
house).<br />
• An Article is the object being published. An article can comprise a table or a view. Either the entire<br />
table or view can be published, or an article can be based on vertical filtering (columns), horizontal<br />
filtering (rows) or both. (In the metaphor, this directly relates to the concept of an article within a<br />
magazine).<br />
• A Publication is a group of articles that are being published. It is the unit of subscription. (In the<br />
metaphor, this is the magazine).<br />
• A Distributor is used to move publications from publishers to subscribers. A distribution database is<br />
used to hold publications that need to be supplied to a subscriber. A distributor that is on the same<br />
local network as the publisher is called a Local Distributor. Other distributors (perhaps even in a<br />
different continent) are considered to be Remote Distributors.<br />
• A Subscriber is an instance that receives the publications.<br />
• An Agent is a service that performs operations needed by any of the replication roles.
A remote distributor might be used to minimize the network traffic between the distributor and a number<br />
of subscribers by placing the distributor close to the subscribers. In a push subscription, the distributor<br />
sends the data to the subscriber. In a pull subscription, the subscriber retrieves the data from the<br />
distributor.
Snapshot Replication<br />
Key Points<br />
Snapshot replication is the simplest form of replication. Periodically, the snapshot agent creates a copy of the<br />
published data. That entire copy of the data is then used to replace the data at the subscriber.<br />
Snapshot replication is useful in several situations:<br />
• Out-of-date data at the subscriber is acceptable. Not all systems would allow this.<br />
• The data does not change often, so having out-of-date data at the subscriber is more likely to be<br />
acceptable.<br />
• The volume of data is very small, so copying all the data is easier than trying to work out which data to<br />
copy.<br />
• The number of changes that occurs to the data is very large, so it might be less costly to recopy the<br />
entire data set than to copy the changes, because the size of the changes might be larger than the size<br />
of the source data.<br />
• The publisher is not a <strong>SQL</strong> Server database. (Publishing from an Oracle database is supported.)
Transactional Replication<br />
Key Points<br />
Transactional replication is used to send changes to subscribers shortly after the changes have occurred<br />
on the publishers. Transactional replication is initiated by first creating a snapshot of the data and<br />
applying the snapshot to the subscriber. There are several options available for creating this copy of the<br />
data, including taking a database backup and restoring it. Changes are applied in the same order and<br />
within the same transactional boundaries as they occurred at the publisher.<br />
Transactional replication should typically be regarded as one-way replication. In general, changes made at<br />
the subscriber are not sent back to the publisher; however, there are configuration options to allow for<br />
updates at the subscriber.<br />
Transactional replication is useful in the following situations:<br />
• Incremental changes to data need to be applied to the subscriber shortly after they occur.<br />
• The application requires the delay between a change occurring and it being applied to be short. Note<br />
that this delay is configurable and might range from almost instantaneous to several hours.<br />
• The subscriber needs to see each change to the data that occurs. (This allows the subscriber to take<br />
actions on each change such as firing triggers).<br />
• A high volume of data modification activity occurs at the publisher.<br />
• The publisher is not a <strong>SQL</strong> Server database. (Publishing from an Oracle database is supported.)
Peer-To-Peer Transactional Replication<br />
Key Points<br />
"Allow peer-to-peer subscribers" is an option that can be configured on transactional replication<br />
publications. This can help to scale out a replication topology to allow read operations to occur on any<br />
node.<br />
Reads and writes can occur on all nodes, but to prevent conflicts, an appropriate data partitioning scheme<br />
needs to be used, so that any individual row is only ever written on a single node. While conflicts can be<br />
detected, they cause the replication to fail or the changes to be held. For this reason, the most important<br />
aspect of peer-to-peer transactional replication is designing the partitioning of the data held by each<br />
server so as to avoid any chance of conflicts occurring.<br />
As a typical example, a company might have servers in London, Singapore, and Mumbai. Each row in a<br />
table would indicate the location at which the data originated. All rows are replicated to each server, but a<br />
single row is never modified at more than one location.<br />
Peer-to-peer transactional replication is useful in the following scenarios:<br />
• Where a natural partitioning of the data exists.<br />
• Where the partitioning of the data matches the topology of the servers.<br />
• Where conflicts would rarely occur such as situations where each data row within a table is owned by<br />
a single server.
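Such a partitioning scheme is often enforced in the table design itself. In this sketch (table name and location codes are hypothetical), the originating location is part of the key, and a check constraint restricts local inserts to the node's own rows:

```sql
-- Every row records the node at which it originated, and the location is part
-- of the primary key, so the same key is never generated at two nodes.
CREATE TABLE dbo.CustomerNote
(
    OriginLocation char(3)        NOT NULL,  -- e.g. 'LON', 'SIN', 'MUM'
    NoteID         int            NOT NULL,
    NoteText       nvarchar(1000) NOT NULL,
    CONSTRAINT PK_CustomerNote PRIMARY KEY (OriginLocation, NoteID),
    -- On the London node, only London rows may be inserted locally; the
    -- replication agent bypasses the check (NOT FOR REPLICATION) so rows
    -- arriving from other nodes are still applied. Each node defines the
    -- equivalent constraint with its own location code.
    CONSTRAINT CK_CustomerNote_Origin
        CHECK NOT FOR REPLICATION (OriginLocation = 'LON')
);
```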
Merge Replication<br />
Key Points<br />
Merge replication is commonly used in simple client-server applications. Merge replication works by<br />
applying a set of triggers to the source tables and using the triggers to insert data into change tracking<br />
tables whenever data modifications occur. The merge agent is used to copy those changes across to<br />
subscribers whenever synchronization occurs.<br />
Merge replication is not transactionally consistent but does support non-<strong>SQL</strong>-Server nodes as part of the<br />
replication topology. The performance of merge replication usually suffers as the volume of data or the<br />
number of subscribers increases.<br />
Merge replication is useful in the following situations:<br />
• Multiple subscribers might update the data at the same time. Merge replication provides conflict<br />
detection and conflict correction.<br />
• Subscribers need to make changes to data while disconnected.<br />
• Conflicts occur during data changes on multiple subscribers.<br />
• Applications do not depend upon tables being transactionally consistent.
Choosing Between Replication Options<br />
Key Points<br />
The table above provides a comparison of a number of the aspects of each replication option, as<br />
discussed in the previous four topics.
Module 1: Introduction to <strong>SQL</strong> Server <strong>2012</strong> and Its Toolset<br />
Lab 1: Introduction to <strong>SQL</strong> Server and Its<br />
Toolset<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following step:<br />
• Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
Exercise 1: Verify <strong>SQL</strong> Server Component Installation<br />
Task 1: Check that <strong>Database</strong> Engine and Reporting Services have been installed for the<br />
MKTG instance<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click Configuration Tools, and<br />
then click <strong>SQL</strong> Server Configuration Manager.<br />
2. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
3. In the right-hand pane, ensure that the following services are listed for the MKTG instance:<br />
• <strong>SQL</strong> Server (MKTG)<br />
• <strong>SQL</strong> Full-text Filter Daemon Launcher (MKTG)<br />
• <strong>SQL</strong> Server Reporting Services (MKTG)<br />
• <strong>SQL</strong> Server Agent (MKTG)<br />
Task 2: Note the services that are installed for the default instance<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
2. In the right-hand pane, ensure that the following services are listed for the default instance:<br />
• <strong>SQL</strong> Server (MS<strong>SQL</strong>SERVER)<br />
• <strong>SQL</strong> Full-text Filter Daemon Launcher (MS<strong>SQL</strong>SERVER)<br />
• <strong>SQL</strong> Server Analysis Services (MS<strong>SQL</strong>SERVER)<br />
• <strong>SQL</strong> Server Agent (MS<strong>SQL</strong>SERVER)<br />
Task 3: Ensure that all required services including <strong>SQL</strong> Server Agent are started and set<br />
to autostart for both instances<br />
1. Check that all the services for the MKTG instance have a Start Mode of Automatic. Ignore the <strong>SQL</strong><br />
Full-text Filter Daemon Launcher service at this time.<br />
2. Check that all the services for the default instance have a Start Mode of Automatic. Ignore the <strong>SQL</strong><br />
Full-text Filter Daemon Launcher service at this time.<br />
Exercise 2: Alter Service Accounts for New Instance<br />
Task 1: Change the service account for the MKTG database engine<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
2. In the right-hand pane, right-click <strong>SQL</strong> Server (MKTG), and select Properties.<br />
3. In the Account Name text box, type AdventureWorks\PWService.<br />
4. In the Password text box, type Pa$$w0rd.<br />
5. In the Confirm Password text box, type Pa$$w0rd and click OK.<br />
6. In the Confirm Account Change window, click Yes.<br />
Task 2: Change the service account for the MKTG <strong>SQL</strong> Server Agent<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
2. In the right-hand pane, right-click <strong>SQL</strong> Server Agent (MKTG), and select Properties.<br />
3. In the Account Name text box, type AdventureWorks\PWService.<br />
4. In the Password text box, type Pa$$w0rd.<br />
5. In the Confirm Password text box, type Pa$$w0rd and click OK.<br />
6. Right-click <strong>SQL</strong> Server Agent (MKTG) and select Start to restart the service.<br />
Exercise 3: Enable Named Pipes Protocol for Both Instances<br />
Task 1: Enable the named pipes protocol for the default instance<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, expand <strong>SQL</strong> Server<br />
Network Configuration and then click Protocols for MS<strong>SQL</strong>SERVER.<br />
2. In the right-hand pane, if the Named Pipes protocol is not enabled, right-click Named Pipes and<br />
select Enable. In the Warning window, click OK.<br />
Task 2: Enable the named pipes protocol for the MKTG instance<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click Protocols for MKTG.<br />
2. In the right-hand pane, if the Named Pipes protocol is not enabled, right-click Named Pipes and<br />
select Enable. In the Warning window, click OK.<br />
Task 3: Restart both database engine services<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
2. Right-click <strong>SQL</strong> Server (MS<strong>SQL</strong>SERVER) and select Restart.<br />
3. Right-click <strong>SQL</strong> Server (MKTG) and select Restart.<br />
4. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
5. In the toolbar, click the Refresh icon.
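Once the services have restarted, you can confirm from a query window which protocol a connection is actually using by inspecting sys.dm_exec_connections. This sketch assumes a connection opened over named pipes:

```sql
-- net_transport reports the protocol of the current connection,
-- for example 'Named pipe', 'TCP' or 'Shared memory'.
SELECT net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```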
Exercise 4: Create an Alias for AdvDev<br />
Task 1: Create a 32-bit alias (AdvDev) for the MKTG instance<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, expand <strong>SQL</strong> Native Client<br />
11.0 Configuration (32bit) and click Client Protocols.<br />
2. Confirm that the Named Pipes protocol is Enabled.<br />
3. In the left-hand pane, click Aliases.<br />
4. In the left-hand pane, right-click Aliases and select New Alias.<br />
5. In the Alias – New window, in the Alias Name text box, type AdvDev.<br />
6. In the Protocol drop-down list box, select Named Pipes.<br />
7. In the Server text box, type .\MKTG and click OK.<br />
Task 2: Create a 64-bit alias (AdvDev) for the MKTG instance<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, expand <strong>SQL</strong> Native Client<br />
11.0 Configuration and click Client Protocols.<br />
2. Confirm that the Named Pipes protocol is Enabled.<br />
3. In the left-hand pane, click Aliases.<br />
4. In the left-hand pane, right-click Aliases and select New Alias.<br />
5. In the Alias – New window, in the Alias Name text box, type AdvDev.<br />
6. In the Protocol drop-down list box, select Named Pipes.<br />
7. In the Server text box, type .\MKTG and click OK.<br />
Task 3: Use <strong>SQL</strong> Server Management Studio to connect to the alias to ensure it works as<br />
expected<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and then click <strong>SQL</strong> Server<br />
Management Studio.<br />
2. In the Connect to Server window, ensure that Server Type is set to <strong>Database</strong> Engine.<br />
3. In the Server name text box, type AdvDev.<br />
4. In the Authentication drop-down list, select Windows Authentication, and click Connect.<br />
5. In Object Explorer, under AdvDev expand <strong>Database</strong>s.<br />
Note: The databases that are present include at least the following: System <strong>Database</strong>s,<br />
<strong>Database</strong> Snapshots, ReportServer$MKTG, and ReportServer$MKTGTempDB.<br />
6. Close <strong>SQL</strong> Server Management Studio.<br />
7. Close <strong>SQL</strong> Server Configuration Manager.
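If you want to verify which instance the AdvDev alias resolved to, a quick check from a query window connected through the alias is:

```sql
-- @@SERVERNAME returns the real server\instance name, not the alias,
-- so connecting as AdvDev should report the MKTG instance.
SELECT @@SERVERNAME AS server_name,
       SERVERPROPERTY('InstanceName') AS instance_name;
```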
Challenge Exercise 5: Ensure <strong>SQL</strong> Browser is Disabled and Configure a<br />
Fixed TCP/IP Port (Only if time permits)<br />
Task 1: Configure the TCP port for the MKTG database engine instance to 51550<br />
1. In the Virtual Machine window, from the Start menu, click All Programs, click Microsoft <strong>SQL</strong> Server<br />
<strong>2012</strong>, click Configuration Tools, and then click <strong>SQL</strong> Server Configuration Manager.<br />
2. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, expand <strong>SQL</strong> Server<br />
Network Configuration and then click Protocols for MKTG.<br />
3. Right-click the TCP/IP protocol and select Properties.<br />
4. In the TCP/IP Properties window, click the IP Addresses tab.<br />
5. Scroll to the bottom of the screen, under the IP All section, clear the value for TCP Dynamic Ports.<br />
6. For TCP Port, type 51550, and click OK.<br />
7. In the Warning window, click OK.<br />
8. In the left-hand pane, click <strong>SQL</strong> Server Services.<br />
9. Right-click <strong>SQL</strong> Server (MKTG) and select Restart.<br />
10. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
11. In the toolbar, click the Refresh icon and make sure the service starts.<br />
Task 2: Disable the <strong>SQL</strong> Server Browser service<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
2. In the right-hand pane, if the state for the <strong>SQL</strong> Server Browser is Running, right-click <strong>SQL</strong> Server<br />
Browser and click Stop.<br />
3. Right-click the <strong>SQL</strong> Server Browser and click Properties.<br />
4. In the <strong>SQL</strong> Server Browser Properties window, in the Service tab, set the Start Mode to Disabled and<br />
click OK.<br />
5. Close <strong>SQL</strong> Server Configuration Manager.
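After the restart, the fixed port can be confirmed from within the MKTG instance. This verification query is a sketch; it assumes the VIEW SERVER STATE permission:

```sql
-- Lists the TCP endpoints the instance is listening on;
-- the port column should show 51550 for the MKTG instance.
SELECT ip_address, port, state_desc
FROM sys.dm_tcp_listener_states;
```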
Module 2: Preparing Systems for <strong>SQL</strong> Server <strong>2012</strong><br />
Lab 2: Preparing Systems for <strong>SQL</strong> Server<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_02_PRJ\10775A_02_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Adjust Memory Configuration<br />
Task 1: Check total server memory<br />
1. Click Start, right-click Computer, and click Properties.<br />
2. Write down the value for Installed memory (RAM).<br />
3. Close the System window.<br />
Task 2: Check the memory allocated to the default instance<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click <strong>SQL</strong> Server<br />
Management Studio.<br />
2. In the Connect to Server window, type AdventureWorks for the Server name and click Connect.<br />
3. In Object Explorer, right-click the AdventureWorks server instance, and click Properties.<br />
4. In the Select a page pane click Memory.<br />
5. Write down the values for Minimum server memory (in MB) and Maximum server memory (in<br />
MB), and click Cancel.<br />
6. In <strong>SQL</strong> Server Management Studio, from the File menu, click Exit.<br />
Task 3: Check the memory allocated to the MKTG instance<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click <strong>SQL</strong> Server<br />
Management Studio.<br />
2. In the Connect to Server window, type Proseware for the Server name and click Connect.<br />
3. In Object Explorer, right-click the Proseware server instance, and click Properties.<br />
4. In the Select a page pane click Memory.<br />
5. Write down the values for Minimum server memory (in MB) and Maximum server memory (in<br />
MB), and click Cancel.<br />
6. In <strong>SQL</strong> Server Management Studio, from the File menu, click Exit.<br />
Task 4: Decide if the memory allocation is appropriate. If not, make the required<br />
changes to the memory configuration<br />
1. Review the Required Memory Configuration from the Supporting Documentation in the Student<br />
Manual.<br />
2. Calculate the Maximum memory for the AdventureWorks server instance as follows:<br />
• Example calculation (actual values depend upon VM configuration)<br />
• Max Memory = (Server Memory – 1.5GB) * 0.6<br />
• Max Memory = (3.0 – 1.5) * 0.6<br />
• Max Memory = 0.9 (approximate)<br />
3. Calculate the Maximum memory for the Proseware server instance as follows:<br />
• Example calculation (actual values depend upon VM configuration)<br />
• Max Memory = (Server Memory – 1.5GB) * 0.4<br />
• Max Memory = (3.0 – 1.5) * 0.4<br />
• Max Memory = 0.6 (approximate)<br />
4. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click <strong>SQL</strong> Server<br />
Management Studio.<br />
5. In the Connect to Server window, type AdventureWorks for the Server name and click Connect.<br />
6. In Object Explorer, right-click the AdventureWorks server instance, and click Properties.<br />
7. In the Select a page pane click Memory.<br />
8. Set the Maximum server memory (in MB) to the value 900, and click OK.<br />
Note: A more accurate value for 0.9GB would have been 921MB, but the value 900 has<br />
been used for simplicity.
9. In Object Explorer click Connect, and click <strong>Database</strong> Engine.<br />
10. In the Connect to Server window, type Proseware for the Server name and click Connect.<br />
11. In Object Explorer, right-click the Proseware server instance, and click Properties.<br />
12. In the Select a page pane click Memory.<br />
13. Set the Maximum server memory (in MB) to the value 600, and click OK.<br />
Note: A more accurate value for 0.6GB would have been 614MB, but the value 600 has been<br />
used for simplicity.<br />
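The same memory settings can be applied with T-SQL instead of the server Properties dialog. This sketch shows the equivalent sp_configure calls for the AdventureWorks instance (the value 900 matches step 8 above; use 600 when connected to the Proseware instance):

```sql
-- 'max server memory (MB)' is an advanced option, so advanced
-- options must be made visible before it can be changed.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 900;
RECONFIGURE;
```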
Exercise 2: Pre-installation Stress Testing<br />
Task 1: Configure <strong>SQL</strong>IOSIM<br />
1. In the folder D:\10775A_Labs\10775A_02_PRJ, right-click sqliosimx64.exe and click Run as<br />
administrator.<br />
2. In the Microsoft <strong>SQL</strong> Server IO Simulator window, note the agreement and click Yes.<br />
3. In the WinZip Self-Extractor - <strong>SQL</strong>IOS~1.EXE window click Unzip.<br />
4. In the WinZip Self-Extractor window click OK.<br />
5. In the WinZip Self-Extractor - <strong>SQL</strong>IOS~1.EXE window click Close.<br />
6. Open Windows Explorer and navigate to the C:\<strong>SQL</strong>IOSimX64 folder. Note that two versions of<br />
<strong>SQL</strong>IOSim are supplied: a command line version and a GUI version. Right-click <strong>SQL</strong>IOSim.exe, and<br />
click Run as administrator.<br />
7. In the Files and Configuration window click on the line containing C:\sqliosim.mdx, then click<br />
Remove.<br />
8. In the Files and Configuration window click on the line containing C:\sqliosim.ldx, then click<br />
Remove.<br />
9. In the Files and Configuration window click on the line containing D:\sqliosim.ldx, then click<br />
Remove.<br />
10. In the Files and Configuration window click on the line containing L:\sqliosim.mdx, then click<br />
Remove.<br />
11. In the Files and Configuration window click on the line containing D:\sqliosim.mdx, then in the<br />
Size (MB) change the value from 4096 to 100, also change the Max Size value from 8192 to 200,<br />
and the Increment to 20, then click Apply.<br />
12. On the line containing L:\sqliosim.ldx, set the Size (MB) to 50, the Max Size to 100, the<br />
Increment to 10, and click Apply.<br />
13. Change the Cycle Duration (sec) value from 300 to 60.<br />
14. Ensure that the Delete Files at Shutdown checkbox is checked, and click OK to complete the<br />
configuration.
Task 2: Execute <strong>SQL</strong>IOSIM<br />
1. In the <strong>SQL</strong>IOSIM window from the Simulator menu, click Start.<br />
2. Wait about one minute for the test to complete. While the test is running, note the information that<br />
is displayed.<br />
3. In the <strong>SQL</strong>IOSim window click OK.<br />
Task 3: Review the results from executing <strong>SQL</strong>IOSIM<br />
1. If any errors are returned in red, review the errors.<br />
2. Locate the final summary for file D:\sqliosim.mdx and note the Running Average IO<br />
Duration (ms).<br />
3. Locate the final summary for file L:\sqliosim.ldx and note the Running Average IO<br />
Duration (ms).<br />
Note: The value returned for each drive depends upon the speed of the hardware. A typical<br />
value for either drive would be 15.<br />
4. In the <strong>SQL</strong>IOSim window from the File menu click Exit.<br />
Challenge Exercise 3: Check Specific I/O Operations (Only if time permits)<br />
Task 1: Install the <strong>SQL</strong>IO Utility<br />
1. In the folder D:\10775A_Labs\10775A_02_PRJ, right-click <strong>SQL</strong>IO.msi and click Install.<br />
2. In the Welcome to the <strong>SQL</strong>IO Setup Wizard window click Next.<br />
3. In the License Agreement window, note the agreement, click the I Agree radio button, and click<br />
Next.<br />
4. In the Select Installation Folder window click the Everyone radio button, and click Next.<br />
5. In the Confirm Installation window click Next.<br />
6. In the Installation Complete window click Close.<br />
Task 2: Configure and Execute the <strong>SQL</strong>IO Utility<br />
1. Review the Supporting Documentation in the Student Manual for details of the <strong>SQL</strong>IO tests to be<br />
performed.<br />
2. In Windows Explorer, navigate to the C:\Program Files (x86)\<strong>SQL</strong>IO folder.<br />
3. Double-click the file param.txt. Place a # at the beginning of the line c:\testfile.dat. Remove the #<br />
from the line beginning d:\testfile.dat.<br />
4. From the File menu click Exit.<br />
5. In the Notepad window click Save.<br />
6. In Windows Explorer, double-click the file C:\Program Files (x86)\<strong>SQL</strong>IO\Using <strong>SQL</strong>IO.rtf. Note<br />
the examples and descriptions of the available parameters for <strong>SQL</strong>IO.
7. Close the using <strong>SQL</strong>IO.rtf - WordPad window.<br />
8. From the Start menu, click All Programs, click Accessories, right-click Command Prompt and<br />
click Run as administrator.<br />
9. Type CD \Program Files (x86)\<strong>SQL</strong>IO and press Enter.<br />
10. Type the following command and press Enter:<br />
sqlio.exe -kR -s60 -fsequential -o8 -b64 -LS -Fparam.txt timeout /T 60<br />
11. Look through the output of the command, note the values returned for IOs/sec and MBs/sec. These<br />
values are commonly referred to as IOPS and throughput. Note the minimum, maximum and<br />
average latency.<br />
12. Type the following command and press Enter:<br />
sqlio.exe -kW -s60 -frandom -o8 -b8 -LS -Fparam.txt timeout /T 60<br />
13. Look through the output of the command, note the values returned for IOs/sec and MBs/sec. These<br />
values are commonly referred to as IOPS and throughput. Note the minimum, maximum and<br />
average latency.<br />
14. Type Exit and press Enter.
Module 3: Installing and Configuring <strong>SQL</strong> Server <strong>2012</strong><br />
Lab 3: Installing and Configuring <strong>SQL</strong><br />
Server<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_03_PRJ\10775A_03_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
8. On the host system, in the Virtual Machines list in Hyper-V Manager, right-click the<br />
10775A-MIA-<strong>SQL</strong>1 virtual machine and click Settings.<br />
9. In the Settings for 10775A-MIA-<strong>SQL</strong>1 window, in the Hardware list expand IDE Controller 1, and<br />
click DVD Drive.<br />
10. In the DVD Drive properties pane, click Image file, and click Browse.<br />
11. Navigate to the file C:\Program Files\Microsoft Learning\10775\Drives\10775A-MIA-<strong>SQL</strong>1<br />
\Virtual Hard Disks\<strong>SQL</strong>FULL_ENU.iso and click Open.<br />
12. In the Settings for 10775A-MIA-<strong>SQL</strong>1 window, click OK.<br />
Exercise 1: Review installation requirements<br />
Task 1: Review the supporting documentation prior to installation<br />
• Read the supporting documentation in the student manual.<br />
Task 2: Create the folders that are required for the data and log files<br />
• Using Windows Explorer, create the following folders:<br />
• D:\MKTGDEV<br />
• L:\MKTGDEV<br />
Exercise 2: Install the <strong>SQL</strong> Server Instance<br />
Task 1: Based on the requirements reviewed in exercise 1, install another instance of<br />
<strong>SQL</strong> Server<br />
1. In the Virtual Machine window, in the AutoPlay window (which should now have popped up) click<br />
Run SETUP.EXE and wait for <strong>SQL</strong> Server Setup to start. (If the AutoPlay window has not launched,<br />
or you have inadvertently closed it, double-click SETUP.EXE in the root of the new drive that has<br />
been mounted when you connected the ISO file to your virtual machine).<br />
2. In the <strong>SQL</strong> Server Installation Center window, click on the Installation tab.<br />
3. Click New <strong>SQL</strong> Server stand-alone installation or add features to an existing installation from<br />
the list of available options and wait for <strong>SQL</strong> Server setup to start.<br />
4. In the Setup Support Rules window, click Show details and note the list of rules that has been<br />
checked.<br />
5. In the Setup Support Rules window, click OK.<br />
6. Make sure that Send feature usage data to Microsoft is not selected and click Next.<br />
Note In general, you are encouraged to choose this feature as it helps Microsoft<br />
produce a better product. In this case, we are deselecting it because we are in an isolated<br />
environment without network access within the virtual machine.<br />
7. In the Product Updates window, uncheck Include <strong>SQL</strong> Server product updates and click Next.<br />
(Note the next screen might take some time to display.)<br />
8. In the Setup Support Rules page, click Show details (if visible), read the installation checklist,<br />
and ensure that the Status for each item is Passed. Note any warnings that are listed and<br />
click Next.<br />
9. In the Installation Type page, ensure the option button Perform a new installation of <strong>SQL</strong><br />
Server <strong>2012</strong> is selected and then click Next.<br />
10. In the Product Key window, click Next.<br />
11. In the License Terms page, note the Microsoft Software License Terms, check I accept the<br />
license terms, and click Next.<br />
12. In the Setup Role page, ensure that <strong>SQL</strong> Server Feature Installation is selected and click Next.<br />
13. In the Feature Selection page, under the Instance Features check <strong>Database</strong> Engine Services and<br />
click Next.<br />
14. In the Installation Rules page, click Show details, note the list of rules and the status of each rule<br />
and click Next.<br />
15. In the Instance Configuration page, ensure that Named instance is selected, type MKTGDEV<br />
in the Named instance field, and click Next.<br />
16. In the Disk Space Requirements page, read the Disk Usage Summary and then click Next.
17. In the Server Configuration page, click the Account Name field for the <strong>SQL</strong> Server <strong>Database</strong><br />
Engine row and select from the drop-down list.<br />
18. In the Select User, Computer, Service Account, or Group window, in the Enter the object name to<br />
select (Examples): dialog box type AdventureWorks\PWService, click Check Names and click<br />
OK.<br />
19. In the Password column for the <strong>SQL</strong> Server <strong>Database</strong> Engine row, type Pa$$w0rd.<br />
20. In the Server Configuration page, click the Account Name field for the <strong>SQL</strong> Server Agent row<br />
and select from the drop-down list.<br />
21. In the Select User, Computer, Service Account, or Group window, in the Enter the object name to<br />
select (Examples): dialog box type AdventureWorks\PWService, click Check Names and click<br />
OK.<br />
22. In the Password column for the <strong>SQL</strong> Server Agent row, type Pa$$w0rd.<br />
23. In the Server Configuration page, on the Service Accounts tab, in the Startup Type drop-down<br />
list for the <strong>SQL</strong> Server Agent row, select Automatic.<br />
24. Click the Collation tab, ensure that <strong>SQL</strong>_Latin1_General_CP1_CI_AS is selected and click Next.<br />
25. In the <strong>Database</strong> Engine Configuration page, on the Server Configuration tab, in the<br />
Authentication Mode section, ensure that Mixed Mode (<strong>SQL</strong> Server authentication and<br />
Windows authentication) is selected.<br />
26. In the Enter password textbox, type Pa$$w0rd.<br />
27. In the Confirm password textbox, type Pa$$w0rd.<br />
28. Click Add Current User; this adds the user ADVENTUREWORKS\administrator<br />
(Administrator) to the list of Administrators.<br />
29. Click the Data Directories tab, change the User database directory to D:\MKTGDEV.<br />
30. Change the User database log directory to L:\MKTGDEV.<br />
31. Change the Temp DB directory to D:\MKTGDEV.<br />
32. Change the Temp DB log directory to L:\MKTGDEV.<br />
33. Click the FILESTREAM tab, and ensure that Enable FILESTREAM for Transact-<strong>SQL</strong> access is not<br />
selected and click Next.<br />
34. In the Error Reporting page, make sure that Send Windows and <strong>SQL</strong> Server Error Reports to<br />
Microsoft or your corporate report server is not checked, then click Next.<br />
35. In the Installation Configuration Rules page, click Show details, review the list of rules, and click<br />
Next.<br />
36. In the Ready to Install page, review the summary and click Install.<br />
37. In the Complete page, click Close.<br />
38. Close the <strong>SQL</strong> Server Installation Center window.
Exercise 3: Perform Post-installation Setup and Checks<br />
Task 1: Check that the services for the new <strong>SQL</strong> Server instance are running<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click Configuration Tools, and<br />
then click <strong>SQL</strong> Server Configuration Manager.<br />
2. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
3. In the right-hand pane, ensure that the following services are listed as running for the MKTGDEV<br />
instance:<br />
• <strong>SQL</strong> Server (MKTGDEV)<br />
• <strong>SQL</strong> Server Agent (MKTGDEV)<br />
Task 2: Configure both 32 bit and 64 bit aliases for the new instance<br />
1. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, expand <strong>SQL</strong> Server<br />
Network Configuration and then click Protocols for MKTGDEV.<br />
2. In the right-hand pane, right-click Named Pipes and select Enable.<br />
3. In the Warning window, click OK.<br />
4. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
5. Right-click <strong>SQL</strong> Server (MKTGDEV) and select Restart.<br />
6. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
7. In the toolbar, click the Refresh icon.<br />
8. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, expand <strong>SQL</strong> Native Client<br />
11.0 Configuration (32bit) and click Client Protocols.<br />
9. Confirm that the Named Pipes protocol is Enabled.<br />
10. In the left-hand pane, right-click Aliases and select New Alias.<br />
11. In the Alias – New window, in the Alias Name text box, type PWDev.<br />
12. In the Protocol drop-down list box, select Named Pipes.<br />
13. In the Server text box, type .\MKTGDEV and click OK.<br />
14. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, expand <strong>SQL</strong> Native Client<br />
11.0 Configuration and click Client Protocols.<br />
15. Confirm that the Named Pipes protocol is Enabled.<br />
16. In the left-hand pane, right-click Aliases and select New Alias.<br />
17. In the Alias – New window, in the Alias Name text box, type PWDev.<br />
18. In the Protocol drop-down list box, select Named Pipes.<br />
19. In the Server text box, type .\MKTGDEV and click OK.<br />
20. Close <strong>SQL</strong> Server Configuration Manager.
Task 3: Connect to the new instance using SSMS<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and then click <strong>SQL</strong> Server<br />
Management Studio.<br />
2. In the Connect to Server window, ensure that Server Type is set to <strong>Database</strong> Engine.<br />
3. In the Server name text box, type PWDev.<br />
4. In the Authentication drop-down list, select Windows Authentication, and click Connect.<br />
5. In Object Explorer, under PWDev expand <strong>Database</strong>s.<br />
6. Close <strong>SQL</strong> Server Management Studio.<br />
Challenge Exercise 4: Configure Server Memory (Only if time permits)<br />
Task 1: Review the current memory available on the server<br />
1. Click Start, right-click Computer, and click Properties.<br />
2. Write down the value for Installed memory (RAM).<br />
Task 2: Determine an appropriate memory allocation for each instance<br />
1. Review the Required Memory Configuration from the Supporting Documentation in the Student<br />
Manual.<br />
2. Calculate the Maximum memory for the AdventureWorks server instance as follows:<br />
• Example calculation (actual values depend upon VM configuration)<br />
• Max Memory = (Server Memory – 1.0GB) * 0.4<br />
• Max Memory = (3.0 – 1.0) * 0.4<br />
• Max Memory = 0.8 (approximate)<br />
3. Calculate the Maximum memory for the Proseware server instance as follows:<br />
• Example calculation (actual values depend upon VM configuration)<br />
• Max Memory = (Server Memory – 1.0GB) * 0.3<br />
• Max Memory = (3.0 – 1.0) * 0.3<br />
• Max Memory = 0.6 (approximate)<br />
4. Calculate the Maximum memory for the PWDev server instance as follows:<br />
• Example calculation (actual values depend upon VM configuration)<br />
• Max Memory = (Server Memory – 1.0GB) * 0.3<br />
• Max Memory = (3.0 – 1.0) * 0.3<br />
• Max Memory = 0.6 (approximate)
Task 3: Configure each instance appropriately<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click <strong>SQL</strong> Server<br />
Management Studio.<br />
2. In the Connect to Server window, type AdventureWorks for the Server name and click Connect.<br />
3. In Object Explorer, right-click the AdventureWorks server instance, and click Properties.<br />
4. In the Select a page pane click Memory.<br />
5. Set the Maximum server memory (in MB) to the value 800, and click OK.<br />
Note A more accurate value for 0.8GB would have been 819MB, but the value 800 has been<br />
used for simplicity.
6. In Object Explorer click Connect, and click <strong>Database</strong> Engine.<br />
7. In the Connect to Server window, type Proseware for the Server name and click Connect.<br />
8. In Object Explorer, right-click the Proseware server instance, and click Properties.<br />
9. In the Select a page pane click Memory.<br />
10. Set the Maximum server memory (in MB) to the value 600, and click OK.<br />
Note A more accurate value for 0.6GB would have been 614MB, but the value 600 has been<br />
used for simplicity.<br />
11. In Object Explorer click Connect, and click <strong>Database</strong> Engine.<br />
12. In the Connect to Server window, type PWDev for the Server name and click Connect.<br />
13. In Object Explorer, right-click the PWDev server instance, and click Properties.<br />
14. In the Select a page pane click Memory.<br />
15. Set the Maximum server memory (in MB) to the value 600, and click OK.<br />
Note A more accurate value for 0.6GB would have been 614MB, but the value 600 has<br />
been used for simplicity.<br />
16. Close <strong>SQL</strong> Server Management Studio.
Module 4: Working with <strong>Database</strong>s<br />
Lab 4: Working with <strong>Database</strong>s<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_04_PRJ\10775A_04_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Adjust tempdb Configuration<br />
Task 1: Adjust the size of tempdb<br />
1. Review the requirement for tempdb size in the Supporting Documentation.<br />
2. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, and expand System<br />
<strong>Database</strong>s. Right-click the master database and click New Query.<br />
3. Enter the following statements and click Execute:<br />
USE master;<br />
GO<br />
ALTER DATABASE tempdb<br />
MODIFY FILE ( NAME = tempdev, SIZE = 30MB );<br />
GO<br />
ALTER DATABASE tempdb<br />
MODIFY FILE ( NAME = templog, SIZE = 10MB );<br />
GO<br />
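The new file sizes can be verified from T-SQL as well as from the Properties window; a quick verification query (not part of the lab answer — note that `size` is stored in 8 KB pages):

```sql
SELECT name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```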
Task 2: Check that the tempdb size is still correct after a restart<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click Configuration Tools, and<br />
then click <strong>SQL</strong> Server Configuration Manager.<br />
2. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
3. In the right-hand pane, right-click the service SQL Server (MKTG) and click Restart.
4. In Object Explorer, right-click the tempdb database and click Properties.<br />
5. In the <strong>Database</strong> Properties – tempdb window, click Files in the Select a page pane.<br />
6. In the <strong>Database</strong> files list, check that the initial size of the data file is 30 and the initial size of the<br />
log file is 10.<br />
Exercise 2: Create the RateTracking <strong>Database</strong><br />
Task 1: Create the database<br />
1. Review the supplied requirements in the supporting documentation for the exercise.<br />
2. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, and expand System<br />
<strong>Database</strong>s. Right-click the master database and click New Query.<br />
3. Enter the following statements and click Execute:<br />
USE master;<br />
GO<br />
CREATE DATABASE RateTracking<br />
ON<br />
( NAME = RateTracking_dat,<br />
FILENAME = 'D:\MKTG\RateTracking.mdf',<br />
SIZE = 10MB, MAXSIZE = 100MB,<br />
FILEGROWTH = 10MB<br />
)<br />
LOG ON<br />
( NAME = RateTracking_log,<br />
FILENAME = 'L:\MKTG\RateTracking.ldf',<br />
SIZE = 20MB,<br />
MAXSIZE = UNLIMITED,<br />
FILEGROWTH = 20MB<br />
);<br />
GO<br />
Task 2: Create the required filegroups and files<br />
1. Review the supplied requirements in the supporting documentation for the required files and<br />
filegroups.<br />
2. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, and expand System<br />
<strong>Database</strong>s. Right-click the master database and click New Query.<br />
3. Enter the following statements and click Execute:<br />
USE master;<br />
GO<br />
ALTER DATABASE RateTracking<br />
ADD FILEGROUP USERDATA;<br />
GO<br />
ALTER DATABASE RateTracking<br />
ADD FILE<br />
( NAME = RateTracking_dat_1,<br />
FILENAME = 'D:\MKTG\RateTracking_1.ndf',<br />
SIZE = 20MB,<br />
MAXSIZE = 100MB,<br />
FILEGROWTH = 10MB<br />
) TO FILEGROUP USERDATA;<br />
GO<br />
ALTER DATABASE RateTracking<br />
ADD FILE<br />
( NAME = RateTracking_dat_2,<br />
FILENAME = 'D:\MKTG\RateTracking_2.ndf',<br />
SIZE = 20MB,<br />
MAXSIZE = 100MB,<br />
FILEGROWTH = 10MB<br />
) TO FILEGROUP USERDATA;<br />
GO<br />
ALTER DATABASE RateTracking<br />
ADD FILEGROUP ARCHIVE;<br />
GO<br />
ALTER DATABASE RateTracking<br />
ADD FILE<br />
( NAME = RateTracking_dat_3,<br />
FILENAME = 'D:\MKTG\RateTracking_3.ndf',<br />
SIZE = 200MB,<br />
MAXSIZE = 500MB,<br />
FILEGROWTH = 50MB<br />
) TO FILEGROUP ARCHIVE;<br />
GO<br />
ALTER DATABASE RateTracking<br />
ADD FILE<br />
( NAME = RateTracking_dat_4,<br />
FILENAME = 'D:\MKTG\RateTracking_4.ndf',<br />
SIZE = 200MB,<br />
MAXSIZE = 500MB,<br />
FILEGROWTH = 50MB<br />
) TO FILEGROUP ARCHIVE;<br />
GO<br />
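An optional verification query, not part of the lab answer, can confirm the filegroups and files just created (and, after Task 3, which filegroup is the default):

```sql
USE RateTracking;
GO
-- List each data file with its filegroup; the log file has no filegroup
-- and is excluded by the inner join
SELECT fg.name AS filegroup_name,
       df.name AS logical_name,
       df.physical_name,
       fg.is_default
FROM sys.filegroups AS fg
JOIN sys.database_files AS df
    ON df.data_space_id = fg.data_space_id;
```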
Task 3: Change the default filegroup for the database<br />
1. Review the supplied requirements in the supporting documentation for the default filegroup.<br />
2. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, and expand System<br />
<strong>Database</strong>s. Right-click the master database and click New Query.<br />
3. Enter the following statements and click Execute:<br />
USE master;<br />
GO<br />
ALTER DATABASE RateTracking MODIFY FILEGROUP USERDATA DEFAULT;<br />
GO
Exercise 3: Attach the OldProspects <strong>Database</strong><br />
Task 1: Copy the database files<br />
• In Windows Explorer, copy the files as specified below:<br />
Filename            Source Folder                   Destination Folder
OldProspects.mdf    D:\10775A_Labs\10775A_04_PRJ    D:\MKTG
OldProspects.ldf    D:\10775A_Labs\10775A_04_PRJ    L:\MKTG
Task 2: Attach the database to the MKTG instance<br />
1. In Object Explorer, expand the Proseware server and expand <strong>Database</strong>s.<br />
2. Right-click <strong>Database</strong>s and click Attach.<br />
3. Click Add and navigate to the file D:\MKTG\OldProspects.mdf and click OK.<br />
4. Click the ellipsis beside the Current File Path for the log entry and navigate to the file<br />
L:\MKTG\OldProspects.ldf and click OK.<br />
5. Click OK to attach the database and note that OldProspects now appears in the list of databases in<br />
Object Explorer.<br />
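The attach operation in Task 2 can also be scripted; a T-SQL equivalent sketch using the same file locations:

```sql
-- Attach the copied data and log files as the OldProspects database
CREATE DATABASE OldProspects
    ON (FILENAME = 'D:\MKTG\OldProspects.mdf'),
       (FILENAME = 'L:\MKTG\OldProspects.ldf')
    FOR ATTACH;
```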
Challenge Exercise 4: Add Multiple Files to tempdb (Only if time permits)<br />
Task 1: Review the tempdb file requirements<br />
• In the Supporting Documentation review the tempdb Requirements From The Consultant section.<br />
Task 2: Move existing files<br />
1. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, and expand System<br />
<strong>Database</strong>s. Right-click the master database and click New Query.<br />
2. Enter the following statements and click Execute:<br />
USE master;<br />
GO<br />
ALTER DATABASE tempdb<br />
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\MKTG\tempdb.mdf');<br />
ALTER DATABASE tempdb<br />
MODIFY FILE (NAME = templog, FILENAME = 'L:\MKTG\templog.ldf');<br />
GO
Task 3: Add new files<br />
1. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, and expand System<br />
<strong>Database</strong>s. Right-click the master database and click New Query.<br />
2. Enter the following statements and click Execute:<br />
USE master;<br />
GO<br />
ALTER DATABASE tempdb<br />
ADD FILE ( NAME = N'tempdev2',<br />
FILENAME = N'D:\MKTG\tempdb_file2.ndf' ,<br />
SIZE = 20MB,<br />
FILEGROWTH = 10MB,<br />
MAXSIZE = UNLIMITED );<br />
ALTER DATABASE tempdb<br />
ADD FILE ( NAME = N'tempdev3',<br />
FILENAME = N'D:\MKTG\tempdb_file3.ndf' ,<br />
SIZE = 20MB,<br />
FILEGROWTH = 10MB,<br />
MAXSIZE = UNLIMITED );<br />
ALTER DATABASE tempdb<br />
ADD FILE ( NAME = N'tempdev4',<br />
FILENAME = N'D:\MKTG\tempdb_file4.ndf' ,<br />
SIZE = 20MB,<br />
FILEGROWTH = 10MB,<br />
MAXSIZE = UNLIMITED );<br />
GO<br />
Task 4: Restart the server and check file locations<br />
1. Click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, click Configuration Tools, and<br />
then click <strong>SQL</strong> Server Configuration Manager.<br />
2. In the left-hand pane of the <strong>SQL</strong> Server Configuration Manager window, click <strong>SQL</strong> Server Services.<br />
3. In the right-hand pane, right-click the service SQL Server (MKTG) and click Restart.
4. In Object Explorer, right-click the tempdb database and click Properties.<br />
5. In the <strong>Database</strong> Properties – tempdb window, click Files in the Select a page pane.<br />
6. Make sure that the files listed match the requirements.
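The check in step 6 can also be done with a query instead of the Properties window; a sketch (not part of the lab answer):

```sql
-- Confirm the tempdb file names, locations, and sizes after the restart
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;
```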
Module 5: Understanding <strong>SQL</strong> Server <strong>2012</strong> Recovery<br />
Models<br />
Lab 5: Understanding <strong>SQL</strong> Server Recovery<br />
Models<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project D:\10775A_Labs\10775A_05_PRJ<br />
\10775A_05_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Plan a Backup Strategy<br />
Task 1: Review the business requirements<br />
• Review the supplied business requirements in the supporting documentation for the exercise.<br />
Task 2: Determine an appropriate backup strategy for each database<br />
• There is no one correct answer. The options provided below are an example strategy that would<br />
meet the requirements.<br />
For the MarketDev database: Full Recovery Model<br />
Type of Backup    Schedule
Full              7:00pm daily
Log               Every 20 minutes, starting at 8:20am and continuing until 6:00pm
Notes:<br />
• Full database backup should complete in approximately 3.4 hours (20GB/100MB per minute).<br />
This means that you cannot employ only a full database backup strategy as it would not meet<br />
the RPO. The full database backup should complete by 10:24pm which is within the available<br />
time window for backups.<br />
• Each log backup should complete in approximately 3.4 minutes (1GB per hour / 3 log backups<br />
per hour /100 MB per minute). This fits within the 20 minute interval and meets the RPO.<br />
• Full database restore should complete in approximately 4.3 hours (20GB/80MB per minute).<br />
Log file restore should complete in approximately 2.1 hours (10 hours * 1GB per hour /80MB<br />
per minute) and meets the RTO.<br />
For the Research database: Simple Recovery Model<br />
Type of Backup    Schedule
Full              6:30pm daily
Notes:<br />
• Recovery to last full daily database backup complies with the RPO.<br />
• Daily backup should complete in approximately 2 minutes (200MB/100MB per minute).<br />
• Full restore should complete in approximately 2.5 minutes and complies with RTO<br />
(200MB/80MB per minute).<br />
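The timing estimates above can be reproduced with simple arithmetic. A T-SQL scratch calculation (this assumes 1 GB = 1024 MB, as the notes above do, and is only a worked check, not part of the lab answer):

```sql
SELECT CAST(20.0 * 1024 / 100 / 60 AS decimal(4,1)) AS FullBackupHours,    -- 20GB at 100MB/min ~ 3.4
       CAST(20.0 * 1024 / 80 / 60  AS decimal(4,1)) AS FullRestoreHours,   -- 20GB at 80MB/min  ~ 4.3
       CAST(10.0 * 1024 / 80 / 60  AS decimal(4,1)) AS LogRestoreHours,    -- 10GB of log at 80MB/min ~ 2.1
       CAST(200.0 / 100            AS decimal(4,1)) AS ResearchBackupMins; -- 200MB at 100MB/min = 2.0
```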
Exercise 2: Configure Recovery Models<br />
Task 1: Review and adjust the current database recovery models<br />
1. The recovery model for the MarketDev database is Simple and should be Full. Adjust the model using the following steps:
1. In SSMS, in Object Explorer expand the Proseware server, right-click <strong>Database</strong>s and click<br />
Refresh.<br />
2. Expand <strong>Database</strong>s, right-click MarketDev and click Properties.<br />
3. In the Select a page pane, click Options.<br />
4. In the right hand pane, choose Full from the Recovery model drop down list and click OK.<br />
2. The recovery model for the Research database is correct and doesn’t need to be modified.<br />
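The recovery model change can also be made in T-SQL rather than through the Properties dialog; a sketch, with a query to confirm the result:

```sql
USE master;
GO
ALTER DATABASE MarketDev SET RECOVERY FULL;
GO
-- Confirm the current recovery model of each database
SELECT name, recovery_model_desc
FROM sys.databases;
```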
Challenge Exercise 3: Review Recovery Models and Strategy (Only if time<br />
permits)<br />
Task 1: Review the RPO and RTO requirements for the databases<br />
• The supporting documentation includes details of the business continuity requirements for the<br />
databases. You need to review this documentation.<br />
Task 2: Review the existing recovery models and backup strategies<br />
• The supporting documentation also includes details of the backup strategy for the databases. You<br />
need to review this documentation.
Task 3: Indicate whether or not the strategy would be successful<br />
1. For the CreditControl database, a Full backup would take approximately 3.4 hours (20GB/100MB<br />
per minute). This would not satisfy the requirements for the time window as the Wednesday 6am<br />
Full backup would not complete before office hours start.<br />
2. For the PotentialIssue database, the 15-minute log backups would meet the RPO. A Full restore should take approximately 24 minutes ((200MB + (7 days * 24 hours) * 10MB per hour) / 80MB per minute), which meets the RTO. A Full database backup would complete in approximately 2 minutes (200MB/100MB per minute), so it fits within the available time window. The backup strategy for the PotentialIssue database therefore meets the business requirements.
Module 6: Backup of <strong>SQL</strong> Server <strong>Database</strong>s<br />
Lab 6: Backup of <strong>SQL</strong> Server <strong>Database</strong>s<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project D:\10775A_Labs\10775A_06_PRJ<br />
\10775A_06_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Investigate Backup Compression<br />
Task 1: Create a database backup without compression<br />
1. Using Windows Explorer create a new folder L:\<strong>SQL</strong>Backups.<br />
2. In Object Explorer, in <strong>SQL</strong> Server Management Studio, expand the Proseware server, expand<br />
<strong>Database</strong>s, right-click the MarketDev database, click Tasks, and click Back Up.<br />
3. In the Back Up <strong>Database</strong> – MarketDev window click Remove.<br />
4. In the Back Up <strong>Database</strong> – MarketDev window click Add.<br />
5. In the File name textbox type L:\<strong>SQL</strong>Backups\MarketDev_Full_Uncompressed.BAK and click<br />
OK.<br />
6. In the Select a page pane, click Options.<br />
7. In the Set backup compression drop-down list, click Do not compress backup, and click OK.<br />
8. In the Microsoft <strong>SQL</strong> Server Management Studio window click OK.<br />
Task 2: Create a database backup with compression<br />
1. In Object Explorer, right-click the MarketDev database, click Tasks, and click Back Up.<br />
2. In the Back Up <strong>Database</strong> – MarketDev window click Remove.<br />
3. In the Back Up <strong>Database</strong> – MarketDev window click Add.<br />
4. In the File name textbox type L:\<strong>SQL</strong>Backups\MarketDev_Full_Compressed.BAK and click OK.<br />
5. In the Select a page pane, click Options.<br />
6. In the Set backup compression drop-down list, click Compress backup, and click OK.<br />
7. In the Microsoft <strong>SQL</strong> Server Management Studio window click OK.<br />
Task 3: Compare the file sizes created<br />
1. In Windows Explorer, navigate to the folder L:\SQLBackups and note the sizes of the compressed and uncompressed backup files.
2. Calculate the space savings provided by compression as:
Space savings (%) = (Uncompressed size – Compressed size) * 100 / Uncompressed size
Note A typical value would be (233-66)*100/233 = 72% saving.<br />
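Both backups in this exercise can also be taken in T-SQL; a sketch of the equivalents, plus a query that reads the achieved compression from the backup history:

```sql
-- Uncompressed full backup (INIT overwrites any existing backup set in the file)
BACKUP DATABASE MarketDev
    TO DISK = 'L:\SQLBackups\MarketDev_Full_Uncompressed.BAK'
    WITH NO_COMPRESSION, INIT;

-- Compressed full backup
BACKUP DATABASE MarketDev
    TO DISK = 'L:\SQLBackups\MarketDev_Full_Compressed.BAK'
    WITH COMPRESSION, INIT;

-- Compare logical backup size with the size written to disk
SELECT TOP (2)
       backup_size / 1048576.0            AS backup_mb,
       compressed_backup_size / 1048576.0 AS on_disk_mb
FROM msdb.dbo.backupset
WHERE database_name = 'MarketDev'
ORDER BY backup_finish_date DESC;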
Exercise 2: Transaction Log Backup<br />
Task 1: Execute a script to introduce workload to the MarketDev database<br />
1. In Solution Explorer, double-click the file 61 – Workload File.sql to open it.<br />
2. On the Toolbar click Execute.<br />
Task 2: Backup the transaction log on the MarketDev database<br />
1. In Object Explorer, right-click the MarketDev database, click Tasks, and click Back Up.<br />
2. In the Back Up <strong>Database</strong> – MarketDev window click Remove.<br />
3. In the Back Up <strong>Database</strong> – MarketDev window click Add.<br />
4. In the File name textbox type L:\<strong>SQL</strong>Backups\MarketDev_Log_Compressed.BAK and click OK.<br />
5. From the Backup type drop-down list box, select Transaction Log.<br />
6. In the Select a page pane, click Options.<br />
7. In the Set backup compression drop-down list, click Compress backup, and click OK.<br />
8. In the Microsoft <strong>SQL</strong> Server Management Studio window click OK.<br />
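A T-SQL equivalent of the log backup above, as a sketch:

```sql
BACKUP LOG MarketDev
    TO DISK = 'L:\SQLBackups\MarketDev_Log_Compressed.BAK'
    WITH COMPRESSION, INIT;
```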
Exercise 3: Differential Backup<br />
Task 1: Execute a script to introduce workload to the MarketDev database<br />
1. In Solution Explorer, double-click the file 61 – Workload File.sql to open it.<br />
2. On the Toolbar click Execute.<br />
Task 2: Create a differential backup of the MarketDev database<br />
1. In Object Explorer, right-click the MarketDev database, click Tasks, and click Back Up.<br />
2. In the Back Up <strong>Database</strong> – MarketDev window click Remove.<br />
3. In the Back Up <strong>Database</strong> – MarketDev window click Add.
4. In the File name textbox type L:\<strong>SQL</strong>Backups\MarketDev_Differential_Compressed.BAK and<br />
click OK.<br />
5. From the Backup type drop-down list box, select Differential.<br />
6. In the Select a page pane, click Options.<br />
7. In the Set backup compression drop-down list, click Compress backup, and click OK.<br />
8. In the Microsoft <strong>SQL</strong> Server Management Studio window click OK.<br />
9. Using Windows Explorer, note the size of the differential backup L:\<strong>SQL</strong>Backups<br />
\MarketDev_Differential_Compressed.BAK compared to the full backup<br />
L:\<strong>SQL</strong>Backups\MarketDev_Full_Compressed.BAK.<br />
Note A typical value for the Differential backup size would be 240KB. A typical value for<br />
the Full backup size would be 66 MB.<br />
Task 3: Execute a script to introduce workload to the MarketDev database<br />
1. In Solution Explorer, double-click the file 71 – Workload File.sql to open it.<br />
2. On the Toolbar click Execute.<br />
Task 4: Append a differential backup to the previous differential backup file<br />
1. In Object Explorer, right-click the MarketDev database, click Tasks, and click Back Up.<br />
2. Ensure that the Destination shows L:\<strong>SQL</strong>Backups\MarketDev_Differential_Compressed.BAK.<br />
3. From the Backup type drop-down list box, select Differential.<br />
4. In the Select a page pane, click Options.<br />
5. In the Overwrite media option ensure that Append to the existing backup set is selected.<br />
6. In the Set backup compression drop-down list, click Compress backup, and click OK.<br />
7. In the Microsoft <strong>SQL</strong> Server Management Studio window click OK.<br />
8. Using Windows Explorer, note that the size of the Differential backup has increased. The file now<br />
contains two backups.<br />
Note A typical value for the backup file size would be 54 MB.<br />
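The differential backups in Tasks 2 and 4 can both be scripted; a sketch (NOINIT is the T-SQL equivalent of Append to the existing backup set):

```sql
BACKUP DATABASE MarketDev
    TO DISK = 'L:\SQLBackups\MarketDev_Differential_Compressed.BAK'
    WITH DIFFERENTIAL, COMPRESSION, NOINIT;  -- appends to the existing backup set
```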
Exercise 4: Copy-only Backup<br />
Task 1: Create a copy-only backup of the MarketDev database, ensuring to choose to<br />
verify the backup<br />
1. In Object Explorer, right-click the MarketDev database, click Tasks, and click Back Up.<br />
2. In the Back Up <strong>Database</strong> – MarketDev window click Remove.<br />
3. In the Back Up <strong>Database</strong> – MarketDev window click Add.<br />
4. In the File name textbox type L:\<strong>SQL</strong>Backups\MarketDev_Copy_Compressed.BAK and click OK.
5. From the Backup type drop-down list box, select Full.<br />
6. Ensure that Copy-only Backup is checked.<br />
7. In the Select a page pane, click Options.<br />
8. In the Overwrite media options, select Back up to a new media set, and erase all existing<br />
backup sets.<br />
9. In the New media set name text box, type MarketDev Copy Backup.<br />
10. In the New media set description text box, type MarketDev Copy Backup for Integration<br />
Team.<br />
11. In the Reliability options, ensure that Verify backup when finished is checked.<br />
12. In the Set backup compression drop-down list, click Compress backup, and click OK.<br />
13. In the Microsoft <strong>SQL</strong> Server Management Studio window click OK.<br />
14. Using Windows Explorer, note the size of the Copy backup compared to the Full backup.<br />
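A T-SQL sketch of the same copy-only backup, with verification (FORMAT creates the new media set, matching the Overwrite media option chosen above):

```sql
BACKUP DATABASE MarketDev
    TO DISK = 'L:\SQLBackups\MarketDev_Copy_Compressed.BAK'
    WITH COPY_ONLY, COMPRESSION, FORMAT,
         MEDIANAME = 'MarketDev Copy Backup',
         MEDIADESCRIPTION = 'MarketDev Copy Backup for Integration Team';

-- Equivalent of the "Verify backup when finished" option
RESTORE VERIFYONLY
    FROM DISK = 'L:\SQLBackups\MarketDev_Copy_Compressed.BAK';
```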
Challenge Exercise 5: Partial Backup (Only if time permits)<br />
Task 1: Perform a backup of the read-write filegroups on the RateTracking database<br />
1. In Solution Explorer, double-click the file 91 – Lab Exercise 5.sql to open it.<br />
2. Review the script.<br />
3. On the Toolbar click Execute.
Module 7: Restoring <strong>SQL</strong> Server <strong>2012</strong> <strong>Database</strong>s<br />
Lab 7: Restoring <strong>SQL</strong> Server <strong>2012</strong><br />
<strong>Database</strong>s<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_07_PRJ\10775A_07_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Note The setup script for this module is expected to raise an error about missing files; this is normal.
Exercise 1: Determine a Restore Strategy<br />
Task 1: Review the backups contained within the backup file<br />
1. In Solution Explorer, double-click the file 51 – Lab Exercise 1 Task 1.sql to open it.<br />
2. Review the script.<br />
3. On the Toolbar click Execute.<br />
4. The BackUpTypeDescription values contained in the file are (in order):
a. <strong>Database</strong><br />
b. <strong>Database</strong><br />
c. <strong>Database</strong> Differential<br />
d. Transaction Log<br />
e. <strong>Database</strong> Differential<br />
f. Transaction Log<br />
g. Transaction Log<br />
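The 51 script is not reproduced here, but listing the backup sets held in a media set is typically done with RESTORE HEADERONLY; a sketch, assuming the backup file used later in this lab:

```sql
-- One row per backup set in the file; BackupType distinguishes
-- full, differential, and log backups
RESTORE HEADERONLY
    FROM DISK = 'D:\MSSQLServer\MarketYields.bak';
```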
Task 2: Determine how the restore should be performed<br />
1. Restoring a database requires:<br />
• Latest full database backup (File 2)<br />
• Most recent differential backup (File 5)<br />
• All log file backups since the latest differential backup (Files 6,7)<br />
2. WITH NORECOVERY should be specified on all restores.
Exercise 2: Restore the <strong>Database</strong><br />
Task 1: Restore the database<br />
1. In Object Explorer, expand the Proseware server, right-click <strong>Database</strong>s, and click Restore<br />
<strong>Database</strong>.<br />
2. In the Source options, click Device, and then click the ellipsis button on the right-hand side of the Device textbox.
3. In the Select backup devices window, click Add.<br />
4. Navigate to the file D:\MS<strong>SQL</strong>Server\MarketYields.bak and click OK.<br />
5. In the Select backup devices window, click OK.<br />
6. In the Restore <strong>Database</strong> - MarketYields window, in the Select a page pane, click Files.<br />
7. In the Restore database files as options, check Relocate all files to folder.<br />
8. In the Select a page pane, click Options.<br />
9. From the Recovery state dropdown, select RESTORE WITH STANDBY.<br />
10. In the Standby file textbox, type L:\MKTG\Log_Standby.bak and click OK.<br />
Note If the OK button is not visible you will need to scroll to the bottom of the window,<br />
or increase the resolution of your screen.<br />
11. In the Microsoft <strong>SQL</strong> Server Management Studio window, click OK.<br />
12. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, right-click <strong>Database</strong>s and<br />
click Refresh.<br />
13. The MarketYields database should show as Standby / Read-Only.
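The wizard-driven restore can also be expressed in T-SQL. The sketch below is a hedged equivalent: the FILE numbers follow the analysis in Exercise 1 (full = 2, differential = 5, logs = 6 and 7), and the MOVE clauses that correspond to Relocate all files to folder are omitted for brevity:

```sql
RESTORE DATABASE MarketYields
    FROM DISK = 'D:\MSSQLServer\MarketYields.bak'
    WITH FILE = 2, NORECOVERY;                 -- latest full backup
RESTORE DATABASE MarketYields
    FROM DISK = 'D:\MSSQLServer\MarketYields.bak'
    WITH FILE = 5, NORECOVERY;                 -- most recent differential
RESTORE LOG MarketYields
    FROM DISK = 'D:\MSSQLServer\MarketYields.bak'
    WITH FILE = 6, NORECOVERY;
RESTORE LOG MarketYields
    FROM DISK = 'D:\MSSQLServer\MarketYields.bak'
    WITH FILE = 7,
         STANDBY = 'L:\MKTG\Log_Standby.bak';  -- leaves the database readable
```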
Challenge Exercise 3: Using STANDBY Mode (Only if time permits)<br />
Task 1: Execute queries against the STANDBY database to ensure it is accessible<br />
1. In Object Explorer, right-click Proseware server, and click New Query.<br />
2. Type the following query as shown in the snippet below:<br />
SELECT * FROM MarketYields.dbo.LogData;<br />
3. On the Toolbar click Execute.<br />
Note 25000 rows should be returned.<br />
4. Close the query window. (If prompted to Save changes click No).<br />
Task 2: Restore another log file, leaving the database in STANDBY mode<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, right-click the MarketYields<br />
database and click Tasks, click Restore, click Transaction Log.<br />
2. In the Restore source options, click From file or tape, click the ellipsis button at the right hand<br />
side of the From file or tape textbox.<br />
3. In the Select backup devices window, click Add.<br />
4. Navigate to the file D:\MS<strong>SQL</strong>SERVER\MarketYields_log.bak and click OK.<br />
5. In the Select backup devices window, click OK.<br />
6. In the Select a page pane, click Options.<br />
7. In the Recovery state options select Leave the database in read-only mode.<br />
8. In the Standby file textbox, type L:\MKTG\Log_Standby.bak and click OK.<br />
9. In the Microsoft <strong>SQL</strong> Server Management Studio window, click Yes.<br />
10. In the Microsoft <strong>SQL</strong> Server Management Studio window, click OK.<br />
11. In Object Explorer expand the Proseware server, expand <strong>Database</strong>s, right-click <strong>Database</strong>s and<br />
click Refresh.<br />
12. The MarketYields database should show as Standby / Read-Only.
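A T-SQL sketch of the same log restore, leaving the database in STANDBY mode:

```sql
RESTORE LOG MarketYields
    FROM DISK = 'D:\MSSQLSERVER\MarketYields_log.bak'
    WITH STANDBY = 'L:\MKTG\Log_Standby.bak';
```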
Module 8: Importing and Exporting Data<br />
Lab 8: Importing and Exporting Data<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Import the Excel Spreadsheet<br />
Task 1: Import the data using the Import Wizard<br />
1. In <strong>SQL</strong> Server Management Studio, in Object Explorer, expand the Proseware server, expand<br />
<strong>Database</strong>s, right-click the MarketDev database, click Tasks, and click Import Data.<br />
2. In the <strong>SQL</strong> Server Import and Export Wizard window, click Next.<br />
3. In the Choose a Data Source window, select Microsoft Excel from the Data source drop-down list,<br />
and click Browse.<br />
4. Navigate to the file D:\10775A_Labs\10775A_08_PRJ\10775A_08_PRJ\Currency.xls and click<br />
Open.<br />
5. In the Choose a Data Source window, make sure that the First row has column names checkbox is<br />
checked, and click Next.<br />
6. In the Choose a Destination window, make sure that the Server name is Proseware and the<br />
<strong>Database</strong> name is MarketDev, and click Next.<br />
7. In the Specify Table Copy or Query window, click Next.<br />
8. In the Select Source Tables and Views window, check the checkbox beside `Currency`, change the<br />
Destination to [DirectMarketing].[Currency] and click Edit Mappings.<br />
9. In the Column Mappings window:
a. Change the Type for the CurrencyID row to int.<br />
b. Uncheck the Nullable column in each row.<br />
c. Change the Size for the CurrencyCode to 3.<br />
d. Change the Size for the CurrencyName to 50 and click OK.<br />
10. In the Select Source Tables and Views window, click Next.<br />
11. In the Review Data Type Mapping window, click Next.<br />
12. In the Save and Run Package window, click Next.<br />
13. In the Complete the Wizard window, click Finish.<br />
Note One warning about potential data truncation may occur. This is normal.<br />
14. In the Execution is completed window, click Close.<br />
15. In Object Explorer, right-click the MarketDev database and click New Query.<br />
16. Type the code as shown in the snippet below:<br />
SELECT * FROM DirectMarketing.Currency;<br />
17. On the Toolbar, click Execute.<br />
Note 105 currencies should be returned.<br />
Exercise 2: Import the CSV File<br />
Task 1: Import the CSV file<br />
1. In Solution Explorer, double-click the file 61 – Lab Exercise 2.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
4. Wait for the query to complete and record the duration of the query. (The duration of the query is<br />
shown on the bottom right-hand side of the Status bar).<br />
Note A typical execution time on the VM would be approximately 3 minutes.<br />
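The 61 script is not reproduced here. A common T-SQL approach to bulk-loading a CSV file is BULK INSERT; the sketch below is hypothetical — the target table and file path are assumptions for illustration, not taken from the lab script:

```sql
-- Hypothetical example: table and file names are assumed, not from the lab script
BULK INSERT Marketing.Prospect
FROM 'D:\10775A_Labs\Prospects.csv'
WITH (FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n',
      FIRSTROW = 2,   -- skip the header row
      TABLOCK);       -- table lock enables faster, minimally logged loads
```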
Exercise 3: Create and Test an Extraction Package
Task 1: Create and test an extraction package
1. In SQL Server Management Studio, in Object Explorer, expand the Proseware server, expand Databases, right-click the MarketDev database, click Tasks, and click Export Data.
2. In the SQL Server Import and Export Wizard window, click Next.
3. In the Choose a Data Source window, click Next.
4. In the Choose a Destination window, in the Destination drop-down list, choose Flat File Destination. In the File name text box, type the following location: D:\MKTG\ProspectsToContact.csv.
5. Make sure that the Column names in the first data row checkbox is checked and click Next.
6. In the Specify Table Copy or Query window, click Write a query to specify the data to transfer, and click Next.
7. In the Provide a Source Query window, type the code as shown in the snippet below, and click Parse:
SELECT ProspectID, FirstName, LastName, CellPhoneNumber,
       WorkPhoneNumber, EmailAddress, LatestContact
FROM Marketing.Prospect
WHERE LatestContact < DATEADD(MONTH, -1, SYSDATETIME())
   OR LatestContact IS NULL
ORDER BY ProspectID;
8. In the SQL Server Import and Export Wizard window, make sure that the parsing succeeded and click OK.
9. In the Provide a Source Query window, click Next.
10. In the Configure Flat File Destination window, click Next.
11. In the Save and Run Package window, make sure that the Run immediately and Save SSIS Package checkboxes are checked and that the SQL Server option button is selected, and click Next.
12. In the Save SSIS Package window, type Weekly Extract of Prospects to Contact in both the Name and Description textboxes, and click Next.
13. In the Complete the Wizard window, click Finish.
Note: One warning about potential data truncation, and one error if the table does not already exist, may occur. This is normal.
14. In the Execution is completed window, click Close.
Challenge Exercise 4: Compare Loading Performance (Only if time permits)
Task 1: Re-execute the load with indexes disabled
1. In Solution Explorer, double-click the file 81 – Lab Exercise 4.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
4. Wait for the query to complete and record the duration of the query. (The duration is shown on the bottom right-hand side of the status bar.)
Note: A typical execution time on the VM is approximately 1.2 minutes.
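The faster time in this exercise typically comes from disabling nonclustered indexes before the load and rebuilding them afterward, so index maintenance is deferred until the data is in place. A sketch of that pattern, with an assumed index name:

```sql
-- Disable a nonclustered index before the load (index name is hypothetical).
ALTER INDEX IX_Prospect_LastName ON Marketing.Prospect DISABLE;

-- ... perform the bulk load here ...

-- Rebuild the index once the load completes.
ALTER INDEX IX_Prospect_LastName ON Marketing.Prospect REBUILD;
```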
Module 9: Authenticating and Authorizing Users
Lab 9: Authenticating and Authorizing Users
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_09_PRJ\10775A_09_PRJ.ssmssln.
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Exercise 1: Create Logins
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation. Determine the required Windows logins, Windows group logins, and SQL logins.
Task 2: Create the required logins
1. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
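The script creates the logins in T-SQL. For reference, the three kinds of logins the requirements call for are created like this; the principal names below are assumptions for illustration, not the lab's actual list:

```sql
-- Windows user login (account name is hypothetical).
CREATE LOGIN [MIA-SQL1\Darren.Parker] FROM WINDOWS;

-- Windows group login (group name is hypothetical).
CREATE LOGIN [MIA-SQL1\MarketingTeam] FROM WINDOWS;

-- SQL Server authenticated login with password policy enforcement.
CREATE LOGIN PromoteApp WITH PASSWORD = 'Pa$$w0rd', CHECK_POLICY = ON;
```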
Exercise 2: Correct an Application Login Issue
Task 1: Correct an application login issue
1. In Solution Explorer, double-click the file 61 – Lab Exercise 2.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
Exercise 3: Create Database Users
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation. Determine the required Windows logins, Windows group logins, and SQL logins.
Task 2: Create the required database users
1. In Solution Explorer, double-click the file 71 – Lab Exercise 3.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
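Database users are created by mapping each login into the database. A minimal sketch, with assumed principal names:

```sql
-- Map logins to database users in MarketDev (names are illustrative).
USE MarketDev;
CREATE USER [MIA-SQL1\Darren.Parker] FOR LOGIN [MIA-SQL1\Darren.Parker];
CREATE USER PromoteApp FOR LOGIN PromoteApp;
```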
Challenge Exercise 4: Correct Access to Restored Database (Only if time permits)
Task 1: Correct access to a restored database
1. In Solution Explorer, double-click the file 81 – Lab Exercise 4.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
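A database restored from another instance often contains orphaned users: database users whose SIDs no longer match any login on the server. A sketch of how such a script might detect and repair them (the user name is an assumption):

```sql
-- List SQL-authenticated database users without a matching server login.
SELECT dp.name
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE dp.type = 'S'           -- SQL user
  AND sp.sid IS NULL;

-- Re-map an orphaned user to an existing login (hypothetical name).
ALTER USER PromoteApp WITH LOGIN = PromoteApp;
```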
Module 10: Assigning Server and Database Roles
Lab 10: Assigning Server and Database Roles
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_10_PRJ\10775A_10_PRJ.ssmssln.
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Exercise 1: Assign Server Roles
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation.
Task 2: Assign any required server roles
1. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
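In SQL Server 2012, server role membership can be granted directly with ALTER SERVER ROLE. A sketch with an assumed login name:

```sql
-- Add a login to a fixed server role (SQL Server 2012 syntax; login name assumed).
ALTER SERVER ROLE serveradmin ADD MEMBER [MIA-SQL1\Darren.Parker];
```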
Exercise 2: Assign Fixed Database Roles
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation.
Task 2: Assign any required fixed database roles
1. In Solution Explorer, double-click the file 61 – Lab Exercise 2.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
Exercise 3: Create and Assign User-defined Database Roles
Task 1: Review the requirements
• Review the supplied security requirements in the supporting documentation.
Task 2: Create and assign any required user-defined database roles
1. In Solution Explorer, double-click the file 71 – Lab Exercise 3.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
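User-defined database roles are created and populated with CREATE ROLE and ALTER ROLE. A sketch with assumed role and member names:

```sql
USE MarketDev;
-- Create a user-defined role owned by dbo (role name is hypothetical).
CREATE ROLE MarketingReaders AUTHORIZATION dbo;
-- Add a database user to the new role (member name is hypothetical).
ALTER ROLE MarketingReaders ADD MEMBER [MIA-SQL1\Darren.Parker];
```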
Challenge Exercise 4: Check Role Assignments (Only if time permits)
Task 1: Check the role assignments for Darren Parker
1. In Solution Explorer, double-click the file 81 – Lab Exercise 4.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
Module 11: Authorizing Users to Access Resources
Lab 11: Authorizing Users to Access Resources
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_11_PRJ\10775A_11_PRJ.ssmssln.
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Exercise 1: Assign Schema-level Permissions
Task 1: Review the security requirements that have been updated from the previous module
1. Review the supplied security requirements in the supporting documentation.
2. Determine the permissions that should be assigned at the schema level. (Note that a sample solution is shown in Task 2.)
Task 2: Assign the required permissions
1. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
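Schema-level permissions are granted with the SCHEMA:: scope qualifier, which covers every current and future object in the schema. A sketch of the pattern; the role name is an assumption, while the Marketing and DirectMarketing schemas appear elsewhere in these labs:

```sql
USE MarketDev;
-- Grant read access to every table and view in a schema (role name assumed).
GRANT SELECT ON SCHEMA::Marketing TO MarketingReaders;
-- Grant execute rights on every procedure in another schema.
GRANT EXECUTE ON SCHEMA::DirectMarketing TO MarketingReaders;
```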
Exercise 2: Assign Object-level Permissions
Task 1: Review the security requirements
1. Review the supplied security requirements in the supporting documentation.
2. Determine the permissions that should be assigned at the object level. (Note that a sample solution is shown in Task 2.)
Task 2: Assign the required permissions
1. In Solution Explorer, double-click the file 61 – Lab Exercise 2.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
Challenge Exercise 3: Test Permissions (Only if time permits)
Task 1: Design and execute a test
1. In Solution Explorer, double-click the file 71 – Lab Exercise 3a.sql to open it.
2. Review the T-SQL script.
3. On the toolbar, click Execute.
4. In Solution Explorer, double-click the file 72 – Lab Exercise 3b.sql to open it.
5. Review the T-SQL script.
6. On the toolbar, click Execute.
Note: An error is returned because April.Reagan does not have permission to select rows from the DirectMarketing.Competitor table.
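Permission tests such as the one in 72 – Lab Exercise 3b.sql are usually written by switching execution context rather than reconnecting as the other user. A sketch of the pattern, using the user and table named in this exercise:

```sql
-- Impersonate the user, attempt the restricted action, then revert.
EXECUTE AS USER = 'April.Reagan';
SELECT * FROM DirectMarketing.Competitor;  -- fails if SELECT was not granted
REVERT;  -- return to the original security context
```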
Module 12: Auditing SQL Server Environments
Lab 12: Auditing SQL Server Environments
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_12_PRJ\10775A_12_PRJ.ssmssln.
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Exercise 1: Determine Audit Configuration and Create Audit
Task 1: Review the requirements
• Review the supplied requirements in the supporting documentation for the exercise, noting requirements that relate to server audits.
Task 2: Create the server audit
1. Determine the configuration of the required server audit.
2. Using Windows Explorer, create a folder C:\Audit.
3. Using Windows Explorer, create a folder C:\Audit\AuditLog.
4. In Object Explorer, expand the Proseware server, and expand Security.
5. Right-click Audits and click New Audit.
6. In the Create Audit window, type Proseware Compliance Audit in the Audit name textbox.
7. In the Queue delay (in milliseconds) textbox, type 2000.
8. For Audit Log Failure, select the Shut down server option.
9. In the File path textbox, type C:\Audit\AuditLog.
10. Uncheck the Unlimited checkbox for Maximum file size.
11. In the Maximum file size textbox, type 1.
12. Click the GB option button for Maximum file size and click OK.
13. In Object Explorer, expand Audits, right-click Proseware Compliance Audit and click Enable Audit.
14. In the Enable Audit window, click Close.
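The audit configured through the GUI in Task 2 corresponds to the following T-SQL. This is a sketch built from the values in the steps above, not the exact script SSMS would generate:

```sql
USE master;
CREATE SERVER AUDIT [Proseware Compliance Audit]
TO FILE (
    FILEPATH = 'C:\Audit\AuditLog',  -- folder created in step 3
    MAXSIZE  = 1 GB                  -- steps 10-12
)
WITH (
    QUEUE_DELAY = 2000,              -- milliseconds, step 7
    ON_FAILURE  = SHUTDOWN           -- "Shut down server" option, step 8
);
-- Equivalent of Enable Audit in step 13.
ALTER SERVER AUDIT [Proseware Compliance Audit] WITH (STATE = ON);
```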
Exercise 2: Create Server Audit Specifications
Task 1: Review the requirements
• Review the supplied requirements in the supporting documentation for the exercise, noting requirements that relate to server audit specifications.
Task 2: Create the server audit specifications
1. Determine the configuration of the required server audit specifications.
2. In Object Explorer, right-click Server Audit Specifications, and click New Server Audit Specification.
3. In the Create Server Audit Specification window, type Proseware Compliance Server Audit Specification in the Name textbox.
4. From the Audit drop-down list, select Proseware Compliance Audit.
5. In the Actions list, in row 1, click the Audit Action Type drop-down, and click FAILED_LOGIN_GROUP.
6. In the Actions list, in row 2, click the Audit Action Type drop-down, and click SERVER_PRINCIPAL_CHANGE_GROUP.
7. In the Actions list, in row 3, click the Audit Action Type drop-down, and click SERVER_ROLE_MEMBER_CHANGE_GROUP.
8. In the Actions list, in row 4, click the Audit Action Type drop-down, and click LOGIN_CHANGE_PASSWORD_GROUP, and click OK.
9. In Object Explorer, expand Server Audit Specifications, right-click Proseware Compliance Server Audit Specification, and click Enable Server Audit Specification.
10. In the Enable Server Audit Specification window, click Close.
Exercise 3: Create Database Audit Specifications
Task 1: Review the requirements
• Review the supplied requirements in the supporting documentation for the exercise, noting requirements that relate to database audit specifications.
Task 2: Create the database audit specifications
1. Determine the configuration of the required database audit specifications.
2. In Object Explorer, expand the Proseware server, expand Databases, expand the MarketDev database, expand Security, and expand Database Audit Specifications.
3. Right-click Database Audit Specifications, and click New Database Audit Specification.
4. In the Create Database Audit Specification window, type Proseware Compliance MarketDev Audit Specification in the Name textbox.
5. From the Audit drop-down list, select Proseware Compliance Audit.
6. In the Actions list, in row 1, click the Audit Action Type drop-down, and click BACKUP_RESTORE_GROUP.
7. In the Actions list, in row 2, click the Audit Action Type drop-down, and click DATABASE_OWNERSHIP_CHANGE_GROUP.
8. In the Actions list, in row 3, click the Audit Action Type drop-down, and click DATABASE_PERMISSION_CHANGE_GROUP.
9. In the Actions list, in row 4, click the Audit Action Type drop-down, and click DATABASE_PRINCIPAL_CHANGE_GROUP.
10. In the Actions list, in row 5, click the Audit Action Type drop-down, and click DATABASE_ROLE_MEMBER_CHANGE_GROUP.
11. In the Actions list, in row 6, click the Audit Action Type drop-down, and click EXECUTE.
12. In row 6, click the Object Class drop-down list and click OBJECT.
13. In row 6, click the ellipsis in the Object Name column.
14. In the Select Objects window, click Browse.
15. In the Matching Objects pane, check the box beside [Marketing].[MoveCampaignBalance], and click OK.
16. In the Select Objects window, click OK.
17. In row 6, click the ellipsis in the Principal Name column.
18. In the Select Objects window, click Browse.
19. In the Matching Objects pane, check the box beside [public], and click OK.
20. In the Select Objects window, click OK.
21. In the Actions list, in row 7, click the Audit Action Type drop-down, and click UPDATE.
22. In row 7, click the Object Class drop-down list and click OBJECT.
23. In row 7, click the ellipsis in the Object Name column.
24. In the Select Objects window, click Browse.
25. In the Matching Objects pane, check the box beside [Marketing].[CampaignBalance], and click OK.
26. In the Select Objects window, click OK.
27. In row 7, click the ellipsis in the Principal Name column.
28. In the Select Objects window, click Browse.
29. In the Matching Objects pane, check the box beside [public], and click OK.
30. In the Select Objects window, click OK.
31. In the Create Database Audit Specification window, click OK.
32. In Object Explorer, right-click Proseware Compliance MarketDev Audit Specification and click Enable Database Audit Specification.
33. In the Enable Database Audit Specification window, click Close.
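The database audit specification built in Task 2 corresponds to the following T-SQL sketch; the action groups, objects, and audit names are taken from the steps above:

```sql
USE MarketDev;
CREATE DATABASE AUDIT SPECIFICATION [Proseware Compliance MarketDev Audit Specification]
FOR SERVER AUDIT [Proseware Compliance Audit]
    ADD (BACKUP_RESTORE_GROUP),
    ADD (DATABASE_OWNERSHIP_CHANGE_GROUP),
    ADD (DATABASE_PERMISSION_CHANGE_GROUP),
    ADD (DATABASE_PRINCIPAL_CHANGE_GROUP),
    ADD (DATABASE_ROLE_MEMBER_CHANGE_GROUP),
    -- Object-level actions, audited for every principal via public:
    ADD (EXECUTE ON OBJECT::Marketing.MoveCampaignBalance BY public),
    ADD (UPDATE  ON OBJECT::Marketing.CampaignBalance     BY public)
WITH (STATE = ON);  -- equivalent of Enable Database Audit Specification
```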
Challenge Exercise 4: Test Audit Functionality (Only if time permits)
Task 1: Execute the workload script
1. In Solution Explorer, double-click the file 81 – Lab Exercise 4a.sql to open it.
2. Review the T-SQL script. Note any actions that you would expect to be audited. In Task 2 you will see the actual list of audited actions.
3. On the toolbar, click Execute.
Task 2: Review the captured audit details
1. In Object Explorer, expand the Proseware server, expand Security, and expand Audits.
2. Right-click Proseware Compliance Audit, and click View Audit Logs.
3. In the Log File Viewer – Proseware window, review the captured events and compare them to your list of expected events from Task 1.
Note: Make sure you scroll the list of events to the right to see all available columns.
4. In the Log File Viewer – Proseware window, click Close.
5. In Solution Explorer, double-click the file 82 – Lab Exercise 4b.sql to open it.
6. Review the T-SQL script.
7. On the toolbar, click Execute. A list of audited events will be returned.
Module 13: Automating SQL Server 2012 Management
Lab 13: Automating SQL Server Management
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_13_PRJ\10775A_13_PRJ.ssmssln.
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Exercise 1: Create a Data Extraction Job
Task 1: Create the required job
1. In Object Explorer, expand the Proseware server, expand SQL Server Agent, right-click Jobs and click New Job.
2. In the New Job window, in the Name textbox type Extract Uncontacted Prospects.
3. In the Select a page pane, click Steps, and then click New.
4. In the Step name textbox, type Execute Prospect Extraction Package.
5. From the Type drop-down list, select SQL Server Integration Services Package.
6. In the General tab, from the Package source drop-down, select SQL Server.
7. In the Server textbox type Proseware, and click the ellipsis beside the Package textbox.
8. In the Select an SSIS Package window, click Weekly Extract of Prospects to Contact and click OK.
9. In the New Job Step window, click OK.
10. In the New Job window, click OK.
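The same job can be created with the SQL Server Agent stored procedures in msdb. The dtexec-style @command value below is an abbreviated assumption; SSMS generates a longer command line for SSIS steps:

```sql
USE msdb;
-- Create the job shell.
EXEC dbo.sp_add_job
    @job_name = N'Extract Uncontacted Prospects';
-- Add the SSIS package execution step (command line is a simplified assumption).
EXEC dbo.sp_add_jobstep
    @job_name  = N'Extract Uncontacted Prospects',
    @step_name = N'Execute Prospect Extraction Package',
    @subsystem = N'SSIS',
    @command   = N'/SQL "\Weekly Extract of Prospects to Contact" /SERVER Proseware';
-- Target the local server so the job can run.
EXEC dbo.sp_add_jobserver
    @job_name = N'Extract Uncontacted Prospects';
```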
Task 2: Test that the job executes without error
1. In Object Explorer, expand the Proseware server, expand SQL Server Agent, and expand Jobs.
2. Right-click Jobs and click Refresh.
3. Right-click Extract Uncontacted Prospects, and click Start Job at Step.
4. In the Start Jobs - Proseware window, make sure that the job succeeds and click Close.
Exercise 2: Schedule the Data Extraction Job
Task 1: Schedule the data extraction job
1. In Object Explorer, expand the Proseware server, expand SQL Server Agent, and expand Jobs.
2. Right-click Jobs and click Refresh.
3. Right-click Extract Uncontacted Prospects, and click Properties.
4. On the Select a page pane, click Schedules, and then click New.
5. In the New Job Schedule window, in the Name textbox type Each Monday Morning at 8:30AM.
6. Check the Monday checkbox; uncheck the Sunday checkbox.
7. In the Occurs once at textbox, change the time to 8:30:00AM and press the Tab key. Note the contents of the Description textbox and click OK.
8. In the Job Properties – Extract Uncontacted Prospects window, click New.
9. In the New Job Schedule window, in the Name textbox type Each Tuesday Evening at 6:30PM.
10. Check the Tuesday checkbox; uncheck the Sunday checkbox.
11. In the Occurs once at textbox, change the time to 6:30:00PM and press the Tab key. Note the contents of the Description textbox and click OK.
12. In the Job Properties – Extract Uncontacted Prospects window, click OK.
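Schedules can likewise be created and attached in T-SQL. A sketch of the first schedule, following the sp_add_schedule conventions (freq_type 8 is weekly, freq_interval 2 is Monday):

```sql
USE msdb;
-- Weekly schedule: Mondays at 08:30.
EXEC dbo.sp_add_schedule
    @schedule_name          = N'Each Monday Morning at 8:30AM',
    @freq_type              = 8,      -- weekly
    @freq_interval          = 2,      -- Monday (day-of-week bitmask)
    @freq_recurrence_factor = 1,      -- every week
    @active_start_time      = 83000;  -- 08:30:00 as HHMMSS
-- Attach the schedule to the job.
EXEC dbo.sp_attach_schedule
    @job_name      = N'Extract Uncontacted Prospects',
    @schedule_name = N'Each Monday Morning at 8:30AM';
```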
Challenge Exercise 3: Troubleshoot a Failing Job (Only if time permits)
Task 1: Troubleshoot the failing job
1. In Object Explorer, expand the Proseware server, expand SQL Server Agent, and expand Jobs.
2. Right-click Jobs and click Refresh.
3. Right-click Extract Long Page Loads, and click View History.
4. In the Log File Viewer – Proseware window, click the plus sign in a row that has a failing execution to expand the job steps. Note that step ID 2 has failed.
5. Click the row for step ID 2, and scroll through the Selected row details pane to find the error. Note that the error is caused by a reference to an invalid object name, Marketing.RecentLongPageLoads.
6. Click Close to close the Log File Viewer – Proseware window.
7. Right-click Extract Long Page Loads, and click Properties.
8. In the Select a page pane, click Steps.
9. In the Job step list, click the row for Step 2 and click Edit.
10. Review the command that is being executed and note that Marketing.RecentLongPageLoads appears to be a table or view into which the web log rows are being inserted.
11. In Object Explorer, expand Databases, expand MarketDev, and expand Tables. Note that the name of the table should be Marketing.RecentLongPageLoad.
12. In the Job Step Properties – Copy Recent Long Page Loads window, change the name of the table in the Command textbox from Marketing.RecentLongPageLoads to Marketing.RecentLongPageLoad and click OK.
13. In the Job Properties – Extract Long Page Loads window, click OK.
14. In Object Explorer, right-click the Extract Long Page Loads job and click Start Job at Step.
15. In the Start Job on 'Proseware' window, click Start.
16. In the Start Job on 'Proseware' window, make sure the job executed successfully and click Close.
17. In Object Explorer, right-click Extract Long Page Loads, and click Properties.
18. In the Select a page pane, click Schedules and note the difference between the name of the schedule and the description of the schedule, then click Edit.
19. From the Occurs drop-down list box, click Weekly.
20. Check the Monday checkbox and review the remaining settings, then click OK.
21. In the Schedule list, note that the schedule name now relates to the schedule description.
22. In the Job Properties – Extract Long Page Loads window, click OK.
Module 14: Configuring Security for SQL Server Agent
Lab 14: Configuring Security for SQL Server Agent
Lab Setup
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must complete the following steps:
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.
2. In the virtual machine, click Start, click All Programs, click Microsoft SQL Server 2012, and click SQL Server Management Studio.
3. In the Connect to Server window, type Proseware in the Server name text box.
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.
5. In the File menu, click Open, and click Project/Solution.
6. In the Open Project window, open the project D:\10775A_Labs\10775A_14_PRJ\10775A_14_PRJ.ssmssln.
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click Execute on the toolbar.
Exercise 1: Troubleshoot Job Execution Failure
Task 1: Troubleshoot job execution failure
1. Review the previous actions taken by Terry Adams as detailed in the supporting documentation for the exercise.
2. In Object Explorer, expand the Proseware server, expand SQL Server Agent, expand Jobs, right-click Extract Uncontacted Prospects and click View History.
3. In the Log File Viewer - Proseware window, click the plus sign in a failing row to expand the job steps.
4. Click the row for Step ID 1. In the Selected row details pane, scroll the pane until you locate the error.
Note: The error indicates that non-SysAdmins have been denied permission to run DTS Execution job steps without a proxy account. The step failed.
5. You have determined that a proxy account is needed for the job step. Close the Log File Viewer - Proseware window.
Exercise 2: Resolve the Security Issue
Task 1: Create and assign a proxy account
1. In Object Explorer, expand the Proseware server, expand Security, and expand Credentials.
2. Right-click Credentials and click New Credential.
3. In the New Credential window, in the Credential name text box, type ExtractIdentity.
4. In the Identity textbox, type MIA-SQL1\ExtractUser.
5. In the Password and Confirm password textboxes, type Pa$$w0rd, and click OK.
6. In Object Explorer, expand SQL Server Agent, right-click Proxies, and click New Proxy.
7. In the New Proxy Account window, in the Proxy name textbox, type ExtractionProxy.
8. In the Credential name textbox, type ExtractIdentity.
9. In the Active to the following subsystems list, check the box beside SQL Server Integration Services Package.
10. In the Select a page pane, click Principals, and click Add.
11. In the Available principals list, check the box beside the PromoteApp login, and click OK.
12. In the New Proxy Account window, click OK.
13. In Object Explorer, expand Jobs, right-click Extract Uncontacted Prospects and click Properties.
14. In the Job Properties - Extract Uncontacted Prospects window, in the Select a page pane click Steps, then click Edit.
15. In the Run as drop-down list, click ExtractionProxy, and click OK.
16. In the Job Properties - Extract Uncontacted Prospects window, click OK.
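Task 1 can also be performed in T-SQL. A sketch using the names from the steps above; the subsystem and login grants mirror steps 9 through 11:

```sql
-- Credential holding the Windows identity the proxy will run as.
CREATE CREDENTIAL ExtractIdentity
WITH IDENTITY = 'MIA-SQL1\ExtractUser', SECRET = 'Pa$$w0rd';

USE msdb;
-- Proxy based on the credential.
EXEC dbo.sp_add_proxy
    @proxy_name      = N'ExtractionProxy',
    @credential_name = N'ExtractIdentity';
-- Allow the proxy to run SSIS package job steps.
EXEC dbo.sp_grant_proxy_to_subsystem
    @proxy_name     = N'ExtractionProxy',
    @subsystem_name = N'SSIS';
-- Allow the PromoteApp login to use the proxy.
EXEC dbo.sp_grant_login_to_proxy
    @proxy_name = N'ExtractionProxy',
    @login_name = N'PromoteApp';
```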
Task 2: Test to see if all problems have been resolved
1. In Object Explorer, right-click the Extract Uncontacted Prospects job, and click Start Job at Step.
2. In the Start Jobs - Proseware window, note that the job still fails, and click Close.
3. In Object Explorer, right-click the Extract Uncontacted Prospects job, and click View History.
4. In the Log File Viewer - Proseware window, click the plus sign beside the top entry in the list to expand the job steps.
5. Click the row for Step ID 1. In the Selected row details pane, scroll down to find the error.
Note: A SQL statement is now failing because of a login issue.
6. You have resolved the original problem. If you have time, continue to Exercise 3 to resolve the remaining problem.
Challenge Exercise 3: Perform Further Troubleshooting (Only if time<br />
permits)<br />
Task 1: Perform further troubleshooting<br />
1. In the Log File Viewer - Proseware window, in the Select row details pane, read the detail of the<br />
error message, determine the cause of the error and close the window.<br />
Note The most important error is Login failed for user ‘MIA-<strong>SQL</strong>1\ExtractUser’. Even<br />
though a Windows credential is required for the SSIS job to access the file system to write<br />
the extracted file, the credential also needs to be able to connect to <strong>SQL</strong> Server to retrieve<br />
the data from the Marketing.Prospects table. You need to create a login for the Windows<br />
user, create a database user for the login and then assign SELECT permission on the<br />
Marketing.Prospects table to the credential.<br />
2. In Solution Explorer, double-click the file 71 – Lab Exercise 3.sql to open it.<br />
3. Review the T-<strong>SQL</strong> script.<br />
4. On the Toolbar click Execute.<br />
5. In Object Explorer, right-click the Extract Uncontacted Prospects job, and click Start Job at Step.<br />
6. In the Start Jobs - Proseware window, note that the job still fails, and click Close.<br />
7. In Object Explorer, right-click the Extract Uncontacted Prospects job, and click View History.<br />
8. In the Log File Viewer - Proseware window, click the plus sign beside the top entry in the list to<br />
expand the job steps.<br />
9. Click the row for Step ID 1, and in the Selected row details pane, scroll down to find the error.<br />
Note ExtractUser does not have permissions on the D:\MKTG folder.<br />
10. In Windows Explorer, navigate to the folder D:\MKTG, right-click the MKTG folder and click<br />
Properties.<br />
11. In the MKTG Properties window, click the Security tab, and then click Edit.<br />
12. In the Permissions for MKTG window, click Add.<br />
13. In the Select Users, Computers, Service Accounts, or Groups window, in the Enter the object<br />
names to select textbox type MIA-<strong>SQL</strong>1\ExtractUser and click OK.<br />
14. In the Permissions for MKTG window, check the Allow checkbox for the Modify row and click OK.<br />
15. In the MKTG Properties window, click OK.<br />
16. In Object Explorer, right-click the Extract Uncontacted Prospects job, and click Start Job at Step.<br />
17. In the Start Jobs - Proseware window, note that the job now completes successfully, and click Close.
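The folder-permission steps above can also be performed from an elevated command prompt. This is a sketch only (untested in this environment); it grants Modify, with inheritance to files and subfolders, to the same Windows user:<br />

```shell
rem Grant Modify on D:\MKTG to ExtractUser (OI = object inherit, CI = container inherit)
icacls D:\MKTG /grant "MIA-SQL1\ExtractUser:(OI)(CI)M"
```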
Module 15: Monitoring <strong>SQL</strong> Server <strong>2012</strong> with Alerts and<br />
Notifications<br />
Lab 15: Monitoring <strong>SQL</strong> Agent Jobs with<br />
Alerts and Notifications<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_15_PRJ\10775A_15_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Configure <strong>Database</strong> Mail<br />
Task 1: Configure database mail<br />
1. Review the database mail configuration parameters in the supporting documentation for the<br />
exercise.<br />
2. In Object Explorer, expand the Proseware server, expand Management, right-click <strong>Database</strong> Mail,<br />
and click Configure <strong>Database</strong> Mail.<br />
3. In the Welcome to <strong>Database</strong> Mail Configuration Wizard window, click Next.<br />
4. In the Select Configuration Task window, click Next.<br />
5. In the Microsoft <strong>SQL</strong> Server Management Studio window, click Yes.<br />
6. In the New Profile window, in the Profile name textbox type Proseware <strong>SQL</strong> Server Agent<br />
Profile, and click Add.<br />
7. In the New <strong>Database</strong> Mail Account window, in the Account name textbox, type Proseware<br />
Administrator.<br />
8. In the E-mail address textbox, type prosewaresqladmin@adventureworks.com.<br />
9. In the Display name textbox, type Proseware <strong>SQL</strong> Server Administrator.<br />
10. In the Reply e-mail textbox, type prosewaresqladmin@adventureworks.com.<br />
11. In the Server name textbox type mailserver.adventureworks.com, and click OK.
12. In the New Profile window, click Add.<br />
13. In the New <strong>Database</strong> Mail Account window, in the Account name textbox, type AdventureWorks<br />
Administrator.<br />
14. In the E-mail address textbox, type adventureworkssqladmin@adventureworks.com.<br />
15. In the Display name textbox, type AdventureWorks <strong>SQL</strong> Server Administrator.<br />
16. In the Reply e-mail textbox, type adventureworkssqladmin@adventureworks.com.<br />
17. In the Server name textbox type mailserver.adventureworks.com, and click OK.<br />
18. In the New Profile window, click Next.<br />
19. In the Manage Profile Security window, check the box in the Public column and change the value<br />
in the Default Profile column to Yes.<br />
20. Click on the Private Profiles tab, and from the User name drop-down list, select<br />
ADVENTUREWORKS\pwservice.<br />
21. Check the box in the Access column, and change the value in the Default Profile column to Yes,<br />
then click Next.<br />
22. In the Configure System Parameters window, change the Maximum File Size (Bytes) to 4194304,<br />
and click Next.<br />
23. In the Complete the Wizard window, click Finish.<br />
24. In the Configuring window, click Close.<br />
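The wizard steps above can equivalently be scripted. This is a sketch of the first account and profile using the msdb Database Mail procedures, with the parameter values taken from the steps above; the lab itself uses the wizard, so treat this as illustrative:<br />

```sql
-- Sketch: Database Mail account + profile equivalent to wizard steps 6-11.
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = N'Proseware Administrator',
    @email_address = N'prosewaresqladmin@adventureworks.com',
    @display_name = N'Proseware SQL Server Administrator',
    @replyto_address = N'prosewaresqladmin@adventureworks.com',
    @mailserver_name = N'mailserver.adventureworks.com';

EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = N'Proseware SQL Server Agent Profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = N'Proseware SQL Server Agent Profile',
    @account_name = N'Proseware Administrator',
    @sequence_number = 1;
```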
Task 2: Test that database mail operates<br />
1. In Object Explorer, right-click <strong>Database</strong> Mail and click Send Test E-Mail.<br />
2. In the Send Test E-Mail from MIA-<strong>SQL</strong>1\MKTG window, in the To textbox type<br />
prosewaresqladmin@adventureworks.com, and click Send Test E-Mail.<br />
3. In the <strong>Database</strong> Mail Test E-Mail window, note the description and click OK.<br />
4. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.<br />
5. Review the T-<strong>SQL</strong> script.<br />
6. On the Toolbar click Execute.<br />
Note An email with the subject <strong>Database</strong> Mail Test should be returned by the query.<br />
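The test in this task can also be driven from T-SQL. The following sketch sends a message and checks its status; the actual contents of 51 – Lab Exercise 1.sql may differ:<br />

```sql
-- Sketch: send a test message, then inspect the Database Mail queue.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'Proseware SQL Server Agent Profile',
    @recipients = N'prosewaresqladmin@adventureworks.com',
    @subject = N'Database Mail Test',
    @body = N'This is a test message.';

-- sent_status shows unsent / sent / failed / retrying
SELECT mailitem_id, subject, sent_status
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC;
```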
Exercise 2: Implement Notifications<br />
Task 1: Review the requirements<br />
• Review the supplied requirements in the supporting documentation for the exercise. In particular,<br />
note any required operators.
Task 2: Configure the required operators<br />
1. In Object Explorer, expand <strong>SQL</strong> Server Agent, right-click Operators and click New Operator.<br />
2. In the New Operator window, in the Name textbox, type IT Support Fail-safe Operator.<br />
3. In the Pager e-mail name, type itsupport.pager@adventureworks.com.<br />
4. In the Pager on duty schedule, check Monday, Tuesday, Wednesday, Thursday, Friday,<br />
Saturday, and Sunday. Change the Workday begin time to 12:00:00 AM and Workday end to<br />
11:59:59 PM for every row, and click OK.<br />
5. In Object Explorer, right-click Operators and click New Operator.<br />
6. In the New Operator window, in the Name textbox, type Jeff Hay.<br />
7. In the Pager e-mail name, type jeff.hay.pager@adventureworks.com.<br />
8. In the Pager on duty schedule, check Monday, Tuesday, Wednesday, Thursday, Friday,<br />
Saturday, and Sunday. Change the Workday begin time to 12:00:00 AM and Workday end to<br />
11:59:59 PM for every row, and click OK.<br />
9. In Object Explorer, right-click Operators and click New Operator.<br />
10. In the New Operator window, in the Name textbox, type Palle Petersen.<br />
11. In the Pager e-mail name, type palle.petersen.pager@adventureworks.com.<br />
12. In the Pager on duty schedule, check Monday, Tuesday, Wednesday, Thursday, Friday,<br />
Saturday, and Sunday. Change the Workday begin time to 12:00:00 AM and Workday end to<br />
11:59:59 PM for every row, and click OK.<br />
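Each of the three operators above can also be created with msdb.dbo.sp_add_operator. A sketch for one of them (pager times are HHMMSS integers; @pager_days = 127 is the bitmask covering all seven days):<br />

```sql
-- Sketch: one operator equivalent to the dialog settings above.
EXEC msdb.dbo.sp_add_operator
    @name = N'Jeff Hay',
    @pager_address = N'jeff.hay.pager@adventureworks.com',
    @pager_days = 127,                 -- Sun=1, Mon=2, ... Sat=64; 127 = every day
    @weekday_pager_start_time = 0,     -- 12:00:00 AM
    @weekday_pager_end_time = 235959,  -- 11:59:59 PM
    @saturday_pager_start_time = 0,
    @saturday_pager_end_time = 235959,
    @sunday_pager_start_time = 0,
    @sunday_pager_end_time = 235959;
```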
Task 3: Configure <strong>SQL</strong> Server Agent Mail<br />
1. In Object Explorer, right-click <strong>SQL</strong> Server Agent, and click Properties.<br />
2. In the Select a page pane click Alert System. In the Mail session options, click Enable mail<br />
profile.<br />
3. From the Mail profile drop-down list, select Proseware <strong>SQL</strong> Server Agent Profile.<br />
4. In the Fail-safe operator options, check Enable fail-safe operator.<br />
5. In the Operator drop-down list, select IT Support Fail-safe Operator.<br />
6. In the Notify using options, check Pager, and click OK.<br />
7. In Object Explorer, right-click <strong>SQL</strong> Server Agent and click Restart.<br />
8. In the <strong>SQL</strong> Server Management Studio window, click Yes.<br />
Task 4: Configure and Test Notifications in <strong>SQL</strong> Server Agent Jobs<br />
1. In Object Explorer, expand <strong>SQL</strong> Server Agent, expand Jobs, right-click Backup Log TestAlertDB<br />
and click Properties.<br />
2. In the Select a page pane, click Notifications.<br />
3. In the Actions to perform when the job completes option, check Page.<br />
4. In the Page drop-down list, select Jeff Hay.
5. In the Page actions drop-down list, select When the job completes, and click OK.<br />
6. In Object Explorer, right-click Job That Fails and click Properties.<br />
7. In the Select a page pane, click Notifications.<br />
8. In the Actions to perform when the job completes option, check Page.<br />
9. In the Page drop-down list, select Palle Petersen.<br />
10. In the Page actions drop-down list, select When the job fails, and click OK.<br />
11. In Object Explorer, right-click Job That Succeeds and click Properties.<br />
12. In the Select a page pane, click Notifications.<br />
13. In the Actions to perform when the job completes option, check Page.<br />
14. In the Page drop-down list, select Palle Petersen.<br />
15. In the Page actions drop-down list, select When the job fails, and click OK.<br />
16. In Object Explorer, right-click Backup Log TestAlertDB and click Start Job at Step.<br />
17. In the Start Jobs - Proseware window, click Close.<br />
18. In Object Explorer, right-click Job That Fails, and click Start Job at Step.<br />
19. In the Start Jobs - Proseware window, note that the job failed and click Close.<br />
20. In Object Explorer, right-click Job That Succeeds and click Start Job at Step.<br />
21. In the Start Jobs - Proseware window, click Close.<br />
22. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.<br />
23. Review the T-<strong>SQL</strong> script.<br />
24. On the Toolbar click Execute.<br />
Note Two additional emails should now appear in the list, an email from the backup job<br />
sent to Jeff Hay and an email from the job that fails sent to Palle Petersen. No email<br />
should be sent for the job called “Job That Succeeds”.<br />
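The notification settings applied in this task map directly to msdb.dbo.sp_update_job. A sketch of two of them (the GUI is authoritative for this lab):<br />

```sql
-- Sketch: notify_level values are 1 = on success, 2 = on failure, 3 = on completion.
EXEC msdb.dbo.sp_update_job
    @job_name = N'Backup Log TestAlertDB',
    @notify_level_page = 3,                    -- page when the job completes
    @notify_page_operator_name = N'Jeff Hay';

EXEC msdb.dbo.sp_update_job
    @job_name = N'Job That Fails',
    @notify_level_page = 2,                    -- page only when the job fails
    @notify_page_operator_name = N'Palle Petersen';
```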
Challenge Exercise 3: Implement Alerts (Only if time permits)<br />
Task 1: Configure and Test Alerts<br />
1. Review the supporting documentation for the alerting requirements.<br />
2. In Object Explorer, right-click Alerts and click New Alert.<br />
3. In the New Alert window, in the Name textbox, type Error Severity 17 Alert.<br />
4. In the Severity drop-down list, click 017 – Insufficient Resources.<br />
5. In the Select a page pane, click Response.<br />
6. Check Notify operators, check all checkboxes in the Pager column and click OK.<br />
7. In Object Explorer, right-click Alerts and click New Alert.
8. In the New Alert window, in the Name textbox, type Error Severity 18 Alert.<br />
9. In the Severity drop-down list, click 018 – Nonfatal Internal Error.<br />
10. In the Select a page pane, click Response.<br />
11. Check Notify operators, check all checkboxes in the Pager column and click OK.<br />
12. In Object Explorer, right-click Alerts and click New Alert.<br />
13. In the New Alert window, in the Name textbox, type Transaction Log Full Alert.<br />
14. Click the Error number option, and in the Error number textbox type 9002.<br />
15. In the Select a page pane, click Response.<br />
16. Check Notify operators, check all checkboxes in the Pager column and click OK.<br />
17. In Solution Explorer, double-click the file 71 – Lab Exercise 3.sql to open it.<br />
18. Review the T-<strong>SQL</strong> script.<br />
19. On the Toolbar click Execute.<br />
Note Executing this script will result in an error, indicating message 9002.<br />
20. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.<br />
21. Review the T-<strong>SQL</strong> script.<br />
22. On the Toolbar click Execute.<br />
Note Additional emails should be listed related to the <strong>SQL</strong> Server alert system.
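The alerts created in this exercise can equivalently be scripted with msdb.dbo.sp_add_alert and sp_add_notification. A sketch for two of them (an alert takes either @severity or @message_id, not both):<br />

```sql
-- Sketch: severity-based and error-number-based alerts, with a pager response.
EXEC msdb.dbo.sp_add_alert
    @name = N'Error Severity 17 Alert',
    @severity = 17;

EXEC msdb.dbo.sp_add_alert
    @name = N'Transaction Log Full Alert',
    @message_id = 9002;

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Transaction Log Full Alert',
    @operator_name = N'IT Support Fail-safe Operator',
    @notification_method = 2;   -- 1 = e-mail, 2 = pager, 4 = net send
```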
Module 16: Performing Ongoing <strong>Database</strong> Maintenance<br />
Lab 16: Performing Ongoing <strong>Database</strong><br />
Maintenance<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_16_PRJ\10775A_16_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Check <strong>Database</strong> Integrity Using DBCC CHECKDB<br />
Task 1: Check the consistency of the databases on the Proseware instance<br />
1. In Solution Explorer, double-click the file 51 – Lab Exercise 1a.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
Note Errors will be returned from the CoreAdmin database.<br />
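The consistency check the exercise script performs can be expressed, for a single database, roughly as follows (the actual script may iterate over every database on the instance):<br />

```sql
-- Sketch: report all errors, suppress informational messages.
DBCC CHECKDB (N'CoreAdmin') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```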
Task 2: Correct any issues found<br />
1. In Solution Explorer, double-click the file 52 – Lab Exercise 1b.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
Note Errors will be returned from the CoreAdmin database during this process.<br />
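A repair of the kind this task performs typically looks like the following sketch. The repair level is an assumption — the lab script may use a different one — and REPAIR_ALLOW_DATA_LOSS can discard data, so restoring a clean backup is always the preferred fix when one exists:<br />

```sql
-- Sketch: DBCC repair requires the database to be in SINGLE_USER mode.
ALTER DATABASE CoreAdmin SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'CoreAdmin', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE CoreAdmin SET MULTI_USER;
```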
Exercise 2: Correct Index Fragmentation<br />
Task 1: Review the fragmentation of indexes in the MarketDev database to determine<br />
which indexes should be defragmented and which indexes should be rebuilt<br />
1. In Solution Explorer, double-click the file 61 – Lab Exercise 2a.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
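A fragmentation review of the kind this task performs usually queries sys.dm_db_index_physical_stats; this sketch is illustrative and may differ from the exercise script:<br />

```sql
-- Sketch: fragmentation per index in MarketDev, worst first.
USE MarketDev;
GO
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;
```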
Task 2: Defragment indexes as determined<br />
1. In Solution Explorer, double-click the file 62 – Lab Exercise 2b.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
Task 3: Rebuild indexes as determined<br />
1. In Solution Explorer, double-click the file 63 – Lab Exercise 2c.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
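The defragment and rebuild scripts typically issue ALTER INDEX statements like the sketch below. A common rule of thumb is REORGANIZE below roughly 30% fragmentation and REBUILD above it; the index and table names here are illustrative, not taken from the lab scripts:<br />

```sql
-- Sketch: REORGANIZE for light fragmentation, REBUILD for heavy fragmentation.
ALTER INDEX IX_Prospects_LastName ON Marketing.Prospects REORGANIZE;
ALTER INDEX ALL ON Marketing.Prospects REBUILD WITH (FILLFACTOR = 90);
```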
Exercise 3: Create a <strong>Database</strong> Maintenance Plan<br />
Task 1: Create the required database maintenance plan<br />
1. In Object Explorer, expand Proseware, expand Management, right-click Maintenance Plans, and<br />
click Maintenance Plan Wizard.<br />
2. In the Maintenance Plan Wizard window, click Next.<br />
3. In the Select Plan Properties window, in the Name textbox, type Proseware Weekly Maintenance.<br />
Note the available scheduling options and click Change.<br />
4. In the New Job Schedule window, in the Name textbox, type 6PM Every Sunday.<br />
5. In the Occurs once at textbox, change the time to 6PM, and click OK.<br />
6. In the Select Plan Properties window, click Next.<br />
7. Review the available options and check Rebuild index and Check <strong>Database</strong> Integrity, and then<br />
click Next.<br />
8. In the Select Maintenance Task Order window, click Next.<br />
9. In the Define <strong>Database</strong> Check Integrity Task window, from the <strong>Database</strong>s drop-down list, select All<br />
databases, click OK, and then click Next.<br />
10. In the Define Rebuild Index Task window, from the <strong>Database</strong>s drop-down list, check the<br />
MarketDev database, and click OK.<br />
11. In the Free space options, click Change free space per page to:.<br />
12. In the Change free space per page to: textbox, type 10.
13. Check Keep index online while reindexing, and click Next.<br />
14. In the Select Report Options window, in the folder location textbox, type L:\MKTG, and click<br />
Next.<br />
15. In the Complete the Wizard window, click Finish.<br />
16. In the Maintenance Plan Wizard Progress window, click Close.<br />
Challenge Exercise 4: Investigate Table Lock Performance (Only if time<br />
permits)<br />
Task 1: Execute DBCC CHECKDB using database snapshots<br />
1. In Solution Explorer, double-click the file 81 – Lab Exercise 4a.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
Note Record the time taken to execute. A typical duration would be 42 seconds.<br />
Task 2: Execute DBCC CHECKDB using table locks<br />
1. In Solution Explorer, double-click the file 82 – Lab Exercise 4b.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. On the Toolbar click Execute.<br />
Note Record the time taken to execute. A typical duration would be 7 seconds.
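The two approaches this exercise compares differ only in the option used. By default DBCC CHECKDB reads an internal database snapshot; WITH TABLOCK takes locks instead, which ran faster here but blocks concurrent modifications while the check runs:<br />

```sql
DBCC CHECKDB (N'MarketDev');                -- snapshot-based check (default)
DBCC CHECKDB (N'MarketDev') WITH TABLOCK;   -- lock-based check
```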
Module 17: Tracing Access to <strong>SQL</strong> Server <strong>2012</strong><br />
Lab 17: Tracing Access to <strong>SQL</strong> Server <strong>2012</strong><br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_17_PRJ\10775A_17_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Capture a Trace Using <strong>SQL</strong> Server Profiler<br />
Task 1: Create and start a suitable <strong>SQL</strong> Server Profiler trace<br />
1. In Microsoft <strong>SQL</strong> Server Management Studio, on the Tools menu, click <strong>SQL</strong> Server Profiler.<br />
2. In the Connect to Server window, click Connect.<br />
3. In the Trace name, type Proseware Trace.<br />
4. In the Use the template drop-down, select Tuning.<br />
5. Check the Save to file option.<br />
6. In the Save As window, navigate to the desktop, and click Save.<br />
7. Uncheck the Enable file rollover option.<br />
8. Change the Set maximum file size (MB) option to 500.<br />
9. Click the Events Selection tab. Note the selected events.<br />
10. Click on the <strong>Database</strong> Name column heading to create a filter.<br />
11. Expand the Like node and enter MarketDev and click OK.<br />
12. In the Trace Properties window, click Run.<br />
13. From the Window menu, uncheck Auto scroll.<br />
Task 2: Execute the workload<br />
1. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.<br />
2. Review the T-<strong>SQL</strong> script.<br />
3. From the Query menu, select Query Options.<br />
4. In the Query Options window, click Grid. In the Grid options, check Discard results after<br />
execution.<br />
5. Click OK to close the Query Options window.<br />
6. On the Toolbar click Execute.<br />
7. Wait for the query to complete.<br />
Note The query is complete when Query executed successfully appears below the<br />
Messages tab. No output messages will be seen.<br />
Task 3: Stop the trace<br />
• In <strong>SQL</strong> Server Profiler, on the Toolbar, click the Stop Selected Trace icon, and close <strong>SQL</strong> Server<br />
Profiler.<br />
Exercise 2: Analyze a Trace Using <strong>Database</strong> Engine Tuning Advisor<br />
Task 1: Analyze the captured trace in <strong>Database</strong> Engine Tuning Advisor<br />
1. In Microsoft <strong>SQL</strong> Server Management Studio, from the Tools menu, click <strong>Database</strong> Engine Tuning<br />
Advisor.<br />
2. In the Connect to Server window, click Connect.<br />
3. Maximize the <strong>Database</strong> Engine Tuning Advisor window.<br />
4. In the Workload group box, ensure that File is selected, and click the Browse for a Workload File<br />
button.<br />
5. Browse to the Desktop folder, select the Proseware Trace.trc file and click Open.<br />
6. In the <strong>Database</strong> for workload analysis drop-down, select MarketDev.<br />
7. In the Select databases and tables to tune list, check the MarketDev database.<br />
8. From the Toolbar, click the Start Analysis button to start the tuning analysis.<br />
Task 2: Review the suggested modifications<br />
1. Observe the tuning progress and when the analysis is complete, note the recommendations. The<br />
exact recommendations will vary but it is likely you will see both index and statistics<br />
recommendations.<br />
2. Scroll the recommendations output to the right and note the hyperlinks to sample code.<br />
3. Click each of the recommendations in turn and note the suggested index or statistics structures.<br />
4. Close <strong>Database</strong> Engine Tuning Advisor.
Challenge Exercise 3: Configure <strong>SQL</strong> Trace (Only if time permits)<br />
Task 1: Create a script that uses <strong>SQL</strong> Trace procedures to implement the same type of<br />
capture as you performed in Exercise 1 but with a different trace name<br />
1. In Microsoft <strong>SQL</strong> Server Management Studio, from the Tools menu, click <strong>SQL</strong> Server Profiler.<br />
2. In the Connect to Server window, click Connect.<br />
3. In the Trace name, type ProsewareTrace2.<br />
4. In the Use the template drop-down, select Tuning.<br />
5. Check the Save to file option.<br />
6. In the Save As window, navigate to the desktop, and click Save.<br />
7. Uncheck the Enable file rollover option.<br />
8. Change the Set maximum file size (MB) option to 500.<br />
9. Click the Events Selection tab. Note the selected events.<br />
10. Click on the <strong>Database</strong> Name column heading to create a filter.<br />
11. Expand the Like node and enter MarketDev and click OK.<br />
12. In the Trace Properties window, click Run.<br />
13. From the Toolbar, click the Stop Selected Trace icon.<br />
14. From the File menu, click Export, click Script Trace Definition, click For <strong>SQL</strong> Server 2005 –<br />
<strong>SQL</strong>11.<br />
15. Navigate to the Desktop and in the File name text box, type ProsewareTrace2 and click Save.<br />
16. In the <strong>SQL</strong> Server Profiler window, click OK.<br />
17. Close <strong>SQL</strong> Server Profiler.<br />
Task 2: Test that the script works as expected by using the same workload<br />
1. In Solution Explorer, from the File menu, click Open, click File.<br />
2. In the Open File window, navigate to the Desktop, click ProsewareTrace2.sql and click Open.<br />
3. In the first row that begins with an exec command, change the string InsertFileNameHere to<br />
D:\MKTG\ProsewareTrace2.<br />
4. On the Toolbar click Execute to start the trace.<br />
Note Record the Trace ID value that is returned.<br />
5. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.<br />
6. Review the T-<strong>SQL</strong> script.<br />
7. From the Query menu, select Query Options.<br />
8. In the Query Options window, click Grid. In the Grid options, check Discard results after<br />
execution.
9. Click OK to close the Query Options window.<br />
10. On the Toolbar click Execute.<br />
11. Wait for the query to complete.<br />
Note The query is complete when Query executed successfully appears below the<br />
Messages tab. No output messages will be seen.<br />
12. In Solution Explorer, double-click the file 71 – Lab Exercise 3.sql to open it.<br />
13. Review the T-<strong>SQL</strong> script and replace the value of the @TraceID variable with the value you<br />
recorded earlier in this task.<br />
14. On the Toolbar click Execute.<br />
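Stopping and closing a <strong>SQL</strong> Trace is done with sp_trace_setstatus; this sketch shows what 71 – Lab Exercise 3.sql most likely contains, with a placeholder Trace ID:<br />

```sql
-- Sketch: status 0 = stop, 1 = start, 2 = close and delete the trace definition.
DECLARE @TraceID int = 2;   -- replace with the Trace ID you recorded earlier
EXEC sp_trace_setstatus @TraceID, 0;
EXEC sp_trace_setstatus @TraceID, 2;
```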
Task 3: Analyze the new captured output and note if the suggested changes are<br />
identical to those suggested in Exercise 2<br />
1. In Microsoft <strong>SQL</strong> Server Management Studio, from the Tools menu, click <strong>Database</strong> Engine Tuning<br />
Advisor.<br />
2. In the Connect to Server window, click Connect.<br />
3. Maximize the <strong>Database</strong> Engine Tuning Advisor window.<br />
4. In the Workload group box, ensure that File is selected, and click the Browse for a Workload File<br />
button.<br />
5. Browse to the D:\MKTG folder, select the ProsewareTrace2.trc file and click Open.<br />
6. In the <strong>Database</strong> for workload analysis drop-down, select MarketDev.<br />
7. In the Select databases and tables to tune list, check the MarketDev database.<br />
8. From the Toolbar, click the Start Analysis button to start the tuning analysis.<br />
9. When the analysis is complete, compare the results to the results you saw earlier in Exercise 2.<br />
Note The results should be identical.
Module 18: Monitoring <strong>SQL</strong> Server <strong>2012</strong><br />
Lab 18: Monitoring <strong>SQL</strong> Server <strong>2012</strong><br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Investigating DMVs<br />
Task 1: Investigate the use of Dynamic Management Views and Functions<br />
1. In Solution Explorer, double-click the file 51 – Lab Exercise 1.sql to open it.<br />
2. Follow the instructions in the T-<strong>SQL</strong> script.<br />
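A few representative Dynamic Management Views and Functions of the kind the exercise script explores are sketched below; the script's actual queries may differ:<br />

```sql
-- Currently executing requests on the instance.
SELECT session_id, status, command, wait_type
FROM sys.dm_exec_requests;

-- Cumulative wait statistics since the instance started.
SELECT TOP (10) wait_type, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- File-level I/O statistics (a Dynamic Management Function).
SELECT DB_NAME(database_id) AS database_name, file_id, num_of_reads, num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL);
```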
Exercise 2: Configure Management Data Warehouse<br />
Task 1: Create a management data warehouse for central collection of performance<br />
data<br />
1. In Object Explorer, expand the Proseware server, expand Management, right-click Data<br />
Collection, and click Configure Management Data Warehouse.<br />
2. In the Configure Management Data Warehouse Wizard window, click Next.<br />
3. In the Select Configuration task window, click Next.<br />
4. In the Configure Management Data Warehouse Storage window, in the Server name textbox<br />
ensure Proseware has been entered, and click New.<br />
5. In the New <strong>Database</strong> window, in the <strong>Database</strong> name textbox, type MDW, and click OK.<br />
6. In the Configure Management Data Warehouse Storage window, click Next.<br />
7. In the Map Logins and Users window, review the options and click Next.<br />
8. In the Complete the Wizard window, click Finish.<br />
9. In the Configure Data Collection Wizard Progress window, click Close.<br />
Exercise 3: Configure Instances for Data Collection<br />
Task 1: Configure data collection on each instance<br />
1. In Object Explorer, expand the Proseware server, expand Management, right-click Data<br />
Collection, and click Configure Management Data Warehouse.<br />
2. In the Configure Management Data Warehouse Wizard window, click Next.<br />
3. In the Select Configuration task window, click Set up data collection, and click Next.<br />
4. In the Configure Management Data Warehouse Storage window, click the ellipsis button beside<br />
the Server name textbox.<br />
5. In the Connect to Server window, in the Server name textbox, type Proseware, and click Connect.<br />
6. In the <strong>Database</strong> name drop-down list, select MDW, and click Next.<br />
7. In the Complete the Wizard window, click Finish.<br />
8. In the Configure Data Collection Wizard Progress window, click Close.<br />
9. In Object Explorer, click Connect, then click <strong>Database</strong> Engine. In the Connect to Server window, in<br />
the Server name textbox type AdventureWorks, and click Connect.<br />
10. In Object Explorer, expand the AdventureWorks server, expand Management, right-click Data<br />
Collection, and click Configure Management Data Warehouse.<br />
11. In the Configure Management Data Warehouse Wizard window, click Next.<br />
12. In the Select Configuration task window, click Set up data collection, and click Next.<br />
13. In the Configure Management Data Warehouse Storage window, click the ellipsis button beside<br />
the Server name textbox.<br />
14. In the Connect to Server window, in the Server name textbox, type Proseware, and click Connect.<br />
15. In the <strong>Database</strong> name drop-down list, select MDW, and click Next.<br />
16. In the Complete the Wizard window, click Finish.<br />
17. In the Configure Data Collection Wizard Progress window, click Close.<br />
Challenge Exercise 4: Work with Data Collector Reports (Only if time<br />
permits)<br />
Task 1: Disable data collectors on both instances<br />
1. In Object Explorer, expand the Proseware server, expand Management, right-click Data<br />
Collection, and click Disable Data Collection. If a confirmation window appears, click Close.<br />
2. In Object Explorer, expand the AdventureWorks server, expand Management, right-click Data<br />
Collection, and click Disable Data Collection. If a confirmation window appears, click Close.
Task 2: Restore a backup of the MDW database<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, right-click the MDW<br />
database and click Delete.<br />
2. In the Delete Object window, check the Close existing connections checkbox and click OK.<br />
3. In Object Explorer, expand the Proseware server, right-click <strong>Database</strong>s, and click Restore<br />
<strong>Database</strong>.<br />
4. In the Restore <strong>Database</strong> window, in the Source options click Device.<br />
5. Click the ellipsis beside the Device textbox.<br />
6. In the Select backup devices window, click Add.<br />
7. Navigate to the file D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ\MDW.bak and click OK.<br />
8. In the Select backup devices window, click OK.<br />
9. In the Select a page pane, click Options.<br />
10. In the Restore options, check Overwrite the existing database (WITH REPLACE), and click OK.<br />
11. In the Microsoft <strong>SQL</strong> Server Management Studio window, click OK.<br />
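The restore performed through the dialog above corresponds to a single T-SQL statement, using the backup path from step 7:<br />

```sql
-- Sketch: restore MDW over the deleted database.
RESTORE DATABASE MDW
FROM DISK = N'D:\10775A_Labs\10775A_18_PRJ\10775A_18_PRJ\MDW.bak'
WITH REPLACE;
```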
Task 3: Review the available reports<br />
1. In Object Explorer, expand Proseware server, expand <strong>Database</strong>s.<br />
2. Right-click the MDW database, click Reports, click Management Data Warehouse, and click<br />
Management Data Warehouse overview.<br />
3. Review the Disk Usage Report for MIA-<strong>SQL</strong>1\MKTG.<br />
4. Close the report window.<br />
5. Right-click the MDW database, click Reports, click Management Data Warehouse, and click<br />
Management Data Warehouse overview.<br />
6. Review the Server Activity Report for MIA-<strong>SQL</strong>1\MKTG.<br />
7. Close the report window. Right-click the MDW database, click Reports, click Management Data<br />
Warehouse, and click Management Data Warehouse overview.<br />
8. Review the Query Statistics Report for MIA-<strong>SQL</strong>1\MKTG.<br />
9. Close the report window.
Module 19: Managing Multiple Servers<br />
Lab 19: Managing Multiple Servers<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_19_PRJ\10775A_19_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercise 1: Configure CMS and Execute Multi-server Queries<br />
Task 1: Create a central management server<br />
1. From the View menu in <strong>SQL</strong> Server Management Studio, click Registered Servers.<br />
2. In the Registered Servers window that opens, expand <strong>Database</strong> Engine.<br />
3. Right-click Central Management Servers, and click Register Central Management Server.<br />
4. In the New Server Registration window, in the Server name textbox, type Proseware and click<br />
Save.<br />
Task 2: Create a server group within the CMS<br />
1. In <strong>SQL</strong> Server Management Studio, in Registered Servers, expand Central Management Servers.<br />
2. Right-click Proseware and click New Server Group.<br />
3. In the New Server Group Properties window, in the Group name textbox, type Core Servers and<br />
click OK.<br />
4. In Registered Servers, right-click Core Servers and click New Server Registration.<br />
5. In the New Server Registration window, in the Server name textbox, type Proseware and click<br />
Save.<br />
6. In Registered Servers, right-click Core Servers and click New Server Registration.<br />
7. In the New Server Registration window, in the Server name textbox, type AdventureWorks and<br />
click Save.
Task 3: Execute a command to find all databases on any core server in full recovery<br />
model<br />
1. In Registered Servers, right-click Core Servers and click New Query.<br />
2. Type the following code into the new query window:<br />
SELECT * FROM sys.databases WHERE recovery_model_desc = 'FULL';<br />
3. From the Toolbar, click Execute.<br />
Note A list of databases in full recovery model is returned.<br />
Exercise 2: Deploy a Data-tier Application<br />
Task 1: Deploy the data-tier application using SSMS<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, right-click <strong>Database</strong>s, and<br />
click Deploy Data-tier Application.<br />
2. In the Introduction window, click Next.<br />
3. In the Select Package window, enter or select the path D:\10775A_Labs\10775A_19_PRJ<br />
\10775A_19_PRJ\CityCode.dacpac, and click Next.<br />
4. In the Update Configuration window, click Next.<br />
5. In the Summary window, click Next.<br />
6. In the Deploy DAC window, click Finish.<br />
7. In Solution Explorer, double-click the file 61 – Lab Exercise 2.sql to open it.<br />
8. Review the T-<strong>SQL</strong> script.<br />
9. On the Toolbar click Execute.<br />
Exercise 3: Register and extract a data-tier application<br />
Task 1: Register the existing database as a data-tier application<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, right-click the Research<br />
database, click Tasks, and click Register as Data-tier Application.<br />
2. In the Introduction window, click Next.<br />
3. In the Set Properties window, click Next.<br />
4. In the Validation and Summary window, click Next.<br />
5. In the Register DAC window, click Finish.<br />
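Registration can be verified from T-SQL; a sketch querying the DAC metadata in msdb (the sysdac_instances view):<br />

```sql
-- List the data-tier applications registered on this instance
SELECT instance_name, database_name, type_version
FROM msdb.dbo.sysdac_instances;
```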
Task 2: Extract a dacpac from the database to send to the development team<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, right-click the Research<br />
database, click Tasks, and click Extract Data-tier Application.
2. In the Introduction window, click Next.<br />
3. In the Set Properties window, in the Save to DAC package file textbox, type<br />
D:\MKTG\Research.dacpac, and then click Next.<br />
4. In the Validation and Summary window, click Next.<br />
5. In the Build Package window, click Finish.<br />
Challenge Exercise 4: Upgrade a Data-tier Application (Only if time<br />
permits)<br />
Task 1: Attempt to deploy the v2 upgrade<br />
1. In Object Explorer, expand the Proseware server, right-click <strong>Database</strong>s and click Refresh.<br />
2. Expand <strong>Database</strong>s, right-click the CityCode database, click Tasks, and click Upgrade Data-tier<br />
Application.<br />
3. In the Introduction window, click Next.<br />
4. In the Select Package window, enter or select the path D:\10775A_Labs\10775A_19_PRJ<br />
\10775A_19_PRJ\CityCode_v2.dacpac, and click Next.<br />
5. In the Detect Change window, click Next.<br />
6. In the Options window, uncheck the Execute pre-deployment Script, check Rollback on failure<br />
and click Next.<br />
7. In the Review Upgrade Plan window, check Proceed despite possible data loss and click Next.<br />
8. In the Summary window, click Next.<br />
9. In the Microsoft <strong>SQL</strong> Server Management Studio window, click OK.<br />
10. In the Upgrade DAC window, notice the failures and click Finish.<br />
11. In Object Explorer, right-click <strong>Database</strong>s, and click Refresh.<br />
Note All databases and data-tier applications have been left unchanged.<br />
Task 2: Deploy the v3 upgrade<br />
1. In Object Explorer, expand <strong>Database</strong>s, right-click the CityCode database, click Tasks, and click<br />
Upgrade Data-tier Application.<br />
2. In the Introduction window, click Next.<br />
3. In the Select Package window, enter or select the path D:\10775A_Labs\10775A_19_PRJ<br />
\10775A_19_PRJ\CityCode_v3.dacpac, and click Next.<br />
4. In the Detect Change window, click Next.<br />
5. In the Options window, check Rollback on failure and click Next.<br />
6. In the Review Upgrade Plan window, click Next.<br />
7. In the Summary window, click Next.<br />
8. In the Upgrade DAC window, click Finish.<br />
9. In Object Explorer, right-click <strong>Database</strong>s, and click Refresh.<br />
Note The data-tier application is upgraded.
Module 20: Troubleshooting Common <strong>SQL</strong> Server <strong>2012</strong><br />
Administrative Issues<br />
Lab 20: Troubleshooting Common Issues<br />
Lab Setup<br />
For this lab, you will use the available virtual machine environment. Before you begin the lab, you must<br />
complete the following steps:<br />
1. Revert the virtual machines as per the instructions in D:\10775A_Labs\Revert.txt.<br />
2. In the virtual machine, click Start, click All Programs, click Microsoft <strong>SQL</strong> Server <strong>2012</strong>, and click<br />
<strong>SQL</strong> Server Management Studio.<br />
3. In the Connect to Server window, type Proseware in the Server name text box.<br />
4. In the Authentication drop-down list box, select Windows Authentication and click Connect.<br />
5. In the File menu, click Open, and click Project/Solution.<br />
6. In the Open Project window, open the project<br />
D:\10775A_Labs\10775A_20_PRJ\10775A_20_PRJ.ssmssln.<br />
7. In Solution Explorer, double-click the query 00-Setup.sql. When the query window opens, click<br />
Execute on the toolbar.<br />
Exercises 1 – 5: Troubleshoot and Resolve <strong>SQL</strong> Server Administrative<br />
Issues<br />
Exercise 1 – Issue 1<br />
Task 1: Read the supporting documentation for the exercise<br />
• Read the supporting documentation for the exercise.<br />
Task 2: Troubleshoot and resolve the issue<br />
1. In Object Explorer, expand the Proseware server, expand Security, and expand Logins.<br />
2. Beside the PromoteApp login, note the icon with a red down arrow, which indicates that the login is disabled.<br />
3. Right-click PromoteApp, and click Properties.<br />
4. In the Select a page pane, click Status, click Enabled, and click OK.<br />
5. In Object Explorer, right-click Logins, and click Refresh. Note that the login is now enabled.<br />
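Steps 3 and 4 have a direct T-SQL equivalent:<br />

```sql
-- Re-enable the disabled login
ALTER LOGIN PromoteApp ENABLE;
```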
Exercise 2 – Issue 2<br />
Task 1: Read the supporting documentation for the exercise<br />
• Read the supporting documentation for the exercise.<br />
Task 2: Troubleshoot and resolve the issue<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, and note the status of the<br />
AdminDB database. The database is in a restoring state; the junior DBA may have performed a tail-log<br />
backup.<br />
2. In Object Explorer, right-click the AdminDB database, click Tasks, click Restore, and click<br />
Transaction Log.<br />
3. In the Restore Transaction Log – AdminDB window, click From file or tape, and click OK.<br />
4. In the Microsoft <strong>SQL</strong> Server Management Studio window, click OK.<br />
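The recovery can also be scripted. A sketch, under the assumption that either the tail-log backup is available (the file name below is hypothetical) or no further backups need to be applied:<br />

```sql
-- Option 1: apply the tail-log backup and recover the database
-- (backup file name is an assumption for illustration)
RESTORE LOG AdminDB
FROM DISK = 'D:\MKTG\AdminDB_TailLog.bak'
WITH RECOVERY;

-- Option 2: bring the database online without restoring anything further
RESTORE DATABASE AdminDB WITH RECOVERY;
```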
Exercise 3 – Issue 3<br />
Task 1: Read the supporting documentation for the exercise<br />
• Read the supporting documentation for the exercise.<br />
Task 2: Troubleshoot and resolve the issue<br />
1. In Object Explorer, expand the Proseware server, expand <strong>SQL</strong> Server Agent, and expand Jobs.<br />
2. Right-click the Get File List job, and click Start Job at Step.<br />
3. In the Start Jobs – Proseware window, note the failure, and click Close.<br />
4. In Object Explorer, right-click the Get File List job, and click View History.<br />
5. In the Log File Viewer – Proseware window, click on the plus sign in a failing row to expand the job<br />
steps.<br />
6. Click on the row for Step ID 1.<br />
7. In the Selected row details pane, scroll down to find the error (Logon failure: unknown user name<br />
or bad password). Note that the failure occurs while the step runs under the proxy for David.Alexander.<br />
8. In the Log File Viewer - Proseware window, click Close.<br />
9. In Object Explorer, expand Security, expand Credentials, right-click DavidAlexanderCredential,<br />
and click Properties.<br />
10. In the Credential Properties - DavidAlexanderCredential window, in the Password textbox, and the<br />
Confirm password textbox, type Pa$$w0rd, and click OK.<br />
11. In Object Explorer, right-click the Get File List job, and click Start Job at Step.<br />
12. In the Start Jobs - Proseware window, note that the job still fails, and click Close.<br />
13. In Object Explorer, right-click the Get File List job, and click View History.<br />
14. In the Log File Viewer - Proseware window, click on the plus sign in a failing row to expand the job<br />
steps.<br />
15. Click on the row for Step ID 1.
16. In the Selected row details pane, scroll down to find the error (Access is denied).<br />
17. In the Log File Viewer - Proseware window, click Close. We need to investigate the job step to see<br />
why an Access is denied error could be returned.<br />
18. In Object Explorer, right-click the Get File List job, and click Properties.<br />
19. In the Select a page pane, click Steps, then click Edit.<br />
20. Note that the command being executed requires permission to read the directory entries in the<br />
D:\MKTG folder and permission to write to the L:\MKTG folder using the identity of David<br />
Alexander.<br />
21. Click Cancel to close the job step, and then click Cancel again to close the job properties.<br />
22. On the basis that write permissions are less likely to be granted than read permissions, we will<br />
investigate the write permissions first. In Windows Explorer, right-click folder L:\MKTG and click<br />
Properties.<br />
23. In the MKTG Properties window, click the Security tab and note that David Alexander has no access<br />
to the folder, nor do any groups that he is likely to be a member of.<br />
24. Click Edit then in the Permissions for MKTG window, click Add.<br />
25. Ensure that the value of From this location is set to AdventureWorks.msft, then in the Enter the<br />
object names to select textbox, type David.Alexander and click OK.<br />
26. In the Permissions for MKTG window, click the Allow checkbox in the Modify row and click OK.<br />
27. In the MKTG Properties window, click OK.<br />
28. Now we will test if the job runs.<br />
29. In Object Explorer, right-click the Get File List job, and click Start Job at Step.<br />
30. In the Start Jobs - Proseware window, note that the job still fails, and click Close.<br />
31. Now we need to check the read permissions.<br />
32. In Windows Explorer, right-click folder D:\MKTG and click Properties.<br />
33. In the MKTG Properties window, click the Security tab and note that David Alexander has no access<br />
to the folder, nor do any groups that he is likely to be a member of.<br />
34. Click Edit then in the Permissions for MKTG window, click Add.<br />
35. Ensure that the value of From this location is set to AdventureWorks.msft, then in the Enter the<br />
object names to select textbox, type David.Alexander and click OK.<br />
36. In the Permissions for MKTG window, click OK.<br />
37. In the MKTG Properties window, click OK.<br />
38. Now we will test again whether the job runs.<br />
39. In Object Explorer, right-click the Get File List job, and click Start Job at Step.<br />
40. In the Start Jobs - Proseware window, note that the job now works, and click Close.
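The password reset in steps 9 and 10 can also be scripted; a sketch (the IDENTITY value is an assumption based on the AdventureWorks.msft domain shown above):<br />

```sql
-- Reset the stored password for the credential
-- (identity value is an assumption for illustration)
ALTER CREDENTIAL DavidAlexanderCredential
WITH IDENTITY = 'ADVENTUREWORKS\David.Alexander',
     SECRET = 'Pa$$w0rd';
```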
Exercise 4 – Issue 4<br />
Task 1: Read the supporting documentation for the exercise<br />
• Read the supporting documentation for the exercise.<br />
Task 2: Troubleshoot and resolve the issue<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, right-click AdminDB<br />
database, and click Properties.<br />
2. In the Select a page pane, click Options.<br />
3. Review the options related to statistics.<br />
4. Change the Auto Create Statistics to True.<br />
5. Change the Auto Update Statistics to True, and click OK.<br />
Note The following steps cause statistics to be updated immediately rather than waiting<br />
for autostats to correct the situation.<br />
6. In Object Explorer, right-click AdminDB, and click New Query.<br />
7. In the Query window, type the following command:<br />
EXEC sp_updatestats;<br />
8. On the Toolbar, click Execute.<br />
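Steps 1 through 8 have a direct T-SQL equivalent:<br />

```sql
-- Turn automatic statistics creation and updating back on
ALTER DATABASE AdminDB SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE AdminDB SET AUTO_UPDATE_STATISTICS ON;

-- Refresh all statistics immediately rather than waiting for autostats
USE AdminDB;
EXEC sp_updatestats;
```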
Exercise 5 – Issue 5<br />
Task 1: Read the supporting documentation for the exercise<br />
• Read the supporting documentation for the exercise.<br />
Task 2: Troubleshoot and resolve the issue<br />
1. In Object Explorer, expand the Proseware server, expand <strong>Database</strong>s, right-click CityDetails<br />
database, and click Properties.<br />
2. In the Select a page pane, click Options.<br />
3. Review the options related to statistics.<br />
4. Change the Auto Close option to False and click OK.
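The same change can be scripted:<br />

```sql
-- Stop the database from closing when the last connection ends
ALTER DATABASE CityDetails SET AUTO_CLOSE OFF;
```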