
INFORMATION SECURITY INCIDENT RESPONSE POLICY & PROCEDURES (IS IRPP)

October 2007


POLICY STATEMENT

A rapid response to incidents that threaten the confidentiality, integrity, and availability (CIA) of university information assets, information systems and the networks that deliver the information is required to protect those assets. Without a rapid response, those assets could be compromised and the university could be in breach of UK legislation, the JANET acceptable use policy and our own stated policies; there is also the potential of breaching the trust of our customers and users.

The Information Services Management Team has identified improvement of Incident Response as a strategic task and improvement of Information Security as a strategic goal.

Information Security incidents will occur that require the full participation of Information Services (IS) and IS technical personnel, as well as management leadership, to properly manage the outcome. To accomplish this, IS will establish an incident response policy and procedures that ensure appropriate leadership and technical resources are involved to:

• assess the seriousness of an incident
• assess the extent of damage
• identify the vulnerability created
• estimate what additional resources are required to mitigate the incident

It will also ensure that proper follow-up reporting occurs and that procedures are adjusted so that responses to future incidents are improved.

INFORMATION SECURITY INCIDENT RESPONSE POLICY & PROCEDURES

These policies and procedures underlie the establishment and ongoing deployment of a trained incident response team, formed with the purpose of managing information security incidents at the University. This effort is being taken to improve the response time to incidents, to provide a consistent response, and to improve incident reporting. The purpose of the Information Security Incident Response Procedure is to establish procedures, in accordance with applicable legal and regulatory requirements and University policy, to address instances of unauthorised access to or disclosure of University Information, to be known as an Incident.

This policy applies to Information Services (IS) and all systems and services for which it is responsible.

Nothing in this Information Security Incident Response Policy & Procedure document should be taken to be in conflict with the following higher-level policies:

• Computer Use policy
• Acceptable Use Policy


• Memoranda from the University Directorate

These policies and procedures specifically exclude the following:

• Non-electronic information, including paper mail.
• Copier and fax.
• Physical security.
• Contingency Planning, Business Continuity and Disaster Recovery, which are governed by a different set of policies. An event may initially be declared an 'Information Security Incident' and subsequently declared to be a 'Disaster' by the appropriate body. In this case, the Incident Response Team (IRT) would be incorporated into the Disaster Recovery process.

In addition to all the defences that have been mounted in protection of the infrastructure and the information processed within it, conventional wisdom recommends a high level of preparedness for an information security incident. This policy and its procedures describe the response to such events, the conditions whereby this process is invoked, the resources required and the recommended course of action. Central to this process is the Information Services Incident Response Team (IRT), assembled with the purpose of addressing that particular circumstance where there is credible evidence of an incident. See Appendix A - "Process Flow" for a graphical representation of the information flow and decision process.

The primary emphasis of activities described within this policy is the return to a normal (secure) state as quickly as possible, whilst minimising the adverse impact to the University. The capture and preservation of incident-relevant data (e.g., network flows, data on drives, access logs, etc.) is performed primarily for the purpose of problem determination and resolution, and methods currently employed are suitable for that purpose. It is understood and accepted that strict forensic measures are not used in the data capture and retention. Forensic measures will be determined on a case-by-case basis.

This document may reference other documentation, policies and procedures that support this protocol but are not contained within the document, e.g., policy that defines sensitive data, scripts to be followed by the IS Service Desk, IS personnel, or documented IS IRT (Information Services Incident Response Team) procedures. Where this occurs, instructions to obtain these materials will be specified.

Circumstances may dictate the activation of other operational teams and execution of other procedures. The IS IRT must monitor and co-ordinate all activities occurring under other operational teams and protocols, and communicate with all interested parties in a timely manner to ensure accurate assessments and avoid efforts that may be duplicated or at cross-purposes.

SCOPE


It is critical that the IS IRT provide a consistent and timely response to the customer, and that sensitive information is handled appropriately. This document provides the guidelines needed for IRT Incident Managers (IMs) to classify the case category, criticality level, and sensitivity level for each IRT case. This information will be entered into the Incident Response Information Management System (IRIMS) when a case is created. Consistent case classification is required for the IRT to provide accurate reporting to management on a regular basis. In addition, the classifications will provide IRT IMs with proper case handling procedures and will form the basis of Service Level Agreements (SLAs) between the IRT and other University business areas.
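To make the classification scheme concrete, the sketch below shows one way a case record of this kind might be represented when it is entered into IRIMS. It is illustrative only: this policy does not define the IRIMS schema, and the field names, allowed values and validation shown here are assumptions based on the categories and levels described in this document and in Appendix B.

```python
# Illustrative sketch only: the IRIMS schema is not defined by this policy.
# Field names and allowed values below mirror the classifications described
# in this document and in Appendix B; they are assumptions, not the real system.
from dataclasses import dataclass, field
from datetime import datetime

CATEGORIES = {
    "Denial of service", "Forensics", "Compromised Information", "Compromised Asset",
    "Unlawful activity", "Internal Hacking", "External Hacking", "Malware",
    "Email", "Consulting", "Policy Breaches",
}

@dataclass
class IncidentCase:
    """A single IRT case as it might be recorded when created."""
    summary: str
    category: str          # one of CATEGORIES (see Appendix B)
    sensitivity: str       # "S1" (extremely sensitive) to "S3" (not sensitive)
    criticality: str       # "C1" (critical) to "C3" (possible incident)
    reported_by: str       # name and contact details of the reporter
    created: datetime = field(default_factory=datetime.now)

    def __post_init__(self) -> None:
        # Consistent classification is required for accurate management reporting.
        if self.category not in CATEGORIES:
            raise ValueError(f"Unknown incident category: {self.category}")
        if self.sensitivity not in {"S1", "S2", "S3"}:
            raise ValueError("Sensitivity must be S1, S2 or S3")
        if self.criticality not in {"C1", "C2", "C3"}:
            raise ValueError("Criticality must be C1, C2 or C3")

# Worked example from Appendix B: a compromised root account on a server
# holding personnel records is a Compromised Asset, S1, C1.
case = IncidentCase(
    summary="Compromised root account on Sun Solaris server holding personnel records",
    category="Compromised Asset",
    sensitivity="S1",
    criticality="C1",
    reported_by="Service Desk SPOC, ext. 3265",
)
```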

DEFINITIONS

Information Security Incident

An Information Security Incident is generally defined as any known or highly suspected circumstance that results in an actual or possible unauthorised release of information deemed sensitive by the University or subject to regulation or legislation, beyond the University's sphere of control.

Examples of an Information Security Incident may include but are not limited to:

• the theft or physical loss of computer equipment known to hold files containing financial details
• a server known to hold sensitive data is accessed or otherwise compromised by an unauthorised party
• an outside entity is subjected to a Distributed Denial of Service (DDoS) attack originating from within the University network
• a firewall is accessed by an unauthorised entity
• a network outage is attributed to the activities of an unauthorised entity

CATEGORIES

For the purposes of this protocol, incidents are categorised as "Unauthorised Access" or "Unauthorised Acquisition", and can be recognised by associated characteristics.

UNAUTHORISED ACCESS

The unauthorised access to or disclosure of University information through network and/or computing related infrastructure, or misuse of such infrastructure, including access to related components (e.g., network, server, workstation, router, firewall, system, application, data, etc.).


Characteristics of security incidents where unauthorised access might have occurred may include but are not limited to:

• Evidence (e-mail, system log) of disclosure of sensitive data
• Anomalous traffic to or from the suspected target
• JANET CERT alerts
• Unexpected changes in resource usage
• Increased response time
• System slowdown or failure
• Changes in default or user-defined settings
• Unexplained or unexpected use of system resources
• Unusual activities appearing in system or audit logs
• Changes to or appearance of new system files
• New folders, files, programs or executables
• User lock-out
• Appliance or equipment failure
• Unexpected enabling or activation of services or ports
• Protective mechanisms disabled (firewall, anti-virus)

UNAUTHORISED ACQUISITION

The unauthorised physical access to, disclosure or acquisition of assets containing or providing access to University information (e.g., removable drives or media, hardcopy, cable rooms, file or document storage, appliance hardware, etc.).

Characteristics of security incidents where unauthorised acquisition might have occurred may include but are not limited to:

• Theft of computer equipment where sensitive data is stored
• Loss of storage media (removable drive, CD-ROM, DVD, flash drive, magnetic tape)
• Illegal entry (burglary)
• Suspicious or foreign hardware is connected to the network


• Normally-secured storage areas found unsecured
• Broken or non-functioning locking mechanisms
• Presence of unauthorised personnel in secured areas
• Disabled security cameras or devices

SEVERITY AND CRITICALITY

Incidents are further delineated by the actual and potential impact on the business of the University. For additional information on severity assignments and associated symptoms, see Appendix B - "Incident Severity and Criticality Levels". The primary focus of this policy is the handling of Severity 1 incidents.

INCIDENT RESPONSE TEAM (IRT)

The Incident Response Team (IRT) is composed of individuals with decision-making authority from within IS, charged by the Directorate with the responsibility of assisting in the process described within this document.

UNIVERSITY INFORMATION

University Information is any information asset which is maintained by or on behalf of the University and used in the conduct of University business, regardless of the manner in which such information is maintained or transmitted.

SENSITIVE DATA

Sensitive Data is:

• Any University Information declared to be Highly Confidential, Confidential, or Restricted by University policy, and
• Any personally identifiable information, as determined or governed by the Data Protection Act 1998 or University policy, requiring protection from disclosure.

Examples include but are not limited to:

• Network ID and Password
• Student records
• Personnel Records
• Name in combination with address


• Credit or Debit Card Number and Access Code (e.g., PIN or Password)
• Personal Medical Records
• Unpublished results of research or financial investment strategies
• Intellectual Property (e.g., protected formulas or patents)

UNIVERSITY CUSTOMER

A University Customer is:

• any faculty, student, staff or group affiliated with the University, or
• any department or school of the University, or
• any employee (permanent, temporary and contract personnel; past or present)

3RD PARTY

A 3rd Party is:

• any entity having a relationship with the University not described as a customer (e.g., business partner, research subject, vendor), or
• any external entity initiating contact with the University (e.g., JANET CERT, RIAA, target of a DDoS attack, student applicant, member of the general public).

TEAM OBJECTIVES

The IS IRT Chair (Director of Information Services) or delegated Incident Manager (IM) will lead the IRT. The IRT's objectives are to:

1. Co-ordinate and oversee the response to Incidents in accordance with the requirements of UK legislation, JANET and University policy;
2. Minimise the potential negative impact to the University, Customers and 3rd Parties as a result of such Incidents;
3. Where appropriate, inform the affected Customer and 3rd Party of action that is recommended or required on their behalf;
4. Restore services to a normal and secure state of operation;
5. Provide clear and timely communication to all interested parties.

RESPONSIBILITIES


To ensure an appropriate and timely execution of this protocol, the IRT Chair (or designated IRT IM) is required to:

1. Confirm the occurrence of an Information Security Incident requiring the execution of this protocol. Confirmation activities include but are not limited to:
• examination or analysis of anomalies or untoward events
• review of system logs or audit records
• direct conversation with the Customer, 3rd Party, Service Desk, IS personnel, "on call" engineer, IRT members or others having information about the event
• collection of any evidence supportive of the event
2. Supervise and direct the consistent, timely, and appropriate response to an Information Security Incident.
3. Provide appropriate communication to parties having a vested interest in the incident.
4. Offer support to the Customer or 3rd Party as appropriate until the Incident is resolved.
5. Conduct a Post-Incident Review (PIR).
6. Maintain the procedures contained in this document.

CRITICAL INCIDENT RESPONSE TEAM COMPOSITION

The IRT consists of a Primary Team and, if deemed necessary, a Secondary Team. Each member of the Primary Team will designate an Alternate member to participate if the Primary member is unavailable. See Appendix C - "Primary and Alternate Contact List" for a listing of individual members. The Primary Team will consist of representatives from the following areas:

A1. PRIMARY TEAM (REQUIRED)

1. Director of IS (Chair)
2. Senior Service Delivery Manager (SSDM)
3. University Information Group Manager (UIG)
4. University Applications Group Manager (UAG)
5. IS Technical Personnel


6. Security Architect (SA)
7. Change Manager (CM)
8. Administration Support

A2. SECONDARY TEAM (AS REQUIRED)

The circumstances surrounding each incident may differ and require personnel with expertise or skills beyond those of the Primary Team. Members of the Primary Team will determine what, if any, additional resources are required, and a Secondary Team may be established with:

• Individuals with decision-making authority identified as having a vested interest in the resolution of the incident.
• Individuals identified as subject matter experts or having skills required for resolution of the incident.

Information Security Coordinators representing an affected Customer or 3rd Party, or known to have an established relationship with an affected Customer or 3rd Party, may be seconded to the Secondary Team.

ACCOUNTABILITY

Individual IRT members are accountable to the IS Senior Management Team and University Directorate for the timely and effective execution of this policy, procedures and associated activities.

REPORTING A SECURITY INCIDENT

Anyone receiving notification of an Incident must contact the Service Desk immediately. Service Desk personnel will provide the customer and the user with a Single Point of Contact (SPOC). The Service Desk owns the Incident Management Process and as such will be responsible for monitoring and escalation in accordance with all SLAs.

• Telephone Ext: 3265
• E-mail: servicedesk@port.ac.uk

Note: The e-mail address may be used but is less effective than direct notification to the Service Desk by telephone.

Service Desk personnel use scripts (e.g., lists of predetermined questions) to assist in problem determination and resolution. These scripts help support personnel identify those events that may be classified as an Information Security Incident. Additional information may be found in Appendix D - "Guidelines for Service Desk and IS Personnel".


ACTIVATION OF TEAM

Once the IRT Lead has determined that an Information Security Incident has occurred, the IRT Lead or delegated IRT IM will activate this protocol within 24 hours of Incident determination. Notification to the Primary Team member or Alternate should occur via direct communication by telephone or face-to-face contact. Voice-mail and e-mail are not considered direct notification. Respective Primary and Alternate Team members should exchange information frequently to ensure their knowledge of the information security incident is current.

Consult Appendix E - "Notification Tree" for details and notification assignments.

KEY COMPONENTS OF CRITICAL INCIDENT RESPONSE PROTOCOL

The Critical Incident Response Protocol consists of five key components:

• Assessment
• Notification/Communication
• Containment
• Corrective Measures
• Closure

ASSESSMENT

The IRT Lead or IRT IM will determine the category and severity of the Incident and undertake discussions and activities to determine the best course of action, i.e., decide if protocol execution is required. Appendix F - "Critical Incident Assessment Checklist" is used in the initial assessment process conducted by the IRT Lead. Once the IRT is assembled, the Assessment Checklist is executed and reviewed to ensure all pertinent facts are established. All discussions, decisions and activities are to be documented.

NOTIFICATION/COMMUNICATION

Designated persons will take action to notify the appropriate internal and external parties, as necessary.

INTERNAL NOTIFICATION (WITHIN THE UNIVERSITY)

All internal notification and communication must be approved by the Primary IRT Lead or IRT IM.

Primary Team members notify Alternate Team members (and vice-versa). The IRT will notify members of the Secondary Team (if assembled).


1. The IRT Lead will notify University Administration and the IT Committee of the Incident and provide ongoing status.
2. The IRT Lead will issue or direct all "sensitive" internal communications.
3. The Service Desk will issue all public internal communication.

EXTERNAL NOTIFICATION (OUTSIDE THE UNIVERSITY)

All external notification and communication must be approved by the Directorate.

1. 3rd Party - The IRT Lead or IRT IM and the Directorate will establish communication with any 3rd Party, as appropriate for the circumstance.
2. Law Enforcement - The Security Architect, in consultation with the IRT Lead or IRT IM, will notify law enforcement as appropriate.
3. Regulatory - The Directorate notifies the appropriate regulatory agencies.
4. IRT members will assist in determining if other parties should be notified (e.g. Sun Security).
5. Media Interest - The Directorate and University Public Relations will determine if, how and when the media should be notified, and will respond to all inquiries from the media.
6. The Directorate will determine if government notification (e.g., Information Commissioner) is required and take appropriate action.
7. Other affected parties - The IRT will determine if there are other parties of interest, with communications issued accordingly (e.g., JANET CERT).

CUSTOMER NOTIFICATION

1. Customers should be informed that the Incident has been reported and recorded and that an investigation is underway.
2. The Customer shall be kept abreast of the status of the Incident investigation in a timely manner.
3. The Customer shall be notified of results, closure of the investigation, and recommendations.

STATUS

1. The IRT Lead assumes responsibility for preparing and issuing timely communication to IRT members, Administration and other interested parties.
2. Communications may include meetings, video conferencing, teleconferencing, e-mail, telephone/messaging, voice recordings or other means as deemed appropriate.


3. Frequency and timeliness of communications will be established and revised throughout the life cycle of the incident.

CONTAINMENT

The IRT will determine, and cause to be executed, the appropriate activities and processes required to quickly contain and minimise the immediate impact to the University, Customer and 3rd Party. Recommended activities addressing Unauthorised Access and Unauthorised Acquisition are described in Appendix G - "Incident Containment Activities".

Containment activities are designed with the primary objectives of:

• Counteracting the immediate threat
• Preventing propagation or expansion of the incident
• Minimising actual and potential damage
• Restricting knowledge of the incident to authorised personnel
• Preserving information relevant to the incident

CORRECTIVE MEASURES

The IRT will determine, and cause to be executed, the appropriate activities and processes required to quickly restore circumstances to a normal (secure) state. Recommended activities addressing Unauthorised Access and Unauthorised Acquisition are described in Appendix H - "Corrective Measures".

Corrective measures are designed with the primary objectives of:

• Securing the processing environment
• Restoring the processing environment to its normal state

CLOSURE

The IRT will stay actively engaged throughout the life cycle of the Information Security Incident to assess the progress/status of all containment and corrective measures and to determine at what point the incident can be considered resolved. Recommendations for improvements to processes, policies, procedures, etc. will exist beyond the activities required for incident resolution and should not delay closing the Information Security Incident.

REQUIRED DOCUMENTATION FOR CRITICAL INCIDENT RESPONSE MEETINGS

All Incident activities, from receipt of the initial report through the Post-Incident Review, are to be documented. The IRT Lead is responsible for ensuring all events are recorded, assembling these records in preparation for and performance of the post-incident review, and ensuring all records are preserved for review. IRT members may be employed in these efforts.

1. General overview of the Incident

Summary of the Incident providing a general description of events, approximate timelines, parties involved, resolution of the incident, external notifications required and recommendations for prevention and remediation.

2. Detailed review of the Incident

Description of Incident events, indicating specific timelines, personnel involved, hours spent on various activities, impact to Customer, 3rd Party and user communities (e.g., system not available, business continuity issues), ensuing discussions, decisions and assignments made, problems encountered, successful and unsuccessful activities, notifications required or recommended, steps taken for containment and remediation, recommendations for prevention and remediation (short-term and long-term), identification of policy and procedure gaps, and results of the post-incident review.

3. Retention

All relevant documentation will be retained by the IRT Lead for archival in a central repository. Access to the documentation and repository is typically restricted to IRT membership.

POST-INCIDENT REVIEW (PIR)

A review of Information Security Incident-related activities is a required element of this policy. All members of the IRT Primary and Secondary Teams are recommended participants.

Discussion

The IRT Lead or IRT IM will host a PIR after each Incident has been resolved; this discussion should be scheduled within 2-3 weeks of the Information Security Incident's remediation. The review is an examination of the Incident and all related activities and events. All activities performed relevant to the Incident should be reviewed with the aim of improving and honing the overall incident response process.

1. Recommendations

The IRT's recommendations on changes to policy, process, safeguards, etc. are both an input to and a by-product of this review. "Fix the problem, not the blame" is the focus of this activity. All discussions, recommendations and assignments are to be documented for distribution to the IRT and follow-up by the IRT Lead.

2. Follow-up

The IRT Lead will follow up with the Client and 3rd Party or other parties, as required and appropriate.


GLOSSARY

The following terms should be used to assure commonality of reporting:

Confidentiality. "Confidentiality provides the ability to ensure that the necessary level of security is enforced at each junction of data processing and prevention of unauthorised disclosure. This level of confidentiality should prevail while data resides on systems and devices within the network, as it is transmitted, and once it reaches its destination." Harris 2003, p. 55.

Integrity. "Integrity is upheld when the assurance of accuracy and reliability of information and systems is provided, and unauthorised modification of data is prevented." Harris 2003, p. 55.

Availability. "Systems and networks should provide adequate capacity in order to perform in a predictable manner with an acceptable level of performance." Harris 2003, p. 54.

Vulnerability. "Vulnerability is a software, hardware, or procedural weakness that may provide an attacker the open door he is looking for to enter a computer or network and have unauthorised access to resources within the environment. Vulnerability characterises the absence or weakness of a safeguard that could be exploited." Harris 2003, p. 56.

Threat. "A threat is any potential danger to information systems. The threat is that someone or something will identify a specific vulnerability and use it against the organisation or individual." Harris 2003, p. 56.

Risk. "A risk is the likelihood of a threat agent taking advantage of vulnerability. A risk is the possibility and probability that a threat agent will exploit vulnerability." Harris 2003, p. 56.

Exposure. "An exposure is an instance of being exposed to losses from a threat agent. Vulnerability can cause an organisation to be exposed to possible damages." Harris 2003, p. 56.

Countermeasure. "A countermeasure, or safeguard, mitigates a potential risk. A countermeasure is a software configuration, hardware, or procedure that eliminates vulnerability or reduces the risk of a threat agent from being able to exploit vulnerability." Harris 2003, p. 57.


APPENDICES

A - Process Flow
B - Incident Severity and Criticality Levels
C - Primary and Alternate Contact List
D - Guidelines for Service Desk and IS Personnel
E - Notification Tree
F - Incident Assessment Checklist
G - Incident Containment Activities
H - Corrective Measures

APPENDIX A - PROCESS FLOW


APPENDIX B - INCIDENT SENSITIVITY AND CRITICALITY LEVELS

SCOPE

It is important that the SPOC provide a consistent and timely response to the customer, and that sensitive information is handled appropriately. This appendix provides the guidelines needed for IRT Incident Managers (IMs) to classify the case category, sensitivity level, and criticality level for each IRT case. This information will be entered into the IRIMS when a case is created. Consistent case classification is required for the IRT to provide accurate reporting to management on a regular basis. In addition, the classifications will provide IRT IMs with proper case handling procedures and will form the basis of SLAs between the IRT and IS Service Delivery and other university clients.

INCIDENT CATEGORY AND SENSITIVITY LEVELS

All incidents managed by the IRT should be classified into one of the categories listed below:

• Denial of service (S3) - DoS or DDoS attack.
• Forensics (S1) - Any forensic work to be done by the IRT.
• Compromised Information (S1) - Attempted or successful destruction, corruption, or disclosure of sensitive university information or Intellectual Property.
• Compromised Asset (S1, S2) - Compromised host (root account, Trojan, rootkit), network device, application or user account. This includes malware-infected hosts where an attacker is actively controlling the host.
• Unlawful activity (S1) - Theft / Fraud / Human Safety / Child Pornography. Computer-related incidents of a criminal nature, likely involving law enforcement or Loss Prevention.
• Internal Hacking (S1, S2, S3) - Reconnaissance or suspicious activity originating from inside the university network, excluding malware.
• External Hacking (S1, S2, S3) - Reconnaissance or suspicious activity originating from outside the university network (partner network, Internet), excluding malware.
• Malware (S3) - A virus or worm typically affecting multiple university devices. This does not include compromised hosts that are being actively controlled by an attacker via a backdoor or Trojan (see Compromised Asset).
• Email (S3) - Spoofed email, SPAM, and other email security-related events.
• Consulting (S1, S2, S3) - Security consulting unrelated to any confirmed incident.
• Policy Breaches (S1, S2, S3) - Sharing offensive material; sharing/possession of copyright material; deliberate violation of Information Security policy; inappropriate use of a university asset such as a computer, network, or application; unauthorised escalation of privileges or deliberate attempt to subvert access controls.

* Sensitivity will vary depending on circumstances.

SENSITIVITY CLASSIFICATION EXAMPLES


The IRT should always apply the "need to know" principle when communicating case details with other parties. The sensitivity matrix below helps to define "need to know" by classifying cases according to sensitivity level. The 'Required' column defines the parties that "need to know" for a given sensitivity level. The 'Optional' column defines the other parties that may be included on communications, if necessary. Typically the SPOC who owns the incident will initially determine the sensitivity level. In some cases it will be appropriate for the SPOC to work with the customer to determine the sensitivity level and then, if deemed necessary, escalate the incident to the IRT.

Sensitivity Classification

S1 - Extremely Sensitive
Typical Incident Categories: Global Investigations Initiated, Forensics Request, Destruction of property, Compromised asset, Compromised Information, Unlawful activity
Required on case communications**: IRT Lead or IRT IM, Security Architect or SPOC
Optional on case communications**: JANET CERT
IRIMS access: IRT Lead or IRT IM, Security Architect or SPOC

S2 - Sensitive
Typical Incident Categories: Inappropriate use of property, Policy violations, External Hacking, Internal Hacking, Unauthorised Access
Required on case communications**: IRT Lead or IRT IM, Security Architect, SPOC
Optional on case communications**: JANET CERT, OWNERS, ANY
IRIMS access: Security Architect or SPOC

S3 - Not Sensitive
Typical Incident Categories: Denial of service, Virus / Worm, Email
Required on case communications**: IRT Lead or IRT IM, Security Architect or SPOC
Optional on case communications**: OWNERS, ANY
IRIMS access: Service Desk personnel with access to IRIMS, Security Architect

Definitions:

• SPOC: The service delivery single point of contact, the person that initiated the case with the IRT. This is the person in the Service Desk team that initiated a case with the IRT.
• IRT: This includes the dedicated IRT members, and the IRT Lead/IRT IM handling the case.
• OWNERS: The owners of the affected device/application (system administrator, webmaster, etc.).
• ANY: Any other relevant parties that may be affected, including various university customers. It is left to the discretion of the IRT IM to determine who should be included.
• IRIMS: Incident Response Information Management System.

Notes:

** "Case Communications" include the following: the initial email from the SPOC to the customer, periodic case reports to the customer, and the final case report to the customer. It is not necessary to include these parties on all interim communications that occur throughout the life of a case, just the case updates and summary.

CRITICALITY CLASSIFICATION


The criticality matrix defines the minimal customer response time and ongoing communication requirements for a case. The criticality level should be entered into the IRIMS when a case is created, and it should not be altered at any point during the case lifecycle except when it was incorrectly classified in the first place. Typically the SPOC will determine the initial criticality level; in some cases it will be appropriate to escalate to the IRT Lead or IRT IM to work with the SPOC and the customer to determine the criticality level.

Criticality Classification

C1 - Incident affecting critical systems or information with the potential to be revenue or customer impacting.
Typical Incident Categories: Denial of service, Compromised Asset (critical), Internal Hacking (active), External Hacking (active), Virus / Worm (outbreak), Destruction of property (critical)
Initial Response Time: 60 minutes
Ongoing Response (Critical Phase): IRT Lead or IRT Incident Manager assigned to work the case on a 24x7 basis.
Ongoing Response (Resolution Phase): IRT Incident Manager assigned to work on the case during normal business hours.
Ongoing Communication Requirement: Case update sent to appropriate parties on a daily basis during the critical phase; if IRT involvement is necessary to restore critical systems to service then a case update will be sent a minimum of every 2 hours. Case update sent to appropriate parties on a weekly basis during the resolution phase.

C2 - Incident affecting non-critical systems or information, not revenue or customer impacting. Employee investigations that are time sensitive should typically be classified at this level.
Typical Incident Categories: Internal Hacking (not active), External Hacking (not active), Unauthorised access, Policy breaches, Unlawful activity, Compromised information, Compromised asset (non-critical), Destruction of property (non-critical), Email
Initial Response Time: 4 hours
Ongoing Response (Critical Phase): IRT Lead or IRT Incident Manager assigned to work the case on a 24x7 basis.
Ongoing Response (Resolution Phase): IRT Incident Manager assigned to work on the case during normal business hours.
Ongoing Communication Requirement: Case update sent to appropriate parties on a daily basis during the critical phase; case update sent to appropriate parties on a weekly basis during the resolution phase.

C3 - Possible incident on non-critical systems; incident or employee investigations that are not time sensitive; long-term investigations involving extensive research and/or detailed forensic work.
Typical Incident Categories: Forensics Request, Inappropriate use of property, Policy breaches
Initial Response Time: 48 hours
Ongoing Response (Critical Phase): Case is worked as IRT time/resources are available.
Ongoing Response (Resolution Phase): Case is worked as IRT time/resources are available.
Ongoing Communication Requirement: Case update sent to appropriate parties on a weekly basis.


e.g. A compromised root account on a Sun Solaris server containing personnel information would be:

• Compromised Asset - Sun Solaris Server
• Sensitivity - S1
• Criticality Classification - C1

Definitions:

• Initial Response Time - This specifies the maximum amount of time that should elapse before the Service Desk responds to the customer/user. Again, this is the maximum amount of time; in most cases the IRT Lead or IRT IM will respond sooner than the specified response time. At a minimum, the following should occur within this timeframe:

1. Initial assessment and triage.
2. Case classification is determined.
3. The case will be entered into the IRIMS.
4. The case will be owned by the Service Desk.
5. An email will be sent to the customer. This is the initial "we have your case" email. This email will include various information (to be defined in another document) such as the date/time of the request, the IRT case number, the name, phone, and email of the IRT Lead or IRT IM, a SPOC escalation contact, the criticality and sensitivity level of the case, and an indication of when the customer will receive case updates. Ideally, the IRIMS will generate this email automatically when a case is entered into the system.

• Ongoing Response Requirement - This specifies the level of service that the customer will receive from the IRT.
• Ongoing Communication Requirement - This specifies the frequency with which communications with the customer should occur throughout the case lifecycle. These are the minimum requirements and communications may occur more frequently as required.

Incident Phases:

• Critical Phase - The period of time in the case lifecycle when active incident response is required in order to ensure the successful resolution of the case. Typically this includes system or service outages, and/or urgent evidence preservation. Typical activities: detection, assessment, triage, containment, evidence preservation, initial recovery.
• Resolution Phase - The period of time in the case lifecycle when active incident response is not required to successfully resolve the case. Typical activities: evidence collection, analysis and investigation, forensics, remediation, full recovery, post-mortem.

Notes:

The IRIMS should be modified to include a drop-down selection for incident phase ('Critical', 'Resolution', 'N/A'). For cases that are classified as C1 or C2 this field will be set to 'Critical' when the case is opened, and will be reset to 'Resolution' at the appropriate time in the case lifecycle. For cases that are not time-sensitive (typically C3) the SPOC will set this field to 'N/A'. Having this distinction will allow IRT personnel and IS Management to easily distinguish between cases that are critical and active from those that are not being actively worked.
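As a minimal sketch of how the criticality matrix above translates into response-time and update-cadence rules, the lookup below encodes the values from the table. The structure and function name are hypothetical and not part of this policy; only the times come from the matrix.

```python
# Illustrative sketch: encodes the criticality matrix above as a simple lookup.
# The structure and function name are hypothetical; the times come from the table.
from datetime import timedelta

CRITICALITY_SLA = {
    #       initial response       update cadence       update cadence
    #       time (maximum)         (critical phase)     (resolution phase)
    "C1": (timedelta(minutes=60), timedelta(days=1),   timedelta(weeks=1)),
    "C2": (timedelta(hours=4),    timedelta(days=1),   timedelta(weeks=1)),
    "C3": (timedelta(hours=48),   timedelta(weeks=1),  timedelta(weeks=1)),
}

def next_update_interval(criticality: str, phase: str,
                         restoring_critical_systems: bool = False) -> timedelta:
    """Return the minimum customer-update interval for a case in the given phase."""
    _initial, critical_cadence, resolution_cadence = CRITICALITY_SLA[criticality]
    if criticality == "C1" and phase == "critical" and restoring_critical_systems:
        # C1 cases where the IRT is restoring critical systems to service:
        # updates at least every 2 hours.
        return timedelta(hours=2)
    return critical_cadence if phase == "critical" else resolution_cadence

# Example: a C1 case is updated at least daily in its critical phase, and at
# least every 2 hours while the IRT is restoring critical systems to service.
assert next_update_interval("C1", "critical") == timedelta(days=1)
assert next_update_interval("C1", "critical", restoring_critical_systems=True) == timedelta(hours=2)
```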


APPENDIX D - GUIDELINES FOR SERVICE DESK AND IS STAFF

PRIMARY OBJECTIVE

The primary objective is to determine if the problem being reported is a security incident. In most instances, the problem being reported will not constitute an incident as defined within the policy (see Definitions - Information Security Incident - Categories).

No set of questions will address every circumstance; previous experience with an individual and intuition may be relied upon to help determine if an incident has occurred. Service Desk personnel are accountable for asking questions about an incident, making a reasonable attempt at determining if an incident has occurred, recording facts and responses to questions, and forwarding pertinent information to the responsible parties.

PROBLEM REPORTING

Familiarity with the policy definitions and glossary will assist support personnel in determining whether a security incident has occurred. Individuals reporting problems and/or incidents should be informed of the reason for the questions (i.e., the University is attempting to determine if sensitive data is at risk or compromised) and all individuals should be encouraged to openly discuss the problem being reported. Any information provided by an individual that helps in the determination is of considerable value; the individual's cooperation is critical, greatly appreciated and should be recognised.

INQUIRIES

For those individuals who may be reporting a security incident, questions that might be asked include but are not limited to:

• Were Network IDs and/or passwords accessed or released?
• Were medical records of individuals present or accessed?
• Were credit card numbers or financial information disclosed?
• Did physical theft of computer equipment occur?
• Was "foreign" or unauthorised equipment connected to the network?

DISCOVERY AND REPORTING

If the answers to the inquiries indicate that an incident may have occurred, Service Desk personnel should assume that an incident has actually occurred and perform the following activities:

• Obtain and record the contact information for the individual reporting the problem (name, telephone numbers, e-mail address)
• Record relevant information about the incident (e.g., time/date of suspected occurrence, type of information compromised, location of the compromise)


• Inform the individual to expect contact from a member of the Incident Response Team
• Request the individual to treat the incident as a confidential matter
• Contact a member of the IRT or the "on call" IS IRT IM for further assistance.

ESCALATION

The Service Delivery Team SPOC is responsible for making an early determination of whether an incident has occurred or might be indicated. If the SPOC believes an incident has occurred or might be indicated, or is unsure, the IRT Lead or IRT Incident Manager should be contacted immediately, using the department's notification procedures.


APPENDIX E - NOTIFICATION TREE


APPENDIX F - INCIDENT ASSESSMENT CHECKLIST

The activities described in this checklist are designed to assist in the initial assessment process performed and/or conducted by the Service Desk SPOC.

Completion of this checklist is essential for any incident that calls for the execution of the IS Incident Response Procedures. Once the IS IRT is assembled, the Assessment Checklist is reviewed for completion to ensure all pertinent facts have been established.

A. DESCRIPTION OF INCIDENT - DATA RELEVANT TO THE INCIDENT SHOULD BE COLLECTED FOR USE IN THE PROCESS OF INCIDENT DETERMINATION.

A1. Record the current date and time.
A2. Provide a brief description of the Incident.
A3. Who discovered the Incident? Provide name and contact information.
A4. Indicate when the incident occurred and when it was discovered.
A5. How was the Incident discovered?
A6. Describe the evidence that substantiates or corroborates the Incident (e.g., eye-witness, time-stamped logs, screenshots, video footage, hardcopy, etc.).
A7. Identify all known parties with knowledge of the Incident as of the current date and time.
A8. Have all parties with knowledge of the Incident been informed to treat information about the Incident as "sensitive or confidential"?

B. TYPES OF INFORMATION, SYSTEMS AND MEDIA - PROVIDE INFORMATION ON THE NATURE OF THE DATA THAT IS RELEVANT TO THE INCIDENT.

B1. Provide details on the nature of the data (e.g., student information, research data, personnel information, etc.).
B2. Does the information (if compromised) constitute a breach of regulatory requirements (e.g., the Data Protection Act) or University policy? Describe what is known.
B3. Was the compromised information maintained by a University Client or a 3rd Party? Provide details.
B4. How was the information held? Identify the types of information systems and/or the media on which the information was stored (e.g., hardcopy, laptop, CD-ROM, etc.).


B5. If the information was held electronically, was the data encrypted or otherwise disguised or protected (e.g., redacted, partial strings, password required, etc.)? If so, describe the measures taken.

B6. If a customer held the information:
- Establish the customer point of contact (e.g. JANET).
- Assign responsibility to an IRT member to contact the customer.

B7. If a 3rd Party held the information:
- Identify the individual within the University who best represents the 3rd Party. If there is no suitable University contact, an IRT member will be assigned responsibility for directly contacting the 3rd Party.
- Assign responsibility to an IRT member to contact that individual.
- The IRT member will work with the University contact or 3rd Party to obtain a copy of any contract or confidentiality agreement and ascertain what knowledge of the Incident the 3rd Party might have and what action, if any, has been taken.

B8. Who currently holds evidence of the Incident? Provide name and contact information.

B9. What steps are required or being taken to preserve evidence of the Incident? Describe.

C. RISK/EXPOSURE - ATTEMPT TO DETERMINE TO WHAT EXTENT RISK AND/OR EXPOSURE IS PRESENTED BY THIS INCIDENT.

C1. Can we reasonably determine the risk or exposure?
C2. To what degree are we certain that the data has or has not been released?
C3. Do we have contact with someone who has "firsthand" knowledge of the circumstance (e.g., the owner of a stolen laptop)? Provide name and contact information.
C4. What firsthand knowledge have we determined? Describe what is known.
C5. Can we identify and do we have contact with the party that received the data or caused the compromise? Describe what is known.
C6. Identify the impacted parties, if possible. Are they University Customers or 3rd Parties? Provide estimated numbers, if known.
C7. What is the risk or exposure to the University? Describe.


C8. What is the risk or exposure to the Customer? Describe.
C9. What is the risk or exposure to the 3rd Party? Describe.
C10. Can we determine to what extent the media may know of this Incident? Describe.

D. NEXT STEPS - DETERMINE WHAT INFORMATION OR ACTION IS REQUIRED TO BETTER ASSESS OR ADDRESS THIS INCIDENT.

D1. Do we have enough information to establish the category and severity of the Incident?
- If "yes", declare the Incident category and severity.
- If "no", describe what else might be required.
D2. If additional data collection is required, assign responsibility to an IRT member for collection and reporting to the IRT.
D3. Is there any deadline or reporting requirement (self-imposed or regulatory) we need to address? Provide details.
D4. Based on current knowledge, do we require resources of the Secondary Team? If so, determine the makeup and assign responsibility for contact to IRT members.
D5. What communications need to be established? Provide details.
D6. Are there any immediate issues that have not been addressed? Describe.
D7. Recap all work and responsibility assignments.
D8. When do we meet again to follow up? Provide details.


APPENDIX G - INCIDENT CONTAINMENT ACTIVITIES

The IRT will determine and execute the appropriate activities and processes required to quickly contain and minimise the immediate impact to the University, Customer and 3rd Parties.

Containment activities are designed with the primary objectives of:

• Counteracting the immediate threat
• Preventing propagation or expansion of the incident
• Minimising actual and potential damage
• Restricting knowledge of the incident to authorised personnel
• Preserving information relevant to the incident

A. CONTAINMENT ACTIVITIES - UNAUTHORISED ACCESS

ACTIVITIES THAT MAY BE REQUIRED TO CONTAIN THE THREAT PRESENTED TO SYSTEMS WHERE UNAUTHORISED ACCESS MAY HAVE OCCURRED.

A1. Disconnect the system or appliance from the network or from access to other systems.
A2. Isolate the affected IP address from the network (an illustrative sketch follows this list).
A3. Power off the appliance(s), if unable to otherwise isolate.
A4. Disable the affected application(s).
A5. Discontinue or disable remote access.
A6. Stop services or close ports that are contributing to the incident.
A7. Remove drives or media known or suspected to be compromised.
A8. Where possible, capture and preserve system, appliance and application logs, network flows, drives and removable media for review.
A9. Notify the IRT of status and any action taken.
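The sketch below illustrates activity A2 under stated assumptions: a Linux host or gateway where iptables is available and the commands are run with sufficient privileges. The policy itself does not prescribe any particular tooling, and the function name and log file used here are hypothetical.

```python
# Illustrative sketch only: the policy does not prescribe tooling. This assumes
# a Linux host or gateway with iptables, run with sufficient privileges (A2).
import subprocess
from datetime import datetime

def isolate_ip(ip_address: str, log_path: str = "containment_actions.log") -> None:
    """Block traffic from a suspect IP address and record the action for the IRT (A8/A9)."""
    for chain in ("INPUT", "FORWARD"):
        # Insert a DROP rule at the top of the chain so existing rules cannot override it.
        subprocess.run(
            ["iptables", "-I", chain, "1", "-s", ip_address, "-j", "DROP"],
            check=True,
        )
    # Keep a timestamped record of the containment step taken.
    with open(log_path, "a") as log:
        log.write(f"{datetime.now().isoformat()} blocked {ip_address} on INPUT and FORWARD\n")

# Example call with a documentation-range address:
# isolate_ip("192.0.2.15")
```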


B. CONTAINMENT ACTIVITIES - UNAUTHORISED ACQUISITION

ACTIVITIES THAT MAY BE REQUIRED TO CONTAIN THE THREAT PRESENTED TO ASSETS WHERE UNAUTHORISED ACQUISITION MAY HAVE OCCURRED.

B1. Identify missing or compromised assets.
B2. Gather, remove, recover and secure sensitive materials to prevent further loss or access.
B3. Power down, recycle or remove equipment known to be compromised.
B4. Where possible, secure the premises for possible analysis by local management and law enforcement.
B5. Gather and secure any evidence of illegal entry for review by local management and law enforcement.
B6. Where possible, record the identities of all parties who were possible witnesses to events.
B7. Preserve camera logs and sign-in logs for review by local management and law enforcement.
B8. Notify the IRT of the disposition of assets and any action taken.


APPENDIX C - PRIMARY AND ALTERNATE CONTACT LIST

Department or Function - Primary Contact (Alternate Contact)

• Director of Information Services - Andrew Minter
• University Applications Group - Peter Datchens
• University Systems Group - Julian Lintell-Smith
• University Mobile Communications Group - Robert Cox
• Security Architect - Nigel Jeffries
• Administration - Sharon Cole (Alternate: Sandy Wells)
• IS Technical - Mike Meredith, Dave Early, James Holland (others as required)
• Change Management - Jackie Dwyer
• Service Delivery - Dave Gratton

Alternate contacts also listed: Dave Rowen, Martin McNaught.

In the event of an IRT team member not being available, their role must be delegated by the IRT Chair or IRT IM.


APPENDIX H - CORRECTIVE MEASURES

The IRT will determine and cause the execution of the appropriate activities and processes required to quickly restore circumstances to a normal secure state.

Corrective measures are designed with the primary objectives of:

• Securing the processing environment
• Restoring the processing environment to its normal state

A. CORRECTIVE MEASURES - UNAUTHORISED ACCESS

ACTIVITIES THAT MAY BE REQUIRED TO RETURN CONDITIONS FROM UNAUTHORISED ACCESS TO A NORMAL AND SECURE PROCESSING STATE.

A1. Change passwords on all local user and administrator accounts or otherwise disable the accounts as appropriate.
A2. Change passwords for all administrator accounts where the account uses the same password across multiple appliances or systems (servers, firewalls, routers).
A3. Re-image systems to a secure state.
A4. Restore systems with data known to be of high integrity.
A5. Apply OS and application patches and updates.
A6. Modify access control lists as deemed appropriate.
A7. Implement IP filtering as deemed appropriate.
A8. Modify/implement firewall rule sets as deemed appropriate.
A9. Ensure anti-virus is enabled and current.
A10. Make all personnel "security aware".
A11. Monitor/scan systems to ensure problems have been resolved.
A12. Notify the IRT of status and any action taken.

B. CORRECTIVE MEASURES - UNAUTHORISED ACQUISITION


ACTIVITIES THAT MAY BE REQUIRED TO RETURN CONDITIONS FROM AN UNAUTHORISED ACQUISITION TO A NORMAL AND SECURE PROCESSING STATE.

B1. Retrieve or restore assets where possible.
B2. Store all sensitive materials in a secure manner (e.g., lockable cabinets or storage areas/containers).
B3. Install/replace locks and issue keys only to authorised personnel.
B4. Restore security devices and/or apparatus to working condition.
B5. Remove unauthorised equipment from the network/area and retain it.
B6. Implement physical security devices and improvements (e.g., equipment cables, alarms) as deemed appropriate.
B7. Make all personnel "security aware".
B8. Notify the IRT of status and any action taken.

