amphionforum.com

Enterprise Data Center Security with

Software Defined Networking

and OpenFlow

Renato Recio

IBM Fellow &

System Networking CTO

© 2012 IBM Corporation


2

Session Objectives

■ At the conclusion of this session, you should be:
  – Able to describe:
    • OpenFlow (OF) & Distributed Overlay Virtual Ethernet (DOVE) networking
    • Software Defined Networking (SDN)
    • Mechanisms introduced by OF, DOVE and SDN that need to be protected
  – Hopefully motivated to contribute to the SDN community :-)

■ Quick background on me:
  – Storage (SCSI, iSCSI, NFS, FC, FCoE)
  – Cluster (InfiniBand, RDMA)
  – Ethernet (CEE, Qbg, DOVE)
  – Virtualization (PCIe IO, vSwitches, virtual security appliance framework)
  – Integrated systems (blade, rack, multi-rack → total solution)

Not me: security protocols, appliances or services :-)



3

Agenda

■ Enterprise Data Center Networking
  – Issues and Requirements
■ Software Defined Networking approach to the solution
  – Optimization
    • OpenFlow approach & overview
    • Open Networking Foundation (ONF) overview
  – Automation
    • Distributed Overlay Virtual Ethernet (DOVE) approach & overview
  – Integrated Software Defined Network stack
■ Security vulnerabilities & protection focus areas
■ Summary
  – Call to Action

3 product plugs → somebody had to pay my travel :-)



4

Early Ethernet Campus Evolution

[Diagram: campus network with the WAN at top, then Core, Aggregation and Access Layers; Layer 3 above, Layer 2 below; roughly 95% of traffic is north-south, 5% east-west.]

■ In the beginning, Ethernet was used to interconnect stations (e.g. dumb terminals), initially through repeater & hub topologies, eventually through switched topologies.
■ Ethernet campuses evolved into a structured network typically divided into a Core, Service (e.g. firewall), Aggregation and Access Layer.
  – Traffic pattern is mostly North-South (directed outside the campus vs peer-peer).
  – To avoid spanning tree problems, campus networks typically are divided at access.



5

Ethernet Data Center Issues

[Diagram: tiered data center network with WAN and SAN, Core, Services, Aggregation and Access Layers; roughly 25% of traffic is north-south, 75%* east-west; Layer 2 domains at access.]

■ Issues with tiered data center topology:
  – Traffic pattern is mostly East-West (e.g. Web to Application tier).
  – But… some East-West traffic (e.g. Application to Database tier) goes through the Service plane (e.g. Firewall, IPS, Load Balancers), which is inefficient (long latency, fattens up-links).
  – Large layer-2 domains are needed for clustering and Virtual Machine mobility.
■ Partly due to Ethernet limitations (e.g. lack of flow control), the data center used additional networks, such as:
  – Fibre Channel Storage Area Networks (SAN)
  – InfiniBand cluster networks.

*IMC 2010 ACM paper "Network Traffic Characteristics of Data Centers in the Wild", T. Benson, et al.



Example of Data Center Network Security Issues

6

[Diagram: example data center topology (WAN, Core, Aggregation, Access Layers, SAN, Layer-2 domains) used to illustrate network security issues.]


7

Example of Recent Security Breaches

[Chart: timeline of 2011 security breaches that shook businesses and governments, Feb through Aug, with circle size estimating the relative impact of each breach. 2010 attack types: SQL Injection, URL Tampering, Spear Phishing, 3rd Party SW, DDoS, Secure ID, Unknown. Victims shown include HB Gary, Epsilon, RSA, Fox News X-Factor, Sony, L3 Communications, Sony BMG Greece, Northrop Grumman, Citigroup, Lockheed Martin, Spanish Nat. Police, PBS, Gmail accounts, Bethesda Software, IMF, Italy PM site, Sega, SOCA, Malaysian Gov. site, Peru Special Police, Nintendo, Brazil Gov., Turkish Government, AZ Police, US Senate, NATO, Booz Allen Hamilton, Monsanto, SK Communications Korea, and Vanguard Defense.]


Traditional Data Center Network Issue Summary

8

[Diagram: tiered data center (WAN, SAN, Core, Aggregation, Access; <25% north-south traffic, >75% east-west) annotated "Discrete & Decoupled", "Manual & Painful" and "Limited Scale".]

Discrete & Decoupled:
  – Discrete components and piece parts
  – Multiple managers and management domains
  – Box-level point services (e.g. IPS, FW)
Manual & Painful:
  – Dynamic workload management complexity
  – Multi-tenancy complications
  – SLAs & security are error-prone
Limited Scale:
  – Too many network types, with too many nodes & tiers
  – Inefficient switching
  – Expensive network resources


Clients are looking for smarter Data Center infrastructure that solves these issues.


Data Center Infrastructure Requirements

10

Integrated:
  – Expandable integrated system
  – Simple, consolidated management
  – Software Defined Network stack
Automated:
  – Workload-aware networking
  – Dynamic provisioning
  – Wire-once fabric. Period.
Optimized:
  – Converged network
  – Single, flat fabric
  – Secure, grow-as-you-need architecture


11

Optimized

Fabric Requirements

[Diagram: fabric connecting VMs, Ethernet migration, and the WAN; scalability dimensions highlighted.]

■ Scalable fabric
  – Multi-pathing for Virtual Machines
  – Large cross-section bandwidth
  – HA, with fast convergence
  – Switch clustering (fewer switches to manage)
  – Secure fabric services, for physical and virtual workloads
■ Converged network
  – Storage: FCoE, iSCSI, NAS & FC-attach
  – Cluster: RDMA over Ethernet
  – Link: flow control, bandwidth allocation, congestion management
■ High bandwidth links
  – 10 GE → 40 GE → 100 GE


12

Optimized

System IO and Fabric Trends

[Chart: uni-directional bandwidth (GBytes/s, log scale .01 to 100) vs year (2000 to 2020) for Ethernet, InfiniBand (4x), Fibre Channel, SAN, uP IO and PCIe (x4/x8/x16).]

■ Ethernet performance growth is causing disruptions in DC fabrics:
  • 10 GE & CEE → Disrupting storage market (Fibre Channel SAN)
  • 40 GE & CEE → Will further disrupt cluster market (InfiniBand)
  • 400 GE & CEE → Will disrupt server IO market & structure in 4-6 years.

Optimized

13

Data Center Convergence Enabling Technologies

■ Remote Direct Memory Access: RDMA over Ethernet
■ Enhanced Transmission Selection: best available & per-priority bandwidth guaranteed services
■ Fibre Channel over Ethernet: FC Forwarders, FC Data Forwarders, FC Snooping Bridges
■ Flow Control & Congestion Management: per-priority pause & congestion management

[Diagram: per-priority pause example over a 10G link. HPC (3G/s), storage (3G/s) and LAN (4G/s) traffic share transmit and receive buffers 1-8; storage traffic is paused per priority while offered traffic and link utilization vary across times t1-t3.]


14

Getting there :-)

"I thought this talk was about Software Defined Networking!!!"


15

What are the options for a scalable fabric?


Optimized

16

Standard Multi-pathing Options

Layer-3 ECMP:
  • Distributed control
  • Scalable
  • Fast convergence
  • Standard
  • Established
  • Small layer 2
  • Many devices

TRILL:
  • Distributed control
  • Scalable
  • Fast convergence
  • Standard
  • Large layer-2
  • Emerging
  • Many devices

OpenFlow (OF), networking delivered as services (OpenFlow controllers + OpenFlow switches):
  • Control cluster
  • Scalable
  • Fast convergence
  • Large layer-2
  • Large virtual device
  • Standards underway
  • Emerging


But… are these the best way to deliver fabric services?

Traditional multi-path approaches, Layer-3 (ECMP) and Layer-2 (TRILL), are fine.


18

Let’s rewind for a minute



19

Attributes of traditional network devices

[Diagram: device planes: Mgt Plane SW, Control Plane SW, Data Plane ASIC(s).]

■ The control plane implements networking protocols.
■ Networking protocols can be compared to a language:
  – Like a language, protocols have usage rules.
  – A language consists of words and grammar; to be able to understand the context, you need to understand them both:
    • If only words (syntax) are understood, the context will likely be interpreted incorrectly.
    • If only grammar (semantics) is understood, important information will be missed.
■ Analogous to the complexity of speaking multiple languages proficiently, traditional switches are:
  – A collection of complex protocols,
  – Implemented in millions of lines of code.


20

Attributes of traditional network devices

[Diagram: device planes: Mgt Plane SW, Control Plane SW, Data Plane ASIC(s); feature pyramid: Base, Features, Warranties.]

■ Additionally, the availability of network services has been gated by vendors' business priorities.
■ There is no Linux-style development community equivalent.


21

Why Open Networking?

Business Drivers
  • Increasing need for agile network services
  • Simplify network control and management
  • Secure, automated network virtualization services

Technical Drivers
  • Higher bandwidths (10G → 40G → 100G)
  • Server virtualization
  • Intelligent management/automation

Why OpenFlow?
  • Provides a standard that opens the control plane
  • Flow paradigm offers granular traffic control
  • Global vantage point of the network
  • Coexists with standard L2/L3 protocols


Optimized

22

Traditional Network

[Diagram: switch hardware: CPU, memory, flash, switching ASIC, transceivers, running an OS with three planes.]

Mgt Plane: Telnet, SSH, SNMP, NTP, SYSLOG, HTTP, FTP/TFTP
Control Plane: network topology, ACLs, forwarding & routing, QoS, link management
Data Plane: link, switching, forwarding, routing

Each network element has its own control and management plane.


Optimized

23

Software Defined Network Stack

[Diagram: the switch hardware (CPU, memory, flash, switching ASIC, transceivers) retains its OS, Mgt Plane (Telnet, SSH, SNMP, NTP, SYSLOG, HTTP, FTP/TFTP) and Data Plane (link, switching, forwarding, routing); the Control Plane (network topology, ACLs, forwarding & routing, QoS, link management) and its Mgt Plane move into a Software Defined Network stack, connected via the OpenFlow protocol.]

The control plane is extracted from the network; services run as Apps (multipath, security, FCF, …) on top of it.


Optimized

24

OpenFlow rules

Each flow entry has three parts:

■ Rule (match fields), plus a mask selecting which fields to match:
  Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport
■ Action (instructions):
  1. Forward packet to port(s)
  2. Encapsulate and forward to controller
  3. Drop packet
  4. Send to normal processing pipeline
  5. Modify fields
  6. Optional vendor-provided hash algorithms (e.g. for load balancing)
■ Stats (counters): packet + byte counters
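The rule/action/stats triple above can be sketched in code. This is a minimal, hypothetical model (the class and field names are illustrative, not from any OpenFlow library), covering the ten match fields shown on the slide, where an unset field is treated as wildcarded by the mask:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlowEntry:
    # Rule (match fields); None means the mask wildcards the field
    in_port: Optional[int] = None
    vlan_id: Optional[str] = None
    mac_src: Optional[str] = None
    mac_dst: Optional[str] = None
    eth_type: Optional[int] = None
    ip_src: Optional[str] = None
    ip_dst: Optional[str] = None
    ip_proto: Optional[int] = None
    tcp_sport: Optional[int] = None
    tcp_dport: Optional[int] = None
    # Action (instructions), e.g. ("output", 6), ("controller",), ("drop",)
    actions: list = field(default_factory=list)
    # Stats (packet + byte counters)
    packets: int = 0
    bytes: int = 0

    def matches(self, pkt: dict) -> bool:
        """A field matches if the rule wildcards it or the values are equal."""
        for name in ("in_port", "vlan_id", "mac_src", "mac_dst", "eth_type",
                     "ip_src", "ip_dst", "ip_proto", "tcp_sport", "tcp_dport"):
            want = getattr(self, name)
            if want is not None and pkt.get(name) != want:
                return False
        return True
```

Note that real OpenFlow 1.0 matches on twelve fields (it adds VLAN priority and IP ToS); the sketch keeps only the ten the slide shows.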


Optimized

25

Example operations

Field order in each table: Switch Port | MAC src | MAC dst | Eth type | VLAN ID | IP Src | IP Dst | IP Prot | TCP sport | TCP dport | Action

Standard switching on MAC addrs:
  * | * | 00:1f:.. | * | * | * | * | * | * | * | port6

Switching based on application-level flow:
  port3 | 00:20.. | 00:1f.. | 0800 | vlan1 | 1.2.3.4 | 5.6.7.8 | 4 | 17264 | 80 | port6

Firewall / filtering rules:
  * | * | * | * | * | * | * | * | * | 22 | drop

IP destination-based routing:
  * | * | * | * | * | * | 5.6.7.8 | * | * | * | port6

MAC address switching with VLAN check:
  * | * | 00:1f.. | * | vlan1 | * | * | * | * | * | port7


Optimized

26

IBM RackSwitch G8264 OF Switch (Plug 1)

1st OpenFlow single-chip switch to pass the 1 Terabit per second barrier!

■ OpenFlow-based flow handling in hardware at line rate (1.28 Tbps)
■ Supports a Layer 2 (MAC) forwarding table manipulated thru OF:
  – Layer 2 (MAC) table: max 97K flow entries
  – 12-tuple flow table: max 750 flow entries
■ Partner-based OF Controllers (NEC pFlow, BigSwitch Floodlight, …)

Specifications:
  Forwarding: 1.28 Tbps; 960 Mpps; delay less than 1 us
  Number of ports: 48 x 1 Gb/10 Gb SFP+ ports; 4 x 40 Gb QSFP+ uplinks (or 4x 10G with QSFP-to-SFP+ cable); up to 64 x 1 Gb/10 Gb SFP+ ports with optional breakout cables
  Model: airflow-type rear to front, or front to rear
  Dimensions: 17.3" wide; 19.0" deep; 1U high
  Protocol version: OpenFlow 1.0.0
  Number of instances: 1
  Protocols: no legacy protocols running in OpenFlow switch mode
  Management: Telnet, SSH, SNMP, sFlow
  Redundancy: power/fan


27

Optimized

OF Customer: Tervela

Provider of a market-leading distributed data fabric for global trading, risk analysis and e-commerce.

  Test 1: OpenFlow delivers fast packet forwarding
  Test 2: OpenFlow switches segregate traffic
  Test 3: OpenFlow switches manage multiple trunks

Key Benefits: deterministic latency, predictable network performance, rapid convergence.

Tervela's testing validated that the IBM and NEC OpenFlow solution ensures predictable performance of Big Data for complex and demanding business environments.


Optimized

28

OF Customer: Selerity

Ultra-low latency, real-time financial information provider.

Low-latency event data: U.S. corporate earnings, U.S. earnings preannouncements, energy statistics, macroeconomic indicators.

Key Benefits: Selerity's IBM and NEC OpenFlow solution improves real-time decision-making for global financial markets.

29

Summary so far

[Diagram: an SDN controller driving an OpenFlow modular switch and OpenFlow TORs that connect servers (with VMs on an overlay network), storage, and service-plane appliances.]

Can OpenFlow configure:
  ■ Forwarding rules?
  ■ Access controls?
  ■ Core switch attributes?

What about service-plane appliances (e.g. IPS)? Overlay vSwitch infrastructure?




31

What's next?

[Diagram: the SDN controller gains Network APIs, an OS with OpenFlow, Path Service and Overlay drivers, and an Intrusion Prevention Service, alongside the OpenFlow modular switch, OpenFlow TORs, servers with VMs, and the overlay network.]

How can we provide:
  ■ Virtual overlays?
  ■ Service-plane appliances?


32

Why a Distributed Overlay Virtual Ethernet (DOVE) network?

Because virtualization increased network complexity.


33

Virtualization increased network complexity

Before Virtualization:
  ■ Static workload
  ■ Static network state
  ■ Network was simple (configured once)


34

Virtualization increased network complexity

Virtualization extended the physical network into the server's virtualization infrastructure, without mechanisms to tie the two together: when vSwitch state moves, what about physical switch state?

Before Virtualization:
  ■ Static workload
  ■ Static network state
  ■ Network was simple (configured once)

After Virtualization:
  ■ Dynamic workload
  ■ Dynamic network state
  ■ Complex network (state moves with VM)


Automated

35

IBM VMready and DVS-5000V (Plug 2)

With IEEE 802.1Qbg, vSwitches (e.g. DVS 5000v) coordinate the migration of network state with the physical network: when vSwitch state moves, physical switch state moves.

DVS 5000v:
  ■ System Networking OS
  ■ vCenter integration
  ■ Network virtualization automation


Automated

36

Virtual Machine network state migration with Qbg

[Diagram: an IBM BC-H chassis and IBM rack servers running DVS 5000v vSwitches with VMs; network state migrates with the VMs via Qbg along the communication path between Virtual Machines.]

■ IBM System Networking OS feature set on a distributed, virtual switch for VMware
■ Seamless integration with VMware vCenter
■ Standards (Qbg) based network virtualization coordination between Hypervisor & physical switch
■ Optimizes East-West traffic between VMs in a single server (VEB) & VMs within a rack/chassis (VEPA)
■ Administration simplicity (VEPA)


Automated

37

Network Virtualization Trends

[Chart: Virtual Machines per 2-socket server, log scale 1 to 1000, 2006 to 2016, growing approximately 10x every 10 years; workloads range from infrastructure, groupware, database and web toward email, application and terminal server.]

■ The number of VMs per socket is rapidly growing (10x every 10 years).
  – Increases the amount of VM-VM traffic in Enterprise Data Centers (e.g. co-resident Web, Application & Database).
  – VM growth increases the network complexity associated with creating/migrating layer-2 (VLANs, ACLs…) & layer-3 (e.g. Firewall, IPS) attributes.

Automated

38

Distributed Overlay Virtual Ethernet

(DOVE) Network

SDN

Controller

© 2012 IBM Corporation

� Multi‐tenant aware,

including virtual service

plane appliances

(e.g. Firewall, IPS)

� Layer‐3 DOVE switch

(e.g. VXLAN) decouples

virtual networks

from physical network

� Simple “configure

once” physical network

(vs configured per VM)


Automated

39

DOVE Technology: VXLAN-based Encapsulation Example

Encapsulated packet:
  Outer MAC | Outer IP | UDP | EP Header | Inner MAC | Inner IP | Payload
Original packet:
  Inner MAC | Inner IP | Payload

Encapsulation Protocol (EP) Header (e.g. VXLAN based; the VXLAN extension, a necessary IETF version field, is shown in yellow on the slide):
  Version | I | R | R | R | Reserved
  Domain ID | Reserved
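The 8-byte EP header layout above can be packed in a few lines. This is a sketch under assumptions: the version nibble is the slide's proposed extension (not standard VXLAN), the I flag bit follows the VXLAN convention (0x08, domain ID valid), and the function name is illustrative:

```python
import struct

def ep_header(domain_id: int, version: int = 0) -> bytes:
    """Pack a VXLAN-like EP header: flags/version byte with the I bit set,
    3 reserved bytes, 24-bit domain/tenant ID, 1 reserved byte."""
    assert 0 <= domain_id < (1 << 24)
    flags = (version << 4) | 0x08   # hypothetical version nibble + I flag
    vni_field = domain_id << 8      # 24-bit ID in the high bytes, low byte reserved
    return struct.pack("!B3xI", flags, vni_field)

hdr = ep_header(0xABCDE)
print(hdr.hex())  # 080000000abcde00
```

The 8-byte header is then carried inside UDP, ahead of the original (inner) Ethernet frame, as in the layout on the slide.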


Automated

40

Improving Networking Efficiency for Consolidated Servers

[Diagram: a server hosting VMs across a Layer-2/Layer-3 boundary: an APP tier (e.g. 10.0.5.1, 10.0.5.4, 10.0.5.5), an HTTP tier (e.g. 10.0.3.3, 10.0.3.6, 10.0.3.8, 10.0.3.9, 10.0.3.41), a Database at 10.0.0.42, a distributed vSwitch, vAppliances (e.g. at 10.0.5.7), and an external Layer-3 appliance (e.g. IPS).]

■ Hypervisor vSwitches enable the addition of virtual appliances (vAppliances), which provide secure communication across subnets (e.g. APP to Database tier).
  – However, all traffic must be sent to an external Layer-3 switch, which is inefficient considering VM/socket growth rates and integrated servers.
■ Solving this issue requires cross-subnet communications in the Hypervisor's vSwitch.


Automated

41

Multi-Tenant with Overlapping Address Spaces

[Diagram: two sites, each with servers joined by a DOVE network; tenants "Coke" and "Pepsi" each run their own overlay network whose VMs (Database, APP and HTTP tiers) reuse overlapping addresses, e.g. 10.0.3.42 / 00:23:45:67:00:25 appearing in both tenants. vSwitches and vAppliances are not shown.]

■ Multi-tenant Cloud environments require multiple IP address spaces within the same server, within a Data Center and across Data Centers.
  – Layer-3 Distributed Overlay Virtual Ethernet (DOVE) switches enable multi-tenancy all the way into the Server/Hypervisor, with overlapping IP address spaces for the Virtual Machines.
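The key idea behind overlapping address spaces can be shown in a tiny sketch: in a DOVE-style overlay, forwarding state is keyed by (tenant, overlay address), not by the address alone. The tenant names and host locations below are illustrative, not from any DOVE implementation:

```python
# Tenant-scoped mapping table: "Coke" and "Pepsi" both use 10.0.3.42
# without colliding, because the tenant is part of the lookup key.
location = {
    ("Coke", "10.0.3.42"): "host-A",   # physical endpoint of Coke's VM
    ("Pepsi", "10.0.3.42"): "host-B",  # same overlay IP, different tenant
}

def resolve(tenant: str, overlay_ip: str) -> str:
    """Map a tenant-scoped overlay address to its physical endpoint."""
    return location[(tenant, overlay_ip)]

print(resolve("Coke", "10.0.3.42"))   # host-A
print(resolve("Pepsi", "10.0.3.42"))  # host-B
```

In a real deployment the tenant is carried on the wire as the domain ID in the encapsulation header, so the receiving DOVE switch can perform the same scoped lookup.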


Integrated

42

Software Defined Networking Technologies

[Diagram: SDN Controller Platform: a Network Control OS with Native Switch (L-2/3), OpenFlow and DOVE drivers; Path Services, Multi-tenant Security Services and SAN Services exposed through Network APIs; a Network Element Manager above; software on top of HW & embedded SW (5000v vSwitches, WAN, SAN) below.]

■ Network functions delivered as services
  – Multi-tenant VM security
  – Virtualized load balancing
■ Network APIs provide an abstract interface into the underlying controller
  – Distribute, configure & control state between services & controllers
  – Provide multiple abstract views
■ Network Operating System drives a set of devices
  – Physical devices (e.g. TOR)
  – Virtual devices (e.g. DVS 5000v)


43

For SDN, what vulnerabilities exist?... and

What protections shield those vulnerabilities?



44

Traditional Network Vulnerabilities

■ Attack targets:
  – Applications: network Apps, general Apps (e.g. Database)
  – Servers: Transport, OS, Hypervisor
  – Network: routers, switches, virtual switches

Layer | Vulnerability Examples
  Application (includes OS "Apps") | General: cross-site scripting, buffer overflow, SQL injection…; Net: DNS cache poisoning, …
  Transport | TCP: SYN flood; SYN/ACK scan; spoofed RST; hijack; …; UDP: smurf DoS attack; spoofed ping-pong; …
  IP / FC | IP/Routing: MIM routing attack, FIRP attack, IP spoofing, ping flood, ICMP destination unreachable, smurf attack, source attack, …; FC: target/initiator spoofing; MAC (FCF) spoofing
  Link (Ethernet) | ARP cache poisoning
  Physical | Physical link tap


45

Traditional Network Defenses

■ Protection mechanisms:
  – Physical: secure perimeter, …
  – Servers: security protocols, defensive programming, …
  – Network layers & Apps: firewall, intrusion detection, intrusion prevention, security protocols, ACLs…

Layer | Protection Examples
  Application (includes OS "Apps") | General: firewall, intrusion detection, intrusion prevention…; Net: DNSSec, SSL, …
  Transport | Encrypt session: SSH, IPSec…; intrusion detection/prevention
  IP / FC | IP/Routing: IP ACL filters, firewall, intrusion detection, OSPF with IPSec, split horizon, "infinite hop count" detection, override source routing, …; FC: zoning, FC ACL filters
  Link (Ethernet) | Ingress/egress MAC ACL filters; VLANs
  Physical | Physical security; authentication protocol


Traditional Network Defense Products (Plug 3)

46

  GX4004, GX5208: inspected throughput 1.5-4 Gbps and 800 Mbps
  GS7800: inspected throughput 23 Gbps
  IBM Virtual Server Security for VMware


47

What are the New Vulnerabilities for SDN?

[Diagram: the SDN stack: a Network Manager and Apps over Network APIs; Path, Security and SAN Services; the SDN Controller Platform with its Network Control OS and Native NOS (L-2/3), DOVE and OpenFlow drivers; 5000v vSwitches and the WAN below.]

■ All traditional attack targets...
  – Applications: network Apps, general Apps (e.g. Database)
  – Servers: Transport, OS, Hypervisor
  – Network: routers, switches, virtual switches
■ Plus, new attack targets:
  – SDN controller: traditional application, server & network attacks
  – Virtual infrastructure: traditional application & server attacks on the hypervisor, virtual switch and VM
  – Network: the OpenFlow protocol, for OF-enabled devices


48

Possible SDN defense approaches: Virtual Network

[Diagram: VMs (Apps on an OS) over a hypervisor vSwitch, connected to the WAN; numbered points 1-3 mark where protection is inserted.]

■ Traditional protection mechanisms, plus one or more of the following:
  1) Virtual Security Apps: push VM-VM traffic thru a virtual appliance
  2) Physical Security Appliance: push traffic thru a physical appliance
  3) SDN-hosted Security Appliance: push traffic thru an SDN-based appliance


49

Possible SDN defense approaches: Virtual Overlay Network

[Diagram: as before: VMs over a hypervisor vSwitch and WAN, with insertion points 1-3.]

■ Traditional protection mechanisms, plus one or more of the following:
  1) Virtual Security Apps: push VM-VM traffic thru a virtual appliance
  2) Physical Security Appliance: push traffic thru a physical appliance
  3) SDN-hosted Security Appliance: push traffic thru an SDN-based appliance
■ What if a DOVE switch is used? Same answers as above, but a DOVE gateway embedded in the appliance is preferable for 2) and 3).


50

Possible SDN defense approaches: OpenFlow Network & Controller

[Diagram: as before, with insertion points 1-3.]

■ Traditional protection mechanisms, plus one or more of the following:
  1) SSL: part of OF 1.0; protects access to the OpenFlow switch's OF processing
  2) Physical Security Appliance: to protect the physical network
  3) SDN-hosted Security Appliance: to protect the SDN controller


51

Call to Action

■ Review the 1.2 OF & 1.0 OF-Config specifications:
  – https://www.opennetworking.org/images/stories/downloads/openflow/openflow-spec-v1.2.pdf
  – https://www.opennetworking.org/images/stories/downloads/openflow/of-config10-final.pdf
■ Attend the Open Networking Summit:
  – http://opennetsummit.org/
■ Join the Open Networking Foundation:
  – http://opennetworking.org/
■ Help provide the security protection software needed for Enterprise Data Center deployments of Software Defined Networking.


52

Why the Open Networking Foundation?

Open Networking Foundation
  • Dedicated to the development, standardization and promotion of Software Defined Networking (SDN)
  • Maintains the OpenFlow specifications

ONF Vision
  • Make SDN the new norm for networks

ONF Mission
  • ONF will create the most relevant SDN standards

ONF Goals
  • Create & promote an ecosystem based on OpenFlow to support SDN
  • Drive industry-wide thought leadership for SDN


53

ONF Ecosystem

[Diagram: the ONF ecosystem spans Operators & Service Providers, Network Systems, Software & Virtualization, Silicon, and Consumers.]


54

ONF governance

■ Board of Directors
  – Users, not vendors
■ Executive Director (employee)
  – Reports to the Board; vendor neutral
■ Technical Advisory Group
  – ONF CTO function
■ Working Groups
  – Chartered by the Board
  – Chaired by a Board appointee

[Diagram: the Board of Directors over the Executive Director, Technical Advisory Group, Market Education Committee, and a Council of Chairs chairing the Technical Working Groups, plus Regional Activities and Academic Associates.]


55

OpenFlow (OF) & ONF Milestones

[Timeline, 1H2010 through 2Q2012: OF v0.2-1.1 (IPv4, MPLS); ONF launch with 23 members (1Q2011); OF v1.2 (IPv6), 51 members; OF v1.3-1.4 and interoperability testing.]


56

Software Defined Networking Summary

[Diagram: the SDN stack: Network Element Manager; SDN Controller Platform with Network Control OS, Native Switch (L-2/3), DOVE and OpenFlow drivers; Path, Multi-tenant Security and SAN Services over Network APIs; software above, HW & embedded SW (5000v vSwitches) below.]

■ Network Services value:
  – Eco-system for network Apps vs today's closed switch model
■ DOVE Network value:
  – Cloud-scale resource provisioning
  – De-couples virtual network from physical network
■ OpenFlow value:
  – De-couples switch's control plane from data plane
  – Data center wide physical network control


57

Thank You

Renato J Recio

IBM Fellow & Systems

Networking CTO

11400 Burnett Road

Austin, TX 78758

512 973 2217

recio@us.ibm.com


58

Acknowledgements

■ List of folks who contributed information used in this presentation :-)

– Kent Browne

– Marc Cohn

– Brandon Heller

– Jay Kidambi

– Pascal Meunier

– Vijoy Pandey

– Dan Pitt

– Rakesh Saha

– Pietro Volante



59



60

OpenFlow-enabled Switch: Overview

[Diagram: an OpenFlow-enabled switch: vendor-provided forwarding/routing standard control protocols (sw) and vendor-provided data-plane forwarding table (hw), plus an OpenFlow-based forwarding table loaded over the OpenFlow secure channel by an OpenFlow controller cluster.]

OpenFlow provides a protocol for loading a switch's forwarding rules.

OpenFlow components:
  – OpenFlow-enabled Ethernet switch (e.g. IBM 8264)
  – OpenFlow protocol to add/remove flow entries
  – OpenFlow controller(s)


Optimized

61

OF Protocol Message Types

■ Controller-to-switch messages
  – Configuring the switch
  – Exchanging the switch capabilities
  – Managing the flow table
  – Sending a packet out to the network
■ Asynchronous messages
  – From switch to controller, without controller solicitation
  – Announce changes in the switch state, network state and errors; for example: port-status change.
  – Sending an ingress packet to the controller; for example: an ARP from a VM.
■ Symmetric messages
  – Sent in either direction
  – Discover the switch-controller connection (using Hello messages) and maintain it (using Echo request/reply messages).
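The symmetric messages above all share the fixed 8-byte OpenFlow 1.0 header (version, type, length, transaction id). The following sketch frames a Hello and answers an Echo request; the function names are illustrative, but the header layout and the type codes (HELLO=0, ECHO_REQUEST=2, ECHO_REPLY=3) are the OF 1.0 values:

```python
import struct

OFPT_HELLO, OFPT_ECHO_REQUEST, OFPT_ECHO_REPLY = 0, 2, 3

def of_message(msg_type: int, xid: int, payload: bytes = b"") -> bytes:
    """Prepend the 8-byte OF 1.0 header: version 0x01, type, length, xid."""
    return struct.pack("!BBHI", 0x01, msg_type, 8 + len(payload), xid) + payload

def echo_reply(request: bytes) -> bytes:
    """An echo reply carries back the request's xid and payload unchanged."""
    version, _, length, xid = struct.unpack("!BBHI", request[:8])
    return of_message(OFPT_ECHO_REPLY, xid, request[8:length])

hello = of_message(OFPT_HELLO, xid=1)   # both sides send Hello at connect
ping = of_message(OFPT_ECHO_REQUEST, xid=7, payload=b"ka")
print(echo_reply(ping).hex())           # 0103000a000000076b61
```

Because the reply echoes the request's xid, either side can correlate outstanding keep-alives on the (normally SSL-protected) channel.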



Optimized

62

OF Controllers

■ Research
  – Helios (NEC)
  – Floodlight (BigSwitch)
  – NOX (C++, Python)
  – Maestro (Rice University)
  – Beacon (Java)
  – Others in development
■ Commercial
  – ONIX [OSDI 2010, Google, Nicira, NEC]
  – Others expected over time


ONF Technical Working Groups

■ Extensibility (OF 1.x)
  – Wire protocol, extensible match & error messages, forwarding model, MAC, IPv4, IPv6
■ Config-mgmt (OF-Config 1.x)
  – Protocol & schema for configuration and management of a switch
■ Testing-interop (OF-Test 1.x)
  – Interoperability tests, plug-fests; conformance test suites; performance benchmarking
■ Hybrid programmable forwarding plane
  – Insertion of OpenFlow into legacy networks: hybrid switches, hybrid networks
■ OpenFlow-future
  – Forwarding-plane modeling
  – Hardware abstraction (not mandating TCAM per table)
  – Eventually L2/L3 protocol agnostic?
■ Northbound API / SDN abstractions
  – Object & service models, virtualization, characterization, interaction
  – SDN interfaces above OpenFlow
■ Match-action-table
  – Eventual home of field-based rules?
■ Use cases (mailing lists)


64

ONF Legal

■ A non-profit industry consortium, 501(c)(6)
  – Incorporated 2010, launched March 22, 2011
  – Funded by member dues
  – Open to any org. that pays annual dues and agrees to the bylaws & IPR policy
■ IPR policy
  – RAND-Z: royalty-free use of the protocol, OpenFlow trademark, logo
    • Automatic cross-licensing of all related IP to all other members
    • No licensing charges to members
    • No protection for non-members
  – ONF itself: no IP
  – Open interfaces, not open source or reference implementations (great for others)


65

Packet Flow through the Processing Pipeline

■ Packets are matched against multiple tables in the pipeline: a packet enters the OpenFlow switch at Table 0 with its ingress port and an empty action set, flows through Table 1 … Table n carrying packet + ingress port + metadata and an accumulating action set, and finally the action set is executed and the packet exits.
■ Action Set: the set of actions associated with the packet that are accumulated, and executed when the instruction set instructs the packet to exit the processing pipeline.
■ Metadata: a register to carry information from one table to the next.


66

Flowchart of packet flow through the switch

Packet in → start at table 0. On a match in table n:
  1. Update counters
  2. Execute instructions:
     • Update action set
     • Update packet/match set fields
     • Update metadata
  then either go to a later table n (if a goto instruction is present) or execute the action set.
On a miss, based on the table config, do one of:
  • Send to controller
  • Drop
  • Continue to the next table
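The flowchart above can be condensed into a short sketch. This is an illustrative model, not an OpenFlow implementation: tables are modeled as functions, a miss is handled here by sending to the controller (only one of the three miss options), and all names and table contents are made up:

```python
def run_pipeline(tables, packet):
    """Walk tables from 0, accumulating an action set and metadata, until a
    table has no goto instruction (execute the set) or misses (controller)."""
    action_set, metadata, table_id = {}, {}, 0
    while True:
        instructions = tables[table_id](packet, metadata)
        if instructions is None:
            return ["to_controller"]           # table-miss handling
        action_set.update(instructions.get("actions", {}))
        metadata.update(instructions.get("metadata", {}))
        if "goto" in instructions:
            table_id = instructions["goto"]    # must move to a later table
        else:
            return sorted(action_set.items())  # execute accumulated action set

# Hypothetical tables: table 0 tags a tenant in metadata and jumps to table 1,
# which forwards only packets carrying that tag.
table0 = lambda pkt, md: {"metadata": {"tenant": 7}, "goto": 1}
table1 = lambda pkt, md: {"actions": {"output": 6}} if md.get("tenant") == 7 else None

print(run_pipeline({0: table0, 1: table1}, {"ip_dst": "5.6.7.8"}))
# [('output', 6)]
```

The metadata register is what lets table 0's decision influence table 1, exactly as the "update metadata" box in the flowchart describes.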


67

OpenFlow Actions (Partial list)

� Output to switch port (Physical ports & virtual ports).

Virtual ports include the following:

– ALL (all standard ports excluding the ingress port)

– CONTROLLER (encapsulate and send the packet to controller)

– LOCAL (switch’s stack)

– NORMAL (process the packet using traditional non‐OpenFlow pipeline of the switch)

� Set fields

– Ethernet Source address

– Ethernet Dest address

– IP source & dest addresses, IP ToS, IP ECN, IP TTL

– TCP/UDP source and destination ports

� Decrement IP TTL

� Add (push) a new VLAN tag

� Strip (pop) the outer VLAN tag

� Set queue ID when outputting to a port

� Apply Group (more on this next slide)
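The actions listed above can be combined into an action list for a flow. The encoding below uses plain dicts as a sketch; it is not the OpenFlow wire format or any particular controller library's API, and the field values are hypothetical.

```python
# Illustrative action list: rewrite L2/L3 fields, decrement the TTL,
# tag the packet, optionally pick an egress queue, then output to a port.

def rewrite_and_forward(out_port, queue_id=None):
    actions = [
        {"type": "set_field", "field": "eth_src", "value": "00:00:5e:00:53:01"},
        {"type": "set_field", "field": "ipv4_dst", "value": "10.0.0.9"},
        {"type": "dec_ip_ttl"},                  # decrement IP TTL (routed hop)
        {"type": "push_vlan", "vlan_id": 42},    # add (push) a new VLAN tag
    ]
    if queue_id is not None:                     # set queue ID before output
        actions.append({"type": "set_queue", "queue_id": queue_id})
    actions.append({"type": "output", "port": out_port})  # output comes last
    return actions

acts = rewrite_and_forward(out_port=3, queue_id=1)
```

Ordering matters: field rewrites and the queue selection must precede the output action, since output sends the packet as it stands at that point.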

© 2012 IBM Corporation


68

OpenFlow Groups

� The ability for a flow to point to a group enables OpenFlow to represent

additional methods of forwarding (e.g. select & all)

� Group entry consists of group ID, group type, counters and list of action buckets

� Action buckets: each action bucket contains a set of actions to execute

� Group Type:

– All: Execute all buckets in the group. Used for multicast or broadcast forwarding.

– Select: Execute one bucket in the group, based on a switch‐computed algorithm (e.g.

hash on some tuple, round‐robin, etc.). Used for load balancing.

– Indirect: Execute the one defined bucket in this group. Allows multiple flows or groups

to point to a common group identifier, supporting faster, more efficient convergence

(e.g. next hops for IP forwarding).

– Fast failover: Execute the first live bucket. Enables the switch to change forwarding

without requiring a round trip to the controller. Can be used for active–standby.
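The four group types' bucket-selection rules can be sketched as follows. The group and bucket structures are hypothetical; CRC32 stands in for whatever hash a real switch computes for select groups.

```python
# Sketch of bucket selection for the four OpenFlow group types.
import zlib

def select_buckets(group, packet=None, liveness=None):
    gtype, buckets = group["type"], group["buckets"]
    if gtype == "all":            # multicast/broadcast: every bucket executes
        return buckets
    if gtype == "select":         # load balancing: hash a tuple to pick one
        h = zlib.crc32(repr(sorted(packet.items())).encode())
        return [buckets[h % len(buckets)]]
    if gtype == "indirect":       # single bucket shared by many flows/groups
        return [buckets[0]]
    if gtype == "fast_failover":  # first live bucket; no controller round trip
        for b in buckets:
            if liveness[b["watch_port"]]:
                return [b]
        return []                 # no live bucket: the packet is dropped

# Fast failover: port 1 is down, so the standby bucket on port 2 is chosen.
ff = {"type": "fast_failover",
      "buckets": [{"watch_port": 1, "out": 1}, {"watch_port": 2, "out": 2}]}
chosen = select_buckets(ff, liveness={1: False, 2: True})
```

The fast-failover branch shows why no controller round trip is needed: liveness of the watched ports is state the switch already tracks locally.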

© 2012 IBM Corporation


69

Traditional Network Vulnerabilities & Protection

Application

– Vulnerability examples – General: cross-site scripting, buffer overflow, SQL injection, …; Net: DNS cache poisoning, …

– Protection examples – General: firewall, intrusion detection, intrusion prevention, …; Net: DNSSec, SSL, …

Transport

– Vulnerability examples – TCP: SYN flood, SYN/ACK scan, spoofed RST, hijack, …; UDP: smurf DoS attack, spoofed ping-pong, …

– Protection examples – Encrypt session: SSH, IPSec, …; intrusion detection/prevention

IP / FC

– Vulnerability examples – IP/Routing: MIM routing attack, FIRP attack, IP spoofing, ping flood, ICMP destination unreachable, smurf attack, source attack, …; FC: target/initiator spoofing, MAC (FCF) spoofing

– Protection examples – IP/Routing: IP ACL filters, firewall, intrusion detection, OSPF with IPSec, split horizon, "infinite hop count" detection, override source routing, …; FC: zoning, FC ACL filters

Ethernet Link

– Vulnerability examples – ARP cache poisoning

– Protection examples – Ingress/egress MAC ACL filters; VLANs

Physical

– Vulnerability examples – Physical tap; wireless "tap"

– Protection examples – Physical security; authentication protocol

70

Trademarks and disclaimers

Trademarks IBM Corporation 2012

– IBM, the IBM logo and ibm.com are trademarks or registered trademarks of International Business Machines

Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on

their first occurrence in this information with the appropriate symbol (® or ™), these symbols indicate US registered or

common law trademarks owned by IBM at the time this information was published. Such trademarks may also be

registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at

“Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.

– Other company, product and service names may be trademarks or service marks of others.

– References in this publication to IBM products or services do not imply that IBM intends to make them available in all

countries in which IBM operates.

Disclaimers

– This information is provided on an "AS IS" basis without warranty of any kind, express or implied, including, but not

limited to, the implied warranties of merchantability and fitness for a particular purpose. Some jurisdictions do not

allow disclaimers of express or implied warranties in certain transactions; therefore, this statement may not apply to

you.

– This information is provided for information purposes only as a high level overview of possible future products.

PRODUCT SPECIFICATIONS, ANNOUNCE DATES, AND OTHER INFORMATION CONTAINED HEREIN ARE SUBJECT TO

CHANGE AND WITHDRAWAL WITHOUT NOTICE.

– IBM reserves the right to change product specifications and offerings at any time without notice. This publication

could include technical inaccuracies or typographical errors. References herein to IBM products and services do not

imply that IBM intends to make them available in all countries.

– IBM makes no warranties, express or implied, regarding non‐IBM products and services. IBM makes no

representations or warranties with respect to non‐IBM products. Warranty, service and support for non‐IBM

products is provided directly to you by the third party, not IBM.

– All part numbers referenced in this publication are product part numbers and not service part numbers. Other part

numbers in addition to those listed in this document may be required to support a specific device or function.

