
Machine learning for cooperative networks

Kavé Salamatian

Lancaster University


Actual view of Internet networks

Nodes implement network layers

Layers are shielded and interact directly only with the layer above and the layer below

Protocols provide the indirect interaction between layers of the same level on different nodes

Different nodes play different roles: routers, hosts, firewalls, …

[Figure: per-node protocol stacks (application, transport, network, data link, physical) across end hosts and routers. Courtesy of J. Kurose & K. Ross, from "Computer Networking: A Top-Down Approach."]


Going back to basics

A network is built of components:

Local to a node or distributed

Sitting in one layer or crossing layers

Tightly or loosely coupled

Collaborating to transmit information from point to point

The actual layered architecture is just one specific type of collaboration: collaboration through protocols

Autonomous networking idea: moving from a pile view to a puzzle view

Need for a new theoretical framework


Cooperation ?

Full cooperation

Adopt the best possible behavior to achieve a performance goal

Is the goal achievable? How to achieve the goal?

Non-cooperative

Selfish behavior: a different rational goal. How to mitigate conflicting rational goals?

Malicious behavior: a harmful goal. How to contain irrational objectives?

Cooperation assumes rationality

Social intelligence

Machine learning


Cooperation framework

• Each node implements a forwarding function

• The forwarding function implements the cooperation


The forwarding function $f_i$ of node $i$ maps the input histories observed on its $N$ incoming links up to time $t$ to the outputs emitted on its $M$ outgoing links at a later time $t'$:

$(Y_1^{t'}, Y_2^{t'}, \dots, Y_M^{t'}) = f_i(X_1^{0:t}, X_2^{0:t}, \dots, X_N^{0:t})$

Forwarding function Examples

Flooding: every input is copied to every output,
$Y_j^{t'} = X_i^{t} \quad \forall i, j$

Routing: an input is copied to an output only when a condition on the packet holds,
$Y_j^{t'} = X_i^{t}$ if $\mathrm{cond}(X_i^{t})$, else nothing

Distributed computation: an output is a function of a window of past inputs,
$Y_o^{t+kT} = f(X_i^{t}, X_i^{t+T}, \dots, X_i^{t+kT})$

Network coding: an output is a linear combination of inputs,
$Y_j^{t+T} = \sum_i \alpha_{i,t} \, X_i^{t_i}$

Any other?
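To make the abstraction concrete, here is a minimal Python sketch (not from the slides) of two forwarding functions, flooding and routing; the Packet type, the cond signature, and the example routing table are all invented for illustration:

```python
# Sketch of the forwarding-function abstraction: a node is a function
# from input-link histories to output-link emissions. All names here
# (Packet, flooding, routing) are illustrative, not from the slides.
from typing import Callable, List, Optional

Packet = str  # stand-in for a real packet type

def flooding(inputs: List[List[Packet]], n_outputs: int) -> List[List[Packet]]:
    """Flooding: every newly arrived packet is copied to every output."""
    latest = [hist[-1] for hist in inputs if hist]          # last packet per input
    return [list(latest) for _ in range(n_outputs)]

def routing(inputs: List[List[Packet]], n_outputs: int,
            cond: Callable[[Packet], Optional[int]]) -> List[List[Packet]]:
    """Routing: a packet goes to the single output chosen by cond(), or nowhere."""
    outputs: List[List[Packet]] = [[] for _ in range(n_outputs)]
    for hist in inputs:
        if not hist:
            continue
        pkt = hist[-1]
        j = cond(pkt)                                       # output index or None
        if j is not None:
            outputs[j].append(pkt)
    return outputs

# Example: route packets whose destination matches the table, drop the rest.
inputs = [["dst=A|hello"], ["dst=B|world"]]
table = {"A": 0, "B": 1}
print(routing(inputs, 2, lambda p: table.get(p.split("|")[0][4:])))
```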


Cooperation Incentives

Nodes are selfish:

They just forward messages when there is a benefit

They are pragmatic and rational

They have limited patience and resources

You have to convince them to cooperate, by incentive or by punishment


Classical forwarding ?


More general framework ?


Why to forward ?

Let's define for each packet $P_i$ a set of attributes $A_i = (A_1, A_2, \dots, A_n)$

The destination address $D(P_i)$

Some attributes are extracted from the packet, some come from the local context

Let's define a utility function $U(A_i, D(P_i), \mathrm{ID}, A)$: the utility of forwarding message $i$, directed to $D(P_i)$, to node $\mathrm{ID}$ with context $A$

The utility function captures the selfishness of the node

Forwarding scheme (sketched below):

Calculate the utility of each packet in the buffer

Forward the packet with the largest utility
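A minimal sketch of this utility-driven forwarding loop; the Packet record, the utility-function signature, and all data below are invented for illustration:

```python
# Utility-driven forwarding: score every buffered packet against a
# candidate next hop and forward the one with the largest utility.
# The Packet record and the example data are invented.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Packet:
    dest: str                                                # destination address D(P_i)
    attrs: Dict[str, float] = field(default_factory=dict)    # attribute set A_i

Utility = Callable[[Packet, str, Dict[str, float]], float]   # U(A_i, D(P_i), ID, A)

def forward_best(buffer: List[Packet], node_id: str,
                 context: Dict[str, float], U: Utility) -> Packet:
    """Pick the buffered packet whose forwarding utility to node_id is largest."""
    return max(buffer, key=lambda p: U(p, node_id, context))

# Example use (invented data): prefer the packet with the higher utility.
buf = [Packet("A", {"age": 1.0}), Packet("B", {"age": 3.0})]
best = forward_best(buf, "n1", {}, lambda p, nid, ctx: p.attrs["age"])
```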


Utility functions

Classical routing: assign utility 1 if node $\mathrm{ID}$ is on the path to destination $D(P_i)$, 0 otherwise

PROPHET: the delivery likelihood is the utility

Community or content networking: give a higher utility to some specific contents or communities

What if the utility doesn't depend on the destination address? This results in epidemic forwarding

The utility function can change over time and adapt to changes in the environment:

Spray and focus

Moving from opportunistic to infrastructure mode

Learning the utility function
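Plugging concrete utilities into the forward_best sketch above, here are hedged examples of these cases; the lookup tables are invented, and PROPHET's actual likelihood-update rule is not reproduced:

```python
# Example utility functions for forward_best() above; the helper data
# (routing table, delivery-likelihood table) is invented.
on_path = {("A", "n1"): True}              # is node n1 on the path to A?
delivery_prob = {("A", "n1"): 0.7}         # PROPHET-style delivery likelihood

def routing_utility(pkt, node_id, context):
    """Classical routing: 1 if the node lies on the path to the destination."""
    return 1.0 if on_path.get((pkt.dest, node_id), False) else 0.0

def prophet_utility(pkt, node_id, context):
    """PROPHET: the estimated delivery likelihood is the utility."""
    return delivery_prob.get((pkt.dest, node_id), 0.0)

def epidemic_utility(pkt, node_id, context):
    """Destination-independent utility: every packet is equally worth
    forwarding, which degenerates into epidemic forwarding."""
    return 1.0
```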


Case study

Cooperative anomaly detection in a network

Selfish and resource-aware nodes:

Bandwidth, processing-power, and energy constraints

Ready to learn to increase their efficiency

Unreliable links and nodes:

Cannot count on their permanent presence

Might be opportunistically off

Knowing the state of other nodes is useful for detection


State sharing

Each node maintains a state vector

How to share useful information about states with the interested nodes?

In the simplest setting, all the nodes are interested in all the states

Nodes should be able to define which variables they are interested in


A crash course in linear estimation

Let's assume that we observe $Y$ and we want to estimate $X$

The MMSE estimator is known to be the conditional expectation $\hat{X} = E[X \mid Y]$

For jointly Gaussian distributions this reduces to the linear estimator $\hat{X} = \Sigma_{XY} \Sigma_{YY}^{-1} Y$ (zero-mean case)

Example
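As a worked example, here is a minimal numpy sketch of the Gaussian linear MMSE estimator; the observation model and all numbers are invented:

```python
# Linear MMSE estimation for zero-mean jointly Gaussian (X, Y):
# X_hat = Sigma_XY @ inv(Sigma_YY) @ y. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(10000, 2))                 # hidden state X
y = x @ np.array([[1.0], [0.5]]) + 0.3 * rng.normal(size=(10000, 1))  # noisy observation Y

Sigma_XY = (x.T @ y) / len(x)                   # empirical cross-covariance
Sigma_YY = (y.T @ y) / len(y)                   # empirical observation covariance
W = Sigma_XY @ np.linalg.inv(Sigma_YY)          # MMSE gain for zero-mean Gaussians

x_hat = y @ W.T                                 # estimate of X from Y
print("residual MSE:", np.mean((x - x_hat) ** 2))
```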


A crash course in source compression

How to represent $n$ vectors using $nR$ bits?

The optimal local compression scheme consists of two stages:

a projection stage, where the state vector is projected linearly onto an orthonormal basis

a quantization stage, which assigns the bit budget to the different projection dimensions following a water-filling argument
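A hedged numerical sketch of that water-filling bit assignment, using the textbook reverse water-filling rule over the projection variances; the eigenvalues and the target distortion are invented:

```python
# Reverse water-filling: given KLT eigenvalues (component variances) and a
# target total distortion D, find the water level theta and per-component
# rates R_i = max(0, 0.5*log2(var_i/theta)). Eigenvalues here are invented.
import numpy as np

def reverse_waterfill(variances, D, iters=60):
    """Bisect on the water level theta so that sum(min(theta, var_i)) == D."""
    lo, hi = 0.0, max(variances)
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, variances).sum() > D:
            hi = theta          # too much distortion: lower the water level
        else:
            lo = theta
    rates = np.maximum(0.0, 0.5 * np.log2(variances / theta))
    return theta, rates

variances = np.array([4.0, 2.0, 0.5, 0.1])      # KLT eigenvalues (invented)
theta, rates = reverse_waterfill(variances, D=1.0)
print("water level:", theta, "bits per dimension:", rates)
```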


Single Hop case

$m$ nodes are all connected (directly or through an overlay) to a node $c$

An approximation of the states of the $m$ nodes should be derived at $c$

Compression is needed:

Sampling

Local compression

Distributed compression


Local compression

Each node:

calculates its covariance locally

applies a KLT

applies the resulting projection

quantizes

forwards the compressed state vector to node $c$

The state vector is reconstructed at node $c$ (sketched below)
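A compact sketch of this local pipeline, with invented dimensions and a fixed uniform quantizer in place of the water-filling allocation above:

```python
# Local compression pipeline at one node: empirical covariance -> KLT ->
# projection -> uniform quantization -> reconstruction at node c.
# Dimensions, quantizer step and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
states = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # correlated state vectors

# KLT: eigenvectors of the local covariance, sorted by decreasing variance.
cov = np.cov(states, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
U = eigvec[:, order]                       # orthonormal KLT basis

coeffs = states @ U                        # projection stage
step = 0.25                                # quantizer step (invented)
quantized = np.round(coeffs / step) * step # uniform quantization stage

reconstructed = quantized @ U.T            # reconstruction at node c
print("reconstruction MSE:", np.mean((states - reconstructed) ** 2))
```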


Distributed compression

Node $i$ sends to node $c$ a noisy projection of its state

What are the optimal projection and the optimal quantization?

At node $c$, the projection is received

Together with the correlated projections of the other nodes, it can be used to estimate the state

The node does not need to send the full state: it just sends the estimation error
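A toy sketch of the "send only the estimation error" idea, using an invented scalar AR(1) state so that sender and receiver can maintain a shared prediction:

```python
# Innovation coding: node c keeps a running prediction of node i's state;
# node i transmits only the quantized prediction error (the innovation).
# The AR(1) state model and all constants are invented.
import numpy as np

rng = np.random.default_rng(2)
step = 0.1                                   # quantizer step (invented)
x, x_pred = 0.0, 0.0                         # true state; prediction shared by i and c
for t in range(5):
    x = 0.9 * x + rng.normal()               # node i's evolving state (AR(1), invented)
    innovation = x - x_pred                  # what node c cannot already predict
    q = np.round(innovation / step) * step   # quantize and send only the innovation
    x_hat = x_pred + q                       # node c's reconstruction
    x_pred = 0.9 * x_hat                     # both sides update the shared prediction
    print(f"t={t} state={x:+.3f} sent={q:+.3f} recon_err={x - x_hat:+.3f}")
```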


An iterative approach

Each terminal optimizes its local encoder while all other encoders are held fixed

The algorithm terminates when it has converged, and leads to a local minimum

It can be seen as a distributed learning phase using an EM-like loop

[Figure: an iteration of the distributed KLT algorithm: C2 and C3 are kept fixed while encoder 1 is chosen optimally]
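A hedged numpy sketch of this coordinate descent in the distributed-KLT flavor: each node in turn re-chooses its projection as the top eigenvectors of its block's covariance conditioned on the other nodes' projections, while the others stay fixed; quantization is ignored and the dimensions and covariance are invented:

```python
# Coordinate-descent sketch of the iterative distributed KLT idea.
# Each local step is a "conditional KLT" of the node's own block given
# the other nodes' projections. All parameters below are invented.
import numpy as np

rng = np.random.default_rng(3)
d, m, k = 2, 3, 1                    # per-node dimension, #nodes, projection rank
A = rng.normal(size=(m * d, m * d))
Sigma = A @ A.T                      # joint covariance of the stacked states

def block(i):                        # index slice of node i's state
    return slice(i * d, (i + 1) * d)

def total_mse(Cs):
    """MSE of the linear MMSE estimate of the full state from all projections."""
    C = np.zeros((m * k, m * d))
    for i, Ci in enumerate(Cs):
        C[i * k:(i + 1) * k, block(i)] = Ci
    Syy = C @ Sigma @ C.T
    Sxy = Sigma @ C.T
    return np.trace(Sigma) - np.trace(Sxy @ np.linalg.solve(Syy, Sxy.T))

Cs = [rng.normal(size=(k, d)) for _ in range(m)]      # random initial encoders
for sweep in range(5):
    for i in range(m):               # optimize encoder i, others held fixed
        others = [j for j in range(m) if j != i]
        Co = np.zeros(((m - 1) * k, m * d))
        for r, j in enumerate(others):
            Co[r * k:(r + 1) * k, block(j)] = Cs[j]
        Syy = Co @ Sigma @ Co.T      # covariance of the other nodes' projections
        Siy = Sigma[block(i), :] @ Co.T
        cond = Sigma[block(i), block(i)] - Siy @ np.linalg.solve(Syy, Siy.T)
        w, V = np.linalg.eigh(cond)  # conditional KLT of node i's block
        Cs[i] = V[:, np.argsort(w)[::-1][:k]].T
    print(f"sweep {sweep}: total MSE = {total_mse(Cs):.4f}")
```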


Multi-hop scenario

A node maintains three data structures:

A vector of local states

A preference list and a weighting list, with a variance per variable and a maximal variance

A list of received projections
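These structures might look as follows in an invented Python dataclass; all field names and types are illustrative only:

```python
# The per-node bookkeeping described above, as an invented dataclass.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

VarId = Tuple[str, str]        # (node id, state-variable id)

@dataclass
class NodeState:
    local_states: Dict[str, float] = field(default_factory=dict)   # own state vector
    preferences: Dict[VarId, float] = field(default_factory=dict)  # variable -> weight
    variances: Dict[VarId, float] = field(default_factory=dict)    # per-variable variance
    max_variance: float = 1.0                                      # tolerated estimation variance
    received: List[Tuple[VarId, float]] = field(default_factory=list)  # received projections
```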


Reception processing

Extract the node and state-variable IDs and assign each received value to the correct variables

If a new projection or a new remote state variable is observed, update the data structures

Re-estimate the covariance matrix every 30 transmissions

Knowing the covariance, the estimation of the remote state variables proceeds

Update the estimates
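A skeletal sketch of this reception step over the NodeState structure above; the message format and the estimate_covariance hook are invented:

```python
# Reception step: parse (var_id, value) tuples into the NodeState above,
# and re-estimate the covariance every 30 transmissions. The message
# format and estimate_covariance() are invented placeholders.
def on_receive(state: "NodeState", message: list, history: list) -> None:
    for (var_id, value) in message:            # var_id = (node id, variable id)
        if var_id not in state.variances:      # new remote variable or projection
            state.variances[var_id] = state.max_variance
        state.received.append((var_id, value))
    history.append(message)
    if len(history) % 30 == 0:                 # every 30 transmissions
        estimate_covariance(state, history)    # hypothetical re-estimation step

def estimate_covariance(state, history):
    """Placeholder: re-fit the empirical covariance used for estimation."""
    ...
```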


Preference list processing

At time $k = 0$, the list contains the IDs of the state variables node $i$ is interested in, with a high weight

This list is forwarded to the neighbors

After receiving a neighbor's preference list, a lower weight is assigned to the new values
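One plausible way to implement this weight decay when merging a neighbor's list; the decay factor is invented:

```python
# Preference-list merging: entries learned from a neighbor are taken over
# with a decayed weight, so interest fades with hop distance.
DECAY = 0.5    # invented decay factor

def merge_preferences(own: dict, neighbor: dict) -> None:
    """Fold a neighbor's preference list into ours at reduced weight."""
    for var_id, weight in neighbor.items():
        candidate = DECAY * weight             # lower weight for relayed interests
        if candidate > own.get(var_id, 0.0):
            own[var_id] = candidate

prefs = {("n1", "load"): 1.0}                  # own interests start with weight 1
merge_preferences(prefs, {("n7", "temp"): 1.0})
print(prefs)   # {('n1', 'load'): 1.0, ('n7', 'temp'): 0.5}
```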


Forwarding scheme

Forward the variables that are correlated with the preference lists of the neighbors

This is the lever for the incentive/punishment mechanism

The node implements the distributed compression:

Apply the optimal projection

Forward the projections when they change
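A small sketch of the "forward when it changes" rule; the threshold value and the calling convention are invented:

```python
# Change-triggered forwarding: apply the current optimal projection and
# transmit only when it has moved noticeably since the last transmission.
from typing import Dict, Optional
import numpy as np

THRESHOLD = 0.05   # invented change threshold

def maybe_forward(x: np.ndarray, C: np.ndarray,
                  last_sent: Dict[str, np.ndarray]) -> Optional[np.ndarray]:
    """Return C @ x if it changed enough since the previous send, else None."""
    y = C @ x                                   # projection of the local state
    prev = last_sent.get("y")
    if prev is None or np.linalg.norm(y - prev) > THRESHOLD:
        last_sent["y"] = y
        return y                                # forward the new projection
    return None                                 # suppress: nothing new to report
```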


Incentive or punishment ?

The node adds estimated variables from its own preference list to the variables it forwards

This injects its estimation error as noise into the neighbors' estimation process

Neighbors therefore have an incentive to help it reduce its estimation errors

Cooperation by punishment rather than by incentive


Enforcing collaboration

The proposed punishment mechanism acts as a shadow price

If a mechanism for enforcing shadow prices exists, we can implement a Pareto-optimal cooperation mechanism

Being social becomes helpful

How to enforce shadow prices? By entangling the performance of our neighbors' estimation with our own estimation

Linear projections provide an elegant solution

Cooperation by punishment, not by incentives

Could we add such a functionality to the future Internet?


Cooperation scheme

Cooperation: forward projections

Learn the environment: infer the covariance with the neighbors and use it to estimate the best projections

Benefit from it: estimate the variables in your preference list

Want more benefits? Behave better with your neighbors

Performance


Convergence


Being social is helpful


Conclusion & Perspectives

We defined a cooperative framework for networks

Illustrated with distributed state sharing

Applicable to a large set of scenarios: DTN, distributed compression/transmission

The essence of the distributed setting is social intelligence

Node selfishness is essential
