3.2 Machine Learning Methods for Code Design 97

In order to decide which edges of the mother code should be pruned away so as to lower the minimum distance as little as possible, the idea is to formalize an analogy between the graph of a code and an artificial neural network (ANN). The definition of an ANN is given below. Viewed this way, the edge-pruning problem is not a common machine learning problem. Indeed, applying a learning process to an ANN usually assumes that the structure, i.e., the connections between neurons, is already determined, and what is learnt is the weight of each connection. When the learning process is said to be "supervised", the desired output of each neuron of the output layer is known for each input prototype from a training set.
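To make the distinction concrete, here is a minimal sketch of supervised weight learning on a single linear neuron (a delta-rule update): the connections are fixed, and only the weights change to move the output toward a known target. The function name and learning rate are illustrative, not from the text.

```python
def delta_rule_step(x, w, target, lr=0.1):
    # One supervised update: the structure (which inputs connect to the
    # neuron) is fixed; only the weights w are adjusted.
    y = sum(xi * wi for xi, wi in zip(x, w))   # linear neuron output
    error = target - y                          # desired output is known
    return [wi + lr * error * xi for xi, wi in zip(x, w)]
```

Repeating this step over all prototypes of a training set is the usual supervised learning loop; note that no connection is ever created or removed.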

Our problem is rather different, since it consists in finding the structure of the network: which connections between neurons should exist. In practice, however, the structure of the network is usually decided in an ad hoc way or with simple heuristic rules [74]. Indeed, apart from an exhaustive search, no method is known to determine the optimal architecture for a given problem. A suboptimal solution consists in using constructive algorithms, starting from a minimal architecture and then adding neurons and connections progressively during the learning process [74]. Another solution considers the inverse technique: starting from a fully interconnected structure, it removes the neurons or connections that seem non-essential. We are going to focus on the latter methods.
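The inverse (pruning) technique can be sketched with a generic magnitude-based heuristic: starting from all connections of a fully interconnected structure, keep only those whose weights matter most and discard the rest. This is one common pruning criterion, not necessarily the one developed later in the text; the function name and connection encoding are assumptions.

```python
def prune_smallest(weights, keep):
    # weights: dict mapping a connection (i, j) -> its learnt weight.
    # Keep only the `keep` connections with the largest magnitude,
    # treating small-magnitude connections as non-essential.
    ranked = sorted(weights, key=lambda c: abs(weights[c]), reverse=True)
    return {c: weights[c] for c in ranked[:keep]}
```

In practice such a step is usually interleaved with retraining, so that the remaining connections can compensate for the removed ones.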

3.2.2 Neural networks and Tanner graphs of codes

Definition

Definition 10 A formal neuron is basically a processor which applies a simple operation to its inputs, and which can be connected to other identical processors in order to form a network.

Such a neuron is depicted in Figure 3.1 and defined in [75].

A = f(h({x_i}_{i=1..4}, {w_i}_{i=1..4})), y = g(A) (= A most often)

x_i: neuron inputs
w_i: synaptic weights
h: input function
f: activation (or transfer) function
A: neuron activation
g: output function
y: neuron output

Figure 3.1: General definition of a formal neuron (shown with four inputs x_1..x_4 and weights w_1..w_4)
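The definition above can be sketched directly in code. This is a minimal illustration assuming the usual choices: h is a weighted sum, f is a sigmoid activation, and g is the identity (y = A), consistent with the "y = g(A) = A most often" remark; none of these specific choices are prescribed by the definition itself.

```python
import math

def formal_neuron(x, w, f=lambda h: 1.0 / (1.0 + math.exp(-h))):
    # h: input function, here the weighted sum of inputs by synaptic weights
    h = sum(xi * wi for xi, wi in zip(x, w))
    # A: activation, obtained through the activation (transfer) function f
    A = f(h)
    # g: output function, here the identity, so the output y equals A
    y = A
    return y
```

Connecting the output of such a neuron to the inputs of other identical ones yields the network structure discussed in the rest of the section.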