108 Chapter 3: Machine Learning Methods for Code and Decoder Design

[Flow chart: at iteration t = 0, the weights are initialized to 1, as are the alleles of the genetic algorithm; the genetic algorithm is run with cost function f(C, t), where cost = x^(F)_vc(t) − y^(C)_vc(t); the best w^(2t) and w^(2t+1) are kept; then t = t + 1 and the loop repeats while t < N_iter.]

Figure 3.7: Flow chart of the optimization procedure using a genetic algorithm to find, at each iteration, the best weights minimizing the cost function. N_iter is the number of decoding iterations for which we look for the correcting weights.
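The outer loop of Figure 3.7 can be sketched as follows. This is a minimal, self-contained genetic algorithm (tournament selection, uniform crossover, Gaussian mutation, elitism) with a toy quadratic cost standing in for the decoder-based cost f(C, t) of the figure; the population size, operators, bounds, and the target weights are illustrative assumptions, not the settings used here:

```python
import random

def genetic_minimize(cost, dim, pop_size=200, generations=50,
                     lo=0.0, hi=2.0, mutation_rate=0.1, seed=0):
    """Minimize `cost` over real vectors in [lo, hi]^dim with a basic GA."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(generations):
        new_pop = [best[:]]                      # elitism: keep the best so far
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            p1 = min(rng.sample(pop, 3), key=cost)
            p2 = min(rng.sample(pop, 3), key=cost)
            # uniform crossover: each gene comes from either parent
            child = [rng.choice(genes) for genes in zip(p1, p2)]
            # Gaussian mutation, clipped to the search box
            for i in range(dim):
                if rng.random() < mutation_rate:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1)))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=cost)
    return best

# Toy stand-in for f(C, t): squared distance to hypothetical optimal weights.
target = [0.8, 1.2]
cost = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, target))
w = genetic_minimize(cost, dim=2)
```

In the actual procedure, evaluating the cost of one population vector requires N_c decoding simulations, which is what makes large populations expensive.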

3.3 Machine Learning Methods for Decoder Design 109

of the genetic algorithm to the global minimum of the cost function, the size of the population must be as large as possible. In practice, to limit the computation time, it is widely accepted that the population size should be at least several hundred [74].

When the mutual information is close to 1, it turns out to be very difficult to obtain an accurate estimate of the actual mutual information of the messages of the code C through Equation 3.10. Indeed, the closer the mutual information is to 1, the rarer the observations which give rise to decoding errors. Since the number N_c of decodings, for one set of weights, has to be limited for computational reasons, an accurate estimation of the mutual information becomes almost impossible. This problem is related to error-floor estimation, on which prior work exists [35]. In our case, however, the method would require an error-floor estimation for each decoder, that is, for each population vector. This prohibitive drawback is what made all our attempts unsuccessful. Moreover, such a correction of the BP algorithm would be most interesting in the error-floor region, where the above-mentioned drawback is, more than ever, present.

Finally, it is interesting to note that all these decoders inspired by neural network models do not preserve the symmetry of messages. Indeed, it is easy to check that if a random variable X (standing for an LDR message) is symmetric in the sense of Definition 1 in [10] (which is just the binary instance of Definition 1.13), then the random variable Y = αX, for any α ≠ 1 in R, is no longer symmetric.

3.3.6 Some other methods

With the goal of investigating how artificial learning methods could contribute to the design of efficient coding systems, we have tried to see how other kinds of learning approaches could be applied to channel coding.
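This loss of symmetry can be checked numerically. The sketch below assumes the usual consistent-Gaussian model for LLR messages, X ~ N(m, 2m), which satisfies the symmetry condition p(x) = e^x · p(−x); after scaling by α, the density ratio becomes exp(x/α) instead of e^x, so Y = αX is no longer symmetric (the values of m, α, and the test points are arbitrary):

```python
import math

def gauss_pdf(x, mean, var):
    """Density of N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

m = 2.0  # X ~ N(m, 2m): a symmetric (consistent-Gaussian) LLR message
for x in (0.5, 1.0, 3.0):
    ratio = gauss_pdf(x, m, 2 * m) / gauss_pdf(-x, m, 2 * m)
    assert abs(ratio - math.exp(x)) < 1e-9  # p(x)/p(-x) = e^x: symmetric

alpha = 0.7  # Y = alpha * X ~ N(alpha*m, 2*alpha^2*m)
for x in (0.5, 1.0, 3.0):
    ratio = (gauss_pdf(x, alpha * m, 2 * alpha ** 2 * m)
             / gauss_pdf(-x, alpha * m, 2 * alpha ** 2 * m))
    assert abs(ratio - math.exp(x / alpha)) < 1e-9  # ratio is e^(x/alpha), not e^x
```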
Min-cut max-flow analysis

Our purpose is to detect bad topologies in the Tanner graph of a code, bad topologies being sets of edges which make the decoding get stuck. Still using the mutual information of the messages on a given edge as a quality descriptor of that edge, one may consider the iteration at which the mutual information on each edge remains stable or periodic but no longer converges to 1. At this point, the idea would be to regard the mutual information as a quantity of liquid which has to increase until it is maximal, in a water pipe network.

Let us consider a water pipe network. For each pipe, the theoretical maximum throughput of liquid inside is called the capacity, and the current throughput is called the flow. If the capacity of each pipe is known, the Ford-Fulkerson algorithm [80] finds the maximal flow, shortened as max-flow, between a source at the beginning of the network and a sink at the end. It also detects the minimum cut, shortened as min-cut, that is, the set of pipes which limits the flow. For the pipes defining the min-cut, the flow in each pipe equals its capacity. Then the idea was to consider the mutual information of messages on each edge, when
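The Ford-Fulkerson principle recalled above can be illustrated on a toy pipe network. The sketch below is a generic Edmonds-Karp variant (BFS for shortest augmenting paths), not the procedure applied to Tanner graphs here, and the example network with its capacities is hypothetical:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Return the max-flow value and the source side of a min cut.
    `capacity` maps directed edges (u, v) to their capacities."""
    nodes = {u for u, _ in capacity} | {v for _, v in capacity}
    residual = {}
    for (u, v), c in capacity.items():
        residual[(u, v)] = residual.get((u, v), 0) + c
        residual.setdefault((v, u), 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        # bottleneck along the path = amount of flow we can push
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= push
            residual[(v, u)] += push
        flow += push
    # nodes still reachable from the source define the min cut
    cut_side, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for v in nodes:
            if v not in cut_side and residual.get((u, v), 0) > 0:
                cut_side.add(v)
                stack.append(v)
    return flow, cut_side

# Toy network: the pipes ('a', 'b') and ('s', 'b') limit the flow.
caps = {('s', 'a'): 3, ('a', 'b'): 1, ('b', 't'): 3, ('s', 'b'): 1}
flow_value, cut = max_flow(caps, 's', 't')
```

On this example the max-flow is 2 and the min-cut separates {s, a} from {b, t}: the saturated pipes (a, b) and (s, b) are the bottleneck, just as the saturated edges of a Tanner graph would flag a bad topology.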