ETTC'2003 - SEE

Electromagnetic wave propagation and coupling:

The statistical approach to information retrieval

W. Tabbara¹, Ph. De Doncker², M. Hélier¹, D. Lecointe¹

¹ Supélec/DRE, Plateau de Moulon, 91192 Gif-sur-Yvette, France

² Université Libre de Bruxelles, CP 165/51, Av. Roosevelt 50, 1050 Brussels, Belgium

tabbara@lss.supelec.fr

Keywords: Electromagnetic compatibility, statistics, kriging.

Abstract

The aim of this paper is to provide examples of applications of probabilistic and statistical methods (ProStat) to problems dealing, on the one hand, with the coupling of an electromagnetic wave to cavities, cables and circuits and, on the other hand, with indoor propagation. The kriging approach, developed in the field of geostatistics, will be emphasized. It allows the prediction of the values of an observable, within some given set, from a reduced number of values taken in the same set, while also providing an estimate of the accuracy of the predicted values.

The statistical approach

The coupling of an electromagnetic wave to a structure depends on a large number of factors, such as the dimensions of the structure, the electromagnetic parameters of the media, the polarization of the incident wave, and so on. The parametric analysis of an observable (field, current, reflection coefficient...) resulting from the interaction is conducted by means of numerical modeling or experimentation. If an exact solution is sought, a numerical code must be executed repeatedly, leading to a high cost in terms of computing time. Similar conclusions about cost can be drawn when the desired results are obtained by repeated measurements. Thus it is important, in particular when complex systems are investigated, to consider possible means of reducing the cost of the analysis and to estimate the accuracy of the results.

The cost-reduction problem can be looked at in a somewhat unusual manner. Each combination of values of the factors defines one of the many possible configurations of the system. Each configuration is then considered as the outcome of a random experiment, and the following analysis is conducted:

1- Problems of practical interest involve a large number of factors, each having more or less influence on the observable. In order to reduce the computational cost, we only need to keep the most important factors. Moreover, the factors may not be independent parameters, so the correlations between them must be identified. To achieve these two goals, one may resort to the analysis of variance (ANOVA) and to regression techniques [1, 2]. ANOVA, which investigates the influence of one factor at a time, is a fast and simple method, but it does not provide information on the correlations between factors. Regression is a more elaborate approach that may be used in place of ANOVA and is able to give correlation information. It may also be used to interpolate the values of the observable in some database. The interpolating function is then a linear combination of functions of the factors, called regressors (usually polynomials in several variables), and the coefficients of the linear combination are obtained by a mean-square minimization of some cost function (see the regression sketch after this list).

2- A configuration of the system is defined by a combination of values of the factors. Both the factors and the observable are considered as random variables. For some simple systems, it is possible to determine numerically the probability law of the observable from those of the factors. In that case, one usually employs the Monte Carlo technique [1], in which a large number of combinations of the values of the factors are chosen at random and the value of the observable is computed, or measured, for each of these combinations. A histogram of the observable is then drawn and fitted with a function known as the probability density function (pdf) of the observable. Knowledge of the pdf permits the computation of the probability of occurrence of any value of the observable; it also allows one to compute quantiles, which are convenient observables in EMC problems (see the Monte Carlo sketch after this list).

3- Build a small sample of configurations of the system. Then, for each configuration in the sample, the value of the observable is computed (or measured) by standard techniques, and this set of values provides the database.

4- A configuration that does not belong to the database can still be defined by a combination of values of the factors. The value of the observable for this configuration is predicted by interpolating the values in the database by means of kriging [3, 4], a multi-factor parametric approach to system modeling that yields adequate predictions together with an estimate of the accuracy of the predicted values (see the kriging sketch after this list). With this technique, we study the evolution of an observable with respect to all the factors simultaneously, and not one at a time as in a classical parametric analysis.
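
As an illustration of the regression-based screening of step 1, the following Python sketch ranks factor influence from standardized regression coefficients and inspects the correlation between factors. The three factors, the toy observable and all numerical values are hypothetical placeholders for the outputs of an EM code or of measurements.

# Sketch: factor screening by linear regression (hypothetical toy data).
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Three hypothetical factors: cavity length, wall conductivity, incidence angle.
length = rng.uniform(0.5, 2.0, n)
sigma = rng.uniform(1e6, 1e7, n)
angle = rng.uniform(0.0, np.pi / 2, n)

# Toy observable: a made-up coupling level, driven mostly by length and angle.
y = 3.0 * length + 0.5 * np.cos(angle) + 1e-8 * sigma + rng.normal(0, 0.1, n)

# Standardize the factors so the regression coefficients are comparable.
X = np.column_stack([length, sigma, angle])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# Least-squares fit; the magnitude of each coefficient ranks factor influence.
A = np.column_stack([np.ones(n), Xs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("influence (standardized coefficients):",
      dict(zip(["length", "sigma", "angle"], coef[1:])))

# The correlation matrix of the factors reveals dependence between them.
print("factor correlations:\n", np.corrcoef(Xs.T))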
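For step 2, a minimal Monte Carlo sketch, again with a hypothetical toy observable standing in for a real solver or measurement: random factor combinations are drawn, a histogram approximates the pdf of the observable, and a quantile is read off.

# Sketch: Monte Carlo estimation of the pdf and a quantile (toy model).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Draw random combinations of two hypothetical factors.
amplitude = rng.uniform(0.5, 1.5, n)    # incident field amplitude
phase = rng.uniform(0.0, 2 * np.pi, n)  # relative phase

# Toy observable: induced current magnitude (placeholder for a real code).
current = amplitude * np.abs(np.cos(phase))

# The normalized histogram approximates the pdf of the observable.
density, edges = np.histogram(current, bins=100, density=True)
print("pdf peak near:", edges[np.argmax(density)])

# The 95% quantile: a level exceeded in only 5% of configurations,
# the kind of statistic used as a margin in EMC problems.
print("95% quantile of the current:", np.quantile(current, 0.95))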
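For step 4, a minimal ordinary-kriging sketch in a single factor, assuming a Gaussian covariance model with made-up parameters (in practice the covariance, or variogram, is fitted to the database). It returns both a prediction and the kriging variance, i.e. the accuracy estimate mentioned above.

# Sketch: ordinary kriging with an assumed Gaussian covariance (toy data).
import numpy as np

def gauss_cov(d, sill=1.0, corr_len=0.5):
    # Gaussian covariance as a function of the distance d between factor values.
    return sill * np.exp(-(d / corr_len) ** 2)

# Small database: factor values and the corresponding observed responses.
x = np.array([0.0, 0.3, 0.5, 0.8, 1.0])
y = np.sin(2 * np.pi * x)  # stands in for computed or measured values

# Ordinary-kriging system: covariance matrix bordered by the unbiasedness row.
n = len(x)
A = np.zeros((n + 1, n + 1))
A[:n, :n] = gauss_cov(np.abs(x[:, None] - x[None, :]))
A[:n, n] = 1.0
A[n, :n] = 1.0

def krige(x0):
    # Predict the observable at x0 and estimate the prediction variance.
    k = gauss_cov(np.abs(x - x0))
    sol = np.linalg.solve(A, np.append(k, 1.0))
    w, mu = sol[:n], sol[n]
    pred = w @ y
    var = gauss_cov(0.0) - w @ k - mu  # kriging variance: accuracy estimate
    return pred, var

pred, var = krige(0.65)
print(f"prediction at 0.65: {pred:.3f}, kriging variance: {var:.3f}")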
