Only the neurons within the neighborhood $h_{ci}(t)$ are moved near to $X(t)$. The learning rate $\alpha(t) \in [0,1]$ decreases monotonically with time; $\sigma(t)$ and $r(t)$ are neighborhood radii, also decreasing monotonically. Although one-dimensional Kohonen maps have been analyzed in some detail, little is known about the self-organization process in two or three dimensions [9]. The main problem is the lack of a quantitative measure to determine what exactly "the good map" is.

III. MULTIDIMENSIONAL SCALING

MDS techniques emerged from the need to visualize in a two- or three-dimensional space high-dimensional objects described by some measure of their similarities or dissimilarities. The problem is to find the coordinates of points representing the multivariate items in the two- or three-dimensional space in such a manner that the low-dimensional interpoint distances correspond to the dissimilarities of the original objects. MDS takes as input a symmetric matrix of the similarities or dissimilarities between objects, whereas SOM needs absolute coordinates of these objects in the high-dimensional space. Note that for MDS the input space does not even need to be a metric space. If a given observation concerns $n$ objects, there are $n(n-1)/2$ distances between these objects. The SOM algorithm in the same case needs $n \cdot N$ input values, where $N$ is the dimension of the input vectors. If the number of objects $n > 2N + 1$, the Kohonen map uses more information than MDS.

Let $n$ be the number of observed objects in the high-dimensional input space $X_1, X_2, \ldots, X_n$, and let $\delta_{ij}$ be the observed similarities between objects $X_i$ and $X_j$, equivalent to distances $\delta_{ij} = \|X_i - X_j\|$ in metric spaces. Let $Y_i$ be the low-dimensional target space point representing the input object $X_i$, and let $d_{ij}$ be the distance between $Y_i$ and $Y_j$. We have to place the points $\{Y_i, i = 1, \ldots, n\}$ in the target space in such a way that the distances $d_{ij}$ are as close as possible to the original distances $\delta_{ij}$. A sum-of-squared-error function can be used as a criterion to decide whether a given configuration of image points is better than another. There are two commonly used criteria:

- Kruskal's stress: $S = \sqrt{\sum_{i>j} (\delta_{ij} - d_{ij})^2 \,/\, \sum_{i>j} \delta_{ij}^2}$

- Lingoes' alienation coefficient: $K = \sqrt{1 - \left(\sum_{i>j} \delta_{ij} d_{ij}\right)^2 \,/\, \left(\sum_{i>j} \delta_{ij}^2 \sum_{i>j} d_{ij}^2\right)}$

The best configuration is found iteratively (see the sketches at the end of this section):

0. Define a starting configuration for the points $Y_i$, randomly or by a principal components analysis.

While the criterion function significantly decreases, do:

1. Compute the distances $d_{ij}$.
2. Compute the value of the criterion functions $S$ and $K$.
3. Find a new configuration of the points $Y_i$ by a gradient-descent procedure such as Kruskal's linear regression or Guttman's rank-image permutation.

Looking for quantitative measures of the preservation of topography between the high-dimensional input and low-dimensional target spaces, Duch [11] has introduced the stress-like measure $D_1 = S$ and its quadratic version:
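The SOM update rule recalled at the opening of this section can be illustrated with a short sketch of one learning step. It assumes a Gaussian neighborhood $h_{ci}(t)$ and exponential decay schedules for $\alpha(t)$ and $\sigma(t)$; the function name and all parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def som_update(W, x, t, alpha0=0.5, sigma0=3.0, tau=1000.0):
    """One SOM update of a weight grid W of shape (rows, cols, N)
    towards the input vector x at time step t."""
    alpha = alpha0 * np.exp(-t / tau)      # learning rate alpha(t), decaying
    sigma = sigma0 * np.exp(-t / tau)      # neighborhood radius sigma(t), decaying
    # winner neuron c: grid index of the weight vector closest to x
    dist = np.linalg.norm(W - x, axis=-1)
    c = np.unravel_index(np.argmin(dist), dist.shape)
    # Gaussian neighborhood h_ci(t) on the grid, centered at the winner;
    # neurons far from c receive a negligible update
    rows, cols = np.indices(dist.shape)
    grid_d2 = (rows - c[0]) ** 2 + (cols - c[1]) ** 2
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))
    # move neurons towards x, weighted by alpha(t) * h_ci(t)
    W += alpha * h[..., None] * (x - W)
    return W
```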
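The two criteria above can be evaluated directly from the dissimilarity matrix and a candidate configuration. The following NumPy sketch computes $S$ and $K$ exactly as defined; the function names and the matrix-based representation are our own conventions, not from the paper.

```python
import numpy as np

def pairwise_distances(Y):
    """Euclidean distances d_ij between the target-space points Y_i."""
    diff = Y[:, None, :] - Y[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def kruskal_stress(delta, Y):
    """S = sqrt( sum_{i>j} (delta_ij - d_ij)^2 / sum_{i>j} delta_ij^2 )."""
    d = pairwise_distances(Y)
    i, j = np.triu_indices_from(delta, k=1)   # one entry per pair (symmetric)
    num = ((delta[i, j] - d[i, j]) ** 2).sum()
    den = (delta[i, j] ** 2).sum()
    return np.sqrt(num / den)

def lingoes_alienation(delta, Y):
    """K = sqrt(1 - (sum delta_ij d_ij)^2 / (sum delta_ij^2 * sum d_ij^2))."""
    d = pairwise_distances(Y)
    i, j = np.triu_indices_from(delta, k=1)
    num = (delta[i, j] * d[i, j]).sum() ** 2
    den = (delta[i, j] ** 2).sum() * (d[i, j] ** 2).sum()
    return np.sqrt(1.0 - num / den)
```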
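Steps 0-3 of the iterative procedure can be sketched as plain gradient descent on the raw squared error $\sum_{i>j}(\delta_{ij} - d_{ij})^2$. Note that this deliberately replaces Kruskal's linear regression and Guttman's rank-image permutation with a simpler update, and the step size, tolerance, and iteration cap are illustrative values.

```python
import numpy as np

def mds(delta, dim=2, lr=0.05, tol=1e-6, max_iter=1000, seed=0):
    """Toy MDS: place n points in `dim` dimensions so their distances
    approximate the dissimilarity matrix delta."""
    rng = np.random.default_rng(seed)
    n = delta.shape[0]
    Y = rng.standard_normal((n, dim))          # step 0: random start
    prev = np.inf
    for _ in range(max_iter):
        diff = Y[:, None, :] - Y[None, :, :]
        d = np.sqrt((diff ** 2).sum(-1))       # step 1: distances d_ij
        np.fill_diagonal(d, 1.0)               # avoid division by zero
        err = delta - d
        np.fill_diagonal(err, 0.0)
        cost = (np.triu(err, 1) ** 2).sum()    # step 2: criterion (raw squared error)
        if prev - cost < tol:                  # stop when the decrease is small
            break
        prev = cost
        # step 3: move each Y_i against the gradient of the squared error
        grad = -2.0 * ((err / d)[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y
```

Gradient descent on this criterion is sensitive to the starting configuration, which is why step 0 also mentions initialization by a principal components analysis rather than a purely random start.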
