58 Sándor Bozóki

where for any $i, j = 1, \ldots, n$,
$$a_{ij} > 0, \qquad a_{ij} = \frac{1}{a_{ji}}.$$
The matrix element $a_{ij}$ expresses the relative importance or preference of the $i$-th object compared to the $j$-th object, as given by the decision maker ($i, j = 1, 2, \ldots, n$). For example, the first object is $a_{12}$ times more important/preferred than the second one.

A pairwise comparison matrix $A = [a_{ij}]_{i,j=1..n}$ is called consistent if it satisfies the following properties for all indices $i, j, k = 1, \ldots, n$:
$$a_{ij} = \frac{1}{a_{ji}}, \qquad a_{ij} a_{jk} = a_{ik}.$$
In practical decision problems, the pairwise comparison matrices given by the decision maker are not consistent. Based on the elements of the matrix, we want to find a weight vector $w = (w_1, w_2, \ldots, w_n)^T \in \mathbb{R}^n_+$ representing the priorities of the objects, where $\mathbb{R}^n_+$ is the positive orthant. The Eigenvector Method [21] and distance-minimizing methods such as the Least Squares Method [7, 18], the Logarithmic Least Squares Method [1, 9–11], the Weighted Least Squares Method [2, 7], the Chi Squares Method [18], the Logarithmic Least Absolute Values Method [8, 17], and the Singular Value Decomposition [15] are among the tools for computing the priorities of the alternatives.

After some comparative analyses [6, 9, 22, 23], Golany and Kress [16] compared most of the scaling methods above by seven criteria and concluded that every method has advantages and weaknesses; none of them dominates the others.

Since the LSM problem has not been fully solved, comparisons to other methods have been restricted to a few specific examples.

The aim of this paper is to present a method for solving LSM for matrices up to size $8 \times 8$, in order to lay the ground for further research comparing it to other methods and examining its real-life application possibilities.

In this paper we study the Least Squares Method (LSM), which is the minimization of the Frobenius norm of $\bigl(A - w \, \frac{1}{w}^T\bigr)$, where $\frac{1}{w}^T$ denotes the row vector $\bigl(\frac{1}{w_1}, \frac{1}{w_2}, \ldots, \frac{1}{w_n}\bigr)$.
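The consistency condition above can be verified directly by checking $a_{ij} a_{jk} = a_{ik}$ over all index triples. The following is a minimal illustrative sketch (the function name `is_consistent` and the tolerance parameter are our own, not from the paper); note that a consistent matrix arises exactly when $a_{ij} = w_i / w_j$ for some positive weight vector $w$:

```python
import numpy as np

def is_consistent(A, tol=1e-9):
    """Check whether a pairwise comparison matrix A is consistent,
    i.e. a_ij * a_jk == a_ik for all i, j, k (up to a tolerance).
    Reciprocity a_ij = 1/a_ji is the special case j = k, a_kk = 1."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if abs(A[i, j] * A[j, k] - A[i, k]) > tol:
                    return False
    return True

# A consistent matrix generated from weights w via a_ij = w_i / w_j.
w = np.array([0.5, 0.3, 0.2])
A = w[:, None] / w[None, :]
print(is_consistent(A))   # True

# Perturbing one comparison (and its reciprocal) breaks consistency.
B = A.copy()
B[0, 1] *= 2.0
B[1, 0] = 1.0 / B[0, 1]
print(is_consistent(B))   # False
```

The triple loop is $O(n^3)$, which is harmless for the small matrices ($n \le 8$) considered in this paper.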
2. Least Squares Method (LSM)

The aim is to solve the following optimization problem for a given matrix $A = [a_{ij}]_{i,j=1..n}$:
$$\min \sum_{i=1}^{n} \sum_{j=1}^{n} \left(a_{ij} - \frac{w_i}{w_j}\right)^2 \tag{1}$$
$$\sum_{i=1}^{n} w_i = 1,$$
$$w_i > 0, \quad i = 1, 2, \ldots, n.$$
LSM is rather difficult to solve because the objective function is nonlinear and usually nonconvex; moreover, no unique solution exists [13, 18, 19], and the solutions are not easily computable. Farkas, Lancaster and Rózsa [12] applied Newton's method of successive approximation; their method requires a good initial point to find the solution.

It is shown in this paper that the LSM minimization problem can be transformed into solving a multivariate polynomial system. For a given $n \times n$ pairwise comparison matrix, the number of equations and variables in the corresponding polynomial system is $n - 1$.
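To make the difficulty of problem (1) concrete, the objective can be attacked by generic constrained local optimization with random restarts. The sketch below uses `scipy.optimize.minimize` (SLSQP) and is purely illustrative: it is not the polynomial-system method developed in this paper, and the function name `lsm_weights` and the restart scheme are our own assumptions. Because the objective is nonconvex, a single local search can land in a poor local minimum, which is exactly why multiple starting points are used:

```python
import numpy as np
from scipy.optimize import minimize

def lsm_weights(A, n_starts=20, seed=0):
    """Approximate an LSM weight vector for a pairwise comparison
    matrix A by minimizing sum_ij (a_ij - w_i/w_j)^2 subject to
    sum_i w_i = 1 and w_i > 0, restarting from random interior
    points of the simplex and keeping the best local minimum."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    rng = np.random.default_rng(seed)

    def objective(w):
        R = w[:, None] / w[None, :]      # matrix of ratios w_i / w_j
        return np.sum((A - R) ** 2)

    cons = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    bounds = [(1e-8, 1.0)] * n           # keep w strictly positive
    best_w, best_val = None, np.inf
    for _ in range(n_starts):
        w0 = rng.dirichlet(np.ones(n))   # random point in the simplex
        res = minimize(objective, w0, method="SLSQP",
                       bounds=bounds, constraints=cons)
        if res.success and res.fun < best_val:
            best_w, best_val = res.x, res.fun
    return best_w, best_val

# Sanity check: for a consistent matrix A with a_ij = w_i / w_j,
# the true weights are a global minimizer with objective value 0.
w_true = np.array([0.5, 0.3, 0.2])
A = w_true[:, None] / w_true[None, :]
w_est, val = lsm_weights(A)
print(np.round(w_est, 3), round(val, 6))
```

Such a multi-start heuristic offers no guarantee of finding the global minimum, nor of enumerating multiple solutions when they exist; the polynomial-system formulation of this paper addresses precisely that gap.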
