Mathematical Methods for Physicists: A concise introduction

LEAST-SQUARES FIT

is quite involved and because of limited space we shall not cover it here, but it is discussed in any standard textbook on numerical analysis.

Least-squares fit

We now look at the problem of fitting experimental data. In some experimental situations there may be an underlying theory that suggests the kind of function to be used in fitting the data. Often, however, there is no theory on which to rely in selecting a function to represent the data. In such circumstances a polynomial is often used.

We saw earlier that the m + 1 coefficients in the polynomial

$$ y = a_0 + a_1 x + \cdots + a_m x^m $$

can always be determined so that a given set of m + 1 points (x_i, y_i), where the x_i may be unequal, lies on the curve described by the polynomial. However, when the number of points is large, the degree m of the polynomial is high, and an attempt to fit the data by using a polynomial is very laborious. Furthermore, the experimental data may contain experimental errors, and so it may be more sensible to represent the data approximately by some function y = f(x) that contains a few unknown parameters. These parameters can then be determined so that the curve y = f(x) fits the data. How do we determine these unknown parameters?
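The exact-fit case mentioned above can be illustrated numerically. The following is a minimal sketch (the data points are assumed for illustration, not taken from the text): the m + 1 coefficients are found by solving the Vandermonde linear system, so the degree-m polynomial passes exactly through m + 1 points.

```python
import numpy as np

def interpolating_poly(xs, ys):
    """Return coefficients a_0, ..., a_m of the degree-m polynomial
    passing exactly through the m + 1 points (xs[i], ys[i])."""
    # Vandermonde matrix with columns 1, x, x^2, ...
    V = np.vander(xs, increasing=True)
    return np.linalg.solve(V, ys)

# Three assumed points lying on y = 1 + x^2, so a parabola fits exactly.
coeffs = interpolating_poly(np.array([0.0, 1.0, 2.0]),
                            np.array([1.0, 2.0, 5.0]))
# coeffs recovers (a_0, a_1, a_2) = (1, 0, 1)
```

For a large number of points this system becomes high-degree and ill-conditioned, which is precisely the difficulty the text raises.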

Let us represent a set of experimental data (x_i, y_i), where i = 1, 2, ..., n, by some function y = f(x) that contains r parameters a_1, a_2, ..., a_r. We then take the deviations (or residuals)

$$ d_i = f(x_i) - y_i \tag{13.32} $$

and form the weighted sum of squares of the deviations

$$ S = \sum_{i=1}^{n} w_i (d_i)^2 = \sum_{i=1}^{n} w_i \left[\, f(x_i) - y_i \right]^2, \tag{13.33} $$

where the weights w_i express our confidence in the accuracy of the experimental data. If the points are equally weighted, the w_i can all be set to 1.
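As a small sketch of Eqs. (13.32) and (13.33), the snippet below evaluates the deviations and the weighted sum of squares for an assumed trial function f(x) = a_1 x and assumed data (neither comes from the text):

```python
import numpy as np

# Assumed experimental data (x_i, y_i) and equal weights w_i = 1.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.1, 3.9, 6.2])
w = np.array([1.0, 1.0, 1.0])

def S(a1):
    """Weighted sum of squares for the trial function f(x) = a1 * x."""
    d = a1 * x - y               # deviations d_i = f(x_i) - y_i, Eq. (13.32)
    return np.sum(w * d**2)      # Eq. (13.33)

s_val = S(2.0)
# d = [-0.1, 0.1, -0.2], so S = 0.01 + 0.01 + 0.04 = 0.06
```

Minimizing S over a_1 (the next step in the text) would pick out the best value of the parameter for this data.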

It is clear that the quantity S is a function of the a's: S = S(a_1, a_2, ..., a_r). We can now determine these parameters so that S is a minimum:

$$ \frac{\partial S}{\partial a_1} = 0, \quad \frac{\partial S}{\partial a_2} = 0, \quad \ldots, \quad \frac{\partial S}{\partial a_r} = 0. \tag{13.34} $$

The set of r equations (13.34) is called the normal equations and serves to determine the r unknown a's in y = f(x). This particular method of determining the unknown a's is known as the method of least squares.
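For a function f that is linear in its parameters, the normal equations (13.34) form a small linear system. The sketch below (data values and function choice assumed, not from the text) carries this out for a straight line f(x) = a_1 + a_2 x with weights w_i, where setting the two partial derivatives of S to zero gives the matrix system (AᵀWA)a = AᵀWy:

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Minimize S = sum_i w_i [f(x_i) - y_i]^2 for f(x) = a1 + a2*x
    by solving the 2x2 normal equations of Eq. (13.34)."""
    A = np.vstack([np.ones_like(x), x]).T   # design matrix, columns [1, x]
    W = np.diag(w)
    # Normal equations: (A^T W A) a = A^T W y
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

# Assumed data scattered about y = 1 + 2x, with equal weights.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.1, 6.9])
w = np.ones_like(x)
a1, a2 = weighted_line_fit(x, y, w)
# Solving 4*a1 + 6*a2 = 16 and 6*a1 + 14*a2 = 33.8 gives a1 = 1.06, a2 = 1.96
```

When f is nonlinear in the parameters, the equations (13.34) are nonlinear as well and must generally be solved iteratively.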

