Updating the background image via Equation 2.3 offers a solution to not having an adaptive background. However, a major problem remains: the algorithm still relies heavily on τ, and a poor selection of τ will result in either many false positives or many false negatives. One solution that eliminates the use of the rather arbitrary threshold τ is to model each pixel with a Gaussian distribution instead of merely the mean or running average, as is demonstrated in [2, 3]. Because a pixel is modeled as a Gaussian distribution, foreground detection compares a frame's current pixel value against the variance:

\[
\text{if } |I_t(p) - B(p)| \geq \sigma \cdot k \text{ then } FG \text{ else } BG \tag{2.4}
\]

In Equation 2.4 the threshold becomes τ = σ · k, where k is some constant, typically around 2. Because the threshold is directly related to the variance, the test is able to adapt to its environment.

In [4], one of the most influential papers in background modeling, Stauffer et al. first proposed modeling the background as a combination of multiple Gaussian distributions. This algorithm is referred to as the Mixture of Gaussians algorithm. Each pixel p is modeled with a group of K Gaussian distributions for each of the red and green color components of p (it is assumed that the blue color component is ignored due to its poor reception in human vision), where K is a heuristic value generally set between 3 and 5. The algorithm is first initialized: a series of image frames is used to train each pixel by clustering the pixel's observed training values into K sets using simple K-means clustering [5]. For each set k ∈ K, the mean (µ_k) and variance (σ_k²) are computed to parameterize the corresponding Gaussian distribution. Because there are multiple distributions for a single pixel, each distribution k is initially weighted by the fraction of training samples assigned to its cluster, w(k) = ||k|| / ||K||. The weights are constrained at all times such that

\[
\sum_{k=1}^{K} w(k) = 1,
\]

which holds initially as well.

When a new image is processed for segmentation, a pixel is considered to match a particular distribution if the pixel's value is within 2.5 standard deviations of that distribution's mean, where 2.5 is a heuristic that may change based on a particular domain. So, with K Gaussian distributions and a pixel history for some pixel p = I_t(x, y) at time t being {X_1, ..., X_t}, the probability of observing the pixel X_t is

\[
P(X_t) = \sum_{k=1}^{K} w(k)\,\eta(X_t, \mu_k, \sigma_k^2),
\]

where η denotes the Gaussian probability density function.
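To make Equation 2.4 concrete, the following is a minimal per-pixel sketch in Python/NumPy. It is not taken from the source text: the function names, the exponential-averaging update (standing in for the Equation 2.3 background update), and the learning rate alpha are assumptions made for illustration.

    import numpy as np

    def segment_single_gaussian(frame, mean, variance, k=2.0):
        """Equation 2.4: a pixel is foreground when |I_t(p) - B(p)| >= sigma * k."""
        sigma = np.sqrt(variance)
        return np.abs(frame.astype(np.float64) - mean) >= sigma * k  # True = foreground

    def update_background(frame, mean, variance, alpha=0.05):
        """Assumed running (exponential) update of the per-pixel mean and variance,
        analogous in spirit to the Equation 2.3 background update."""
        frame = frame.astype(np.float64)
        diff = frame - mean
        mean = (1.0 - alpha) * mean + alpha * frame
        variance = (1.0 - alpha) * variance + alpha * diff ** 2
        return mean, variance

Here frame, mean, and variance are float arrays with the same shape as the image, and k = 2 follows the typical value mentioned above; because the threshold scales with each pixel's own standard deviation, noisy pixels automatically receive a looser test than stable ones.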
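The initialization and matching steps of the Mixture of Gaussians description above could look roughly like the following single-pixel sketch. This is a hypothetical illustration, not Stauffer et al.'s implementation: the K-means loop, the variance floor, and all names are assumptions, while the weighting w(k) = ||k|| / ||K|| and the 2.5-standard-deviation match test follow the text.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class PixelMoG:
        means: np.ndarray      # shape (K,)
        variances: np.ndarray  # shape (K,)
        weights: np.ndarray    # shape (K,), sums to 1

    def init_pixel_mog(samples, K=3, iters=20, seed=0):
        """Initialize one pixel's K Gaussians from its observed training values
        using simple 1-D K-means clustering."""
        rng = np.random.default_rng(seed)
        samples = np.asarray(samples, dtype=np.float64)
        centers = rng.choice(samples, size=K, replace=False)
        for _ in range(iters):
            labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
            for k in range(K):
                if np.any(labels == k):
                    centers[k] = samples[labels == k].mean()
        labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        variances = np.array([samples[labels == k].var() if np.any(labels == k) else 1.0
                              for k in range(K)])
        variances = np.maximum(variances, 1e-6)  # floor to avoid a degenerate Gaussian
        # w(k) = ||k|| / ||K||: fraction of training samples assigned to cluster k
        weights = np.array([(labels == k).sum() for k in range(K)], dtype=np.float64)
        weights /= weights.sum()
        return PixelMoG(means=centers, variances=variances, weights=weights)

    def matches(model, value, n_sigma=2.5):
        """Match test from the text: is the new value within n_sigma standard
        deviations of each distribution's mean? Returns a boolean array of length K."""
        return np.abs(value - model.means) <= n_sigma * np.sqrt(model.variances)

In this sketch, init_pixel_mog would be run once per pixel over the training frames, and matches then indicates which of the K distributions, if any, a new observation X_t falls into.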
