The Light Field Camera: Extended Depth of Field, Aliasing, and ...

980 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 34, NO. 5, MAY 2012

Fig. 10. Antialiasing filtering, increasing from (a) to (d). Top row: Detail of subimages. Middle: Corresponding filtered full view. Bottom: Magnified detail of the view.

step. Since each subimage is a windowed projection of r onto the sensor (ignoring blur for now), we may equivalently project the filters in the same way. This is approximate at subimage boundaries, where we must use filters with support limited to the subimage domain. Hence, we upper-bound the filter size using a Lanczos-windowed version of the ideal sinc kernel. The antialiasing filter h_{f_0}, defined in r, is projected onto the sensor via the conjugate image at z_0, i.e., scaling by |λ|, as in (16). Hence, the scaled filter has physical cutoff frequency f_0|λ|. We propose an iterative method, beginning with a strong antialiasing filter and refining the estimate based upon the current depth map. Too much filtering might remove detail needed for valid matches, while too little may leave aliasing behind (see Fig. 10). We summarize the algorithm as follows:

1. Initialize all filters with cutoff f_0|λ|_max, i.e., assuming the depth which yields the most aliasing in the working volume.
2. Estimate the disparity map s(c_k) (see Section 6.5).
3. Rearrange the views as subimages Ŝ_k(q).
4. For each k, filter Ŝ_k(q) by h_{f_0|λ|}, using λ = d s(c_k).
5. Repeat from step 2 until the disparity map update is negligible.

6.4 Microlens Blur

With finite microlens apertures, each pixel integrates over a larger area and aliasing is reduced due to the additional blur (see Fig. 8).
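The five-step loop above can be sketched as follows. This is an illustrative outline, not the authors' implementation: the function names, the clamped digital cutoff range, and the `disp_to_mag` callback mapping disparity to the magnification |λ| are assumptions, and the depth estimator of Section 6.5 is abstracted as a user-supplied `estimate_disparity`.

```python
import numpy as np

def lanczos_sinc_kernel(cutoff, a=3):
    """1D Lanczos-windowed sinc low-pass kernel.

    `cutoff` is the digital cutoff frequency in cycles/sample
    (0 < cutoff <= 0.5); `a` is the Lanczos window order.
    """
    half = int(np.ceil(a / (2.0 * cutoff)))        # support: a lobes of the sinc
    n = np.arange(-half, half + 1)
    x = 2.0 * cutoff * n                           # sinc argument
    window = np.where(np.abs(x) < a, np.sinc(x / a), 0.0)
    kernel = 2.0 * cutoff * np.sinc(x) * window
    return kernel / kernel.sum()                   # normalize to unit DC gain

def antialias_refine(subimages, estimate_disparity, f0, mag_max,
                     disp_to_mag, n_iter=5):
    """Steps 1-5: start from the strongest filter, then refine per subimage."""
    mags = np.full(len(subimages), mag_max)        # step 1: worst-case |lambda|
    for _ in range(n_iter):
        filtered = [np.convolve(s,
                                lanczos_sinc_kernel(np.clip(f0 * m, 0.05, 0.5)),
                                mode='same')       # step 4: filter by h_{f0|lambda|}
                    for s, m in zip(subimages, mags)]
        disparity = estimate_disparity(filtered)   # step 2 (user-supplied matcher)
        new_mags = np.abs(disp_to_mag(disparity))  # |lambda| from current depth
        if np.max(np.abs(new_mags - mags)) < 1e-3:
            break                                  # step 5: update negligible
        mags = new_mags
    return filtered, disparity
```

The kernel is clamped to the valid digital band so that, if the current depth estimate implies very mild aliasing, the filter degenerates gracefully rather than growing without bound.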
By taking this into account we can use milder antialiasing. As the antialiasing filter for an array of pinhole lenses is a sinc filter, we define the antialiasing kernel size as this filter's first zero crossing, i.e., 1/(2 f_0|λ|). The correct amount of antialiasing is readily obtained by comparing this size with the blur radius b. Then, the final antialiasing filter has a radius approximated as |1/(2 f_0|λ|) - b|, clipped from below at 0 and from above by d/2. Fig. 11 shows the resulting filter sizes for the settings used in Section 8.1.2.

Fig. 11. Microlens blur and antialiasing filter sizes versus depth. (a) Overlap of filter kernel size and microlens blur radius for different disparity (depth) values. (b) Resulting antialiasing kernel size for different depth values.

6.5 Regularized Depth Estimation

We now have all the necessary ingredients to work on the energy introduced in (3). The depth map s is discretized at the c_k as a vector s = {s(u)}_{u ∈ {c_k, ∀k}}. Due to the ill-posedness of the problem, we introduce regularization, favoring piecewise-constant solutions by using the total variation term ||∇s(u)||_1, where ∇ is the 2D gradient with respect to u. Hence, we wish to solve

\tilde{s} = \arg\min_{s} E_{\mathrm{data}}(s) + \alpha \, \|\nabla s(u)\|_1,   (21)

where α > 0 determines the trade-off between regularization and data fidelity (in our experiments we chose α = 10^{-3}). We minimize this energy using an iterative solution. By noticing that E_data can be written as a sum of terms each depending on a single entry of s at once, we find an initialization s_0 by performing a fast brute-force search in E_data for each c_k independently.
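Because E_data decouples across the entries of s, the brute-force initialization just described amounts to an independent one-dimensional argmin per pixel. The sketch below illustrates this under stated assumptions: `data_cost` is a hypothetical stand-in for the per-pixel E_data terms, and the quadratic toy cost at the end exists only to exercise the search.

```python
import numpy as np

def brute_force_init(data_cost, hypotheses):
    """Initialize the disparity map by exhaustive per-pixel search.

    data_cost(s) -> array of per-pixel E_data terms for a scalar disparity
    hypothesis s. Since E_data is a sum of per-entry terms, each pixel can
    take its own argmin over the hypothesis set independently.
    """
    costs = np.stack([data_cost(s) for s in hypotheses])  # shape (S, npix)
    best = np.argmin(costs, axis=0)                       # per-pixel argmin index
    return np.asarray(hypotheses)[best]                   # initialization s_0

# Toy per-pixel cost whose minimum sits at a known ground truth:
truth = np.array([0.1, 0.3, 0.2])
s0 = brute_force_init(lambda s: (s - truth) ** 2,
                      np.linspace(0.0, 0.5, 6))
```

A coarse hypothesis grid keeps this search cheap; the subsequent Taylor-based refinement then recovers sub-grid accuracy.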
Then, we approximate E_data via a second-order Taylor expansion, i.e.,

E_{\mathrm{data}}(s_{t+1}) \simeq E_{\mathrm{data}}(s_t) + \nabla E_{\mathrm{data}}(s_t)\,(s_{t+1} - s_t) + \tfrac{1}{2}\,(s_{t+1} - s_t)^T H_{E_{\mathrm{data}}}(s_t)\,(s_{t+1} - s_t),   (22)

where ∇E_data and H_{E_data} are the gradient and Hessian of E_data, and subscripts t and t+1 denote the iteration number. To ensure our local approximation is convex, we take the absolute value (componentwise) of H_{E_data}(s_t). In the case of the term ||∇s(u)||_1, we use a first-order Taylor expansion of its gradient. Computing the Euler-Lagrange equations of the approximate energy E with respect to s_{t+1}, this linearization results in

\nabla E_{\mathrm{data}}(s_t) + |H_{E_{\mathrm{data}}}(s_t)|\,(s_{t+1} - s_t) - \alpha\,\nabla \cdot \frac{\nabla (s_{t+1} - s_t)}{|\nabla s_t|} = 0,   (23)

which is a linear system in the unknown s_{t+1} and can be efficiently solved using conjugate gradients (CG).

7 LIGHT FIELD SUPERRESOLUTION

So far we have devised an algorithm to reduce aliasing in the views and to estimate the depth map. We now define a computational PSF model and formulate the MAP problem presented in Section 3.

7.1 Light Field Camera Point Spread Function

7.1.1 PSF Definition

Combining the analysis from Sections 4 and 5, we can determine the system PSF of the plenoptic camera, h_s^{LI}, which is unique for each point in 3D space and is a combination of main-lens and microlens-array blurs. We define this PSF such that the intensity at a pixel i caused by a unit-radiance point at u with a disparity s(u) is
