
Fig. 2: The histogram of the minimum intensity of each pixel's three color channels of haze videos (Left), low lighting videos (Middle), and high dynamic range videos (Right).

TABLE I: Results of chi-square tests

  Data of chi-square test                               Degrees of freedom   Chi-square value
  Haze videos and inverted low lighting videos          7                    13.21
  Haze videos and inverted high dynamic range videos    7                    11.53

TABLE II: Parameter settings for the haze detection algorithm.

  Color attribute   Value range   Threshold range
  S                 0 ∼ 255       0 ∼ 130
  V                 0 ∼ 255       90 ∼ 240

A. Automatic Impairment Source Detection

The function of the automatic impairment source detection module is to classify the input video into one of three categories: normal video, which does not need to be processed; low lighting video (which also includes high dynamic range video), for which a pixel-wise inversion is performed first; or hazy video (which also includes video captured in rainy and snowy weather), which is processed by the core enhancement module directly.

A flow diagram of this automatic detection system is shown in Fig. 3. Our detection algorithm is based on the technique introduced by R. Lim et al. [11]. To reduce complexity, we perform the detection only on the first frame of each Group of Pictures (GOP), coupled with scene change detection. The corresponding algorithm parameters are given in Table II, and the same threshold test is applied to every pixel in the frame. If the percentage of hazy pixels in a picture is higher than 60%, we designate the picture as a hazy picture. Similarly, if an image is determined to be a hazy picture after inversion, it is a low lighting image.
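A minimal sketch of this detection step is given below, assuming 8-bit input frames and an OpenCV BGR-to-HSV conversion; the function names, the way the 60% ratio and the inversion check are coded, and the per-GOP calling convention are our own illustrative choices rather than the authors' implementation.

import cv2
import numpy as np

# Threshold ranges taken from Table II; the 60% hazy-pixel ratio is from the text.
S_RANGE = (0, 130)
V_RANGE = (90, 240)
HAZY_RATIO = 0.60

def is_hazy_frame(frame_bgr):
    # A pixel passes the test when both its S and V values fall inside the ranges.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    s, v = hsv[:, :, 1], hsv[:, :, 2]
    passed = (s >= S_RANGE[0]) & (s <= S_RANGE[1]) & \
             (v >= V_RANGE[0]) & (v <= V_RANGE[1])
    return passed.mean() >= HAZY_RATIO

def classify_gop_first_frame(frame_bgr):
    # Hazy frames go directly to the core enhancement module.
    if is_hazy_frame(frame_bgr):
        return "hazy"
    # A frame that looks hazy after pixel-wise inversion is treated as low lighting
    # (including high dynamic range) and is inverted before core enhancement.
    if is_hazy_frame(255 - frame_bgr):
        return "low_lighting"
    return "normal"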

B. Video De-Hazing Based Core Enhancement

Similar to [8], in our experiments we used a system in which the core enhancement algorithm is an improved video de-hazing algorithm based on the image de-hazing algorithm of [14].

Fig. 3: Flow diagram of the impairment source detection module.

As is the case in many other advanced haze-removal algorithms, such as [14], [15], [16], and [17], our algorithm is also based on the aforementioned Koschmieder model in (2). The critical part of all image de-hazing algorithms based on the Koschmieder model is to estimate A and t(x) from the recorded image intensity I(x) so as to recover J(x) from I(x).
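For completeness, once A and t(x) have been estimated, inverting the Koschmieder model gives the recovery step; the lower bound t_0 on the transmission is not stated in this section and is only a common safeguard in dark-channel based methods against division by near-zero transmission:

\[
J(x) = \frac{I(x) - A}{\max\big(t(x),\, t_0\big)} + A .
\]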

Following [14], we estimate the medium transmission and airlight using the Dark Channel method:

\[
t(x) = 1 - \omega \min_{c \in \{r,g,b\}} \min_{y \in \Omega(x)} \frac{R^c(y)}{A^c}, \tag{3}
\]

where ω = 0.8 and Ω(x) is a local 3 × 3 block centered at x.
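A compact sketch of (3) is shown below, assuming the per-channel airlight A^c has already been estimated and that the input is an 8-bit frame; the use of SciPy's minimum filter for the 3 × 3 block and the function name are our own choices, not the paper's implementation.

import numpy as np
from scipy.ndimage import minimum_filter

OMEGA = 0.8   # omega in (3)
BLOCK = 3     # side length of the local block Omega(x)

def estimate_transmission(frame_rgb, airlight):
    # Normalize each color channel by its airlight component A^c.
    normalized = frame_rgb.astype(np.float64) / np.asarray(airlight, dtype=np.float64)
    # Minimum over the color channels, then over the 3 x 3 neighborhood Omega(x).
    dark = normalized.min(axis=2)
    dark = minimum_filter(dark, size=BLOCK)
    return 1.0 - OMEGA * dark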

In our experiments, the CPU- and memory-costly soft matting method proposed in [14] was not implemented in the baseline system, but it could be used as a post-processing step, e.g., if the output from the baseline system is subsequently uploaded to a high-power server in the cloud.

To estimate the airlight, we first note that the schemes used in existing image haze removal algorithms are usually not very robust: even very small changes in the airlight value can lead to very large changes in the recovered images or video frames. As a result, calculating the airlight frame-by-frame
