Abstract book (pdf) - ICPR 2010
13:30-16:30, Paper ThBCT9.7<br />
Image Retargeting in Compressed Domain<br />
Murthy, O. V. Ramana, Nanyang Tech. Univ.<br />
Muthuswamy, Karthik, Nanyang Tech. Univ.<br />
Rajan, Deepu, Nanyang Tech. Univ.<br />
Chia, Liang-Tien, Nanyang Tech. Univ.<br />
A simple algorithm for image retargeting in the compressed domain is proposed. Most existing retargeting algorithms<br />
work directly in the spatial domain of the raw image. Here, we work on the DCT coefficients of a JPEG-compressed image<br />
to generate a gradient map that serves as an importance map, identifying the parts of the image that must be<br />
retained during retargeting. Each 8x8 block of DCT coefficients is scaled according to its least importance value.<br />
Retargeting can be done in both the horizontal and vertical directions within the same framework. We also illustrate image<br />
enlargement using the same method. Experimental results show that the proposed algorithm produces less distortion in<br />
the retargeted image than several recently reported algorithms.<br />
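The DCT-domain importance map described above can be sketched as follows. This is a minimal illustration that assumes the JPEG blocks are already available as an array of 8x8 DCT coefficients; the function name and the AC-energy measure are illustrative, not the authors' exact gradient map:

```python
import numpy as np

def block_importance(dct_blocks):
    """Importance of each 8x8 JPEG block from its DCT coefficients.

    `dct_blocks` has shape (rows, cols, 8, 8).  The DC term at (0, 0)
    carries only the block mean, so the AC energy alone is used as a
    crude gradient/importance measure: textured blocks score high,
    flat blocks score low and would be shrunk first during retargeting.
    """
    total = np.square(dct_blocks).sum(axis=(2, 3))   # full block energy
    dc = np.square(dct_blocks[:, :, 0, 0])           # DC contribution
    return total - dc                                # AC energy only
```

Blocks with the lowest importance would then receive the strongest scaling when the image is narrowed, widened, or enlarged.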
13:30-16:30, Paper ThBCT9.8<br />
Progressive MAP-Based Deconvolution with Pixel-Dependent Gaussian Prior<br />
Tanaka, Masayuki, Tokyo Inst. of Tech.<br />
Kanda, Takafumi, Tokyo Inst. of Tech.<br />
Okutomi, Masatoshi<br />
Deconvolution is a fundamental technique used in various vision applications, and maximum a posteriori (MAP) estimation<br />
is known as a powerful tool for it. In this paper, we propose a progressive MAP-based deconvolution algorithm with a pixel-dependent<br />
Gaussian image prior. In the proposed algorithm, a mean and a variance for each pixel are estimated adaptively<br />
and then progressively updated. We experimentally show that the proposed algorithm is comparable<br />
to state-of-the-art algorithms when the true point spread function (PSF) is used for the deconvolution,<br />
and that it outperforms them when a non-true PSF is used.<br />
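The progressive update of a pixel-dependent Gaussian prior can be sketched as below. For clarity this sketch uses an identity PSF (pure denoising) rather than the general blur kernel the paper handles, and the window-based prior estimate is an assumption, not the authors' exact scheme:

```python
import numpy as np

def local_stats(img, r=1):
    """Local mean and variance in a (2r+1)x(2r+1) window (edge-padded)."""
    p = np.pad(img, r, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (2 * r + 1, 2 * r + 1))
    return win.mean(axis=(2, 3)), win.var(axis=(2, 3))

def map_step(y, mean, var, noise_var):
    """Per-pixel MAP estimate under x_i ~ N(mean_i, var_i) and
    y = x + n with n ~ N(0, noise_var) (identity PSF for illustration)."""
    v = np.maximum(var, 1e-8)                         # avoid division by zero
    return (y / noise_var + mean / v) / (1.0 / noise_var + 1.0 / v)

def progressive_map(y, noise_var, iters=3):
    """Alternate between re-estimating the pixelwise prior from the
    current estimate and applying the closed-form MAP update."""
    x = y.copy()
    for _ in range(iters):
        mean, var = local_stats(x)   # progressively updated prior
        x = map_step(y, mean, var, noise_var)
    return x
```

The closed-form per-pixel update is simply the precision-weighted average of the observation and the prior mean, which is what makes the progressive scheme cheap.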
13:30-16:30, Paper ThBCT9.9<br />
A Fast Image Inpainting Method based on Hybrid Similarity-Distance<br />
Liu, Jie, Chinese Acad. of Sciences<br />
Zhang, Shuwu, Chinese Acad. of Sciences<br />
Yang, Wuyi, Chinese Acad. of Sciences<br />
Li, Heping, Chinese Acad. of Sciences<br />
A fast image inpainting method based on a hybrid similarity-distance is proposed in this paper. In Criminisi et al.&rsquo;s work<br />
[1], the similarity distance is not reliable enough in many cases and the algorithm performs inefficiently. To solve these problems,<br />
we propose a new searching strategy to accelerate the algorithm. In addition, we modify the confidence-updating<br />
rule to make the distribution of confidences in the source region more reasonable. Furthermore, taking into account the stationarity<br />
of texture and the reliability of the source regions, we present a hybrid similarity-distance, which combines the<br />
distance in color space with the spatial distance through weight coefficients related to the confidence value. A more<br />
suitable patch is found by this hybrid similarity-distance. The experiments verify that the proposed method<br />
yields qualitative improvements over Criminisi et al.&rsquo;s work [1].<br />
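A hybrid similarity-distance of the kind described above can be sketched as follows. The blending rule, the parameter `lam`, and all names are illustrative assumptions, not the paper's exact formula; the idea shown is only that a confidence-related weight couples the colour distance with the spatial distance:

```python
import numpy as np

def hybrid_distance(src_patch, tgt_patch, src_pos, tgt_pos, src_conf, lam=0.1):
    """Blend colour SSD with spatial distance, weighted by confidence.

    A high-confidence source patch is trusted even when it lies far
    from the target, so its spatial penalty is scaled down; a
    low-confidence patch must be both similar and nearby to win.
    """
    color_d = np.mean((src_patch - tgt_patch) ** 2)          # colour term
    spatial_d = np.hypot(src_pos[0] - tgt_pos[0],
                         src_pos[1] - tgt_pos[1])            # spatial term
    return color_d + lam * (1.0 - src_conf) * spatial_d
```

The best-matching patch is then the candidate minimising this distance over the searched source region.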
13:30-16:30, Paper ThBCT9.10<br />
Reversible Integer 2-D Discrete Fourier Transform by Control Bits<br />
Dursun, Serkan, Univ. of Texas at San Antonio<br />
Grigoryan, Artyom M., Univ. of Texas at San Antonio<br />
This paper describes the 2-D reversible integer discrete Fourier transform (RiDFT), which is based on the concept of the<br />
paired representation of the 2-D image, referred to as the unique 2-D frequency and 1-D time representation. The<br />
2-D DFT of the image is split into a minimum set of short transforms, and the image is represented as a set of 1-D signals.<br />
The paired 2-D DFT involves a few multiplications that can be approximated by integer transforms, such as<br />
one-point transforms with one control bit. 24 control bits are required to perform the 8x8-point RiDFT, and 264 control<br />
bits for the 16x16-point 2-D RiDFT of real inputs. The fast paired method of calculating the 1-D DFT is used. The computational<br />
complexity of the proposed 2-D RiDFTs is comparable to that of the fast 2-D DFT.<br />
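The control-bit idea can be illustrated with the simplest possible one-point integer transform: halving an integer discards its parity, and storing that parity as a single control bit makes the step exactly invertible. This toy example shows only the reversibility mechanism, not the RiDFT's actual butterflies:

```python
def halve_forward(x):
    """One-point integer transform y = floor(x / 2).

    The discarded low bit is returned as a control bit so that the
    lossy integer step becomes exactly reversible.
    """
    return x >> 1, x & 1      # quotient, parity control bit

def halve_inverse(y, bit):
    """Exact inverse: restore the integer from quotient and control bit."""
    return (y << 1) | bit
```

In the same spirit, the RiDFT stores one control bit per approximated multiplication (24 bits for the 8x8 transform, 264 for the 16x16), so that the integer transform can be inverted losslessly.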