PhD Thesis Defense
Title: A Robust Patch-based Synthesis Framework for
Combining Inconsistent Images
By: Mr. Aliakbar Darabi
Advisor: Dr. Pradeep Sen
Date: Aug 9th 2012, 3:00 PM
Location: ECE, Room 118
Current methods for combining different images produce visible artifacts when the sources have very different textures and structures, come from widely separated viewpoints, or depict dynamic scenes with motion. In this thesis, we propose a patch-based synthesis algorithm to plausibly combine images that have color, texture, structural, and geometric inconsistencies. For applications such as cloning and stitching, where a gradual blend is required, we present a new method for synthesizing a transition region between two source images such that inconsistent properties all change gradually from one source to the other. We call this process Image Melding. For gradual blending, we generalize the patch-based optimization framework in three key ways: first, we enrich the patch search space with additional geometric and photometric transformations; second, we integrate image gradients into the patch representation and replace the usual color averaging with a screened Poisson equation solver; and third, we propose a new energy based on mixed L_2/L_0 norms for colors and gradients that produces a gradual transition between sources without sacrificing texture sharpness. Together, these three generalizations enable patch-based solutions to a broad class of image melding problems involving inconsistent sources: object cloning, stitching challenging panoramas, hole filling from multiple photos, and image harmonization.
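To illustrate the screened Poisson step mentioned above, here is a minimal 1-D sketch (my own illustration, not the thesis code): given target colors c and target gradients g, it solves min_u ||u - c||^2 + lam * ||Du - g||^2, whose normal equations are (I + lam * D^T D) u = c + lam * D^T g. The function name, the dense solve, and the 1-D setting are simplifying assumptions for clarity.

```python
import numpy as np

def screened_poisson_1d(c, g, lam):
    """Blend target colors c (length n) with target gradients g (length n-1).

    Solves the normal equations of the screened Poisson energy
    ||u - c||^2 + lam * ||D u - g||^2 with a dense linear solve.
    """
    n = len(c)
    # Forward-difference operator D: (n-1) x n
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i] = -1.0
        D[i, i + 1] = 1.0
    A = np.eye(n) + lam * D.T @ D
    b = np.asarray(c, float) + lam * D.T @ np.asarray(g, float)
    return np.linalg.solve(A, b)
```

When g is exactly the gradient of c, the solver returns c unchanged; when g comes from a different source image, the result trades off fidelity to the colors of one source against the gradients of the other, which is the role this step plays in blending.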
We also demonstrate another application that requires us to address inconsistencies across images: High Dynamic Range (HDR) reconstruction from sequential exposures. In this application, if the inconsistencies caused by significant scene motion are not handled properly when blending the input images together, the results for dynamic scenes suffer from objectionable artifacts. In this thesis, we propose a new approach to HDR reconstruction that uses information from all exposures while being more robust to motion than previous techniques. Our algorithm is based on a novel patch-based energy-minimization formulation that integrates alignment and reconstruction in a joint optimization through an equation we call the HDR image synthesis equation. This allows us to produce an HDR result that is aligned to one of the exposures yet contains information from all of them.
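For context, the conventional merge that such methods build on can be sketched as follows. This is an illustrative baseline assuming perfectly aligned, linearized exposures, not the thesis algorithm (which additionally solves for alignment through patch-based optimization); the function name and triangle weighting are my own choices for the sketch.

```python
import numpy as np

def merge_hdr(images, times):
    """Weighted merge of aligned exposures into a radiance estimate.

    Each pixel's radiance is a weighted average of (pixel value / exposure
    time), with a triangle weight that trusts well-exposed, mid-range
    pixels most and down-weights pixels near 0 or 1.
    """
    images = [np.asarray(im, dtype=float) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * im - 1.0)  # triangle weight on [0, 1] pixels
        num += w * im / t
        den += w
    return num / np.maximum(den, 1e-8)
```

For a static scene with linear, unsaturated pixels this merge recovers the true radiance exactly; the artifacts the thesis targets arise precisely when the per-exposure pixels disagree because the scene moved between shots.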