Fast-moving objects and low-light conditions result in blurred, low-quality photographs. Post-processing methods can remedy this. Daniel Cunningham describes motion estimation as a method for deblurring images.
The first step is modelling the image-formation process.
A camera captures an image over a finite exposure time: each pixel integrates the incoming light over that interval, so the captured image can be modeled as a transformation of the ideal image.
The idea is to reverse this transformation. Camera shake and motion of the object relative to the detector contribute to it; the lens can also add a Gaussian optical blur if not focused perfectly, and the finite detector resolution means each pixel records only an average of the ideal image over its area.
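To make the forward model concrete, here is a minimal sketch of it in Python. It is not taken from the talk: I assume a simple horizontal uniform-motion point spread function (PSF) and model the blurred image as the ideal image convolved with that PSF, plus optional sensor noise.

```python
import numpy as np
from scipy.signal import fftconvolve

def motion_psf(length=9):
    """PSF for horizontal uniform linear motion: the exposure spreads
    each point evenly over `length` pixels. (Illustrative assumption.)"""
    psf = np.zeros((length, length))
    psf[length // 2, :] = 1.0
    return psf / psf.sum()          # normalize so total brightness is preserved

def blur(ideal, psf, noise_sigma=0.0, rng=None):
    """Forward model: blurred = ideal * psf + noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    blurred = fftconvolve(ideal, psf, mode="same")
    return blurred + rng.normal(0.0, noise_sigma, ideal.shape)

# A toy "ideal" image: a single bright point.
ideal = np.zeros((32, 32))
ideal[16, 16] = 1.0
blurred = blur(ideal, motion_psf(9))
# The point is now smeared into a 9-pixel horizontal streak.
```

Deblurring then amounts to inverting this convolution, which is why knowing (or estimating) the PSF is central to everything that follows.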
Cunningham discussed three techniques for image motion deblurring: motion estimation from a single image, from multiple images, and hybrid imaging.
With a single image, uniform linear or harmonic motion is assumed; an incorrect estimate can introduce new artifacts. This approach is also referred to as blind deconvolution.
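As a rough illustration of this idea (not the specific method from the talk), the sketch below assumes the motion has been estimated as uniform and linear, builds the corresponding PSF, and inverts the blur with a Wiener deconvolution in the frequency domain. If the assumed PSF is wrong, the same code produces ringing artifacts instead of a clean image.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Invert a known blur in the frequency domain.
    `k` regularizes frequencies the PSF nearly destroyed; too small a k
    amplifies noise, and a wrong PSF introduces ringing artifacts."""
    H = np.fft.fft2(psf, s=blurred.shape)          # PSF transfer function
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)  # Wiener filter
    return np.real(np.fft.ifft2(F_hat))

# Demo: blur a point with an assumed 9-pixel horizontal motion PSF,
# then invert that blur.
psf = np.zeros((32, 32))
psf[0, :9] = 1.0 / 9.0                             # uniform linear motion
ideal = np.zeros((32, 32))
ideal[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(ideal) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

Here the PSF is known exactly, so the restoration is nearly perfect; in the blind setting the hard part is estimating that PSF from the blurred image alone.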
With multiple images, the object of interest is tracked across frames and its motion is estimated from the tracked positions.
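One common way to estimate the motion between two frames (a generic technique, not necessarily the one from the talk) is phase correlation: the translation between two images shows up as a sharp peak in the inverse transform of their normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation that maps frame_a onto
    frame_b by locating the peak of the normalized cross-power spectrum."""
    A = np.fft.fft2(frame_a)
    B = np.fft.fft2(frame_b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12                 # keep only the phase information
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map large indices back to negative shifts (FFT wrap-around).
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx

# Demo: shift a random texture by (3, -5) pixels and recover the motion.
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (3, -5), axis=(0, 1))
shift = phase_correlation_shift(frame_a, frame_b)
```

Chaining such per-frame shift estimates over the exposure gives the motion path needed to build the blur kernel.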
Hybrid imaging uses two sensors, one optimized temporally and the other spatially; the point spread function is determined from the second sensor.
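To see how a motion trace turns into a PSF, here is a hedged sketch: assuming a fast secondary sensor has sampled the camera's (y, x) offset several times during the exposure, the time spent at each offset is accumulated into a normalized blur kernel. The trajectory below is made up for illustration.

```python
import numpy as np

def psf_from_trajectory(positions, size=15):
    """Accumulate sampled (y, x) camera offsets during the exposure into a
    point spread function: time spent at each offset becomes PSF weight."""
    psf = np.zeros((size, size))
    center = size // 2
    for y, x in positions:
        psf[center + int(round(y)), center + int(round(x))] += 1.0
    return psf / psf.sum()                 # unit total energy

# Hypothetical trajectory: a slow diagonal drift sampled at 8 instants.
trajectory = [(0.0, 0.0), (0.4, 0.5), (0.9, 1.1), (1.4, 1.6),
              (2.1, 2.0), (2.6, 2.4), (3.0, 3.1), (3.5, 3.6)]
psf = psf_from_trajectory(trajectory)
```

Once this PSF is in hand, the high-resolution image from the other sensor can be deblurred by standard (non-blind) deconvolution.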
For my Applied Physics 186 project, I will attempt deblurring for a moving camera and a stationary subject. I expect this will require the use of multiple images.