Eduardo Fernández Canga
PROJECT REPORT for the DEGREE of MEng. in ELECTRICAL &
With the recent rapid developments in sensing technologies, multisensor systems have become a reality in a growing number of fields, such as remote sensing, medical imaging, machine vision and the military applications for which they were first developed. The use of these techniques has greatly increased the amount of data available. Image fusion provides an effective way of reducing this growing volume of information while at the same time extracting all the useful information from the source images.
Multi-sensor data often presents complementary information about the region surveyed, so image fusion provides an effective method to enable comparison and analysis of such data. The aim of image fusion, apart from reducing the amount of data, is to create new images that are more suitable for the purposes of human/machine perception, and for further image-processing tasks such as segmentation, object detection or target recognition in applications such as remote sensing and medical imaging. For example, visible-band and infrared images may be fused to aid pilots landing aircraft in poor visibility.
Multi-sensor images often have different geometric representations, which have to be transformed to a common representation for fusion. This representation should retain the best resolution of either sensor. A prerequisite for successful image fusion is the alignment of the multi-sensor images; multi-sensor registration is also affected by the differences between the sensor images.
However, image fusion does not necessarily imply multi-sensor sources; there are interesting applications for both single-sensor and multi-sensor image fusion, as will be shown in this report.
MULTI-RESOLUTION IMAGE REPRESENTATION
In recent years, multi-resolution transformations have been recognized as a very useful approach to analysing the information content of images for the purpose of image fusion. The notion of multiresolution analysis was initiated by Burt and Adelson, who introduced a multiresolution image representation called the Gauss-Laplacian pyramid. Their underlying idea is to decompose an image into a set of band-pass filtered component images, each of which represents a different band of spatial frequency. This idea was further elaborated by other researchers, such as Mallat and Meyer, to establish a multiresolution analysis for continuous functions in connection with the wavelet transformation.
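The decomposition described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the report's implementation: it uses 2x2 block averaging and nearest-neighbour expansion as stand-ins for the Gaussian filtering and interpolation a real pyramid would use, and all function names are the author's own.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (stand-in for Gaussian filter + decimate)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape` (stand-in for interpolation)."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Decompose `img` into band-pass levels plus a low-pass residual."""
    pyramid = []
    current = img.astype(float)
    for _ in range(levels):
        low = downsample(current)
        # Each level is the detail lost by going to the coarser scale,
        # i.e. one band of spatial frequency.
        pyramid.append(current - upsample(low, current.shape))
        current = low
    pyramid.append(current)  # coarsest (low-pass) level
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition: the pyramid is an exact representation."""
    current = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        current = band + upsample(current, band.shape)
    return current
```

Because each level stores exactly the detail removed at that scale, summing the expanded levels reconstructs the original image exactly; fusion methods exploit this by combining pyramids from several source images before reconstructing.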
This section gives a brief introduction to these two multiresolution decomposition approaches.
One of the issues a fusion system has to deal with is the registration of the source images. In most cases, images of the same scene are acquired from different sensors, or from the same sensor at different times. These images may have relative translation, rotation, scale, and other geometric transformations between them. The goal of image registration is to establish the correspondence between two images and determine the geometric transformation that aligns one image with the other.
Although all the source images used in this research were already registered, it was found convenient to develop an image registration technique. The first stage was to use a Block Matching Algorithm (BMA) to find local correspondences between the two images, giving rise to a motion vector field. Horizontal and vertical shift and rotation parameters were then obtained with a Global Motion Estimation (GME) based on these vectors.
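The first stage of that pipeline can be sketched as an exhaustive-search block matcher. This is a generic SAD-based BMA under assumed parameters (block size, search radius), not the report's actual code; the GME stage that follows it is not shown here.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive-search BMA: for each `block` x `block` patch of `ref`, find the
    displacement within +/- `search` pixels that minimises the sum of absolute
    differences (SAD) against `cur`. Returns a motion vector field of (dy, dx)."""
    h, w = ref.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = ref[y:y + block, x:x + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    # Skip candidate windows that fall outside the image.
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    sad = np.abs(patch - cur[yy:yy + block, xx:xx + block]).sum()
                    if sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by, bx] = best_v
    return vectors
```

A global shift estimate can then be obtained, for example, by taking a robust average of the vector field; a GME fitting a rotation as well would solve a small least-squares problem over the same vectors.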
In this research, four different fusion approaches have been developed. These four approaches are the Laplacian Pyramid, the Wavelet Transform, a Computationally Efficient Pixel-level Image Fusion (CEMIF) method and a Multifocus Technique based on Spatial Frequency. The first two methods were selected for being the most representative approaches, especially those based on the Wavelet Transform, which has become the most relevant fusion domain in recent years. The last two were selected to show two of the many alternative approaches found in the literature.
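To illustrate the last of these, a spatial-frequency multifocus fusion can be sketched as follows. Spatial frequency is the root-mean-square of the row and column pixel differences, and the fusion rule keeps, block by block, whichever source is sharper. This is a minimal sketch of the general technique under an assumed block size, not the specific multifocus method developed in the report.

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2), where RF and CF are the RMS horizontal and
    vertical first differences; higher SF indicates a better-focused region."""
    rf = np.mean(np.diff(block, axis=1) ** 2)  # row (horizontal) activity
    cf = np.mean(np.diff(block, axis=0) ** 2)  # column (vertical) activity
    return np.sqrt(rf + cf)

def fuse_multifocus(a, b, block=8):
    """Block-wise fusion of two differently focused images of the same scene:
    each output block is copied from the source with the higher SF."""
    fused = np.empty(a.shape, dtype=float)
    h, w = a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            pa = a[y:y + block, x:x + block]
            pb = b[y:y + block, x:x + block]
            fused[y:y + block, x:x + block] = pa if spatial_frequency(pa) >= spatial_frequency(pb) else pb
    return fused
```

Practical implementations typically add a consistency check so isolated blocks are not chosen against their neighbours, which reduces blocking artefacts at focus boundaries.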
This section describes each technique's specification and implementation, and presents experimental results and conclusions for each of them. It is important to note that all the source images used in this research were already correctly aligned on a pixel-by-pixel basis, a prerequisite for successful image fusion. The fusion techniques have been tested with seven sets of images, which represent different possible applications where fusion can be performed. The first and second sets of images, Figure 4-1 and Figure 4-2, called ‘clock’ and ‘pepsi’, represent the situation where, due to the limited depth-of-focus of optical lenses in some cameras, it is not possible to obtain an image that is in focus everywhere.
The third set of images, Figure 4-3, corresponds to a concealed weapon detection example. In this case, a millimetre wave sensor is used in combination with a visual image.
An example of fusion applied to medicine is shown in the fourth set of images, Figure 4-4. One of the images was captured using magnetic resonance (MR) imaging and the other using computed tomography (CT). Remote sensing is another typical application for image fusion: the fifth set of images, Figure 4-5, shows captures from two bands of a multispectral scanner. Finally, the last two sets of images, Figure 4-6 and Figure 4-7, each combine a visible camera image with an infrared camera image; the sixth set targets navigation applications and the seventh surveillance applications.