Image registration (IR) is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. Registration geometrically aligns two images: the reference image and the sensed image.
The reviewed approaches are classified according to their nature:
1. Area-based
2. Feature-based
The image registration procedure consists of:
- Feature detection
- Feature matching
- Mapping function design
- Image transformation and re-sampling
In this post we will look into the following:
- Various aspects and problems of image registration.
- Both area-based and feature-based approaches to feature selection.
- A review of existing algorithms for feature matching.
- Methods for mapping function design.
IR is widely used in remote sensing, medical imaging, and computer vision. In general, its applications can be divided into four main groups.
Different viewpoints (multi-view analysis):
Images of the same scene are acquired from different viewpoints. The aim is to gain a larger 2D view or a 3D representation of the scanned scene.
Different times (multi-temporal analysis):
Images of the same scene are acquired at different times, often on a regular basis and possibly under different conditions. The aim is to find and evaluate changes in the scene that appeared between the consecutive image acquisitions.
Different sensors (multi-modal analysis):
Images of the same scene are acquired by different sensors. The aim is to integrate the information obtained from the different source streams to gain a more complex and detailed scene representation.
Scene-to-model registration:
Images of a scene and a model of the scene are registered. The model can be a computer representation of the scene (for instance, a map).
The majority of registration methods consist of the following four steps:
- Feature detection
- Feature matching
- Transform model estimation
- Image re-sampling
Feature detection:
Salient and distinctive objects (closed-boundary regions, edges, contours, line intersections, corners, etc.) are manually or, preferably, automatically detected. For further processing, these features can be represented by their point representatives (centers of gravity, line endings, distinctive points), which are called control points (CPs) in the literature.
Feature matching:
In this step, the correspondence between the features detected in the sensed image and those detected in the reference image is established. Various feature descriptors and similarity measures, along with spatial relationships among the features, are used for that purpose.
Transform model estimation:
The type and parameters of the so-called mapping functions, aligning the sensed image with the reference image, are estimated. The parameters of the mapping functions are computed by means of the established feature correspondence.
Image re-sampling and transformation:
The sensed image is transformed by means of the mapping functions. Image values at non-integer coordinates are computed by an appropriate interpolation technique.
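As a minimal sketch of such an interpolation technique, the pure-Python function below (the name `bilinear` is ours, not a standard API) computes a bilinear estimate of the gray value at a non-integer coordinate:

```python
def bilinear(image, y, x):
    """Gray value of `image` (a list of rows) at non-integer coordinates (y, x)."""
    y0, x0 = int(y), int(x)
    # Clamp the neighbor indices at the image border.
    y1 = min(y0 + 1, len(image) - 1)
    x1 = min(x0 + 1, len(image[0]) - 1)
    dy, dx = y - y0, x - x0
    # Interpolate along x on the top and bottom rows, then along y.
    top = image[y0][x0] * (1 - dx) + image[y0][x1] * dx
    bottom = image[y1][x0] * (1 - dx) + image[y1][x1] * dx
    return top * (1 - dy) + bottom * dy

# The point (0.5, 0.5) lies at the center of this 2x2 image,
# so the result is the average of all four pixels.
img = [[0, 10],
       [20, 30]]
```

Higher-order schemes (bicubic, splines, sinc-based kernels) trade more computation for smoother results, but the bilinear case shows the idea.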
The implementation of each registration step has its typical problems.
First, we have to decide what kind of features is appropriate for the given task. The features should be distinctive objects that are frequently spread over the images and easily detectable. The detection should have good localization accuracy and should not be sensitive to the assumed image degradation.
The mapping function should be chosen according to the a priori known information about the acquisition process and the expected image degradations.
Feature detection: Feature-based methods
Region features – closed-boundary regions of appropriate size:
- The regions are often represented by their centers of gravity, which are invariant w.r.t. rotation, scaling, and skewing, and stable under random noise and gray-level variation.
- Region features are detected by means of segmentation methods. The accuracy of the segmentation can significantly influence the resulting registration.
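To make the centroid-as-CP idea concrete, here is a minimal pure-Python sketch (the function name and the binary-mask representation of a segmented region are our own assumptions):

```python
def region_centroid(mask):
    """Center of gravity (row, col) of the foreground pixels of a binary mask."""
    row_sum = col_sum = count = 0
    for r, row in enumerate(mask):
        for c, pixel in enumerate(row):
            if pixel:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        raise ValueError("empty region")
    return row_sum / count, col_sum / count

# A segmented 2x2 region spanning rows 1-2 and cols 1-2;
# its center of gravity is (1.5, 1.5), usable as a control point.
mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
```

Note that rotating or scaling the region moves the centroid with it, which is exactly why it makes a stable point representative.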
Line features (LFs) can be representations of general line segments, object contours, coastal lines, and roads.
Standard edge detection methods, like the Canny detector or a detector based on the Laplacian of Gaussian, are employed for line feature detection.
The point features group consists of methods working with line intersections, road crossings, centroids of water regions, etc., as well as high-variance points, local curvature discontinuities detected using Gabor wavelets, and inflection points of curves. The core algorithms of feature detectors in most cases follow the definition of a 'point' as a line intersection, the centroid of a closed-boundary region, or a local modulus maximum of the wavelet transform.
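A classic detector of such point features is the Harris corner detector. The NumPy sketch below is a simplification of the usual formulation (a 3x3 box average stands in for the customary Gaussian window, and the function name is ours): it builds the local structure tensor from image gradients and scores each pixel with det(M) − k·trace(M)².

```python
import numpy as np

def harris_response(image, k=0.04):
    """Harris corner response: det(M) - k * trace(M)^2 of the structure tensor M."""
    iy, ix = np.gradient(image.astype(float))  # gradients along rows and cols

    def box3(a):  # 3x3 box average standing in for the usual Gaussian window
        p = np.pad(a, 1, mode="edge")
        n, m = a.shape
        return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0

    ixx, iyy, ixy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    return ixx * iyy - ixy ** 2 - k * (ixx + iyy) ** 2

# A single bright quadrant: one corner, two straight edges, flat elsewhere.
img = np.zeros((10, 10))
img[4:, 4:] = 1.0
resp = harris_response(img)
```

The response is positive at the corner, negative along the straight edges, and zero in flat regions, which is what lets a simple threshold plus non-maximum suppression pick out corner CPs.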
Feature-based methods are recommended if the images contain enough distinctive and easily detectable objects.
Feature matching:
The detected features in the reference and sensed images can be matched by means of the image intensity values in their close neighborhood, the feature spatial distribution, or the feature symbolic description. Some methods, while looking for the feature correspondence, simultaneously estimate the parameters of the mapping function, merging the feature matching and transform model estimation registration steps.
Area-based methods:
Sometimes called correlation-like methods or template matching, these methods deal with the images without attempting to detect salient objects. Classical area-based methods like cross-correlation exploit the image intensities directly for matching, without any structural analysis; consequently, they are sensitive to intensity changes introduced, for instance, by noise or varying illumination.
Correlation-like methods:
The classical representative of the area-based methods is normalized cross-correlation (CC) and its modifications.
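The idea can be shown in a few lines of pure Python (the function name and the flattened-window representation are our own simplifications): subtract each window's mean, then normalize by the windows' standard deviations, which makes the score invariant to linear intensity changes.

```python
from math import sqrt

def ncc(window_a, window_b):
    """Normalized cross-correlation of two equally sized windows (flattened lists)."""
    n = len(window_a)
    mean_a = sum(window_a) / n
    mean_b = sum(window_b) / n
    # Covariance of the two windows...
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(window_a, window_b))
    # ...normalized by their standard deviations, giving a score in [-1, 1].
    den = sqrt(sum((a - mean_a) ** 2 for a in window_a)) * \
          sqrt(sum((b - mean_b) ** 2 for b in window_b))
    return num / den

window = [10, 20, 30, 20]
brighter = [v * 2 + 5 for v in window]  # same pattern under a linear intensity change
```

In template matching, this score is computed for every candidate placement of the template over the sensed image, and the placement with the maximum score is taken as the match.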
Fourier methods:
If an acceleration of the computational speed is needed, or if the images were acquired under varying conditions or are corrupted by frequency-dependent noise, then Fourier methods are preferred over the correlation-like methods. They exploit the representation of the images in the frequency domain. The phase correlation method is based on the Fourier shift theorem and was originally proposed for the registration of translated images.
It computes the cross-power spectrum of the sensed and reference images and looks for the location of the peak in its inverse. The method shows strong robustness against correlated and frequency-dependent noise and non-uniform, time-varying illumination disturbances.
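These steps can be sketched with NumPy as follows; this is a minimal illustration (the function name is ours) assuming a purely circular, integer translation between the two images:

```python
import numpy as np

def phase_correlation(reference, sensed):
    """Recover the (dy, dx) circular shift that maps `reference` onto `sensed`."""
    F = np.fft.fft2(reference)
    G = np.fft.fft2(sensed)
    # Normalized cross-power spectrum: keep only the phase difference.
    cross_power = np.conj(F) * G
    cross_power /= np.abs(cross_power) + 1e-12
    # Its inverse transform is (ideally) a delta function at the shift.
    correlation = np.fft.ifft2(cross_power).real
    return np.unravel_index(np.argmax(correlation), correlation.shape)
```

Because only the phase is kept, the peak location is insensitive to frequency-dependent changes in magnitude, which is the source of the robustness noted above.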
Mutual information methods:
The MI methods are the last group of area-based methods to be reviewed. They have appeared recently and represent the leading technique in multimodal registration. Registration of multimodal images is a difficult task, but one that often needs to be solved, especially in medical imaging. MI, originating in information theory, is a measure of the statistical dependency between two data sets, and it is particularly suitable for registration of images from different modalities. MI can be maximized using gradient descent optimization methods.
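To make the measure concrete, the pure-Python sketch below (function name and the paired-gray-value input format are our own assumptions) estimates MI from the joint histogram of two overlapping images:

```python
from collections import Counter
from math import log2

def mutual_information(gray_a, gray_b):
    """MI (in bits) between paired gray values of two overlapping images."""
    n = len(gray_a)
    pa = Counter(gray_a)              # marginal histogram of image A
    pb = Counter(gray_b)              # marginal histogram of image B
    joint = Counter(zip(gray_a, gray_b))  # joint histogram of co-located pixels
    # MI = sum over (a, b) of p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
    return sum((c / n) * log2(c * n / (pa[a] * pb[b]))
               for (a, b), c in joint.items())
```

MI peaks when the gray values are statistically dependent, not when they are equal; that is why it works across modalities where intensities differ but structure agrees.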
Feature-based methods:
We assume that two sets of features in the reference and sensed images, represented by the CPs, have been detected.
Methods using spatial relations:
Methods based primarily on the spatial relations among the features are usually applied if the detected features are ambiguous or if their neighborhoods are locally distorted.
A few more methods: methods using invariant descriptors, relaxation methods, and pyramids and wavelets.
Transform model estimation:
After the feature correspondence has been established, the mapping function is constructed. It should transform the sensed image to overlay it over the reference one. The correspondence of the CPs from the sensed and reference images, together with the fact that the corresponding CP pairs should be as close as possible after the sensed image transformation, are employed in the mapping function design.
- Global mapping models
- Local mapping models
- Elastic registration
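For a global mapping model, an affine transform is a common choice. The sketch below (one possible implementation, using NumPy least squares; the function name is ours) estimates the six affine parameters from the CP pairs by minimizing exactly the CP-to-CP distances mentioned above:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine map taking CPs `src` onto `dst` (both N x 2, N >= 3)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    # Each CP pair contributes two equations, one per output coordinate:
    #   x' = a11*x + a12*y + tx,   y' = a21*x + a22*y + ty
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    params, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
    return params.reshape(2, 3)   # [[a11, a12, tx], [a21, a22, ty]]
```

With more than three CP pairs the system is over-determined, so least squares averages out CP localization errors; local and elastic models instead let the parameters vary across the image.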