Underwater Image Reconstruction Using Image Fusion Technique

1. Relevance:
                       Reconstructing an underwater object from a sequence of images distorted by moving water waves is a challenging task. Underwater image processing is necessary because of the poor quality of the captured images, which suffer from degradation, distortion, and blurring. When a picture of an underwater object is taken from outside the water, waves on the surface cause distortion, and if a series of pictures is taken at different times, each picture shows a different distortion. The image seen by the camera is distorted by refraction as a function of both the angle of the water-surface normal at the point of refraction and the amplitude of the water waves. If the water is perfectly flat (i.e., there are no waves), there is no distortion due to refraction; however, if the surface of the water is disturbed by waves, the nature of the image distortion becomes considerably more complex.
                      Recovering the object from such distorted images is a challenging task. Several papers describe how to identify objects in distorted images.
                        Efros et al. [2] (2004) suggested a method for recovering images of underwater objects distorted by surface waves; it provides improved results over computing a simple temporal average. Arturo Donate and Eraldo Ribeiro [3] proposed a method in which they performed a temporal comparison of sub-regions to estimate the center of the Gaussian-like distribution of an ensemble of local sub-regions, and they introduced a new treatment of the "leakage" problem.
                       Some works focused on reconstructing the surface of the water, some applied statistical theory to recover the target, some studied light refraction, and some applied image processing techniques. One simple method is the average-based method [1]; another is to locate the minimally distorted regions and form the final image from these regions. Both methods work well in many situations, but they fail when the scene contains a great deal of detail. A further technique is the bispectrum method, which estimates the Fourier transform of the image ensemble; its limitations are that it requires large computer memory, has high computational complexity, and reduces the resolution of the final output image. To reduce the computational complexity, Li Chen [8] suggested lowering the cost of cumulant estimation. Moreover, it was observed that some sub-pixels of the images are independent of the frequency parameter, so the entire 4-D bispectrum did not have to be computed; instead, only low-frequency slices were chosen. The second problem is the low resolution of the output image. Image fusion is a powerful technique for increasing image resolution, and in recent years the wavelet transform has become a useful tool for multi-resolution image fusion. Existing wavelet-based image fusion algorithms use the Mallat algorithm in combination with certain selection rules to perform image decomposition and image reconstruction.
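The average-based method [1] mentioned above can be sketched in a few lines: given a stack of frames of the same submerged scene taken at different times, the per-pixel temporal mean tends to cancel out wave-induced refraction. This is a minimal NumPy sketch of that baseline, not the method of any one cited paper.

```python
import numpy as np

def temporal_average(frames):
    """Baseline recovery: per-pixel mean over a stack of frames.

    frames: sequence of HxW (or HxWxC) images of the same scene taken
    at different times. Refraction distortion tends to average out when
    the surface waves are roughly symmetric about the flat surface.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)
```

As the text notes, this baseline blurs fine detail when the scene carries a lot of information, which motivates the region-selection and fusion methods discussed next.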

2. Present Theories and Practices:
                       Arturo Donate and Eraldo Ribeiro [3] (Computer Vision and Bio-Inspired Computing Laboratory) presented a new method for removing geometric distortion in images of submerged objects observed from outside shallow water. They focused on the problem of analyzing video sequences when the water surface is disturbed by waves. The waves affect the appearance of the individual video frames such that no single frame is completely free of geometric distortion. They used a multistage clustering algorithm combined with frequency-domain measurements to select the best set of undistorted sub-regions from each frame in the video sequence. They also introduced a new approach to reducing the "leakage" problem, using frequency-domain analysis to quantify the level of distortion caused by motion blur in the local sub-regions.
                         Arturo Donate, Gary Dahme, and Eraldo Ribeiro [4] addressed the problem of classifying images of underwater textures as observed from outside the water. They combined a geometric distortion removal algorithm with a texture classification method to classify images of submerged textures when the water is disturbed by waves.
                         Z. Wang and A. Bovik [5] proposed a new universal objective image quality index. Instead of using traditional error-summation methods, their index models any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the index is mathematically defined and no human visual system model is explicitly employed, experiments on various image distortion types indicate that it performs significantly better than the widely used mean squared error metric.
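The Wang-Bovik index Q combines the three factors above into a single formula, Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y)) * (mean(x)^2+mean(y)^2)), which equals 1 only when the two images are identical. A direct NumPy sketch of the global (single-window) form:

```python
import numpy as np

def universal_quality_index(x, y):
    """Wang-Bovik universal image quality index, Q in [-1, 1].

    Combines loss of correlation, luminance distortion, and contrast
    distortion into one value; Q = 1 only when x and y are identical.
    This is the global form; the original paper applies it over a
    sliding window and averages the local values.
    """
    x = np.asarray(x, dtype=np.float64).ravel()
    y = np.asarray(y, dtype=np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```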
                         H. Murase [6] observed that the appearance of a pattern behind a transparent, moving object is distorted by refraction at the moving object's surface. He presented an algorithm for reconstructing the surface shape of a non-rigid transparent object, such as water, from the apparent motion of the observed pattern. The algorithm is based on optical and statistical analysis of the distortion.
                         Dr. G. Padmavathi, Dr. P. Subashini, Mr. M. Muthu Kumar, and Suresh Kumar Thakur [7] compared some well-known edge-preserving filtering techniques used for underwater pre-processing: homomorphic filtering, anisotropic diffusion, and wavelet denoising with an average filter. The performance of the filters was compared and analyzed using PSNR and MSE values for underwater images. Speckle reduction by the anisotropic filter improved image quality, suppressed noise, preserved edges, and enhanced and smoothed the image.
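The MSE and PSNR values used to compare the filters above are standard quantities: MSE is the mean squared pixel difference, and PSNR = 10*log10(peak^2 / MSE) in decibels, with higher PSNR indicating a result closer to the reference. A small NumPy sketch:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference image and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(ref, test)
    if m == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / m)
```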
                        Li Chen [8] presented an effective technique to address sub-pixel image registration. Its main features include reliable image registration in low-SNR environments and under cross-correlated channel noise, because the method uses the higher-order spectra of the observed images to suppress Gaussian noise. He also presented a very efficient idea for reducing the computational complexity of the bispectrum method.
                         Naveen Kumar and Maninder Kaur [9] presented a multi-sensor image fusion technique for the reconstruction of images; a wavelet-based image fusion technique was used to improve the resolution of the images.


3. Proposed Work:
In this dissertation work it is proposed to reconstruct an underwater image. The purpose of this work is to improve the resolution of the underwater image using an image fusion technique. Image fusion combines the clear parts of the input images, producing a single image from a set of inputs. The fused image should contain more complete information, making it more useful for human or machine perception. There are four levels of image fusion: signal level, pixel level, feature level, and decision level. Here, pixel-level image fusion will be used.
The objective of an image fusion scheme is to extract all the useful information from the source images.
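Pixel-level fusion, the level chosen here, operates directly on pixel values of the inputs. As a toy illustration of "combining the clear parts", the sketch below keeps, at each pixel, the value from whichever input has the higher local contrast; the 3x3 Laplacian sharpness rule is a hypothetical stand-in for illustration only, since the proposed method applies its rules in the wavelet domain.

```python
import numpy as np

def pixel_level_fuse(img_a, img_b):
    """Toy pixel-level fusion: keep each pixel from the locally 'clearer'
    input, judged by the absolute 3x3 Laplacian response (an illustrative
    sharpness proxy, not the proposed wavelet-domain rule)."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)

    def laplacian_energy(im):
        # 4-neighbour Laplacian with edge padding, as a local contrast map
        p = np.pad(im, 1, mode='edge')
        lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
               - 4.0 * p[1:-1, 1:-1])
        return np.abs(lap)

    return np.where(laplacian_energy(a) >= laplacian_energy(b), a, b)
```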
The block diagram of the wavelet-based image fusion technique is shown in Figure 1.

Figure 1: Block diagram of the wavelet-based pixel-level image fusion technique
The basic steps of image fusion are as follows:
1)      Two input images will be taken with a digital camera.
2)      Pre-processing of the images will be carried out. This involves operations such as contrast stretching, brightness adjustment, and deblurring to enhance image quality so that it looks better and is useful for further operations. A literature survey will be carried out and an appropriate pre-processing method will be chosen.
3)      Decompose the images using wavelet analysis at different levels. The purpose of decomposition is to obtain high- and low-resolution frequency bands. One level of decomposition produces four frequency bands: low-low (LL), low-high (LH), high-low (HL), and high-high (HH). The next level of decomposition is applied to the LL band of the current stage, forming a recursive decomposition procedure. Thus, N-level decomposition finally yields 3N+1 different frequency bands: 3N high-frequency bands and a single LL band. The frequency bands at higher decomposition levels have smaller size.
4)      Fusion will be carried out based on a set of fusion rules, which are used for the selection of wavelet coefficients. These include:
        a)      selection of the low-frequency sub-band coefficients;
        b)      reconstruction of the high-frequency sub-band coefficients in the horizontal and vertical orientations;
        c)      selection of the diagonal high-frequency sub-band coefficients.
5)      The fused wavelet coefficient map will be constructed according to step 4.
6)      The inverse discrete wavelet transform will be applied to the fused coefficient map, which reconstructs the fused image.
7)      Evaluate the image fusion effect using the similitude measure (SM), which shows the correlation between the fused image and the ideal image. In its normalized-correlation form,

        SM = Σ(i,j) F(i,j)·R(i,j) / sqrt( Σ(i,j) F(i,j)² · Σ(i,j) R(i,j)² )

where F(i,j) and R(i,j) are the fused and ideal images. The closer the value of SM is to 1, the better the fusion effect.
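Steps 3 to 7 above can be sketched end to end in NumPy. The sketch uses a one-level Haar transform as a stand-in for the general wavelet decomposition, averaging of the LL bands and a maximum-absolute-value choice for the detail bands as illustrative stand-ins for the fusion rules of step 4, and the normalized-correlation form of the similitude measure; these specific rule choices are assumptions for illustration, not the final dissertation design.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar decomposition of an even-sized image
    into LL, LH, HL, HH bands (step 3, with N = 1)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row high-pass
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2: reconstruct the image from its bands (step 6)."""
    H, W = LL.shape
    lo = np.zeros((2 * H, W)); hi = np.zeros((2 * H, W))
    lo[0::2] = (LL + LH) / np.sqrt(2); lo[1::2] = (LL - LH) / np.sqrt(2)
    hi[0::2] = (HL + HH) / np.sqrt(2); hi[1::2] = (HL - HH) / np.sqrt(2)
    x = np.zeros((2 * H, 2 * W))
    x[:, 0::2] = (lo + hi) / np.sqrt(2)
    x[:, 1::2] = (lo - hi) / np.sqrt(2)
    return x

def fuse_wavelet(img_a, img_b):
    """Steps 4-6: fuse the coefficient maps of two images, then invert."""
    LLa, LHa, HLa, HHa = haar_dwt2(np.asarray(img_a, dtype=np.float64))
    LLb, LHb, HLb, HHb = haar_dwt2(np.asarray(img_b, dtype=np.float64))
    LL = (LLa + LLb) / 2.0                         # low-frequency: average
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)  # max-abs rule
    return haar_idwt2(LL, pick(LHa, LHb), pick(HLa, HLb), pick(HHa, HHb))

def similitude(F, R):
    """Step 7: normalized-correlation similitude measure; 1 means F == R
    up to scale."""
    F = np.asarray(F, dtype=np.float64).ravel()
    R = np.asarray(R, dtype=np.float64).ravel()
    return float((F @ R) / np.sqrt((F @ F) * (R @ R)))
```

Fusing an image with itself reconstructs it exactly (the Haar transform is perfectly invertible for even-sized images), which is a useful sanity check on the pipeline before running it on distorted underwater frames.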


Dissertation schedule:

Phase 1 (10 Sept - 31 Oct):
1)      Literature survey.
2)      Image acquisition and pre-processing.
Gap of 15 days after Phase 1 for pending work and review of work done.

Phase 2 (16 Nov - 31 Dec):
1)      Image decomposition using the wavelet transform.
2)      Fusion of the decomposed frames to obtain the wavelet pyramid.
Gap of 15 days after Phase 2 for pending work and review of work done.

Phase 3 (16 Jan - 31 March):
1)      Reconstruct the image using the inverse wavelet transform.
2)      Evaluate the image fusion effect.


          


