Evaluating supervised learning approaches for spatial-domain multi-focus image fusion

Abstract
Image fusion is the generation of an image f that combines the most relevant information from a set of images of the same scene, acquired with different cameras or camera settings. Multi-Focus Image Fusion (MFIF) aims to generate an image f_e with extended depth of field from a set of images taken at different focal distances or focal planes, offering a solution to the limited depth of field typical of optical system configurations. A broad variety of works in the literature address this problem; the primary approaches are domain transformations and block-of-pixels analysis. In this work, we evaluate different supervised machine learning methods applied to MFIF, including k-nearest neighbors, linear discriminant analysis, neural networks, and support vector machines. We start from two images taken at different focal distances and divide them into rectangular regions. The main objective of the machine-learning-based classification system is to choose the parts of both images that must appear in the fused image in order to obtain a completely focused result. For focus quantification, we use the most popular metrics proposed in the literature, such as Laplacian energy, sum-modified Laplacian, and gradient energy, among others. The evaluation of the proposed method considers classifier testing and fusion quality metrics commonly used in research, such as visual information fidelity and feature mutual information. Our results strongly suggest that the automatic classification concept satisfactorily addresses the MFIF problem.
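The abstract describes a block-wise pipeline: compute focus metrics for each rectangular region of the two source images, let a trained classifier decide which source block is better focused, and copy that block into the fused image. The sketch below illustrates this idea in Python; the 32-pixel block size, the particular focus features (energy of Laplacian and gradient energy), the label convention, and the k-nearest-neighbors model are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions noted above): block-wise multi-focus fusion
# driven by a supervised classifier.
import numpy as np
from scipy.ndimage import laplace
from sklearn.neighbors import KNeighborsClassifier

def focus_features(block):
    """Per-block focus descriptors: energy of Laplacian and gradient energy."""
    block = block.astype(float)
    lap = laplace(block)
    gy, gx = np.gradient(block)
    return np.array([np.sum(lap ** 2), np.sum(gx ** 2 + gy ** 2)])

def fuse(img_a, img_b, clf, block=32):
    """Copy into the output whichever source block the classifier marks as focused."""
    fused = np.empty_like(img_a)
    h, w = img_a.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            # Feature vector: focus descriptors of both candidate blocks.
            feats = np.concatenate([focus_features(a), focus_features(b)])[None, :]
            pick_a = clf.predict(feats)[0] == 0  # label 0 -> take the block from image A
            fused[y:y + block, x:x + block] = a if pick_a else b
    return fused

# Training pairs feature vectors with ground-truth labels indicating which
# source image is in focus for each block, e.g.:
#   clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
#   result = fuse(image_near_focus, image_far_focus, clf)
```

The same skeleton accommodates the other classifiers evaluated in the paper (linear discriminant analysis, neural networks, support vector machines) by swapping the estimator passed to fuse.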

Bibliographic Details
Main Authors: Atencio-Ortiz, Pedro; Sanchez-Torres, German; Branch-Bedoya, John William
Format: Digital journal
Language: English
Published: Universidad Nacional de Colombia 2017
Online Access:http://www.scielo.org.co/scielo.php?script=sci_arttext&pid=S0012-73532017000300137