Comparison of the Theoretical and Statistical Effects of the PCA and CNN Image Fusion Approaches

Ashi Agarwal, Binay Kumar Pandey, Poonam Devi, Sunil Kumar, Mukundan Appadurai Paramashivan, Ritesh Agarwal, Pankaj Dadheech
DOI: 10.4018/978-1-6684-8618-4.ch012

Abstract

An image plays a vital role in today's environment. An image is a visual representation of anything that can be used in the future for recollecting or memorizing a scene. This visual representation is created by recording the scene through an optical device such as a camera or mobile phone. The image fusion process integrates the relevant data of several images into a single image. Image fusion applications are wide in range, and so are the fusion techniques, which are generally characterised as pixel-, feature-, or decision-level. This study's main thrust is the application and comparison of two approaches to image fusion: PCA (principal component analysis) and CNN (convolutional neural network). The study implements both approaches practically in MATLAB. The result of the study is that CNN is much more favorable in terms of image quality and clarity but less favorable in terms of time and cost.

1. Introduction

Images are created when something, such as a person, thing, or place, is portrayed visually. Depending on the frames taken and projections maintained, images may be 2D or 3D. A 3D image is a compilation of several 2D images at various projection levels and angles; 2D images are still pictures. In general, the term “fusion” refers to a method of extracting data from multiple sources. Image fusion (IF) aims to merge complementary multisensor, multitemporal, or multiview data to create a new image whose information quality cannot be achieved from any single source.

Figure 1. Image fusion process

The definition of quality, how it is measured, and how it is used vary depending on the application. The goal of image fusion is to gather all the important information from various images and combine it into a small number of images, usually just one. Compared with any single-source image, the fused image is more accurate and informative and contains all the necessary information. The aim is to create images that are better suited for human and machine perception, not merely to reduce the number of records. In essence, two or more pictures of a single scene are combined into one photo that carries the best data characteristics of all the input images. An important step, and a prerequisite for image fusion, is geometric alignment and feature matching of the input images. The growing availability of space-based sensors in remote sensing applications has inspired a variety of image fusion algorithms.
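The pixel-level weighting idea behind the chapter's first approach can be sketched as follows. This is a minimal illustration in Python with NumPy (the chapter's own experiments use MATLAB); the function name `pca_fuse` and the assumption of two equally sized grayscale images are ours, not the chapter's:

```python
import numpy as np

def pca_fuse(img_a, img_b):
    """Fuse two equally sized grayscale images with PCA-derived weights.

    The two images are flattened into row vectors; the eigenvector of
    their 2x2 covariance matrix with the largest eigenvalue gives the
    relative contribution of each source to the fused image.
    """
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
    cov = np.cov(data)                      # 2x2 covariance of the two sources
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    pc = eigvecs[:, -1]                     # principal component (largest eigenvalue)
    w = pc / pc.sum()                       # normalise weights to sum to 1
    return w[0] * img_a + w[1] * img_b      # weighted pixel-level combination
```

The source image that explains more of the joint variance receives the larger weight, which is why PCA fusion is cheap but can under-weight a source whose detail is locally important yet globally low-variance.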

Depending on the particular purpose, many types of images can be fused. Some common types that are frequently fused are listed below:

  • a) Hyperspectral or Multispectral Images: These images record data from multiple bands of the electromagnetic spectrum, including the visible, infrared, and ultraviolet. Fusion techniques can combine the spectral data from these images, enriching the overall information or improving classification accuracy.

  • b) Thermal Infrared and Visible-Light Images: Combining thermal infrared images with visible-light images can give a more thorough understanding of a scene in applications such as surveillance or search and rescue. Image fusion techniques can overlay the thermal data on the visible image, improving object recognition and detection.

  • c) High-Dynamic-Range (HDR) Images: These images capture a wide range of luminosity, from bright highlights to deep shadows. Image fusion makes it possible to build an HDR image with improved detail and a wider dynamic range by combining multiple photographs taken at different exposure settings.

  • d) Medical Images: In medical imaging, modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) capture different features of the same patient. Image fusion algorithms can combine data from multiple modalities to give doctors a more thorough and precise basis for diagnosis.

  • e) Panoramic Images: Panoramic images are created by stitching together several photographs taken from different angles. Image fusion algorithms can blend the overlapping regions between photographs seamlessly, producing a high-resolution panorama with a consistent visual style.

  • f) Satellite or Surveillance Images: In applications such as surveillance or satellite photography, image fusion can merge images captured by different sensors or at different times. This fusion can enhance object detection, tracking, and change detection.

  • g) Multiresolution Images: Images created or processed at different scales or resolutions are referred to as multiresolution, or multi-scale, images. The original image is typically decomposed into several versions, each carrying a different level of detail.
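The multiresolution decomposition described in (g) can be sketched with a simple block-averaging pyramid. This Python/NumPy fragment is illustrative only; the function name `build_pyramid` and the assumption that the image dimensions are divisible by the downsampling factor are ours, and practical fusion schemes typically use Gaussian or Laplacian pyramids or wavelet transforms instead:

```python
import numpy as np

def build_pyramid(img, levels):
    """Decompose a grayscale image into a multiresolution pyramid.

    Each level halves the resolution by 2x2 block averaging, so level 0
    is the original image and higher levels hold progressively coarser
    versions. Assumes dimensions divisible by 2**(levels - 1).
    """
    pyramid = [np.asarray(img, dtype=float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = prev.shape
        # Average each non-overlapping 2x2 block into one coarse pixel.
        coarse = prev.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid
```

A multiresolution fusion scheme would build such a pyramid for each source image, combine the levels with some selection or weighting rule, and reconstruct the fused result from the combined pyramid.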
