The pulse coupled neural network (PCNN) is an artificial neural network inspired by the visual cortex of the cat. Its pulse modulation and coupled linking mechanisms allow it to extract useful information from images, and PCNN-based fusion of thermal infrared and visible images performs well. In the PCNN model, the number of neurons equals the number of pixels in the input image. Each source image is fed into a PCNN, whose stimulated neurons emit pulses; the total number of firing times is accumulated for each pixel, the counts of the two source images are compared at the same position, and the pixel whose neuron fired more often is selected as the fused pixel value, yielding the fused image. PCNN can also be combined with multi-scale analysis, sparse representation, fuzzy theory, and other tools to build distinctive fusion algorithms. For example, when combined with multi-scale analysis, PCNN serves as the fusion rule for the low-frequency and high-frequency subband coefficients, and the inverse multi-scale transform then produces the fused image.
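The firing-count fusion rule described above can be sketched as follows. This is a minimal illustration using a simplified PCNN: the parameter values, the 3x3 linking neighborhood, and the helper names `pcnn_fire_counts` and `fuse` are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of the 8 neighbors of each pixel (3x3 linking field, zero-padded)."""
    p = np.pad(Y, 1)
    return (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:]
            + p[1:-1, :-2] + p[1:-1, 2:]
            + p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:])

def pcnn_fire_counts(img, n_iter=30, beta=0.2,
                     a_f=0.1, a_l=1.0, a_t=0.2,
                     v_f=0.5, v_l=0.2, v_t=20.0):
    """Run a simplified PCNN on one image; return per-pixel firing counts.

    All decay constants (a_*) and amplitudes (v_*) are assumed values
    chosen only so the toy model fires within a few dozen iterations.
    """
    s = img.astype(float)                  # external stimulus: pixel intensity
    F = np.zeros_like(s)                   # feeding input
    L = np.zeros_like(s)                   # linking input
    Y = np.zeros_like(s)                   # pulse (firing) map
    T = np.ones_like(s)                    # dynamic threshold
    counts = np.zeros_like(s)
    for _ in range(n_iter):
        link = neighbor_sum(Y)             # coupling from last firing map
        F = np.exp(-a_f) * F + v_f * link + s
        L = np.exp(-a_l) * L + v_l * link
        U = F * (1.0 + beta * L)           # internal activity
        Y = (U > T).astype(float)          # neuron fires when U exceeds T
        T = np.exp(-a_t) * T + v_t * Y     # firing raises the threshold
        counts += Y
    return counts

def fuse(img_a, img_b, n_iter=30):
    """Pick, per pixel, the source whose neuron fired more often."""
    ca = pcnn_fire_counts(img_a, n_iter)
    cb = pcnn_fire_counts(img_b, n_iter)
    return np.where(ca >= cb, img_a, img_b)
```

Brighter (higher-stimulus) pixels cross the dynamic threshold more often, so they accumulate more firings and are preferred by the comparison rule.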
Deep neural networks (DNNs) have developed rapidly in computer vision and image processing in recent years. These networks can model complex relationships in data and automatically extract features from it. DNN models such as the convolutional neural network (CNN), the generative adversarial network (GAN), and ResNet have been applied successfully to image fusion. Taking GAN as an example of the implementation principle: the infrared image and the visible image are fed into the generator to obtain a fused image, and the fused image and the visible image are then fed into the discriminator together. When the discriminator can no longer distinguish them, the generator's output is taken as the final fused image.
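The adversarial setup just described can be sketched schematically. The sketch below uses deliberately tiny stand-in models (a per-pixel mixing "generator" and a mean-intensity logistic "discriminator") to show where the two loss terms come from; real GAN-based fusion networks use convolutional generators and discriminators, and the class and function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ToyGenerator:
    """Stand-in generator: per-pixel mixing of the IR and visible inputs."""
    def __init__(self):
        self.w = rng.normal(scale=0.1, size=2)   # weights for [ir, vis]
        self.b = 0.0
    def __call__(self, ir, vis):
        return sigmoid(self.w[0] * ir + self.w[1] * vis + self.b)

class ToyDiscriminator:
    """Stand-in discriminator: logistic score on mean intensity."""
    def __init__(self):
        self.w = rng.normal(scale=0.1)
        self.b = 0.0
    def __call__(self, img):
        return sigmoid(self.w * img.mean() + self.b)

def gan_losses(G, D, ir, vis):
    """Compute the two adversarial objectives for one pair of source images."""
    fused = G(ir, vis)
    d_real = D(vis)       # discriminator score on a real visible image
    d_fake = D(fused)     # discriminator score on the generated fusion
    # Discriminator: label visible images as real (1), fused images as fake (0).
    loss_d = -np.log(d_real + 1e-8) - np.log(1.0 - d_fake + 1e-8)
    # Generator: fool the discriminator, plus a content term that keeps
    # the thermal intensity of the IR image (a common choice; illustrative).
    loss_g = -np.log(d_fake + 1e-8) + np.mean((fused - ir) ** 2)
    return fused, loss_d, loss_g
```

In training, the two losses are minimized alternately; when the discriminator's scores for real and fused images converge (it can no longer tell them apart), the generator's output serves as the final fused image.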
Fusion algorithms based on neural networks generally perform well. Their main drawback is that training requires large data sets, demands substantial hardware, and takes a long time, so it is worth optimizing the network structure to reduce the computational overhead. In addition, since the source images contain considerable redundant information and the fused image carries the feature information of the sources, one can try adjusting the network parameters and optimizing the loss function to remove this redundancy and improve the efficiency of the algorithm.