Multi-focus Image Fusion with Cooperative Image Multiscale Decomposition
Authors:
Tan Y.; Yang B.
Author affiliation:
College of Electric Engineering, University of South China, Hengyang 421001, China
Conference:
4th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2021
Conference dates:
29 October 2021 through 1 November 2021
Keywords:
Image enhancement;Image fusion;Lenses;Optical data processing;Cooperative image multiscale decomposition;Depth-of-focus;Focus region detection;Fusion strategies;Guided filters;Multi-scale Decomposition;Multifocus image fusion;Mutually-guided filter;Region detection;Two sources;Iterative methods
Abstract:
Multi-focus image fusion plays an important role in image processing because it overcomes the depth-of-focus limitation of optical lens imaging by fusing a series of partially focused images of the same scene. Improvements to fusion methods typically target the image decomposition scheme and the fusion strategy. However, most decompositions are performed on each image separately, which fails to account for the multi-image nature of fusion tasks and does not simultaneously explore the consistent and inconsistent features of the two source images. This paper proposes a new cooperative image multiscale decomposition (CIMD) based on the mutually guided filter (MGF). With CIMD, the two source multi-focus images are decomposed simultaneously into base and detail layers through cooperative, iterative application of the MGF. A saliency detection based on a mean-guided combination filter guides the fusion of the detail layers, and a spatial-frequency-based strategy fuses the luminance and contour features of the base layers. Experiments were carried out on 28 pairs of publicly available multi-focus images, and the results were compared with 7 state-of-the-art multi-focus image fusion methods. Experimental results show that the proposed method achieves better visual quality and objective assessment scores. © 2021, Springer Nature Switzerland AG.
Language:
English
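The cooperative base/detail idea described in the abstract above can be sketched with a plain box-based guided filter, each image smoothed with the other as guide. This is a minimal illustrative reading, not the paper's implementation: the filter radius, epsilon, and iteration count are assumptions.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via integral images."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Classic guided filter: smooth `src` while following edges of `guide`."""
    mg, ms = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - mg * ms
    var = box_mean(guide * guide, r) - mg * mg
    a = cov / (var + eps)
    b = ms - a * mg
    return box_mean(a, r) * guide + box_mean(b, r)

def cooperative_decompose(img_a, img_b, iters=3):
    """Mutually guided smoothing: each base layer is refined with the other
    image as guide; the detail layer is the residual image - base."""
    base_a, base_b = img_a, img_b
    for _ in range(iters):
        base_a, base_b = guided_filter(base_b, base_a), guided_filter(base_a, base_b)
    return base_a, img_a - base_a, base_b, img_b - base_b
```

By construction each source image is exactly the sum of its base and detail layers, so any fusion of the layers loses no information from the decomposition step itself.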
Superpixel-based Structural Similarity Metric for Image Fusion Quality Evaluation
Authors:
Eryan Wang; Bin Yang; Lihui Pang
Journal:
Sensing and Imaging, 2021, 22(1):1-25 ISSN:1557-2064
Corresponding author:
Yang, Bin (yangbin01420@163.com)
Author affiliation:
[Eryan Wang; Bin Yang; Lihui Pang] College of Electric Engineering, University of South China, Hengyang 421001, China
Corresponding affiliation:
[Bin Yang] College of Electric Engineering, University of South China, Hengyang, China
Keywords:
Image Fusion Quality Assessment;Image Feature;Human Visual System;Superpixel Segmentation;Structural Similarity Metric;Adaptive Superpixel
Abstract:
Image fusion refers to integrating multiple images of the same scene into a single high-quality fused image. Universal quality evaluation of fused images is one of the pressing problems in the field of image fusion. Typically, local features extracted from rectangular blocks of the fused images are used for objective evaluation. However, the fixed shape of an image block suits neither the natural attributes of an image nor the perceptual characteristics of the human visual system. To address this problem, a superpixel-based structural similarity metric for image fusion quality evaluation is proposed in this paper. Image features extracted from adaptive superpixels are used to calculate the structural similarity between corresponding superpixels. All local structural similarity indicators are then weighted and averaged according to their significance to obtain the final evaluation score. Several classical image fusion quality evaluation metrics are used for comparative experimental analysis. A series of experiments shows that the stability of the proposed quality evaluation index is on the order of 10^-6, and its accuracy and performance are more advantageous than the latest evaluation indices. Meanwhile, the evaluation results obtained by the proposed metric are closer to human visual evaluation results. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
Language:
English
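The region-wise structural similarity described above can be sketched as follows. This is a hedged stand-in: regular grid blocks replace the paper's adaptive superpixels, and the variance-based significance weight is an illustrative choice, not the paper's weighting.

```python
import numpy as np

def segment_ssim_score(img1, img2, labels, C1=1e-4, C2=9e-4):
    """SSIM-style similarity computed per labelled region, then averaged
    with a variance-based significance weight."""
    scores, weights = [], []
    for lab in np.unique(labels):
        x, y = img1[labels == lab], img2[labels == lab]
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cxy = ((x - mx) * (y - my)).mean()
        s = ((2 * mx * my + C1) * (2 * cxy + C2)) / \
            ((mx * mx + my * my + C1) * (vx + vy + C2))
        scores.append(s)
        weights.append(vx + vy + 1e-12)  # busier regions count more
    return float(np.average(scores, weights=weights))

def grid_labels(shape, block=8):
    """Stand-in segmentation: regular blocks instead of adaptive superpixels."""
    rows = np.arange(shape[0])[:, None] // block
    cols = np.arange(shape[1])[None, :] // block
    return rows * ((shape[1] + block - 1) // block) + cols
```

With a real superpixel segmentation (e.g. SLIC labels) substituted for `grid_labels`, the same scoring loop applies unchanged.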
Image fusion with structural saliency measure and content adaptive consistency verification
Authors:
Yang, Bin*; Sun, Yuhan; Li, Yuehua
Journal:
Journal of Electronic Imaging, 2020, 29(1):013014 ISSN:1017-9909
Corresponding author:
Yang, Bin
Author affiliations:
[Li, Yuehua; Sun, Yuhan; Yang, Bin] Univ South China, Coll Elect Engn, Hengyang, Peoples R China; [Yang, Bin] Univ South China, Hunan Prov Key Lab Ultrafast Micro Nano Technol &, Hengyang, Peoples R China
Corresponding affiliation:
[Yang, Bin] Univ South China, Coll Elect Engn, Hengyang, Peoples R China; Univ South China, Hunan Prov Key Lab Ultrafast Micro Nano Technol &, Hengyang, Peoples R China
Keywords:
Medical imaging;Consistency verifications;Decomposition coefficient;Guided filters;Multimodal medical images;Non subsampled contourlet transform (NSCT);Quantitative comparison;State-of-the-art algorithms;structural saliency;Image fusion
Abstract:
Image fusion obtains a desired image by integrating the useful information of multiple input images. Most traditional fusion strategies are guided by local contrast or variance, which cannot adequately represent the visually discernible features of the source images. Moreover, the undesirable seam effects or artifacts produced by inconsistency between the fusion weight map and the image content may severely degrade the visual quality of the fused images. An efficient image fusion method with a structural saliency measure and content-adaptive consistency verification is proposed. The fusion is implemented within the nonsubsampled contourlet transform (NSCT)-based image fusion framework. The low-frequency NSCT decomposition coefficients are fused with a weight map that considers both structural saliency and visual uniqueness features and is refined for spatial consistency with a guided filter. The high-frequency NSCT decomposition coefficients are fused according to structural saliency. The performance of the proposed method has been verified on several pairs of multifocus images, infrared-visible images, and multimodal medical images. Experimental results clearly demonstrate the superiority of the proposed algorithm over several existing state-of-the-art algorithms in both visual and quantitative comparisons. © 2020 SPIE and IS&T.
Language:
English
Multi-Focus Image Fusion With Point Detection Filter and Superpixel-Based Consistency Verification
Authors:
Chen, Qiang; Yang, Bin*; Li, Yuehua; Pang, Lihui
Journal:
IEEE Access, 2020, 8:99956-99973 ISSN:2169-3536
Corresponding author:
Yang, Bin
Author affiliation:
[Chen, Qiang; Li, Yuehua; Yang, Bin; Pang, Lihui] Univ South China, Coll Elect Engn, Hengyang 421001, Peoples R China
Corresponding affiliation:
[Yang, Bin] Univ South China, Coll Elect Engn, Hengyang 421001, Peoples R China
Keywords:
Clarity measure;Consistency verification;Image superpixel;Multi-focus image fusion;Point detection filter
Abstract:
An accurate and efficient measurement of pixel sharpness is a critical factor in most multi-focus image fusion methods. In our practice, we found that focused regions become more blurred than defocused regions when multi-focus images are blurred digitally. Based on this observation, a novel multi-focus image fusion method is presented in this paper. In the given fusion scheme, focused-region detection is achieved with a point detection filter and a Gaussian filter, which has been shown to be more effective than other frequently used image clarity measures. Moreover, unlike other commonly used consistency verification schemes, we propose a superpixel-based consistency verification (SCV) method that integrates image superpixels to improve the fusion performance, since superpixels can perceptually represent meaningful local image features. Two datasets of multi-focus images are used to conduct the experiments. Experimental results demonstrate that the proposed method is competitive with, or even outperforms, state-of-the-art fusion methods in terms of both subjective visual perception and objective evaluation metrics. © 2013 IEEE.
Language:
English
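The core observation above, that focused regions change more than defocused ones when blurred digitally, can be sketched with a plain averaging blur standing in for the paper's point detection and Gaussian filters (the filter choice and per-pixel selection rule here are simplifying assumptions):

```python
import numpy as np

def blur3(x):
    """3x3 average blur with edge padding (stand-in for the paper's filters)."""
    xp = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(xp[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def focus_measure(img):
    """Focused regions lose more energy when blurred, so the absolute
    difference from a blurred copy indicates sharpness."""
    return np.abs(img - blur3(img))

def fuse_by_focus(img_a, img_b):
    """Pick, per pixel, the source whose focus measure is larger."""
    return np.where(focus_measure(img_a) >= focus_measure(img_b), img_a, img_b)
```

The paper additionally refines this per-pixel decision with superpixel-based consistency verification; the sketch stops at the raw decision map.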
Reflecting the Ability to Solve Complex Engineering Problems in the Graduation Requirements of the Electronic Information Engineering Program
Authors:
Bin Yang; Yuehua Li; Lihui Pang; Zhongze Chen
Journal:
Education Modernization (教育现代化), 2020(19):90-93 ISSN:2095-8420
Author affiliation:
Department of Electronic Information Engineering, University of South China
Keywords:
complex engineering problems; graduation requirements; engineering education accreditation
Abstract:
Taking the cultivation of the ability to solve complex engineering problems in the Electronic Information Engineering program at the University of South China as an example, this paper discusses a talent-training model oriented toward engineering education accreditation. It describes how the program enables students to meet the requirement of solving complex engineering problems while implementing its training plan, providing a reference for the innovation and reform of accreditation-oriented training models in similar engineering programs at other institutions.
Language:
Chinese
Quality Improvement of Remote Sensing Fusion Images Based on a Deep Residual Denoising Network
Authors:
Bin Yang; Xiang Wang
Journal:
Laser & Optoelectronics Progress (激光与光电子学进展), 2019, 56(16):161009 ISSN:1006-4125
Author affiliations:
[Xiang Wang; Bin Yang] College of Electric Engineering, University of South China, Hengyang 421001, Hunan, China; Hunan Provincial Key Laboratory of Ultrafast Micro/Nano Technology and Advanced Laser Manufacturing, University of South China, Hengyang 421001, Hunan, China
Keywords:
image processing; image enhancement; image fusion; residual learning; convolutional neural network
Abstract:
Treating the residual between the ideal high-resolution multispectral image and a remote sensing fusion result as generalized noise, this paper proposes a quality-improvement algorithm for remote sensing fusion images based on a deep residual denoising network (DnCNN). The DnCNN learns the patterns of detail loss and spectral distortion of a given fusion algorithm, maps the input fusion result to a residual image, and then uses the residual image to supplement and repair the fusion result. Enhancement tests on the fusion results of different methods were conducted on QuickBird satellite images. Experimental results show that the fusion quality of all tested algorithms improves substantially after DnCNN post-enhancement; the combination of the support value transform (SVT) based method with DnCNN performs best, outperforming the latest remote sensing image fusion methods.
Language:
Chinese
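The residual-learning idea above (predict the residual between the ideal image and the fusion result, then add it back) can be illustrated with a toy stand-in: a linear 3x3 filter fitted by least squares replaces the DnCNN. Everything here is an illustrative assumption; it only demonstrates that residual correction reduces the error.

```python
import numpy as np

def blur3(x):
    """3x3 average blur with edge padding (toy degradation model)."""
    xp = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(xp[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def neighborhoods(x):
    """Each pixel's 3x3 neighbourhood flattened into an (H*W, 9) matrix."""
    xp = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return np.stack([xp[i:i + h, j:j + w].ravel()
                     for i in range(3) for j in range(3)], axis=1)

def residual_enhance(fused, ideal):
    """Fit a linear 3x3 filter (a one-layer stand-in for DnCNN) that predicts
    the residual ideal - fused, then add the predicted residual back."""
    feats = neighborhoods(fused)
    w, *_ = np.linalg.lstsq(feats, (ideal - fused).ravel(), rcond=None)
    return fused + (feats @ w).reshape(fused.shape)
```

In the paper, the predictor is a trained DnCNN applied to unseen fusion results; the least-squares fit here only shows the enhanced image moving closer to the ideal one.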
An Efficient Adaptive Interpolation for Bayer CFA Demosaicking
Authors:
Bin Yang*; Dongsheng Wang
Journal:
Sensing and Imaging, 2019, 20(1):1-17 ISSN:1557-2064
Corresponding author:
Bin Yang
Author affiliation:
[Bin Yang; Dongsheng Wang] College of Electric Engineering, University of South China, Hengyang, China
Corresponding affiliation:
[Bin Yang] College of Electric Engineering, University of South China, Hengyang, China
Keywords:
Demosaicking;Color filter array;Color interpolation;Gradient estimation
Abstract:
Demosaicking refers to the image processing that reconstructs a full-color image from the raw incomplete samples recorded by a single-chip image sensor. For most available demosaicking methods, image edges are important cues for designing the interpolation scheme, and the accuracy of edge estimation has a great influence on the reconstructed image quality. In this paper, a block edge estimation method is proposed that considers all the color channels comprehensively. Based on this edge estimation, the proposed algorithm first interpolates the G channel pixels under the guidance of color correlation and edge directions. The interpolated G channel is then used to help the R and B channel interpolations in sequence. In addition, a border pixel interpolation strategy is presented to overcome the difficulty of gradient estimation at border positions. Both the Kodak and IMAX data sets are used to validate the proposed methods. The experimental results demonstrate that the proposed algorithm provides superior performance in terms of both objective evaluation and subjective visual quality.
Language:
English
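The edge-directed green interpolation described above can be sketched in its simplest form: at each red/blue site of an RGGB mosaic, compare horizontal and vertical gradients of the neighbouring green samples and interpolate along the smoother direction. The RGGB layout and the plain two-sample averages are assumptions for illustration; borders are skipped here, whereas the paper has a dedicated border strategy.

```python
import numpy as np

def interpolate_green(mosaic):
    """Fill green values at R/B sites of an RGGB Bayer mosaic by
    gradient-directed interpolation of the four green neighbours."""
    H, W = mosaic.shape
    G = mosaic.copy()
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if (i % 2) == (j % 2):  # R or B site; its 4-neighbours are green
                dh = abs(mosaic[i, j - 1] - mosaic[i, j + 1])
                dv = abs(mosaic[i - 1, j] - mosaic[i + 1, j])
                if dh < dv:    # horizontal direction is smoother
                    G[i, j] = (mosaic[i, j - 1] + mosaic[i, j + 1]) / 2
                elif dv < dh:  # vertical direction is smoother
                    G[i, j] = (mosaic[i - 1, j] + mosaic[i + 1, j]) / 2
                else:          # no preferred direction: average all four
                    G[i, j] = (mosaic[i, j - 1] + mosaic[i, j + 1]
                               + mosaic[i - 1, j] + mosaic[i + 1, j]) / 4
    return G
```

On an image with a vertical edge, the vertical gradient stays zero, so the interpolation follows the edge and recovers the green plane exactly at interior sites.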
An Adaptive Bayer-Pattern Color Image Restoration Algorithm Based on Local Gradient Consistency
Authors:
Dongsheng Wang; Bin Yang
Journal:
Journal of University of South China (Science and Technology) (南华大学学报(自然科学版)), 2019, 33(2):40-48 ISSN:1673-0062
Author affiliation:
[Dongsheng Wang; Bin Yang] College of Electric Engineering, University of South China, Hengyang 421001, Hunan, China
Keywords:
color filter array; Bayer pattern array; demosaicking; adaptive interpolation
Abstract:
Based on the prior that local image gradients exhibit clear consistency, an adaptive Bayer-pattern color image restoration algorithm based on local gradient consistency is proposed. When restoring the green channel, the algorithm first interpolates in the vertical and horizontal directions to obtain candidate results. Exploiting the structural similarity of the color channels, local gradients are estimated by a linear weighting of multi-channel gradients, and guided filtering is used to make the local gradients consistent with spatial-domain edge structures, improving the accuracy of edge estimation. The consistency-checked gradient features are then adaptively fused with the candidate interpolations to obtain the final green channel. Finally, based on the structural-similarity prior between channels, the structural details of the green channel are used to correct the reconstructed red and blue channels, recovering the complete demosaicked color image. Validation on the Kodak and IMAX databases and performance comparisons with other recent methods show that the restored images are superior to those of traditional methods in both subjective visual quality and objective metrics. In addition, the algorithm is efficient since it involves no iteration or network training.
Language:
Chinese
Infrared and Visible Images Fusion with Multi-visual Cues
Authors:
Yuhan Sun; Bin Yang
Journal:
Sensing and Imaging, 2018, 19(1):1-17 ISSN:1557-2064
Corresponding author:
Yang, Bin (yangbin01420@163.com)
Author affiliation:
[Yuhan Sun; Bin Yang] College of Electric Engineering, University of South China, Hengyang 421001, China
Corresponding affiliation:
[Bin Yang] College of Electric Engineering, University of South China, Hengyang, China
Keywords:
Image fusion;Visual saliency uniqueness;Objectness
Abstract:
Image fusion is an important technique that combines the original information of multiple input images into a single composite image. The fused image is more useful for human visual perception or further computer processing tasks than any individual input. Most traditional infrared and visible fusion approaches assume that the original information is measured by local saliency features such as contrast or gradient, with little consideration of globally 'interesting' or 'useful' information. In this paper, an infrared and visible image fusion method is proposed by considering the final aims of image fusion: human visual perception and further image processing tasks. The fusion is implemented under the non-subsampled contourlet transform based image fusion framework. The low-frequency sub-band coefficients, which represent the intensity of the scene, are fused with a weight map constructed by considering both visual saliency uniqueness and task-oriented objectness, and refined for spatial consistency with a guided filter. The new fusion strategy ensures that 'interesting' or 'useful' objects are preserved in the fused image. Sixteen pairs of infrared and visible images are used to validate the proposed method. The experimental results show obvious improvements of the proposed method in both objective and subjective quality measurements compared with other methods. © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
Language:
English
A License Plate Detection Method Based on Principal Component Analysis Network
Authors:
Fei Zhong; Bin Yang
Journal:
Computer Science (计算机科学), 2018, 45(3):268-273 ISSN:1002-137X
Author affiliation:
[Fei Zhong; Bin Yang] College of Electric Engineering, University of South China, Hengyang 421001, Hunan, China
Keywords:
license plate detection; principal component analysis network; feature extraction; non-maximum suppression
Abstract:
License plate recognition is a core technology of intelligent transportation systems, and license plate detection is a crucial step within it. Traditional detection methods mostly rely on shallow hand-crafted features and achieve low detection rates in complex scenes. A detection algorithm based on a principal component analysis network (PCANet) can extract deep license plate features level by level without supervision, effectively improving robustness. The algorithm first obtains candidate plate regions via Sobel edge detection and edge-symmetry analysis; the candidates are then fed into the PCANet for deep feature extraction, and a support vector machine discriminates the plate regions; finally, a non-maximum suppression algorithm marks the best detection region. Vehicle images collected in complex scenes are used to analyze the parameters of the proposed method and to compare it with traditional methods. Experimental results show that the proposed algorithm is highly robust and outperforms traditional license plate detection methods.
Language:
Chinese
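The final stage of the pipeline above, non-maximum suppression over scored candidate regions, is a standard algorithm and can be sketched directly (the IoU threshold of 0.5 is an illustrative default, not the paper's setting):

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression.
    boxes: (N, 4) arrays of [x1, y1, x2, y2]; returns kept indices."""
    boxes = np.asarray(boxes, dtype=float)
    order = np.argsort(np.asarray(scores, dtype=float))[::-1]
    keep = []
    while order.size:
        i = int(order[0])
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the top box with all remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thr]  # drop boxes overlapping the kept one
    return keep
```

Two heavily overlapping candidates collapse to the higher-scored one, while a distant candidate survives.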
Constructing a Curriculum System Based on Engineering Education Accreditation: The Case of the Electronic Information Engineering Program at the University of South China
Authors:
Yuehua Li; Wei Guo; Bin Yang; Zhongze Chen; Yifa Sheng; ...
Journal:
Education Modernization (教育现代化), 2018, 5(14):107-111 ISSN:2095-8420
Author affiliation:
School of Electrical Engineering, University of South China, Hengyang, Hunan, China; [Wei Guo; Yifa Sheng; Yuehua Li; Zhongze Chen; Bin Yang; Xinlin Wang] University of South China
Keywords:
engineering education; professional accreditation; curriculum system
Abstract:
In accordance with China's engineering education accreditation standards, and in order to further deepen undergraduate teaching reform, enrich talent cultivation, and improve its quality, this paper draws on the development of the local electronics industry and the electronic information industry at home and abroad. It first introduces the curriculum construction process of the Electronic Information Engineering program under the Ministry of Education's engineering education accreditation, with emphasis on the construction of the practical curriculum system and the program's main curriculum reform measures, providing a reference for curriculum construction in engineering programs at other institutions pursuing accreditation.
Language:
Chinese
A Novel Deep-Learning-Based Single-Image Rain Removal Method
Authors:
Fei Zhong; Bin Yang
Journal:
Computer Science (计算机科学), 2018, 45(11):283-287 ISSN:1002-137X
Author affiliation:
[Fei Zhong; Bin Yang] College of Electric Engineering, University of South China, Hengyang 421001, Hunan, China
Keywords:
raindrop removal; deep learning; convolutional neural network
Abstract:
Raindrops severely degrade the visual quality of images and hinder subsequent image processing applications. Current deep-learning-based single-image rain removal methods can effectively mine deep image features, and their de-raining results surpass those of traditional methods; however, as network depth increases, the network tends to overfit, and the de-raining performance hits a bottleneck. Building on the advantages of deep learning, this paper learns the residual between rainy and rain-free images and then reconstructs the rain-free image from the residual and the source image. This approach greatly increases the usable network depth and accelerates the convergence of the algorithm. The method is validated on raindrop images acquired in different ways and compared with the latest de-raining methods; the results show that the proposed algorithm removes rain more effectively.
Language:
Chinese
Exploring Talent-Training Models for Embedded Systems in Universities: Based on the "Outstanding Engineers" Training Program
Authors:
Yuehua Li; Wei Guo; Bin Yang; Zhongze Chen; Weihua Zhu
Journal:
New West (新西部), 2017(03):111-112 ISSN:1009-8607
Author affiliation:
School of Electrical Engineering, University of South China, Hengyang 421001, Hunan, China; [Wei Guo; Yuehua Li; Zhongze Chen; Bin Yang; Weihua Zhu] University of South China
Keywords:
embedded systems; outstanding engineers; talent training; characteristics; shortcomings; reform path
Abstract:
Embedded systems are a principal vehicle for realizing "Made in China 2025", and cultivating outstanding embedded systems engineers is the key. This paper analyzes the characteristics of embedded systems engineering talent, the market demand, and the shortcomings of current university embedded systems curricula, and discusses reform paths for breaking through the bottlenecks in university training of embedded systems talent, so as to solidly advance the cultivation of outstanding embedded systems engineers.
Language:
Chinese
Multi-focus image fusion and super-resolution with convolutional neural network
Authors:
Yang, Bin*; Zhong, Jinying; Li, Yuehua; Chen, Zhongze
Journal:
International Journal of Wavelets, Multiresolution and Information Processing, 2017, 15(4):1750037 ISSN:0219-6913
Corresponding author:
Yang, Bin
Author affiliation:
[Chen, Zhongze; Li, Yuehua; Yang, Bin; Zhong, Jinying] Univ South China, Coll Elect Engn, Hengyang 421001, Peoples R China
Corresponding affiliation:
[Yang, Bin] Univ South China, Coll Elect Engn, Hengyang 421001, Peoples R China
Keywords:
Convolution;Neural networks;Optical resolving power;All-in-focus image;Computation efficiency;Convolutional neural network;High-resolution fused images;Multifocus image fusion;Reconstruction networks;Super resolution;Superresolution methods;Image fusion
Abstract:
The aim of multi-focus image fusion is to create a synthetic all-in-focus image from several images, each obtained with different focus settings. However, if the resolution of the source images is low, images fused with traditional methods will also be of low quality, which hinders further image analysis even though the fused image is all-in-focus. This paper presents a novel joint multi-focus image fusion and super-resolution method via convolutional neural network (CNN). The first-level network features of the different source images are fused under the guidance of the local clarity calculated from the source images. The final high-resolution fused image is obtained with the reconstruction network filters, which act like averaging filters. The experimental results demonstrate that the proposed approach generates fused images with better visual quality and acceptable computational efficiency compared with other state-of-the-art works. © 2017 World Scientific Publishing Company.
Language:
English
Remote Sensing Image Fusion with Convolutional Neural Network
Authors:
Jinying Zhong; Bin Yang; Guoyu Huang; Fei Zhong; Zhongze Chen
Journal:
Sensing and Imaging, 2016, 17(1):140-155 ISSN:1557-2064
Corresponding author:
Yang, Bin (yangbin01420@163.com)
Author affiliation:
[Bin Yang; Zhongze Chen; Jinying Zhong; Guoyu Huang; Fei Zhong] College of Electric Engineering, University of South China, Hengyang 421001, China
Corresponding affiliation:
[Bin Yang] College of Electric Engineering, University of South China, Hengyang, China
Keywords:
Remote sensing image fusion;Super-resolution;Convolutional neural network;Gram-Schmidt transform
Abstract:
Remote sensing image fusion (RSIF) refers to restoring a high-resolution multispectral image from its corresponding low-resolution multispectral (LMS) image aided by the panchromatic (PAN) image. Most RSIF methods assume that the missing spatial details of the LMS image can be obtained from the high-resolution PAN image. However, distortions can be produced because of the large difference between the structural component of the LMS image and that of the PAN image. In fact, the LMS image can exploit its own spatial details to improve the resolution. In this paper, a novel two-stage RSIF algorithm is proposed which makes full use of both the spatial details and the spectral information of the LMS image itself. In the first stage, convolutional neural network based super-resolution is used to increase the spatial resolution of the LMS image. In the second stage, the Gram-Schmidt transform is employed to fuse the enhanced MS and PAN images to further improve the resolution of the MS image. Owing to the spatial resolution enhancement in the first stage, spectral distortions in the fused image are evidently decreased, while the spatial details are preserved in the fused images. QuickBird satellite source images are used to test the performance of the proposed method. The experimental results demonstrate that the proposed method achieves better spatial details and spectral information simultaneously compared with other well-known methods. © 2016, Springer Science+Business Media New York.
Language:
English
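The second-stage Gram-Schmidt fusion mentioned above can be sketched in its commonly used simplified form: simulate a low-resolution panchromatic band as the mean of the MS bands, and inject the difference between the real PAN and this simulated band into each MS band with a covariance-based gain. This is a hedged sketch of generic GS-style pansharpening, not the paper's exact transform.

```python
import numpy as np

def gs_pansharpen(ms, pan):
    """Simplified Gram-Schmidt-style pansharpening.
    ms: (B, H, W) upsampled multispectral bands; pan: (H, W)."""
    intensity = ms.mean(axis=0)        # simulated low-resolution panchromatic
    detail = pan - intensity           # spatial detail to inject
    ic = intensity - intensity.mean()
    out = np.empty_like(ms)
    for k in range(ms.shape[0]):
        mc = ms[k] - ms[k].mean()
        # band-wise gain: how strongly this band follows the intensity
        gain = (mc * ic).mean() / ((ic * ic).mean() + 1e-12)
        out[k] = ms[k] + gain * detail
    return out
```

When the PAN band carries no extra detail (it equals the simulated intensity), the MS bands pass through unchanged, which is the expected degenerate behaviour.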
A Survey of Research Progress on Convolutional Neural Networks
Authors:
Bin Yang; Jinying Zhong
Journal:
Journal of University of South China (Science and Technology) (南华大学学报(自然科学版)), 2016, 30(3):66-72 ISSN:1673-0062
Author affiliation:
[Jinying Zhong; Bin Yang] College of Electric Engineering, University of South China, Hengyang 421001, Hunan, China
Keywords:
deep learning; convolutional neural network; feature extraction; intelligent recognition
Abstract:
The powerful modeling and representation capabilities of deep learning (DL) address key pattern recognition problems such as insufficient feature expressiveness and the curse of dimensionality, and have attracted wide attention from researchers worldwide. The convolutional neural network (CNN), which mimics biological visual systems, is the first successful case of DL; its three characteristics of local receptive fields, weight sharing, and downsampling have made it a research hotspot in intelligent machine vision. This paper surveys the latest CNN research, reviewing its development history, recent theoretical models, and applications in speech, image, and video, and concludes with an outlook on CNN's future potential and development directions.
Language:
Chinese
An Anti-Nuclear-Radiation Image Restoration Method Based on Image Inpainting
Authors:
Bin Yang; Lihong Zhao; Qian Deng
Journal:
Journal of University of South China (Science and Technology) (南华大学学报(自然科学版)), 2016, 30(4):56-61 ISSN:1673-0062
Author affiliations:
College of Electric Engineering, University of South China, Hengyang 421001, Hunan, China; College of Mechanical Engineering, University of South China, Hengyang 421001, Hunan, China; [Bin Yang; Lihong Zhao; Qian Deng] University of South China
Keywords:
nuclear radiation; image restoration; noise detection; image denoising
Abstract:
In strong-radiation environments, digital imaging devices are affected by high-energy particle rays, so the captured video or images have a very low signal-to-noise ratio, which severely limits their use in such environments. To remove this noise and reduce the impact of radiation particles on imaging devices, a new strong-radiation image denoising technique based on inpainting is proposed. First, the strong interference noise is detected based on an analysis of how imaging devices are disturbed in strong-radiation environments; then the noisy regions are treated as regions to be repaired, and the noise is eliminated with image inpainting; finally, the repaired image is post-processed with a denoising method based on the nonsubsampled contourlet transform. Experimental results and analysis show that the proposed method is efficient, delivers a notable denoising effect, removes intense and densely distributed noise well, and effectively improves the performance of digital imaging devices in strong-radiation environments.
Language:
Chinese
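The detect-then-repair pipeline above can be sketched with the simplest possible stand-ins: pixels that deviate strongly from their local median are flagged as particle hits, and each flagged pixel is filled from the local median (a one-pixel proxy for the paper's inpainting stage). The threshold and window size are illustrative assumptions.

```python
import numpy as np

def repair_impulse(img, thr=0.3):
    """Detect impulse-like outliers against the 3x3 local median and
    replace them with that median. Returns (repaired image, outlier mask)."""
    H, W = img.shape
    xp = np.pad(img, 1, mode="edge")
    # stack the nine shifted copies and take the per-pixel median
    win = np.stack([xp[i:i + H, j:j + W] for i in range(3) for j in range(3)])
    med = np.median(win, axis=0)
    noisy = np.abs(img - med) > thr
    out = img.copy()
    out[noisy] = med[noisy]
    return out, noisy
```

An isolated bright hit on a flat background is detected and removed exactly, while its clean neighbours are left untouched.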
Remote Sensing Image Fusion Based on Super-Resolution of Multispectral Images
Authors:
Chao Yang; Bin Yang; Guoyu Huang
Journal:
Laser & Optoelectronics Progress (激光与光电子学进展), 2016, 53(2):021001-1-021001-8 ISSN:1006-4125
Author affiliation:
[Chao Yang; Bin Yang; Guoyu Huang] College of Electric Engineering, University of South China, Hengyang 421001, Hunan, China
Keywords:
remote sensing; image fusion; image super-resolution; YUV transform; stationary wavelet transform
Abstract:
Traditional remote sensing image fusion methods do not fully exploit the spatial information of the low-resolution multispectral image itself. To address this, a remote sensing image fusion method based on super-resolution of the low-resolution multispectral image is proposed. Sparse-representation-based super-resolution is applied to the low-resolution multispectral image to enhance its spatial information while preserving its spectral information; the stationary wavelet transform is then used to fuse the luminance component Y of the enhanced multispectral image with the panchromatic image; and the fused multispectral image is obtained by the inverse YUV transform. Experiments on real remote sensing images show that the algorithm effectively improves the spatial detail of the fused image while preserving the spectral information, and comparative experiments verify the superiority of the method.
Language:
Chinese
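The YUV-domain fusion step above can be sketched as follows. The matrix round-trip is standard BT.601-style YUV; the luminance fusion itself is simplified to plain substitution, whereas the paper fuses Y and PAN with a stationary wavelet transform.

```python
import numpy as np

# RGB -> YUV (BT.601-style); the exact matrix inverse recovers RGB.
M = np.array([[0.299, 0.587, 0.114],
              [-0.14713, -0.28886, 0.436],
              [0.615, -0.51499, -0.10001]])
M_INV = np.linalg.inv(M)

def yuv_pansharpen(rgb, pan):
    """Fuse an RGB multispectral image with a panchromatic band in YUV space.
    rgb: (H, W, 3); pan: (H, W). The Y channel is simply replaced here,
    standing in for the paper's stationary-wavelet fusion of Y and PAN."""
    yuv = rgb @ M.T
    yuv[..., 0] = pan            # inject panchromatic spatial detail
    return yuv @ M_INV.T         # inverse YUV transform back to RGB
```

Because the chrominance channels U and V are untouched, spectral (color) information is carried through while the spatial detail comes from the panchromatic band.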
Simultaneous image fusion and demosaicing via compressive sensing
Authors:
Yang, Bin*; Luo, Jie; Guo, Ling; Cheng, Fang
Journal:
Information Processing Letters, 2016, 116(7):447-454 ISSN:0020-0190
Corresponding author:
Yang, Bin
Author affiliation:
[Luo, Jie; Yang, Bin; Cheng, Fang; Guo, Ling] Univ South China, Coll Elect Engn, Hengyang 421001, Peoples R China
Corresponding affiliation:
[Yang, Bin] Univ South China, Coll Elect Engn, Hengyang 421001, Peoples R China
Keywords:
Demosaicing;Image fusion;Software design and implementation
Abstract:
In this paper, a compressive-sensing-based simultaneous fusion and demosaicing method for the raw data of single-chip imaging cameras is introduced. In order to meet the incoherence constraints of compressive sensing theory, the popular Bayer CFA is replaced with a random panchromatic color filter array. The demosaicing problem is then cast as an inherently ill-posed inverse problem, and compressive sensing technology is employed to solve it. The restored sparse coefficients of the different images are fused with the l1-norm of the coefficients serving as the activity measurement, and the final fused image is reconstructed from the fused sparse coefficients. The extended joint sparse model is further used to exploit the inter-channel correlation of the different color components. The simulation results in the experimental section demonstrate that the proposed method gives superior quantitative and qualitative performance. Highlights: the proposed method is performed on raw mosaic images of single-sensor color imaging devices; the demosaicing and fusion problem is cast as an inherently ill-posed inverse problem, and the fused image is obtained directly; the extended joint sparse model exploits the inter-channel correlation of the color components of the source images.
Language:
English
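The l1-norm activity rule mentioned in the abstract above, choosing per patch the sparse coefficient vector with the larger l1 norm, can be sketched directly. The matrix layout (one patch per row, one dictionary atom per column) is an assumption for illustration.

```python
import numpy as np

def fuse_sparse_coeffs(c1, c2):
    """Fuse two sparse coefficient matrices of shape (n_patches, n_atoms):
    per patch, keep the coefficient vector with the larger l1 norm
    (the activity measurement used in the method above)."""
    a1 = np.abs(c1).sum(axis=1)
    a2 = np.abs(c2).sum(axis=1)
    return np.where((a1 >= a2)[:, None], c1, c2)
```

Reconstructing the fused image then amounts to multiplying the fused coefficients by the dictionary, which the sketch leaves out.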
Image Fusion and Super-Resolution with Convolutional Neural Network
Authors:
Zhong, Jinying; Yang, Bin*; Li, Yuehua; Zhong, Fei; Chen, Zhongze
Journal:
Communications in Computer and Information Science, 2016, 663:78-88 ISSN:1865-0929
Corresponding author:
Yang, Bin
Author affiliation:
[Zhong, Fei; Chen, Zhongze; Li, Yuehua; Yang, Bin; Zhong, Jinying] Univ South China, Sch Elect Engn, Hengyang 421001, Peoples R China
Corresponding affiliation:
[Yang, Bin] Univ South China, Sch Elect Engn, Hengyang 421001, Peoples R China
Conference:
Chinese Conference on Pattern Recognition
Keywords:
Image fusion;Super-resolution;Convolutional neural network
Abstract:
Image fusion aims to integrate multiple images of the same scene into an artificial image that contains more useful information than any individual input. Due to the constraints of imaging sensors and signal transmission bandwidth, the resolution of most source images is limited. In the traditional processing framework, super-resolution is conducted to improve the resolution of the source images before the fusion operations; however, those super-resolution methods do not make full use of the multi-resolution characteristics of images. In this paper, a novel joint image fusion and super-resolution algorithm is proposed. Source images are decomposed into undecimated wavelet transform (UWT) coefficients, whose resolution is enhanced with a convolutional neural network. The coefficients are then integrated with a certain fusion rule, and finally the fused image is constructed from the combined coefficients. The proposed method is tested on multi-focus images, medical images, and visible light and near-infrared images, respectively. The experimental results demonstrate the superior performance of the proposed method.
Language:
English