Journal of Shanghai Jiaotong University, 2023, 57(3): 366-378. doi: 10.16183/j.cnki.jsjtu.2021.238

Electronic Information and Electrical Engineering


An Image Dehazing Method for UAV Aerial Photography of Buildings Combining MCAP and GRTV Regularization

HUANG He^{a,b}, HU Kaiyi^{a,b}, LI Zhanyi^{a,b}, WANG Huifeng^{a,b}, RU Feng^{a,b}, WANG Jun^{a}

a. Xi’an Key Laboratory of Intelligent Expressway Information Fusion and Control, Chang’an University, Xi’an 710064, China

b. School of Electronics and Control Engineering, Chang’an University, Xi’an 710064, China

Corresponding author: WANG Jun, Associate Professor. Tel.: 029-88308121; E-mail: jwang@nwu.edu.cn.

Managing editor: SHI Yiwen


Foundation items: National Key Research and Development Program of China (2021YFB2501200); National Natural Science Foundation of China (52172324, 52172379); Key Research and Development Program of Shaanxi Province (2021GY-285); Natural Science Basic Research Program of Shaanxi Province (2021JM-184); Open Fund of Xi’an Key Laboratory of Intelligent Expressway Information Fusion and Control, Chang’an University (300102321502)

Received: 2021-06-20   Accepted: 2021-08-01

About the authors

HUANG He (1979-), Professor; his research interests include information fusion and image processing.


Abstract

To address the problems of low clarity, low contrast, and dark overall color in images restored by traditional dehazing methods, an improved image dehazing method is proposed and applied to unmanned aerial vehicle (UAV) aerial images of buildings. First, to solve the problem that the global atmospheric light value is easily affected by objects in the scene, an atmospheric light estimation method based on the minimum variance of the color attenuation prior projection is proposed: the difference image of brightness and saturation is constructed, the region of minimum variance is located, and the global atmospheric light estimate is determined. Then, the regional atmospheric light obtained from the scene depth information is fused with the global atmospheric light to obtain a new atmospheric light map. Finally, the transmittance is optimized using the haze-line prior based on non-local information, and an algorithm based on haze-line theory and guided relative total variation regularization is proposed: the transmittance is corrected by computing a transmittance reliability function, and the large amount of useless texture information in the image is eliminated, which improves the precision of transmittance estimation and effectively improves the quality of restored images in regions of thick haze and abrupt depth change in UAV aerial scenes. Experimental results show that, compared with other algorithms, the average gradient, contrast, fog aware density evaluator, and blur coefficient of the restored images are improved by 12.2%, 7.0%, 11.9%, and 12.5% on average, respectively. The running time is also shorter than that of some algorithms, and the processed aerial images are clearer and more consistent with human visual perception.

Keywords: color attenuation prior; image processing; variation function; dehazing; unmanned aerial vehicle (UAV)


Cite this article:


HUANG He, HU Kaiyi, LI Zhanyi, WANG Huifeng, RU Feng, WANG Jun. An Image Dehazing Method for UAV Aerial Photography of Buildings Combining MCAP and GRTV Regularization[J]. Journal of Shanghai Jiaotong University, 2023, 57(3): 366-378 doi:10.16183/j.cnki.jsjtu.2021.238

UAV aerial imagery of urban buildings is an important means of acquiring survey and 3D-modeling information of buildings, and an important component of urban computing and smart-city construction. In hazy weather, aerial images suffer from atmospheric absorption and scattering, and scene visibility and color saturation decrease, so image dehazing is required. At present, mainstream dehazing methods generally rely on a physical model [1]. Ref. [2] estimates model parameters statistically to restore scene visibility, and Ref. [3] designs a visibility restoration method for blurred color images; these methods usually require high-precision ranging equipment to measure scene depth, which limits their practical application. Ref. [4] uses degraded images of the same scene under different weather conditions as prior information to compute scene depth; its practicality is likewise limited, because acquiring multiple images of the same scene under different conditions is difficult, so solving for scene depth becomes the difficulty of dehazing. Another key parameter, the transmittance, describes the ability of light to propagate through the air and is closely related to scene depth. During restoration, the accuracy of the atmospheric light estimate determines the brightness of the dehazed image and affects the accuracy of the transmittance solution; estimating the atmospheric light therefore offers one route to solving for scene depth. Ref. [5] derived the dark channel prior (DCP) from statistics over a large number of haze-free images, providing prior information for estimating atmospheric light and transmittance to compute the restored image. Ref. [6] found, by analyzing a large number of hazy images, that haze concentration is proportional to the difference between brightness and saturation, called the color attenuation prior (CAP), but it performs poorly on images with heavy haze. Moreover, methods based on local priors such as DCP and CAP rely excessively on local pixels when solving for transmittance, so the visibility improvement of the restored image is insignificant and the dehazing effect is uneven. Ref. [7] argues that the atmospheric light is not a single global constant but a variable learned by the model together with the transmittance, and proposes a DehazeNet-based dehazing algorithm that requires no manual summarization of hazy-image features, reducing the influence of prior knowledge on restored image quality. Ref. [8] analyzed the pixel distribution of hazy images in the red-green-blue (RGB) color space and first proposed the non-local haze-line prior, which assumes that every pixel cluster contains some pixels located at small scene depth and only lightly affected by haze. This assumption holds in the vast majority of regions, but in practice some clusters contain few pixels and may contain no haze-free pixel, which biases the transmittance estimate. Ref. [9] adopted an improved weighted least-squares model to solve for the optimal transmittance estimate and improved its accuracy, but the model lacks a term distinguishing texture from structure, so the optimized transmittance retains considerable texture. The relative total variation (RTV) model [10] is suited to removing texture, but edges tend to be over-smoothed, so the RTV model itself still needs improvement.

To address these problems, this paper proposes a UAV image dehazing method that combines the minimum variance of color attenuation prior projection (MCAP) and guided relative total variation (GRTV) regularization. The restored images are clearer, and metrics such as average gradient, contrast, fog aware density evaluator (FADE) [11], and blur coefficient are all improved.

1 Haze Image Degradation Model

Suspended particles scatter and refract light in hazy weather, blurring or defocusing the image and degrading its quality. To obtain clear restored images, mainstream dehazing algorithms analyze the causes of image blur and model the transmission of light in hazy weather, i.e., the reflected-light attenuation model and the atmospheric light model [12], as shown in Fig. 1. As Fig. 1 shows, the light sources of images captured by UAVs in hazy weather fall into two categories: light reflected from the surface of the target object, and atmospheric light from outside the scene, i.e., light in the air. The mathematical expression of this model is:

$I(x,y)=I_0(x,y)e^{-hd(x,y)}+I_s\left(1-e^{-hd(x,y)}\right)$ (1)

where $I(x,y)$ is the pixel value of the hazy degraded image at pixel coordinates $(x,y)$; $d(x,y)$ is the scene depth at $(x,y)$; $I_0(x,y)e^{-hd(x,y)}$ is the image component produced by the incident-light attenuation model, whose light source is the light reflected from object surfaces, with $I_0(x,y)$ the reflected light intensity at $(x,y)$, $e^{-hd(x,y)}$ the transmission distribution, and $h$ the atmospheric scattering coefficient; $I_s\left(1-e^{-hd(x,y)}\right)$ is the image component produced by the atmospheric light imaging model, whose light source is the atmospheric light, with $I_s$ the ambient-light image component, whose attenuation depends on scene depth. Defining $\kappa$ as the particle size and $\eta$ as the wavelength, their relationship is:

$h(\eta)\propto\dfrac{1}{\eta^{\kappa}}$ (2)

Fig. 1   Influence of haze particles on the imaging process


In the atmospheric light model, haze particles are 1-10 μm in size, markedly larger than the $10^{-4}$ μm of normal air molecules. As Eq. (2) shows, the size of the haze particles determines $h$; in this regime the different wavelengths of visible light are scattered approximately equally, so the captured UAV aerial image appears grayish-white with low clarity. In addition, the reflected-light attenuation model shows that the intrinsic brightness of reflected light decays exponentially with depth. Haze particles also introduce considerable noise when refracting and scattering light, causing defocus blur and further degrading the quality of UAV aerial images.
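For readers who want to experiment with the degradation model, the following is a minimal NumPy sketch of Eq. (1) that synthesizes a hazy image from a clear image and a depth map; the function and parameter names are illustrative, not from the paper's code.

```python
import numpy as np

def synthesize_haze(J, depth, h=1.0, air_light=0.9):
    """Apply the degradation model of Eq. (1):
    I = J * exp(-h d) + I_s * (1 - exp(-h d)).

    J         -- clear image, float in [0, 1], shape (H, W, 3)
    depth     -- scene depth map d(x, y), shape (H, W)
    h         -- atmospheric scattering coefficient
    air_light -- ambient light I_s (scalar or RGB triple)
    """
    t = np.exp(-h * depth)[..., None]        # transmission e^{-h d(x, y)}
    return J * t + air_light * (1.0 - t)     # attenuated reflection + airlight
```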

2 Atmospheric Light Map Estimation Based on MCAP

Traditional DCP [5] and CAP [6] dehazing methods usually take the brightness of the hazy-image pixels corresponding to the brightest 0.1% of pixels in the dark channel image directly as the atmospheric light value. Affected by white or highlighted regions in the foreground, this estimate is inaccurate and has significant limitations. UAV aerial images often contain large sky regions; if a single value is used as the atmospheric light, then at large scene depths with direct illumination from the light source, parts of the sky region become distorted and the single atmospheric light value tends to fall in the distorted region, so the information-rich near and middle ground is hard to restore effectively and useful aerial targets are poorly recovered. This paper therefore proposes an MCAP method for obtaining the atmospheric light map, which estimates the atmospheric light more accurately.

2.1 CAP Theory

According to CAP theory, the haze concentration in any region of a UAV aerial image is positively correlated with the difference between the brightness and the saturation of the pixels in that region; from this rule the depth differences of different regions of a hazy image can be obtained, formulated as:

$d(x,y)\propto c(x,y)\propto v(x,y)-s(x,y)$ (3)

where $c(x,y)$ is the haze concentration at pixel coordinates $(x,y)$, and $v(x,y)$ and $s(x,y)$ are the brightness and color saturation at $(x,y)$, respectively. The correlation of scene depth with the difference between brightness and saturation is shown in Fig. 2: the larger the scene depth, the larger the difference between brightness and saturation.

Fig. 2   Theory of CAP


2.2 Atmospheric Light Estimation Based on MCAP

If the pixel with the largest depth value were chosen directly as the atmospheric light in CAP theory, the relationship between neighboring pixels would be ignored and the estimate would be easily biased by image noise. First, the hazy image is converted to the hue-saturation-value (HSV) color space, and the difference between brightness and saturation is computed at each pixel:

$I_{\text{dif}}(x,y)=v_I(x,y)-s_I(x,y)$ (4)

where $v_I(x,y)$ and $s_I(x,y)$ are the brightness and saturation of image $I$ at pixel $(x,y)$, and $I_{\text{dif}}(x,y)$ is the corresponding color attenuation rate image. Solving the color attenuation rate image of the whole hazy image, Eq. (3) gives:

$d(x,y)\propto c(x,y)\propto I_{\text{dif}}(x,y)$ (5)

To avoid the influence of a single extreme depth value on the atmospheric light estimate, the color attenuation rate image (i.e., the scene depth) in the neighborhood of each point is compared; finding the region of minimum variance accurately locates the region containing the atmospheric light:

$M_a(x,y)=\min\limits_{\Omega(x,y)}\delta\left[I_{\text{dif}}\left(\Omega(x,y)\right)\right]$ (6)

where $M_a(x,y)$ is the region containing the atmospheric light value, $\Omega(x,y)$ is the local region of the color attenuation rate image centered at pixel $(x,y)$, and $\delta[\cdot]$ is the variance operator. Traversing the image for local regions satisfying the condition is the most direct approach, but it is computationally expensive, and with too few reference pixels it is difficult to analyze the depth trend between regions effectively. To simplify the computation, the color attenuation rate of each row or column of the image and its trend can be collected, and regions with a high color attenuation rate and a small difference from neighboring rows and columns are taken as atmospheric light candidates. Following this principle, the MCAP atmospheric light estimation method is proposed. First, the mean color attenuation rate of each column is taken as the projection of the scene depth of that column, and the sum of the projections of the neighboring $2r+1$ columns ($r$ is the selection radius) is computed:

$R_r(x)=\dfrac{1}{y_{\max}}\sum\limits_{y=1}^{y_{\max}} I_{\text{dif}}(x,y)$ (7)

$R_{r\_s}(x)=\sum\limits_{k=x-r}^{x+r} R_r(k)$ (8)

where $y_{\max}$ is the maximum column coordinate of the image, $R_r(x)$ is the column projection of the scene depth, and $R_{r\_s}(x)$ is the sum of the column projections of the scene depth.

Then, the coordinates of all extrema of $R_{r\_s}$ are taken as the candidate row coordinate set $P_{r\_p}$:

$P_{r\_p}=\text{findpeak}\left[R_{r\_s}(x)\right]$ (9)

where findpeak[·] is the peak-finding function. The candidate column coordinate set $P_{c\_p}$ is obtained in the same way. Taking the obtained row and column coordinates as centers, candidate regions of size $(2r+1)\times(2r+1)$ are determined, the region containing the atmospheric light is found with Eq. (6), and the median of the pixels in that region is the atmospheric light estimate $a_{\text{sur}}$:

$a_{\text{sur}}=\text{median}\left[M_a(x,y)\right]$ (10)

where median[·] is the median function.
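The MCAP estimation of Eqs. (4)-(10) can be sketched as follows. This is an illustrative Python implementation under our own reading of the projection and peak search, using OpenCV and SciPy; the names and small choices (e.g., clipping the window at the image borders) are assumptions rather than the authors' released code.

```python
import cv2
import numpy as np
from scipy.signal import find_peaks

def mcap_airlight(img_bgr, r=40):
    """Global atmospheric light a_sur via minimum-variance CAP projection,
    Eqs. (4)-(10); a sketch."""
    hsv = cv2.cvtColor(img_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2HSV)
    i_dif = hsv[..., 2] - hsv[..., 1]                  # v - s, Eq. (4)

    def candidates(proj):
        s = np.convolve(proj, np.ones(2 * r + 1), mode="same")   # Eq. (8)
        peaks, _ = find_peaks(s)                                 # Eq. (9)
        return peaks

    rows = candidates(i_dif.mean(axis=1))              # candidate row centres
    cols = candidates(i_dif.mean(axis=0))              # candidate column centres

    best, best_var = (r, r), np.inf
    for y in rows:
        for x in cols:
            win = i_dif[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
            if win.var() < best_var:                   # delta[.] in Eq. (6)
                best_var, best = win.var(), (y, x)
    y, x = best
    region = img_bgr[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return np.median(region.reshape(-1, 3), axis=0)    # Eq. (10)
```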

2.3 Atmospheric Light Map Estimation

Following the inference that the atmospheric light is related to pixel position, experiments show that the restoration quality is closely related to the atmospheric light estimate and to the depth position of each pixel. With the atmospheric light as a single global constant, when the estimate is too large, distant regions are restored well but near regions are too dark and their details hard to discern; when the estimate is too small, distant regions exhibit "brightness saturation" while near regions are bright and clear. Therefore, if the atmospheric light is estimated independently for each pixel according to its scene depth, so that it is no longer a single constant but varies with depth, the trade-off between near-field illumination and far-field visibility can in principle be resolved; atmospheric light satisfying this condition is defined as the atmospheric light map. Because the scene depth variation is an approximate estimate, when the image contains large sky areas and direct illumination occurs, parts of the distant highlights and the sky region become distorted by the imaging device, so the atmospheric light map should be estimated so as to keep the near field clear first while avoiding sky distortion as much as possible. The atmospheric light map is computed by taking the dark channel image of the hazy image as the depth-information image $I_d$ and clustering by depth similarity into sub-block images $I_d^g$ ($g$ is the sub-block index). Then the pixels of the hazy image corresponding to the top 1% brightest pixels of each sub-block are found, and the average of these pixels over the three RGB channels is taken as the atmospheric light estimate $A_d^g$ of the current depth sub-block; after stitching and boundary smoothing, the atmospheric light estimate map $A_d$ of the image is obtained:

$I_d^g=\text{KM}\left[I_d\right],\ g\in\mathbb{N}^*$ (11)

$A_d^g=\text{mean}\left[\text{sortTop}_{1\%}\left(I_d^g\right)\right],\ g\in\mathbb{N}^*$ (12)

$A_d=A_d^1\cup A_d^2\cup\cdots$ (13)

where KM[·] is the K-means clustering function, mean[·] is the averaging function, and sortTop$_{1\%}$[·] takes the top 1% of the set after sorting in descending order.
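A possible sketch of the depth-clustered regional atmospheric light map of Eqs. (11)-(13) follows, assuming a dark channel computed by min-channel erosion and scikit-learn's K-means; the number of clusters g and the patch size are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def regional_airlight_map(img_bgr, g=2, patch=15):
    """Depth-clustered atmospheric light map A_d, Eqs. (11)-(13);
    the dark channel serves as the depth reference I_d."""
    img = img_bgr.astype(np.float32) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(img.min(axis=2), kernel)          # dark channel I_d

    labels = KMeans(n_clusters=g, n_init=10).fit_predict(
        dark.reshape(-1, 1)).reshape(dark.shape)       # Eq. (11)

    a_map = np.zeros_like(img)
    for k in range(g):
        mask = labels == k
        thresh = np.percentile(dark[mask], 99)         # top 1 % brightest
        sel = mask & (dark >= thresh)
        a_map[mask] = img[sel].mean(axis=0)            # Eq. (12)
    return a_map                                       # stitched map, Eq. (13)
```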

2.4 Atmospheric Light Map Estimation with Minimum Variance of CAP Projection

To further improve the robustness of the atmospheric light estimate, the global atmospheric light obtained by the proposed minimum-variance projection of the color attenuation prior is taken as the reference atmospheric light, and the depth-based regional atmospheric light as the position-dependent atmospheric light; the two estimates are fused, giving a new method for solving the atmospheric light map.

Define the global atmospheric light obtained by the minimum-variance projection method as the reference atmospheric light map $A_{\text{sur}}$, and the depth-based regional atmospheric light map as $A_d$; the atmospheric light estimate map is then computed as

$A_e=\lambda_1 A_{\text{sur}}+\lambda_2\,\text{GF}\left(I_{\text{gra}},A_d\right)$ (14)

where $\lambda_1$ and $\lambda_2$ are adjustment coefficients controlling the proportions of the reference and regional atmospheric light maps, generally with $\lambda_1=1-\lambda_2$; GF[·] is the guided filter, with the grayscale image $I_{\text{gra}}$ as the guide image and $A_d$ as the image to be filtered. Since the guided filter offers good edge-preserving smoothing and fuses features of the guide image, smoothing $A_d$ with $I_{\text{gra}}$ as the guide not only introduces inter-region correlation into the atmospheric light map, alleviating excessive variation between regions, but also makes the smoothed atmospheric light map vary slowly with the image within local regions. The guided filter is formulated as:

$I_q(i)=\alpha_k I_{\text{in}}(i)+\beta_k,\ i\in\Omega_k$ (15)

$\alpha_k=\dfrac{\frac{1}{|\omega|}\sum_{i\in\Omega_k} I_{\text{in}}(i)I_p(i)-\mu_k \bar{I}_{p,k}}{\sigma_k^2+\varepsilon_0}$ (16)

$\beta_k=\bar{I}_{p,k}-\alpha_k\mu_k$ (17)

where $I_{\text{in}}$ is the input guide image, $I_q$ is the filtered image, $I_p$ is the image to be filtered, $\Omega_k$ is the local filter window, $k$ is the filter window index, $|\omega|$ and $\sigma_k$ are the number of pixels and the standard deviation in the local window, $\varepsilon_0$ is a regularization parameter preventing a zero denominator, $i$ is the pixel index within the window, $\mu_k$ and $\bar{I}_{p,k}$ are the means of the guide image and of the image to be filtered in the $k$-th window, and $\alpha_k$ and $\beta_k$ are the window filter coefficients.
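Eqs. (14)-(17) can be prototyped as below; a plain box-filter guided filter (on float32 single-channel inputs) stands in for GF[·], and lam2 = 0.5 is an arbitrary illustrative choice, since the paper only states that λ1 = 1 - λ2.

```python
import cv2
import numpy as np

def guided_filter(guide, src, radius=30, eps=1e-3):
    """Plain single-channel guided filter, Eqs. (15)-(17)."""
    ksize = (2 * radius + 1, 2 * radius + 1)
    mean = lambda x: cv2.boxFilter(x, -1, ksize)       # window averages
    mu_g, mu_s = mean(guide), mean(src)
    var_g = mean(guide * guide) - mu_g * mu_g
    cov_gs = mean(guide * src) - mu_g * mu_s
    a = cov_gs / (var_g + eps)                         # alpha_k, Eq. (16)
    b = mu_s - a * mu_g                                # beta_k,  Eq. (17)
    return mean(a) * guide + mean(b)                   # Eq. (15)

def fuse_airlight(a_sur, a_map, gray, lam2=0.5):
    """Fuse global and regional atmospheric light per Eq. (14),
    with lambda_1 = 1 - lambda_2; a sketch."""
    smoothed = np.dstack([guided_filter(gray, a_map[..., c])
                          for c in range(3)])
    return (1.0 - lam2) * a_sur + lam2 * smoothed
```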

3 Transmittance Estimation Based on GRTV Regularization

To address the bias of the coarse transmittance estimate at some positions, a minimum-channel-based transmittance correction method is proposed that computes a transmittance reliability function. Meanwhile, because the coarse transmittance image contains a large amount of useless texture, a GRTV regularization method is designed to improve the accuracy of transmittance estimation and the quality of restored images in regions of thick haze and abrupt depth change in UAV aerial scenes.

3.1 Haze-Line Prior Theory

Ref. [8] showed experimentally that pixels of similar color and brightness in a clear image cluster together, and further found that pixels of the same cluster are generally scattered over regions of different depth in the image. When these pixels are affected by haze, the degraded pixel values depend only on the transmittance $t$:

$I_{\text{lin}}(x,y)=t_{\text{lin}}(x,y)J(x,y)+\left[1-t_{\text{lin}}(x,y)\right]a_{\text{lin}}=t_{\text{lin}}(x,y)\left[J(x,y)-a_{\text{lin}}\right]+a_{\text{lin}}$ (18)

式中: J(x, y)Ilin(x, y) 分别为在坐标点(x,y)清晰图像的像素值及雾霾线先验理论下含雾降质后图像的像素值; alin为雾霾线先验理论的大气光估计值; tlin(x, y) 为雾霾线先验理论下坐标点(x,y)处的透射率.由于J(x,y)与alin在簇内可视为常量,所以定义tlin(x, y)(J(x, y)-alin)为J'(x,y),可得Ilin(x, y) 大小及方向由tlin(x, y) 来决定,且对于同一簇内,不同tlin(x, y) 对应Ilin(x, y)的终点在同一直线上,坐标如图3(a)所示.因此,清晰图像中同一簇内像素点受雾霾影响时,在RGB空间OxRyGzB沿同一直线分布,将此直线定义为雾霾线.图3(b)中红色与绿色标记位置为两束像素簇在图像中的分布,图3(c)3(d)分别为这两簇像素点在RGB颜色空间的分布,可以看出像素点大致呈线型分布.

Fig. 3   Distribution of pixels of a hazy image in RGB color space


3.2 Transmittance Estimation Based on Haze-Line Theory

The haze-line prior provides a non-local feature based on the pixel distribution of the whole image, avoiding the defect of local priors that estimate the transmittance only from local pixels, and introducing global correlation into the estimate. The key to solving for the transmittance is finding the haze lines in the image. Assuming the atmospheric light is known, define:

$I_A(x,y)=I(x,y)-a_{\text{lin}}$ (19)

式中: IA(x, y)(x, y)处像素亮度与大气光估计值的差值,即以大气光为坐标原点,图像中像素点的坐标表述形式.利用大气散射模型对式(19)对进行变形,可得表达式如下:

$I_A(x,y)=t_{\text{lin}}(x,y)\left[J(x,y)-a_{\text{lin}}\right]$ (20)

Solving for the haze lines by clustering in Cartesian coordinates is complicated; to facilitate the clustering, $I_A(x,y)$ is expressed in spherical coordinates centered at the atmospheric light estimate:

$I_A(x,y)=\left(\gamma(l_{\text{Sph}}),\ \theta(l_{\text{Sph}}),\ \varphi(l_{\text{Sph}})\right)$ (21)

where $l_{\text{Sph}}$ is a coordinate in the spherical system; $\gamma(l_{\text{Sph}})$ is the distance in the spherical system from the pixel (with the atmospheric brightness removed) to the origin, i.e., the modulus of $I_A(x,y)$; and $\theta(l_{\text{Sph}})$ and $\varphi(l_{\text{Sph}})$ are the elevation and azimuth of $I_A(x,y)$ in the spherical system. From Eq. (20), for fixed scene content $J(x,y)$ and $a_{\text{lin}}$ are constant three-channel vectors and $t_{\text{lin}}(x,y)$ is a single-channel scalar, so the modulus of $I_A(x,y)$ depends only on the value of $t_{\text{lin}}(x,y)$. In spherical coordinates, a change of the haze-line transmittance $t_{\text{lin}}(x,y)$ does not affect $\theta(l_{\text{Sph}})$ or $\varphi(l_{\text{Sph}})$. Therefore, if pixels $l_{\text{Sph1}}$ and $l_{\text{Sph2}}$ satisfy $\theta(l_{\text{Sph1}})\approx\theta(l_{\text{Sph2}})$ and $\varphi(l_{\text{Sph1}})\approx\varphi(l_{\text{Sph2}})$ in spherical coordinates, they should also have similar RGB values in the haze-free image:

$\left\{\theta(l_{\text{Sph1}})\approx\theta(l_{\text{Sph2}}),\ \varphi(l_{\text{Sph1}})\approx\varphi(l_{\text{Sph2}})\right\}\ \Rightarrow\ \forall\, t_{\text{lin}}:\ J(x_1,y_1)\approx J(x_2,y_2)$ (22)

In spherical coordinates, if two pixels have similar azimuth $\varphi(l_{\text{Sph}})$ and elevation $\theta(l_{\text{Sph}})$, they are distributed along the same spatial line and belong to the same haze line. Pixels can therefore be clustered by the similarity of $\theta(l_{\text{Sph}})$ and $\varphi(l_{\text{Sph}})$ to determine the cluster of each pixel. To simplify the clustering in the experiments, the spherical coordinate system is partitioned in advance into 1000 positions by angle, and a nearest-neighbor lookup tree is built to compute the similarity between the image pixels and these positions, assigning the pixels to different clusters. For convenience, the modulus $\gamma(x,y)$ of the vector $I_A(x,y)$ is called the radiance of pixel $(x,y)$; from Eq. (19), $\gamma(x,y)$ is related to the scene depth $d(x,y)$ by:

$\gamma(x,y)=t_{\text{lin}}(x,y)\left\|J(x,y)-a_{\text{lin}}\right\|=e^{-hd(x,y)}\left\|J(x,y)-a_{\text{lin}}\right\|$ (23)

According to haze-line theory, the pixels of a cluster are generally scattered over different depth positions in the image. With sufficient pixels in a cluster, there always exist pixels unaffected or only slightly affected by haze. Suppose pixel $(x,y)$ is a haze-free pixel of the cluster, so that its transmittance $t_{\text{lin}}(x,y)$ is 1. By Eq. (23), the radiance of pixel $(x,y)$ then equals the modulus of the difference between the clear image and the atmospheric light, which is recorded as the maximum radiance of the cluster:

$\gamma_{\max}=\left\|J(x,y)-a_{\text{lin}}\right\|$ (24)

Combining Eqs. (23) and (24), for the other haze-affected pixels of the cluster, the transmittance can be approximated by the ratio of the radiance $\gamma(x,y)$ of the pixel to the maximum radiance $\gamma_{\max}$ of the cluster:

$t_{\text{lin}}(x,y)=\gamma(x,y)/\gamma_{\max}$ (25)
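A rough sketch of the haze-line clustering and coarse transmittance of Eqs. (19)-(25) follows. Unlike the paper's fixed 1000-position angular partition, this sketch samples the preset directions randomly and queries them with a k-d tree, so it should be read as an approximation of the procedure; the radius and cluster arrays are also returned for reuse in the reliability computation below.

```python
import numpy as np
from scipy.spatial import cKDTree

def hazeline_transmission(img, airlight, n_dirs=1000, seed=0):
    """Coarse transmittance via haze lines, Eqs. (19)-(25); a sketch.

    img      -- hazy image, float in [0, 1], shape (H, W, 3)
    airlight -- RGB atmospheric light estimate a_lin
    """
    H, W, _ = img.shape
    ia = (img - airlight).reshape(-1, 3)              # I_A, Eq. (19)
    radius = np.linalg.norm(ia, axis=1) + 1e-9        # gamma(x, y), Eq. (23)
    dirs = ia / radius[:, None]                       # unit directions

    # preset haze-line directions: random here, a fixed angular
    # partition of the sphere in the paper
    rng = np.random.default_rng(seed)
    table = rng.normal(size=(n_dirs, 3))
    table /= np.linalg.norm(table, axis=1, keepdims=True)
    _, cluster = cKDTree(table).query(dirs)           # nearest haze line

    r_max = np.zeros(n_dirs)
    np.maximum.at(r_max, cluster, radius)             # gamma_max, Eq. (24)
    t = radius / r_max[cluster]                       # Eq. (25)
    return np.clip(t, 0.05, 1.0).reshape(H, W), radius, cluster
```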

When solving for the transmittance with Eq. (25), it is assumed that each pixel cluster contains some pixels located at small scene depth and only lightly affected by haze. This assumption holds in the vast majority of regions, but for clusters with few pixels there may be no haze-free pixel, biasing the transmittance estimate. If the radiance of the pixel with the smallest depth in the cluster is $\gamma_m$, Eq. (23) implies $\gamma_m<\gamma_{\max}$, and the transmittance of the other pixels of the cluster obtained from $\gamma_m$ is:

$t'_{\text{lin}}(x,y)=\gamma(x,y)/\gamma_m,\quad t'_{\text{lin}}(x,y)>t_{\text{lin}}(x,y)$ (26)

The transmittance estimates of the pixels of such a cluster are then generally too large, producing a bias that degrades the restored image. To solve this problem, a minimum-channel-based transmittance correction method is proposed.

To correct the transmittance in a targeted way, the reliability of the coarse transmittance estimate must first be computed. According to haze-line theory, the main factors affecting the accuracy of the transmittance estimate are the number and the distribution of the pixels within a cluster. The more pixels a cluster contains, the less likely its highest-radiance pixels are affected by haze, the closer $\gamma_m$ is to $\gamma_{\max}$, and the more accurate the estimate. The more widely the cluster pixels are spaced along the haze line, i.e., the more dispersed they are in the image, the more accurate the computed haze-line direction and the higher the transmittance precision. The transmittance reliability is defined as:

$\Gamma(x,y)=\min\left\{\dfrac{N'}{150},\ 1\right\}\times\min\left\{\lambda_3\dfrac{\gamma(x,y)}{\gamma_{\max}}+\lambda_4\dfrac{\gamma_{\max}-\gamma_{\min}}{\gamma_{\max}},\ 1\right\}$ (27)

where $\Gamma(x,y)$ is the transmittance reliability, $N'$ is the number of pixels in the cluster, $\gamma_{\min}$ is the minimum radiance of the cluster, and $\lambda_3$ and $\lambda_4$ are weights with empirical values 0.9 and 0.1. To evaluate the reliability of a pixel, the number of pixels in its cluster is counted first; if $N'$ is below 150 (an empirical value), the cluster is deemed to have too few pixels and the transmittance estimate may be unreliable, so $N'/150$ is returned, and 1 otherwise, measuring whether the cluster is reliable. Then the ratio of the pixel's radiance to the cluster maximum, and the ratio of the difference between the cluster's maximum and minimum radiance to the cluster maximum, are computed and weighted-summed to measure the transmittance reliability of the pixel. Computing the reliability of Fig. 3(b) with Eq. (27) gives the result in Fig. 4, where $X$ and $Y$ denote the axis directions. As Fig. 4 shows, since the transmittance (see Fig. 3(b)) contains noise on the near-field ground that affects its precision, the reliability at those positions is low, and the reliability of the estimate gradually decreases with increasing scene depth.
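Eq. (27) translates almost directly into NumPy. This sketch reuses the radius and cluster arrays returned by hazeline_transmission() above; the 150-pixel threshold and the weights follow the empirical values in the text, and the result is a flat array to be reshaped to (H, W) for visualization as in Fig. 4.

```python
import numpy as np

def transmission_reliability(radius, cluster, n_dirs=1000, lam3=0.9, lam4=0.1):
    """Per-pixel transmittance reliability, Eq. (27); a sketch."""
    counts = np.bincount(cluster, minlength=n_dirs).astype(float)
    r_max = np.zeros(n_dirs)
    np.maximum.at(r_max, cluster, radius)             # gamma_max per cluster
    r_min = np.full(n_dirs, np.inf)
    np.minimum.at(r_min, cluster, radius)             # gamma_min per cluster

    size_term = np.minimum(counts[cluster] / 150.0, 1.0)   # N'/150 check
    spread = (lam3 * radius / r_max[cluster]
              + lam4 * (r_max[cluster] - r_min[cluster]) / r_max[cluster])
    return size_term * np.minimum(spread, 1.0)
```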

Fig. 4   Transmittance reliability map


3.3 RTV Model

RTV depends on neither texture priors nor manual intervention; it distinguishes texture from structural information using only the total variation of the function, formulated as [10]:

$\arg\min\limits_{S}\sum\limits_p\left\{\left(S_p-I_p\right)^2+\tau\left[\dfrac{D_X(p)}{L_X(p)+\psi}+\dfrac{D_Y(p)}{L_Y(p)+\psi}\right]\right\}$ (28)

where $p$ is a local region; $S_p$ and $I_p$ are the values of the extracted structure image and the original image in that region, with $\left(S_p-I_p\right)^2$ their similarity; $\tau$ is a weight; $D_X$, $D_Y$ and $L_X$, $L_Y$ are the regularization terms distinguishing texture from structure in the X and Y directions, i.e., the total variation and the inherent variation; and $\psi$ is an adjustment constant preventing a zero denominator. Owing to the fixed form of the RTV model, for hazy degraded images under diverse meteorological conditions and shooting modes, it cannot adapt to the coarse estimate of atmospheric transmittance in the dehazing model, and must therefore be improved.

3.4 GRTV Model

When haze-line theory is used for the coarse transmittance estimate, some pixels may be biased and contain considerable texture, so a GRTV model is designed to optimize the transmittance. Under the optimal-estimation conditions, the optimized transmittance must remain faithful at positions of high reliability, be corrected at positions of low reliability using local correlation with neighboring pixels, smooth more strongly in textured parts, and preserve strength in structural parts, thus yielding the optimal transmittance. By introducing a guided regularization term and a weighted data-fidelity term, the GRTV model is defined as:

$\min\limits_{t_{\text{bst}}}\sum\limits_p\left\{\Gamma(p)\left[t_{\text{bst}}(p)-t(p)\right]^2+\lambda_5\left[\rho_X\left(\dfrac{\partial t_{\text{bst}}(p)}{\partial X}\right)^2+\rho_Y\left(\dfrac{\partial t_{\text{bst}}(p)}{\partial Y}\right)^2\right]+\lambda_6\left[\dfrac{D_X(p)}{L_X(p)+\varepsilon}+\dfrac{D_Y(p)}{L_Y(p)+\varepsilon}\right]\right\}$ (29)

式中: ttbst分别为基于雾霾线理论的透射率粗估计与最优估计; ρX, ρY为引导平滑系数;ε为调节系数,用来避免分母项为0; λ5,λ6为正则项系数;第1项为数据保真保证项,用来控制优化后透射率与粗估计的偏差;第2项为引导正则项,通过引导图像来改善边缘过度平滑问题;第3项为RTV正则项,通过计算窗口总变分与固有变分来区分纹理与结构信息.定义ρX,ρY如下所示:

$\rho_X=1\Big/\left[\left(\dfrac{\partial I_G(p)}{\partial X}\right)^2+\varepsilon\right]$ (30)

$\rho_Y=1\Big/\left[\left(\dfrac{\partial I_G(p)}{\partial Y}\right)^2+\varepsilon\right]$ (31)

where $I_G$ is the guide image. In regions of sharp gradient change, $\rho_X$ and $\rho_Y$ are small, preventing over-smoothing of the transmittance. Following the definitions above, the horizontal RTV regularization term is:

$\sum\limits_p\dfrac{D_X(p)}{L_X(p)+\varepsilon}=\sum\limits_p\dfrac{\sum_{q\in\Omega(p)}g_{p,q}\left|(\partial_X t_{\text{bst}})_q\right|}{\left|\sum_{q\in\Omega(p)}g_{p,q}(\partial_X t_{\text{bst}})_q\right|+\varepsilon}$ (32)

where $(\partial_X t_{\text{bst}})_q$ is the partial derivative of $t_{\text{bst}}$ in the X direction at point $q$, and $g_{p,q}$ is the standard Gaussian kernel centered at $q$ in region $p$.

To solve Eq. (29), the relative total variation term must be written as a combination of a nonlinear term and a quadratic term. When separating and constructing the quadratic term via $\left|(\partial_X t_{\text{bst}})_q\right|$, the formulation centered at $p$ cannot separate $\left|(\partial_X t_{\text{bst}})_q\right|$ from the nonlinear term because it differs at each point of the window, so a formulation centered at pixel $q$ must be constructed; after the transformation:

$\sum\limits_p\dfrac{D_X(p)}{L_X(p)+\varepsilon}=\sum\limits_q\sum\limits_{p\in\Omega(q)}\dfrac{g_{p,q}}{L_X(p)+\varepsilon}\left|(\partial_X t_{\text{bst}})_q\right|\approx\sum\limits_q\sum\limits_{p\in\Omega(q)}\dfrac{g_{p,q}}{L_X(p)+\varepsilon}\cdot\dfrac{1}{\left|(\partial_X t_{\text{bst}})_q\right|+\varepsilon_{\text{sr}}}(\partial_X t_{\text{bst}})_q^2=\sum\limits_q u_{X,q}w_{X,q}(\partial_X t_{\text{bst}})_q^2$ (33)

where $\varepsilon_{\text{sr}}$ is an adjustment constant. Let $u_{X,q}$ and $w_{X,q}$ be

$u_{X,q}=\sum\limits_{p\in\Omega(q)}\dfrac{g_{p,q}}{L_X(p)+\varepsilon}=\left(G_\sigma*\dfrac{1}{\left|G_\sigma*\partial_X t_{\text{bst}}\right|+\varepsilon}\right)_q$ (34)

$w_{X,q}=\dfrac{1}{\left|(\partial_X t_{\text{bst}})_q\right|+\varepsilon_{\text{sr}}}$ (35)

where $G_\sigma$ is a Gaussian filter of variance $\sigma$, $*$ is the convolution operator, and $\partial_X t_{\text{bst}}$ is the partial derivative of $t_{\text{bst}}$ in the X direction.

Similarly, the vertical relative total variation term and its parameters $u_{Y,q}$, $w_{Y,q}$ are

$\sum\limits_p\dfrac{D_Y(p)}{L_Y(p)+\varepsilon}=\sum\limits_q u_{Y,q}w_{Y,q}(\partial_Y t_{\text{bst}})_q^2$ (36)

$u_{Y,q}=\sum\limits_{p\in\Omega(q)}\dfrac{g_{p,q}}{L_Y(p)+\varepsilon}=\left(G_\sigma*\dfrac{1}{\left|G_\sigma*\partial_Y t_{\text{bst}}\right|+\varepsilon}\right)_q$ (37)

$w_{Y,q}=\dfrac{1}{\left|(\partial_Y t_{\text{bst}})_q\right|+\varepsilon_{\text{sr}}}$ (38)

By Eqs. (30)-(31) and (33)-(38), Eq. (29) is rewritten in matrix form:

$\min\left[(v_{\text{bst}}-v_I)^{\text{T}}\Gamma(v_{\text{bst}}-v_I)+\lambda_5\left(v_{\text{bst}}^{\text{T}}C_X^{\text{T}}P_X C_X v_{\text{bst}}+v_{\text{bst}}^{\text{T}}C_Y^{\text{T}}P_Y C_Y v_{\text{bst}}\right)+\lambda_6\left(v_{\text{bst}}^{\text{T}}C_X^{\text{T}}U_X W_X C_X v_{\text{bst}}+v_{\text{bst}}^{\text{T}}C_Y^{\text{T}}U_Y W_Y C_Y v_{\text{bst}}\right)\right]$ (39)

式中: vbstvI分别为tbst与t的列向量表示; Γ,PX,PY, UX, WX, UY, WY分别为在RNM×NM(R、N、M分别为图像的颜色通道数量、列像素数及行像素数)空间内以可靠性系数Γ(x)和平滑权重ρX, ρY, uX,q, wX,q, uY,q, wY,q为对角元素的对角矩阵; CX, CY为X及Y方向离散前向差分算子.对式(39)求导,采用与相对总变分模型类似的求解方法,模型最优化问题转化为求下式中一组线性方程:

$\left(\Gamma I+\lambda_5 L_1^n+\lambda_6 L_2^n\right)v_{\text{bst}}^{n+1}=\Gamma v_I$ (40)

$L_1^n=C_X^{\text{T}}P_X C_X+C_Y^{\text{T}}P_Y C_Y$ (41)

$L_2^n=C_X^{\text{T}}U_X^n W_X^n C_X+C_Y^{\text{T}}U_Y^n W_Y^n C_Y$ (42)

Transforming Eq. (40) gives:

$v_{\text{bst}}^{n+1}=\left(\Gamma I+\lambda_5 L_1^n+\lambda_6 L_2^n\right)^{-1}\Gamma v_I^n$ (43)

where $n$ is the number of iterations. Following Eq. (43), the transmittance estimate $t_{\text{lin}}(x,y)$ obtained from haze-line theory, the reliability function $\Gamma$, the hazy image $I$, and the regularization coefficients $\lambda_5$, $\lambda_6$ are taken as inputs to solve for the transmittance in vector form $v_{\text{bst}}$; the algorithm steps are as follows.

Algorithm   GRTV filtering algorithm

Input: transmittance image $t$; transmittance reliability $\Gamma$; hazy image $I$; regularization coefficients $\lambda_5$, $\lambda_6$

Output: optimized transmittance $t_{\text{bst}}$

(1) function GRTV($t$, $\Gamma$, $I$, $\lambda_5$, $\lambda_6$)
(2)   $t_{\text{bst}} = t$
(3)   for $n$ = 0 to 2 do
(4)     compute the regularization terms by Eqs. (30)-(31) and (33)-(38)
(5)     solve for the current $v_{\text{bst}}$ by Eq. (43)
(6)     take the current $v_{\text{bst}}$ as the next $v_I$
(7)   end for
(8)   output $v_{\text{bst}}$
(9)   reshape $v_{\text{bst}}$ into $M$ rows and $N$ columns to obtain the optimized $t_{\text{bst}}$
(10)  return $t_{\text{bst}}$
(11) end function
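The iteration of Eqs. (40)-(43) amounts to repeatedly solving a sparse, symmetric linear system. The following is a sketch, not the authors' implementation: the forward-difference operators, the boundary handling, and the Gaussian approximation of $u_{X,q}$, $u_{Y,q}$ are our own choices under the formulas above.

```python
import numpy as np
import scipy.sparse as sp
from scipy.ndimage import gaussian_filter
from scipy.sparse.linalg import spsolve

def grtv_filter(t, gamma, guide, lam5=0.05, lam6=0.015,
                sigma=3.0, eps=1e-3, eps_sr=0.02, n_iter=3):
    """Sketch of the GRTV iteration, Eqs. (40)-(43).

    t     -- coarse transmittance, shape (M, N)
    gamma -- per-pixel reliability, shape (M, N)
    guide -- grayscale guide image I_G, shape (M, N)
    """
    M, N = t.shape
    # forward-difference operators C_X, C_Y (last difference zeroed)
    dx = sp.diags([-np.ones(N), np.ones(N - 1)], [0, 1]).tolil()
    dx[-1, :] = 0
    Cx = sp.kron(sp.eye(M), dx).tocsr()
    dy = sp.diags([-np.ones(M), np.ones(M - 1)], [0, 1]).tolil()
    dy[-1, :] = 0
    Cy = sp.kron(dy, sp.eye(N)).tocsr()

    Gam = sp.diags(gamma.ravel())
    gx = np.gradient(guide, axis=1).ravel()
    gy = np.gradient(guide, axis=0).ravel()
    Px = sp.diags(1.0 / (gx ** 2 + eps))              # rho_X, Eq. (30)
    Py = sp.diags(1.0 / (gy ** 2 + eps))              # rho_Y, Eq. (31)
    L1 = Cx.T @ Px @ Cx + Cy.T @ Py @ Cy              # Eq. (41)

    v = t.ravel().copy()
    for _ in range(n_iter):
        tx = (Cx @ v).reshape(M, N)
        ty = (Cy @ v).reshape(M, N)
        # u and w weights, Eqs. (34)-(35) and (37)-(38)
        ux = gaussian_filter(1.0 / (np.abs(gaussian_filter(tx, sigma)) + eps), sigma)
        uy = gaussian_filter(1.0 / (np.abs(gaussian_filter(ty, sigma)) + eps), sigma)
        wx = 1.0 / (np.abs(tx) + eps_sr)
        wy = 1.0 / (np.abs(ty) + eps_sr)
        L2 = (Cx.T @ sp.diags((ux * wx).ravel()) @ Cx
              + Cy.T @ sp.diags((uy * wy).ravel()) @ Cy)        # Eq. (42)
        rhs = Gam @ v                                  # Gamma * v_I
        v = spsolve((Gam + lam5 * L1 + lam6 * L2).tocsc(), rhs)  # Eq. (43)
    return v.reshape(M, N)
```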

4 Algorithm Flow Design

With the atmospheric light map $A_{\text{dak}}$ obtained by the minimum-variance CAP-projection estimation described above, and the transmittance $t_{\text{fil}}$ optimized by the GRTV-regularized transmittance estimation, the hazy-image restoration formula is:

$J(x,y)=\dfrac{I(x,y)-A_{\text{dak}}}{\max\left\{t_{\text{fil}}(x,y),\ t_0\right\}}+A_{\text{dak}}$ (44)

where $t_0$ is the lower transmittance threshold, with an empirical value of 0.1; a minimal sketch of this recovery step is given below. The algorithm flow is shown in Fig. 5, and the steps are as follows.
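Eq. (44) itself is nearly a one-liner; the following sketch applies the empirical clamp $t_0=0.1$ and clips the result to the valid range, which is an assumption about post-processing.

```python
import numpy as np

def recover(img, a_map, t, t0=0.1):
    """Scene recovery per Eq. (44); img and a_map are floats in [0, 1]."""
    t = np.clip(t, t0, 1.0)[..., None]      # lower-bound the transmittance
    return np.clip((img - a_map) / t + a_map, 0.0, 1.0)
```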

Fig. 5   Flow chart of the algorithm


Step 1   Acquire the hazy image.

Step 2   Using the minimum-variance projection method, project the color attenuation rate of the hazy image horizontally and vertically, taking the mean color attenuation rate of each row or column of pixels as the projection value, and find the row-column positions whose projection sum with the surrounding rows and columns is large and whose variance is minimal. The mean of the pixels in a rectangular region of radius $r$ centered on that pixel is the global atmospheric light estimate; in the experiments $r$ is 40. The global atmospheric light thus obtained is recorded as the reference atmospheric light.

Step 3   Using the depth-based regional atmospheric light method, take the dark channel image as the scene-depth reference, segment the hazy image by depth via clustering, and solve for the atmospheric light estimates of the different depth regions, recorded as the regional atmospheric light estimate.

Step 4   Following the atmospheric light map computation, fuse the reference and regional atmospheric light so that the fused atmospheric light map carries both inter-region correlation and depth-position information.

Step 5   Following haze-line theory, express all image pixels in the spherical coordinate system with the atmospheric light as origin, then cluster the pixels by the similarity of their azimuth and elevation to determine the cluster to which each pixel belongs. From the haze line formed by each cluster, solve for the maximum and minimum radiance of the cluster, and compute the coarse transmittance estimate and the transmittance reliability.

Step 6   With the hazy image, the coarse transmittance estimate, and the transmittance reliability as inputs, follow Eqs. (30)-(42) and the GRTV solution procedure to solve for the optimal transmittance estimate $t_{\text{bst}}$; in the experiments, the regularization coefficients $\lambda_5$ and $\lambda_6$ are 0.05 and 0.015, respectively.

Step 7   Substitute the atmospheric light map and the optimized transmittance into the hazy-image degradation model to solve for the restored image.

5 Experimental Results and Analysis

The experiments use a set of self-collected and downloaded UAV aerial images of buildings in different scenes (no public dataset is currently available). The dark channel prior dehazing (DCP) [5], fusion-based variational image dehazing (FVID) [13], haze-line-based single image dehazing (Haze-Line) [8], deep-learning-based DehazeNet [7], the dehazing algorithm based on an improved gradient similarity kernel (IGSK) [14], the adaptive-transmittance dehazing method based on multi-scale windows (MSW) [15], and the proposed algorithm are applied for comparison. Figs. 6-9 show the experimental results on key frames of four different scenes. Table 1 reports the average gradient, grayscale contrast, FADE [11], blur coefficient [16-17], and running time for objective evaluation of the dehazing effect. FADE, proposed in 2015, is currently the most authoritative dehazing evaluation metric [11]: it requires no reference to an additional clear image and does not rely on local features; instead it learns a haze-density evaluation model from the statistical differences between hazy images and their clear counterparts, and can accurately evaluate the haze density of restored images. To compare the running time of the algorithms, all hazy images are resized to 540 pixel × 400 pixel before timing.
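The paper does not give closed-form definitions of its metrics, so the following sketch uses common formulations of the average gradient and grayscale contrast as assumptions; FADE and the blur coefficient require the cited implementations [11, 16-17] and are not reproduced here.

```python
import cv2
import numpy as np

def average_gradient(gray):
    """Mean local gradient magnitude; larger values indicate sharper detail."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def gray_contrast(img_bgr):
    """Standard deviation of the grayscale image, taken here as contrast."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    return float(gray.std())
```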

Fig. 6   Image dehazing effect of E1


Fig. 7   Image dehazing effect of E2


Fig. 8   Image dehazing effect of E3


Fig. 9   Image dehazing effect of E4


Tab. 1   Evaluation of image parameters of dehazed images

| Group | Dehazing algorithm | Average gradient | Grayscale contrast | FADE | Blur coefficient | Runtime/s |
| E1 | Original image | 3.4132 | 38.0438 | 3.3746 | — | — |
| E1 | DCP[5] | 4.6308 | 41.5066 | 1.6240 | 1.2875 | 10.83 |
| E1 | FVID[13] | 5.0402 | 44.8432 | 1.5107 | 1.3973 | 1.22 |
| E1 | Haze-Line[8] | 7.1235 | 66.1660 | 0.9111 | 1.9979 | 5.34 |
| E1 | DehazeNet[7] | 5.0708 | 55.1841 | 1.5434 | 1.3558 | 1.27 |
| E1 | IGSK[14] | 2.9055 | 43.6854 | 1.8265 | 0.4058 | 16.89 |
| E1 | MSW[15] | 5.4143 | 53.9913 | 1.4022 | 1.4407 | 2.54 |
| E1 | Proposed method | 8.1420 | 72.9530 | 0.7171 | 2.3017 | 6.21 |
| E2 | Original image | 2.7286 | 49.5167 | 2.3164 | — | — |
| E2 | DCP[5] | 3.6947 | 56.4930 | 1.0717 | 1.3913 | 10.81 |
| E2 | FVID[13] | 4.4220 | 53.6060 | 1.2449 | 1.6863 | 0.81 |
| E2 | Haze-Line[8] | 4.9044 | 58.5268 | 0.5796 | 1.8093 | 4.07 |
| E2 | DehazeNet[7] | 4.1678 | 67.0087 | 0.7425 | 1.5658 | 1.27 |
| E2 | IGSK[14] | 2.0462 | 49.3389 | 2.9110 | 0.6316 | 16.40 |
| E2 | MSW[15] | 4.4642 | 71.8793 | 0.7974 | 1.6494 | 1.56 |
| E2 | Proposed method | 6.8819 | 72.3219 | 0.3560 | 2.5359 | 5.00 |
| E3 | Original image | 4.0277 | 34.3381 | 1.4294 | — | — |
| E3 | DCP[5] | 6.2019 | 37.0552 | 0.5512 | 1.5606 | 10.78 |
| E3 | FVID[13] | 6.5895 | 36.3347 | 0.5775 | 1.7264 | 0.81 |
| E3 | Haze-Line[8] | 8.5561 | 34.4546 | 0.2612 | 2.1607 | 4.68 |
| E3 | DehazeNet[7] | 6.4785 | 44.1485 | 0.4464 | 1.6495 | 1.25 |
| E3 | IGSK[14] | 3.0247 | 33.7982 | 1.8486 | 0.6427 | 16.47 |
| E3 | MSW[15] | 7.3224 | 50.1062 | 0.4932 | 1.8492 | 1.54 |
| E3 | Proposed method | 11.6165 | 57.2106 | 0.1953 | 2.9986 | 5.01 |
| E4 | Original image | 3.6964 | 28.8263 | 2.2842 | — | — |
| E4 | DCP[5] | 5.1626 | 35.8074 | 1.1885 | 1.6117 | 10.89 |
| E4 | FVID[13] | 5.7754 | 19.5894 | 1.2384 | 1.7113 | 0.81 |
| E4 | Haze-Line[8] | 9.7989 | 57.8329 | 0.4250 | 2.5171 | 4.34 |
| E4 | DehazeNet[7] | 5.3420 | 38.9765 | 1.3434 | 1.6479 | 1.31 |
| E4 | IGSK[14] | 1.8620 | 28.6709 | 3.4998 | 0.2421 | 16.58 |
| E4 | MSW[15] | 6.2033 | 48.6046 | 1.0322 | 1.7546 | 1.53 |
| E4 | Proposed method | 9.6690 | 59.4137 | 0.2788 | 2.5483 | 4.72 |



5.1 Subjective Evaluation

Subjective quality is the first consideration in evaluating dehazing. Given the requirements of aerial image processing, the subjective criteria are, in order: restore the details of near-field ground target buildings clearly, then ensure the quality of the middle and far field, and finally keep the sky as clear as possible. Experiment group 1 (E1) is an urban-building aerial image and its restorations, shown in Fig. 6; the large sky region and the thick haze over the distant city make restoring the rich building details difficult. DCP restores the near-field buildings well, but haze of some density remains in the distant and sky regions, and the overall tone is dark. FVID improves the near-field building region to some extent, but restores the distant and sky regions poorly. Haze-Line is clearly effective on distant regions but introduces some distortion: distant buildings are dark or even blackish, and the overall brightness and saturation are low. The learning-based DehazeNet keeps a tone close to the original, but a thin haze remains over the whole image. IGSK has some dehazing effect, but the building details are not prominent and the result looks slightly blurred. MSW restores near-field buildings well, but the middle and far field are poor and the colors somewhat distorted. The restored image of the proposed method shows a considerable gain in saturation, and the details of the near, middle, and distant buildings are handled more finely and clearly.

Experiment group 2 (E2) is an aerial image of riverside buildings with extensive thick haze, shown in Fig. 7. DCP restores the building details, but large building areas are dark in tone and thin haze remains in the middle field, so the restoration is unsatisfactory. FVID improves slightly on DCP, but haze noise remains in the middle and far field. The Haze-Line restoration has a biased overall tone: the image is greenish and its color clearly distorted. DehazeNet improves the restoration but the image is dark overall, with poor restoration of the distant riverside. IGSK smooths the restored building details, but a noticeable halo appears in the sky and the building details in thick-haze regions are handled poorly. MSW handles thin haze well but fails on the brighter thick-haze regions. Compared with the above algorithms, the texture of the buildings is clearly visible in the restoration of the proposed method, its contrast and brightness are clearly better, and the distant riverside buildings are also clearly visible.

Experiment group 3 (E3) is an image of buildings at an urban intersection, containing near-field greenery and distant buildings, shown in Fig. 8. DCP restores the near field and some building regions well, but the image is dark overall. FVID raises the restoration quality somewhat, yet considerable haze noise remains. Haze-Line, affected by strong light, renders the distant buildings greenish and also distorts the near field. The DehazeNet restoration is dim in the detailed regions and haze noise remains locally. IGSK achieves some dehazing but restores details less well than the other algorithms. MSW fails to restore the distant thick-haze region, losing some information. The proposed method effectively raises the color saturation; its brightness and contrast are clearly better than the other algorithms, and its colors are more vivid.

Experiment group 4 (E4) is an urban aerial image containing distant high-rises and nearby low houses, shown in Fig. 9. DCP restores the details of the nearby houses well, but the overall color is dull. FVID dehazes evenly overall, but its restoration is darker in tone than DCP and handles the distant buildings poorly. Haze-Line restores the details of the distant thick-haze region well, but the image tone is bluish-green and clearly distorted. The DehazeNet restoration retains a thin layer of haze, and the detail handling is not pronounced. IGSK produces an overall blurry restoration with considerable loss of detail. MSW improves the overall restoration, but the distant region remains poor. By contrast, the proposed method restores the urban building details well, handles the distant high-rises better than the other algorithms, keeps the near buildings clearly visible, and produces more vivid colors.

5.2 Objective Evaluation

Building on the advantage of the proposed algorithm over comparable methods in subjective quality, objective factors are further considered. The quality of the restored images is evaluated with objective metrics including the average gradient, grayscale contrast, FADE, and blur coefficient; the results are shown in Fig. 10 and Table 1.

Fig. 10   Evaluation of image parameters of dehazed images


结合图6~10表1的实验结果分析可以看出,FVID算法得到的复原图像与原图像相比,客观评价指标有所提升,但灰度图像对比度较小,图像整体颜色产生偏差.且由于无人机航拍图片受天空影响过大,造成整体亮度估计过高,去雾后图像偏暗.DCP的复原图像的评价指标虽然都有所提升,但其灰度图像对比度仍然较低,导致整体图像色调偏暗.Haze-Line的复原图像在平均梯度等评价参数上表现良好,但结合实际去雾效果发现图像中有部分区域仍含有大量雾霾噪声,图像色彩有较多失真.DehazeNet的复原图像虽然在平均梯度指标上表现良好,但仍存在图像整体色调偏暗的问题,且由于其原有训练数据集为大量室内图像及人工合成雾霾图像,所以对自然浓雾及包含景深较大区域的复原效果较差.IGSK对含有噪声的含雾图像有较好的处理效果,但对没有噪声的图像效果并不理想,客观指标均不及其他方法且运算耗时长.MSW各指标良好,但远景浓雾区域的处理效果明显下降甚至会导致信息丢失.本文算法的复原图像与原图像相比各评价最优,图像内的有效细节信息最多,平均梯度及灰度图像对比度较高,参数FADE较低可说明图像的整体色调与原始场景相似度较高,结合实际复原图像可以发现对于城市中一些雾霾较重区域处理效果较好,得到图像色彩更加鲜艳.

6 Conclusion

This paper proposes a dehazing method for aerial building images that combines MCAP and GRTV regularization. The MCAP atmospheric light solution determines the global atmospheric light estimate, solving the problem that the global atmospheric light is easily affected by objects in the scene. The regional atmospheric light obtained from scene depth information is fused with the global atmospheric light to obtain a new atmospheric light map. Transmittance estimation and correction based on haze-line theory and GRTV regularization eliminate a large amount of useless texture and improve the precision of the transmittance estimate. Experimental comparison with mainstream methods under both subjective and objective evaluation shows that the proposed algorithm effectively improves the quality of restored images in regions of thick haze and abrupt depth change in UAV aerial scenes, with substantial gains in several objective metrics, which is of practical significance for improving the restoration quality of UAV aerial images in severe haze.

References

[1] KIM J Y, KIM L S, HWANG S H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2001, 11(4): 475-484.

[2] OAKLEY J P, SATHERLEY B L. Improving image quality in poor visibility conditions using a physical model for contrast degradation[J]. IEEE Transactions on Image Processing, 1998, 7(2): 167-179.

[3] TAN K, OAKLEY J P. Enhancement of color images in poor visibility conditions[C]// Proceedings 2000 International Conference on Image Processing. Vancouver, BC, Canada: IEEE, 2000: 788-791.

[4] NARASIMHAN S G, NAYAR S K. Contrast restoration of weather degraded images[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(6): 713-724.

[5] HE K M, SUN J, TANG X O. Single image haze removal using dark channel prior[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(12): 2341-2353.

[6] ZHU Q S, MAI J M, SHAO L. A fast single image haze removal algorithm using color attenuation prior[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3522-3533.

[7] CAI B L, XU X M, JIA K, et al. DehazeNet: An end-to-end system for single image haze removal[J]. IEEE Transactions on Image Processing, 2016, 25(11): 5187-5198.

[8] BERMAN D, TREIBITZ T, AVIDAN S. Non-local image dehazing[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016: 1674-1682.

[9] BERMAN D, TREIBITZ T, AVIDAN S. Single image dehazing using haze-lines[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(3): 720-734.

[10] XU L, YAN Q, XIA Y, et al. Structure extraction from texture via relative total variation[J]. ACM Transactions on Graphics, 2012, 31(6): 1-10.

[11] CHOI L K, YOU J, BOVIK A C. Referenceless prediction of perceptual fog density and perceptual image defogging[J]. IEEE Transactions on Image Processing, 2015, 24(11): 3888-3901.

[12] HUANG H, SONG J, GUO L, et al. Haze removal method based on a variation function and colour attenuation prior for UAV remote-sensing images[J]. Journal of Modern Optics, 2019, 66(12): 1282-1295.

[13] GALDRAN A, VAZQUEZ-CORRAL J, PARDO D, et al. Fusion-based variational image dehazing[J]. IEEE Signal Processing Letters, 2017, 24(2): 151-155.

[14] WANG Guiping, SONG Jing, DU Jingjing, et al. Haze defogging algorithm for traffic images based on improved gradient similarity kernel[J]. China Journal of Highway and Transport, 2018, 31(6): 264-271. (in Chinese)

[15] HUANG He, LI Xinrui, SONG Jing, et al. A traffic image dehaze method based on adaptive transmittance estimation with multi-scale window[J]. Chinese Optics, 2019, 12(6): 1311-1320. (in Chinese)

[16] DONG Yayun, BI Duyan, HE Linyuan, et al. Single image dehazing algorithm based on non-local prior[J]. Acta Optica Sinica, 2017, 37(11): 83-93. (in Chinese)

[17] HUANG He, HU Kaiyi, SONG Jing, et al. A twice optimization method for solving transmittance with haze-lines[J]. Journal of Xi’an Jiaotong University, 2021, 55(8): 130-138. (in Chinese)