Zero-Shot Image Recognition

Hong LAN, Zhiyu FANG

于家傲, 彭世蕤, 陈晓坤, 李有权. 六边形环复合吸波超材料性能的等效电路分析方法[J]. 电子与信息学报, 2018, 40(8): 1873-1878. doi: 10.11999/JEIT171103
Cite this article: 兰红, 方治屿. 零样本图像识别[J]. 电子与信息学报, 2020, 42(5): 1188-1200. doi: 10.11999/JEIT190485
Jiaao YU, Shirui PENG, Xiaokun CHEN, Youquan LI. Equivalent Circuit Method for Hexagonal Loop Composite Absorbing Material[J]. Journal of Electronics & Information Technology, 2018, 40(8): 1873-1878. doi: 10.11999/JEIT171103
Citation: Hong LAN, Zhiyu FANG. Recent Advances in Zero-Shot Learning[J]. Journal of Electronics & Information Technology, 2020, 42(5): 1188-1200. doi: 10.11999/JEIT190485

Zero-Shot Image Recognition

doi: 10.11999/JEIT190485
Funds: The National Natural Science Foundation of China (61762046), The Natural Science Foundation of Jiangxi Province (20161BAB212048)
Details
    Author biographies:

    Hong LAN: Female, born in 1969. Professor and master's supervisor. Her main research interests include computer vision, image processing, and pattern recognition

    Zhiyu FANG: Male, born in 1993. Master's student. His research interests include computer vision and deep learning

    Corresponding author:

    Hong LAN, lanhong69@163.com

  • CLC number: TN911.73; TP391.41

Recent Advances in Zero-Shot Learning

Funds: The National Natural Science Foundation of China (61762046), The Natural Science Foundation of Jiangxi Province (20161BAB212048)
  • Abstract:

    Deep learning has achieved outstanding results in artificial intelligence. In supervised recognition tasks, training deep learning algorithms on massive labeled data can reach unprecedented recognition accuracy. However, annotating massive data is expensive, and collecting large amounts of data for rare categories is difficult, so recognizing unseen classes that appear rarely or never during training remains a serious challenge. To address this problem, this paper reviews recent research on zero-shot image recognition and gives a comprehensive account of the field in terms of research background, model analysis, datasets, and experimental analysis. In addition, the paper analyzes the technical difficulties of current research, proposes some solutions to the mainstream problems, and discusses prospects for future work, providing a reference for beginners and researchers in zero-shot learning.

  • Composite Absorbing Metamaterials (CAM) based on lossy periodic elements obtain broadband absorption from the multi-resonance behavior produced by the element array and the metal backplane. Being light, thin, and easy to bend, such materials have broad application prospects as broadband radar absorbers in military stealth [1–4]. The main analysis methods for Frequency Selective Surfaces (FSS) and their composite absorbing materials include approximate approaches such as the variational method and the Equivalent Circuit Method (ECM), and full-wave approaches such as the Method of Moments (MOM), the Finite-Difference Time-Domain (FDTD) method, and the Finite Element Method (FEM) [5–8]. Full-wave methods provide accurate S-parameters, electric-field, and current distributions, but at a higher computational cost. Because full-wave simulation software is better suited to complex CAM structures and multilayer structures, extracting equivalent-circuit parameters by curve-fitting full-wave results has become the mainstream approach, but it requires a large amount of simulated or measured data [9–11]. The equivalent circuit method analyzes the characteristics of the CAM periodic structure and obtains its frequency response from an LC equivalent circuit model; it reflects the physical behavior of the periodic structure more intuitively and is fast and simple, which makes it an important reference for the theoretical design and performance analysis of FSS and their composite absorbing materials. The key to building the equivalent circuit model is the composition of the resonant circuit and the calculation of the L and C parameters.

    For lossy periodic-element materials, Refs. [12–14] studied broadband square-loop absorbing metamaterials and analyzed how the structural parameters affect RCS reduction. Refs. [15,16] designed absorbing metamaterials with hexagonal topologies that achieve multi-band absorption together with good polarization and incidence-angle stability. In this paper, an equivalent circuit model is established for a hexagonal-loop CAM with broadband absorption. Based on a Fourier analysis of the hexagonal lattice, an effective distribution period parameter is proposed, and an RLC parameter extraction method based only on the model dimensions is given. Compared with methods that extract RLC parameters from full-wave simulations or measurements, the proposed method needs no prior data and directly relates the lumped parameters to the physical dimensions of the CAM. Comparison with HFSS simulations verifies the applicability and accuracy of the ECM model for various hexagonal-loop CAM structural parameters. Finally, the validity of the model is further verified by fabricating and measuring a sample.

    The hexagonal-loop CAM achieves broadband absorption by placing a lossy layer at a certain distance above a metal plate; its structure is shown in Fig. 1. A chip resistor of resistance R is inserted in each side of the hexagonal metal loop, and the air-layer thickness is t2. The inscribed-circle radius of the hexagonal loop is d, the line width is w, the gap between loops is g, and the distribution period of the hexagonal loops, i.e. the distance between the centers of any two adjacent loops, is p. The hexagonal loops are printed on a thin dielectric substrate with relative permittivity εr and thickness t1.

    Fig. 1  Periodic structure of the resistor-loaded hexagonal-loop composite absorbing metamaterial

    The hexagonal-loop periodic structure has a single-resonance frequency-selective characteristic, so its transmission behavior can be modeled by a series RLC resonant circuit [1,8]; the reflective backing cavity is modeled as a short-circuited section of transmission line. The resulting equivalent circuit model of the hexagonal-loop CAM is shown in Fig. 2, where L is the equivalent inductance and C the equivalent capacitance (these lumped parameters vary with the frequency of the incident wave), Reff is the equivalent resistance, Y0 is the characteristic admittance of free space, and Yin is the input admittance seen at the FSS surface.

    Fig. 2  Equivalent circuit model of the hexagonal-loop composite absorbing metamaterial

    Ref. [7] gives formulas for the LC parameters of a square-loop frequency selective surface, but an ECM built with those formulas cannot accurately analyze the hexagonal-loop CAM. Since the hexagonal-loop lattice differs from the square-loop lattice, being denser and staggered, a 2D Fourier transform is used to compare the two lattice distributions.

    The distribution function of a square lattice with period psq is

    $n_{\rm sq}(x,y)=n_{\rm sq}(x+p_{\rm sq}u,\,y+p_{\rm sq}v)=\sum\limits_{u}\sum\limits_{v}\left[\delta(x-p_{\rm sq}u)+\delta(y-p_{\rm sq}v)\right]$ (1)

    where u and v are integers. The 2D Fourier series expansion of the distribution function nsq(x,y) is given by Eqs. (2) and (3), where s is the square area element of a single period.

    $n_{\rm sq}(x,y)=\dfrac{1}{p_{\rm sq}^{2}}\sum\limits_{u}\sum\limits_{v}c_{uv}\exp\left({\rm j}\dfrac{2\pi ux}{p}+{\rm j}\dfrac{2\pi vy}{p}\right)$ (2)
    $c_{uv}=\displaystyle\iint_{s}n_{\rm sq}(x,y)\exp\left(-{\rm j}\dfrac{2\pi ux}{p}-{\rm j}\dfrac{2\pi vy}{p}\right){\rm d}s$ (3)

    The lattice of the hexagonal-loop periodic structure is an equilateral-triangular lattice with period phex, whose distribution function is

    $n_{\rm hex}(x,y)=n_{\rm hex}\left(x+\dfrac{\sqrt{3}}{2}p_{\rm hex}u,\,y+\dfrac{1}{2}p_{\rm hex}v\right)=\sum\limits_{u}\sum\limits_{v}\left[\delta\left(x-\dfrac{\sqrt{3}}{2}p_{\rm hex}u\right)+\delta\left(y-\dfrac{1}{2}p_{\rm hex}v\right)\right]$ (4)

    where u, v, and n are integers with u + v = 2n. Its 2D Fourier series expansion is given by Eqs. (5) and (6), where s is the equilateral-triangular area element of a single period.

    $n_{\rm hex}(x,y)=\dfrac{4}{\sqrt{3}}\dfrac{1}{p_{\rm hex}^{2}}\sum\limits_{u}\sum\limits_{v}c_{uv}\exp\left({\rm j}\dfrac{2\pi ux}{\sqrt{3}p}+{\rm j}\dfrac{2\pi vy}{p}\right)$ (5)
    $c_{uv}=\displaystyle\iint_{s}n_{\rm hex}(x,y)\exp\left(-{\rm j}\dfrac{2\pi ux}{\sqrt{3}p}-{\rm j}\dfrac{2\pi vy}{p}\right){\rm d}s$ (6)

    According to Eqs. (2) and (5), the lattice distribution function can be written as the product of a period-dependent coefficient and a sum of periodic terms. Comparing the coefficients in front of the summations in Eqs. (2) and (5), an effective distribution period parameter peff is introduced so that the two coefficients take the same form; its value is defined by

    $\dfrac{1}{p_{\rm eff}^{2}}=\dfrac{4}{\sqrt{3}}\dfrac{1}{p_{\rm hex}^{2}}$ (7)
    $p_{\rm eff}=\dfrac{\sqrt[4]{3}}{2}p_{\rm hex}=\dfrac{\sqrt[4]{3}}{2}p$ (8)

    Under the same distribution period, psq = phex = p, the hexagonal-loop lattice therefore has a smaller effective distribution period than the square-loop lattice.
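    As a quick numerical check of Eqs. (7) and (8), the short Python sketch below (our own illustration; the function and variable names are not from the paper) evaluates the effective distribution period for the period p = 25.0 mm used in the example design later in the text.

```python
def effective_period(p_hex: float) -> float:
    """Effective distribution period of the hexagonal lattice, Eq. (8):
    p_eff = (3**0.25 / 2) * p_hex, i.e. 1/p_eff**2 = (4/sqrt(3)) / p_hex**2 (Eq. (7))."""
    return (3 ** 0.25) / 2 * p_hex

p = 25.0e-3                              # hexagonal-loop distribution period (m)
p_eff = effective_period(p)
print(f"p_eff = {p_eff * 1e3:.2f} mm")   # about 16.45 mm, smaller than the square-lattice period p
```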

    The distributed electrical parameters excited on the square-loop and hexagonal-loop CAMs by a normally incident electromagnetic wave were simulated. The surface current and the corresponding distributed inductance are shown in Fig. 3(a), and the electric field and the corresponding distributed capacitance in Fig. 3(b), where E is the electric-field direction of the incident wave. Because the periodic element is centrosymmetric, the distributed parameters excited by incident waves with different electric-field directions are numerically similar, so the field direction chosen in the simulation does not lose generality. It can be seen that the incident wave produces a more complex parameter distribution on the hexagonal metal sides: when the electric-field component EL of the incident wave is parallel to the metal line, an equivalent inductance L is produced on the hexagonal periodic structure; when the component EC is perpendicular to the metal line, an equivalent capacitance C is produced.

    Fig. 3  Distributed electrical parameters of the hexagonal-loop CAM

    Because of the greater coupling between elements, the distributed capacitance of the hexagonal-loop periodic structure is more complex. By introducing the effective distribution period parameter peff, a calculation method for the RLC parameters suited to the hexagonal-loop CAM is obtained:

    $L=\dfrac{1}{Y_{0}\omega}\dfrac{d}{p_{\rm eff}}F(p_{\rm eff},w,\lambda)$ (9)
    $C=\dfrac{Y_{0}}{\omega}\dfrac{8d}{p_{\rm eff}}\varepsilon_{\rm eff}F(p_{\rm eff},g,\lambda)$ (10)
    $R_{\rm eff}=R\dfrac{p_{\rm eff}}{d}$ (11)

    where

    $F(p,w,\lambda)=\dfrac{p}{\lambda}\cos\theta\left[\ln\left({\rm cosec}\dfrac{\pi w}{2p}\right)+G(p,w,\lambda)\right]$ (12)

    The equivalent permittivity of the thin dielectric layer is [7]

    $\varepsilon_{\rm eff}=\varepsilon_{r}+(\varepsilon_{r}-1)\left[-\dfrac{1}{\exp(Nx)}\right]$ (13)

    Considering the influence of the metal-loop periodic structure, x = 0.5 and N = 1.8 are used. f0 is the frequency of the incident wave, θ is the incidence angle, ω = 2πf0, and λ = c/f0. G(p,w,λ) is a correction term that can be neglected when p/λ ≪ 1 [8].

    $Y_{\rm CAM}=\left({\rm j}\omega L+\dfrac{1}{{\rm j}\omega C}+R_{\rm eff}\right)^{-1}$ (14)
    $Y_{\rm d}=\dfrac{{\rm j}Y_{1}\left[Y_{1}\tan(\beta_{2}t_{2})-Y_{0}\cot(\beta_{1}t_{1})\right]}{Y_{1}+Y_{0}\cot(\beta_{1}t_{1})\tan(\beta_{2}t_{2})}$ (15)
    $Y_{\rm in}=Y_{\rm CAM}+Y_{\rm d}$ (16)
    $|\Gamma|^{2}_{\rm dB}=20\lg\left|\dfrac{Y_{0}-Y_{\rm in}}{Y_{0}+Y_{\rm in}}\right|$ (17)

    where $Y_{1}=Y_{0}\sqrt{\varepsilon_{r}}$, $\beta_{1}=2\pi/\lambda$, and $\beta_{2}=2\pi\sqrt{\varepsilon_{r}}/\lambda$.

    Fig. 4  Simulated imaginary parts of Yd and YCAM

    Yin 的虚部为0时,复合吸波超材料具有谐振特性,其中 Yd 的虚部和 YCAM 的虚部随频率变化曲线如图4所示,通过调整周期单元的尺寸参数可实现对 YCAM 的虚部的控制[12],采用参数优化方法使其值随频率递减并与 Yd 的虚部相匹配,即可得到宽带特性的吸波超材料。

    An infinite-periodic full-wave model based on Floquet ports was built in HFSS for comparison. The main dimensional parameters of the hexagonal-loop CAM are p = 25.0 mm, d = 11.5 mm, w = 0.5 mm, εr = 4.4, t1 = 0.5 mm, t2 = 20.0 mm, and R = 200 Ω. Parametric analyses were then carried out on the hexagonal-loop period, the air-layer thickness t2, and the lumped resistance R, and the absorption performance of the CAM was calculated for the different dimensional parameters.

    For hexagonal-loop periods p = 24.0 mm, 26.0 mm, and 28.0 mm, the reflection coefficients from the equivalent circuit model are compared with HFSS simulations in Fig. 5; the equivalent capacitances and inductances in the ECM are listed in Table 1. The equivalent circuit model correctly gives the resonance behavior and the −10 dB absorption band of this type of absorber, with a maximum reflection-coefficient error of less than 0.05 within the absorption band. As p increases, the absorption band moves to lower frequencies and becomes narrower, in agreement with the HFSS results. In the equivalent calculation of Eq. (12), for p/λ > 0.1 the error of the computed equivalent LC parameters grows with frequency and the correction provided by G(p,w,λ) becomes less effective. The model is therefore most accurate below 4 GHz, i.e. for 0.1 < p/λ < 0.4. For p/λ > 0.4 a certain error relative to the full-wave simulation remains, but the model still captures how the absorption characteristics change and retains good applicability.

    Fig. 5  Influence of the hexagonal-loop period p on the reflection coefficient
    Table 1  Equivalent capacitance and inductance in the ECM for different hexagonal-loop periods p
    p (mm)  Equivalent capacitance (pF)  Equivalent inductance (μH)
    24.0    0.50    6.6
    26.0    0.28    6.8
    28.0    0.19    7.0

    For air-layer thicknesses t2 = 18.0 mm, 20.0 mm, and 22.0 mm, the reflection coefficients from the equivalent circuit model are compared with HFSS simulations in Fig. 6. The equivalent capacitance and inductance of the upper periodic structure do not change with t2; their values are C = 0.36 pF and L = 6.7 μH. The equivalent circuit model agrees with the HFSS results, and the absorption band shifts to lower frequencies as t2 increases.

    Fig. 6  Influence of the air-layer thickness t2 on the reflection coefficient

    For lumped resistances R = 170 Ω, 210 Ω, and 250 Ω, the reflection coefficients from the equivalent circuit model are compared with HFSS simulations in Fig. 7. The equivalent capacitance and inductance of the periodic structure in the ECM do not change with R, and the equivalent circuit model agrees with the HFSS results. The equivalent-resistance calculation neglects the parasitic capacitance across the inserted resistors; at higher frequencies this parasitic capacitance grows, so the model error increases above 4 GHz. As R decreases, the absorption band becomes wider while the absorption performance decreases.

    Fig. 7  Influence of the lumped resistance R on the reflection coefficient

    A physical sample was fabricated according to the simulated parameters, as shown in Fig. 8. A finite-periodic hexagonal-loop structure was printed on a 300 mm × 286 mm FR4 substrate, with 15 staggered elements along the x axis and 13 uniformly distributed elements along the y axis. Following GJB 2038A-2011, the reflection coefficient of the absorber was obtained by measuring, under normal incidence, the ratio of the RCS of the CAM to that of a metal plate of the same size; in the measurement the electric field of the incident TE wave was along the x axis. The simulated and measured results are compared in Fig. 9. The measurements agree well with the equivalent-circuit calculation, and the material achieves good broadband absorption in the 1.7–5.7 GHz band; the differences between simulation and measurement are probably caused by edge effects due to the truncation of the periodic structure.

    Fig. 8  Sample of the hexagonal-loop composite absorbing metamaterial
    Fig. 9  Comparison of simulated and measured results

    This paper has proposed an equivalent-circuit analysis method for the absorption performance of hexagonal-loop composite absorbing metamaterials. Based on an analysis of the lattice of the hexagonal-loop CAM, an effective distribution period parameter peff was introduced, an RLC parameter extraction method based on the model dimensions was given, and the corresponding equivalent circuit model was established. The model can predict the absorption performance of hexagonal-loop CAMs with various dimensional parameters and agrees with HFSS simulations, which is useful for the design and optimization of broadband radar absorbing materials. The validity of the model was further verified by fabricating and measuring a sample, and a broadband radar absorbing material with good absorption in the 1.7–5.7 GHz band was finally realized.

  • Fig.  1  Structure of zero-shot learning techniques

    Fig.  2  Illustration of zero-shot learning

    Fig.  3  Illustration of the classic inductive zero-shot model [7]

    Fig.  4  AwA class-attribute relation matrix [7]

    Fig.  5  Three kinds of visual-semantic mappings

    Fig.  6  Example of domain shift [55]

    Fig.  7  Example of the semantic gap

    Table  1  Comparison of machine learning methods

    | Method | Training set {X, Y} | Test set {X, Z} | Relation R between training classes Y and test classes Z | Final classifier C |
    | Unsupervised learning | a large amount of unlabeled images | images of seen classes | Y = Z | C: X → Y |
    | Supervised learning | a large amount of labeled images | images of seen classes | Y = Z | C: X → Y |
    | Semi-supervised learning | a small amount of labeled images and a large amount of unlabeled images | images of seen classes | Y = Z | C: X → Y |
    | Few-shot learning | very few labeled images and a large amount of unlabeled images | images of seen classes | Y = Z | C: X → Y |
    | Zero-shot learning | a large amount of labeled images | images of unseen classes | Y ∩ Z = ∅ | C: X → Z |
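    The last row of Table 1 says that a zero-shot classifier must map images directly to unseen classes, C: X → Z. As a purely illustrative sketch (not the method of any particular model surveyed here), one common realization projects a visual feature into a semantic attribute space and picks the nearest unseen-class attribute prototype:

```python
import numpy as np

# Hypothetical attribute prototypes for two unseen classes Z
# (columns could be attributes such as "striped", "hooved", "lives in water").
unseen_prototypes = {
    "zebra":   np.array([1.0, 1.0, 0.0]),
    "dolphin": np.array([0.0, 0.0, 1.0]),
}

def predict_unseen(visual_feature: np.ndarray, W: np.ndarray) -> str:
    """Project the visual feature into the semantic space with a learned mapping W,
    then return the nearest unseen-class prototype (C: X -> Z)."""
    semantic = W @ visual_feature
    return min(unseen_prototypes, key=lambda c: np.linalg.norm(semantic - unseen_prototypes[c]))

# Toy usage with a random 3x4 projection standing in for a trained visual-semantic mapping.
W = np.random.default_rng(0).normal(size=(3, 4))
print(predict_unseen(np.ones(4), W))
```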

    Table  2  Usage of deep convolutional neural networks in zero-shot learning papers

    | Network | Number of papers |
    | VGG | 501 |
    | GoogleNet | 271 |
    | ResNet | 397 |

    Table  3  Performance comparison of zero-shot learning (%)
    (columns 2–7: conventional zero-shot learning, SS/PS splits; columns 8–16: generalized zero-shot learning, U→T / S→T / H; "–" = not reported)

    | Method | AwA SS | AwA PS | CUB SS | CUB PS | SUN SS | SUN PS | AwA U→T | AwA S→T | AwA H | CUB U→T | CUB S→T | CUB H | SUN U→T | SUN S→T | SUN H |
    | IAP | 46.9 | 35.9 | 27.1 | 24.0 | 17.4 | 19.4 | 0.9 | 87.6 | 1.8 | 0.2 | 72.8 | 0.4 | 1.0 | 37.8 | 1.8 |
    | DAP | 58.7 | 46.1 | 37.5 | 40.0 | 38.9 | 39.9 | 0.0 | 84.7 | 0.0 | 1.7 | 67.9 | 3.3 | 4.2 | 25.1 | 7.2 |
    | DeViSE | 68.6 | 59.7 | 53.2 | 52.0 | 57.5 | 56.5 | 17.1 | 74.7 | 27.8 | 23.8 | 53.0 | 32.8 | 16.9 | 27.4 | 20.9 |
    | ConSE | 67.9 | 44.5 | 36.7 | 34.3 | 44.2 | 38.8 | 0.5 | 90.6 | 1.0 | 1.6 | 72.2 | 3.1 | 6.8 | 39.9 | 11.6 |
    | SJE | 69.5 | 61.9 | 55.3 | 53.9 | 57.1 | 53.7 | 8.0 | 73.9 | 14.4 | 23.5 | 59.2 | 33.6 | 14.7 | 30.5 | 19.8 |
    | SAE | 80.7 | 54.1 | 33.4 | 33.3 | 42.4 | 40.3 | 1.1 | 82.2 | 2.2 | 7.8 | 54.0 | 13.6 | 8.8 | 18.0 | 11.8 |
    | SYNC | 71.2 | 46.6 | 54.1 | 55.6 | 59.1 | 56.3 | 10.0 | 90.5 | 18.0 | 11.5 | 70.9 | 19.8 | 7.9 | 43.3 | 13.4 |
    | LDF | 83.4 | 70.4 | – | – | – | – | – | – | – | – | – | – | – | – | – |
    | SP-AEN | – | 58.5 | – | 55.4 | – | 59.2 | 23.3 | 90.9 | 37.1 | 34.7 | 70.6 | 46.6 | 24.9 | 38.6 | 30.3 |
    | QFSL | 84.8 | 79.7 | 69.7 | 72.1 | 61.7 | 58.3 | 66.2 | 93.1 | 77.4 | 71.5 | 74.9 | 73.2 | 51.3 | 31.2 | 38.8 |
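    In the generalized zero-shot columns, U→T and S→T denote accuracy on unseen and seen classes when the search space contains all classes, and H is their harmonic mean, the usual summary metric. A one-line check against the table, e.g. the DAP row on CUB:

```python
def harmonic_mean(u: float, s: float) -> float:
    """H = 2*u*s / (u + s), harmonic mean of unseen (U->T) and seen (S->T) accuracy."""
    return 2 * u * s / (u + s)

print(round(harmonic_mean(1.7, 67.9), 1))   # 3.3, matching DAP's H on CUB above
```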
  • SUN Yi, CHEN Yuheng, WANG Xiaogang, et al. Deep learning face representation by joint identification-verification[C]. The 27th International Conference on Neural Information Processing Systems, Montreal, Canada, 2014: 1988–1996.
    LIU Chenxi, ZOPH B, NEUMANN M, et al. Progressive neural architecture search[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 19–35.
    LEDIG C, THEIS L, HUSZÁR F, et al. Photo-realistic single image super-resolution using a generative adversarial network[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 105–114.
    BIEDERMAN I. Recognition-by-components: A theory of human image understanding[J]. Psychological Review, 1987, 94(2): 115–147. doi: 10.1037/0033-295X.94.2.115
    LAROCHELLE H, ERHAN D, and BENGIO Y. Zero-data learning of new tasks[C]. The 23rd National Conference on Artificial Intelligence, Chicago, USA, 2008: 646–651.
    PALATUCCI M, POMERLEAU D, HINTON G, et al. Zero-shot learning with semantic output codes[C]. The 22nd International Conference on Neural Information Processing Systems, Vancouver, Canada, 2009: 1410–1418.
    LAMPERT C H, NICKISCH H, and HARMELING S. Learning to detect unseen object classes by between-class attribute transfer[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009: 951–958. doi: 10.1109/CVPR.2009.5206594.
    HARRINGTON P. Machine Learning in Action[M]. Greenwich, CT, USA: Manning Publications Co, 2012: 5–14.
    ZHOU Dengyong, BOUSQUET O, LAL T N, et al. Learning with local and global consistency[C]. The 16th International Conference on Neural Information Processing Systems, Whistler, Canada, 2003: 321–328.
    刘建伟, 刘媛, 罗雄麟. 半监督学习方法[J]. 计算机学报, 2015, 38(8): 1592–1617. doi: 10.11897/SP.J.1016.2015.01592

    LIU Jianwei, LIU Yuan, and LUO Xionglin. Semi-supervised learning methods[J]. Chinese Journal of Computers, 2015, 38(8): 1592–1617. doi: 10.11897/SP.J.1016.2015.01592
    SUNG F, YANG Yongxin, LI Zhang, et al. Learning to compare: Relation network for few-shot learning[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 1199–1208.
    FU Yanwei, XIANG Tao, JIANG Yugang, et al. Recent advances in zero-shot recognition: Toward data-efficient understanding of visual content[J]. IEEE Signal Processing Magazine, 2018, 35(1): 112–125. doi: 10.1109/MSP.2017.2763441
    XIAN Yongqin, LAMPERT C H, SCHIELE B, et al. Zero-shot learning—A comprehensive evaluation of the good, the bad and the ugly[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 41(9): 2251–2265. doi: 10.1109/TPAMI.2018.2857768
    WANG Wenlin, PU Yunchen, VERMA V K, et al. Zero-shot learning via class-conditioned deep generative models[C]. The 32nd AAAI Conference on Artificial Intelligence, New Orleans, USA, 2018: 4211–4218.
    FU Yanwei, HOSPEDALES T M, XIANG Tao, et al. Attribute learning for understanding unstructured social activity[C]. The 12th European Conference on Computer Vision, Florence, Italy, 2012: 530–543.
    ANTOL S, ZITNICK C L, and PARIKH D. Zero-shot learning via visual abstraction[C]. The 13th European Conference on Computer Vision, Zurich, Switzerland, 2014: 401–416.
    ROBYNS P, MARIN E, LAMOTTE W, et al. Physical-layer fingerprinting of LoRa devices using supervised and zero-shot learning[C]. The 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks, Boston, USA, 2017: 58–63. doi: 10.1145/3098243.3098267.
    YANG Yang, LUO Yadan, CHEN Weilun, et al. Zero-shot hashing via transferring supervised knowledge[C]. The 24th ACM international conference on Multimedia, Amsterdam, The Netherlands, 2016: 1286–1295. doi: 10.1145/2964284.2964319.
    PACHORI S, DESHPANDE A, and RAMAN S. Hashing in the zero shot framework with domain adaptation[J]. Neurocomputing, 2018, 275: 2137–2149. doi: 10.1016/j.neucom.2017.10.061
    LIU Jingen, KUIPERS B, and SAVARESE S. Recognizing human actions by attributes[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Colorado, USA, 2011: 3337–3344.
    FU Yanwei, HOSPEDALES T M, XIANG Tao, et al. Learning multimodal latent attributes[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(2): 303–316. doi: 10.1109/TPAMI.2013.128
    JAIN M, VAN GEMERT J C, MENSINK T, et al. Objects2action: Classifying and localizing actions without any video example[C]. The IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 4588–4596.
    XU Baohan, FU Yanwei, JIANG Yugang, et al. Video emotion recognition with transferred deep feature encodings[C]. The 2016 ACM on International Conference on Multimedia Retrieval, New York, USA, 2016: 15–22.
    JOHNSON M, SCHUSTER M, LE Q V, et al. Google’s multilingual neural machine translation system: Enabling zero-shot translation[J]. Transactions of the Association for Computational Linguistics, 2017, 5: 339–351. doi: 10.1162/tacl_a_00065
    PRATEEK VEERANNA S, JINSEOK N, ENELDO L M, et al. Using semantic similarity for multi-label zero-shot classification of text documents[C]. The 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 2016: 423–428.
    DALAL N and TRIGGS B. Histograms of oriented gradients for human detection[C]. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, USA, 2005: 886–893.
    LOWE D G. Distinctive image features from scale-invariant keypoints[J]. International Journal of Computer Vision, 2004, 60(2): 91–110. doi: 10.1023/B:VISI.0000029664.99615.94
    BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF)[J]. Computer Vision and Image Understanding, 2008, 110(3): 346–359. doi: 10.1016/j.cviu.2007.09.014
    ROMERA-PAREDES B and TORR P H S. An embarrassingly simple approach to zero-shot learning[C]. The 32nd International Conference on International Conference on Machine Learning, Lille, France, 2015: 2152–2161.
    ZHANG Li, XIANG Tao, and GONG Shaogang. Learning a deep embedding model for zero-shot learning[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 3010–3019.
    LI Yan, ZHANG Junge, ZHANG Jianguo, et al. Discriminative learning of latent features for zero-shot recognition[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7463–7471.
    WANG Xiaolong, YE Yufei, and GUPTA A. Zero-shot recognition via semantic embeddings and knowledge graphs[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 6857–6866.
    WAH C, BRANSON S, WELINDER P, et al. The caltech-UCSD birds-200-2011 dataset[R]. Technical Report CNS-TR-2010-001, 2011.
    MIKOLOV T, SUTSKEVER I, CHEN Kai, et al. Distributed representations of words and phrases and their compositionality[C]. The 26th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2013: 3111–3119.
    LEE C, FANG Wei, YEH C K, et al. Multi-label zero-shot learning with structured knowledge graphs[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 1576–1585.
    JETLEY S, ROMERA-PAREDES B, JAYASUMANA S, et al. Prototypical priors: From improving classification to zero-shot learning[J]. arXiv preprint arXiv: 1512.01192, 2015.
    KARESSLI N, AKATA Z, SCHIELE B, et al. Gaze embeddings for zero-shot image classification[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6412–6421.
    REED S, AKATA Z, LEE H, et al. Learning deep representations of fine-grained visual descriptions[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 49–58.
    ELHOSEINY M, ZHU Yizhe, ZHANG Han, et al. Link the head to the "beak": Zero shot learning from noisy text description at part precision[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 6288–6297. doi: 10.1109/CVPR.2017.666.
    LAZARIDOU A, DINU G, and BARONI M. Hubness and pollution: Delving into cross-space mapping for zero-shot learning[C]. The 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, China, 2015: 270–280.
    WANG Xiaoyang and JI Qiang. A unified probabilistic approach modeling relationships between attributes and objects[C]. The IEEE International Conference on Computer Vision, Sydney, Australia, 2013: 2120–2127.
    AKATA Z, PERRONNIN F, HARCHAOUI Z, et al. Label-embedding for attribute-based classification[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Portland, USA, 2013: 819–826.
    JURIE F, BUCHER M, and HERBIN S. Generating visual representations for zero-shot classification[C]. The IEEE International Conference on Computer Vision Workshops, Venice, Italy, 2017: 2666–2673.
    FARHADI A, ENDRES I, HOIEM D, et al. Describing objects by their attributes[C]. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009: 1778–1785. doi: 10.1109/CVPR.2009.5206772.
    PATTERSON G, XU Chen, SU Hang, et al. The sun attribute database: Beyond categories for deeper scene understanding[J]. International Journal of Computer Vision, 2014, 108(1/2): 59–81.
    XIAO Jianxiong, HAYS J, EHINGER K A, et al. Sun database: Large-scale scene recognition from abbey to zoo[C]. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010: 3485–3492. doi: 10.1109/CVPR.2010.5539970.
    NILSBACK M E and ZISSERMAN A. Delving deeper into the whorl of flower segmentation[J]. Image and Vision Computing, 2010, 28(6): 1049–1062. doi: 10.1016/j.imavis.2009.10.001
    NILSBACK M E and ZISSERMAN A. A visual vocabulary for flower classification[C]. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, USA, 2006: 1447–1454. doi: 10.1109/CVPR.2006.42.
    NILSBACK M E and ZISSERMAN A. Automated flower classification over a large number of classes[C]. The 6th Indian Conference on Computer Vision, Graphics & Image Processing, Bhubaneswar, India, 2008: 722–729. doi: 10.1109/ICVGIP.2008.47.
    KHOSLA A, JAYADEVAPRAKASH N, YAO Bangpeng, et al. Novel dataset for fine-grained image categorization: Stanford dogs[C]. CVPR Workshop on Fine-Grained Visual Categorization, 2011.
    DENG Jia, DONG Wei, SOCHER R, et al. ImageNet: A large-scale hierarchical image database[C]. 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, USA, 2009: 248–255.
    CHAO Weilun, CHANGPINYO S, GONG Boqing, et al. An empirical study and analysis of generalized zero-shot learning for object recognition in the wild[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 52–68.
    SONG Jie, SHEN Chengchao, YANG Yezhou, et al. Transductive unbiased embedding for zero-shot learning[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 1024–1033.
    李亚南. 零样本学习关键技术研究[D]. [博士论文], 浙江大学, 2018: 40–43.

    LI Yanan. Research on key technologies for zero-shot learning[D]. [Ph.D. dissertation], Zhejiang University, 2018: 40–43
    FU Yanwei, HOSPEDALES T M, XIANG Tao, et al. Transductive multi-view zero-shot learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(11): 2332–2345. doi: 10.1109/TPAMI.2015.2408354
    KODIROV E, XIANG Tao, and GONG Shaogang. Semantic autoencoder for zero-shot learning[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 4447–4456.
    STOCK M, PAHIKKALA T, AIROLA A, et al. A comparative study of pairwise learning methods based on kernel ridge regression[J]. Neural Computation, 2018, 30(8): 2245–2283. doi: 10.1162/neco_a_01096
    ANNADANI Y and BISWAS S. Preserving semantic relations for zero-shot learning[J]. arXiv preprint arXiv: 1803.03049, 2018.
    LI Yanan, WANG Donghui, HU Huanhang, et al. Zero-shot recognition using dual visual-semantic mapping paths[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 5207–5215.
    CHEN Long, ZHANG Hanwang, XIAO Jun, et al. Zero-shot visual recognition using semantics-preserving adversarial embedding networks[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 1043–1052.

Publication history
  • Received: 2019-07-01
  • Revised: 2019-11-03
  • Available online: 2019-11-13
  • Published: 2020-06-04
