
Artificial Intelligence Algorithms Based on Data-driven and Knowledge-guided Models

JIN Zhe, ZHANG Yin, WU Fei, ZHU Wenwu, PAN Yunhe

Lingyun ZHOU, Lixin DING, Maode MA, Wan TANG. Orthogonal Opposition Based Firefly Algorithm[J]. Journal of Electronics & Information Technology, 2019, 41(1): 202-209. doi: 10.11999/JEIT180187
Citation: JIN Zhe, ZHANG Yin, WU Fei, ZHU Wenwu, PAN Yunhe. Artificial Intelligence Algorithms Based on Data-driven and Knowledge-guided Models[J]. Journal of Electronics & Information Technology, 2023, 45(7): 2580-2594. doi: 10.11999/JEIT220700


doi: 10.11999/JEIT220700
Funds: China Knowledge Centre for Engineering Sciences and Technology Project (CKCEST-2021-1-8), The National Natural Science Foundation of China (62037001)
Details
    About the authors:

    JIN Zhe: male, Ph.D. candidate; research interests: natural language processing and knowledge graphs

    ZHANG Yin: female, Associate Professor; research interests: data mining and knowledge engineering

    WU Fei: male, Professor; research interests: artificial intelligence and multimedia analysis

    ZHU Wenwu: male, Professor; research interests: multimedia network computing and big data intelligence

    PAN Yunhe: male, Professor; research interests: artificial intelligence, computer graphics, and intelligent CAD

    Corresponding author:

    ZHANG Yin, yinzh@zju.edu.cn

  • CLC number: TN911; TP391

  • Abstract: Current artificial intelligence relies mainly on data-driven learning, and machine learning algorithms built around deep neural networks have made remarkable progress. Yet such data-driven AI still suffers from high data-acquisition costs, weak interpretability, and limited robustness. This paper argues that introducing knowledge such as prior assumptions, logic rules, and governing equations into existing machine learning algorithms, so as to build AI methods jointly driven by data and knowledge, will push the transformation toward a more general computing paradigm. The paper groups the knowledge that can guide AI models into four kinds, namely logic knowledge, visual knowledge, knowledge of physical laws, and causal knowledge, and discusses representative approaches for combining each of them with existing data-driven models.
    The Firefly Algorithm (FA) is a swarm intelligence optimization algorithm that simulates the attraction behaviour of fireflies[1]. On problems such as feature extraction and clustering it outperforms genetic algorithms, particle swarm optimization, and other methods[2–5]. FA has therefore been applied to a growing number of important optimization problems in engineering and science, such as networking and image processing[5]. As its range of applications widens, the expectations placed on its performance also rise.

    To improve its performance, researchers have proposed many FA variants. The first line of work is parameter control: Yu et al.[6] proposed a dynamic step-size adjustment strategy; Haji et al.[3] investigated the optimal combination of several parameters such as the step size and the attractiveness; Wang et al.[7] analysed the step size and attractiveness theoretically and proposed an adaptive adjustment strategy based on the characteristics of the parameters themselves. Parameter control helps, but its benefit is very limited on complex optimization problems. A second line studies different attraction models. Wang et al.[8] proposed a random attraction model that selects one firefly at random from the swarm; Verma et al.[9] constructed, dimension by dimension, a globally optimal virtual firefly; Zhang et al.[4] selected one firefly according to the maximum return-to-cost ratio; the current firefly then moves under the attraction of the selected one. These attraction models reduce the time complexity of the algorithm, but other techniques are still needed to improve convergence accuracy. A third line hybridizes FA with other techniques: Gandomi et al.[10] introduced chaos into FA to strengthen its global search ability; Hassanzadeh et al.[2] combined FA with fuzzy logic so that the current firefly is attracted by the fireflies in a fuzzy set, reinforcing the influence of high-quality fireflies on the rest of the swarm. Hybrid approaches combine the strengths of different techniques and tend to work well; the present work belongs to this category.

    In recent years, many researchers have combined Opposition-Based Learning (OBL)[11] with swarm intelligence algorithms such as differential evolution and particle swarm optimization, with significant performance gains[12,13]. Some results on combining OBL with FA also exist. Representative work includes Verma et al.[9], who used OBL during population initialization to obtain a better initial population, and Yu et al.[14], who used OBL to compute the opposite solution of the worst firefly in each iteration. These methods exploit the rich information carried by opposite individuals and improve convergence accuracy, but they take the opposite value in every dimension when computing an opposite individual. Park et al.[15] pointed out that, for a given individual, the opposite value is not better than the original value in every dimension. When applying OBL to differential evolution, they performed a binomial crossover between the individual and its opposite to obtain two candidate solutions that take opposite values only in some dimensions, thereby preserving part of the information of both and further strengthening the search. However, although these methods preserve part of the information in the individual and its opposite, they cannot obtain the best combination of the useful information in the two. The key to further improving performance is therefore how to discover and preserve useful combinations of the information carried by an individual and its opposite.

    To discover and fully exploit the useful information hidden in individuals and their opposites, this paper combines orthogonal experimental design with opposition-based learning and devises an Orthogonal Opposition-Based Learning (OOBL) strategy: orthogonal experimental design is used to generate a set of candidate solutions that take opposite values only in some dimensions, so as to mine and preserve the useful information in the individual and its opposite. OOBL is then embedded into FA, yielding an Orthogonal Opposition-based Firefly Algorithm (OOFA).

    In FA, each firefly represents a feasible solution and is randomly distributed in the search space of the objective function. For a D-dimensional search space with population size N, let the position of the i-th firefly be $X_i = (x_{i1}, x_{i2}, \cdots, x_{iD})$. Its position update equation is defined as[1]

    $$x_i^{t+1} = x_i^t + \beta_0 e^{-\gamma r_{ij}^2}\left(x_j^t - x_i^t\right) + \alpha^t \varepsilon^t \tag{1}$$

    In Eq. (1), $x_i^{t+1}$ is the position of firefly i at time t+1. The second term on the right-hand side is the displacement of firefly i caused by its attraction to firefly j, where $\beta_0$ is the attractiveness at the light source (distance r = 0), usually set to 1; $r_{ij}$ is the distance between fireflies i and j, computed by Eq. (2); and $\gamma$ is the light absorption coefficient of the medium, usually set to 1.

    $$r_{ij} = \left\| x_i - x_j \right\|_2 = \sqrt{\sum_{k=1}^{D}\left(x_{ik} - x_{jk}\right)^2} \tag{2}$$

    The third term on the right-hand side of Eq. (1) is a random term, where $\varepsilon^t$ is a random factor drawn from a uniform distribution at time t and $\alpha^t \in [0, 1]$ is the step size at time t. To better balance exploration and exploitation, Yang[16] proposed an iteratively decreasing $\alpha$, computed by Eq. (3),

    $$\alpha^t = \alpha^{t-1}\delta, \quad 0 < \delta < 1 \tag{3}$$

    where $\delta$ is a cooling coefficient.
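    For concreteness, a minimal NumPy sketch of the move defined by Eqs. (1)–(3) is given below. It is an illustration only: the names `firefly_move` and `cool_step`, and the centred uniform noise for $\varepsilon^t$, are our assumptions rather than part of the original description.

```python
import numpy as np

def firefly_move(x_i, x_j, alpha, beta0=1.0, gamma=1.0, rng=None):
    """One attraction step of Eq. (1): firefly i moves toward the brighter firefly j."""
    rng = rng or np.random.default_rng()
    r2 = np.sum((x_i - x_j) ** 2)                  # squared distance r_ij^2 from Eq. (2)
    beta = beta0 * np.exp(-gamma * r2)             # attractiveness decays with distance
    eps = rng.uniform(-0.5, 0.5, size=x_i.shape)   # uniform random factor (centred here)
    return x_i + beta * (x_j - x_i) + alpha * eps

def cool_step(alpha, delta=0.97):
    """Eq. (3): geometric cooling of the step size, with 0 < delta < 1."""
    return alpha * delta
```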

    OBL is an intelligence technique proposed by Tizhoosh[11]. Its idea is to evaluate the current point and its opposite point simultaneously and keep the better one, thereby accelerating the search. The basic definition of OBL is given in Definition 1.

    Definition 1  Let x be a real number on the interval [a, b] of the real line, i.e. x ∈ [a, b]. The opposite point of x is defined as in Eq. (4)[11]:

    $$x^{*} = a + b - x \tag{4}$$

    Building on OBL, Rahnamayan et al.[17] proposed Centroid Opposition (CO) to make fuller use of the population's search information, computing opposite points with the population centroid as the reference point. The centroid and the centroid-based opposite point are defined as follows.

    Definition 2  Let $(X_1, X_2, \cdots, X_N)$ be N points with unit mass in a D-dimensional search space. The centroid of the whole body is defined as

    $$G_j = \frac{1}{N}\sum_{i=1}^{N} x_{ij}, \quad j = 1, 2, \cdots, D \tag{5}$$

    Definition 3  If a discrete uniform body has centroid G, then the opposite point of a point $X_i$ of that body is defined as

    $$X_i^{*} = 2 \times G - X_i, \quad i = 1, 2, \cdots, N \tag{6}$$

    The opposite point lies in a search space with dynamic boundaries $[a_j, b_j]$, computed by Eq. (7). When the opposite point exceeds a boundary, it is recomputed by Eq. (8), where rand(0,1) is a random number on [0, 1].

    $$a_j = \min(X_j), \quad b_j = \max(X_j) \tag{7}$$

    $$x_{ij}^{*} = \begin{cases} a_j + \mathrm{rand}(0,1)\times (G_j - a_j), & x_{ij}^{*} < a_j \\ G_j + \mathrm{rand}(0,1)\times (b_j - G_j), & x_{ij}^{*} > b_j \end{cases} \tag{8}$$
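    A short NumPy sketch of Eqs. (5)–(8) follows; the function name `centroid_opposition` and the array layout (one row per individual) are our own choices, not the paper's.

```python
import numpy as np

def centroid_opposition(X, i, rng=None):
    """Centroid-based opposite point of individual X[i], Eqs. (5)-(8); a sketch."""
    rng = rng or np.random.default_rng()
    G = X.mean(axis=0)                        # population centroid, Eq. (5)
    ox = 2.0 * G - X[i]                       # opposite point, Eq. (6)
    a, b = X.min(axis=0), X.max(axis=0)       # dynamic boundaries per dimension, Eq. (7)
    low, high = ox < a, ox > b
    # Eq. (8): re-sample out-of-bound dimensions between the violated boundary and the centroid
    ox[low] = a[low] + rng.random(low.sum()) * (G[low] - a[low])
    ox[high] = G[high] + rng.random(high.sum()) * (b[high] - G[high])
    return ox
```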

    Orthogonal experimental design uses orthogonal arrays to find the best combination of the levels of several factors with a small number of trials[18]. For example, a two-level, seven-factor experiment would require $2^7 = 128$ trials to test all combinations, but with the orthogonal array $L_8(2^7)$ shown in Eq. (9) only 8 trials are needed to identify the best combination, a large reduction in the number of trials.

    $$L_8(2^7) = \begin{bmatrix} 1&1&1&1&1&1&1 \\ 1&1&1&2&2&2&2 \\ 1&2&2&1&1&2&2 \\ 1&2&2&2&2&1&1 \\ 2&1&2&1&2&1&2 \\ 2&1&2&2&1&2&1 \\ 2&2&1&1&2&2&1 \\ 2&2&1&2&1&1&2 \end{bmatrix} \tag{9}$$
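    The paper generates its two-level orthogonal arrays with the algorithm in the appendix of Ref. [19], which is not reproduced here. As a stand-in, the sketch below uses the standard XOR (Hadamard-type) construction of $L_M(2^{M-1})$; for 7 factors it yields an array equivalent to Eq. (9) up to column ordering, and its first row is always all 1s.

```python
import numpy as np

def two_level_oa(n_factors):
    """Two-level orthogonal array with M = 2^ceil(log2(n_factors+1)) rows (Eq. (10)).

    Standard XOR construction of L_M(2^(M-1)); surplus columns are dropped.
    A stand-in sketch only; the paper uses the generation algorithm of Ref. [19].
    """
    k = int(np.ceil(np.log2(n_factors + 1)))
    M = 2 ** k                                   # number of rows
    rows = np.arange(M)[:, None]                 # run indices 0..M-1
    cols = np.arange(1, M)[None, :]              # non-zero column labels 1..M-1
    bits = rows & cols                           # bitwise AND of run index and column label
    parity = np.zeros_like(bits)
    for _ in range(k):                           # parity of the k low bits (popcount mod 2)
        parity ^= bits & 1
        bits >>= 1
    return parity[:, :n_factors] + 1             # map {0, 1} to the levels {1, 2}

print(two_level_oa(7))   # 8x7 array equivalent to L8(2^7) in Eq. (9) up to column order
```

    The all-ones first row matters below: it reproduces the original individual, so that trial solution never needs to be re-evaluated.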

    In the OOBL strategy, the dimensions of the problem correspond to the factors of an orthogonal experiment, and the values of the individual and of its opposite in each dimension serve as the two levels of each factor, so a two-level, D-factor orthogonal experiment can be carried out. When constructing a trial solution, a dimension takes the individual's value where the orthogonal-array entry is 1 and the opposite individual's value where the entry is 2. To illustrate the construction, Fig. 1 shows the trial solutions built with $L_8(2^7)$ for a 7-dimensional problem. The detailed steps of OOBL are given in Algorithm 1 (Table 1).

    Table 1  Algorithm 1: The OOBL strategy
     Input: population X, the index ind of one individual, and the orthogonal array L
     Output: new population X
     Steps:
     (1) Compute the centroid G of the current population by Eq. (5);
     (2) Compute the opposite individual ox of the specified individual by Eq. (6);
     (3) Update the population boundaries by Eq. (7); apply the boundary check of Eq. (8) to ox;
     (4) for i = 1 : number of rows M of L
     (5)   for j = 1 : problem dimension D
     (6)     if L(i, j) == 1
     (7)       oox(i, j) = X(ind, j);
     (8)     else
     (9)       oox(i, j) = ox(j);
     (10)    end if
     (11)  end for
     (12) end for
     (13) Evaluate the orthogonal opposition candidate solutions; evaluation count FEs = FEs + M – 1;
     (14) Select the N individuals with the best fitness from X and the orthogonal opposition candidate solutions.
    Fig. 1  Construction of the trial solutions

    The key step of OOBL is the construction of the trial solutions, i.e. steps (4) to (12). One individual yields M trial solutions through the orthogonal experiment, where M is also the number of rows of the orthogonal array, computed by Eq. (10).

    $$M = 2^{\lceil \log_2 (D+1) \rceil} \tag{10}$$

    By the structure of the orthogonal array, the first trial solution is identical to the individual and need not be evaluated. The remaining M – 1 trial solutions take the individual's value in some dimensions and the opposite individual's value in the others; these are called orthogonal opposition candidate solutions. They are representative combinations of the information carried by the individual and its opposite in different dimensions and must be evaluated, so one execution of the OOBL strategy costs M – 1 function evaluations. The N best individuals are then selected from the population and the orthogonal opposition candidates to form the new population. In this way, useful combinations of the information in the individual and its opposite are fully explored, and more of that information is retained in the population.
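    Putting the pieces together, here is a compact sketch of Algorithm 1 in NumPy. It reuses `centroid_opposition` and `two_level_oa` from the earlier sketches; `f` is the objective to minimise, and all names are ours rather than the paper's.

```python
import numpy as np

def oobl_step(X, fit, i, L, f, rng=None):
    """One application of the OOBL strategy (Algorithm 1) to individual i; a sketch.

    X : (N, D) population, fit : (N,) fitness values (minimisation),
    L : (M, D) two-level orthogonal array whose first row is all 1s, f : objective.
    """
    rng = rng or np.random.default_rng()
    ox = centroid_opposition(X, i, rng)           # steps (1)-(3): opposite individual
    trials = np.where(L == 1, X[i], ox)           # steps (4)-(12): level 1 -> x_i, level 2 -> ox
    cand = trials[1:]                             # first trial duplicates x_i, skip evaluating it
    cand_fit = np.array([f(c) for c in cand])     # step (13): costs M-1 evaluations
    pool_x = np.vstack([X, cand])                 # step (14): keep the N best of old + candidates
    pool_f = np.concatenate([fit, cand_fit])
    keep = np.argsort(pool_f)[:len(X)]
    return pool_x[keep], pool_f[keep], len(cand)
```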

    This section embeds the proposed orthogonal centroid opposition-based learning strategy OOBL into FA, yielding an orthogonal opposition-based firefly algorithm, OOFA for short, given in Algorithm 2 (Table 2).

    Table 2  Algorithm 2: The OOFA algorithm
     Input: objective function;
     Output: global best position and its fitness.
     Steps:
     (1) Randomly initialize a population X of N individuals;
     (2) Evaluate the initial population f(X); current number of function evaluations FEs = N;
     (3) Sort the population by fitness;
     (4) Generate a two-level, D-factor orthogonal array L according to the problem dimension D;
     (5) while the termination condition is not met
     (6)   for i = 1 : N
     (7)     for j = 1 : i
     (8)       move the i-th individual toward the j-th individual according to Eqs. (1) and (2);
     (9)     end for
     (10)  end for
     (11)  Apply the boundary check to the population;
     (12)  Evaluate the population; FEs = FEs + N;
     (13)  Randomly select one individual from the population and apply OOBL to it;
     (14)  Update the step-size factor according to Eq. (3);
     (15) end while

    In OOFA, the two-level, D-factor orthogonal array is generated with the algorithm given in the appendix of Ref. [19]. OOFA keeps the basic framework and main operations of FA; the only addition is that, after the population has moved, one randomly selected individual undergoes OOBL. Let the problem dimension be D, the population size N, and the maximum number of iterations T. The main operations of OOFA are the firefly moves in steps (6) to (10) and the OOBL in step (13); the former has time complexity O(N²D) and the latter O(D²). Both lie inside the iteration loop, so the total time complexity is O(TN²D) + O(TD²); dropping the lower-order term gives O(TN²D). OOFA thus has the same time complexity as FA, and the improvement adds no excessive computational overhead.
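    The overall loop can be sketched as follows, reusing the helpers sketched earlier (`firefly_move`, `cool_step`, `two_level_oa`, `oobl_step`). It is a minimal skeleton under our own naming and bound-handling assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def oofa(f, D, bounds, N=30, max_fes=100_000, alpha=0.5, delta=0.97, rng=None):
    """Skeleton of OOFA (Algorithm 2); minimises f over [lo, hi]^D. A sketch only."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    X = lo + rng.random((N, D)) * (hi - lo)            # step (1): random initial population
    fit = np.array([f(x) for x in X]); fes = N         # step (2): evaluate, FEs = N
    L = two_level_oa(D)                                # step (4): 2-level, D-factor array
    while fes < max_fes:                               # step (5): evaluation budget
        order = np.argsort(fit)                        # step (3), repeated each generation so
        X, fit = X[order], fit[order]                  #   that j < i indexes a brighter firefly
        for i in range(N):                             # steps (6)-(10): attraction moves
            for j in range(i):
                X[i] = firefly_move(X[i], X[j], alpha, rng=rng)
        X = np.clip(X, lo, hi)                         # step (11): boundary check
        fit = np.array([f(x) for x in X]); fes += N    # step (12): re-evaluate the population
        ind = int(rng.integers(N))                     # step (13): OOBL on one random individual
        X, fit, extra = oobl_step(X, fit, ind, L, f, rng)
        fes += extra
        alpha = cool_step(alpha, delta)                # step (14): shrink the step size, Eq. (3)
    best = int(np.argmin(fit))
    return X[best], fit[best]
```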

    In the early iterations, the opposition computations in OOFA introduce, for different individuals and in different dimensions, the opposite regions of the search space; this expands the set of potentially promising regions and strengthens global search. As the iterations proceed, the opposite search space keeps shrinking, and OOFA then uses the orthogonal experimental design to explore locally the small region around an individual and its opposite, which improves the algorithm's local exploitation ability.

    To evaluate OOFA comprehensively and objectively, the CEC 2013 test suite is used. It contains 28 functions, including unimodal, multimodal, and composition functions; it is larger and more complex than traditional test suites and better represents a wide range of real optimization problems (see Ref. [20] for the definitions). The experiments were run on a computer with an Intel Core(TM)2 Duo CPU E4600 @ 2.40 GHz, 2 GB of RAM, and Windows 8 Professional. All algorithms were simulated in Matlab 2012. The maximum number of function evaluations was set to 10^5, the population size to N = 30, and the problem dimension to D = 30; each algorithm was run independently 25 times on each function.
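    As a small illustration of this protocol (25 independent runs per function, recording the mean and standard deviation of the final error), one way to drive such an experiment is sketched below; `algorithm` and `problems` are placeholders of our own, not the paper's actual test harness.

```python
import numpy as np

def benchmark(algorithm, problems, runs=25, max_fes=100_000, seed=0):
    """Report Mean and SD of the final error over independent runs; illustrative only.

    `algorithm(f, D, bounds, max_fes=..., rng=...)` is assumed to return (best_x, best_f);
    `problems` is a list of (f, D, bounds, f_opt) tuples with known optima f_opt.
    """
    for k, (f, D, bounds, f_opt) in enumerate(problems, start=1):
        errors = []
        for r in range(runs):
            rng = np.random.default_rng(seed + r)           # independent run r
            _, best_f = algorithm(f, D, bounds, max_fes=max_fes, rng=rng)
            errors.append(best_f - f_opt)                   # error w.r.t. the known optimum
        print(f"f{k}: Mean = {np.mean(errors):.2E}, SD = {np.std(errors):.2E}")
```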

    To verify the effectiveness of the OOBL strategy, OOFA and FA are compared in terms of convergence accuracy, stability, convergence speed, and running time. For a fair comparison, the parameters of both algorithms are set, following the ranges given by Yang[16], to β0 = 1, α0 = 0.5, γ = 1, δ = 0.97. When comparing convergence accuracy and stability, the termination condition is reaching the maximum number of function evaluations, and the mean (Mean) and standard deviation (SD) of the difference between the best solution found and the function's optimum are recorded. When comparing convergence speed and running time, the termination condition is reaching a given accuracy or the maximum number of function evaluations, and the average number of function evaluations (FEs) and the average running time (T, in seconds) are recorded. Because OOFA converges more accurately than FA, the target accuracy for each function is set to the average accuracy that FA can reach on that function. To analyse the difference in convergence speed further, the ratio of FA's to OOFA's average number of function evaluations on each function, i.e. the speedup ratio R, is also computed. The results are given in Table 3.

    Table 3  Comparison of FA and OOFA

    Function | FA Mean | FA SD | FA FEs | FA T (s) | OOFA Mean | OOFA SD | OOFA FEs | OOFA T (s) | Speedup R
    f1 | 5.16E+04 | 6.35E+03 | 80232 | 3.85 | 1.70E–10 | 8.44E–10 | 1091 | 0.03 | 73.54
    f2 | 7.30E+08 | 2.15E+08 | 72314 | 3.82 | 3.39E+07 | 1.04E+07 | 1704 | 0.06 | 42.44
    f3 | 4.74E+16 | 1.43E+17 | 16662 | 0.94 | 1.61E+09 | 8.63E+08 | 616 | 0.02 | 27.05
    f4 | 9.36E+04 | 1.29E+04 | 37973 | 1.94 | 7.25E+04 | 1.22E+04 | 5669 | 0.17 | 6.70
    f5 | 1.58E+04 | 3.90E+03 | 44544 | 2.25 | 1.06E+02 | 3.32E+01 | 1421 | 0.04 | 31.35
    f6 | 8.71E+03 | 2.01E+03 | 36463 | 1.85 | 6.90E+01 | 2.47E+01 | 904 | 0.03 | 40.34
    f7 | 1.08E+05 | 1.13E+05 | 32322 | 2.11 | 4.11E+01 | 1.12E+01 | 1082 | 0.05 | 29.87
    f8 | 2.10E+01 | 5.29E–02 | 82741 | 4.87 | 2.10E+01 | 6.09E–02 | 85977 | 3.23 | 0.96
    f9 | 4.28E+01 | 1.42E+00 | 64643 | 18.07 | 2.14E+01 | 4.18E+00 | 4178 | 1.08 | 15.47
    f10 | 6.79E+03 | 1.12E+03 | 72373 | 3.95 | 1.67E+01 | 1.19E+01 | 1094 | 0.04 | 66.15
    f11 | 8.38E+02 | 9.77E+01 | 52407 | 2.85 | 1.13E+01 | 3.35E+00 | 989 | 0.03 | 52.99
    f12 | 8.43E+02 | 9.43E+01 | 56517 | 3.51 | 1.05E+02 | 3.38E+01 | 1133 | 0.05 | 49.88
    f13 | 8.17E+02 | 9.91E+01 | 60595 | 3.89 | 8.64E+01 | 3.04E+01 | 1304 | 0.06 | 46.47
    f14 | 8.10E+03 | 3.23E+02 | 80418 | 4.48 | 1.55E+03 | 5.45E+02 | 5359 | 0.19 | 15.01
    f15 | 8.07E+03 | 3.59E+02 | 72859 | 4.24 | 4.41E+03 | 7.63E+02 | 5598 | 0.21 | 13.02
    f16 | 3.19E+00 | 5.04E–01 | 57793 | 11.01 | 1.79E+00 | 5.38E–01 | 10998 | 1.86 | 5.25
    f17 | 1.40E+03 | 1.91E+02 | 56622 | 2.95 | 4.09E+01 | 3.16E+00 | 1955 | 0.06 | 28.96
    f18 | 1.44E+03 | 1.68E+02 | 52612 | 3.01 | 1.06E+02 | 2.98E+01 | 1484 | 0.05 | 35.45
    f19 | 1.04E+06 | 4.38E+05 | 52422 | 2.75 | 3.34E+00 | 7.17E–01 | 1230 | 0.04 | 42.62
    f20 | 1.50E+01 | 1.43E–05 | 8888 | 0.51 | 1.50E+01 | 3.14E–05 | 962 | 0.04 | 9.24
    f21 | 3.57E+03 | 1.80E+02 | 64395 | 5.24 | 2.92E+02 | 2.75E+01 | 1901 | 0.12 | 33.87
    f22 | 8.87E+03 | 3.07E+02 | 61069 | 4.75 | 1.91E+03 | 1.25E+03 | 4629 | 0.27 | 13.19
    f23 | 9.05E+03 | 4.02E+02 | 49134 | 4.21 | 5.61E+03 | 1.21E+03 | 4437 | 0.29 | 11.07
    f24 | 4.15E+02 | 2.78E+01 | 32295 | 10.01 | 2.17E+02 | 6.37E+00 | 652 | 0.19 | 49.53
    f25 | 4.09E+02 | 1.66E+01 | 44404 | 13.74 | 2.70E+02 | 1.38E+01 | 728 | 0.21 | 60.99
    f26 | 3.08E+02 | 4.83E+01 | 64359 | 20.94 | 2.95E+02 | 2.03E+01 | 41667 | 12.63 | 1.54
    f27 | 1.64E+03 | 5.93E+01 | 32762 | 10.50 | 4.51E+02 | 6.27E+01 | 955 | 0.29 | 34.31
    f28 | 6.25E+03 | 5.82E+02 | 48551 | 4.94 | 3.00E+02 | 6.06E–03 | 1499 | 0.12 | 32.39

    Analysis of Table 3 shows that OOFA obtains the same result as FA only on functions f8 and f20 and achieves better accuracy on the other 26 functions; on several functions such as f1, f3, and f19 its accuracy exceeds FA's by multiple orders of magnitude, indicating that OOBL helps to improve convergence accuracy. In terms of the standard deviation, FA is slightly better than OOFA on f8, f9, f14, f15, f16, f20, f22, f23, and f27, but OOFA is better on the other 19 functions, indicating that the OOBL strategy makes the algorithm more stable. In short, although the OOBL strategy spends some function evaluations, the gain outweighs the cost: it aggregates the useful information in individuals and their opposites, accelerates the search, and effectively improves convergence accuracy, so that under the same number of function evaluations OOFA converges noticeably more accurately than FA.

    In terms of the average number of function evaluations, OOFA needs fewer evaluations than FA to reach the same accuracy on all 27 functions except f8. OOFA's average running time is also shorter than FA's, mainly because OOFA needs fewer function evaluations to reach the given accuracy. On f8, OOFA needs slightly more function evaluations than FA on average, yet its average running time is still shorter. From the complexity analysis of OOFA in Section 3.2, the firefly move operation has a higher time complexity than OOBL, and FA executes more move operations than OOFA does; therefore, on f8, OOFA uses about as many function evaluations as FA but still runs faster. The speedup ratio is greater than 1 on all 27 functions except f8, with a maximum of 73.54. OOFA therefore both converges faster and runs faster.

    Fig. 2 shows the convergence curves from one randomly chosen run of each algorithm, which represent their typical convergence behaviour. Owing to space limitations, only the curves on two unimodal functions (f1 and f4) and two multimodal functions (f9 and f12) are shown. As Fig. 2 shows, in the early stage of a run both FA's and OOFA's curves drop sharply, meaning that solution accuracy improves rapidly during this phase. Compared with FA's curve, OOFA's drops more steeply, indicating faster convergence. In the later stage, both curves gradually level off; the value at which OOFA's curve settles is smaller than FA's, indicating that OOFA converges to higher accuracy.

    Fig. 2  Convergence curves of FA and OOFA on functions f1, f4, f9, and f12

    In summary, the analysis of Table 3 and Fig. 2 demonstrates the effectiveness and applicability of OOBL within FA.

    To verify the performance of OOFA, several recent and representative FA variants are selected for comparison: MFA[21], VSSFA[6], OFA[12], RaFA[8], and ODFA[9]. For a fair comparison, the parameters shared by all algorithms are set as in Section 4.2, and the remaining parameters follow the original papers, i.e. m = 20 for MFA and p = 0.25 for OFA. The termination condition is reaching the maximum number of function evaluations. Table 4 reports the mean (Mean) and standard deviation (SD) of the difference between the best solution found by each algorithm and the function's optimum. For a statistical assessment, the Wilcoxon rank-sum test[22] (significance level 0.05) is used to determine whether the results of each algorithm differ significantly from those of OOFA on each function; the symbols "–", "+", and "≈" indicate that an algorithm is worse than, better than, and comparable to OOFA, respectively. The P-value reflects the significance of the performance difference between OOFA and each algorithm. Finally, the Friedman test[22] is used to rank the algorithms. These three results are recorded in the last three rows of Table 4.
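    For readers who want to reproduce this kind of statistical comparison, the snippet below is an illustration with made-up numbers (not the data of Table 4): it uses scipy.stats.ranksums for the Wilcoxon rank-sum test and scipy.stats.friedmanchisquare plus mean ranks for the Friedman test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical final errors of two algorithms on one function (25 independent runs each)
err_fa   = rng.lognormal(mean=2.0, sigma=0.3, size=25)
err_oofa = rng.lognormal(mean=0.5, sigma=0.3, size=25)

# Wilcoxon rank-sum test at the 0.05 significance level
_, p = stats.ranksums(err_fa, err_oofa)
print(f"rank-sum p-value = {p:.2e} -> {'significantly different' if p < 0.05 else 'comparable'}")

# Friedman test across algorithms: one row per function, one column per algorithm
errors = rng.random((28, 6))                      # placeholder 28-function x 6-algorithm errors
chi2, p_f = stats.friedmanchisquare(*errors.T)
print(f"Friedman chi2 = {chi2:.2f}, p-value = {p_f:.2e}")
print("mean Friedman ranks:", stats.rankdata(errors, axis=1).mean(axis=0))
```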

    Table 4  Comparison of the FA variants (Mean ± SD)

    Function | MFA | VSSFA | OFA | RaFA | ODFA | OOFA
    f1 | 3.58E+04±5.83E+03 | 3.45E+04±4.31E+03 | 2.42E+04±5.28E+03 | 4.03E+02±7.35E+02 | 1.32E+04±5.63E+03 | 1.70E–10±8.44E–10
    f2 | 5.03E+08±1.67E+08 | 3.73E+08±7.44E+07 | 3.34E+08±1.65E+08 | 3.71E+07±2.17E+07 | 2.19E+08±8.17E+07 | 3.39E+07±1.04E+07
    f3 | 1.47E+15±2.87E+15 | 1.68E+13±2.15E+13 | 2.01E+14±5.77E+14 | 2.62E+10±1.55E+10 | 6.04E+14±2.98E+15 | 1.61E+09±8.63E+08
    f4 | 9.27E+04±1.08E+04 | 8.04E+04±8.81E+03 | 8.68E+04±1.27E+04 | 1.04E+05±3.33E+04 | 8.21E+04±2.27E+04 | 7.25E+04±1.22E+04
    f5 | 9.85E+03±4.47E+03 | 8.56E+03±1.64E+03 | 5.70E+03±1.94E+03 | 4.99E+02±1.01E+03 | 1.14E+03±6.20E+02 | 1.06E+02±3.32E+01
    f6 | 5.84E+03±1.72E+03 | 4.49E+03±5.99E+02 | 3.48E+03±1.19E+03 | 1.71E+02±7.17E+01 | 1.64E+03±1.07E+03 | 6.90E+01±2.47E+01
    f7 | 2.52E+04±2.64E+04 | 2.61E+03±1.28E+03 | 6.42E+03±1.09E+04 | 3.03E+04±4.31E+04 | 2.40E+04±5.81E+04 | 4.11E+01±1.12E+01
    f8 | 2.10E+01±6.57E–02 | 2.10E+01±5.85E–02 | 2.10E+01±6.54E–02 | 2.11E+01±5.93E–02 | 2.10E+01±5.39E–02 | 2.10E+01±6.09E–02
    f9 | 4.16E+01±1.81E+00 | 4.05E+01±9.64E–01 | 3.83E+01±3.17E+00 | 3.96E+01±2.48E+00 | 3.72E+01±3.35E+00 | 2.14E+01±4.18E+00
    f10 | 5.14E+03±1.10E+03 | 4.59E+03±5.23E+02 | 3.29E+03±8.44E+02 | 1.49E+02±1.23E+02 | 1.97E+03±9.37E+02 | 1.67E+01±1.19E+01
    f11 | 6.54E+02±1.11E+02 | 6.50E+02±4.33E+01 | 4.60E+02±9.00E+01 | 1.52E+02±5.41E+01 | 4.65E+02±9.63E+01 | 1.13E+01±3.35E+00
    f12 | 7.40E+02±8.81E+01 | 6.52E+02±4.22E+01 | 5.13E+02±9.01E+01 | 8.69E+02±1.54E+02 | 6.73E+02±1.39E+02 | 1.05E+02±3.38E+01
    f13 | 7.42E+02±9.14E+01 | 6.50E+02±5.49E+01 | 5.48E+02±7.61E+01 | 8.81E+02±1.31E+02 | 6.86E+02±9.57E+01 | 8.64E+01±3.04E+01
    f14 | 7.44E+03±4.96E+02 | 7.66E+03±2.56E+02 | 5.83E+03±6.32E+02 | 1.62E+03±3.60E+02 | 7.27E+03±6.99E+02 | 1.55E+03±5.45E+02
    f15 | 7.56E+03±5.92E+02 | 7.65E+03±3.13E+02 | 5.72E+03±7.14E+02 | 4.89E+03±1.01E+03 | 7.26E+03±4.44E+02 | 4.41E+03±7.63E+02
    f16 | 2.70E+00±4.78E–01 | 2.76E+00±3.05E–01 | 1.47E+00±4.45E–01 | 1.95E+00±5.61E–01 | 2.46E+00±4.73E–01 | 1.79E+00±5.38E–01
    f17 | 1.26E+03±1.41E+02 | 1.18E+03±8.23E+01 | 6.52E+02±1.04E+02 | 9.87E+01±4.59E+01 | 8.98E+02±1.68E+02 | 4.09E+01±3.16E+00
    f18 | 1.28E+03±2.00E+02 | 1.16E+03±9.17E+01 | 7.18E+02±1.52E+02 | 1.55E+03±2.47E+02 | 1.03E+03±1.92E+02 | 1.06E+02±2.98E+01
    f19 | 2.75E+05±2.01E+05 | 2.39E+05±6.70E+04 | 5.00E+04±3.86E+04 | 7.96E+01±1.68E+02 | 2.68E+04±5.41E+04 | 3.34E+00±7.17E–01
    f20 | 1.50E+01±3.72E–02 | 1.50E+01±2.29E–02 | 1.50E+01±6.32E–08 | 1.49E+01±1.78E–01 | 1.37E+01±8.18E–01 | 1.50E+01±3.14E–05
    f21 | 3.03E+03±2.98E+02 | 3.10E+03±1.43E+02 | 2.47E+03±1.64E+02 | 4.57E+02±1.71E+02 | 2.16E+03±3.59E+02 | 2.92E+02±2.75E+01
    f22 | 8.43E+03±4.74E+02 | 8.29E+03±2.74E+02 | 6.77E+03±1.01E+03 | 2.07E+03±5.70E+02 | 7.63E+03±9.75E+02 | 1.91E+03±1.25E+03
    f23 | 8.16E+03±4.96E+02 | 8.28E+03±2.43E+02 | 6.82E+03±8.01E+02 | 6.38E+03±9.46E+02 | 7.58E+03±6.23E+02 | 5.61E+03±1.21E+03
    f24 | 3.83E+02±2.26E+01 | 3.57E+02±8.70E+00 | 3.45E+02±1.59E+01 | 3.69E+02±2.90E+01 | 3.48E+02±1.61E+01 | 2.17E+02±6.37E+00
    f25 | 4.02E+02±1.31E+01 | 3.76E+02±6.32E+00 | 3.85E+02±1.81E+01 | 3.94E+02±1.78E+01 | 3.74E+02±1.73E+01 | 2.70E+02±1.38E+01
    f26 | 2.73E+02±4.82E+01 | 2.50E+02±1.53E+01 | 2.62E+02±5.77E+01 | 3.31E+02±1.08E+02 | 3.04E+02±9.55E+01 | 2.95E+02±2.03E+01
    f27 | 1.56E+03±6.16E+01 | 1.47E+03±3.89E+01 | 1.41E+03±4.78E+01 | 1.60E+03±1.07E+02 | 1.46E+03±9.06E+01 | 4.51E+02±6.27E+01
    f28 | 5.60E+03±5.24E+02 | 4.96E+03±2.87E+02 | 4.56E+03±5.53E+02 | 6.00E+03±1.08E+03 | 4.45E+03±6.04E+02 | 3.00E+02±6.06E–03
    –/≈/+ | 25/2/1 | 25/2/1 | 24/2/2 | 27/0/1 | 26/1/1 |
    P-value | 1.18E–5 | 1.18E–5 | 1.33E–5 | 4.46E–6 | 7.03E–6 |
    Friedman | 5.30 | 4.30 | 3.13 | 3.61 | 3.32 | 1.34

    Table 4 shows that OOFA outperforms the compared algorithms on most functions. All P-values in Table 4 are below 0.05, indicating that the performance differences between OOFA and the compared algorithms are highly significant, and the Friedman ranking places OOFA first. Compared with VSSFA, OOFA performs clearly better, which suggests that OOBL contributes more to performance than the parameter control used in VSSFA. MFA and RaFA both adopt elitist strategies: in every iteration, MFA moves the brightest firefly along the best of several random directions, whereas RaFA applies Cauchy mutation to the brightest firefly. In OOFA, by contrast, every firefly has a chance to undergo OOBL and can thus explore a different opposite region, so OOFA has stronger global search ability than MFA and RaFA. OFA and ODFA also adopt opposition-based learning. OFA outperforms OOFA only on f16 and f26: it probabilistically replaces the worst firefly with either its opposite or the best individual, which helps the swarm escape local optima, but it exploits only the opposite region of the worst individual and never explores the opposite regions of better individuals; moreover, the opposite of the worst individual is not necessarily worse in every dimension. The opposite space explored by OFA is therefore very limited, and the useful information in the worst individual and its opposite is not fully aggregated. ODFA outperforms OOFA only on f20: it uses opposition-based learning only to obtain a better initial population, and during the iterations it combines, dimension by dimension, the favourable information of different individuals without exploiting the favourable information in opposite individuals. OOFA is better than OFA on 24 functions and better than ODFA on 26. This shows that the OOBL strategy in OOFA not only fully explores the opposite space through opposition-based learning, but also combines and preserves, through orthogonal experimental design, the useful information in individuals and their opposites. In addition, OFA and ODFA use the original OBL; our earlier work[23] found that OBL tends to bias the search toward the coordinate axes and often performs poorly on shifted functions, whereas OOFA uses CO, which has no such coordinate bias, and therefore converges better on the complex CEC 2013 test functions.

    Based on the above experimental results and statistical analysis, OOFA clearly outperforms the other compared algorithms on the CEC 2013 test suite.

    This paper constructed an orthogonal opposition-based learning strategy, OOBL, from opposition-based learning and orthogonal experimental design, and applied it to FA to obtain an improved firefly algorithm, OOFA. The main feature of OOFA is that it uses orthogonal experimental design to fully mine and preserve the useful information in individuals and their opposites, thereby improving performance. OOFA keeps the basic framework of FA and adds no extra parameters; OOBL is applied only to one randomly selected individual of the swarm. The advantages of OOFA are analysed theoretically, and the experiments and statistical analysis demonstrate the effectiveness of the OOBL strategy and the strong performance of OOFA. Future work will introduce the OOBL strategy into other swarm intelligence algorithms and apply it to problems such as routing optimization in named data networking.

  • Fig. 1  Four kinds of knowledge that can be combined with data-driven machine learning models

    Fig. 2  Common patterns of learning algorithms jointly driven by data and knowledge

    Table 1  Typical knowledge-guided methods

    Knowledge | Method | Approach | Examples
    Logic knowledge | Knowledge graph representation | Represent the entities and relations of a knowledge graph as vectors. | Bordes et al.[35], Lin et al.[117], Dettmers et al.[118]
    Logic knowledge | Constraints | Use knowledge as constraints on the optimization. | Hu et al.[39], Chen et al.[40]
    Visual knowledge | Visual knowledge extraction and application | Build a closed-loop learning mechanism based on visual knowledge. | Wu et al.[65]
    Visual knowledge | Visual knowledge extraction and application | Build a visual-knowledge dictionary of target concepts. | Pu et al.[62]
    Knowledge of scientific laws | Solving partial differential equations | In fluid mechanics, combine the incompressible Navier–Stokes equations with the neural network loss function. | Raissi et al.[76], Jin et al.[119]
    Knowledge of scientific laws | Solving partial differential equations | In biomedicine, use PINNs to solve cardiac activation mapping and functions relating cardiovascular arterial pressure. | Sahli Costabal et al.[78], Kissas et al.[120]
    Knowledge of scientific laws | Solving partial differential equations | In materials science, use PINNs for frequency-domain Maxwell equations and metamaterial design, and for geometry identification in continuum solid mechanics. | Fang et al.[79], Zhang et al.[80]
    Knowledge of scientific laws | Solving partial differential equations | In power engineering, use PINNs to solve the swing equation of power system dynamics. | Misyris et al.[121]
    Knowledge of scientific laws | Solving partial differential equations | Solve with neural operators and deep operator networks. | Li et al.[81], Lu et al.[82]
    Knowledge of scientific laws | Solving combinatorial optimization problems | Train differentiable loss functions of graph neural networks based on spin Hamiltonians, binary unconstrained optimization, and similar formulations. | Schuetz et al.[122]
    Knowledge of scientific laws | Prior knowledge and constraints | In protein structure prediction, incorporate prior protein structures and physical constraints on amino-acid chains. | Jumper et al.[85], Baek et al.[87], Humphreys et al.[88]
    Knowledge of scientific laws | Prior knowledge and constraints | In retrosynthetic analysis, build transformation rule sets from the atoms and bonds that change during chemical reactions, for subsequent Monte Carlo tree search. | Segler et al.[123]
    Knowledge of scientific laws | Prior knowledge and constraints | In crystal structure prediction, build models relating crystal structure to enthalpy of formation. | Cheng et al.[124]
    Knowledge of scientific laws | Prior knowledge and constraints | In land surface temperature retrieval, use the radiative transfer equation as the mathematical basis of the retrieval mechanism. | Wang et al.[125]
    Knowledge of scientific laws | Prior knowledge and constraints | In chest X-ray examination, improve deep learning models with reasoning algorithms driven by knowledge from X-ray reports. | Jadhav et al.[126]
    Causal knowledge | Introducing causal relations | Attempt to introduce causal relations into machine learning models. | Kuang et al.[45], Kuang et al.[46]
  • [1] PAN Yunhe. Heading toward artificial intelligence 2.0[J]. Engineering, 2016, 2(4): 409–413. doi: 10.1016/J.ENG.2016.04.018
    [2] LI Guojie. Domestic AI research cannot be broken through and implemented. It is time to think about it[EB/OL]. https://ysg.ckcest.cn/ysgNews/1738635.html, 2021.
    [3] PEARL J. Radical empiricism and machine learning research[J]. Journal of Causal Inference, 2021, 9(1): 78–82. doi: 10.1515/jci-2021-0006
    [4] ZHANG Ningyu, JIA Qianghuai, DENG Shumin, et al. AliCG: Fine-grained and evolvable conceptual graph construction for semantic search at alibaba[C]. The 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 2021: 3895–3905.
    [5] LUO Xusheng, BO Le, WU Jinhang, et al. AliCoCo2: Commonsense knowledge extraction, representation and application in E-commerce[C]. The 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 2021: 3385–3393.
    [6] FAN Yourong, YANG Tao, KONG Huafeng, et al. Calling features mining method of telecom fraud based on knowledge graph[J]. Computer Applications and Software, 2019, 36(11): 182–187. doi: 10.3969/j.issn.1000-386x.2019.11.030
    [7] XU Zhenliang and LIU Ximei. The knowledge mapping analysis of telecommunication fraud[J]. Journal of China Criminal Police University, 2017(3): 50–56. doi: 10.14060/j.issn.2095-7939.2017.03.007
    [8] ANDERSON J R. Cognitive psychology[J]. Artificial Intelligence, 1984, 23(1): 1–11. doi: 10.1016/0004-3702(84)90002-X
    [9] PAN Yunhe. Research on image information model in image thinking[J]. Pattern Recognition and Artificial Intelligence, 1991, 4(4): 7–14.
    [10] PAN Yunhe. The synthesis reasoning[J]. Pattern Recognition and Artificial Intelligence, 1996, 9(3): 201–208.
    [11] PAN Yunhe. On visual knowledge[J]. Frontiers of Information Technology & Electronic Engineering, 2019, 20(8): 1021–1025. doi: 10.1631/FITEE.1910001
    [12] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. The 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1097–1105.
    [13] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[C]. 3rd International Conference on Learning Representations, San Diego, USA, 2015.
    [14] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [15] REN Shaoqing, HE Kaiming, GIRSHICK R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[C]. The 28th International Conference on Neural Information Processing Systems, Montreal, Canada, 2015: 91–99.
    [16] REDMON J, DIVVALA S, GIRSHICK R, et al. You only look once: Unified, real-time object detection[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 779–788.
    [17] LONG J, SHELHAMER E, and DARRELL T. Fully convolutional networks for semantic segmentation[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 3431–3440.
    [18] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241.
    [19] ZHU Chenchen, CHEN Fangyi, AHMED U, et al. Semantic relation reasoning for shot-stable few-shot object detection[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 8778–8787.
    [20] HU Hanzhe, BAI Shuai, LI Aoxue, et al. Dense relation distillation with context-aware aggregation for few-shot object detection[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, USA, 2021: 10180–10189.
    [21] PAN Yunhe. Miniaturized five fundamental issues about visual knowledge[J]. Frontiers of Information Technology & Electronic Engineering, 2021, 22(5): 615–618. doi: 10.1631/FITEE.2040000
    [22] ZENG Qingcun. Weather forecast: From empirical to physicomathematical theory and super-computing system engineering[J]. Physics, 2013, 42(5): 300–314. doi: 10.7693/wl20130501
    [23] DENG Haiyou, JIA Ya, and ZHANG Yang. Protein structure prediction[J]. Acta Physica Sinica, 2016, 65(17): 178701. doi: 10.7498/aps.65.178701
    [24] DU Qitong, LIU Zhaoyu, MIN Jian, et al. Dynamic parameter identification method based on artificial neural network[J]. Chinese High Technology Letters, 2020, 30(5): 495–500. doi: 10.3772/j.issn.1002-0470.2020.05.008
    [25] HUANG Mingfeng, LIU Guoxing, WANG Yifan, et al. Neural network forecasts of typhoon wind speeds coupled with WRF and measured data[J]. Journal of Building Structures, 2022, 43(3): 98–108. doi: 10.14006/j.jzjgxb.2020.0563
    [26] BRIDGERS S, BUCHSBAUM D, SEIVER E, et al. Children’s causal inferences from conflicting testimony and observations[J]. Developmental Psychology, 2016, 52(1): 9–18. doi: 10.1037/a0039830
    [27] MAICAS G, BRADLEY A P, NASCIMENTO J C, et al. Training medical image analysis systems like radiologists[C]. 21st International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 2018: 546–554.
    [28] LI Liu, XU Mai, WANG Xiaofei, et al. Attention based glaucoma detection: A large-scale database and CNN model[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 10563–10572.
    [29] XIE Xiaozheng, NIU Jianwei, LIU Xuefeng, et al. A survey on incorporating domain knowledge into deep learning for medical image analysis[J]. Medical Image Analysis, 2021, 69: 101985. doi: 10.1016/j.media.2021.101985
    [30] RATNER A, BACH S H, EHRENBERG H, et al. Snorkel: Rapid training data creation with weak supervision[J]. The VLDB Journal, 2020, 29(2/3): 709–730. doi: 10.1007/s00778-019-00552-1
    [31] YU Yue, ZUO Simiao, JIANG Haoming, et al. Fine-tuning pre-trained language model with weak supervision: A contrastive-regularized self-training approach[C/OL]. The 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021: 1063–1077.
    [32] LIEM C C S, LANGER M, DEMETRIOU A, et al. Psychology meets machine learning: Interdisciplinary perspectives on algorithmic job candidate screening[M]. ESCALANTE H J, ESCALERA S, GUYON I, et al. Explainable and Interpretable Models in Computer Vision and Machine Learning. Cham: Springer, 2018: 197–253.
    [33] JACKSON P T G, ATAPOUR-ABARGHOUEI A, BONNER S, et al. Style augmentation: Data augmentation via style randomization[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 83–92.
    [34] HWANG Y, CHO H, YANG Hongsun, et al. Mel-spectrogram augmentation for sequence to sequence voice conversion[J]. arXiv preprint arXiv: 2001.01401, 2020.
    [35] BORDES A, USUNIER N, GARCIA-DURÁN A, et al. Translating embeddings for modeling multi-relational data[C]. The 26th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2013: 2787–2795.
    [36] SUN Zhiqing, DENG Zhihong, NIE Jianyun, et al. RotatE: Knowledge graph embedding by relational rotation in complex space[C]. 7th International Conference on Learning Representations, New Orleans, USA, 2019.
    [37] KIPF T N and WELLING M. Semi-supervised classification with graph convolutional networks[C]. 5th International Conference on Learning Representations, Toulon, France, 2017.
    [38] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[J]. arXiv preprint arXiv: 1710.10903, 2017.
    [39] HU Zhiting, MA Xuezhe, LIU Zhengzhong, et al. Harnessing deep neural networks with logic rules[C]. The 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 2016: 2410–2420.
    [40] CHEN Xiang, ZHANG Ningyu, XIE Xin, et al. KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction[C]. The ACM Web Conference 2022, Lyon, France, 2022: 2778–2788.
    [41] JIN Zhe, ZHANG Yin, KUANG Haodan, et al. Named entity recognition in traditional Chinese medicine clinical cases combining BiLSTM-CRF with knowledge graph[C]. 12th International Conference on Knowledge Science, Engineering and Management, Athens, Greece, 2019: 537–548.
    [42] HE Qizhen, WU Liang, YIN Yida, et al. Knowledge-graph augmented word representations for named entity recognition[C]. The 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 7919–7926.
    [43] LOGAN R, LIU N F, PETERS M E, et al. Barack’s wife Hillary: Using knowledge graphs for fact-aware language modeling[C]. The 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019: 5962–5971.
    [44] ZHANG Zhengyan, HAN Xu, LIU Zhiyuan, et al. ERNIE: Enhanced language representation with informative entities[C]. The 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019: 1441–1451.
    [45] LIU Weijie, ZHOU Peng, ZHAO Zhe, et al. K-BERT: Enabling language representation with knowledge graph[C]. The 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 2901–2908.
    [46] SUN Yu, WANG Shuohuan, LI Yukun, et al. ERNIE 2.0: A continual pre-training framework for language understanding[C]. The 34th AAAI Conference on Artificial Intelligence, New York, USA, 2020: 8968–8975.
    [47] CHEN Yu, WU Lingfei, and ZAKI M J. Bidirectional attentive memory networks for question answering over knowledge bases[C]. The 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, USA, 2019: 2913–2923.
    [48] BAUER L, WANG Yicheng, and BANSAL M. Commonsense for generative multi-hop question answering tasks[C]. The 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 2018: 4220–4230.
    [49] WANG Xiang, HE Xiangnan, CAO Yixin, et al. KGAT: Knowledge graph attention network for recommendation[C]. The 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, USA, 2019: 950–958.
    [50] WAN Guojia, PAN Shirui, GONG Chen, et al. Reasoning like human: Hierarchical reinforcement learning for knowledge graph reasoning[C]. The Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 2021: 267.
    [51] WEN Zhang and PENG Yuxin. Multi-level knowledge injecting for visual commonsense reasoning[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(3): 1042–1054. doi: 10.1109/TCSVT.2020.2991866
    [52] GAN Leilei, KUANG Kun, YANG Yi, et al. Judgment prediction via injecting legal knowledge into neural networks[C/OL]. The 35th AAAI Conference on Artificial Intelligence, 2021: 12866–12874.
    [53] WANG Yuping and ZENG Yi. Pedestrian detection in infrared images using ROI fusion and human visual mechanism[J]. China Measurement & Test, 2021, 47(9): 87–93. doi: 10.11857/j.issn.1674-5124.2020100100
    [54] XU Lina, XIAO Qi, and HE Luxiao. Fused image quality assessment based on human visual characteristics[J]. Geomatics and Information Science of Wuhan University, 2019, 44(4): 546–554. doi: 10.13203/j.whugis20170168
    [55] SHEN Tianxiao, HAN Yiyuan, HAN Bing, et al. Recognition of driver's eye movement based on the human visual cortex two-stream model[J]. CAAI Transactions on Intelligent Systems, 2022, 17(1): 41–49. doi: 10.11992/tis.202106051
    [56] BAKHTIARI S, MINEAULT P, LILLICRAP T, et al. The functional specialization of visual cortex emerges from training parallel pathways with self-supervised predictive learning[C]. 35th Conference on Neural Information Processing Systems, 2021: 25164–25178.
    [57] GREFF K, KAUFMAN R L, KABRA R, et al. Multi-object representation learning with iterative variational inference[C]. The 36th International Conference on Machine Learning, Long Beach, USA, 2019: 2424–2433.
    [58] ZHAO Yongheng, BIRDAL T, DENG Haowen, et al. 3D point capsule networks[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 1009–1018.
    [59] LIANG Yuanzhi, FENG Qianyu, ZHU Linchao, et al. SEEG: Semantic energized Co-speech gesture generation[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, USA, 2022: 10473–10482.
    [60] LUO Yawei, LIU Ping, GUAN Tao, et al. Adversarial style mining for one-shot unsupervised domain adaptation[C]. The 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2020: 1731.
    [61] WU Aming, HAN Yahong, ZHU Linchao, et al. Instance-invariant domain adaptive object detection via progressive disentanglement[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(8): 4178–4193. doi: 10.1109/TPAMI.2021.3060446
    [62] PU Shiliang, ZHAO Wei, CHEN Weijie, et al. Unsupervised object detection with scene-adaptive concept learning[J]. Frontiers of Information Technology & Electronic Engineering, 2021, 22(5): 638–651. doi: 10.1631/FITEE.2000567
    [63] STEWART R and ERMON S. Label-free supervision of neural networks with physics and domain knowledge[C]. Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, USA, 2017: 2576–2582.
    [64] PEARL J and MACKENZIE D. The Book of Why: The New Science of Cause and Effect[M]. New York: Basic Books, 2018.
    [65] WU Anpeng, YUAN Junkun, KUANG Kun, et al. Learning decomposed representations for treatment effect estimation[J]. IEEE Transactions on Knowledge and Data Engineering. 2022, 35(5): 4989–5001.
    [66] YUAN Junkun, WU Anpeng, KUANG Kun, et al. Auto IV: Counterfactual prediction via automatic instrumental variable decomposition[J]. ACM Transactions on Knowledge Discovery from Data, 2022, 16(4): 74. doi: 10.1145/3494568
    [67] KUANG Kun, LI Yunzhe, LI Bo, et al. Continuous treatment effect estimation via generative adversarial de-confounding[J]. Data Mining and Knowledge Discovery, 2021, 35(6): 2467–2497. doi: 10.1007/s10618-021-00797-x
    [68] LOSCH M, FRITZ M, and SCHIELE B. Interpretability beyond classification output: Semantic bottleneck networks[J]. arXiv preprint arXiv: 1907.10882, 2019.
    [69] KUANG Kun, ZHANG Hengtao, WU Runze, et al. Balance-subsampled stable prediction across unknown test data[J]. ACM Transactions on Knowledge Discovery from Data, 2022, 16(3): 45. doi: 10.1145/3477052
    [70] KUANG Kun, LI Bo, CUI Peng, et al. Stable prediction via leveraging seed variable[J]. arXiv preprint arXiv: 2006.05076, 2020.
    [71] RAISSI M, PERDIKARIS P, and KARNIADAKIS G E. Physics informed deep learning (Part I): Data-driven solutions of nonlinear partial differential equations[J]. arXiv preprint arXiv: 1711.10561, 2017.
    [72] RAISSI M, PERDIKARIS P, and KARNIADAKIS G E. Physics informed deep learning (Part II): Data-driven discovery of nonlinear partial differential equations[J]. arXiv preprint arXiv: 1711.10566, 2017.
    [73] RAISSI M, PERDIKARIS P, and KARNIADAKIS G E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations[J]. Journal of Computational Physics, 2019, 378: 686–707. doi: 10.1016/j.jcp.2018.10.045
    [74] BAYDIN A G, PEARLMUTTER B A, RADUL A A, et al. Automatic differentiation in machine learning: A survey[J]. The Journal of Machine Learning Research, 2017, 18(1): 5595–5637.
    [75] LI Ye and CHEN Songcan. Physics-informed neural networks: Recent advances and prospects[J]. Computer Science, 2022, 49(4): 254–262. doi: 10.11896/jsjkx.210500158
    [76] RAISSI M, YAZDANI A, and KARNIADAKIS G E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations[J]. Science, 2020, 367(6481): 1026–1030. doi: 10.1126/science.aaw4741
    [77] RAISSI M, WANG Zhicheng, TRIANTAFYLLOU M S, et al. Deep learning of vortex-induced vibrations[J]. Journal of Fluid Mechanics, 2019, 861: 119–137. doi: 10.1017/jfm.2018.872
    [78] SAHLI COSTABAL F, YANG Yibo, PERDIKARIS P, et al. Physics-informed neural networks for cardiac activation mapping[J]. Frontiers in Physics, 2020, 8: 42. doi: 10.3389/fphy.2020.00042
    [79] FANG Zhiwei and ZHAN J. Deep physical informed neural networks for metamaterial design[J]. IEEE Access, 2020, 8: 24506–24513. doi: 10.1109/ACCESS.2019.2963375
    [80] ZHANG Enrui, DAO Ming, KARNIADAKIS G E, et al. Analyses of internal structures and defects in materials using physics-informed neural networks[J]. Science Advances, 2022, 8(7): eabk0644. doi: 10.1126/sciadv.abk0644
    [81] LI Zongyi, KOVACHKI N B, AZIZZADENESHELI K, et al. Fourier neural operator for parametric partial differential equations[C/OL]. 9th International Conference on Learning Representations, 2021.
    [82] LU Lu, JIN Pengzhan, PANG Guofei, et al. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators[J]. Nature Machine Intelligence, 2021, 3(3): 218–229. doi: 10.1038/s42256-021-00302-5
    [83] BRANDSTETTER J, WORRALL D E, and WELLING M. Message passing neural PDE solvers[C/OL]. The Tenth International Conference on Learning Representations, 2022.
    [84] SENIOR A W, EVANS R, JUMPER J, et al. Improved protein structure prediction using potentials from deep learning[J]. Nature, 2020, 577(7792): 706–710. doi: 10.1038/s41586-019-1923-7
    [85] JUMPER J, EVANS R, PRITZEL A, et al. Highly accurate protein structure prediction with AlphaFold[J]. Nature, 2021, 596(7873): 583–589. doi: 10.1038/s41586-021-03819-2
    [86] GABLER F, NAM S Z, TILL S, et al. Protein sequence analysis using the MPI bioinformatics toolkit[J]. Current Protocols in Bioinformatics, 2020, 72(1): e108. doi: 10.1002/cpbi.108
    [87] BAEK M, DIMAIO F, ANISHCHENKO I, et al. Accurate prediction of protein structures and interactions using a three-track neural network[J]. Science, 2021, 373(6557): 871–876. doi: 10.1126/science.abj8754
    [88] HUMPHREYS I R, PEI Jimin, BAEK M, et al. Computed structures of core eukaryotic protein complexes[J]. Science, 2021, 374(6573): eabm4805. doi: 10.1126/science.abm4805
    [89] KENNEDY J. Swarm intelligence[M]. ZOMAYA A Y. Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies. New York: Springer, 2006: 187–219.
    [90] CHU Shuchuan, RODDICK J F, SU C J, et al. Constrained ant colony optimization for data clustering[C]. 8th Pacific Rim International Conference on Artificial Intelligence, Auckland, New Zealand, 2004: 534–543.
    [91] CHEN Jianrui, WANG Jingjing, HOU Xiangwang, et al. Advance into ocean: From bionic monomer to swarm intelligence[J]. Acta Electronica Sinica, 2021, 49(12): 2458–2467. doi: 10.12263/DZXB.20201448
    [92] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484–489. doi: 10.1038/nature16961
    [93] SILVER D, SCHRITTWIESER J, SIMONYAN K, et al. Mastering the game of go without human knowledge[J]. Nature, 2017, 550(7676): 354–359. doi: 10.1038/nature24270
    [94] LI Wei, WU Wenjun, WANG Huaimin, et al. Crowd intelligence in AI 2.0 era[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 15–43. doi: 10.1631/FITEE.1601859
    [95] COLORNI A, DORIGO M, and MANIEZZO V. Distributed optimization by ant colonies[C]. The First European Conference on Artificial Life, Paris, France, 1991: 134–142.
    [96] KENNEDY J and EBERHART R. Particle swarm optimization[C]. Proceedings of ICNN'95-International Conference on Neural Networks, Perth, Australia, 1995: 1942–1948.
    [97] WU Husheng, ZHANG Fengming, and WU Lushan. New swarm intelligence algorithm: Wolf pack algorithm[J]. Systems Engineering and Electronics, 2013, 35(11): 2430–2438.
    [98] XING Lining and CHEN Yingwu. Research progress on intelligent optimization guidance approaches using knowledge[J]. Acta Automatica Sinica, 2011, 37(11): 1285–1289. doi: 10.3724/SP.J.1004.2011.01285
    [99] CUCKER F and SMALE S. Emergent behavior in flocks[J]. IEEE Transactions on Automatic Control, 2007, 52(5): 852–862. doi: 10.1109/TAC.2007.895842
    [100] LEE M, TAROKH M, and CROSS M. Fuzzy logic decision making for multi-robot security systems[J]. Artificial Intelligence Review, 2010, 34(2): 177–194. doi: 10.1007/s10462-010-9168-8
    [101] NGUYEN T T, NGUYEN N D, and NAHAVANDI S. Deep reinforcement learning for multiagent systems: A review of challenges, solutions, and applications[J]. IEEE Transactions on Cybernetics, 2020, 50(9): 3826–3839. doi: 10.1109/TCYB.2020.2977374
    [102] SUN Changyin and MU Chaoxu. Important scientific problems of multi-agent deep reinforcement learning[J]. Acta Automatica Sinica, 2020, 46(7): 1301–1312. doi: 10.16383/j.aas.c200159
    [103] PU Zhiqiang, YI Jianqiang, LIU Zhen, et al. Knowledge-based and data-driven integrating methodologies for collective intelligence decision making: A survey[J]. Acta Automatica Sinica, 2022, 48(3): 627–643. doi: 10.16383/j.aas.c210118
    [104] LIU Yunhao. Crowd sensing computing[J]. Communications of the CCF, 2012, 8(10): 38–41.
    [105] HAN Tao, SUN Hailong, SONG Yangqiu, et al. Incorporating external knowledge into crowd intelligence for more specific knowledge acquisition[C]. The Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, USA, 2016: 1541–1547.
    [106] RIVEST R L, ADLEMAN L, and DERTOUZOS M L. On data banks and privacy homomorphisms[J]. Foundations of Secure Computation, 1978, 4(11): 169–180.
    [107] KONEČNÝ J, MCMAHAN H B, YU F X, et al. Federated learning: Strategies for improving communication efficiency[J]. arXiv preprint arXiv: 1610.05492, 2016.
    [108] MCMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]. The 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, USA, 2017: 1273–1282.
    [109] ZHAO Ning, WU Hao, YU F R, et al. Deep-reinforcement-learning-based latency minimization in edge intelligence over vehicular networks[J]. IEEE Internet of Things Journal, 2022, 9(2): 1300–1312. doi: 10.1109/JIOT.2021.3078480
    [110] SHI Dingyuan, TONG Yongxin, ZHOU Zimu, et al. Learning to assign: Towards fair task assignment in large-scale ride hailing[C]. The 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Singapore, 2021: 3549–3557.
    [111] KELLER J. DARPA to develop swarming unmanned vehicles for better military reconnaissance[J]. Military & Aerospace Electronics, 2017, 28(2): 4–6.
    [112] ARNOLD R, JABLONSKI J, ABRUZZO B, et al. Heterogeneous UAV multi-role swarming behaviors for search and rescue[C]. 2020 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Victoria, Canada, 2020: 122–128.
    [113] HARDY S, HENECKA W, IVEY-LAW H, et al. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption[J]. arXiv preprint arXiv: 1711.10677, 2017.
    [114] ZHENG Wenbo, YAN Lan, GOU Chao, et al. Federated meta-learning for fraudulent credit card detection[C]. The Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 2020: 4654–4660.
    [115] BADDELEY A. Working memory: Looking back and looking forward[J]. Nature Reviews Neuroscience, 2003, 4(10): 829–839. doi: 10.1038/nrn1201
    [116] BADDELEY A D and HITCH G. Working memory[J]. Psychology of Learning and Motivation, 1974, 8: 47–89. doi: 10.1016/S0079-7421(08)60452-1
    [117] LIN Yankai, LIU Zhiyuan, SUN Maosong, et al. Learning entity and relation embeddings for knowledge graph completion[C]. The Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, USA, 2015: 2181–2187.
    [118] DETTMERS T, MINERVINI P, STENETORP P, et al. Convolutional 2D knowledge graph embeddings[C]. The Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, USA, 2018: 1811–1818.
    [119] JIN Xiaowei, CAI Shengze, LI Hui, et al. NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations[J]. Journal of Computational Physics, 2021, 426: 109951. doi: 10.1016/j.jcp.2020.109951
    [120] KISSAS G, YANG Yibo, HWUANG E, et al. Machine learning in cardiovascular flows modeling: Predicting arterial blood pressure from non-invasive 4D flow MRI data using physics-informed neural networks[J]. Computer Methods in Applied Mechanics and Engineering, 2020, 358: 112623. doi: 10.1016/j.cma.2019.112623
    [121] MISYRIS G S, VENZKE A, and CHATZIVASILEIADIS S. Physics-informed neural networks for power systems[C]. 2020 IEEE Power & Energy Society General Meeting (PESGM), Montreal, Canada, 2020: 1–5.
    [122] SCHUETZ M J A, BRUBAKER J K, and KATZGRABER H G. Combinatorial optimization with physics-inspired graph neural networks[J]. Nature Machine Intelligence, 2022, 4(4): 367–377. doi: 10.1038/s42256-022-00468-6
    [123] SEGLER M H S, PREUSS M, and WALLER M P. Planning chemical syntheses with deep neural networks and symbolic AI[J]. Nature, 2018, 555(7698): 604–610. doi: 10.1038/nature25978
    [124] CHENG Guanjian, GONG Xingao, and YIN Wanjian. Crystal structure prediction by combining graph network and optimization algorithm[J]. Nature Communications, 2022, 13(1): 1492. doi: 10.1038/S41467-022-29241-4
    [125] WANG Han, MAO Kebiao, YUAN Zijin, et al. A method for land surface temperature retrieval based on model-data-knowledge-driven and deep learning[J]. Remote Sensing of Environment, 2021, 265: 112665. doi: 10.1016/j.rse.2021.112665
    [126] JADHAV A, WONG K C L, WU J T, et al. Combining deep learning and knowledge-driven reasoning for chest X-ray findings detection[C]. American Medical Informatics Association Annual Symposium, Chicago, USA, 2020: 593–601.

Publication history
  • Received: 2022-05-20
  • Revised: 2022-08-24
  • Available online: 2022-08-29
  • Published: 2023-07-10
