
A Survey of Adversarial Attacks on 3D Point Cloud Object Recognition

LIU Weiquan, ZHENG Shijun, GUO Yu, WANG Cheng

刘伟权, 郑世均, 郭宇, 王程. 三维点云目标识别对抗攻击研究综述[J]. 电子与信息学报, 2024, 46(5): 1645-1657. doi: 10.11999/JEIT231188
Citation: LIU Weiquan, ZHENG Shijun, GUO Yu, WANG Cheng. A Survey of Adversarial Attacks on 3D Point Cloud Object Recognition[J]. Journal of Electronics & Information Technology, 2024, 46(5): 1645-1657. doi: 10.11999/JEIT231188


doi: 10.11999/JEIT231188
Funds: The China Postdoctoral Science Foundation (2021M690094), The FuXiaQuan National Independent Innovation Demonstration Zone Collaborative Innovation Platform (3502ZCQXT2021003)
Author biographies:

    LIU Weiquan: Male, Associate Professor; research interests: 3D vision, 3D adversarial learning, and intelligent processing of LiDAR data

    ZHENG Shijun: Male, Ph.D. candidate; research interests: 3D vision and 3D adversarial learning

    GUO Yu: Male, M.S. candidate; research interests: 3D vision and 3D adversarial learning

    WANG Cheng: Male, Ph.D., Professor; research interests: 3D vision, intelligent processing of LiDAR data, and spatial big data analysis

    Corresponding author: WANG Cheng, cwang@xmu.edu.cn

  • CLC number: TN249.3; TN957.52; TN958.98; TN972; TP39

  • Abstract: Artificial intelligence systems have achieved great success in many fields, with deep learning playing a key role. Despite their strong inference and recognition abilities, however, deep neural networks remain vulnerable to adversarial examples: specially crafted inputs designed to attack a deep learning model and mislead its output. With the rapid development of 3D sensors such as LiDAR, solving intelligent tasks in the 3D domain with deep learning is attracting growing attention. The security and robustness of AI systems that apply deep learning to 3D point cloud data, such as deep-learning-based 3D object detection and recognition in autonomous driving, are therefore critical. To analyze how 3D point cloud adversarial examples attack deep neural networks and to reveal the mechanism by which they interfere with them, this paper surveys progress on adversarial attack methods against deep neural network models for 3D point clouds. It first introduces the basic principles and implementations of adversarial attacks, then summarizes and analyzes digital-domain and physical-domain adversarial attacks on 3D point clouds, and finally discusses open challenges and future research directions.
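    To make the notion of an adversarial example concrete, here is a minimal sketch in the gradient-sign style of FGSM [31], applied to a point cloud classifier. The PyTorch model `model` (e.g., a PointNet-style network [16]) and all parameter values are illustrative assumptions, not the method of any single surveyed paper.

```python
# Minimal FGSM-style perturbation of a point cloud (illustrative sketch).
# `model` is a hypothetical PyTorch classifier mapping (B, N, 3) points to logits.
import torch
import torch.nn.functional as F

def fgsm_point_cloud(model, points, label, eps=0.01):
    """Shift every point coordinate by eps along the sign of the loss gradient."""
    points = points.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(points), label)   # label: (B,) long tensor
    loss.backward()
    adv = points + eps * points.grad.sign()        # one signed gradient step
    return adv.detach()
```

    Stronger digital-domain attacks surveyed below iterate this step, constrain the perturbation's perceptibility, or add, drop, and move individual points instead of shifting all of them uniformly.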
  • Paired Carrier Multiple Access (PCMA) is a technique for increasing the capacity of satellite communications [1] and is now in widespread use. For a non-cooperative receiver, blind separation of the mixed signal can only be achieved with single-channel blind separation methods for two co-frequency digital signals [2–5].

    Because single-channel blind separation involves many more unknowns, it is far harder to solve than determined blind separation, and dedicated algorithms have already been developed for various communication signals [6–9]. These results concentrate on algorithm design: performance was initially measured only by computer simulation, and only the separation of low-order modulated mixtures was analyzed. Starting from the maximum-likelihood criterion for joint sequence detection of the two signals, Liao et al. [10–12] used Forney's method to derive an analytical expression for an upper bound on separation performance, but that derivation still rests on the Viterbi algorithm. A separation performance bound that is free of any particular separation algorithm and derived from the signal itself is therefore urgently needed.

    For PCMA mixtures with MPSK and MQAM modulation, this paper derives performance-bound expressions that are independent of the separation algorithm, starting from the transmitted signal. The problem is first reduced to single-signal reception to analyze its performance bound, then extended to the two-signal co-frequency mixture to derive the single-channel blind separation performance bound; finally, the factors affecting the bound are analyzed by simulation.

    In a PCMA system, the ground station receives a mixture of two MPSK- or QAM-modulated signals with the same modulation and nearly identical carrier frequencies and symbol rates [1]. Sampling the received signal at the symbol rate gives

    $y_k = H_1 \mathrm{e}^{j(2\pi f_1 k T_s + \theta_1)} x_{1,k} + H_2 \mathrm{e}^{j(2\pi f_2 k T_s + \theta_2)} x_{2,k} + v_k$  (1)

    where $H_i$, $f_i$, and $\theta_i$ are the amplitude, frequency offset, and initial carrier phase of the $i$-th signal; $v_k$ is white Gaussian noise with variance $\sigma^2$; $x_{1,k}$ and $x_{2,k}$ are the digital baseband waveforms of the desired and interfering signals; and $T_s$ is the symbol period. Assuming the two signals use the same modulation and are statistically independent, $x_{i,k}$ can be written as

    $x_{i,k} = \sum\limits_{m=k-L_1}^{k+L_1} a_{i,m}\, g_i(kT_s - mT_s + \tau_i)$  (2)

    where $\tau_i$ ($i = 1, 2$) is the timing offset of the $i$-th signal; $a_{1,k}$ and $a_{2,k}$ ($k = 0, 1, \cdots$) are the two transmitted symbol sequences, whose alphabets depend on the modulation; and $g_i(\cdot)$ is the equivalent channel impulse response, comprising the shaping, channel, and matched filters, with effective support $[-L_1 T_s, L_1 T_s]$.
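    As a concrete illustration of the observation model in Eqs. (1) and (2), the following Python sketch synthesizes PCMA samples for two independent QPSK streams. The truncated-sinc pulse and all parameter values are illustrative assumptions, not the equivalent filters of the paper.

```python
# Illustrative synthesis of PCMA samples per Eqs. (1)-(2): two independent
# QPSK streams, each shaped by a pulse g_i truncated to [-L1*Ts, L1*Ts],
# then mixed with amplitude/frequency/phase/timing offsets plus AWGN.
import numpy as np

rng = np.random.default_rng(0)
K, Ts, L1 = 1000, 1.0, 2                      # symbols, symbol period, pulse support
qpsk = np.exp(1j * (np.pi/2 * np.arange(4) + np.pi/4))

def component(a, H, f, theta, tau):
    """One rotated, shaped component of Eq. (1); g is a truncated sinc (assumed)."""
    k = np.arange(K)
    # x_{i,k} = sum_m a_m g(kTs - mTs + tau), support limited to |k - m| <= L1
    g = np.sinc((k[:, None] - k[None, :]) + tau / Ts)
    g[np.abs(k[:, None] - k[None, :]) > L1] = 0.0
    x = g @ a
    return H * np.exp(1j * (2*np.pi*f*k*Ts + theta)) * x

a1, a2 = qpsk[rng.integers(0, 4, K)], qpsk[rng.integers(0, 4, K)]
sigma2 = 0.01                                 # noise variance
v = np.sqrt(sigma2/2) * (rng.standard_normal(K) + 1j*rng.standard_normal(K))
y = component(a1, 1.0, 1e-4, 0.3, 0.0) + component(a2, 0.8, -1e-4, 1.1, 0.25) + v
```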

    The goal of single-channel blind separation is to estimate the two symbol sequences $\{a_{1,k}, a_{2,k},\, k = 0,1,\cdots\}$ from the received sequence $\{y_k,\, k = 0,1,\cdots\}$. In the white Gaussian noise channel, separation errors are caused by the noise introduced during transmission. This paper first analyzes the demodulation performance bound for single-signal reception (i.e., $H_2 = 0$) and then generalizes to the reception of a two-signal co-frequency mixture (the PCMA signal) to derive the single-channel blind separation performance bound.

    Consider first single-signal reception ($H_2 = 0$) [13], with the transmitted and received symbols denoted $X$ and $Y$. For an MPSK signal, each transmitted symbol takes one of $M$ values, $a_\gamma = d_\zeta \mathrm{e}^{j2\gamma\varphi_\zeta}$, $\gamma \in \{0,1,\cdots,M-1\}$, where $\zeta = \log_2 M$ is the number of information bits carried per transmitted symbol and $\varphi_\zeta = \pi/2^\zeta$. After the AWGN channel, the received signal is $y = y_c + j y_s$, with $y_c$ and $y_s$ its real and imaginary parts, and its probability density function is the 2-D Gaussian [13]

    $p_Y(y) = f\{y_c - d_\zeta\cos(2\gamma\varphi_\zeta)\}\, f\{y_s - d_\zeta\sin(2\gamma\varphi_\zeta)\}$  (3)

    where $d_\zeta = \sqrt{2E_s/N_0} = \sqrt{2\zeta E_b/N_0}$; $E_s$ and $E_b$ are the received energy per symbol and per bit, $N_0$ is the single-sided noise power spectral density, and $f(t) = \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-t^2/2}$. Taking an 8PSK signal as an example, Fig. 1 shows the bit-to-symbol mapping in signal space; the decision region of the received signal for each symbol in $\{a_\gamma \mid \gamma = 0,1,\cdots,M-1\}$ is denoted $R_m$.

    Fig. 1  Signal-space mapping of a single 8PSK signal

    Under Gray mapping, let $P_m$ be the probability that the received symbol falls in decision region $R_m$ given that $a_0$ was sent; then

    $P_m = \displaystyle\int_{R_m} p_Y(y)\,\mathrm{d}y = \Pr\{Y \in R_m \mid X = a_0\},\quad m < M$  (4)

    Note also that for MPSK signals the relation in Eq. (5) holds:

    $P_0 > P_1 = P_{M-1} > \cdots > P_{M/2-1} = P_{M/2+1} > P_{M/2}$  (5)

    Combining Eqs. (3), (4), and (5), the symbol error rate (SER) of a single MPSK signal in the additive white Gaussian noise channel, denoted $P_s$, is [13]

    $P_s = \sum\limits_{m=1}^{M-1} P_m$  (6)
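    Eqs. (3)–(6) lend themselves to direct numerical evaluation. The sketch below integrates the 2-D Gaussian of Eq. (3) over the wedge-shaped decision regions $R_m$ in polar coordinates; $M = 8$ and $E_s/N_0 = 10$ dB are illustrative choices.

```python
# Numerical evaluation of the single-signal MPSK SER in Eqs. (4) and (6):
# integrate the 2-D Gaussian of Eq. (3), centered on a_0, over each wedge R_m.
import numpy as np
from scipy import integrate

M, EsN0 = 8, 10**(10/10)                    # illustrative: 8PSK at 10 dB
d = np.sqrt(2 * EsN0)                       # d_zeta = sqrt(2 Es / N0)

def pdf(yc, ys):
    """Eq. (3) with f(t) = exp(-t^2/2)/sqrt(2*pi) and transmitted symbol a_0."""
    return np.exp(-((yc - d)**2 + ys**2) / 2) / (2 * np.pi)

def P(m):
    """Eq. (4): Gaussian mass in the wedge R_m spanning angle 2*pi*m/M +- pi/M."""
    lo, hi = (2*m - 1) * np.pi / M, (2*m + 1) * np.pi / M
    f = lambda r, phi: pdf(r * np.cos(phi), r * np.sin(phi)) * r
    return integrate.dblquad(f, lo, hi, 0, np.inf)[0]

Ps = sum(P(m) for m in range(1, M))         # Eq. (6)
print(f"8PSK SER at 10 dB: {Ps:.3e}")       # close to 2*Q(d*sin(pi/M)) at high SNR
```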

    When a PCMA signal is received, the two transmitted signal components are denoted $X_1$ and $X_2$ and the received signal $Y$. Because of the delay difference between the two signals ($\Delta\tau = \tau_1 - \tau_2$), the optimal sampling instant of the first component suffers symbol interference from the second. The signal-space mapping for interference length $L = n$ is therefore obtained from the $L = n-1$ mapping by spreading each of its points symmetrically, with equal amplitude, in $M$ directions. Since this spreading reduces the minimum Euclidean distance, the performance bound worsens as the interference length grows; hence the $L = 1$ bound considered in this paper is a lower bound on separation performance. We also define the equivalent amplitude ratio $h_2/h_1 = G_{2,0}/G_{1,0}$. The phase offset then has only a limited effect on the decision regions of the mixture's signal-space mapping, so the correct-decision region of the first transmitted component can be approximated by the decision regions of the mapping for interference length $L = 1$ and $\theta_1 - \theta_2 = 0$.

    For a PCMA mixture with interference length $L = 1$, each transmitted symbol pair $(X_1, X_2)$ takes one of $M^2$ values, each component symbol again being $a_\gamma = d_\zeta \mathrm{e}^{j2\gamma\varphi_\zeta}$, $\gamma \in \{0,1,\cdots,M-1\}$. We define the regions of the mixture's signal-space mapping and base the performance-bound analysis on them. Consider first MPSK mixtures with zero frequency offset in both components; Fig. 2 and Fig. 3 show the bit-to-symbol mappings for BPSK and QPSK mixtures. The two signal components have $E_1 = \sqrt{E_s/[(1+\eta^2)N_0]}$ and $E_2 = \sqrt{\eta^2 E_s/[(1+\eta^2)N_0]}$, where $\eta = h_2/h_1$.

    Fig. 2  Bit-to-symbol mapping of the BPSK-modulated PCMA signal
    Fig. 3  Bit-to-symbol mapping of the QPSK-modulated PCMA signal

    Define:

    $P_{\gamma m} = \Pr\{Y \in R_m \mid X_1 = a_0,\, X_2 = a_\gamma\},\quad \gamma \in \{0,1,\cdots,M-1\}$  (7)

    aγ=acγ+jasγ,则存在

    $P_m = \dfrac{1}{M}\sum\limits_{\gamma=0}^{M-1} P_{\gamma m}$  (8)

    From the analysis in this section, what is derived is a lower bound on the separation performance of the PCMA mixture, i.e.,

    $P_s \ge \sum\limits_{m=1}^{M-1} P_m$  (9)

    For the BPSK-modulated mixture, combining Eqs. (7) and (8) gives

    $P_m = \begin{cases} \dfrac{1}{2}\sum\limits_{\gamma=0}^{1}\dfrac{1}{\pi}\displaystyle\int_{-\infty}^{\infty}\exp(-v^2)\int_{0}^{\infty}\exp[-(u-a_{c\gamma})^2]\,\mathrm{d}u\,\mathrm{d}v, & m=0 \\ \dfrac{1}{2}\sum\limits_{\gamma=0}^{1}\dfrac{1}{\pi}\displaystyle\int_{-\infty}^{\infty}\exp(-v^2)\int_{0}^{\infty}\exp[-(u+a_{c\gamma})^2]\,\mathrm{d}u\,\mathrm{d}v, & m=1 \end{cases}$  (10)

    In this case the separation symbol error rate $P_s$ and the bit error rate (BER) $P_b$ coincide, with lower bound

    $P_s = P_b \ge P_1$  (11)
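    A minimal numerical evaluation of the bound in Eqs. (10) and (11) is sketched below, assuming the component amplitudes $E_1$, $E_2$ defined above; the $E_s/N_0$ values and the amplitude ratio $\eta = 0.8$ are illustrative.

```python
# Numerical evaluation of the BPSK separation bound, Eqs. (10)-(11). With
# zero phase offset, the mixture points on the real axis are E1 +/- E2.
import numpy as np
from scipy import integrate

def bpsk_bound(EsN0_dB, eta=0.8):
    EsN0 = 10**(EsN0_dB / 10)
    E1 = np.sqrt(EsN0 / (1 + eta**2))        # E1 = sqrt(Es / ((1+eta^2) N0))
    E2 = eta * E1
    P1 = 0.0
    for ac in (E1 + E2, E1 - E2):            # X2 = +/-1, equally likely
        inner = integrate.quad(lambda u: np.exp(-(u + ac)**2), 0, np.inf)[0]
        P1 += 0.5 * inner / np.sqrt(np.pi)   # the v-integral contributes sqrt(pi)/pi
    return P1                                # Ps = Pb >= P1, Eq. (11)

for snr in (5, 10, 15):
    print(snr, "dB:", f"{bpsk_bound(snr):.3e}")
```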

    Similarly, for the QPSK-modulated mixture, $P_m$ (with $P_m = P_{M-m}$) is obtained as in Eq. (12).

    $P_m = \begin{cases} \dfrac{1}{4}\sum\limits_{\gamma=0}^{3}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(u-a_{c\gamma})^2]\left\{\int_{-u}^{u}\exp[-(v-a_{s\gamma})^2]\,\mathrm{d}v\right\}\mathrm{d}u, & m=0 \\ \dfrac{1}{4}\sum\limits_{\gamma=0}^{3}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(v-a_{s\gamma})^2]\left\{\int_{-v}^{v}\exp[-(u-a_{c\gamma})^2]\,\mathrm{d}u\right\}\mathrm{d}v, & m=1 \\ \dfrac{1}{4}\sum\limits_{\gamma=0}^{3}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(u+a_{c\gamma})^2]\left\{\int_{-u}^{u}\exp[-(v-a_{s\gamma})^2]\,\mathrm{d}v\right\}\mathrm{d}u, & m=2 \end{cases}$  (12)

    Substituting Eq. (12) into Eq. (9) yields the lower bound on the SER $P_s$.

    We next derive the separation performance bound for the 8PSK-modulated PCMA signal. Because the decision regions $R_m$ can be non-connected, the bit-to-symbol mapping of the modulated mixture is considerably more complex than that of a single signal. Fig. 4 shows the signal-space mapping of the received mixture; the shaded area is the decision region $R_0$ when $X_1 = a_0$, i.e., the correct-decision region, and the remaining decision regions follow by analogy.

    Fig. 4  Correct-decision region $R_0$ when $X_1 = a_0$

    The three mapping cases in Fig. 4 arise because, as $h_2/h_1$ runs from 0 to 1, the minimum Euclidean distance between the signal-space mappings of the received mixtures for $X_1 = a_0$ and $X_1 \ne a_0$ varies periodically; we call this distance the minimum decision-error Euclidean distance. When $h_2/h_1 < \tan(\pi/8)$, Eq. (8) yields the corresponding $P_m$ (with $P_m = P_{M-m}$) as in Eq. (13), and Eq. (9) then gives the SER $P_s$. When $\tan(\pi/8) < h_2/h_1 < \sqrt{2}/2$, the decision regions $R_m$ become non-connected, split across several sectors and annuli; in that case we substitute $u = r\cos\phi$, $v = r\sin\phi$, $\mathrm{d}u\,\mathrm{d}v = r\,\mathrm{d}r\,\mathrm{d}\phi$, with $r_1$, $r_2$, $r_3$ defined as in Eq. (14).

    $P_m = \begin{cases} \dfrac{1}{8}\sum\limits_{\gamma=0}^{7}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(u-a_{c\gamma})^2]\left\{\int_{-u\tan(\pi/8)}^{u\tan(\pi/8)}\exp[-(v-a_{s\gamma})^2]\,\mathrm{d}v\right\}\mathrm{d}u, & m=0 \\ \dfrac{1}{8}\sum\limits_{\gamma=0}^{7}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(u-a_{c\gamma})^2]\left\{\int_{u\tan(\pi/8)}^{u\tan(3\pi/8)}\exp[-(v-a_{s\gamma})^2]\,\mathrm{d}v\right\}\mathrm{d}u, & m=1 \\ \dfrac{1}{8}\sum\limits_{\gamma=0}^{7}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(v-a_{s\gamma})^2]\left\{\int_{-v\tan(\pi/8)}^{v\tan(\pi/8)}\exp[-(u-a_{c\gamma})^2]\,\mathrm{d}u\right\}\mathrm{d}v, & m=2 \\ \dfrac{1}{8}\sum\limits_{\gamma=0}^{7}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(u+a_{c\gamma})^2]\left\{\int_{u\tan(\pi/8)}^{u\tan(3\pi/8)}\exp[-(v-a_{s\gamma})^2]\,\mathrm{d}v\right\}\mathrm{d}u, & m=3 \\ \dfrac{1}{8}\sum\limits_{\gamma=0}^{7}\dfrac{1}{\pi}\displaystyle\int_{0}^{\infty}\exp[-(u+a_{c\gamma})^2]\left\{\int_{-u\tan(\pi/8)}^{u\tan(\pi/8)}\exp[-(v-a_{s\gamma})^2]\,\mathrm{d}v\right\}\mathrm{d}u, & m=4 \end{cases}$  (13)
    $\left.\begin{aligned} r_1 &= \frac{1}{2}\left[(E_1 - E_2) + \sqrt{\left(\tfrac{\sqrt{2}}{2}E_2\right)^2 + \left(E_1 - \tfrac{\sqrt{2}}{2}E_2\right)^2}\right] \\ r_2 &= \frac{1}{2}\left[\sqrt{\left(\tfrac{\sqrt{2}}{2}E_2\right)^2 + \left(E_1 - \tfrac{\sqrt{2}}{2}E_2\right)^2} + \sqrt{E_1^2 + E_2^2}\right] \\ r_3 &= \frac{1}{2}\left[\sqrt{E_1^2 + E_2^2} + \sqrt{\left(\tfrac{\sqrt{2}}{2}E_2\right)^2 + \left(E_1 + \tfrac{\sqrt{2}}{2}E_2\right)^2}\right] \end{aligned}\right\}$  (14)

    Using the mapping-space partition together with Eq. (7), $P_{\gamma 0}$ is obtained as

    $\begin{aligned} P_{\gamma 0} ={}& \frac{1}{\pi}\int_{-\pi/8}^{\pi/8}\left\{\int_{r_3}^{\infty}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{\pi/8}^{\pi/4}\left\{\int_{r_1}^{r_3}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{-\pi/4}^{-\pi/8}\left\{\int_{r_1}^{r_3}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{-\pi/8}^{\pi/8}\left\{\int_{0}^{r_1}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \end{aligned}$  (15)
    $\begin{aligned} P_{\gamma 0} ={}& \frac{1}{\pi}\int_{-\pi/8}^{\pi/8}\left\{\int_{r_3}^{\infty}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{\pi/4}^{3\pi/8}\left\{\int_{r_1}^{r_2}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{\pi/8}^{\pi/4}\left\{\int_{r_2}^{r_3}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{-\pi/4}^{-\pi/8}\left\{\int_{r_2}^{r_3}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{-3\pi/8}^{-\pi/4}\left\{\int_{r_1}^{r_2}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \\ &+ \frac{1}{\pi}\int_{-\pi/8}^{\pi/8}\left\{\int_{0}^{r_1}\exp[-(r\cos\phi - a_{c\gamma})^2 - (r\sin\phi - a_{s\gamma})^2]\, r\,\mathrm{d}r\right\}\mathrm{d}\phi \end{aligned}$  (16)

    The remaining $P_{\gamma m}$ ($m = 1, 2, \cdots, 7$) follow in the same way; substituting $P_{\gamma m}$ into Eq. (8) gives $P_m$, and Eq. (9) then yields the SER $P_s$. When $h_2/h_1 > \sqrt{2}/2$, Eq. (7) yields the corresponding $P_{\gamma 0}$ as in Eq. (16); the remaining $P_{\gamma m}$ ($m = 1, 2, \cdots, 7$) are computed likewise, and Eqs. (8) and (9) then give the SER $P_s$.
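    The polar-coordinate integration of Eq. (15) maps directly onto numerical quadrature. In the sketch below, the sector and annulus limits encode the four pieces of $R_0$ for $\tan(\pi/8) < h_2/h_1 < \sqrt{2}/2$, with $r_1$, $r_3$ taken from Eq. (14); all arguments are illustrative.

```python
# Sketch of Eq. (15): Gaussian mass of the four pieces of the correct-decision
# region R_0, integrated in polar coordinates (Jacobian r included).
import numpy as np
from scipy import integrate

def P_gamma0(ac, as_, r1, r3):
    g = lambda r, phi: np.exp(-(r*np.cos(phi) - ac)**2
                              - (r*np.sin(phi) - as_)**2) * r / np.pi
    pieces = [(-np.pi/8,  np.pi/8, r3, np.inf),   # outer sector
              ( np.pi/8,  np.pi/4, r1, r3),       # upper annular sector
              (-np.pi/4, -np.pi/8, r1, r3),       # lower annular sector
              (-np.pi/8,  np.pi/8, 0.0, r1)]      # inner sector
    return sum(integrate.dblquad(g, lo, hi, rlo, rhi)[0]
               for lo, hi, rlo, rhi in pieces)
```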

    The SER of PCMA mixtures with 16PSK and higher-order PSK modulation can be obtained with the same space-partitioning procedure; the final BER $P_b$ is given by Eq. (17).

    $P_b = \begin{cases} \dfrac{1}{2}(P_1 + 2P_2 + P_3), & M=4 \\ \dfrac{1}{3}(P_1 + 2P_2 + P_3 + 2P_4 + 3P_5 + 2P_6 + P_7), & M=8 \\ \dfrac{1}{2}\left(\sum\limits_{k=1}^{8} P_k + \sum\limits_{k=2}^{5} P_k + P_5 + 2P_6 + P_7\right), & M=16 \end{cases}$  (17)
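    Eq. (17) weights each region probability $P_m$ by the number of bit errors under Gray labeling and divides by the bits per symbol. The sketch below reproduces this for any $M = 2^\zeta$ via the Hamming distance between Gray labels, a general form consistent with the three listed cases (an assumption, since the text states only those cases).

```python
# SER-to-BER mapping of Eq. (17) under Gray labeling, for M a power of two.
# P is the list [P_0, ..., P_{M-1}] of region probabilities from Eqs. (8)-(9).
def gray_ber(P):
    M = len(P)
    zeta = M.bit_length() - 1                   # bits per symbol, log2(M)
    gray = [m ^ (m >> 1) for m in range(M)]     # Gray labels of the M symbols
    return sum(bin(gray[0] ^ gray[m]).count("1") * P[m]
               for m in range(1, M)) / zeta
```

    For $M = 8$ this expands to $(P_1 + 2P_2 + P_3 + 2P_4 + 3P_5 + 2P_6 + P_7)/3$, matching Eq. (17).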

    In summary, the mapping-space partition algorithm yields the separation performance bound for MPSK-modulated PCMA mixtures. We now consider QAM, taking 8QAM as an example to compute the union bound on the demodulation BER. As $h_2/h_1$ runs from 0 to 1, the received mixture exhibits the four signal-space mappings shown in Fig. 5.

    Fig. 5  Correct-decision regions of the 8QAM-modulated PCMA signal

    Unlike MPSK, the symbol-space mapping of QAM is asymmetric, so the separation BER of the received mixture depends on the transmitted symbol. As Fig. 5 shows, the transmitted-symbol mappings fall into two classes, and performance bounds are derived separately for $X_1 = a_0$ and $X_1 = a_2$; their signal-space mappings are marked '×' and '+' respectively, with the corresponding correct-decision regions shown shaded. We derive the received SER and BER for these two cases.

    Define:

    $\left.\begin{aligned} P_{\gamma 0,m} &= \Pr\{Y \in R_m \mid X_1 = a_0,\, X_2 = a_\gamma\},\quad \gamma \in \{0,1,\cdots,M-1\} \\ P_{\gamma 2,m} &= \Pr\{Y \in R_m \mid X_1 = a_2,\, X_2 = a_\gamma\},\quad \gamma \in \{0,1,\cdots,M-1\} \end{aligned}\right\}$  (18)

    Then

    $P_m = \dfrac{1}{2M}\sum\limits_{\gamma=0}^{M-1} P_{\gamma 0,m} + \dfrac{1}{2M}\sum\limits_{\gamma=0}^{M-1} P_{\gamma 2,m}$  (19)

    h2/h1<1/3时,结合式(7),式(8)可得此时Pγ0,m。同样由图5相应区域划分可计算得到Pγ2m,由式(19)可得Pm(Pm=PMm),进而由式(9)可得误符号率Ps下界,由式(17)可得误比特率Pb下界。

    Current PCMA blind separation mainly targets four modulation types: BPSK, QPSK, 8PSK, and 8QAM; this paper gives SER and BER separation bounds for mixtures of all four. PCMA mixtures of other modulation types are uncommon or their separation algorithms are still under study, so their bounds are not derived here, but they can be obtained along the same lines.

    In the simulations below, a raised-cosine roll-off factor of 0.35 is used throughout, the receiver takes one sample per symbol, and the frequency offsets of both signal components are zero.

    Fig. 6(a) shows, for BPSK, the bound of this paper (computed from Eq. (10)) together with the ideal Viterbi estimate [14], compared against particle-filter and PSP separation results. Simulation conditions: equivalent amplitude ratio of the two components 1.0:0.8, all other parameters identical. The PSP filter has effective support $[-2T_s, 2T_s]$; the LMS update step in blind separation is $\rho = 0.01$; the particle filter uses 300 particles and $D = 3$ [14]; the phase offset between the two signals is zero.

    Fig. 6  Computed performance bounds and comparison

    图6(a)可见,本文性能界曲线与理想情况下Viterbi估计结果吻合。特别是高信噪比条件下,两者基本一致,从理论上证明本文给出的分离性能下界计算方法合理性。由图6(a)还可以看出,实验条件下粒子滤波算法与PSP算法均取得良好性能。随着等效滤波器符号串扰长度的增加,信道估计精度的提高,以及粒子数等参数选取更加充分,PSP算法与粒子滤波算法性能将更加趋近于性能界,但是同时伴随着复杂度的提升,可见本文性能界推导为分离算法评价提供指标,也为分离算法参数选取提供依据。

    Fig. 6(b) shows, for MPSK and QAM, the bound of this paper against the ideal Viterbi estimate (i.e., PSP with known parameters); the simulated results fit the bound curves, confirming the soundness of the derivation. The derived bound and Viterbi sequence detection both follow the maximum a posteriori criterion, hence the close agreement. The derived bound, however, is a theoretical result obtained from the signal-space mapping of the signal model itself and tied only to the modulation, whereas the Viterbi result is a simulation outcome that depends on experimental parameters such as the amount of data, approaching the derived bound as the data size tends to infinity; the two are therefore not identical.

    The main modulation types in current PCMA communication are BPSK, QPSK, 8PSK, and 8QAM. Figs. 7 and 8 show the bounds derived in this paper for these modulations, with the equivalent amplitude ratio set to 1.0:0.8. As the modulation order rises, the separation bound at a given SNR worsens. Comparing Fig. 7 with Fig. 8, BPSK- and QPSK-modulated PCMA signals have identical separation BER performance, yet their SER performance differs by roughly a factor of two: each QPSK symbol carries 2 bits and is correct only when both bits are, so BPSK and QPSK mixtures with the same BER have different SERs.

    Fig. 7  SER performance bounds of PCMA signals
    Fig. 8  BER performance bounds of PCMA signals

    The equivalent amplitude ratio of the two signal components affects the minimum Euclidean distance of the mixture's signal-space mapping and hence the separation performance. Figs. 9(a) and 9(b) show how the bound varies with the equivalent amplitude ratio for QPSK- and 8PSK-modulated PCMA mixtures, respectively.

    Fig. 9  Effect of the equivalent amplitude ratio on the PCMA separation performance bound

    As can be seen, in blind separation of QPSK-modulated PCMA signals the bound falls as the equivalent amplitude ratio increases, because the minimum decision-error Euclidean distance of the QPSK mixture is proportional to the equivalent amplitude ratio of the two components: a larger ratio means a larger minimum Euclidean distance and hence a lower performance bound. The 8PSK bound likewise decreases as the minimum decision-error Euclidean distance grows.

    Starting from the transmitted-signal model and using the maximum-likelihood criterion, this paper has derived separation-performance-bound expressions for PCMA mixtures that are independent of the separation algorithm, offering feasibility guidance and a performance benchmark for future PCMA blind-separation algorithms. If the two signal components have a frequency offset, its effect at a fixed sampling instant can be absorbed into the phase offset and, further, into the equivalent amplitude ratio, so the derivation of the bound still applies.

  • Fig. 1  Examples of original and adversarial point clouds

    Fig. 2  Digital-domain adversarial attack on a 3D point cloud [12]

    Fig. 3  Adversarial attack on a 3D point cloud in the graph spectral domain [12]

    Fig. 4  Physical-domain adversarial attack in a road scene [53]

    Fig. 5  Adversarial attacks against autonomous driving systems [9]

    Fig. 6  Adversarial attack based on adversarial locations in real scenes [51]

    Table 1  Datasets for 3D point cloud adversarial attacks

    Dataset | Type | Characteristics
    ModelNet40 | Synthetic | Small scale; complete, clearly shaped, noise-free object models in diverse categories
    ShapeNet | Synthetic | Small scale; complete, clearly shaped, noise-free object models in diverse categories
    ScanObjectNN | Real-world | Small scale; objects obtained by scanning real indoor scenes and indoor targets
    KITTI | Real-world | Fairly large scale; real-world urban and street point clouds for autonomous driving
    NuScenes | Real-world | Large scale; real-world urban point clouds for autonomous driving
    Waymo | Real-world | Large scale; real-world urban and suburban point clouds for autonomous driving
  • [1] LANCHANTIN J, WANG Tianlu, ORDONEZ V, et al. General multi-label image classification with transformers[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2021: 16473–16483. doi: 10.1109/CVPR46437.2021.01621.
    [2] SUN Xiao, LIAN Zhouhui, and XIAO Jianguo. SRINet: Learning strictly rotation-invariant representations for point cloud classification and segmentation[C]. The 27th ACM International Conference on Multimedia, Nice, France, 2019: 980–988. doi: 10.1145/3343031.3351042.
    [3] HUYNH C, TRAN A T, LUU K, et al. Progressive semantic segmentation[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2021: 16750–16759. doi: 10.1109/CVPR46437.2021.01648.
    [4] LIU Weiquan, GUO Hanyun, ZHANG Weini, et al. TopoSeg: Topology-aware segmentation for point clouds[C]. The Thirty-First International Joint Conference on Artificial Intelligence, Vienna, Austria, 2022: 1201–1208. doi: 10.24963/ijcai.2022/168.
    [5] CHEN Xiangning, XIE Cihang, TAN Mingxing, et al. Robust and accurate object detection via adversarial learning[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2021: 16617–16626. doi: 10.1109/CVPR46437.2021.01635.
    [6] MIAO Zhenwei, CHEN JiKai, PAN Hongyu, et al. PVGNet: A bottom-up one-stage 3D object detector with integrated multi-level features[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2021: 3278–3287. doi: 10.1109/CVPR46437.2021.00329.
    [7] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[C]. The 2nd International Conference on Learning Representations, Banff, Canada, 2014.
    [8] LIU Fuchang, NAN Bo, and MIAO Yongwei. Point cloud replacement adversarial attack based on saliency map[J]. Journal of Image and Graphics, 2022, 27(2): 500–510. doi: 10.11834/jig.210546. (in Chinese)
    [9] CAO Yulong, WANG Ningfei, XIAO Chaowei, et al. Invisible for both camera and LiDAR: Security of multi-sensor fusion based perception in autonomous driving under physical-world attacks[C]. 2021 IEEE Symposium on Security and Privacy (SP), San Francisco, USA, 2021: 176–194. doi: 10.1109/SP40001.2021.00076.
    [10] LIU D, YU R, and SU Hao. Extending adversarial attacks and defenses to deep 3D point cloud classifiers[C]. 2019 IEEE International Conference on Image Processing (ICIP), Taipei, China, 2019: 2279–2283. doi: 10.1109/ICIP.2019.8803770.
    [11] ZHENG Shijun, LIU Weiquan, SHEN Siqi, et al. Adaptive local adversarial attacks on 3D point clouds[J]. Pattern Recognition, 2023, 144: 109825. doi: 10.1016/j.patcog.2023.109825.
    [12] HU Qianjiang, LIU Daizong, and HU Wei. Exploring the devil in graph spectral domain for 3D point cloud attacks[C]. The 17th European Conference on Computer Vision, Tel Aviv, Israel, 2022: 229–248. doi: 10.1007/978-3-031-20062-5_14.
    [13] ZHOU Hang, CHEN Dongdong, LIAO Jing, et al. LG-GAN: Label guided adversarial network for flexible targeted attack of point cloud based deep networks[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 10353–10362. doi: 10.1109/CVPR42600.2020.01037.
    [14] KURAKIN A, GOODFELLOW I J, and BENGIO S. Adversarial examples in the physical world[M]. YAMPOLSKIY R V. Artificial Intelligence Safety and Security. New York: Chapman and Hall/CRC, 2018: 99–112.
    [15] DONG Yinpeng, LIAO Fangzhou, PANG Tianyu, et al. Boosting adversarial attacks with momentum[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 9185–9193. doi: 10.1109/CVPR.2018.00957.
    [16] CHARLES R Q, SU Hao, KAICHUN M, et al. PointNet: Deep learning on point sets for 3D classification and segmentation[C]. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, USA, 2017: 77–85. doi: 10.1109/CVPR.2017.16.
    [17] QI C R, YI Li, SU Hao, et al. PointNet++: Deep hierarchical feature learning on point sets in a metric space[C]. The 31st International Conference on Neural Information Processing Systems, Long Beach, USA, 2017: 5105–5114.
    [18] WANG Yue, SUN Yongbin, LIU Ziwei, et al. Dynamic graph CNN for learning on point clouds[J]. ACM Transactions on Graphics, 2019, 38(5): 1–12. doi: 10.1145/3326362.
    [19] LANG A H, VORA S, CAESAR H, et al. PointPillars: Fast encoders for object detection from point clouds[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 12689–12697. doi: 10.1109/CVPR.2019.01298.
    [20] YANG Zetong, SUN Yanan, LIU Shu, et al. 3DSSD: Point-based 3D single stage object detector[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 11037–11045. doi: 10.1109/CVPR42600.2020.01105.
    [21] HE Chenhang, ZENG Hui, HUANG Jianqiang, et al. Structure aware single-stage 3D object detection from point cloud[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 11870–11879. doi: 10.1109/CVPR42600.2020.01189.
    [22] YIN Tianwei, ZHOU Xingyi, and KRÄHENBÜHL P. Center-based 3D object detection and tracking[C]. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, USA, 2021: 11779–11788. doi: 10.1109/CVPR46437.2021.01161.
    [23] SHI Shaoshuai, GUO Chaoxu, JIANG Li, et al. PVRCNN: Point-voxel feature set abstraction for 3D object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 10526–10535. doi: 10.1109/CVPR42600.2020.01054.
    [24] SHI Shaoshuai, JIANG Li, DENG Jiajun, et al. PV-RCNN++: Point-voxel feature set abstraction with local vector representation for 3D object detection[J]. International Journal of Computer Vision, 2023, 131(2): 531–551. doi: 10.1007/s11263-022-01710-9.
    [25] WU Zhirong, SONG Shuran, KHOSLA A, et al. 3D ShapeNets: A deep representation for volumetric shapes[C]. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, USA, 2015: 1912–1920. doi: 10.1109/CVPR.2015.7298801.
    [26] YI Li, KIM V G, CEYLAN D, et al. A scalable active framework for region annotation in 3D shape collections[J]. ACM Transactions on Graphics, 2016, 35(6): 210. doi: 10.1145/2980179.2980238.
    [27] UY M A, PHAM Q H, HUA B S, et al. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019: 1588–1597. doi: 10.1109/ICCV.2019.00167.
    [28] GEIGER A, LENZ P, and URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]. 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, USA, 2012: 3354–3361. doi: 10.1109/CVPR.2012.6248074.
    [29] CAESAR H, BANKITI V, LANG A H, et al. nuScenes: A multimodal dataset for autonomous driving[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 11618–11628. doi: 10.1109/CVPR42600.2020.01164.
    [30] SUN Pei, KRETZSCHMAR H, DOTIWALLA X, et al. Scalability in perception for autonomous driving: Waymo open dataset[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 2443–2451. doi: 10.1109/CVPR42600.2020.00252.
    [31] GOODFELLOW I J, SHLENS J, and SZEGEDY C. Explaining and harnessing adversarial examples[C]. The 3rd International Conference on Learning Representations, San Diego, USA, 2015.
    [32] YANG Jiancheng, ZHANG Qiang, FANG Rongyao, et al. Adversarial attack and defense on point sets[EB/OL]. https://arxiv.org/abs/1902.10899, 2019.
    [33] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]. The 6th International Conference on Learning Representations, Vancouver, Canada, 2018.
    [34] LIU D, YU R, and SU Hao. Adversarial shape perturbations on 3D point clouds[C]. European Conference on Computer Vision, Glasgow, UK, 2020: 88–104. doi: 10.1007/978-3-030-66415-2_6.
    [35] MA Chengcheng, MENG Weiliang, WU Baoyuan, et al. Efficient joint gradient based attack against SOR defense for 3D point cloud classification[C]. Proceedings of the 28th ACM International Conference on Multimedia, Seattle, USA, 2020: 1819–1827. doi: 10.1145/3394171.3413875.
    [36] ZHENG Tianhang, CHEN Changyou, YUAN Junsong, et al. PointCloud saliency maps[C]. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), 2019: 1598–1606. doi: 10.1109/ICCV.2019.00168.
    [37] CARLINI N and WAGNER D. Towards evaluating the robustness of neural networks[C]. 2017 IEEE Symposium on Security and Privacy (SP), San Jose, USA, 2017: 39–57. doi: 10.1109/SP.2017.49.
    [38] XIANG Chong, QI C R, and LI Bo. Generating 3D adversarial point clouds[C]. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019: 9128–9136. doi: 10.1109/CVPR.2019.00935.
    [39] WEN Yuxin, LIN Jiehong, CHEN Ke, et al. Geometry-aware generation of adversarial point clouds[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(6): 2984–2999. doi: 10.1109/TPAMI.2020.3044712.
    [40] TSAI T, YANG Kaichen, HO T Y, et al. Robust adversarial objects against deep learning models[C]. Thirty-Fourth AAAI Conference on Artificial Intelligence, New York, USA, 2020: 954–962. doi: 10.1609/aaai.v34i01.5443.
    [41] KIM J, HUA B S, NGUYEN D T, et al. Minimal adversarial examples for deep learning on 3D point clouds[C]. 2021 IEEE/CVF International Conference on Computer Vision, Montreal, Canada, 2021: 7777–7786. doi: 10.1109/ICCV48922.2021.00770.
    [42] ARYA A, NADERI H, and KASAEI S. Adversarial attack by limited point cloud surface modifications[C]. 2023 6th International Conference on Pattern Recognition and Image Analysis, Qom, Islamic Republic of Iran, 2023: 1–8. doi: 10.1109/IPRIA59240.2023.10147168.
    [43] ZHAO Yiren, SHUMAILOV I, MULLINS R, et al. Nudge attacks on point-cloud DNNs[EB/OL]. https://arxiv.org/abs/2011.11637, 2020.
    [44] TAN Hanxiao and KOTTHAUS H. Explainability-aware one point attack for point cloud neural networks[C]. 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, USA, 2023: 4570–4579. doi: 10.1109/WACV56688.2023.00456.
    [45] SHI Zhenbo, CHEN Zhi, XU Zhenbo, et al. Shape prior guided attack: Sparser perturbations on 3D point clouds[C]. Thirty-Sixth AAAI Conference on Artificial Intelligence, Waikoloa, USA, 2022: 8277–8285. doi: 10.1609/aaai.v36i8.20802.
    [46] LIU Binbin, ZHANG Jinlai, and ZHU Jihong. Boosting 3D adversarial attacks with attacking on frequency[J]. IEEE Access, 2022, 10: 50974–50984. doi: 10.1109/ACCESS.2022.3171659.
    [47] LIU Daizong, HU Wei, and LI Xin. Point cloud attacks in graph spectral domain: When 3D geometry meets graph signal processing[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024, 46(5): 3079–3095. doi: 10.1109/TPAMI.2023.3339130.
    [48] TAO Yunbo, LIU Daizong, ZHOU Pan, et al. 3DHacker: Spectrum-based decision boundary generation for hard-label 3D point cloud attack[C]. 2023 IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 2023: 14294–14304. doi: 10.1109/ICCV51070.2023.01319.
    [49] HUANG Qidong, DONG Xiaoyi, CHEN Dongdong, et al. Shape-invariant 3D adversarial point clouds[C]. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, USA, 2022: 15314–15323. doi: 10.1109/CVPR52688.2022.01490.
    [50] LIU Daizong and HU Wei. Imperceptible transfer attack and defense on 3D point cloud classification[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(4): 4727–4746. doi: 10.1109/TPAMI.2022.3193449.
    [51] HAMDI A, ROJAS S, THABET A, et al. AdvPC: Transferable adversarial perturbations on 3D point clouds[C]. The 16th European Conference on Computer Vision (ECCV), Glasgow, UK, 2020: 241–257. doi: 10.1007/978-3-030-58610-2_15.
    [52] TANG Keke, SHI Yawen, WU Jianpeng, et al. NormalAttack: Curvature-aware shape deformation along normals for imperceptible point cloud attack[J]. Security and Communication Networks, 2022, 2022: 1186633. doi: 10.1155/2022/1186633.
    [53] TU J, REN Mengye, MANIVASAGAM S, et al. Physically realizable adversarial examples for LiDAR object detection[C]. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, USA, 2020: 13713–13722. doi: 10.1109/CVPR42600.2020.01373.
    [54] ABDELFATTAH M, YUAN Kaiwen, WANG Z J, et al. Adversarial attacks on camera-LiDAR models for 3D car detection[C]. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 2021: 2189–2194. doi: 10.1109/IROS51168.2021.9636638.
    [55] MIAO Yibo, DONG Yinpeng, ZHU Jun, et al. Isometric 3D adversarial examples in the physical world[C]. The 36th International Conference on Neural Information Processing Systems, New Orleans, USA, 2022: 1433.
    [56] YANG Kaichen, TSAI T, YU Honggang, et al. Robust roadside physical adversarial attack against deep learning in Lidar perception modules[C]. The 2021 ACM Asia Conference on Computer and Communications Security, Hong Kong, China, 2021: 349–362. doi: 10.1145/3433210.3453106.
    [57] ZHU Yi, MIAO Chenglin, ZHENG Tianhang, et al. Can we use arbitrary objects to attack LiDAR perception in autonomous driving?[C/OL]. The 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021: 1945–1960. doi: 10.1145/3460120.3485377.
    [58] CAO Yulong, BHUPATHIRAJU S H, NAGHAVI P, et al. You can’t see me: Physical removal attacks on LiDAR-based autonomous vehicles driving frameworks[C]. The 32nd USENIX Security Symposium, USENIX Security 2023, Anaheim, USA, 2023.
    [59] CAO Yulong, XIAO Chaowei, CYR B, et al. Adversarial sensor attack on LiDAR-based perception in autonomous driving[C]. The 2019 ACM SIGSAC Conference on Computer and Communications Security, London, United Kingdom, 2019: 2267–2281. doi: 10.1145/3319535.3339815.
    [60] SUN Jiachen, CAO Yulong, CHEN Q A, et al. Towards robust LiDAR-based perception in autonomous driving: General black-box adversarial sensor attack and countermeasures[C/OL]. The 29th USENIX Security Symposium, USENIX Security 2020, 2020.

Publication history
  • Received: 2023-10-31
  • Revised: 2024-04-24
  • Available online: 2024-05-11
  • Issue published: 2024-05-30
