483 Hits in 6.0 sec

Universal Spectral Adversarial Attacks for Deformable Shapes [article]

Arianna Rampini, Franco Pestarini, Luca Cosmo, Simone Melzi, Emanuele Rodolà
2021 arXiv   pre-print
In this paper, we offer a change in perspective and demonstrate the existence of universal attacks for geometric data (shapes).  ...  However, the existence of "universal" attacks (i.e., unique perturbations that transfer across different data points) has only been demonstrated for images to date.  ...  Acknowledgments We gratefully acknowledge Luca Moschella for the technical help.  ... 
arXiv:2104.03356v1 fatcat:wrdemn4qw5gcxfyx4rysznl5eq

Adversarially Robust Hyperspectral Image Classification via Random Spectral Sampling and Spectral Shape Encoding

Sungjune Park, Hong Joo Lee, Yong Man Ro
2021 IEEE Access  
The spectral shape feature, f_s, holds the overall shape information of the target pixel's spectrum, which is not largely deformed under adversarial attacks, so that f_s can remain robust against  ...  craft adversarial examples with small and indistinguishable noise, the overall (increasing/decreasing) shape information of the spectral bands is not largely deformed even under adversarial attacks.  ... 
doi:10.1109/access.2021.3076225 fatcat:rcolohbp6zdyhhpzozhkrjee24

Boosting 3D Adversarial Attacks with Attacking On Frequency

Binbin Liu, Jinlai Zhang, Jihong Zhu
2022 IEEE Access  
Moreover, compared to adversarial point clouds generated by other adversarial attack methods, adversarial point clouds obtained by AOF contain more deformation than outliers.  ...  Recently, 3D adversarial attacks, especially adversarial attacks on point clouds, have elicited mounting interest.  ...  Then we calculate the cumulative distribution of spectral weights for each adversarial sample.  ... 
doi:10.1109/access.2022.3171659 fatcat:z4y34zvngzhvldgaci7khtgfhu

Generating Unrestricted 3D Adversarial Point Clouds [article]

Xuelong Dai, Yanjie Li, Hua Dai, Bin Xiao
2021 arXiv   pre-print
However, deep learning for 3D point clouds is still vulnerable to adversarial attacks, e.g., iterative attacks, point transformation attacks, and generative attacks.  ...  These attacks need to restrict perturbations of adversarial examples within a strict bound, leading to unrealistic adversarial 3D point clouds.  ...  Our attack is more "universal" when trained with the adversarial loss against PointNet.  ... 
arXiv:2111.08973v2 fatcat:tneyldijnrh7zcy2m5pqfmhw5u

Traffic Sign Detection Under Challenging Conditions: A Deeper Look into Performance Variations and Spectral Characteristics

Dogancan Temel, Min-Hung Chen, Ghassan AlRegib
2019 IEEE transactions on intelligent transportation systems (Print)  
We investigate the effect of challenging conditions through spectral analysis and show that challenging conditions can lead to distinct magnitude spectrum characteristics.  ...  Traffic signs are critical for maintaining the safety and efficiency of our roads. Therefore, we need to carefully assess the capabilities and limitations of automated traffic sign detection systems.  ...  [20] showed that a simple compression stage can minimize the effect of adversarial attacks in traffic sign recognition.  ... 
doi:10.1109/tits.2019.2931429 fatcat:zv2a57hxznewneosgz5vhzypwi

Attacking Point Cloud Segmentation with Color-only Perturbation [article]

Jiacen Xu, Zhe Zhou, Boyuan Feng, Yufei Ding, Zhou Li
2021 arXiv   pre-print
While adversarial attacks against point clouds have been studied, we found that all of them target single-object recognition, and the perturbation is done on the point coordinates.  ...  ., autonomous driving, geological sensing), it is important to fill this knowledge gap, in particular, how these models are affected under adversarial samples.  ...  KPConv: Flexible and deformable convolution for point clouds. Minimal adversarial examples for deep learning on 3D point clouds.  ... 
arXiv:2112.05871v2 fatcat:rru4yw6drjbrvpboxaruqo7asa

Advances in adversarial attacks and defenses in computer vision: A survey [article]

Naveed Akhtar, Ajmal Mian, Navid Kardan, Mubarak Shah
2021 arXiv   pre-print
In [2], we reviewed the contributions made by the computer vision community in adversarial attacks on deep learning (and their defenses) until the advent of year 2018.  ...  However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos.  ...  In another example of universal attacks, Rampini et al. [201] extended the notion to deformable geometric shapes.  ... 
arXiv:2108.00401v2 fatcat:23gw74oj6bblnpbpeacpg3hq5y

Liveness is Not Enough: Enhancing Fingerprint Authentication with Behavioral Biometrics to Defeat Puppet Attacks

Cong Wu, Kun He, Jing Chen, Ziming Zhao, Ruiying Du
2020 USENIX Security Symposium  
However, it is vulnerable to presentation attacks, in which an attacker spoofs with an artificial replica.  ...  Many liveness detection solutions have been proposed to defeat such presentation attacks; however, they all fail to defend against a particular type of presentation attack, namely the puppet attack, in which  ...  Acknowledgments We thank Kevin Butler and the anonymous reviewers for their comments.  ... 
dblp:conf/uss/Wu000D20 fatcat:qobvl5b7i5fclcohuy6n7rkbqy

PointBA: Towards Backdoor Attacks in 3D Point Cloud [article]

Xinke Li, Zhirui Chen, Yue Zhao, Zekun Tong, Yabang Zhao, Andrew Lim, Joey Tianyi Zhou
2021 arXiv   pre-print
Although most of them consider adversarial attacks, we identify that the backdoor attack is in fact a more serious threat to 3D deep learning systems but remains unexplored.  ...  Our proposed backdoor attack on 3D point clouds is expected to serve as a baseline for improving the robustness of 3D deep models.  ...  Hongpeng Li for his assistance in data processing and model development.  ... 
arXiv:2103.16074v3 fatcat:ms3225sj3rfw7eoytxc64nqq6y

ICASSP 2020 Table of Contents

2020 ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
TOPOLOGY OPTIMIZATION FOR IMAGE DENOISING — Wengtai Su, Chia-Wen Lin, National Tsing Hua University, Taiwan; Gene Cheung, Richard P. Wildes, York University  ...  SS-L3.2: DEFENDING GRAPH CONVOLUTIONAL NETWORKS AGAINST  ...  ELECTRO-MAGNETIC SIDE-CHANNEL ATTACK THROUGH ANALYSIS WITH RECURRENT DEEP LEARNING  ...  IVMSP-P1.9: 3D DEFORMATION SIGNATURE FOR DYNAMIC FACE RECOGNITION  ... 
doi:10.1109/icassp40776.2020.9054406 fatcat:6h7hh2hxhne4pbmphharu2et2m

Expressive 1-Lipschitz Neural Networks for Robust Multiple Graph Learning against Adversarial Attacks

Xin Zhao, Zeru Zhang, Zijie Zhang, Lingfei Wu, Jiayin Jin, Yang Zhou, Ruoming Jin, Dejing Dou, Da Yan
2021 International Conference on Machine Learning  
Recent findings have shown that multiple graph learning models, such as graph classification and graph matching, are highly vulnerable to adversarial attacks, i.e., small input perturbations in graph structures  ...  Existing defense techniques often defend against specific attacks on particular multiple graph learning tasks.  ...  RGM is a robust graph matching model against visual noise, including image deformations, rotations, and outliers for image matching, but it fails to defend against adversarial attacks on graph topology (Yu et al.)  ... 
dblp:conf/icml/ZhaoZZWJZJD021 fatcat:7thqnpakwvcm5emlwhxry2on4i

Secure and Robust Machine Learning for Healthcare: A Survey

Adnan Qayyum, Junaid Qadir, Muhammad Bilal, Ala Al Fuqaha
2020 IEEE Reviews in Biomedical Engineering  
attacks.  ...  settings (which is traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results that have shown that ML/DL are vulnerable to adversarial attacks.  ... 
doi:10.1109/rbme.2020.3013489 pmid:32746371 fatcat:wd2flezcjng4jjsn46t24c5yb4

A Probabilistic Approach to Estimating Allowed SNR Values for Automotive LiDARs in "Smart Cities" under Various External Influences

Roman Meshcheryakov, Andrey Iskhakov, Mark Mamchenko, Maria Romanova, Saygid Uvaysov, Yedilkhan Amirgaliyev, Konrad Gromaszek
2022 Sensors  
The authors propose a synthetic approach as a mathematical tool for designing a resilient LiDAR system.  ...  The work presents modelling results for the "false alarm" probability values depending on the selected optimality criterion.  ...  in the deforming approaches, the surface and/or the shape of the object is deformed by altering the position of a number of points.  ... 
doi:10.3390/s22020609 pmid:35062575 pmcid:PMC8781900 fatcat:mwmnzbn6kvc7diboxj2y5z2v4e

Table of Contents

2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
Poster 3.1 Deep Learning: Sparse and Imperceivable Adversarial Attacks — Francesco Croce (University of Tübingen) and Matthias Hein (University of Tübingen); Enhancing Adversarial Example  ...  Targeted Mismatch Adversarial Attack: Query With a Flower to Retrieve the Tower — Giorgos Tolias (Czech Technical University in Prague)  ... 
doi:10.1109/iccv.2019.00004 fatcat:5aouo4scprc75c7zetsimylj2y

2020 Index IEEE Transactions on Image Processing Vol. 29

2020 IEEE Transactions on Image Processing  
Shi, W., +, TIP 2020 375-388  ...  Image Super-Resolution as a Defense Against Adversarial Attacks.  ...  Chen, Z., +, TIP 2020 5431-5446  ...  Online Tensor Sparsifying Transform Based on Temporal Superpixels From Compressive Spectral Video Measurements.  ...  Online Alternate Generator Against Adversarial Attacks.  ... 
doi:10.1109/tip.2020.3046056 fatcat:24m6k2elprf2nfmucbjzhvzk3m
Showing results 1–15 of 483