Adversarial Attacks on LiDAR-Based Tracking Across Road Users: Evaluation and a Novel Target-Aware Black-Box Method
In this study, we investigate the robustness of neural network-based LiDAR point cloud tracking models under adversarial attacks, an aspect critical to road user safety yet often overlooked. Despite incorporating advanced architectures such as Transformers or Bird's Eye View (BEV) representations, these models tend to neglect robustness against challenges such as adversarial attacks, domain shifts, and data corruption. Our study focuses specifically on robustness against adversarial attacks. We begin by establishing a unified framework for conducting adversarial attacks within the realm of 3D object tracking, enabling a thorough investigation of both white-box and black-box attack settings. For white-box attacks, we design loss functions tailored to various tracking paradigms and extend existing methods such as FGSM, C&W, and PGD to the point cloud domain. For black-box attacks, we introduce a novel transfer-based approach, the Target-aware Perturbation Generation (TAPG) algorithm, with the dual objectives of achieving high attack performance and maintaining low perceptibility. TAPG employs a heuristic strategy to enforce sparse attack constraints and uses random sub-vector factorization to improve transferability. Our experiments reveal significant vulnerabilities in advanced tracking methods under both black-box and white-box attacks, underscoring the necessity of integrating resistance to adversarial attacks into the design of LiDAR-based tracking models. Moreover, compared with existing attack methods, TAPG strikes an optimal balance between attack potency and perturbation concealment.
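To make the white-box setting concrete, the following is a minimal sketch of how a gradient-based attack such as PGD can be extended to point cloud coordinates, which the abstract describes at a high level. It is an illustrative example only, not the paper's implementation: the toy network SimpleTracker, the function pgd_attack_pointcloud, and the budget parameters eps, alpha, and steps are all assumptions introduced here, and the paper's tailored tracking losses and the TAPG algorithm are not reproduced.

import torch
import torch.nn as nn


class SimpleTracker(nn.Module):
    """Hypothetical stand-in network; real trackers (BEV/Transformer-based) differ."""

    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 4),  # e.g. a (x, y, z, confidence) prediction per cloud
        )

    def forward(self, points):            # points: (B, N, 3)
        feats = self.mlp(points)           # per-point features: (B, N, 4)
        return feats.max(dim=1).values     # global max pooling -> (B, 4)


def pgd_attack_pointcloud(model, points, target, loss_fn,
                          eps=0.05, alpha=0.01, steps=10):
    """PGD on point coordinates, constrained to an L-inf ball of radius eps."""
    adv = points.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), target)
        grad = torch.autograd.grad(loss, adv)[0]
        # Ascend the loss, then project back into the eps-ball around the clean cloud.
        adv = adv.detach() + alpha * grad.sign()
        adv = points + torch.clamp(adv - points, -eps, eps)
    return adv.detach()


if __name__ == "__main__":
    model = SimpleTracker()
    clean = torch.rand(2, 1024, 3)      # two clouds of 1024 points each
    target = torch.zeros(2, 4)          # dummy regression target
    adv = pgd_attack_pointcloud(model, clean, target, nn.MSELoss())
    print((adv - clean).abs().max())    # perturbation magnitude stays within eps

In this sketch the perturbation budget is enforced in coordinate space; a black-box method such as TAPG instead generates perturbations on a surrogate model under sparsity constraints and relies on transferability, so no gradients of the victim model are required.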