AVATAR: Autonomous Vehicle Assessment through Testing of Adversarial Patches in Real-time

1University of Waterloo, 2Western University, 3AVL, Graz, Austria
*Indicates Main Author, Indicates Equal Advising

The video shows physical real-world testing with the adversarial patch. On the left, the patch is validated on a treadmill-based test-bed with a toy car, and on the right, the patch is applied to a real car moving in an outdoor environment.

Abstract

Autonomy in vehicles is achieved using AI for control and perception tasks. The visual input from cameras forms the foundation for the control that follows. Existing works have shown adversarial vulnerabilities in AI-based visual tasks. One major threat is adversarial patches, which can impact decision making in autonomous vehicles (AVs). Current evaluation methods often rely on static datasets with unrealistic patch placements. This paper proposes a novel framework, AVATAR, to standardize adversarial patch testing and analysis. AVATAR creates a simulation environment in which the patch is integrated with actors in the scene to enhance realism during testing. The vehicle’s behaviour is captured as a time-series trace for post-simulation quantitative analysis. Furthermore, we introduce an Adversarial Trace Classifier (ATC) that analyzes these traces to predict the potential presence of adversarial patches. The aim is to detect vulnerabilities in object detection algorithms for the design of robust perception systems for AVs. Hence, AVATAR paves the way for safer deployment of autonomous vehicles in the real world.

📑 Background

Inference Overview

The figure illustrates the levels of the adversarial test paradigm. Level 1 applies patches offline to image datasets for object detection; it dominates existing research but lacks realism. Level 2 introduces more realistic scenarios by analyzing recorded attacked image frames but lacks real-time validation. Level 3 addresses this gap by enabling real-time adversarial testing within simulation environments, which is crucial for autonomous vehicle evaluation and often overlooked in existing studies.

Our Contribution

  • 🌟 Motivation and Framework Proposal: Identified the need for Level 3 adversarial patch testing and addressed the absence of existing frameworks by introducing AVATAR, enabling dynamic robustness evaluation for AVs.
  • 🌟 Dataset and Model Preparation: Generated the CARLA Town 10HD (CART) dataset and trained object detection models along with adversarial patches specific to Town 10 in CARLA.
  • 🌟 Simulation and Patch Integration: Integrated trained adversarial patches into the CARLA environment by modifying asset blueprints and developed the AVATAR framework for real-time dynamic evaluation.
  • 🌟 User Interface Development: Designed a GUI for configuring test parameters, setting up experiments, and running simulations in CARLA.
  • 🌟 Adversarial Trace Detection: Proposed the Adversarial Trace Classifier (ATC) to predict the presence of adversarial patches post-simulation, ensuring fair and unbiased evaluation.

🚗 Adversarial Patch in CARLA as a Material Asset

Figure: Material and vehicle blueprints in Unreal Engine.

The adversarial patch can be attached to a CARLA actor (e.g., a moving vehicle) or placed statically in the map. The process begins by converting the trained patch image (.png/.jpeg) into a CARLA asset (.uasset) via Unreal Engine (UE). A material instance is created in UE with high roughness and opacity for optimal visibility. Once converted, the patch can be applied to any CARLA asset. For example, as shown in the figure, the patch is affixed to a board mounted on a vehicle, enabling realistic evaluation of its impact on object detection in dynamic, near-real-world scenarios.
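Below is a minimal sketch, using the CARLA Python API, of how a patch-carrying prop could be attached to a vehicle once the .uasset has been created in UE; the blueprint ID and offsets are illustrative placeholders, not the exact names used in AVATAR.

    import carla

    # Connect to a locally running CARLA server (default port 2000).
    client = carla.Client("localhost", 2000)
    client.set_timeout(10.0)
    world = client.get_world()
    bp_lib = world.get_blueprint_library()

    # Spawn the vehicle that will carry the adversarial patch board.
    vehicle_bp = bp_lib.find("vehicle.tesla.model3")
    spawn_point = world.get_map().get_spawn_points()[0]
    vehicle = world.spawn_actor(vehicle_bp, spawn_point)

    # Spawn the patch prop (hypothetical custom blueprint built from the
    # converted .uasset) and attach it rigidly behind the vehicle.
    patch_bp = bp_lib.find("static.prop.adv_patch_board")
    patch_offset = carla.Transform(carla.Location(x=-2.3, z=1.0))
    patch = world.spawn_actor(patch_bp, patch_offset, attach_to=vehicle)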

📝 AVATAR Block Diagram

Figure: AVATAR block diagram.

The AVATAR framework realizes Level 3 adversarial testing by replicating real-world conditions for AV evaluation. It abstracts the CARLA PythonAPI client, allowing users to configure parameters and generate a setup file. This file initializes the CARLA environment, loads the object detection model, and applies adversarial patches for experiments. A PyGame window then visualizes the system's behavior under attack in the CARLA simulation. AVATAR enhances adversarial patch testing for autonomous vehicles, ensuring realistic, reproducible, and customizable simulations.
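As a concrete illustration, the snippet below sketches the kind of setup file the GUI could emit before launching a run; the field names and values are assumptions for illustration, not the exact AVATAR schema.

    import json

    # Hypothetical setup dictionary mirroring the four GUI setting groups.
    setup = {
        "server": {"host": "localhost", "port": 2000, "fps": 20, "seed": 42, "sync": True},
        "world": {"town": "Town10HD", "weather": "ClearNoon", "vehicles": 40, "pedestrians": 70},
        "ego": {"model": "vehicle.tesla.model3", "agent": "behavior", "detector": "yolo"},
        "attack": {"type": "patch", "placement": "billboard", "magnitude": 1.0},
    }

    # The simulation loop later reads this file to initialize CARLA and the detector.
    with open("avatar_setup.json", "w") as f:
        json.dump(setup, f, indent=2)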

🛢️ CART: CARLA Town 10 Dataset

Figure: Sample scenes from the CARLA Town 10 (CART) dataset.

The dataset, generated using CARLA’s Town 10 simulation environment, comprises 4,500 photo-realistic images (4,300 for training, 200 for validation) with a resolution of 1280×720 pixels. It features five key object classes: Person, Vehicle, Motorbike, Traffic Light, and Stop Sign. Captured using autopilot driving in dynamic, moving-camera scenarios, the dataset covers diverse lighting conditions (noon, sunset, night) and weather settings (rain, clear, fog). Traffic includes approximately 40 vehicles and 70 pedestrians, ensuring realistic urban scenes. Annotations are provided in YOLO format. CART is useful for training object detection models and adversarial patches for use in CARLA or in the real world.
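For reference, a minimal sketch of reading one CART label file in YOLO format (class index followed by normalized center coordinates and box size) is shown below; the class ordering is assumed to follow the list above.

    # YOLO format: "<class> <x_center> <y_center> <width> <height>", normalized to [0, 1].
    CLASSES = ["Person", "Vehicle", "Motorbike", "Traffic Light", "Stop Sign"]

    def read_yolo_labels(path, img_w=1280, img_h=720):
        boxes = []
        with open(path) as f:
            for line in f:
                cls, xc, yc, w, h = line.split()
                xc, yc = float(xc) * img_w, float(yc) * img_h
                w, h = float(w) * img_w, float(h) * img_h
                boxes.append({
                    "class": CLASSES[int(cls)],
                    "xmin": xc - w / 2, "ymin": yc - h / 2,
                    "xmax": xc + w / 2, "ymax": yc + h / 2,
                })
        return boxes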

🖥️ AVATAR GUI in CARLA

Figure: The AVATAR GUI.

The Tkinter-based GUI features four components: CARLA Server Configuration (connects to the server and sets the seed, mode, and FPS), CARLA World Setting (configures resolution, town, weather, pedestrians, and vehicles), CARLA Actor Setting (sets the car model, agent type/behavior, and detection model), and Adversarial Attack Setting (chooses the attack type, magnitude, and patch placement on vehicles or billboards).
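A minimal Tkinter sketch of the four setting groups is given below; the widget layout and field names are illustrative rather than the exact AVATAR GUI code.

    import tkinter as tk
    from tkinter import ttk

    root = tk.Tk()
    root.title("AVATAR")

    # One labelled frame per setting group, each with simple text entries.
    groups = [
        ("CARLA Server Configuration", ["Host", "Port", "Seed", "Mode", "FPS"]),
        ("CARLA World Setting", ["Resolution", "Town", "Weather", "Pedestrians", "Vehicles"]),
        ("CARLA Actor Setting", ["Car Model", "Agent Type/Behavior", "Detection Model"]),
        ("Adversarial Attack Setting", ["Attack Type", "Magnitude", "Patch Placement"]),
    ]
    for title, fields in groups:
        frame = ttk.LabelFrame(root, text=title)
        frame.pack(fill="x", padx=8, pady=4)
        for row, name in enumerate(fields):
            ttk.Label(frame, text=name).grid(row=row, column=0, sticky="w")
            ttk.Entry(frame).grid(row=row, column=1)

    # In AVATAR this button would write the setup file used to launch the simulation.
    ttk.Button(root, text="Generate Setup File").pack(pady=6)
    root.mainloop()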

⚙️ Adversarial Trace Classifier

Figure: Adversarial Trace Classifier (ATC) pipeline and results.
  • Simulations were conducted to evaluate YOLO's confidence scores for detecting vehicles under normal and adversarial conditions.
  • Normal conditions showed consistent confidence scores (>85%) across diverse weather and lighting scenarios, indicating model robustness.
  • Adversarial patches caused significant confidence drops and higher variance, especially under optimal lighting (clear noon).
  • Adversarial patch effectiveness was reduced in poor-visibility scenarios such as rain, dust storms, and nighttime conditions.
  • Statistical analysis included 100 traces: 40 nominal and 60 adversarial, with random vehicle starting points.
  • Kernel Density Estimation (KDE) effectively distinguished nominal traces (a tight cluster) from anomalous traces (spread-out peaks).
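The sketch below illustrates how a KDE fitted on nominal confidence traces could flag an attacked trace; the library choice, bandwidth, and threshold are assumptions for illustration and may differ from the ATC described in the paper.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def fit_nominal_kde(nominal_traces, bandwidth=0.05):
        # Fit a KDE on per-frame detection confidences from patch-free runs.
        scores = np.concatenate(nominal_traces).reshape(-1, 1)
        return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(scores)

    def is_adversarial(kde, trace, threshold=-1.0):
        # A trace whose average log-density under the nominal KDE is low is flagged.
        log_density = kde.score_samples(np.asarray(trace).reshape(-1, 1))
        return log_density.mean() < threshold

    # Toy example: nominal runs cluster at high confidence, an attacked run drops.
    rng = np.random.default_rng(0)
    nominal = [rng.uniform(0.85, 0.98, 200) for _ in range(40)]
    attacked = rng.uniform(0.2, 0.7, 200)
    kde = fit_nominal_kde(nominal)
    print(is_adversarial(kde, attacked))  # expected: True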

🎥 Video Presentation

📚 BibTeX

@article{sharma2024avatar,
  title={AVATAR: Autonomous Vehicle Assessment through Testing of Adversarial Patches in Real-time},
  author={Sharma, Abhijith and Narayan, Apurva and Azad, Nasser Lashgarian and Fischmeister, Sebastian and Marksteiner, Stefan},
  journal={IEEE Transactions on Intelligent Vehicles},
  year={2024},
  publisher={IEEE}
}

🙏 Acknowledgements

We extend our gratitude to the members of the Real-time Embedded Software Group @UWaterloo for their invaluable insights, critical brainstorming sessions, and innovative ideas that greatly contributed to this work.