Research

My research interests lie in developing robust and trustworthy AI systems under uncertainty. I focus on generative flow networks, adversarial robustness, decision-making frameworks, and evaluation strategies for neural models, especially in safety-critical domains such as autonomous driving and security-sensitive AI applications. My work spans novel testing frameworks, defense strategies, and generative models powered by LLMs and GFlowNets.

Core Research Areas

🎯 GFlowNets for Decision Making

Exploring Generative Flow Networks (GFlowNets) for sampling diverse, high-reward decisions in uncertain environments, with applications such as music generation, code fuzzing, and autonomous testing.
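As a toy illustration of the sampling objective behind this line of work, the sketch below trains a small GFlowNet with the trajectory-balance loss on a hypothetical bit-string construction task. The environment, network, and hyperparameters (TinyGFN, reward, log_Z, etc.) are placeholders chosen for brevity, not code from any of the projects above.

```python
# Minimal, illustrative GFlowNet trained with the trajectory-balance (TB) objective
# on a toy task: build a bit string of fixed length, with higher reward for strings
# containing more 1s. All names and settings here are hypothetical placeholders.
import torch
import torch.nn as nn

LENGTH = 8       # trajectory length: one bit is appended per step
N_ACTIONS = 2    # append a 0 or a 1

def reward(state: torch.Tensor) -> torch.Tensor:
    # Strictly positive reward that favours strings with many 1s.
    return (state == 1.0).sum(dim=-1).float() + 1.0

class TinyGFN(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(LENGTH, hidden), nn.ReLU(), nn.Linear(hidden, N_ACTIONS)
        )
        self.log_Z = nn.Parameter(torch.zeros(1))  # learned log-partition function

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.policy(state)  # logits over the next bit

def tb_loss(model: TinyGFN, batch_size: int = 16) -> torch.Tensor:
    state = -torch.ones(batch_size, LENGTH)   # -1 marks positions not yet filled
    log_pf = torch.zeros(batch_size)          # running sum of log P_F over the trajectory
    for t in range(LENGTH):
        dist = torch.distributions.Categorical(logits=model(state))
        action = dist.sample()
        log_pf = log_pf + dist.log_prob(action)
        state = state.clone()
        state[:, t] = action.float()
    # Each final string has exactly one construction path, so log P_B = 0 and the
    # TB objective reduces to (log Z + sum log P_F - log R)^2.
    return ((model.log_Z + log_pf - torch.log(reward(state))) ** 2).mean()

if __name__ == "__main__":
    model = TinyGFN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        tb_loss(model).backward()
        opt.step()
```

In practice, stronger variants use richer state encodings and learn or fix a backward policy; the point here is only the shape of the objective.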

🛡️ Adversarial Robustness and Evaluation

Developed adversarial testing pipelines using CARLA, together with kernel density-based classifiers for distinguishing adversarial threats from benign inputs in autonomous driving. Designed novel adversarial patches and defenses during my work at TrojAI Inc.
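In its simplest form, the kernel-density idea is to fit a density model on feature representations of known-clean inputs and flag test inputs that fall in low-density regions. The sketch below shows that skeleton with scikit-learn; the synthetic feature arrays, bandwidth, and percentile threshold are assumptions for illustration and do not reproduce the actual CARLA pipeline.

```python
# Illustrative kernel-density detection skeleton: fit a Gaussian KDE on feature
# representations of known-clean inputs, then flag test inputs whose log-density
# falls below a threshold. The synthetic features, bandwidth, and 5th-percentile
# threshold are assumptions for this sketch, not the deployed pipeline.
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_density(clean_features: np.ndarray, bandwidth: float = 1.0) -> KernelDensity:
    """clean_features: (n_samples, d) array, e.g. penultimate-layer activations."""
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(clean_features)

def flag_adversarial(kde: KernelDensity, test_features: np.ndarray,
                     threshold: float) -> np.ndarray:
    """Boolean mask: True where a sample looks adversarial (low feature density)."""
    return kde.score_samples(test_features) < threshold   # score_samples = log p(x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 1.0, size=(500, 16))     # stand-in for clean features
    shifted = rng.normal(4.0, 1.0, size=(50, 16))    # stand-in for attacked features
    kde = fit_density(clean, bandwidth=1.5)
    threshold = np.percentile(kde.score_samples(clean), 5)   # 5th percentile of clean
    print(flag_adversarial(kde, shifted, threshold).mean())  # fraction flagged
```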

🔒 Neural Network Privacy and Verification

Analyzed privacy vulnerabilities of spiking neural networks under membership inference attacks and proposed systematic evaluation frameworks that integrate insights from formal methods and empirical risk analysis.
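For context, the generic loss-threshold form of a membership inference attack is sketched below: samples on which the target model's loss is unusually low are predicted to be training members. The toy model, data, and threshold calibration are illustrative stand-ins, not the spiking-network evaluation itself.

```python
# Generic loss-threshold membership inference sketch: predict that a sample was in
# the training set if the target model's loss on it is below a calibrated threshold.
# The toy model, data, and median-loss calibration are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_sample_loss(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Cross-entropy loss of each (x, y) pair under the target model."""
    return F.cross_entropy(model(x), y, reduction="none")

@torch.no_grad()
def predict_membership(model, x, y, threshold: float) -> torch.Tensor:
    """True where the loss is suspiciously low, i.e. the sample looks memorized."""
    return per_sample_loss(model, x, y) < threshold

if __name__ == "__main__":
    model = torch.nn.Linear(10, 5)                   # toy stand-in for the target model
    x_members = torch.randn(100, 10)                 # samples with known membership
    y_members = torch.randint(0, 5, (100,))
    threshold = per_sample_loss(model, x_members, y_members).median().item()
    x_query, y_query = torch.randn(20, 10), torch.randint(0, 5, (20,))
    print(predict_membership(model, x_query, y_query, threshold))
```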

Publications

2025

  1. On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis
    Guan, Junyi, Sharma, Abhijith, Tian, Chong, and Lahlou, Salem
    arXiv preprint arXiv:2502.13191 2025
  2. GAN inversion and shifting: recommending product modifications to sellers for better user preference
    Kumar, Satyadwyoom, Sharma, Abhijith, and Narayan, Apurva
    PeerJ Computer Science 2025
  3. Loss-Guided Auxiliary Agents for Overcoming Mode Collapse in GFlowNets
    Malek, Idriss, Sharma, Abhijith, and Lahlou, Salem
    arXiv preprint arXiv:2505.15251 2025

2024

  1. Assist Is Just as Important as the Goal: Image Resurfacing to Aid Model’s Robust Prediction
    Sharma, Abhijith, Munz, Phil, and Narayan, Apurva
    In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2024
  2. AVATAR: Autonomous Vehicle Assessment through Testing of Adversarial Patches in Real-time
    Sharma, Abhijith, Narayan, Apurva, Azad, Nasser Lashgarian, Fischmeister, Sebastian, and Marksteiner, Stefan
    IEEE Transactions on Intelligent Vehicles 2024

2023

  1. Vulnerability of CNNs Against Multi-Patch Attacks
    Sharma, Abhijith, Bian, Yijun, Nanda, Vatsal, Munz, Phil, and Narayan, Apurva
    In Proceedings of the 2023 ACM Workshop on Secure and Trustworthy Cyber-Physical Systems 2023
  2. NSA: Naturalistic Support Artifact to Boost Network Confidence
    Sharma, Abhijith, Munz, Phil, and Narayan, Apurva
    In 2023 International Joint Conference on Neural Networks (IJCNN) 2023

2022

  1. Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
    Sharma, Abhijith, Bian, Yijun, Munz, Phil, and Narayan, Apurva
    arXiv preprint arXiv:2206.08304 2022
  2. Soft Adversarial Training Can Retain Natural Accuracy
    Sharma, Abhijith, and Narayan, Apurva
    In Proceedings of the 14th International Conference on Agents and Artificial Intelligence (ICAART 2022) 2022
  3. Embedded Model Predictive Control Using Robust Penalty Method
    Sharma, Abhijith, Jugade, Chaitanya, Yawalkar, Shreya, Patne, Vaishali, Ingole, Deepak, and Sonawane, Dayaram
    arXiv preprint arXiv:2201.02697 2022