Doctoral researcher in Hallucination Detection for Robust and Trustworthy LVLMs in Autonomous Driving

Large vision–language models (LVLMs) can describe driving scenes and support decisions [Li25], but they sometimes hallucinate objects, relations, or events that are not present [Liu24,Liu25]. In a safety-critical domain, reducing hallucinations and improving robustness and trustworthiness are essential. This PhD targets principled ways to detect, analyze, and mitigate hallucinations in video-based LVLMs for autonomous driving.

Objectives

  • Design, develop, and evaluate novel method(s) to detect and localize hallucinations in LVLM outputs for autonomous driving tasks
  • Investigate and propose mitigation strategies to reduce hallucinations or improve model confidence
  • Evaluate the reliability and robustness of the hallucination detection system under diverse visual and textual conditions, benchmarking against state-of-the-art methods

Primary experiments will be conducted in CARLA (https://carla.org), enabling controlled and repeatable evaluation of hallucinations under diverse driving conditions. The PhD student will work closely with POST Telecom (Luxembourg), which is co-funding this PhD position.

Research Environment

The PhD student will join the Secure and Reliable Software Engineering and Decision-Making (SerVal) group at the University of Luxembourg. SerVal conducts research on security and reliability in software engineering, with a particular focus on AI, data science, and decision-making, as well as on design, testing, and debugging techniques that improve software quality in domains such as FinTech, energy, and Industry 4.0.
Within this context, the PhD project will contribute to the group's growing research on trustworthy AI, focusing on methods that improve the transparency, reliability, and interpretability of intelligent systems in safety-critical domains such as autonomous driving.


