PhD candidate in Security Testing of Generative AI


The SnT is seeking a Doctoral Researcher to support research and development within the SEDAN group (https://www.uni.lu/snt-en/research-groups/sedan). We seek a candidate with expertise and/or interest in artificial intelligence and cybersecurity.

The candidate will have the opportunity to work on a collaborative project with a leading cybersecurity company, making it possible to validate results with, and receive feedback from, cybersecurity practitioners in the field.

As generative AI (GenAI) platforms and large language models (LLMs) are increasingly integrated into organizational workflows, they introduce new attack surfaces and vulnerabilities. Current efforts to secure these platforms are fragmented. While some commercial and open-source tools exist, they are often adapted from traditional application-security testing methodologies and fail to account for the specific challenges posed by GenAI. For example, adversarial attack testing, model inversion vulnerabilities, and API misuse scenarios remain underexplored, lacking robust frameworks to address them systematically.

The PhD student's research project will thus focus on defining a rigorous testing framework to bridge this gap and enable organizations to confidently deploy secure GenAI solutions: evaluating machine-learning models intrinsically, identifying the components of an AI pipeline and their vulnerabilities, and providing recommendations to mitigate them.

In addition, the candidate will also be involved in project reporting and dissemination and will participate in meetings with our partner. The project is academically oriented but involves applied research. It is a unique opportunity to develop new concepts in close collaboration with industry.

During the PhD studies, the candidate will have the opportunity to participate in and propose other projects within the group, and thus develop their own research agenda. We work on various topics related to applied ML and cybersecurity, including applications and security of LLMs.

For further information, please contact us at jerome.francois@uni.lu and lama.sleem@uni.lu.
