PhD candidate in AI-driven Threat Mitigation Recommendation

The SnT is seeking a Doctoral Researcher to support the research and development work within the SEDAN group (https://www.uni.lu/snt-en/research-groups/sedan). We seek a candidate with expertise and/or interest in artificial intelligence and cybersecurity.

The candidate will have the opportunity to work on a collaborative project with a leading cybersecurity company, which makes it possible to validate the research with, and receive feedback from, cybersecurity practitioners in the field.

The PhD candidate will investigate how AI, and in particular Generative AI (GenAI), can be leveraged to support effective mitigation of cyber-threats. AI is revolutionizing the field of cybersecurity, for instance by enabling faster and more efficient responses to evolving threats. However, despite these advances, no existing tool is intelligent enough to fully aggregate and exploit diverse data sources in order to build a comprehensive, adaptive dashboard for making effective, strategic decisions. Decision-makers face a deluge of alerts and dashboards, whereas they need fewer but more efficient and actionable insights, tailored to the environments they manage and to their level of expertise, since actions can be of different types and at different levels (technical vs. organizational, for example). This calls for a solution that provides coherent, actionable insights tailored to the specific actors who must carry them out. Such a solution is essential to ensure an effective, collective response to threats while avoiding narrow or unsynchronized decisions.

The PhD student's research project will thus focus on aggregating heterogeneous OSINT (Open-Source Intelligence) sources and combining the retrieved data with cyber-risk indicators of the environment under evaluation. The main research question is how to automatically harmonize the retrieved information so that it can be analysed in a unified way and mapped onto multiple user-tailored outputs. This is necessary because the one-size-fits-all model has proven unsuccessful.
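
Purely as an illustration of this aggregation step, the following Python sketch normalizes records from two hypothetical OSINT feeds into a single schema and weights them by indicators of the target environment; all feed names, fields, and scores are invented placeholders, not part of the project specification.

    # Illustrative sketch only: normalize records from heterogeneous OSINT feeds
    # into one schema and weight them with risk indicators of the monitored
    # environment. Feed names, fields, and scores are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class ThreatRecord:
        source: str       # which OSINT feed the record came from
        identifier: str   # e.g. a CVE ID or advisory number
        description: str
        severity: float   # harmonized 0-10 score
        relevance: float  # weight with respect to the target environment

    # Hypothetical raw items; each feed uses its own field names and severity scale.
    FEED_A = [{"cve": "CVE-2024-0001", "summary": "RCE in web server", "cvss": 9.8}]
    FEED_B = [{"id": "ADV-42", "text": "Phishing campaign", "risk": "high"}]

    SEVERITY_WORDS = {"low": 3.0, "medium": 5.0, "high": 8.0, "critical": 9.5}

    def normalize(environment_assets: set[str]) -> list[ThreatRecord]:
        records = []
        for item in FEED_A:
            desc = item["summary"]
            relevance = 1.0 if any(a in desc.lower() for a in environment_assets) else 0.2
            records.append(ThreatRecord("feed_a", item["cve"], desc, float(item["cvss"]), relevance))
        for item in FEED_B:
            records.append(ThreatRecord("feed_b", item["id"], item["text"],
                                        SEVERITY_WORDS.get(item["risk"], 5.0), 0.5))
        # Rank by combined severity and environment relevance.
        return sorted(records, key=lambda r: r.severity * r.relevance, reverse=True)

    if __name__ == "__main__":
        for rec in normalize({"web server", "mail gateway"}):
            print(f"{rec.identifier}: severity={rec.severity}, relevance={rec.relevance}")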

Large Language Models (LLMs) and knowledge graph models are expected to harmonize formats and semantics, but many open questions remain about their proper customization, fine-tuning, and extension to include context- or time-specific information, and above all about how to automate the process as much as possible so that analytics pipelines can be generated and reconfigured automatically as new data feeds arrive.
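
Again purely as an illustration, the sketch below shows how an LLM could be prompted to map a free-text advisory onto a fixed schema that a downstream analytics pipeline can consume; call_llm is a hypothetical stand-in for whichever model API would actually be used, and its response is canned so the example runs offline.

    # Illustrative sketch: harmonizing a free-text OSINT advisory into a fixed
    # JSON schema with an LLM. `call_llm` is a hypothetical placeholder for the
    # actual model API; here it returns a canned answer so the sketch runs offline.
    import json

    SCHEMA_PROMPT = """Extract the following fields from the advisory below and
    answer with JSON only: threat_type, affected_assets (list of strings),
    recommended_action_technical, recommended_action_organizational.

    Advisory:
    {advisory}
    """

    def call_llm(prompt: str) -> str:
        # Placeholder response standing in for a real LLM call.
        return json.dumps({
            "threat_type": "phishing",
            "affected_assets": ["mail gateway"],
            "recommended_action_technical": "tighten spam filtering rules",
            "recommended_action_organizational": "run an awareness campaign",
        })

    def harmonize(advisory: str) -> dict:
        """Map a raw advisory onto the common schema used by the analytics pipeline."""
        record = json.loads(call_llm(SCHEMA_PROMPT.format(advisory=advisory)))
        # Minimal validation before the record enters the pipeline.
        assert {"threat_type", "affected_assets"} <= set(record)
        return record

    if __name__ == "__main__":
        print(harmonize("Large phishing campaign targeting corporate mail users."))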

In addition, the candidate will also be involved in project reporting and dissemination and will participate in meetings with our partner. The project is academically oriented but involves applied research: it is a unique opportunity to develop new concepts in close collaboration with industry.

During the PhD studies, the candidate will have the opportunity to participate in and propose other projects within the group, and thus also to develop their own research agenda. We work on various topics related to applied ML and cybersecurity, including the applications and security of LLMs.


