Employer
- AcademicTransfer
- Delft University of Technology (TU Delft)
- Eindhoven University of Technology (TU/e)
- Erasmus University Rotterdam
- European Space Agency
- Leiden University
- The Open Universiteit (OU)
- University of Amsterdam (UvA)
- University of Groningen
- University of Twente (UT)
- Utrecht University
- Wageningen University & Research
- …minimizing both false positives and false negatives. By combining static and dynamic analysis, using a balanced mix of AI with formal methods and testing techniques, we strive to make vulnerability detection more accurate, intelligent, explainable, and usable in practice. The project is cutting-edge…
- …focused on integrating generative AI (e.g., LLMs) with symbolic reasoning, where the latter can verify that the output of the former is compliant with explicit and formal system constraints. The position is within the context of the EU-funded SmartEM project, which aims to use AI-assisted methods to create industrial surrogate…
- …control synthesis methods. We invite highly motivated students with a strong background in discrete-event systems, supervisory control theory, and formal methods to apply for the PhD position within the Supervisory Control group (see Group Supervisory Control), which is part of the Control…
- …understandable explanations from machine learning models. We will achieve this together by creating the first mathematical framework for explainable AI and developing new explanation methods. This will involve using tools from mathematical machine learning theory to prove mathematical guarantees about the performance of such new explanation methods, as well as programming to test out…
- …modelling and empirical analysis. Challenges in this domain require a rigorous approach and the development of novel methods, including but not limited to: improvement of our understanding of psychological, cultural and social…