-
… emotion safety is crucial; Design interventions to reduce bias and improve fairness and safety in human-AI interaction. The research will combine computational modeling (e.g., NLP, machine learning, deep learning) with human-centered research (e.g., user studies, experimental design, qualitative analysis). We are looking not only …
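The posting above does not say which fairness notions or interventions are in scope; as a minimal, hedged sketch of the computational-modeling side, the Python snippet below measures a demographic parity gap between two user groups, one common diagnostic that precedes designing debiasing interventions. The function name, toy decisions, and group labels are illustrative assumptions, not part of the posting.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-decision rates between two groups.

    y_pred: binary model decisions (0/1); group: binary group membership (0/1).
    A gap near 0 means both groups receive positive decisions at similar rates,
    which is one narrow, easily audited notion of fairness.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy example: decisions from a hypothetical classifier for eight users.
decisions = [1, 0, 1, 1, 0, 0, 0, 1]
groups    = [0, 0, 0, 0, 1, 1, 1, 1]
print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
```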
-
… degree in Computer Science, Artificial Intelligence, Data Science, or related field; Solid background in machine learning, deep learning and foundation models such as Large Language Models; Strong programming skills (Python/C++); Proven interest in generative models …
-
… transparent and intelligible. Although explainable AI methods can shed some light on the inner workings of black-box machine learning models such as deep neural networks, they have severe drawbacks and limitations. The field of interpretable machine learning aims to fill this gap by developing …
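The snippet above contrasts post-hoc explainable AI with models that are interpretable by construction. As one minimal sketch of the latter (assuming scikit-learn and its bundled iris dataset, both chosen purely for illustration), a depth-limited decision tree exposes its entire decision logic as human-readable rules, with no external explanation method required.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a deliberately shallow tree so the whole model stays readable.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The complete decision logic prints as nested if/else rules over the features.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep neural network fitted to the same data would typically be more accurate, but it would need a separate, approximate explanation step to answer the same "why this prediction?" question.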
-
… identify challenges (such as raw material constraints, hydrogen availability, and infrastructure deployment), and analyze deep uncertainties. The research will guide sustainable transition strategies …
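The posting treats deep uncertainty analysis as a core task but does not name a specific method; the sketch below is a hedged, purely illustrative example of the exploratory style such work often takes: sample many plausible futures for the uncertain drivers mentioned (raw material supply, hydrogen availability, infrastructure delays) and report the spread of outcomes rather than a single forecast. All ranges and the toy cost model are assumptions, not project data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 10_000

# Hypothetical uncertain drivers, sampled over assumed plausible ranges.
hydrogen_price = rng.uniform(2.0, 8.0, n_scenarios)     # EUR per kg
material_supply = rng.uniform(0.5, 1.0, n_scenarios)    # fraction of demand met
infra_delay = rng.integers(0, 11, n_scenarios)          # years of deployment delay

# Stylized transition-cost index combining the three drivers (illustrative only).
cost_index = hydrogen_price * (2.0 - material_supply) + 0.3 * infra_delay

print(f"Median cost index: {np.median(cost_index):.1f}")
print(f"5th-95th percentile: {np.percentile(cost_index, 5):.1f}"
      f" to {np.percentile(cost_index, 95):.1f}")
```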