(e.g., conformal prediction, risk-aware decisions); and robotics for fabs: AMR planning/tasking, robotic vision & multimodal perception, wafer/FOUP/tool handling, and RL/Sim2Real. The successful candidate
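For concreteness, a minimal split conformal prediction sketch in Python. This is an illustration only: the model, calibration arrays, and alpha are hypothetical placeholders, not taken from the listing above.

```python
import numpy as np

# Split conformal prediction for regression (illustrative sketch).
# Assumes any fitted point predictor with a scikit-learn-style `model.predict`.
def conformal_interval(model, X_cal, y_cal, X_new, alpha=0.1):
    # Nonconformity scores: absolute residuals on a held-out calibration set.
    scores = np.abs(y_cal - model.predict(X_cal))
    n = len(scores)
    # Finite-sample-corrected quantile level, clipped to 1.0 for small n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    pred = model.predict(X_new)
    # Under exchangeability, [pred - q, pred + q] covers the true value
    # with probability at least 1 - alpha.
    return pred - q, pred + q
```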
-
- Multimodal Perception and Localization for Robots in Extreme Visibility Conditions
- Source Localization and Hazard Assessment Using Multisensor Fusion (see the sketch below)
- Semantic-Based Exploration Strategies for High-Risk
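As a toy illustration of source localization from fused sensor readings, here is a nonlinear least-squares multilateration sketch; the sensor layout and range measurements are made-up example values, not from any listing.

```python
import numpy as np
from scipy.optimize import least_squares

# Estimate a source position from noisy range readings at four sensors.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ranges = np.array([7.1, 7.2, 7.0, 7.3])  # noisy sensor-to-source distances

def residuals(p):
    # Predicted minus measured distance for each sensor.
    return np.linalg.norm(sensors - p, axis=1) - ranges

# Nonlinear least squares starting from the center of the search area.
sol = least_squares(residuals, x0=np.array([5.0, 5.0]))
print("estimated source position:", sol.x)
```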
-
- Human–robot interaction and ergonomics
- Automation and robotised workplaces (industry use-cases)
- Field robotics and harsh-environment robotics
- Predictive maintenance, sensor systems, multimodal monitoring
- AI/ML
-
of responses to images and model these representations with AI models (deep neural networks, including topographical networks; multimodal models; Large Language Models), 2) define and model dimensions related
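One common way to model image responses with deep networks is a linear encoding model on pretrained features; the following is a hedged sketch of that idea, with random stand-ins for the stimuli and the measured responses (real data would come from experiments).

```python
import torch
import torchvision.models as models
from sklearn.linear_model import RidgeCV

# Placeholder stimuli and "responses" for illustration only.
images = torch.rand(50, 3, 224, 224)
responses = torch.rand(50).numpy()

backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()     # read out penultimate-layer features
backbone.eval()
with torch.no_grad():
    feats = backbone(images).numpy()  # shape (50, 512) feature matrix

# Regularized linear mapping from network features to measured responses.
readout = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(feats, responses)
print("in-sample R^2:", readout.score(feats, responses))
```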
-
-agents for perception and communication. Candidates ideally have a background in computer science, electrical engineering, or related fields, and a strong interest in machine learning, optimization
-
toward generative and multimodal AI methods that connect simulation, perception, and control within large-scale digital twins of urban traffic systems in Munich. The focus is on advancing semantic
-
— localization and mapping (e.g., SLAM), motion planning, and semantic perception — focusing on multimodal sensor data fusion (LiDAR, RGB-D, IMUs) for robust real-world performance. Research areas include
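As one small, self-contained slice of such multimodal fusion, a complementary-filter sketch that blends high-rate (biased) gyro integration with sparse absolute heading fixes, e.g. from LiDAR scan matching; all signals below are synthetic, and the gains are illustrative assumptions.

```python
import numpy as np

dt, gain = 0.01, 0.5         # IMU period [s]; correction gain per absolute fix
true_rate, bias = 0.1, 0.02  # synthetic ground-truth turn rate and gyro bias

def fuse_step(heading, gyro_rate, abs_heading=None):
    heading += gyro_rate * dt                  # high-rate dead reckoning (IMU)
    if abs_heading is not None:                # sparse drift-free fix (e.g., LiDAR)
        # Wrapped angular error, pulled toward the fix to bound gyro drift.
        err = np.arctan2(np.sin(abs_heading - heading),
                         np.cos(abs_heading - heading))
        heading += gain * err
    return heading

heading = 0.0
for t in range(1, 2001):
    gyro = true_rate + bias + np.random.normal(0.0, 0.01)
    fix = true_rate * t * dt if t % 100 == 0 else None
    heading = fuse_step(heading, gyro, fix)
print("fused vs. true heading [rad]:", heading, true_rate * 2000 * dt)
```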