Position excerpts:

- …Vision–Language–Action Models: practical experience with multimodal models combining vision, language, and action for embodied agents, robotics, or autonomous driving applications; perception and…
- …multimodal interaction. Our research aims to bridge the gap between visual perception and actionable assistance, with applications including video-based skill coaching and instruction…
- …Machine Learning and Artificial Intelligence. Solid mathematical and analytical skills; knowledge of statistical machine learning, robotic perception, and multimodal AI algorithms; experience in programming…
- …competitive research proposals. You should have experience in the following areas: Applied Machine Learning for Autonomous Systems — developing and deploying ML models for perception, prediction…
- …simultaneous participants; developing analysis pipelines combining multimodal data streams; performing complex analyses of multimodal data; communicating findings to internal and external collaborators; publishing…
- …how language networks arise from multi-scale neurobiological variability. The position focuses on integrating multimodal neuroimaging (structural MRI, diffusion-weighted imaging, functional MRI, and receptor…
- Autonomous Navigation and Motion Planning in Arctic Environments; Multimodal Perception, Localization, and Mapping for Robots in Extreme Visibility Conditions; Source Localization and Contaminant Mapping…
- The Department of Electronic Systems at The Technical Faculty of IT and Design invites applications for a position as research assistant or postdoc in the field of novel AI-based perception…