- Vision–Language–Action models: practical experience with multimodal models combining vision, language, and action for embodied agents, robotics, or autonomous driving applications
- Robotics paradigms for in-space servicing, assembly, and manufacturing; space debris removal; XR immersive teleoperation; robot multi-modal perception (vision and tactile); and multi-robot cooperation; suited to researchers with an interest in non-terrestrial robotic environments
- Impedance/admittance and force control; artificial intelligence and deep learning concepts for robotics (computer vision, tactile sensing, reinforcement learning); robotic simulation tools
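The impedance/admittance control mentioned above maps a measured external force to a commanded motion through a virtual mass–damper–spring model, M·a + D·v + K·x = F_ext. A minimal one-degree-of-freedom sketch, purely illustrative (the gains, time step, and function name are assumptions, not from the posting):

```python
# Illustrative 1-DoF admittance controller (not from the posting).
# A measured external force drives virtual dynamics M*a + D*v + K*x = f_ext;
# the integrated (x, v) is what would be sent to the robot as a motion command.

def admittance_step(x, v, f_ext, M=1.0, D=20.0, K=100.0, dt=0.001):
    """Advance the virtual dynamics one step; returns the new (position, velocity)."""
    a = (f_ext - D * v - K * x) / M   # acceleration from the virtual model
    v_new = v + a * dt                # integrate velocity
    x_new = x + v_new * dt            # semi-implicit Euler: use updated velocity
    return x_new, v_new

# A constant push settles the commanded offset near x = f_ext / K (here 10/100 = 0.1),
# so stiffness K sets how compliantly the robot yields to contact forces.
x, v = 0.0, 0.0
for _ in range(20000):                # 20 s at a 1 kHz control rate
    x, v = admittance_step(x, v, f_ext=10.0)
```

With these gains the virtual system is critically damped (D = 2·sqrt(K·M)), so the offset converges without oscillation; raising K makes the behavior stiffer, lowering it makes the robot more compliant.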
Application Deadline: 15 Jan 2026, 23:59 (Europe/Luxembourg)
Type of Contract: Temporary
Job Status: Full-time
Hours Per Week: 40
Funded through the EU Research Framework Programme: Not funded by an EU programme