Employer
- Aalborg University
- Carnegie Mellon University
- CNRS
- Delft University of Technology (TU Delft)
- Embry-Riddle Aeronautical University
- ETH Zürich
- European Space Agency
- George Washington University
- Georgia Southern University
- Imperial College London
- King's College London
- KTH Royal Institute of Technology
- Luleå University of Technology
- Mohammed VI Polytechnic University
- New York University
- New York University Abu Dhabi
- Northeastern University
- Oak Ridge National Laboratory
- Princeton University
- Sano Centre for Computational Personalized Medicine
- Stanford University
- SUNY Polytechnic Institute
- Technical University of Munich
- Texas A&M AgriLife
- Texas A&M University
- The University of Arizona
- University of Adelaide
- University of Cambridge
- University of Cincinnati
- University of Miami
- University of Nevada, Reno
- University of North Carolina at Chapel Hill
- University of Oxford
- University of Sydney
- Vrije Universiteit Brussel
- …and implementing vision processing algorithms that enable robust robot tracking and autonomy. The ideal candidate will possess hands-on experience designing, implementing, and deploying computer vision…
- Position: Postdoctoral Researcher in Vision-Language-Action Models and Physically Informed Neural Networks for Surgical Robotics. Publication date: 30.09.2025. Closing date: 29.10.2025. Level of education: PhD.
- …of 360-degree vision and robotics (collaborative robot arm, humanoid robot) to enable the dynamic handover of an object from a person to a robot and to carry it together while moving it. Activity 1: Substantive…
- …computing. You will join the Vision & Human-Robot Interaction (VHR) Group, which brings together researchers working at the intersection of computer vision, robotics, and assistive technologies. The team is…
- …adaptive robotic strategies. The work will involve the integration of: advanced motion planning and control algorithms; multi-modal perception techniques (e.g., vision, tactile, force); machine learning models…
- …skills for this position are: good knowledge of the technological challenges of agricultural/viticultural robotics; proven skills in vision-based robot modeling and control, computer vision, and…
- We are seeking an outstanding candidate for a Postdoctoral position in the field of robot motion and control algorithms for soft material handling, starting immediately. We are seeking a highly…
- …on safety, cooperation, and efficiency in human-robot teams. Multi-modal ML: expertise in working with diverse data types, such as vision, speech, images, and physiological signals. Experience integrating…
- …on Aerial and Space robotics. The vision of RAI is to close the gap between theory and real life, and the team has strong expertise in field robotics. Specific application areas of focus…