- …the relevant field of study or possession of equivalent experience. Artificial Intelligence in Digital Health (AIDH, https://go.osu.edu/aidh) in the Department of Biomedical Informatics (BMI, http://bmi.osu.edu)…
- …on robot development and autonomous navigation, but based on the interests of the PhD fellow there are also opportunities to investigate planning and optimization of sensor placement, human-robot interaction…
- …of expertise in vision-based driving systems. Beyond autonomous vehicles, the project’s advances in sensorimotor imitation learning are expected to benefit a wide range of robotic applications, promoting more…
- …the Unit of Automation Technology and Mechanical Engineering. The objective of FAST-Lab is the seamless knowledge integration of humans and machines/robots, creating smart environments by capitalising…
- …including autonomous systems, robotics, cognitive and distributed sensing, and machine learning systems, among others. Successful candidates will be responsible team players, passionate about cutting-edge…
- …the focus on Non-road Multi-Robot Systems. The position is open at the levels of Assistant Professor, Associate Professor, or Professor, depending on the candidate’s qualifications and experience…
- …advances scientific knowledge, develops engineering methodologies, and solves cross-disciplinary problems across four Thrust Areas: Bioscience & Biomedical Engineering, Intelligent Transportation, Robotics…
- …the intersection of advanced manufacturing processes, robotics/automation, and AI/ML engineering research as part of the Ira A. Fulton Schools of Engineering (learn more at https…
- …humans how to autonomously perform them. A new robotic system will be developed, which will also contribute to in-house skill retention, combining ergonomics improvements with the possibility of automating…
- …world. We look forward to receiving your application! We are looking for a PhD student in AI and autonomous systems with a focus on Vision-Language-Action (VLA) Models to control multiple heterogeneous…