- …motivated PhD candidate with interests and skills in computational modelling and simulations, fluid dynamics, mechanical engineering, physics, and applied mathematics. You should have experience in one or more…
- …allowance of £20,780 (2025/26 UKRI rate). Additional project costs will also be provided. Overview: This PhD will develop a Synovium-on-a-Chip, using 3D bioprinting, microfluidic engineering, and computational fluid dynamics (CFD) to create a dynamic, perfused system that mimics the human synovial environment. The platform will…
- …pathway. This project is in collaboration with Dr Richie Abel from Imperial College London, who is an expert in bone biology and will provide the high-resolution computed tomography (CT) scans of the bone…
- …the most energy-intensive infrastructures in modern economies, with their demand projected to rise sharply as digitalisation, artificial intelligence (AI), and cloud computing expand. This growth presents both challenges and opportunities for achieving net-zero carbon targets. While AI data centres are often perceived as passive…
- …-generation regenerative materials. This interdisciplinary project combines mechanical, materials, and biomedical engineering, offering training across fabrication, nanomechanical analysis, and computational…
- …migration, and accumulation of precipitated particles in CO2–water–rock systems using computational fluid dynamics (CFD) coupled with the discrete element method (DEM). The research outcomes will provide critical…
- …programmed in advance. If anything changes, it may fail. This project explores how to build more adaptable systems using vision-language-action (VLA) models. These combine computer vision (to see), natural language understanding (to interpret instructions), and action generation (to respond), enabling robots…
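The last snippet describes the VLA idea in one sentence: perceive an image, interpret an instruction, emit an action. As a purely illustrative companion, here is a minimal, runnable Python sketch of that perceive-interpret-act loop. Every class and method name below is invented for illustration; real VLA models (RT-2, OpenVLA, and similar) learn this mapping end to end inside a single network rather than through hand-written rules like these.

```python
# Hypothetical sketch of a vision-language-action (VLA) control loop.
# All names are invented for illustration; a real VLA model fuses the
# vision and language stages inside one learned transformer policy.
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Observation:
    image: bytes          # camera frame (vision input)
    instruction: str      # natural-language command


@dataclass
class Action:
    joint_deltas: Sequence[float]  # low-level motor command


class HypotheticalVLAPolicy:
    """Stand-in for a learned policy: pixels + text in, actions out."""

    def encode_vision(self, image: bytes) -> list[float]:
        # Placeholder for a vision encoder producing image features.
        return [float(b) / 255.0 for b in image[:8]]

    def encode_language(self, instruction: str) -> list[float]:
        # Placeholder for a language encoder mapping tokens to features.
        return [float(len(tok)) for tok in instruction.split()[:8]]

    def act(self, obs: Observation) -> Action:
        # A real model would fuse both modalities and decode action
        # tokens; here we just average features to keep the sketch
        # self-contained and runnable.
        fused = self.encode_vision(obs.image) + self.encode_language(obs.instruction)
        mean = sum(fused) / max(len(fused), 1)
        return Action(joint_deltas=[mean] * 7)  # e.g. a 7-DoF arm


if __name__ == "__main__":
    policy = HypotheticalVLAPolicy()
    obs = Observation(image=b"\x10\x20\x30\x40",
                      instruction="pick up the red block")
    print(policy.act(obs))
```

The point of the sketch is only the data flow: the same observation carries both an image and an instruction, and the policy's output is a motor command, which is what lets a VLA-driven robot adapt when the scene or the instruction changes instead of failing like a pre-programmed system.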