This fully-funded PhD research opportunity, supported by the EPSRC Doctoral Landscape Awards (DLA) and Cranfield, offers a competitive bursary of £22,000 per annum, covering full tuition fees. This PhD
-
AI techniques for damage analysis in advanced composite materials due to high velocity impacts - PhD
We are pleased to announce a self-funded PhD opportunity on the quantitative assessment of damage in composite materials due to high velocity impacts using AI techniques. Composite materials, such as
-
support to take relevant adaptation actions to reduce vulnerability to climate and water-related risks. This PhD research will develop a toolkit for agricultural water resources planning, helping to support
-
This self-funded PhD opportunity explores assured multi-sensor localisation in 6G terrestrial and non-terrestrial networks (TN–NTN), combining GNSS positioning, inertial systems, and vision-based
-
This self-funded PhD opportunity sits at the intersection of several research domains: multi-modal positioning, navigation and timing (PNT) systems, AI-enhanced data analytics and signal processing
-
of the overall efficiency of the system. Their degradation behaviour in different fuels (hydrogen, ammonia or bio-fuels) is yet to be understood. This PhD project aims to investigate the effect
-
This fully-funded PhD studentship, sponsored by the EPSRC Doctoral Landscape Awards (DLA), Cranfield University and Spirent Communications, offers a bursary of £24,000 per annum, covering full
-
This is an exciting PhD opportunity to develop innovative AI and computer vision tools to automate the identification and monitoring of UK pollinators from images and videos. Working at
-
This self-funded PhD opportunity focuses on assured multi-domain positioning, navigation, and timing (PNT), integrating data from space-based, terrestrial and platform-based sources of navigation
-
trust in digital communications and readily bypass conventional security controls. This PhD research proposes to design, develop, and validate a novel, explainable, multi-modal detection framework. By