"A picture is worth a thousand words"... or so the saying goes. How much information can we extract from an image of an insect on a flower? What species is the insect? What species is the flower? Where was the photograph taken? And at what time of the year? What time of the day? What was the...
-
the area of end-to-end modular autonomous driving using computer vision and deep learning methods. This includes developing an efficient and interpretable image processing, vision-based perception and
-
, for instance, utilise conversational agents, computer vision, mixed reality, wearables etc. Disability, Technology, and Society: Research with a sociological or anthropological focus on the use of bespoke and/or
-
time. Our Faculty of Information Technology is globally recognised (ranked #40 in Data Science & AI, QS 2025 and #61 in Computer Science, Times Higher Education 2025), with our DSAI department leading
-
staff with a strong vision for education, engagement and research. A key focus will be on cultivating a collaborative and high-performing academic team. The role demands active participation in curriculum
-
vision and pattern recognition methods, will be utilized to automate the process of fingertip detection. These methods will be trained to learn patterns from fingertip features and detect them using object
-
optimisation of our enterprise and research computing infrastructure. In this pivotal role, you’ll drive excellence across storage, backup, and virtualisation systems—ensuring resilience, scalability, and
-
analysis, contextual analysis, audio feature extraction, and machine learning models to identify and assess potentially dangerous content. Similarly, computer vision models are implemented to analyse images
-
accepted by the intended users due to their limited capability to sustain long-term interactions. In this project we propose to develop compositional vision-language models for social robots, enabling them