-
research initiative funded by ARIA, titled Aggregating Safety Preferences for AI Systems: A Social Choice Approach. The project operates at the interface of AI safety and computational social choice, and
-
postdoctoral research associate (PDRA) position on the project Aggregating Safety Preferences for AI Systems: A Social Choice Approach, funded by ARIA under the Safeguarded AI TA1.4 call. The project
-
to 4th February 2026. You will be investigating the safety and security implications of large language model (LLM) agents, particularly those capable of interacting with operating systems and external APIs
-
sits within the Institute of Biomedical Engineering (IBME) in the University’s Department of Engineering Science and is supported by a £25m 10-year donation to the University. This full-time post is
-
check • University security screening (e.g. identity checks). The closing date for applications is 12 noon on 28 August 2025. Applications for this vacancy are to be made online. You will be required
-
fixed-term to 28th February 2027. There has been substantial recent interest in, and effort towards, the systematic evaluation of the safety of LLM and VLM agents, but this work has focused exclusively on single-agent
-
DPhil students, manage data analysis pipelines, and contribute to publications and grant writing. This post is ideally suited to someone aiming to secure a long-term fellowship and build an independent
-
of cells and modules in realistic drive cycle scenarios. This activity will link closely with the design of battery management systems (BMS) and cooling systems to maximise the safety of the developed propulsion system. You should
-
hepatitis and liver disease. This post is funded by the National Institute for Health and Care Research (NIHR) as part of a significant research programme that leverages large-scale healthcare datasets