178 Postdoctoral positions at University of Oxford in United Kingdom
- …to the 4th February 2026. You will be investigating the safety and security implications of large language model (LLM) agents, particularly those capable of interacting with operating systems and external APIs…
- …Institute). The position is fixed-term for 36 months and will provide opportunities to work on aircraft icing modelling and experimental campaigns. Ice crystal icing is one of the least well characterised…
- …on evaluating the ability of large language models (LLMs) to replicate results from the arXiv.org repository across computational sciences and engineering. You should have a PhD/DPhil (or be near completion…
- …but part-time working would be considered (minimum of 4 days, 30 hours per week, 0.8 FTE). About You: To be considered for this position you should have a PhD degree (or be near completion) in a relevant…
- …on and defensive mechanisms for safe multi-agent systems, powered by LLM and VLM models. Candidates should possess a PhD (or be near completion) in Machine Learning or a highly related discipline. You…
- …-£46,913 per annum. This is a full-time, fixed-term position for 2 years. We are seeking an enthusiastic cardiovascular immunologist or an expert in immunology and/or vascular biology to join Professor…
- Applications are invited for a Postdoctoral Research Associate in Atmospheric Dynamics position. This role is part of the recently funded NERC ‘Arctic Butterflies’ project to investigate the role…
- …University weighting. We are excited to offer this fixed-term Research Assistant position at the University of Oxford, under the supervision of Professor Nobuko Yoshida. The Research Assistant will be part of…
- …Professor Chris Russell. This is an exciting opportunity for you to work at the cutting edge of AI, contributing to a major shift in how we understand and apply foundation models. The position is full-time…
- …with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly…