169 Postdoctoral positions at University of Oxford
- to underfill at Grade 6 (£34,982 - £40,855 p.a.) if the candidate holds a relevant degree and is working towards a PhD/DPhil) together with established knowledge in computer architecture and hardware security, significant
- hold, or be close to completion of, a relevant PhD/DPhil in one of the following subjects: computational genomics, genetic or molecular epidemiology, medical statistics, or statistical genetics. You must
- energy densities exceeding LMFP and competitive with NMC. A postdoctoral research position is available on this 3D-CAT project in the area of computer modelling and materials design of lithium battery
- of collaborative projects, working closely with clinicians, imaging experts, and computational scientists across the Oxford–Novartis Collaboration for AI in Medicine. You must hold a PhD/DPhil in Statistics
- funded by UKRI EPSRC and is fixed term for 12 months. You will be contributing to a joint UKRI EPSRC–NSF CBET project on sustainable computer networks, with a focus on carbon emissions reduction and
- About the role: We are seeking a highly motivated senior postdoctoral researcher in computational biology to join our research group. Our laboratory investigates how T cells interact with
- We are seeking a Postdoctoral Researcher in Human-AI interaction to join a research group focused on studying learning and decision-making in humans and machine learning systems, led by Prof Chris
- of agentic behaviour and publishing high-impact research. Candidates should possess a PhD (or be near completion) in Computer Science, AI, Security, or a related field. You will have a strong background
- on evaluating the ability of large language models (LLMs) to replicate results from the arXiv.org repository across the computational sciences and engineering. You should have a PhD/DPhil (or be near completion
- with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly