125 computational-solid-mechanics Postdoctoral research jobs at University of Oxford in the UK
-
full-stack approach to suppressing errors in quantum hardware. This research focuses on achieving practical quantum computation by integrating techniques ranging from hardware-level noise suppression
-
have completed, or be close to completing, a PhD/DPhil in a relevant quantitative field such as computational social science, computer science, or cognitive science. They will have a demonstrable track
-
on evaluating the abilities of large language models (LLMs) to replicate results from the arXiv.org repository across computational sciences and engineering. You should have a PhD/DPhil (or be near completion
-
University of Oxford (https://www.expmedndm.ox.ac.uk/mmm). You will be joining a highly interdisciplinary team of approximately 40 clinicians, computational biologists, statisticians, software engineers and
-
We are looking to appoint a postdoctoral researcher to work with a group of UK Higher Education Institutions to deliver a programme of mental health research. The work is funded by the Medical
-
research initiative funded by ARIA, titled Aggregating Safety Preferences for AI Systems: A Social Choice Approach. The project operates at the interface of AI safety and computational social choice, and
-
computational workflows on a high-performance cluster. You will test hypotheses using data from multiple sources, refining your approach as needed. The role also involves close collaboration with colleagues
-
the performance of lithium-ion technologies. To support the programme, the post holder will be required to carry out research on characterisation of battery degradation, with a particular focus on the application
-
methods suitable for legged systems in physically realistic simulated environments and on real robots. You should hold or be close to completion of a PhD/DPhil in robotics, computer science, machine
-
with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly