Applications are invited for a Postdoctoral Research Assistant position in supernova cosmology and time-domain astrophysics, working with data from Rubin Observatory, 4MOST, Euclid, and low-redshift transient surveys such as ATLAS. The successful candidate will lead independent and collaborative...
-
We are seeking a talented and motivated researcher to join the Mead Group to contribute to a major research programme focused on understanding and preventing disease progression in
-
We are seeking five full-time Postdoctoral Research Assistants to join the Computational Health Informatics Lab at the Department of Engineering Science, based at the Institute of Biomedical
-
hepatitis and liver disease. This post is funded by the National Institute for Health and Care Research (NIHR) as part of a significant research programme that leverages large-scale healthcare datasets
-
to completion of a PhD or equivalent qualification in computational fluid dynamics or applied mathematics. What we offer At the University of Oxford, your happiness and wellbeing at work are important to us. We
-
full-stack approach to suppressing errors in quantum hardware. This research focuses on achieving practical quantum computation by integrating techniques ranging from hardware-level noise suppression
-
on evaluating the abilities of large language models (LLMs) at replicating results from the arXiv.org repository across computational sciences and engineering. You should have a PhD/DPhil (or be near completion
-
with an international reputation for excellence. The Department has a substantial research programme, with major funding from the Medical Research Council (MRC), the Wellcome Trust and the National Institute
-
methods suitable for legged systems in physically realistic simulated environments and on real robots. You should hold or be close to completion of a PhD/DPhil in robotics, computer science, machine
-
with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly