164 postdoctoral positions at the University of Oxford matching "parallel and distributed computing PhD"
- …predicts battery performance and properties from fabrication line measurements. About you: hold (or be near completion of) a PhD/DPhil in Control Engineering or a related subject, with the possibility…
- …information-theoretic active learning, and c) capturing uncertainty in deep learning models (including large language models). The successful postholder will hold or be close to the completion of a PhD/DPhil in…
- …computational workflows on a high-performance cluster. You will test hypotheses using data from multiple sources, refining your approach as needed. The role also involves close collaboration with colleagues…
- …hepatitis and liver disease. This post is funded by the National Institute for Health and Care Research (NIHR) as part of a significant research programme that leverages large-scale healthcare datasets…
- …team, and independently, are essential. You will also provide guidance to less experienced members of the research group, including postdocs, research assistants, technicians, plus PhD and project students…
- …with an international reputation for excellence. The Department has a substantial research programme, with major funding from the Medical Research Council (MRC), the Wellcome Trust, and the National Institute…
- …of the research group, including postdocs, research assistants, technicians, plus PhD and project students. You must have: a relevant PhD/DPhil (or be close to completion), together with relevant experience in…
- …to develop systems that improve the efficacy of machine learning-based technologies for healthcare applications. You must hold a PhD (or be near completion) in a field such as AI, computer science, signal…
- …and leading a programme of numerical simulations relating to all aspects of our research on P-MoPAs; using particle-in-cell computer codes hosted on local and national high-performance computing…
- …with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly…