on evaluating the ability of large language models (LLMs) to replicate results from the arXiv.org repository across computational sciences and engineering. You should have a PhD/DPhil (or be near completion
-
), based at the Centre for Human Genetics, University of Oxford. The post aims to assist with the preparation and molecular characterisation of tumours using multi-omic analysis focusing principally
-
have completed, or be close to completing, a PhD/DPhil in a relevant quantitative field such as computational social science, computer science, or cognitive science. They will have a demonstrable track
-
This post is a postdoctoral research assistant role within Prof Robert House’s Group in the Department of Materials. The post will be for up to 3 years in association with a new Faraday Institution-funded project entitled “Accelerated Development of Next Generation Li-Rich 3D Cathode Materials...
-
We are looking to appoint a postdoctoral researcher, to work with a group of UK Higher Education Institutions to deliver a programme of mental health research. The work is funded by the Medical
-
learning, at the intersection of reinforcement learning, deep learning and computer vision, in order to train effective robotic agents in simulation. You should hold a relevant PhD/DPhil (or near completion
-
on genetics, OMICs and other assays is essential. We are looking for a candidate who would be able to deliver high quality research outputs, as demonstrated by a strong publication record in high profile
-
Programme Manager, using the contact details below. Only online applications received before 12.00 midday on 29th August will be considered. Interviews will be held as soon as possible thereafter
-
in preventing immune-mediated pathology in autoimmunity remains poorly understood. Using genetic and antibody-based targeting, we aim to dissect how these pathways modulate T-cell signalling
-
with the possibility of renewal. This project addresses the high computational and energy costs of Large Language Models (LLMs) by developing more efficient training and inference methods, particularly