36 postdoctoral research jobs matching "natural-language-processing-phd" at the University of London
- high-luminosity LHC. You will provide support to our PhD students and contribute to the broader activities of the group and the school. About You: You will have completed or be about to complete a PhD or research
- interest to identify cancer drivers from genomic data using machine learning (Mourikis Nature Comms 2019, Nulsen Genome Medicine 2021), study their interplay with the immune microenvironment (Misetic Genome
- disease progression. About You: Applicants should hold a PhD degree or equivalent in a biological or related science and have a strong background in immune cell biology and animal models of inflammatory and/or
- help supervise BSc, MSci, and PhD students. The successful applicant will have a PhD in Astrophysics, Theoretical Physics, or a related discipline and prior experience relevant to the post. It is also
- successful candidate will have a PhD (or equivalent) in the field of space physics or a closely related area. They will have the skills and abilities to conduct high-quality innovative research and to
- involves a high level of collaboration with both the QMUL Space Plasma Group and the QMUL Detector Development Group. About You: The successful candidate will have a PhD (or equivalent experience) in the field
- under real-world conditions. The role includes collaboration with leading 2D materials manufacturers, offering potential for interdisciplinary research and travel. About You: You should have a PhD (or be
- develop, synthesise and characterise materials for this project. About You: The post is suited to a PhD graduate with a background in materials chemistry or a related discipline. If you have a vivid
- of paediatric brain tumours (Vinel et al. BMC Biology 2025, Constantinou et al. Cell Reports 2024 and Vinel et al. Nature Communications 2021) to develop new personalised therapies. About You: We seek an ambitious
- potential applications in audio and music processing. Standard neural network training practices largely follow an open-loop paradigm, where the evolving state of the model typically does not influence
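The "open-loop paradigm" mentioned in the last listing refers to the standard setup in which the training data stream is fixed before training starts, so the model's evolving state never feeds back into what it is shown next. A minimal, illustrative sketch of such a loop is given below (PyTorch; the data, model, and hyperparameters are placeholders and are not taken from the advert):

```python
# Illustrative open-loop training loop: the data pipeline is defined up front
# and the model's evolving parameters never influence the sampling of batches.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder tensors standing in for audio features and labels (not real data).
features = torch.randn(1024, 64)
targets = torch.randint(0, 10, (1024,))
loader = DataLoader(TensorDataset(features, targets), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in loader:          # sampling schedule fixed in advance...
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()         # ...and the updated model state is never used
                                 # to choose or reweight the next batch.
```

A closed-loop alternative would, for example, resample or reweight the data based on the model's current losses; the sketch above deliberately contains no such feedback, which is what makes it open-loop.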