-
of computing and healthcare. Methodologies of interest include: multi-modal learning; foundation models, including large language models; agentic AI; multi-agent AI systems; transfer learning; self-supervised
-
Commander. Preferred: record of scholarly publications and experience supporting research projects, including data analysis and grant preparation; experience with agent-based models, statistical methods
-
research on model materials and/or materials with potential applications. The ICMCB is a UMR (joint research unit) with an average of 280 staff members (permanent and non-permanent) and three supervising institutions; it is hosted by the CNRS and is classified in full as a ZRR (restricted-access research zone)
-
volatile geopolitics. Shortages, trade frictions, and financial mismatches can stall otherwise viable tipping dynamics and establish carbon-intensive lock-ins. This PhD will develop an agent-based inspired
-
with the research themes and goals of the NTO. To apply: https://academicjobsonline.org/ajo/jobs/31443. Files to submit: applicant’s CV; applicant’s transcript showing proof of PhD or intended completion
-
background in individual-/agent-based modelling; experience with modelling of animal energetics; strong R and NetLogo skills; good understanding of movement & population ecology; experience in publishing
-
support in developing grant applications and teaching resources. For more information about LIACS, see http://www.cs.leiden.edu. What you bring: PhD degree in Computer Science or a similar field; an academic
-
-tracking, behavioural data). Your team: you will collaborate with several GRS colleagues who have expertise in methods and tools for spatiotemporal analysis of complex land systems (including agent-based
-
to contribute to improving the quantitative accuracy of multimodal data. PhD work plan: in the initial phase, the candidate will conduct a literature review on MR-based attenuation correction methods for PET
-
reinforcement learning for large language models (LLMs). Research directions include developing next-generation post-training algorithms, exploring diffusion-based approaches to reasoning with language models