- …reproducible computational pipelines for HPC/cloud environments. Experience with AI explainability methods, Git repositories, GPU acceleration, and the ability to draft manuscripts for publication are also…
- …Data Core services, including high-performance computing pipelines and large-scale GPU resources, to scale LLM development and deployment. Your profile: PhD in machine learning, computer science…
- …tools such as JupyterHub and Kubernetes. Experience designing and operating massive-scale GPU and combining CPU/GPU workloads across these services. Design and debug platforms and will work closely with…
- …including extensive departmental CPU/GPU computing resources and Imperial’s Research Computing Service. A vibrant, interdisciplinary research culture, with partnerships such as the CNRS–Imperial de Moivre…
- …signal processing and/or survey datasets. ML & AI techniques and applications. HPC and orchestration of scientific data processing workflows. Parallel computing (GPU & CPU). Good software engineering…
- …science, mathematics, statistics, computational linguistics, physics, electrical engineering, or similar, with good grades. PyTorch skills: experience in training machine learning models with one or more GPUs; ability to…
- …projections and cost modeling for computational systems. Knowledge of programming methods and techniques for parallel, GPU, and FPGA computation. What to do: Apply! A cover letter and resume will assist in…
- …UK Biobank data and human imaging would be desirable. The applicant should have proven programming experience, including Python and R, as well as using HPC and GPU environments. The post offers…
- …Institute. They will interact with DataSig’s scalable computation objective of extending our RoughPy framework to support GPU/FPGA acceleration for real-time stream processing. The successful candidates will…
- Inria, the French national research institute for the digital sciences | Saint Martin, Midi-Pyrénées | France | 11 days ago
  …embeddings with transformers, training with flow matching) and high-performance computing (e.g. handling large-scale parallel simulators, multi-node and GPU training on large supercomputers). When considering…