- Inria, the French national research institute for the digital sciences | Pau, Aquitaine | France | about 2 months ago: …coupling with adaptive meshes. High-order spectral finite element methods (SFEM) offer superior accuracy per degree of freedom and are naturally suited to HPC architectures (CPU/GPU clusters). Two main…
- …applications. HPC and orchestration of scientific data processing workflows. Parallel computing (GPU & CPU). Good software engineering practices for scientific software (version control, testing, continuous…
- …especially GPUs for AI calculations. Experience with research and/or practice in one or more STEM disciplines. Experience with collaborating with researchers (faculty, staff, postdoctoral researchers, graduate…
- …partners. ML Systems & Infrastructure: design, build, and operate reproducible ML pipelines for training and inference (e.g., Snakemake or equivalent), including GPU/CPU scheduling, job queuing, and fault… (a schematic sketch of these pipeline ideas appears after this list)
- …of research computing at LSE. Your expertise will be key in future-proofing our research hardware environment, ensuring high availability, scalability and security across HPC clusters; GPU acceleration, high…
- …when necessary. Work closely with ODFM to facilitate requests for office and lab access. Manage any IT matters, including IT equipment allocation and support for GPU servers (if required). 2. Support on…
- …fluency in tools for AI/real-time/graphics pipelines (e.g., Python, PyTorch, C++, GPU/compute, networking). This is a single AI/ML Software Developer post (Grade 7-8 dependent on experience), full-time…
- …metrics, configs, checkpoints, weight versioning, model registry. Simulation and Testing: run large-scale cloud experiments; track throughput, GPU utilization, cost per run; evaluate robustness to preemption… (a second sketch after this list illustrates this kind of run tracking)
- …infrastructure, driving the design and evolution of HPC and AI platforms at scale. This role architects and implements next-generation GPU/CPU clusters, high-bandwidth InfiniBand and Ethernet fabrics, large-scale…
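The ML-pipelines listing above asks for reproducible training/inference pipelines with GPU/CPU scheduling, job queuing, and fault handling. As a rough illustration of those three ideas only (not Snakemake, and with all job names and the failure rate invented for the example), a minimal Python sketch might look like:

```python
# A minimal, hypothetical sketch (not Snakemake) of the ideas named in the
# ML-pipelines listing above: a FIFO job queue, simple GPU/CPU routing, and
# retry-based fault handling. All job names and the failure rate are invented.
import queue
import random
import time
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    needs_gpu: bool        # scheduling hint: prefer a GPU slot if available
    retries_left: int = 2  # basic fault tolerance: re-queue on failure


def run_job(job: Job, device: str) -> None:
    """Stand-in for launching one training or inference step on a device."""
    print(f"running {job.name} on {device}")
    time.sleep(0.1)
    if random.random() < 0.2:  # simulate an occasional transient failure
        raise RuntimeError(f"{job.name} failed on {device}")


def schedule(jobs: list[Job], have_gpu: bool = True) -> None:
    """Drain a FIFO queue, routing jobs to GPU or CPU and retrying failures."""
    pending: "queue.Queue[Job]" = queue.Queue()
    for job in jobs:
        pending.put(job)
    while not pending.empty():
        job = pending.get()
        device = "gpu" if (job.needs_gpu and have_gpu) else "cpu"
        try:
            run_job(job, device)
        except RuntimeError as err:
            if job.retries_left > 0:
                job.retries_left -= 1
                pending.put(job)  # re-queue instead of failing the whole pipeline
            else:
                print(f"giving up on {job.name}: {err}")


if __name__ == "__main__":
    schedule([Job("train-model-a", needs_gpu=True), Job("preprocess-b", needs_gpu=False)])
```

In a real deployment the listing implies a workflow manager (Snakemake or similar) and a cluster scheduler would handle these concerns; the sketch only names the moving parts.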
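Similarly, the experiment-tracking excerpt mentions metrics, configs, checkpoints, weight versioning, and per-run cost and utilization figures. The following Python sketch, with entirely assumed file names and fields, shows one way such a per-run record could be kept next to a checkpoint; it illustrates the bookkeeping, not any particular registry product.

```python
# Hypothetical bookkeeping sketch for the run-tracking items mentioned above:
# a checkpoint written next to a small JSON record holding config and metrics
# (throughput, GPU utilization, cost per run). Paths and field names are assumptions.
import json
import time
from pathlib import Path


def save_run(run_dir: Path, step: int, config: dict, metrics: dict, weights: bytes) -> Path:
    """Write a versioned checkpoint plus a JSON record so runs stay comparable."""
    run_dir.mkdir(parents=True, exist_ok=True)
    ckpt_path = run_dir / f"checkpoint_{step:07d}.bin"
    ckpt_path.write_bytes(weights)  # stand-in for torch.save or an equivalent
    record = {
        "step": step,
        "timestamp": time.time(),
        "config": config,             # e.g. learning rate, batch size
        "metrics": metrics,           # e.g. samples/sec, gpu_util, cost_per_run_usd
        "checkpoint": ckpt_path.name, # a minimal "model registry" style entry
    }
    (run_dir / f"run_{step:07d}.json").write_text(json.dumps(record, indent=2))
    return ckpt_path


if __name__ == "__main__":
    save_run(
        Path("runs/demo"),                     # assumed output directory
        step=1000,
        config={"lr": 3e-4, "batch_size": 64},
        metrics={"samples_per_sec": 512.0, "gpu_util": 0.87, "cost_per_run_usd": 4.20},
        weights=b"\x00" * 16,                  # placeholder bytes instead of real weights
    )
```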