- managing and administering an NVIDIA DGX SuperPod instrument. You and another HPC administrator will partner closely with a team of data scientists from Stanford Data Science to ensure that the GPU cluster
- patient records exploiting HPC, including GPUs embedded within NHS infrastructure. Development and deployment of ML operations software and tooling for ML / LLM algorithms working over free-text clinical
- learning frameworks such as TensorFlow or PyTorch. Experience with GPU programming and optimization for model training and inference. Familiarity with data preprocessing, feature engineering, and model
- with high-performance computing capabilities (including approximately 4,000 Nvidia RTX 4000 Ada GPUs and over 30,000 CPU cores) hosted at the project data center in Nevada where the telescope is located
- of code acceleration (GPU). Participate in numerical modelling (HPC (GPU), MPI Fortran / C, C++ Kokkos, Python, Perl) of SAMS front end and physics/test modules. Write research reports, progress reports
- has embraced the “infrastructure as code” approach to systems automation. You’ll be working across a range of predominantly Linux-based systems, including HPC and GPU-accelerated compute, large-scale
- models including scaling models across a large set of GPUs; building or optimizing LLMs to tackle new, complex tasks; developing new models of brain circuits and function; and learning software engineering
- or more GPUs; ability to work with pre-existing codebases and get a training run going. Research interest in one or more of the following: Applied ML, Natural Language Processing, Computer Vision
- computational algebra, logic and programming languages. The department is housed in the newly constructed Science & Innovation Center, which boasts a Data Center with a High Performance GPU Cluster and state
- Engineering, or a related field. Strong experience in building and optimizing AI systems using PyTorch, TensorFlow, or JAX. Practical knowledge of NVIDIA GPU programming (CUDA) and experience with inference