- … infrastructures, improving system performance, scalability, and efficiency by optimizing resource usage (e.g., GPUs, CPUs, energy consumption). Researchers and students will explore innovative approaches to reduce …
- … ) for reproducible research workflows. Support optimising GPU-accelerated workloads (e.g., PyTorch, TensorFlow), including multi-GPU scaling and distributed training. Develop training materials, documentation, and …
- … learning frameworks such as PyTorch, JAX, or TensorFlow. Experience with C++ and GPU programming. A strong growth mindset, attention to scientific rigor, and the ability to thrive in an interdisciplinary …