can be switched on the network and can be consumed by the GPUs without additional overhead. Main activities: You will be part of the "Scientific Computing and Data Science" (CSSD) team made up of 7
-
well as large-scale GPU computing facilities for deep learning. We are looking for a Research Engineer to manage the EEE GPU Cluster. The role will focus on enhancing the EEE GPU Cluster team’s ability in terms
-
of three active research communities: operating systems memory management (NUMA policy, page migration, swap behaviour), high-performance and GPU computing (memory coalescing, unified memory, PCIe transfer
-
orchestration in Kubernetes (e.g., NVIDIA device plugin, GPU scheduling, MIG, node affinity). Experience optimizing GPU utilization, memory management, and cost efficiency for compute-intensive workloads
-
24 Apr 2026 | Organisation/Company: Universitat Politècnica de Catalunya (UPC) - BarcelonaTECH | Research Field: Engineering » Computer engineering; Computer science » Computer architecture
-
parallelization strategies Experience with porting and optimization of parallel HPC software for modern architectures including GPUs Excellent knowledge of a variety of suitable computing languages (including C
-
Massachusetts Institute of Technology (MIT) | Cambridge, Massachusetts | United States | 23 days ago
Internal Number: 25597 INFORMATION SECURITY MANAGER, The Massachusetts Green High Performance Computing Center (MGHPCC), to serve as the primary security leader across MGHPCC and the AI Computing Resource
-
FLAME-GPU Accelerated Agent-based Modelling of Material Response to Environmental and Operational Loading EPSRC CDT in Developing National Capability for Materials 4.0, with the Henry Royce
-
studies. These pipelines must be capable of efficiently exploiting different types of parallelism, both at the level of a computing node (CPU and GPU) and at the level of a cluster of PCs. This environment