- available hardware accelerator platforms. The most cost-efficient and energy-efficient way to build this correlator system is with FPGAs near the receivers, tensor-core-enabled GPUs for correlations, and an
- research computing ecosystem spanning on-premises and remote-site infrastructure, including: • HPC compute platforms for research and data-intensive workloads • GPU-enabled environments for AI and machine
- , and security monitoring tools. PREFERRED: Professional certification (CISSP or equivalent), hands-on experience with securing HPC, GPU cluster, or data center environments, experience with AI/ML
- well as large-scale GPU computing facilities for deep learning. We are looking for a Research Engineer to manage the EEE GPU Cluster. The role will focus on enhancing the EEE GPU Cluster team’s ability in terms
- maintain the workload scheduler and architect quality-of-service policies. Administer Linux systems across infrastructure projects and deploy new GPUs for research and teaching. Troubleshoot complex
- : SciNet is in the process of installing a new AI capability, with a large number of high performance GPUs, as part of the ISED Sovereign AI Compute initiative. The incumbent is expected to support the
- , regional, and national professional meetings, workshops, and conferences. The Machine Learning Engineer will have the opportunity to work with leading-edge GPU and HPC technologies and engage with domain
- Massachusetts Institute of Technology (MIT) | Cambridge, Massachusetts | United States | about 1 month ago
  of complex AI research workloads on state-of-the-art hardware. The role will focus heavily on optimizing existing NVIDIA GPU-based workloads for top-tier AMD GPUs, such as the MI355X and beyond, and will analyze
- processor topology. On modern servers, Non-Uniform Memory Access (NUMA) architectures and GPU accelerators introduce asymmetric memory access costs that remain largely invisible to application-level code yet
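The NUMA asymmetry mentioned in the last snippet can be made concrete with a minimal sketch: parse a node distance matrix of the kind `numactl --hardware` prints and compute the worst-case remote-to-local access-cost ratio. The two-node distance values below are illustrative assumptions, not measurements from any system described above.

```python
# Illustrative sketch: quantify NUMA asymmetry from a node distance matrix.
# Keys are (from_node, to_node); values follow the convention used by
# `numactl --hardware`, where the local distance is normalized to 10.
# These numbers are assumed for illustration only.
SAMPLE_DISTANCES = {
    (0, 0): 10, (0, 1): 21,
    (1, 0): 21, (1, 1): 10,
}

def remote_penalty(distances):
    """Return the worst-case remote/local distance ratio across node pairs."""
    local = [d for (a, b), d in distances.items() if a == b]
    remote = [d for (a, b), d in distances.items() if a != b]
    return max(remote) / min(local)

print(remote_penalty(SAMPLE_DISTANCES))  # 2.1
```

A ratio above 1.0 means a thread touching memory on the other node pays a measurable penalty, which is exactly the cost that stays invisible unless placement is controlled explicitly (e.g. via `numactl` or libnuma bindings).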