… and work together to train models, architect systems, and run trading strategies. We work with petabytes of data, a computing cluster with hundreds of thousands of cores, and a growing GPU cluster …
-
… the computer science research conferences. Qualifications: a PhD in computer science, with file-systems and GPU-architecture experience, and a proven ability to articulate research work and findings in peer-reviewed proceedings …
-
… Engineers. Serve as liaison with Princeton Research Computing staff on GPU-cluster-related issues. Professional development: learn the underlying science, mathematics, statistics, data analysis, and algorithms …
-
… data access. GPU Supercomputing: a GPU server with 8 × NVIDIA RTX A5000 GPUs and 1 TB of RAM for machine-learning and simulation tasks. External HPC Access: professional support to obtain access to national …
-
… programme. Reference number: AE2025-0510. Is the job related to a staff position within a Research Infrastructure? No. Offer description (Portuguese version): https://repositorio.inesctec.pt/editais/pt/AE2025-0510
-
… Mathematics at the Technical University of Munich (TUM) invites applications for one PhD position. The student will work on developing scalable distributed preconditioners in Ginkgo (https://github.com/ginkgo…).
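As context for the posting above: a preconditioner transforms a linear system so that an iterative solver converges in fewer steps. The sketch below shows the simplest variant, a Jacobi (diagonal) preconditioner inside conjugate gradients, written in plain JAX purely for illustration; it does not use Ginkgo's API, and the matrix A, the function pcg, and all parameters are hypothetical.

```python
# Hypothetical illustration: Jacobi-preconditioned conjugate gradients in JAX.
# This is NOT Ginkgo's API; it only sketches what a "preconditioner" does.
import jax
import jax.numpy as jnp

def pcg(A, b, num_iters=50):
    """Solve A x = b for symmetric positive definite A, with M = diag(A)."""
    M_inv = 1.0 / jnp.diag(A)          # Jacobi preconditioner: M^{-1} = D^{-1}
    x = jnp.zeros_like(b)
    r = b - A @ x                      # initial residual
    z = M_inv * r                      # preconditioned residual
    p = z
    rz = r @ z
    for _ in range(num_iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy usage on a small SPD system (hypothetical data).
key = jax.random.PRNGKey(0)
B = jax.random.normal(key, (8, 8))
A = B @ B.T + 8.0 * jnp.eye(8)         # make the matrix SPD
b = jnp.ones(8)
x = pcg(A, b)
print(jnp.linalg.norm(A @ x - b))      # residual should be near zero
```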
-
… contact, as identified by AFRL through recent past efforts. This includes the implementation of relevant algorithms and solvers for distributed GPU computing within the JAX Python library. Qualifications: …
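To illustrate what distributed GPU computing in JAX typically involves (a sketch under assumptions, not the project's actual code): each device operates on its own data shard, and collectives such as jax.lax.psum combine the partial results. The function local_dot and the toy shapes below are hypothetical.

```python
# Minimal sketch of single-program multi-device computation in JAX.
# Assumes several local devices (GPUs); runs on one CPU device otherwise.
import jax
import jax.numpy as jnp

n_dev = jax.local_device_count()

def local_dot(a_shard, b_shard):
    # Each device computes a partial dot product on its shard,
    # then psum reduces the partials across all devices.
    partial = jnp.vdot(a_shard, b_shard)
    return jax.lax.psum(partial, axis_name="dev")

dist_dot = jax.pmap(local_dot, axis_name="dev")

# Toy data: shard two vectors of length n_dev * 4 across devices.
a = jnp.arange(n_dev * 4, dtype=jnp.float32).reshape(n_dev, 4)
b = jnp.ones((n_dev, 4), dtype=jnp.float32)
print(dist_dot(a, b))  # every device ends up holding the same global dot product
```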
-
… High Performance Computing systems; basic knowledge of the system architecture of supercomputers and NVIDIA GPUs; practical experience with ML/DL workflows and common software libraries. Your experience should …
-
… including extensive departmental CPU/GPU computing resources and Imperial's Research Computing Service. A vibrant, interdisciplinary research culture, with partnerships such as the CNRS–Imperial de Moivre …
-
Inria, the French national research institute for the digital sciences | Saint Martin, Midi-Pyrénées | France | about 2 months ago
… embeddings with transformers, training with flow matching) and high-performance computing (e.g. handling large-scale parallel simulators, multi-node and GPU training on large supercomputers). When considering …
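Since the posting names flow matching, here is a minimal, hedged sketch of the standard conditional flow-matching objective: interpolate linearly between noise x0 and data x1, and regress a velocity model onto the constant target x1 − x0. The tiny MLP, shapes, and parameter names are illustrative assumptions, not the team's actual method.

```python
# Hedged sketch of a conditional flow-matching training objective in JAX.
# The model, shapes, and toy setup are illustrative assumptions.
import jax
import jax.numpy as jnp

def mlp(params, x, t):
    # Tiny velocity-field model v_theta(x, t); real work would use a transformer.
    h = jnp.concatenate([x, t[..., None]], axis=-1)
    h = jnp.tanh(h @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def fm_loss(params, key, x1):
    # Sample noise x0 and interpolation times t ~ U(0, 1).
    k0, kt = jax.random.split(key)
    x0 = jax.random.normal(k0, x1.shape)
    t = jax.random.uniform(kt, x1.shape[:1])
    # The linear path x_t = (1 - t) x0 + t x1 has constant velocity x1 - x0.
    xt = (1.0 - t)[..., None] * x0 + t[..., None] * x1
    v_target = x1 - x0
    v_pred = mlp(params, xt, t)
    return jnp.mean((v_pred - v_target) ** 2)

# Toy setup: 2-D data, hidden width 32 (all hypothetical).
key = jax.random.PRNGKey(0)
d, h = 2, 32
params = {
    "w1": jax.random.normal(key, (d + 1, h)) * 0.1, "b1": jnp.zeros(h),
    "w2": jax.random.normal(key, (h, d)) * 0.1, "b2": jnp.zeros(d),
}
x1 = jax.random.normal(key, (64, d))            # a batch of "data"
loss, grads = jax.value_and_grad(fm_loss)(params, key, x1)
print(loss)
```

In practice, a loss like this would be jit-compiled and sharded across nodes and GPUs, which is where the high-performance-computing experience mentioned in the posting comes in.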