- …commonly used on Unix systems. Additional languages or experience with libraries for utilizing GPU hardware efficiently, e.g., CUDA, are a plus. Experience in AI programming with, e.g., PyTorch(-DDP…
- …optimized code written by expert programmers, and can target different hardware architectures (multicore, GPUs, FPGAs, and distributed machines). In order to have the best performance (fastest execution) for a…
- …options; employee and dependent educational benefits; life insurance coverage; employee discount programs. For detailed information on benefits and eligibility, please visit: http://uhr.rutgers.edu/benefits
- …production-grade pipeline encompassing scalable video preprocessing, model training, and inference workflows. Implement GPU-accelerated training and inference, standardized evaluation protocols, and…
- …This project seeks to overcome key workflow and precision limitations in HDR brachytherapy by enabling real-time adaptive optimization during needle insertion, integrating live ultrasound imaging with GPU…
- …at the interface of computational systems biology and mathematics/statistics, with a strong aptitude for open research software development. For more information, visit http://www.fz-juelich.de/ibg/ibg-1/modsim
- …regulation to neuronal function and circuits. State-of-the-art infrastructure: access to advanced sequencing, imaging platforms, and high-performance GPU computing. Research environment: an international…
- …data from the European XFEL facility at DESY. Project website: https://www.mpinat.mpg.de/628848/SM-Ultrafast-XRay-Diffraction Your profile: eligible candidates have strong skills in computational physics…
- …-based HPC services; this role will involve supporting SAS researchers who use Penn's new PARCC (Penn Advanced Research Computing Center) centralized HPC services, including both CPU and GPU cutting-edge…
- …architectures. This includes, among others: (a) design and implementation of machine learning and GenAI models; (b) efficient training and inference on GPU-based systems; (c) fine-tuning and optimization of large…