- data access. GPU Supercomputing: A GPU server with 8 × NVIDIA RTX A5000 GPUs and 1 TB RAM for machine learning and simulation tasks. External HPC Access: Professional support to obtain access to national
- experiments. Help manage and organize sizable machine learning training runs (100s – 1000s of GPUs) and computational experiments (petabytes of data). Communication: Informally and formally communicate results
- to leverage CPU and GPU cluster computing resources for large-scale image analysis. Train and mentor users on a variety of microscopy modalities, including confocal, STED, SIM, FLIM, TIRF, STORM, HCA, and
- & Development Labs are the backbone of the Institute, combining scientific excellence with real-world impact. They operate within a unique ecosystem that includes the AI Foundry (state-of-the-art GPUs and
- Innovative visualization tools and highly automated analytical pipelines powered by GPU technology. Mentorship from experienced scientists in data analysis and management, with expertise in delivering high
- workflow orchestration, and high-throughput data processing, as well as experience working with large biological datasets in GPU-based computing environments. What we provide: A competitive compensation
- combining scientific excellence with real-world impact. They operate within a unique ecosystem that includes the AI Foundry (state-of-the-art GPUs and engineering capacity), the System for User Knowledge (SUK
- • Knowledge of parallel computing and use of GPUs is desirable. • Supervision and teaching experience is an advantage. • Expertise in dynamical modelling and stellar spectroscopy is an asset. • Presentation
- environment with strong expertise in immunotherapies. An open, collegial, and supportive working atmosphere in a respectful organizational culture. A highly diverse and inclusive workforce. Access to our GPU
- computing frameworks (e.g., MPI, NCCL) and model parallelism techniques. Proficiency in C++/CUDA programming for GPU acceleration. Experience in optimizing deep learning models for inference (e.g., using