Employer
- Argonne
- Oak Ridge National Laboratory
- Harvard University
- Rutgers University
- SUNY Polytechnic Institute
- Stanford University
- University of Nebraska Medical Center
- University of North Carolina at Chapel Hill
- University of Utah
- Yale University
- Brookhaven National Laboratory
- Duke University
- Embry-Riddle Aeronautical University
- Northeastern University
- Texas A&M University
- University of Miami
- University of New Hampshire
-
adaptive optimization during needle insertion, integrating live ultrasound imaging with GPU-accelerated dose calculation and optimization. The Postdoctoral Research Associate will join a multidisciplinary
-
/TimeSformer, CLIP/BLIP or similar) in PyTorch, including scalable training on GPUs and reproducible experimentation. Demonstrated experience building explainable models (e.g., concept bottlenecks, prototype
-
in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct
-
). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications: Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI
-
disease insights. The lab has state-of-the-art computing capabilities with an in-house cluster serving 80 CPU cores and 1.5TB of RAM, as well as a newly acquired NVIDIA DGX box with eight H100 GPUs and 224
-
mathematicians, and domain scientists. Develop software that integrates machine learning and numerical techniques targeting heterogeneous architectures (GPUs and accelerators), including DOE leadership-class
-
scientists and engineers are accustomed to. Moreover, the vast majority of the performance associated with these reduced precision formats resides on special hardware units such as tensor cores on NVIDIA GPUs
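As a concrete illustration of the accuracy side of that tradeoff (a minimal stdlib-only Python sketch, not taken from any posting above): rounding the same value into IEEE-754 half and single precision shows how much accuracy the reduced-precision formats used by tensor cores give up relative to what double-precision users are accustomed to.

```python
import struct

def round_to(fmt: str, x: float) -> float:
    """Round a Python float to an IEEE-754 format via pack/unpack.

    'e' = half precision (16-bit), 'f' = single precision (32-bit).
    """
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

# 0.1 is not exactly representable in binary floating point;
# half precision rounds it far more coarsely than single precision.
err_half = abs(round_to('e', 0.1) - 0.1)    # on the order of 1e-5
err_single = abs(round_to('f', 0.1) - 0.1)  # on the order of 1e-9
```

This is why mixed-precision schemes typically keep accumulations and sensitive reductions in single (or double) precision while using the low-precision formats only where the hardware speedup outweighs the rounding error.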
-
). Expertise in data and model parallelism for distributed training on large GPU-based machines is essential. Candidates with experience using diffusion-based or other generative AI methods as
-
. The researcher(s) will be provided access to state-of-the-art supercomputing facilities with advanced GPU and data storage capabilities. Additionally, opportunities will be available for collaboration. Duties