Employer
- Oak Ridge National Laboratory
- Argonne
- Duke University
- Stanford University
- Texas A&M University
- Harvard University
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- University of Miami
- University of North Carolina at Chapel Hill
- Brookhaven National Laboratory
- National Aeronautics and Space Administration (NASA)
- Northeastern University
- Sandia National Laboratories
- University of Central Florida
- University of Nebraska Medical Center
- University of New Hampshire – Main Campus
- University of Utah
- … Experience with HPC (GPUs preferred). Related Skills and Other Requirements: Ability to work at the interface of AI and science/engineering problems. Ability to lead, develop, and contribute to multiple projects …
- …, engineering, physical science, or related technical discipline. Experience: Expertise in developing and training AI models. Proficiency in Python. Experience with HPC (GPUs preferred). Related Skills and Other …
- … computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared- and distributed-memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance …
- … and GPU-accelerated tools for circuit and system design optimization, addressing challenges in physical design, timing analysis, and large-scale hardware design automation. The researcher will …
- … simulation methods, GPU-accelerated computations, several programming languages, and presenting results to both technical and non-technical audiences. Additionally, the candidate will develop theory and …
- … in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct …
- … ). Practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional Qualifications: Experience with multi-GPU model training and large-scale inference. Familiarity with modern AI …
- … scientists and engineers are accustomed to. Moreover, the vast majority of the performance associated with these reduced-precision formats resides in special hardware units such as tensor cores on NVIDIA GPUs …
- … with OFDM modulation required. Skills: Programming skills in MATLAB and/or Python required; experience with wireless testbeds desirable; some familiarity with GPU programming desirable (to support …
- … ). Expertise in data and model parallelism for distributed training on large GPU-based machines is essential. Candidates with experience using diffusion-based or other generative AI methods as …