Employer
- Oak Ridge National Laboratory
- Argonne
- Duke University
- Texas A&M University
- Harvard University
- SUNY Polytechnic Institute
- University of Miami
- Brookhaven National Laboratory
- National Aeronautics and Space Administration (NASA)
- Northeastern University
- Rutgers University
- Sandia National Laboratories
- Stanford University
- University of Nebraska Medical Center
- University of New Hampshire – Main Campus
- University of North Carolina at Chapel Hill
- University of Utah
- …simulation methods, GPU-accelerated computations, several programming languages, and presenting results to both technical and non-technical audiences. Additionally, the candidate will develop theory and…
- …in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP. Experience with scientific computing and software development on HPC systems. Ability to conduct…
- …and GPU-accelerated tools for circuit and system design optimization, addressing challenges in physical design, timing analysis, and large-scale hardware design automation. The researcher will…
- …environments. Experience with parallel computing environments and HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming…
- …with OFDM modulation required. Skills: programming skills in MATLAB and/or Python required; experience with wireless testbeds desirable; some familiarity with GPU programming desirable (to support…
- …in top-tier machine learning/AI conferences and/or leading scientific journals. Excellent programming skills and hands-on experience with leading machine learning frameworks (e.g., TensorFlow, PyTorch…
- …Knowledge of floating-point arithmetic and mixed/reduced-precision computing techniques. Experience with programming GPUs and/or other accelerators. Proficiency in mathematical reasoning and numerical analysis…
- …Expertise in data and model parallelism for distributed training on large GPU-based machines is essential. Candidates with experience using diffusion-based or other generative AI methods as…
- …mathematicians, and domain scientists. Develop software that integrates machine learning and numerical techniques targeting heterogeneous architectures (GPUs and accelerators), including DOE leadership-class…
- …programming (shared and distributed memory, GPU programming, etc.). Demonstrated experience with distributed-memory MPI programming. Experience with collaborative software design, development, and testing…
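Several of the excerpts above ask for familiarity with mixed/reduced-precision computing. As a minimal illustrative sketch (not drawn from any of the postings), rounding a Python double through IEEE 754 binary32 shows the precision loss that such work has to account for:

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python float (IEEE 754 binary64) to binary32 and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 + 2**-30       # representable in float64, below half an ulp of 1.0 in float32
print(x == 1.0)        # False: float64 keeps the 2**-30 term
print(to_float32(x) == 1.0)  # True: float32 rounds it away
```

The same round-trip idea underlies mixed-precision algorithms, which perform bulk work in a reduced format and correct the result in a wider one.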
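The shared/distributed-memory and MPI requirements in the last excerpt boil down, at their simplest, to a data-parallel map: partition the data, apply a kernel per partition, and gather the results. A hedged stand-alone sketch using only the standard library's multiprocessing module (standing in for MPI ranks or GPU threads, purely for illustration):

```python
from multiprocessing import Pool

def square(i: int) -> int:
    # Per-element work; in a real HPC code this would be the kernel
    # each MPI rank or GPU thread applies to its slice of the data.
    return i * i

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Scatter the index range across worker processes, gather the results.
        partial = pool.map(square, range(1000))
    print(sum(partial))  # 332833500
```

In MPI terms, `pool.map` plays the role of a scatter plus per-rank compute, and the final `sum` is the reduce step.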