Employer
- Oak Ridge National Laboratory
- Nature Careers
- Argonne National Laboratory
- CNRS
- Duke University
- Technical University of Munich
- New York University Abu Dhabi
- Stanford University
- Texas A&M University
- Aarhus University
- Harvard University
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- Technical University of Denmark
- University of Luxembourg
- University of Miami
- University of North Carolina at Chapel Hill
- AI4I
- Aalborg University
- Brookhaven National Laboratory
- Chalmers University of Technology
- Dublin City University
- Elettra Sincrotrone Trieste S.C.p.A.
- ETH Zürich
- Eindhoven University of Technology (TU/e)
- FAPESP - São Paulo Research Foundation
- Forschungszentrum Jülich
- Max Planck Institute for Solar System Research, Göttingen
- Max Planck Institute of Animal Behavior, Radolfzell / Konstanz
- Nagoya University
- National Aeronautics and Space Administration (NASA)
- Northeastern University
- Norwegian Meteorological Institute
- Sandia National Laboratories
- University of Basel
- University of Central Florida
- University of Jyväskylä
- University of Liverpool
- University of Nebraska Medical Center
- University of New Hampshire – Main Campus
- University of Turku
- University of Utah
- Université Côte d'Azur
- …and 35 more employers
Results
- …simulation methods, GPU-accelerated computations, several programming languages, and presenting results to broad technical and non-technical audiences. Additionally, the candidate will develop theory and …
- …in GPU programming with one or more parallel computing models, including SYCL, CUDA, HIP, or OpenMP; experience with scientific computing and software development on HPC systems; ability to conduct …
- …100% funding per SNSF guidelines (~CHF 90'000/year); access to modern GPU clusters and confidential-computing infrastructure; collaboration with leading researchers in AI & HPC systems and digital health …
- …practical experience with cloud computing platforms (e.g., AWS, GCP, Azure). Additional qualifications: experience with multi-GPU model training and large-scale inference; familiarity with modern AI …
- …scientists and engineers are accustomed to. Moreover, the vast majority of the performance associated with these reduced-precision formats resides in special hardware units such as tensor cores on NVIDIA GPUs … [see the mixed-precision sketch after this list]
- …with OFDM modulation required. Skills: programming skills in MATLAB and/or Python required; experience with wireless testbeds desirable; some familiarity with GPU programming desirable (to support … [see the OFDM sketch after this list]
- …, forward-looking, and varied research fields and projects, with numerous development opportunities; modern hardware and infrastructure at the workplace, from compute and GPU servers to supercomputers …
- …expertise in data and model parallelism for distributed training on large GPU-based machines is essential. Candidates with experience using diffusion-based or other generative AI methods as … [see the data-parallel sketch after this list]
- …disease insights. The lab has state-of-the-art computing capabilities, with an in-house cluster serving 80 CPU cores and 1.5 TB of RAM, as well as a newly acquired NVIDIA DGX box with eight H100 GPUs and 224 …
- …/TimeSformer, CLIP/BLIP, or similar) in PyTorch, including scalable training on GPUs and reproducible experimentation. Demonstrated experience building explainable models (e.g., concept bottlenecks, prototype … [see the contrastive-loss sketch after this list]
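
On the tensor-core excerpt above: the standard way to put matrix math onto those units in PyTorch is automatic mixed precision, where matmul-heavy ops run in float16/bfloat16 under autocast and gradients are loss-scaled. The following is a minimal sketch under assumed shapes, model, and hyperparameters; none of it comes from the posting itself.

```python
# Minimal mixed-precision training step (illustrative assumptions throughout).
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# Loss scaling guards float16 gradients against underflow; it is a no-op on CPU.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(64, 1024, device=device)        # assumed batch size and width
y = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
# Under autocast, matmul-heavy ops run in a reduced-precision dtype, which is
# what routes them to tensor cores on recent NVIDIA GPUs.
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16
with torch.autocast(device_type=device, dtype=amp_dtype):
    loss = nn.functional.cross_entropy(model(x), y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```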
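On the OFDM excerpt: a toy NumPy sketch of baseband OFDM modulation, i.e. QPSK symbols placed on subcarriers, an IFFT to the time domain, and a cyclic prefix. The FFT size, prefix length, and the ideal noiseless channel are assumptions for illustration.

```python
# Toy baseband OFDM modulator/demodulator (illustrative sizes).
import numpy as np

n_subcarriers = 64   # FFT size (assumption)
cp_len = 16          # cyclic prefix length (assumption)
rng = np.random.default_rng(0)

# QPSK: 2 bits per subcarrier, mapped to {±1 ± 1j}/sqrt(2)
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
symbols = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

# One OFDM symbol: the IFFT turns frequency-domain symbols into a time-domain waveform
time_domain = np.fft.ifft(symbols) * np.sqrt(n_subcarriers)  # unitary scaling

# Cyclic prefix: copy the tail to the front so channel delay spread becomes a
# circular convolution, removable per subcarrier at the receiver
ofdm_symbol = np.concatenate([time_domain[-cp_len:], time_domain])

# Receiver side over an ideal channel: drop the CP, FFT back, recover the symbols
rx = np.fft.fft(ofdm_symbol[cp_len:]) / np.sqrt(n_subcarriers)
assert np.allclose(rx, symbols)
```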
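On the data/model-parallelism excerpt: the sketch below covers only the data-parallel half, using PyTorch DistributedDataParallel, which replicates the model on every GPU and all-reduces gradients; model parallelism (splitting one network across devices) is a separate technique not shown. The model, data, and step count are assumptions; `torchrun` is the usual launcher.

```python
# Minimal data-parallel training loop; launch with:
#   torchrun --nproc_per_node=<num_gpus> ddp_sketch.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")              # one process per GPU
    rank = dist.get_rank()
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    model = nn.Linear(32, 1).cuda(local_rank)    # assumed toy model
    # DDP averages gradients across ranks, so every optimizer step sees
    # the gradient of the whole global batch.
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

    for step in range(10):
        x = torch.randn(64, 32, device=local_rank)  # each rank draws its own shard
        y = torch.randn(64, 1, device=local_rank)
        loss = nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()                          # gradient all-reduce happens here
        optimizer.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```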
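On the CLIP/BLIP excerpt: CLIP-style models are trained with a symmetric contrastive (InfoNCE) loss over paired image and text embeddings. The sketch below uses random embeddings as stand-ins, and the temperature is an assumed default, not a value from the posting.

```python
# CLIP-style symmetric contrastive loss over a batch of paired embeddings.
import torch
import torch.nn.functional as F

def clip_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
              temperature: float = 0.07) -> torch.Tensor:
    # L2-normalize so the dot product is a cosine similarity
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Pairwise similarity matrix: logits[i, j] = sim(image_i, text_j)
    logits = image_emb @ text_emb.t() / temperature
    # Matching pairs sit on the diagonal, so the targets are 0..N-1
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: pick the right text for each image and vice versa
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Usage with a dummy batch of 8 pairs of width-512 embeddings
loss = clip_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```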