Employer
- Nature Careers
- Oak Ridge National Laboratory
- Argonne
- CNRS
- Duke University
- Technical University of Munich
- New York University Abu Dhabi
- Stanford University
- Texas A&M University
- Aarhus University
- Harvard University
- Max Planck Institute for Multidisciplinary Sciences, Göttingen
- New York University
- Rutgers University
- SUNY Polytechnic Institute
- Technical University of Denmark
- University of Luxembourg
- University of Miami
- University of North Carolina at Chapel Hill
- AI4I
- Aalborg University
- Brookhaven National Laboratory
- Chalmers University of Technology
- Dublin City University
- Elettra Sincrotrone Trieste S.C.p.A.
- Eindhoven University of Technology (TU/e)
- FAPESP - São Paulo Research Foundation
- Forschungszentrum Jülich
- Max Planck Institute for Solar System Research, Göttingen
- Max Planck Institute of Animal Behavior, Radolfzell / Konstanz
- Nagoya University
- National Aeronautics and Space Administration (NASA)
- Northeastern University
- Norwegian Meteorological Institute
- Sandia National Laboratories
- University of Basel
- University of Central Florida
- University of Jyväskylä
- University of Liverpool
- University of Nebraska Medical Center
- University of New Hampshire – Main Campus
- University of Turku
- University of Utah
- Université Côte d'Azur
- VIB
Job listing excerpts
- …/TimeSformer, CLIP/BLIP or similar) in PyTorch, including scalable training on GPUs and reproducible experimentation. Demonstrated experience building explainable models (e.g., concept bottlenecks, prototype…
- …forward-looking and varied research fields and projects, with numerous development opportunities. Modern hardware and infrastructure at the workplace, from compute and GPU servers to supercomputers…
- …environments. Experience with parallel computing environments and HPC in a Linux environment. Experience with surrogate modeling. Experience with data analytics techniques. Familiarity with C++ and GPU programming…
- …E13) up to 5 years. International collaboration to build a large radiotherapy dataset. Dedicated GPU infrastructure. Strong collaborations within TUM's AI ecosystem. High-impact publication potential…
- …managing supercomputer resources. Strong skills in algorithm development for large sparse matrices. Excellence in programming GPU accelerators from all major vendors. Very good command of written and spoken…
- …finite-element models, e.g. Poisson, linear elasticity, large-deformation soft tissue, for real-time execution on AR devices and GPUs. Implement these models within open-source frameworks such as SOFA…
- …programming (shared and distributed memory, GPU programming, etc.). Demonstrated experience with distributed-memory MPI programming. Experience with collaborative software design, development, and testing…
- …mathematicians, and domain scientists. Develop software that integrates machine learning and numerical techniques targeting heterogeneous architectures (GPUs and accelerators), including DOE leadership-class…
- …engineering. The work involves simulations for quantum error correction and mid-circuit operations, and will require both low-level optimization skills (e.g., SIMD, GPU, FPGA) and an understanding of quantum…
- …frameworks (preferably PyTorch). Use of Linux GPU servers via the command line. Written and spoken scientific English. It would be a plus to have familiarity with: GIS and remote sensing. Internal application form(s…