… into a GPU-enabled and parallel code to run efficiently on state-of-the-art exascale hardware. Designing implementations and reviewing community contributions of library features and new statistical …
… molecular dynamics simulations and was specially designed for parallelisation on GPUs. It is open source and licensed under the LGPL. Details can be found on the website https://halmd.org
Max Planck Institute for Multidisciplinary Sciences, Göttingen | Göttingen, Niedersachsen | Germany | 13 days ago
… on-site high performance/GPU compute facilities. Competitive research in an inspiring, world-class environment. A wide range of offers to help you balance work and family life. Further training opportunities …
… and model generation, point cloud rendering, visual effects (GPU shader, shadergraph, VFX), and 3D scene design. Development of AR/VR applications. What you bring to the table: full-time student at a German …
… in physics, mathematics, or any related field; correspondingly, postdocs hold a PhD or equivalent degree in the above-mentioned fields. What we offer: state-of-the-art on-site high performance/GPU compute …
Max Planck Institute for Multidisciplinary Sciences, Göttingen | Göttingen, Niedersachsen | Germany | 10 days ago
… mathematics, or any related field; correspondingly, postdocs hold a PhD or equivalent degree in the above-mentioned fields. What we offer: state-of-the-art on-site high performance/GPU compute facilities …
… or more GPUs; ability to work with pre-existing codebases and get a training run going. Research interest in one or more of the following: Applied ML, Natural Language Processing, Computer Vision …
… and train CNN and SNN models using frameworks such as Keras, PyTorch, and SNNtorch. Implement GPU acceleration through CUDA to enable efficient neural network training. Apply hardware-aware design …
Max Planck Institute for Intelligent Systems, Tübingen site, Tübingen | Bingen am Rhein, Rheinland-Pfalz | Germany | 3 months ago
… operates a state-of-the-art GPU cluster with more than 1200 GPUs, serving as a critical backbone for advancing ground-breaking research in AI. Possible tasks include: build, administer, optimize, and …
… methods (LBM). For fluid simulations, we utilize the high-performance LBM framework waLBerla, predominantly written in C++ but increasingly adapted for GPU computations through automatic code generation …