Employer
- Forschungszentrum Jülich
- Fraunhofer-Gesellschaft
- DAAD
- Free University of Berlin
- Technical University of Munich
- Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung
- Leibniz
- Max Planck Institute for Demographic Research (MPIDR)
- Academic Europe
- GFZ Helmholtz-Zentrum für Geoforschung
- Heidelberg University
- Helmholtz-Zentrum Dresden-Rossendorf - HZDR - Helmholtz Association
- Karlsruher Institut für Technologie (KIT)
- Max Planck Institute for Demographic Research, Rostock
- Max Planck School of Cognition
- Nature Careers
- TU Dresden
- Technische Universität Dortmund
- University of Bonn
- The Scientific Computing Center (SCC) is the Information Technology Center of KIT. The Research Group Exascale Algorithm Engineering of SCC works at the interface of algorithmics, parallel computing, and …
- …training and inference of GMMs for large, high-dimensional datasets; explore parallelization strategies to leverage modern GPU architectures; benchmark GPU-based implementations against CPU-based approaches …
- …-edge Machine Learning applications on the Exascale computer JUPITER. Your work will include: developing, implementing, and refining ML techniques suited for the largest scale; parallelizing model training …
- …engineered 3D hydrogels, we will experimentally probe the mechanical forces and physical constraints that drive coordinated cell behavior. In parallel, we will develop and apply computational models and …
- …skills; confident working in dynamic environments with a focus on efficiency and prioritizing parallel projects. What you can expect: fascinating challenges in a scientific and entrepreneurial setting …
- Helmholtz-Zentrum Dresden-Rossendorf - HZDR - Helmholtz Association | Dresden, Sachsen | Germany | 7 days ago
  …to minimize training effort; devise appropriate metrics to evaluate and tune trained models with respect to reproduction of key physical results; contribute to a parallel training workflow to stream data from …
- …on the Exascale computer JUPITER. Your work will include: developing, implementing, and refining ML techniques suited for the largest scale; parallelizing model training and optimizing the execution; user support in …
- …cell types. Optimize 3D CAD designs for precision and parallel measurements. Evaluate the feasibility of integrating the probe system onto a robotic end-effector and design suitable mechanical and …
- …build reliable, reproducible data flows for large EO datasets and workflows; lead performance engineering (parallelization, optimization, benchmarking) for adaptation and inference at scale; work closely …
- …sintering press with selected copper pastes, followed by detailed characterization of the resulting interfaces in terms of porosity, thermal and mechanical integrity. In parallel, simulation models will be …