Employer
- Forschungszentrum Jülich
- Oak Ridge National Laboratory
- Argonne
- CNRS
- IMEC
- LINKS Foundation - Leading Innovation & Knowledge for Society
- Lawrence Berkeley National Laboratory
- Macquarie University
- National Renewable Energy Laboratory (NREL)
- Northeastern University
- Singapore-MIT Alliance for Research and Technology
- Université de Technologie de Compiègne
- Universidad Politécnica de Cartagena
- University of Oslo
- University of Utah
Field
- …through sensitivity, uncertainty, and scalability analyses. – Enhance the computational efficiency of large-scale optimization problems by exploring decomposition techniques, parallelization, and…
- …Demonstrated experience developing and running computational tools for high-performance computing environments, including distributed parallelism for GPUs. Demonstrated experience in common scientific programming…
- …or deployment at scale. A proven track record of high-quality research contributions published in top-tier machine learning conferences or journals. Proficiency in high-performance computing, distributed and…
- …of two referees), a statement of research interests and achievements. Essential: a PhD or equivalent degree in plasma physics/high-energy-density science/computational science. Experience in…
- …tight AI-simulation coupling. What is required: a PhD in Physics, Chemistry, Computational Science, Data Science, Computer Science, Applied Mathematics, or a related numerical field. Programming experience…
- …hydrodynamics and/or N-body simulations in the star and planet formation context. Experience in the field with HPC system usage and parallel/distributed computing. Knowledge of GPU-based programming would be…
- …software for multi-arch environments. Development in high-performance computing (HPC) or distributed systems. Strong understanding of Linux toolchains, build systems (CMake), and debugging tools. Parallel…
- …results. Machine-learning skills to automate the comparison process. An unbiased approach to different theoretical models. Experience in HPC system usage and parallel/distributed computing. Knowledge of GPU-based…
- …programming distributed systems. Experience with parallel and distributed file system (e.g., Lustre, GPFS, Ceph) development. Advanced experience with high-performance computing and/or large-scale data centers…