The University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | 21 days ago
… and Experience: Distributed parallel training and parameter-efficient tuning. Familiarity with multi-modal foundation models, HITL techniques, and prompt engineering. Experience with LLM fine-tuning …

… Statistical Physics, Genome Annotation, and/or related fields … Practical experience with High Performance Computing Systems as well as parallel/distributed programming … Very good command of written and spoken …

… programming; Experience programming distributed systems; Experience with parallel and distributed file systems (e.g., Lustre, GPFS, Ceph) development. Advanced experience with high-performance computing and/or …

… Demonstrated experience developing and running computational tools for high-performance computing environments, including distributed parallelism for GPUs. Demonstrated experience in common scientific programming …

… willingness to learn: High-performance computing (distributed systems, profiling, performance optimization), Training large AI models (PyTorch/JAX/TensorFlow, parallelization, mixed precision), Data analysis …

… for an accurate simulation of time-dependent flows, enabling sensitive applications such as aeroacoustics. Furthermore, the high scalability on massively parallel computers can lead to advantageous turn-around …

… conferences. Engage in community knowledge-sharing (e.g. tutorials for the NERSC user base). What is Required: PhD awarded within the last five years in Physics, Computational Chemistry, Computational Science …