Employer
- Forschungszentrum Jülich
- Northeastern University
- Ecole Centrale de Lyon
- Oak Ridge National Laboratory
- University of California
- CNRS
- Lawrence Berkeley National Laboratory
- New York University Abu Dhabi
- University of Innsbruck, Institute of Computer Science
- University of Utah
- University of Washington
- Brookhaven National Laboratory
- FCiências.ID
- Instituto de Astrofisica de Canarias (IAC) Research Division
- Japan Agency for Marine-Earth Science and Technology
- Luleå University of Technology
- Monash University
- National Renewable Energy Laboratory (NREL)
- New York University
- The University of North Carolina at Chapel Hill
- Universidad Politecnica de Madrid
- University of Vienna
- University of A Coruña
- University of California, Merced
- University of Dayton
- Washington University in St. Louis
- … systems projects. We are developing the Apollo application development and computing environment. We have coordinated several EU projects on distributed and parallel systems, including edutain@grid, AllScale, and the ENTICE project. …
- … The Computer Science program at New York University Abu Dhabi seeks to recruit a research assistant to work at the intersection of compilers and deep learning. Many companies, such as Google, Facebook, and Amazon, …
- … programming; experience programming distributed systems; experience with parallel and distributed file systems (e.g., Lustre, GPFS, Ceph) development. Advanced experience with high-performance computing and/or …
- The University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | 18 days ago
  … and Experience: Distributed parallel training and parameter-efficient tuning. Familiarity with multi-modal foundation models, HITL techniques, and prompt engineering. Experience with LLM fine-tuning …
- … You have experience in matrix algorithms, data compression, parallel computing, and the optimization of advanced applications on parallel and distributed systems. An excellent scientific track record proven …
- … for a given Tiramisu program, many code optimizations should be applied. Optimizations include vectorization (using hardware vector instructions), parallelization (running loop iterations in parallel) … (see the sketch after this list).
- … for an accurate simulation of time-dependent flows, enabling sensitive applications such as aeroacoustics. Furthermore, the high scalability on massively parallel computers can lead to advantageous turn-around …
- … AMD uProf, or Omniperf. Debugging experience with distributed-memory parallel applications. Experience with containers (Docker, Podman, Shifter, or similar) and modern software practices such as Git …
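
The Tiramisu excerpt above names vectorization and parallelization as typical loop optimizations. As a rough, hand-written analogue (this is not the Tiramisu API; the kernel, names, and sizes below are illustrative assumptions), the sketch applies those two transformations to a simple loop nest using OpenMP pragmas:

```cpp
// Hand-written illustration of the two loop optimizations named in the
// Tiramisu excerpt: parallelization of the outer loop across threads and
// vectorization of the inner loop with hardware SIMD instructions.
// Build with, e.g.:  g++ -O2 -fopenmp scaled_add.cpp
#include <cstdio>
#include <vector>

// Illustrative kernel; the name, sizes, and data are assumptions, not from the posting.
void scaled_add(std::vector<float>& y, const std::vector<float>& x, float a,
                std::size_t rows, std::size_t cols) {
    #pragma omp parallel for   // parallelization: rows are distributed across threads
    for (std::size_t i = 0; i < rows; ++i) {
        #pragma omp simd       // vectorization: columns map to vector instructions
        for (std::size_t j = 0; j < cols; ++j) {
            y[i * cols + j] += a * x[i * cols + j];
        }
    }
}

int main() {
    const std::size_t rows = 1024, cols = 1024;
    std::vector<float> x(rows * cols, 1.0f), y(rows * cols, 2.0f);
    scaled_add(y, x, 0.5f, rows, cols);
    std::printf("y[0] = %f\n", y[0]);  // expect 2.500000
    return 0;
}
```

In a scheduling language such as Tiramisu, the same decisions are typically expressed as scheduling commands attached to the loop nest rather than as hand-written pragmas.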