platforms and our local CPU and GPU clusters; implementing Python tools for automating CSP/DFT calculations; participation in the scientific activities of the Applied Quantum Chemistry group (IC2MP) and the
-
adaptive optimization during needle insertion, integrating live ultrasound imaging with GPU-accelerated dose calculation and optimization. The Postdoctoral Research Associate will join a multidisciplinary
-
IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava | Czechia | 3 days ago
deployment; knowledge of GPU computing and large-scale training; experience working in an HPC environment; experience with data annotation pipelines or synthetic data generation. We offer: work in a
-
, the VUB is a member of EUTOPIA, an alliance of like-minded European universities, all ready to reinvent themselves. The position is part of the ETRO research group's RDI unit, a member of IMEC. ETRO.RDI (http
-
offers and actions on https://cluster-ia-enact.ai/. You will work in a rare environment at the intersection of frugal AI, analog computing, reconfigurable electronics and THz imaging. The PhD is directly
-
registered at the MIMME Doctoral School (https://mimme.ed.univ-poitiers.fr/). Institut Pprime is a dedicated research unit (UPR) of the CNRS. Its scientific activities span a broad spectrum ranging from
-
100% funding per SNSF guidelines (~CHF 90'000/year); access to modern GPU clusters and confidential-computing infrastructure; collaboration with leading researchers in AI & HPC systems and digital health
-
AUSTRALIAN NATIONAL UNIVERSITY (ANU) | Canberra, Australian Capital Territory | Australia | about 1 month ago
that supports this project has an expected end date of 30 June 2028. This role gives you hands-on access to Australia’s national supercomputing infrastructure—including world-class HPC clusters, large-scale GPU
-
advanced compilation techniques for scientific and AI applications on heterogeneous GPU clusters. Research topics include scheduling, memory management, communication–computation overlap, and performance
-
of cores, and a growing GPU cluster containing thousands of high-end GPUs. Depending on the day, we might be diving deep into market data, tuning hyperparameters, debugging distributed training performance