- optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism). Develop high-performance inference engines, improving latency, throughput, and memory
- toolchains; familiarity with distributed training/inference, AI system bottlenecks, and performance tuning; prior experience with cloud computing and AI system deployment in production settings
- the development of both the quantum internet and distributed quantum computing. The objectives of this PhD thesis project are: (a) Demonstrate spin-photon entanglement with single colour centres in silicon carbide
- to well-known open-source projects or a personal portfolio of impactful open-source research code. Experience with large-scale distributed training and high-performance computing (HPC) environments.
- PhD degree in Computer Science, Physics, or a related field; experience with parallel programming models; strong programming skills in C/C++ and/or Python; knowledge of distributed memory programming with
- copyrighted, or biased. By studying brain data recordings and building computational models that mimic real populations of neurons, the project aims to uncover active unlearning: how the brain learns