-
, distance, flow, or cut problems that are as well-suited as possible to dynamic, parallel, or distributed computing models. Requirements: Master's degree (or equivalent) in computer science or a related field
-
given Tiramisu program, many code optimizations should be applied. Optimizations include vectorization (using hardware vector instructions) and parallelization (running loop iterations in parallel)
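The snippet above names two standard loop optimizations. As a minimal sketch of the second one, "running loop iterations in parallel", here is the general idea in Python; this is not Tiramisu's API (Tiramisu is a C++ polyhedral compiler), and `body` is a hypothetical loop body used only for illustration.

```python
# Sketch of loop parallelization: independent iterations of a loop are
# distributed across a pool of workers instead of running sequentially.
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Hypothetical loop body: any function whose iterations are
    # independent of one another can be parallelized this way.
    return i * i

def run_parallel(n, workers=4):
    # Distribute the n independent iterations across the workers;
    # map() preserves the original iteration order in its results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(body, range(n)))

print(run_parallel(8))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

A real optimizing compiler performs this transformation only after proving the iterations carry no dependences on each other.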
-
of the following subjects: scalable data management, systems for machine learning, distributed and parallel systems, or cloud-based systems. We are especially interested in researchers who build working systems and
-
IFREMER - Institut Français de Recherche pour l'Exploitation de la MER | Brest, Bretagne | France | 6 days ago
Research Framework Programme? Not funded by an EU programme. Reference Number: 2026-1852/1. Is the Job related to a staff position within a Research Infrastructure? No. Offer Description: Deadline for applications
-
Qualifications Experience: Relevant programming experience developing, implementing, debugging, and maintaining applications with Python. Experience working with high-performance computers (e.g., parallelizing and
-
Physics » Thermodynamics; Computer science » Modelling tools. Researcher Profile: First Stage Researcher (R1). Positions: Postdoc Positions. Application Deadline: 19 Apr 2026 - 23:59 (Europe/Brussels)
-
vibration isolation). In parallel to the benefits of quantum computing, artificial intelligence and neuromorphic computing seek to emulate the massively parallel, highly efficient computing capacity
-
position within a Research Infrastructure? No. Offer Description / Project description: Third-cycle subject: Computer Science. This Ph.D. project will develop fundamental theory and methods in distributed systems
-
, for example: Large‑scale optimization and machine learning: Stochastic and/or (non‑)convex optimization methods, first‑order methods, variance reduction, distributed and parallel optimization, federated