- facilities. Core responsibilities include the deployment and maintenance of small-scale HPC and compute nodes, GPU workstations, Linux and Windows servers, and research data storage and backup solutions
- for candidates with experience in ML model deployment, workflow orchestration, and high-throughput data processing, as well as experience working with large biological datasets in scalable GPU-based computing
- datasets in scalable GPU-based computing environments. What we provide: a competitive compensation package with comprehensive health and welfare benefits, and a supportive team environment that promotes
- Engine, Unity, Blender, Adobe Creative Cloud, or DaVinci Resolve, with simple version-control tools like GitHub or Perforce. Experience with powerful PCs with strong GPUs and a mix of VR headsets like Meta
- learning, multicore and GPU programming, and highly parallel systems. Good knowledge of one or more of the following programming languages/environments: C/C++, Python, PyTorch (or similar), and CUDA. Place
- Inria, the French national research institute for the digital sciences | Talence, Aquitaine | France | 2 months ago
  project (http://www.numpex.fr) endowed with more than 40 million euros over 6 years from 2023, to build a software stack for Exascale supercomputers, related to the arrival in Europe of the first Exascale
- capabilities. We have access to a high-performance computing cluster with the most advanced GPU resources. We also partner with the New York Proton Center, which houses one cyclotron, three rotational gantry
- with edge computing or embedded systems (e.g., NVIDIA Jetson, Raspberry Pi); background in real-time processing and GPU acceleration (CUDA); participation in relevant competitions (e.g., Kaggle, computer
- hardware architectures (multicore, GPUs, FPGAs, and distributed machines). To achieve the best performance (fastest execution) for a given Tiramisu program, many code optimizations must be applied