Search results:

- …biology. Please include a cover letter with your application detailing your qualifications and experience for this position. Describe a deep learning project you have executed. Projects in computer vision…
- …models on GPU infrastructure (SSH access) and distributed computing environments. Strong problem-solving, documentation, and communication skills across technical and non-technical contexts. Ability…
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava | Czech | 11 days ago
  …deployment · knowledge of GPU computing and large-scale training · experience working in an HPC environment · experience with data annotation pipelines or synthetic data generation. We offer: work in a…
- 11 Mar 2026 | CNRS | Institut de Chimie des Milieux et Matériaux de Poitiers | Research Field: Chemistry » Computational chemistry | Researcher
- …languages; experience with GPU programming (e.g., CUDA) is highly desirable. Background in optimization, image-guided radiotherapy, medical imaging, or computational modeling. Experience with treatment…
- …on small test clusters. Test computational performance and resolve technical challenges on significantly larger models of selected quantum materials. Work on speeding up Krylov solvers on GPUs. Demonstrate…
- …your bottom line. Total Compensation Calculator: http://www.cu.edu/node/153125 Equal Employment Opportunity Statement: CU is an Equal Opportunity Employer and complies with all applicable federal, state…
- …or TensorFlow. Practical background in training and validating models on GPU-based and distributed computing environments. Working knowledge of containerization tools and orchestration platforms (e.g., Docker…
- NAISS, the National Academic Infrastructure for Supercomputing in Sweden, provides academic users with high-performance computing resources, storage capacity, and data services. NAISS is hosted by…
- …advanced compilation techniques for scientific and AI applications on heterogeneous GPU clusters. Research topics include scheduling, memory management, communication–computation overlap, and performance…