- …Proficient in Python, with working knowledge of bash and experience using HPC or cluster environments (e.g. SLURM). A pragmatic scientist who combines technical depth with a clear translational, goal-oriented…
- …leadership in digital research infrastructure or digital technologies, with a mandatory PhD or equivalent. Extensive expertise across cloud computing, HPC, research data management, and cybersecurity. A…
- …platforms. HPC and scientific computing experts to support large-scale analytics and AI/ML workflows. Bioinformatics and data science teams to integrate clinical data with multi-modal research datasets. Faculty…
- Blindern, Oslo. Job description: This PhD project aims to study the convergence of high-performance computing (HPC) and AI, a subject of increasing importance due to the widespread use…
- …-specialists. E3: Experience handling large image datasets. E4: Experience with HPC, GPU computing, or cloud-based computational workflows. E5: Experience in preparing analysis and presentation of data for publication…
- …outstanding contributions in computer science and high-performance computing (HPC) research. About Computing Sciences at Berkeley Lab: Whether running extreme-scale simulations on a supercomputer or applying…
- …enhance the competencies of the Institute in one or more of the following research areas: foundation models and efficient training/adaptation methods on HPC systems, generative AI and multimodal learning…
- Inria, the French national research institute for the digital sciences | Talence, Aquitaine | France | 3 months ago
  …Master's degree, Engineer's degree, or PhD in computer science to join a team responsible for the packaging, deployment, and testing of software libraries for high-performance computing (HPC). This position…
- …of contact and representative of the Housing Package Center (HPC). MAs provide exceptional service and act as a resource for current student residents, ensuring efficient processing, distribution, and delivery…
- …or equivalent experience. Experience with high-performance computing (HPC) environments and job scheduling systems (e.g., SLURM). Familiarity with Globus or similar large-scale data transfer tools. Familiarity…