- intelligence models (LLMs) in multi-GPU environments. Preparation of technical documentation and best practices for development and operation. Where to apply: Website https://sede.uvigo.gal/public/catalog-detail
- infrastructure, and GPU/CPU cluster environments. This role leads and mentors a team of Systems Engineers and Administrators while remaining deeply technical and hands-on, actively designing, deploying, and tuning
- and generative AI solutions tailored for large-scale research datasets. Prototype AI applications on local GPU hardware, ensuring seamless scalability to HPC environments like the Gefion supercomputer
- platforms such as LLM servers, shared virtual GPUs (vGPUs) used by OPS-G, and the broader utilization of cloud resources. Ensuring the smooth operation, availability, and continuous improvement
- influence the technological trajectory of the ecosystem. The core responsibilities of this position include developing and owning the overall SoC specifications and architecture, encompassing CPU, GPU, memory
- in high-performance computing systems, storage, and I/O. Extensive knowledge of parallel file systems, GPUs, and their trends (more than 3 years of experience working with these systems). Experience with
- Summary/Department Summary: The GPU is a procedural unit running 5 days a week for both ambulatory patients and inpatients. The GPU performs a wide range of non-sterile procedures; examples include therapeutic
- megawatts. To transfer energy efficiently from the grid to CPUs/GPUs, higher system voltages are required in data centres/computer racks, and efficient power electronics converter systems based on SSTs
- high-performance workstations, CPU/GPU clusters, and experimental systems tailored for fish and fly research. This role will necessarily involve both software development and software-hardware integration, with
- Inria, the French national research institute for the digital sciences | Pau, Aquitaine | France | about 1 month ago
  methods (SFEM) offer superior accuracy per degree of freedom and are naturally suited to HPC architectures (CPU/GPU clusters). Two main Galerkin formulations exist: Continuous Galerkin (CG-SFEM): Memory