is of advantage: Knowledge of parallel programming and HPC architectures, including accelerators (e.g., GPUs) Experience in modelling and simulation, ideally in the field of energy systems Experience
-
of existing bioinformatic workflows and development of new pipelines. The analyses will be carried out on GPUs and part will consist of data processing and visualization in order to facilitate interpretation
-
software aspects of large-scale AI systems. Areas of interest may include, but are not limited to: • Advanced accelerator chip technologies, such as GPUs or other specialized chips for large-scale AI
-
the EU’s ambitious AI Factories initiative. Learn more: https://mimer-ai.eu/about-mimer/ , https://www.naiss.se , https://eurohpc-ju.europa.eu/ai-factories_en The position As AI Training Program Officer, you
-
-specialists E3 Experience handling large image datasets E4 Experience with HPC, GPU computing, or cloud-based computational workflows. E5 Experience in preparing analysis and presentation of data to publication
-
Information Benefits Work on cutting-edge generative AI for speech Access to GPU servers and computing
-
with edge computing or embedded systems (e.g., NVIDIA Jetson, Raspberry Pi) Background in real-time processing and GPU acceleration (CUDA) Participation in relevant competitions (e.g., Kaggle, computer
-
, resource requests, and environment management. Desired Requirements: 1. Probabilistic modeling: scVI/scANVI/totalVI for RNA and RNA+protein integration. 2. GPU experience: PyTorch/CUDA for segmentation/model
-
approaches, the application of meta learning, and the integration of convex optimization layers Increase inference efficiency (e.g., GPU acceleration) and assess the applicability domain of learned algorithms
-
3T Siemens MR scanners, OPM-MEG, EEG, eye tracking, and TMS laboratories. They will also have access to Princeton's world-class computational infrastructure, including GPU systems capable of running