- algorithms for parallel/distributed AI/ML; hardware-aware and resource-efficient partitioning for parallel/distributed AI/ML; optimization of process-to-process communication in parallel/distributed AI/ML
- systems based on massively parallel hardware architectures; combination of programmable logic, tensor processors, and general-purpose CPUs for real-time adaptation and scheduling services (e.g., AMD Versal
- on competence: contributing to research software development supporting simulations and/or data workflows (HPC/parallel environments), and open/reproducible release of data and analysis scripts under FAIR
- algorithms will be assessed for neurodegeneration mapping in Alzheimer’s disease brain organoids. In parallel, the technology and algorithms will be applied to fish health research, supporting studies on ulcer
- to 850 °C), offer the highest efficiency among different electrolysis technologies. Nonetheless, SOEC technology is less mature and requires further improvements in both performance and durability
- multiome RNA-seq, ATAC-seq and massively parallel reporter assays (MPRAs) for unbiased genome-wide analysis to understand phenotypic plasticity across different cancer cell states. Work tasks: The work
- exploring the role of autologous fat and muscle cells in breast reconstruction with autologous tissue transfer. The position involves working in parallel with several different cell types, which requires
- and run it efficiently on different hardware architectures. For example, Google has built TensorFlow, a deep learning framework that lets users run the same deep learning code on multiple hardware architectures (see the sketch after this list)
- and optimization strategies for large-scale or streaming data. Develop parallelized and GPU-accelerated learning modules, ensuring scalability and performance efficiency. Build and maintain robust data
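
A minimal sketch, assuming TensorFlow 2.x, of the hardware-portability point made in the TensorFlow listing above: the same Keras model definition trains unchanged whether a GPU is available or only a CPU. The model, synthetic data, and device-selection logic below are illustrative and not taken from any of the listings.

```python
import numpy as np
import tensorflow as tf

# Pick a GPU if one is visible to TensorFlow, otherwise fall back to CPU.
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"

with tf.device(device):
    # The same model code path runs on either device; the framework
    # handles placement of the underlying operations.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Tiny synthetic dataset, just enough to exercise a training step.
    x = np.random.rand(256, 8).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)

print(f"Trained on {device}")
```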