Employer
- CNRS
- Forschungszentrum Jülich
- Universite de Moncton
- University of Southampton
- DAAD
- Delft University of Technology (TU Delft)
- European Magnetism Association EMA
- Ghent University
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava
- Instituto Superior Técnico
- Instituto de Telecomunicações
- Karlsruher Institut für Technologie (KIT)
- Medizinische Universitaet Wien
- National Renewable Energy Laboratory NREL
- Nature Careers
- Radboud University
- Reykjavik University
- Technical University of Denmark
- Technical University of Munich
- The University of Manchester
- University College Dublin
- University of Birmingham
- University of Southern Denmark
- VU Amsterdam
- …different hardware backends. Design conventional (GPU-based) deep neural networks for comparison. Publish research articles and participate regularly in top international conferences to present your work…
- …High-performance computing (HPC) platforms used in machine learning, big data, and artificial intelligence (AI) based applications (CPUs, GPUs, AI accelerators, etc.) have high power demands and need optimized power distribution networks (PDNs) to improve power efficiency and preserve power integrity. Integrated voltage regulators (IVRs…
- …/GPUs. These devices provide massive spatial parallelism and are well suited to dataflow programming paradigms. However, optimizing and porting code efficiently to these architectures remains a key…
- …which has multiple test machines with GPUs and AI accelerators. The algorithms used can be bound by the available compute power or memory bandwidth in different parts of the program. This information will…
- …on conventional computing platforms such as GPUs, CPUs, and TPUs. As language models become essential tools in society, there is a critical need to optimize their inference for edge and embedded systems…
- …us to run large numerical simulations with billions of grid points on mixed computer architectures, including CPU and GPU machines. A current project is preparing the code set for the next generation of…
- …computing/GPU clusters and the robot labs. You will work towards a PhD degree, engaging with scientific and technical research at PhD level, publishing, and engaging with colleagues both nationally and…