- of today’s heterogeneous hardware (multicore CPUs, GPUs, SmartNICs, disaggregated datacenters). We explore: SmartNICs & P4 switches for offloading intelligence from hosts; device-to-device communication
- influence the technological trajectory of the ecosystem. The core responsibilities of this position include developing and owning the overall SoC specifications and architecture, encompassing CPU, GPU, memory
- Data Core services, including high-performance computing pipelines and large-scale GPU resources, to scale LLM development and deployment. Your profile: PhD in machine learning, computer science
- line with best practices and international standards. Leverage VIB’s Data Core services, including high-performance computing pipelines and large-scale GPU resources, to scale LLM development and
- recommended. Strong background in computer architectures and embedded platforms (ARM Cortex-M, NPU, FPGA, embedded GPU), e.g., via academic courses and/or project courses. Research experience (e.g., through a
- services, which provide expert support in data management and high-performance computing, including optimized pipelines and large-scale GPU resources. A competitive salary and benefits package, with
- with AI inside HPC applications is considered a plus. Experience with performance modeling (such as computer architecture simulation) for multiple types of computer hardware (e.g., CPU/GPU/NPU, or network
- , which provide expert support in data management and high-performance computing, including optimized pipelines and large-scale GPU resources. A competitive salary and benefits package, with relocation
- (Dell Precision 7960 Tower with NVIDIA RTX 6000 GPU, 128 GB RAM, 32-core CPU) for large-scale NLP and machine learning experiments. The planned start date is 15 November 2025 or as soon as possible after