- …well as large-scale GPU computing facilities for deep learning. We are looking for a Senior Research Engineer to manage the EEE GPU Cluster. The role will focus on leading the EEE GPU Cluster team and taking charge…
- …well as large-scale GPU computing facilities for deep learning. Our lab aims to hire a Research Fellow to lead a research project on Real-World Deepfake Detection and Image Forgery Localization. The role will…
- …or streaming data. Develop parallelized and GPU-accelerated learning modules, ensuring scalability and performance efficiency. Build and maintain robust data pipelines for high-throughput modeling over…
- …networks, and GPU resources. The role ensures secure, efficient, and scalable systems to support research and administrative functions. This position works closely with the PI, researchers, and technical vendors…
- …Programming & Software Development: proficiency in Python, PyTorch, JAX, or other ML frameworks. Computing: some experience with large-scale datasets, parallel computing, and GPUs/TPUs. Algorithm Development…
- …programme. Access to secure clinical and multi-omics data environments; modern GPU and high-performance computing resources, plus dedicated research-engineering support; close integration with clinicians and…
- …dynamic and enterprising individual to join us as a Senior HPC Application Engineer. Key Responsibilities: port scientific applications to GPUs (e.g., using CUDA, HIP, OpenACC) or optimize them for multi-core CPUs…
- …Programming experience in C/C++ is necessary, while experience in parallel and GPU computing is most desired. PX4, Pixhawk, or equivalent. Ability to work well with team members and good communication skills…