- …optimization layers. Increase inference efficiency (e.g., GPU acceleration) and assess the applicability domain of learned algorithms. Publish and present your results in peer-reviewed journals and at…
- …(PyTorch, TensorFlow). Experience with dataset curation, annotation workflows, FAISS/embedding retrieval, LLM-based parsing, RAG-style pipelines, and GPU/HPC training. Familiarity with 3D data processing…
- …datasets in scalable GPU-based computing environments. What we provide: a competitive compensation package with comprehensive health and welfare benefits; a supportive team environment that promotes…
- …projects at CASS. The center fellows will have access to a 70,000-core InfiniBand cluster (Jubail) dedicated to the science division, several GPU-based clusters at NYUAD, and other supercomputer facilities…
- Inria, the French national research institute for the digital sciences | Talence, Aquitaine | France | 2 months ago: …project (http://www.numpex.fr), endowed with more than 40 million euros over 6 years from 2023, to build a software stack for Exascale supercomputers, tied to the arrival in Europe of the first Exascale…
- …capabilities. We have access to a high-performance computing cluster with the most advanced GPU resources. We also partner with the New York Proton Center, which houses one cyclotron, three rotational gantry…
- …with edge computing or embedded systems (e.g., NVIDIA Jetson, Raspberry Pi). Background in real-time processing and GPU acceleration (CUDA). Participation in relevant competitions (e.g., Kaggle, computer…
- …and clinical MR systems fully dedicated to research, state-of-the-art local and scalable cloud-based compute infrastructure (CPU, GPU), and workshops for mechanical, electrical, and electronic development…
- …significant computational component in deploying multi-GPU codes to efficiently train on the large, densely connected, graph-structured data encountered in our systems of interest. Your contributions would…
- …hardware architectures (multicore, GPUs, FPGAs, and distributed machines). To obtain the best performance (fastest execution) for a given Tiramisu program, many code optimizations must be applied…