…model in collaboration with partner institutions such as the German Climate Computing Center (DKRZ) and the German Weather Service (DWD), including GPU porting. They will perform production runs of ICON and…

…heterogeneous (CPU/GPU) computing models. Collaborate with physicists, computer scientists, mathematicians, and engineers across LBNL divisions to define software requirements, implement robust solutions, and…

…, Cloud Service Deployment). Desired: experience with High-Performance Computing or GPU programming (CUDA); specialized knowledge of Neural Rendering (NeRF/3DGS) or Satellite Photogrammetry; demonstrated…

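For illustration, a minimal sketch of the kind of GPU programming named above, written from Python through Numba's CUDA backend (one of several on-ramps to CUDA); the kernel, array names, and sizes are placeholders, not anything the posting specifies:

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(a, b, out):
    i = cuda.grid(1)     # absolute thread index across the whole grid
    if i < out.size:     # guard threads past the end of the array
        out[i] = a[i] + b[i]

a = np.arange(1 << 20, dtype=np.float32)
b = np.ones_like(a)
out = np.zeros_like(a)

threads_per_block = 128
blocks = (a.size + threads_per_block - 1) // threads_per_block
# Launching with NumPy arrays triggers implicit host<->device copies.
add_kernel[blocks, threads_per_block](a, b, out)
```
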
…strategies for large-scale or streaming data. Develop parallelized and GPU-accelerated learning modules, ensuring scalability and performance efficiency. Build and maintain robust data pipelines for high…

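As a purely illustrative sketch of a parallelized pipeline feeding GPU training, here is a PyTorch DataLoader with worker processes and pinned memory; the synthetic dataset stands in for real large-scale or streaming data:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SyntheticDataset(Dataset):
    """Stands in for a real large-scale or streaming source."""
    def __len__(self):
        return 10_000

    def __getitem__(self, idx):
        return torch.randn(64), torch.randint(0, 2, (1,))

if __name__ == "__main__":
    loader = DataLoader(
        SyntheticDataset(),
        batch_size=256,
        num_workers=4,    # worker processes prepare batches in parallel
        pin_memory=True,  # page-locked buffers speed host-to-GPU copies
    )
    for x, y in loader:
        pass  # a GPU training step would consume each batch here
```
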
…frameworks (e.g., PyTorch). Familiarity with GPU-accelerated environments, virtualization tools, and prototyping using real testbeds (e.g., SDR). We expect a diploma in computer science or telecommunication…

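A minimal device-agnostic PyTorch sketch of the GPU-accelerated environments such postings refer to; the same code runs on a GPU when CUDA is available and falls back to CPU otherwise (the model is a placeholder):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(64, 10).to(device)
x = torch.randn(32, 64, device=device)

loss = model(x).sum()  # forward pass on whichever device was selected
loss.backward()        # gradients live on the same device
```
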
…, enhanced sampling, QM/MM). Experience improving performance and scalability of simulation workflows via parallelization and performance engineering, GPU/accelerator optimization, and algorithmic innovation…

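The simplest form of the workflow parallelization alluded to above is fanning independent work units, such as trajectory frames, out to worker processes; in this sketch `analyze_frame` is a hypothetical per-frame observable:

```python
from multiprocessing import Pool

def analyze_frame(frame_index: int) -> float:
    # Placeholder for a per-frame quantity (e.g., an energy term).
    return float(frame_index) ** 0.5

if __name__ == "__main__":
    # Frames are independent, so they map cleanly onto a process pool.
    with Pool(processes=8) as pool:
        results = pool.map(analyze_frame, range(1000))
    print(sum(results) / len(results))
```
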
…authorship of papers in high-impact journals (IF > 6); experience with development of the PtyPy software; good understanding of Fourier optics; GPU computing experience; a background in Multibeam Ptychography is…

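One way to picture the Fourier-optics background the posting asks for: in the far field (Fraunhofer regime), the diffracted amplitude of an aperture is, up to scaling, the 2-D Fourier transform of its transmission function, the identity underlying ptychographic forward models. A toy illustration with a square aperture:

```python
import numpy as np

n = 256
aperture = np.zeros((n, n))
# Square pinhole as an illustrative transmission function.
aperture[n // 2 - 8 : n // 2 + 8, n // 2 - 8 : n // 2 + 8] = 1.0

far_field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(far_field) ** 2  # what a far-field detector records
```
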
…frameworks). Experience using open-source model ecosystems such as Hugging Face (Transformers, Datasets, Accelerate). Experience using or supporting supercomputing or GPU-enabled clusters. Experience with data…

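For reference, a minimal sketch of the Hugging Face ecosystem named above: load a dataset with `datasets` and run a model through the `transformers` pipeline API. The model and dataset here are common public examples, not anything the posting specifies:

```python
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("imdb", split="test[:8]")
classifier = pipeline("sentiment-analysis")  # downloads a default model

for example in dataset:
    # Crude character cap to keep inputs within the model's context.
    print(classifier(example["text"][:512]))
```
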
…of predicting electronic, structural, and thermal quantities while leveraging underlying symmetries for computational efficiency. There will be a significant computational component in deploying multi-GPU codes…

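A skeletal sketch of one common multi-GPU deployment pattern, PyTorch DistributedDataParallel with one process per GPU (launched via torchrun); the model is a placeholder:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")     # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(128, 64).cuda(local_rank),
                device_ids=[local_rank])

    x = torch.randn(32, 128, device=f"cuda:{local_rank}")
    model(x).sum().backward()  # gradients are all-reduced across ranks

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # run with: torchrun --nproc_per_node=<num_gpus> script.py
```
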
…frameworks (PyTorch, TensorFlow). Experience with dataset curation, annotation workflows, FAISS/embedding retrieval, LLM-based parsing, RAG-style pipelines, and GPU/HPC training. Familiarity with 3D data…

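Finally, a minimal sketch of FAISS-based embedding retrieval of the kind such RAG-style pipelines use; random vectors stand in for real document embeddings, which would normally come from an encoder model:

```python
import faiss
import numpy as np

dim = 384
corpus = np.random.rand(10_000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)  # exact L2 nearest-neighbour search
index.add(corpus)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)  # five closest embeddings
print(ids[0])
```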