- … These capsules will offer multimodal therapeutic capabilities by integrating chemo-enzymatic reactions and photothermal therapy. In parallel, realistic in vitro models incorporating relevant cell types …
- … Introduction to the university’s talent recruitment policy. Main forum of the International Youth Scholars Forum: Keynote report I, Keynote report II. 2:00 PM–5:30 PM: Sub-forum of Capital Medical University. Parallel …
- … into immunocompromised rodents to create chimeric human–mouse spinal cord models, allowing the study of human neural integration and response to injury in vivo. Parallel studies will use the in vitro vascularized hVSCO …
- … Learning codes using high performance computing (HPC)”. Specifically, the candidate will carry out research tasks for the design, implementation, and evaluation on HPC systems of parallel algorithms in …
- … simulation environment. Contribute to the implementation of search systems and to optimization of the parallelization of AI models or system topology to minimize the time and energy consumed. …
- … sequences, networks, trajectories, images, etc. - Design, programming, optimization, and parallelization of machine learning algorithms. - Search in repositories and bioinformatics of DNA sequences …
- … (AWS, Azure/GCP). Experience in open-source software development. Knowledge of GPU-based computing, including multi-GPU/multi-node parallelization techniques, will be valued. Fluency in spoken and written …
- … optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism). Develop high-performance inference engines, improving latency, throughput, and memory …
- … agents of prepared systems. - Stage 3: Assistance in interpreting results and preparing reports. These stages can be developed in parallel for several systems to prevent one of them from being limited in …
- … (DFT) packages and in-house quantum transport codes. Fully converged simulations from established databases will serve as ground-truth benchmarks. In parallel, the same optimisation principles will be …
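Several of the excerpts above ask for experience with data parallelism in distributed training. As a rough illustration of the core idea — each worker computes a gradient on its own shard of the batch, and the per-worker gradients are averaged (an "all-reduce") before a single synchronized update — here is a minimal single-process sketch. All function names and the toy loss are illustrative assumptions, not taken from any of the postings; real frameworks (e.g. PyTorch's DistributedDataParallel) perform the same averaging across processes or GPUs.

```python
def shard_batch(batch, num_workers):
    """Split a batch into num_workers contiguous, near-equal shards."""
    k, r = divmod(len(batch), num_workers)
    shards, start = [], 0
    for i in range(num_workers):
        end = start + k + (1 if i < r else 0)
        shards.append(batch[start:end])
        start = end
    return shards

def local_gradient(shard, w):
    """Toy per-worker gradient of the loss 0.5 * (w*x - x)**2 summed over a shard."""
    return sum((w * x - x) * x for x in shard)

def data_parallel_step(batch, w, lr=0.01, num_workers=4):
    """One data-parallel SGD step: shard, compute local grads, average, update."""
    grads = [local_gradient(s, w) for s in shard_batch(batch, num_workers)]
    g = sum(grads) / num_workers   # the "all-reduce" (average) across workers
    return w - lr * g              # every worker applies the same update

batch = [1.0, 2.0, 3.0, 4.0]
w = data_parallel_step(batch, w=0.0)
```

Tensor and pipeline parallelism, also named in the excerpt, split the model's weights and layers across devices instead of the data, and require communication inside the forward/backward pass rather than only at the gradient-averaging step.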