- … and optimization strategies for large-scale or streaming data. Develop parallelized and GPU-accelerated learning modules, ensuring scalability and performance efficiency. Build and maintain robust data pipelines for high-throughput modeling over …
- … growing GPU cluster containing thousands of high-end GPUs. Depending on the day, we might be diving deep into market data, tuning hyperparameters, debugging distributed training performance, or studying how …
University of New Hampshire – Main Campus | New Boston, New Hampshire | United States | about 3 hours ago
- … The researcher(s) will be provided access to state-of-the-art supercomputing facilities with advanced GPU and data storage capabilities. Additionally, opportunities will be available for collaborations. Duties …
- … of dense laser plasmas and intense laser interactions with matter using particle-in-cell (PIC), hydrodynamic, and/or Fokker-Planck open and proprietary simulation packages. Investigate and develop different …
- … of thick and strongly scattering samples. Optimize reconstruction algorithms for efficient large-scale 3D imaging, including high-performance and GPU-accelerated computing where appropriate. Design, optimize, and validate a refractive …
- … multiphase flows. Your tasks: develop and extend the in-house GPU-accelerated multiphase Lattice Boltzmann (LBM) code for DNS-grade boiling multiphase flow related to nuclear reactor operation, including bubble …
- … inhibitors with improved efficacy. The project offers a highly interdisciplinary research environment spanning computational chemistry, cell biology, physics, and materials science. The work will leverage GPU …
- … biology results. The project offers a highly interdisciplinary research environment spanning computational chemistry, neuroscience, molecular biology, and psychology. The work will leverage GPU computing …