- parallel with software development that will ultimately ensure the delivered product meets quality standards, is fit for use, and meets the expectations articulated by our customers. Collaborates with various IT@JH
- , Azure, GCP), Massive Parallelism paradigms (MPI, OpenACC, Accelerators; a minimal MPI sketch follows this list) and more. You will be part of a team supporting all University-wide ARC services and national services such as NSF ACCESS and Open
- Learning codes using high performance computing (HPC)”. Specifically, the candidate will carry out research tasks for the design, implementation, and evaluation of parallel algorithms on HPC systems in
- simulation environment. Contribute to the implementation of search systems and to the optimization of the parallelization of AI models or of the system topology to minimize the time and energy consumed.
- University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | 15 days ago
  Cellular and molecular biology experiments include human induced pluripotent stem cell (hiPSC)-derived models and high-throughput genomic library generation. This will require generating massively parallel
- leading international academic institutes and key industrial partners. In November 2020, 6GIC was officially launched, with parallel research undertaken in both 5G+ and 6G for 2030+. About you: The successful
- manage large-scale HPC storage systems, including parallel file systems such as Lustre, GPFS/Spectrum Scale, BeeGFS, and WEKA. Design, implement, and operate large-scale Ceph storage clusters for HPC and
- Department. GENERAL DESCRIPTION: Conduct further research on the NSF-funded project “Enabling Extremely Fine-grained Parallelism on Modern Many-core Architectures”. Benefits: Our commitment to employee well-being
- optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism; a data-parallelism sketch follows this list). Develop high-performance inference engines, improving latency, throughput, and memory
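Several of the postings above name MPI as the message-passing paradigm in use. For reference only, here is a minimal sketch of how an MPI program distributes work across ranks and combines results with a reduction; mpi4py and the file name `mpi_sum.py` are assumed purely for illustration, since the postings do not name a language or binding.

```python
# Minimal MPI sketch (mpi4py assumed for illustration; the postings name MPI,
# not a specific language). Run with: mpiexec -n 4 python mpi_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process
size = comm.Get_size()   # total number of processes launched by mpiexec

# Each rank independently computes a partial sum over a strided slice of the range.
partial = sum(range(rank, 1_000_000, size))

# A reduction combines the partial results on rank 0.
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum across {size} ranks: {total}")
```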
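The last posting lists data, tensor, and pipeline parallelism as the main distributed-training strategies. The sketch below illustrates only the first of these, under assumptions not taken from the posting: a hypothetical linear model and plain NumPy. Each simulated worker computes a gradient on its own shard of the batch, and the gradients are averaged before the parameter update; in a real framework that averaging is an all-reduce across devices.

```python
# Conceptual data-parallelism sketch (hypothetical linear model, NumPy only;
# real frameworks replace the explicit averaging with an all-reduce across devices).
import numpy as np

def shard_gradient(w, X, y):
    """Mean-squared-error gradient of a linear model on one data shard."""
    residual = X @ w - y
    return 2.0 * X.T @ residual / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))                     # one global batch of 64 examples
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=64)
w = np.zeros(8)

num_workers = 4
shards = np.array_split(np.arange(len(y)), num_workers)  # split the batch across workers

for step in range(200):
    # Each "worker" computes a gradient on its shard; in a real system these run
    # on separate devices and the mean below is performed by an all-reduce.
    grads = [shard_gradient(w, X[idx], y[idx]) for idx in shards]
    w -= 0.05 * np.mean(grads, axis=0)
```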