- …and parallel computing, with a proven ability to work within highly secure and regulated environments. This role involves close collaboration with security teams, scientists, and IT leadership to ensure…
- …-scale scientific data. Publishing research in leading peer-reviewed journals and conferences. Researching and developing parallel/scalable uncertainty visualization algorithms using HPC resources…
- …parallel computing. Demonstrated hands-on experience with scientific data management, workflow, and resource management problems. Strong problem-solving and communication skills…
- …journals and conferences. Researching and developing parallel/scalable uncertainty visualization algorithms using HPC resources. Collaboration with domain scientists for demonstration and validation…
- …strategic management and strict adherence to security protocols. We are looking for candidates with extensive experience in classified HPC data center operations, architecture, and parallel computing…
- …systems. Expertise with batch schedulers (SLURM, PBS, LSF) and parallel file systems (Lustre, GPFS/Spectrum Scale). Proven ability to lead technical projects from concept through implementation, balancing…
- …systems, high-speed parallel file systems, and archival solutions critical to advancing scientific discovery and innovation. As part of ORNL’s leadership-class computing ecosystem, you will play a vital…
- …batch schedulers (e.g., SLURM, PBS, LSF) and parallel file systems (Lustre, GPFS/Spectrum Scale). Experience implementing and managing automation and configuration management frameworks (Ansible, Puppet…
- …Demonstrated experience developing and running computational tools for high-performance computing environments, including distributed parallelism for GPUs. Demonstrated experience in common scientific programming…
- Scalability of Preprocessing Pipelines: Design and implement automated, parallel preprocessing workflows capable of handling multi-petabyte datasets efficiently while reducing throughput bottlenecks. Data…