Employer
- Oak Ridge National Laboratory
- Nature Careers
- Princeton University
- University of Texas at Austin
- The University of Chicago
- Auburn University
- Boston Children's Hospital
- California Institute of Technology
- Duke University
- Harvard University
- NIST
- North Carolina State University
- Northeastern University
- Pennsylvania State University
- SUNY University at Buffalo
- Stony Brook University
- Temple University
- University of California
- University of California Davis
- University of California, Los Angeles
- University of North Carolina at Chapel Hill
- University of Oklahoma
- University of Vermont
- Virginia Tech
- Yale University
- Alabama State University
- Boston College
- Carnegie Mellon University
- Cold Spring Harbor Laboratory
- Hofstra University
- Jane Street Capital
- Johns Hopkins University
- Koç University
- Lawrence Berkeley National Laboratory
- Medical College of Wisconsin
- Rutgers University
- San Jose State University
- The Chinese University of Hong Kong
- The Ohio State University
- University of Arkansas
- University of Colorado
- University of Delaware
- University of Florida
- University of Kansas Medical Center
- University of Maine
- University of Maryland, Baltimore
- University of Maryland, Baltimore County
- University of Miami
- University of Pennsylvania
- University of South Carolina
- University of Tennessee, Knoxville
- University of Texas at Dallas
- University of Washington
- Washington University in St. Louis
Listing excerpts

- … is also the home of the Vermont Advanced Computing Center, a research facility with both high-performance CPU and GPU clusters. The Larner College of Medicine is a short (<5 minutes) walk from …
- … of machine learning models. Preferred skills/knowledge includes: a Master's degree; training and optimizing ML algorithms on GPU hardware architectures, specifically NVIDIA-based; working with geospatial data …
- … computing (HPC) systems, including GPUs, and programming, such as using CUDA, MPI, AI/ML/DL, and advanced debuggers and performance analyzers. Familiarity with working on open-source projects. About UF …
- … Computational Infrastructure: deploy and maintain high-performance computing environments (GPU clusters, cloud services) for large-scale image-text experimentation. Data Engineering: establish workflows …
- … of ORNL's AI/ML tools, leveraging high-performance computing resources and AI-focused GPUs. Deliver ORNL's mission by aligning behaviors, priorities, and interactions with our core values of Impact, Integrity …
- … systems. Areas of interest may include, but are not limited to: advanced accelerator chip technologies, such as GPUs or other specialized chips for large-scale AI processing; high-speed memory …
- … interfaces (APIs), servers, and other web resources open to the broader scientific community. Support the on-site HPC Linux GPU cluster with the Slurm queuing system. Maintain RAID NFS storage, Lustre storage …
- … computing (HPC) systems, including CPU, GPU, storage, file systems, networking, visualization, job schedulers, and scientific applications. Experience leading the implementation and execution of research …
- … (CNNs, vision transformers), multi-GPU multi-node training, and federated learning strongly encouraged. Experience working with cloud platforms. Experience with high-performance computing. Publication …
- … Experience managing systems utilizing GPU (NVIDIA and AMD) clusters for AI/ML and/or image processing. Knowledge of networking fundamentals including TCP/IP, traffic analysis, common protocols, and network …