Employer
- Oak Ridge National Laboratory
- Princeton University
- Nature Careers
- California Institute of Technology
- University of Texas at Austin
- The University of Chicago
- Auburn University
- Carnegie Mellon University
- Boston Children's Hospital
- Harvard University
- NIST
- North Carolina State University
- Northeastern University
- Pennsylvania State University
- SUNY University at Buffalo
- Stony Brook University
- University of California
- University of California Davis
- University of California, Los Angeles
- University of Florida
- University of Oklahoma
- University of Vermont
- Virginia Tech
- Yale University
- Alabama State University
- Boston College
- Brookhaven Lab
- Capitol College
- Cold Spring Harbor Laboratory
- Duke University
- Hofstra University
- Jane Street Capital
- Johns Hopkins University
- Koç University
- Lawrence Berkeley National Laboratory
- Medical College of Wisconsin
- Rutgers University
- San Jose State University
- Stanford University
- Temple University
- The Chinese University of Hong Kong
- The Ohio State University
- University of Arkansas
- University of Colorado
- University of Delaware
- University of Kansas Medical Center
- University of Maine
- University of Maryland, Baltimore
- University of Maryland, Baltimore County
- University of Miami
- University of Pennsylvania
- University of South Carolina
- University of Texas at Dallas
- University of Washington
- Washington University in St. Louis
- Summary: Boston Children's Hospital (BCH) Gastroenterology Procedure Unit (GPU) supports a wide array of anesthesia-supported diagnostic and interventional endoscopic procedures for both inpatients and …
- … managing and administering an NVIDIA DGX SuperPod instrument. You and another HPC administrator will partner closely with a team of data scientists from Stanford Data Science to ensure that the GPU cluster …
- … possible. We work with petabytes of data, a computing cluster with hundreds of thousands of cores, and a growing GPU cluster containing thousands of high-end GPUs. We don’t believe in “one-size-fits-all” …
- … Generative AI (GenAI) applications, including GPU-hosted LLMs, containerized workloads, and internal agent platforms. Manage Model Context Protocol (MCP) servers, ensuring context routing and memory persistence …
- … and scripting languages. Extensive knowledge of parallel programming techniques, including shared-memory and message-passing parallel programming, and knowledge of GPU programming. Experience with …
- … 3T Siemens MR scanners, OPM-MEG, EEG, eye tracking, and TMS laboratories. They will also have access to Princeton's world-class computational infrastructure, including GPU systems capable of running …
- … including GPU systems capable of running large-scale AI workflows. Applicants should submit online at https://www.princeton.edu/acad-positions/position/39681 and include a cover letter, curriculum vitae …
- … on heterogeneous processor types; optimized GPU computing and exploitation of GPU architectures for HPEC (tensors, multi-GPU instantiations, advances in GPU for AI/ML); compute-focused optimization of System-on-Chip …
- … research computing and visualization services. ARC systems currently host 50,000+ CPU cores, 500+ advanced GPUs, and 10+ petabytes of storage. We stay abreast of novel and developing trends in research …
- … transport. Knowledge of low-temperature plasma devices and relevant physics. Theoretical and computational knowledge of the particle-in-cell method. High-performance computing, including MPI, OpenMP, and GPU …