Employer
- Oak Ridge National Laboratory
- Princeton University
- Nature Careers
- University of Texas at Austin
- California Institute of Technology
- The University of Chicago
- Auburn University
- Carnegie Mellon University
- Boston Children's Hospital
- Harvard University
- NIST
- North Carolina State University
- Northeastern University
- Pennsylvania State University
- SUNY University at Buffalo
- Stony Brook University
- University of California
- University of California Davis
- University of California, Los Angeles
- University of Florida
- University of Oklahoma
- University of Vermont
- Virginia Tech
- Yale University
- Alabama State University
- Boston College
- Brookhaven Lab
- Capitol College
- Cold Spring Harbor Laboratory
- Duke University
- Hofstra University
- Jane Street Capital
- Johns Hopkins University
- Koç University
- Lawrence Berkeley National Laboratory
- Medical College of Wisconsin
- Rutgers University
- San Jose State University
- Stanford University
- Temple University
- The Chinese University of Hong Kong
- The Ohio State University
- University of Arkansas
- University of Colorado
- University of Delaware
- University of Kansas Medical Center
- University of Maine
- University of Maryland, Baltimore
- University of Maryland, Baltimore County
- University of Miami
- University of Pennsylvania
- University of South Carolina
- University of Texas at Dallas
- University of Washington
- Washington University in St. Louis
- … featuring 328 general nodes with 476 TB of RAM and 448 GPU nodes with 31 TB of memory. We also have an AI/ML cluster and an AI cluster, with over 110 PB of storage for HPC computations. Applicants should …
- … utilizing GPU (NVIDIA and AMD) clusters for AI/ML and/or image processing. Knowledge of networking fundamentals, including TCP/IP, traffic analysis, common protocols, and network diagnostics. Experience with …
- … with high-performance computing capabilities (including approximately 4,000 NVIDIA RTX 4000 Ada GPUs and over 30,000 CPU cores) hosted at the project data center in Nevada where the telescope is located …
- … learning frameworks such as TensorFlow or PyTorch. Experience with GPU programming and optimization for model training and inference. Familiarity with data preprocessing, feature engineering, and model …
- … models, including scaling models across a large set of GPUs; building or optimizing LLMs to tackle new, complex tasks; developing new models of brain circuits and function; and learning software engineering …
- … computational algebra, logic, and programming languages. The department is housed in the newly constructed Science & Innovation Center, which boasts a data center with a high-performance GPU cluster and state …
- … transcriptomics. Innovative visualization tools and highly automated analytical pipelines powered by GPU technology. Mentorship from experienced scientists in data analysis and management, with expertise in …
- … through HPC - Monitor and stay current with trends in research computing, such as container technology, the latest GPU and CPU hardware, HPC cluster management tools, storage tools/administration, and cluster …
- … & GPUs - and deep integration with clinical data, including electronic health records. A diverse array of ongoing research, education, patient care, and community-centered activities that require increasing …
- … 4000 Ada GPUs and over 30,000 CPU cores) hosted at the project data center in Nevada where the telescope is located. Automation: Help in developing and implementing automated processes for server …