Employer
- Oak Ridge National Laboratory
- The University of Chicago
- University of Colorado
- University of Washington
- University of California
- National Renewable Energy Laboratory (NREL)
- Harvard University
- Massachusetts Institute of Technology
- Argonne
- Auburn University
- California Institute of Technology
- California State University, Fresno
- Cornell University
- Duke University
- Florida International University
- George Mason University
- HHMI
- Lawrence Berkeley National Laboratory
- NIST
- Northeastern University
- Pennsylvania State University
- Sandia National Laboratories
- Stanford University
- State University of New York, University at Albany
- Texas A&M University
- The California State University
- University of Dayton
- University of Delaware
- University of Louisville
- University of Oklahoma
- University of Texas at Dallas
- Villanova University
- Washington University in St. Louis
- … with electronic structure methods (DFT, TD-DFT, BSE). Experience with scientific software integration and user-facing tools. Knowledge of HPC or parallel computing. Experience with machine learning in …
- … technologies; knowledge of HPC parallel and highly performant clustered or distributed file system architectures and their effective use and deployment for storage and management of research data lifecycles …
- … images. However, the current limitations of desktop computers in terms of memory, disk storage and computational power, and the lack of image processing algorithms for advanced parallel and distributed …
- … develop software for high-energy particle physics. Optimize software for performance, scalability, and efficiency on modern computing architectures, including HPC and distributed systems. Participate in research and development activities …
- … targets while delivering a high-quality, user-centered experience for applicants and internal stakeholders. In collaboration with Program Directors, this position designs, implements, and sustains …
- … computing software libraries (e.g., Trilinos, MFEM, PETSc, MOOSE). Experience with shared and distributed memory parallel programming models such as OpenMP and MPI. Experience with one or more GPU or performance …
- … of cluster and job/resource management software (e.g., Warewulf, Slurm, xCAT). Parallel and distributed file system (Lustre) experience is a plus. Experience with the installation, configuration and …