Employer
- University of Washington
- Oak Ridge National Laboratory
- University of Colorado
- The University of Chicago
- Cornell University
- George Mason University
- Nature Careers
- The California State University
- University of California
- University of Utah
- Wayne State University
- Argonne
- Auburn University
- California State University, Fresno
- Florida International University
- HHMI
- Harvard University
- Lawrence Berkeley National Laboratory
- NIST
- National Renewable Energy Laboratory NREL
- Northeastern University
- State University of New York University at Albany
- University of California Merced
- University of California, San Francisco
- University of Dayton
- University of Delaware
- University of Louisville
- University of Maine
- University of Pennsylvania
- University of Texas at Dallas
- … appointment with tenure is possible. We particularly encourage applications from candidates whose research focuses on Computer Systems (including real-time and embedded systems, distributed systems, networking …
- … Demonstrated experience performing research and technical work supporting DoD customers, including but not limited to AFRL. 4. Research involving parallel distributed autonomy applications. 5. Experience with …
- … paradigms, and distributed and parallel programming constructs. Minimum qualifications: a master's degree or equivalent professional experience in Computer Science, Computer Engineering, or related fields. Per …
- … for Science @ Scale: pretraining, instruction tuning, continued pretraining, Mixture-of-Experts; distributed training/inference (FSDP, DeepSpeed, Megatron-LM, tensor/sequence parallelism); scalable evaluation …
- … to take a full range of routine and complex digital intraoral and extraoral radiographs, including periapical, bitewing, panoramic, occlusal, and lateral images, and cone beam computed tomography scans. Aid …
- … of relevant experience in Linux systems administration or HPC systems engineering. Preferred qualifications: demonstrated experience leading the design and deployment of HPC or large-scale distributed computing …
- …on of effective approaches. This specialist may be required to use the elasticity of the AWS Cloud for Big Data-intensive (e.g., Hadoop/Spark) compute infrastructure and parallel system environments …
- … or workshops. Knowledge, skills, and abilities: ability to program in multiple languages such as C/C++, Fortran, Python, R, or similar scientific programming languages. Knowledge of parallel programming …