-
Computational/theoretical chemistry and/or physics, chemical engineering, materials or a closely related field completed within the last 5 years. Preferred Qualifications: Experience with coding, electronic
-
scientific outputs that may include peer-reviewed publications in top-tier water journals, professional scientific code/software contributions, and high-quality datasets. Candidates must also be willing
-
Requisition Id 15598 Overview: As a U.S. Department of Energy (DOE) Office of Science national laboratory, ORNL has an impressive 80-year legacy of addressing the nation’s most pressing challenges
-
data-model integration, leveraging the U.S. Department of Energy’s (DOE) Leadership-Class Computing Facilities to advance predictive understanding of complex environmental systems. Major Duties
-
coding (Python) for building energy modeling and controls Preferred Qualifications: Expertise in modern optimal control techniques (e.g., AI-based controls) High level of competence in coding and scripting
-
those skills to a variety of problems, and the ability to determine and understand the broader context of their research. Preferred Qualifications: Proficiency in multiple modern coding languages is
-
to ORNL's Research Code of Conduct. Our full code of conduct and a statement by the Lab Director's office can be found here: https://www.ornl.gov/content/research-integrity. Basic Qualifications: PhD in
-
Postdoctoral Research Associate- AI/ML Accelerated Theory Modeling & Simulation for Microelectronics
familiarity with AI/ML algorithms for generative materials design or for knowledge extraction (e.g., causal ML or symbolic regression). Strong demonstrated background in coding for data analysis using
-
, beam transfer lines, and the SNS Ring. The qualified candidate will also design software to monitor and control the SNS accelerator in real time and work in the SNS Control Room. As a U.S. Department of Energy
-
., code interpreters, simulation frameworks, databases, lab instruments) and evaluation for long-horizon tasks. Experience with RL and post-training (reward modeling, preference learning, offline/online RL