Requisition Id 16010 Overview: The Watershed Systems Modeling Group (WSMG) within the Environmental Sciences Division (ESD) at Oak Ridge National Laboratory (ORNL) is seeking a highly motivated …
-
… in the areas of Hydrological and Earth System Modeling and Artificial Intelligence (AI). The successful candidate will have a strong background in computational science, data analysis, and process …
-
… knowledge of models of strongly correlated electron systems. Proficiency with scripting or programming languages such as Python, C, and MATLAB. Excellent written and oral communication skills. Motivated …
-
… environments. Knowledge of materials behavior in extreme environments (e.g., high temperature, irradiation, corrosion, and mechanical stress) and familiarity with multiscale and continuum modeling approaches …
-
… a particular emphasis on error-corrected methods for future fault-tolerant quantum computing. The algorithms will be designed to address key models of quantum materials, such as the Hubbard model …
-
… in ORNL’s Center for Radiation Protection Knowledge (CRPK). The candidate will work with experts in computational radiation dosimetry and risk assessment. The candidate should be an independent thinker …
-
Postdoctoral Research Associate - AI/ML Accelerated Theory Modeling & Simulation for Microelectronics. Major Duties/Responsibilities: Develop and validate AI/ML models that can be used for knowledge extraction (e.g., discovery of governing equations; correlative analysis across length/time scales) from …
-
… mathematically rigorous approaches to optimize the trade-off between privacy and utility, especially in the context of large models. Advance knowledge of key AI methods such as deep learning and algorithm design …
-
… of scientific AI. Focus Areas: Cross-Domain Interoperability: Develop common readiness templates, standardized metadata models, and APIs to enable seamless integration across diverse scientific domains …
-
… such as quantization, model pruning, and approximate attention (linear and sparse), and proposing new mechanisms for tackling speed, accuracy, and energy issues for large language model (LLM) inference …