Postdoctoral positions at Oak Ridge National Laboratory
- Oak Ridge National Laboratory (ORNL) is seeking several qualified applicants for postdoctoral positions related to Computational Methods for Data Reduction. Topics include data compression and reconstruction, data movement, data assimilation, surrogate model design, and machine learning algorithms. The position comes with a …
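As a rough illustration of the data compression and reconstruction theme named above (a hypothetical sketch, not ORNL's actual methods; the function names and the uniform-quantization scheme are illustrative assumptions), error-bounded lossy reduction can be as simple as mapping floating-point samples to integer bin indices and reconstructing bin centers, which guarantees a user-chosen absolute error bound:

```python
def compress(values, error_bound):
    # Uniform scalar quantization: each value maps to an integer bin of
    # width 2 * error_bound, so reconstruction error is <= error_bound.
    return [round(v / (2 * error_bound)) for v in values]

def reconstruct(indices, error_bound):
    # Invert the quantization by taking the center of each bin.
    return [i * 2 * error_bound for i in indices]

data = [0.11, 0.49, 1.02, 3.14]
bound = 0.05
codes = compress(data, bound)        # small integers, cheap to entropy-code
approx = reconstruct(codes, bound)
assert all(abs(a - b) <= bound + 1e-12 for a, b in zip(data, approx))
```

Production data-reduction codes add decorrelation and entropy coding on top of the quantized indices, but the error-bound argument works the same way: the bound is enforced at quantization time, so reconstruction quality is known before the data are ever stored.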
- … Integrity, Teamwork, Safety, and Service. As a member of the ORNL scientific community, you will be expected to commit to ORNL's Research Code of Conduct. Our full code of conduct and a statement by the Lab Director's office can be found here: https://www.ornl.gov/content/research-integrity
- … simulation codes, including computational scaling and efficiency, for hybrid exascale supercomputing systems. Programming models for multicore and heterogeneous architectures such as graphics processing units (GPUs) …
- … or OpenMP. Experience in heterogeneous programming (e.g., GPU programming) and/or developing, debugging, and profiling massively parallel codes. Experience using high-performance computing (HPC) …
- … and benchmark the PyORBIT code. Participate in scientific conferences, workshops, and meetings, and publish results in the form of SNS technical notes and memos, workshop and conference proceedings, and …
- Benefits at ORNL: UT-Battelle …
- … Scientist, you will be responsible for: developing high-quality code following community best practices for documentation, provenance, version control, etc.; participating in research projects in AI …
- … (e.g., code interpreters, simulation frameworks, databases, lab instruments) and evaluation for long-horizon tasks. Experience with RL and post-training (reward modeling, preference learning, offline/online RL) …