- "…, physical chemistry, or surface chemistry) or a related field, completed by the start date. Strong postdoctoral training in polymer-based materials, surface characterization, or sensor technologies. Demonstrated …"
- "… the field of frugal or green AI. TECHNICAL SPHERE: You have proven experience in frugal, green, or low-resource AI. Strong grasp of deep learning architectures (CNN, RNN, Transformers, LLMs). Experience in fine…"
- "…-Based Wildfire Smoke and Air Quality Monitoring; Deep Learning for Post-Wildfire Damage Assessment. PROFILE OF THE OFFICE OF POSTDOCTORAL AFFAIRS (OPA): The mission of the UNLV Office of Postdoctoral …"
- "… allows you to be part of the life of a vibrant and active college campus. To learn more, go to Baylor Benefits & Advantages. Explore & Engage: Learn more about Baylor and our strategic vision, Baylor in …"
- "… Vision and Graphics, Statistical Learning, and Bioinformatics. Please visit the website at https://www.polyu.edu.hk/dsai/ for more information about DSAI. Duties: The appointee will be required to: (a) …"
- "…, or behavioral data) and be proficient in Python and modern deep-learning frameworks (ideally PyTorch). Experience in computer vision, multimodal data fusion, or self-supervised or generative modeling is highly …"
- "… samples. Apply machine learning and deep learning techniques to automate segmentation and quantitative analysis of tomographic refractive-index data from cells and tissue samples. Apply the developed …"
- "…/admittance and force control. Experience with artificial intelligence and deep learning concepts for robotics: computer vision, tactile sensing, reinforcement learning. Experience with robotic simulation tools, e.g. …"
- "… connection with the legal adoption of an eligible child, such as travel or court fees, for up to two adoptions in your household. To learn more, please visit: https://www.hr.upenn.edu/PennHR/benefits-pay …"
- "… systems capable of understanding, learning, and acting in complex, dynamic settings. The team works at the intersection of computer vision, multimodal learning, and robotics to create next-generation …"