- …high-quality research in Computational Neuroscience, including but not limited to areas such as artificial intelligence, convolutional neural networks, reinforcement learning, decision making, memory…
- …programming and optimization for ML models, utilizing frameworks like CUDA or OpenCL. Experience with applied computer vision, such as convolutional neural networks and vision transformers, is preferred…
- University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | about 2 months ago
  …Artificial Intelligence, Neural Networks, Computational Biology, Bioinformatics, Biomedical Informatics, or a related field. Programming experience in a language such as Python or R. Experience in writing grant…
- …gravity field, and geodynamic processes using GNSS and InSAR data. Seismology: real-time earthquake detection, seismic signal classification, and predictive modeling using neural networks. Earth systems…
- …emotional development, learning, school readiness, and educational achievement) and AI models of detection and intervention (e.g., machine learning, large language models, neural networks). To offer a few…
- …complex cognitive tasks? What are the principles that enable brains and artificial agents to learn efficiently from experience while minimizing forgetting? How can neural circuits generate the complex…
- …artificial intelligence tools and frameworks, neural networks, and ethical considerations in AI. The faculty member will also play a key role in shaping the AI curriculum and integrating hands-on learning…
- …experience involving data pre-processing and preparation for machine learning models. Demonstrable research experience in conducting experiments for training and evaluating deep neural networks. Knowledge…
- …Computer Vision, Natural Language Processing, Biocomputation, Neural Networks, Generative Artificial Intelligence, or Mathematical Modelling, who can conduct high-quality research, evidenced by publications…
- …acceleration methods such as quantization, pruning, knowledge distillation, and cache tuning; demonstrable results with LLMs or large-scale neural networks. Demonstrable familiarity with AI/ML optimization tools…