- …control proficiency. Test-driven development experience (pytest, Jest). Linux/Unix command-line proficiency. Preferred qualifications: strong proficiency in Python; experience integrating AI/LLM APIs…
- …tables, streams, and tasks. Familiarity with AI workloads: embeddings, vector search, RAG, and LLM orchestration pipelines. Experience with ETL/ELT frameworks, API integrations, and event streaming (Kafka…)
- …harmonization, aggregation, and analysis of clinical data for a large ophthalmology repository. Extract information from unstructured electronic health record text using large language models (LLMs) and clinical…
- …collaborations in areas such as healthy living, transport, green energy, and advanced manufacturing. The role requires strong hands-on technical skills in modern AI, including deep learning, LLMs, agentic AI, and…
- …and a deep understanding of machine learning, artificial intelligence, and algorithms, plus knowledge of the latest developments in AI. Proficiency in ML tracking/monitoring tools (MLflow, Grafana) and LLM…
- …intelligent transportation systems (ITS), with an emphasis on developing and deploying state-of-the-art Large Language Models (LLMs), Vision-Language Models (VLMs), and Vision-Language-Action (VLA) models…
- …Large Language Models (LLMs), integrating dataset expansion, evaluation, incremental fine-tuning, and security-aware code-generation validation. The student will build an integrated pipeline that reuses…
- …mathematics/statistics/computer science, willing to specialize their research in large language models (LLMs). The successful candidate will contribute to the development of trustworthy LLMs by focusing…
- …learning, large language models, and the theory of deep learning. The candidate will develop DRL algorithms for online and offline tasks, for robotic applications and possibly for LLM reasoning applications…
- …The position reports directly to Prof. Zhao Zhang. The key duties of this position are: data collection and processing; deploying LLM fine-tuning and RAG pipelines; evaluating open-source…