- …will work closely with the Principal Investigator (PI), Co-PI, and the research team to develop deep learning-based computer vision algorithms and software for object detection, classification, and segmentation. Key responsibilities: participate in and manage the research project together with the PI and Co-PI…
- …responsibilities will include: exploring innovative methods for food process optimization, including the use of AI and machine learning; developing and executing methods for characterizing and linking the texture and…
- …Computer Science, Artificial Intelligence, Software Engineering, or a related field. Strong programming proficiency in Python and/or C++. Demonstrable experience with machine learning frameworks (e.g., PyTorch, TensorFlow). Hands-on experience with game AI agents and/or GUI agents such as Mineflayer, Unity ML-Agents, or similar. Solid expertise in…
- …basic experience in machine learning or computer vision libraries; familiarity with Vision-Language Models (e.g., CLIP, BLIP) or scene-graph inference is a plus. Key competencies: strong software…
- As a University of Applied Learning, SIT works closely with industry in our research pursuits. Our research staff will have the opportunity to be equipped with applied research skill sets…
- Offer description: As a University of Applied Learning, the Singapore Institute of Technology (SIT) works closely with industry in its research…
- As a University of Applied Learning, the Singapore Institute of Technology (SIT) works closely with industry in its research pursuits. This position is situated within the Centre for Immersification…
- …(Kubernetes), serverless computing, and REST API development. Proficient in Python, with basic experience in machine learning or computer vision libraries; familiarity with Vision-Language Models (e.g., CLIP, BLIP)…