-
… will play a key role in automated wildlife identification and classification from trap camera images using cutting-edge computer vision technology. Working closely with the Principal Investigator, Co-PI, and interdisciplinary research team, the RE will develop and implement deep learning algorithms to analyze trap …
-
… will work closely with the Principal Investigator (PI), Co-PI, and the research team to develop deep learning-based computer vision algorithms and software for object detection, classification, and segmentation. Key Responsibilities: Participate in and manage the research project together with the PI and Co-PI …
-
… sensor data under varying environmental conditions. Design computer vision and human-behavior analysis models for detecting personnel, posture, casualties, and hazardous situations, including operation in …
-
… Architect and deploy machine learning and computer vision models directly onto onboard edge devices (e.g., NVIDIA Jetson) for real-time object detection, tracking, and autonomous decision-making. Proof …
-
… Software Engineering: SDLC, requirement analysis, design, testing, optimisation, analysis, simulation, database, computer graphics, distributed systems, computer vision, video analytics. Emerging Fields: IIoT, computational fintech, robot-human interaction …
-
… (Kubernetes), serverless computing, and REST API development. Proficient in Python, with basic experience in machine learning or computer vision libraries; familiarity with Vision-Language Models (e.g., CLIP, BLIP) or scene-graph inference is a plus. Key Competencies: Strong software …
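Several listings above mention Vision-Language Models such as CLIP for wildlife or scene classification. As a minimal sketch of the mechanism these roles rely on (zero-shot classification by cosine similarity between an image embedding and candidate text-label embeddings), here is a toy illustration; the 4-d placeholder vectors and wildlife labels are assumptions for demonstration only, standing in for real CLIP outputs, which are typically 512-d or larger:

```python
import numpy as np

def zero_shot_classify(image_emb, label_embs, labels):
    """Return the label whose text embedding is most similar to the image embedding.

    CLIP-style models map images and text into a shared embedding space;
    zero-shot classification is then an argmax over cosine similarities.
    The embeddings passed in here are placeholders, not real model outputs.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = label_embs / np.linalg.norm(label_embs, axis=1, keepdims=True)
    sims = txt @ img  # cosine similarity of the image against each label
    return labels[int(np.argmax(sims))]

# Hypothetical 4-d embeddings for three camera-trap labels.
labels = ["leopard cat", "wild boar", "empty frame"]
label_embs = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0, 0.0]])
image_emb = np.array([0.9, 0.1, 0.0, 0.0])  # closest to the first label
print(zero_shot_classify(image_emb, label_embs, labels))
```

In a real pipeline the embeddings would come from a pretrained model (e.g., via the `transformers` or `open_clip` packages) with prompts like "a photo of a leopard cat"; the similarity-and-argmax step shown here stays the same.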