"A picture is worth a thousands words"... or so the saying goes. How much information can we extract from an image of an insect on a flower? What species is the insect? What species is the flower? Where was the photograph taken? And at what time of the year? What time of the day? What was the...
-
their performance evaluated in terms of classification accuracy, computational speed, and overall usability. Required knowledge: Deep learning (CNNs, Transformers) and computer vision; knowledge distillation for model
-
Required knowledge: Strong background in machine/deep learning, computer vision, or applied statistics. Solid programming skills in Python and experience with deep learning frameworks (e.g., PyTorch
-
that are constructed in a way that is inspired by what we know about self-awareness circuits in the brain and the field of self-aware computing. The project will advance state-of-the-art AI for NLP, vision, or both
-
accepted by the intended users due to their limited capabilities to sustain long-term interactions. In this project, we propose to develop compositional vision-language models for social robots, enabling them
-
🎯 Research Vision: The next generation of software engineering tools will move beyond autocomplete and static code generation toward autonomous, agentic systems, AI developers capable of planning
-
healthcare, finance, environmental monitoring, and beyond. While recent advancements in foundation models have shown tremendous success in NLP and computer vision, the unique characteristics of time series
-
Geopolitical Security. The Research and Enterprise Portfolio is key to the delivery of our strategic vision, ensuring Monash delivers meaningful outcomes to the communities we serve, and remains at the forefront
-
computing/computer science, engineering, social science, science, community development). They will be committed to undertaking research that supports First Nations people and communities in accessing and
-
settings. Candidates will also be expected to engage in a participatory research approach, involving blind and low vision end users as well as sector professionals.