- University of North Carolina at Chapel Hill | Chapel Hill, North Carolina | United States | about 1 month ago: beautiful campus, world-class medical care, commitment to the arts and top athletic programs, Carolina is an ideal place to teach, work and learn. One of the best college towns and best places to live in
- models, LLMs and Transformer architectures. Excellent programming skills in PyTorch/JAX and experience working with GPUs and high-performance clusters. Strong mathematical skills with excellent
- of central London. For more information: https://www.kcl.ac.uk/engineering About the role: this role will support the delivery of a mesh generation project, funded under a recent major £7m EPSRC Programme Grant
- Researcher (R1) | Country: Australia | Application Deadline: 21 Jan 2026 - 00:00 (UTC) | Type of Contract: Other | Job Status: Full-time | Is the job funded through the EU Research Framework Programme? Not funded by an EU
- energy efficiency bounds of modern CPU, GPU and FPGA devices at performing set operations in the context of combinatorial applications; investigation of current trends in programming FPGA accelerators and
- funded by the EU FP7 program, H2020 program and the European Space Agency (ESA). For further information, you may check: www.securityandtrust.lu The SigCom research group carries out research activities in
- learning calculations (for example, AI/ML training and/or AI/ML inference). Scripting and/or programming skills, especially for AI/ML (for example, developing software that invokes AI/ML libraries such as
- 14 Dec 2025 - 00:00 (UTC) | Type of Contract: Temporary | Job Status: Full-time | Is the job funded through the EU Research Framework Programme? Not funded by an EU programme | Is the job related to staff
- GPU clusters to enhance efficiency and scalability. Knowledge, Skills, and Abilities: good communication and teamwork skills; strong skills in large language model customization techniques including
- to push the boundaries of what’s possible. We work with petabytes of data, a computing cluster with hundreds of thousands of cores, and a growing GPU cluster containing thousands of high-end GPUs. We don’t