- … implementing efficient monitoring of the various deployments using Grafana and Prometheus, and the autoscaling of compute nodes for CPU and GPU workloads across various cloud providers. Key responsibilities will …
- … configuration and management. Knowledge of Linux and GPU scripting, storage management, quantum computing, and cloud systems. Here's how to apply: please submit your updated resume and a short cover letter …
- … for GPU-accelerated applications. Data Engineering Tools: proficiency in data engineering tools, including Apache Airflow for workflow orchestration, and message brokers such as RabbitMQ or Kafka …
- … Demonstrated history in astronomy research or engineering. Experience in interferometric imaging and calibration. Experience with Python package development and deployment. Experience with GPU application …
- … infrastructures, improving system performance, scalability, and efficiency by optimizing resource usage (e.g., GPUs, CPUs, energy consumption). Researchers and students will explore innovative approaches to reduce …
- … containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes) for deploying and managing applications at scale, including support for GPU-accelerated applications …
- … communications. Evaluation of model performance can be conducted based on the data collected through the water tank. We have the GPU machines ($14k) to develop deep neural networks for underwater communications …
- … equipped with 8 GPUs. This infrastructure will enable the efficient training and evaluation of complex neural network models, essential for the project's success. The significance of this project for our …