-
encouraged among scientists from the two universities. At each university, the center will anchor faculty members distributed across multiple schools and departments, ranging from biology, chemistry, physics
-
optimize large-scale distributed training frameworks (e.g., data parallelism, tensor parallelism, pipeline parallelism). Develop high-performance inference engines, improving latency, throughput, and memory
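As a hedged illustration of one of the parallelism strategies mentioned above, the toy sketch below simulates tensor (column) parallelism: a linear layer's weight matrix is split column-wise across workers, each worker computes its slice of the output, and the slices are gathered back together. All names (`matmul`, `split_columns`, `column_parallel_forward`) are illustrative inventions for this sketch, not APIs from any particular framework, and real systems would use device communication primitives rather than plain lists.

```python
# Toy sketch of tensor (column) parallelism, using nested Python lists in
# place of device tensors. Function names here are illustrative only.

def matmul(x, w):
    # x: (m x k), w: (k x n) -> (m x n), naive triple loop.
    return [[sum(x[i][t] * w[t][j] for t in range(len(w)))
             for j in range(len(w[0]))] for i in range(len(x))]

def split_columns(w, parts):
    # Shard w column-wise into `parts` contiguous slices, one per worker.
    step = len(w[0]) // parts
    return [[row[p * step:(p + 1) * step] for row in w] for p in range(parts)]

def column_parallel_forward(x, w, parts=2):
    # Each "worker" multiplies the input by its own weight shard...
    partial = [matmul(x, shard) for shard in split_columns(w, parts)]
    # ...then the output slices are concatenated (an all-gather, in effect).
    return [sum((p[i] for p in partial), []) for i in range(len(x))]

x = [[1.0, 2.0]]
w = [[1.0, 0.0, 2.0, 1.0],
     [0.0, 1.0, 1.0, 3.0]]
# The sharded computation reproduces the unsharded result exactly.
assert column_parallel_forward(x, w) == matmul(x, w)
```

Data parallelism would instead replicate the full weights and split the batch, while pipeline parallelism splits consecutive layers across workers; the common thread is trading communication for per-worker memory and compute.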
-
harvesting, storage, distribution, and applications
- Superconducting magnetic energy storage systems
- Solid-state batteries, high-density batteries
- Smart green buildings (including air quality, sensors, well-living
-
to well-known open-source projects or a personal portfolio of impactful open-source research code. Experience with large-scale distributed training and high-performance computing (HPC) environments.