-
on the design and control of a robotic system for the maintenance of DEMO breeding blankets, in close collaboration with key EU fusion partners. https://mpe.au.dk/en/research/key-areas-in-research-and-development
-
Offer Description: The POLYMODs project focuses on the fundamental molecular mechanisms underlying the self-assembly of dynamin molecular machines, which are widely involved in the fusion and
-
the VENUS-F fast-neutron subcritical reactor, coupled together via a tritiated titanium target placed at the center of the reactor, which converts deuterons into neutrons via the deuteron-tritium fusion reaction. The GENEPI-3C accelerator has recently been upgraded to generate the intense and stable neutron source essential for the SPATIAL project's measurements
-
Lyudmila Mihaylova. Application Deadline: applications accepted all year round. This PhD research focuses on multi-sensor data fusion for decision making. The project aims to deal with large
-
University of Massachusetts Medical School | Worcester, Massachusetts | United States | about 2 months ago
General Summary of the Position: Postdoctoral positions in Deep-Learning Omics are available in the Zhou Lab (https://profiles.umassmed.edu/display/20062865). The Zhou Lab at UMass Chan Medical
-
or similar) Preferred Qualifications: experience with MRI/fMRI/DTI, PET, multimodal fusion, and/or machine learning; strong programming skills (Python/MATLAB), version control, and HPC workflows; special
-
teaching in key and rapidly evolving areas such as autonomous systems, data-driven modeling, learning-based control, optimization, complex networks, and sensor fusion. Research at the division is
-
People Technology Experience Manager to support our Oracle Fusion and ServiceNow environments, with a strong focus on helping to build engaging user journeys. Reporting to the Director, People Services
-
Design and train vision–text transformer architectures for multimodal fusion (RGB + thermal, intraoperative video, OR signals, EHR, surveys); develop temporal and cross-attention components