Internship: Python Data Processing on Supercomputers for Large Parallel Numerical Simulations.

Location: Saint Martin d'Hères, Auvergne-Rhône-Alpes
Job Type: Full-time
Deadline: 21 Dec 2025

Published: 23 Oct 2025
Job Information
Organisation/Company

Inria, the French national research institute for the digital sciences
Research Field

Computer science
Researcher Profile

First Stage Researcher (R1)
Country

France
Application Deadline

21 Dec 2025 - 00:00 (UTC)
Type of Contract

Temporary
Job Status

Full-time
Hours Per Week

38.5
Offer Starting Date

1 Feb 2026
Is the job funded through the EU Research Framework Programme?

Horizon 2020
Reference Number

2025-09447
Is the Job related to staff position within a Research Infrastructure?

No

Offer Description

The internship will take place in the DataMove team, located in the IMAG building on the Saint Martin d'Hères campus (Univ. Grenoble Alpes) near Grenoble, under the supervision of Bruno Raffin (bruno.raffin@inria.fr), Andres Bermeo (andres.bermeo-marinelli@inria.fr) and Yushan Wang (yushan.wang@cea.fr).

The internship lasts a minimum of 4 months and the start date is flexible, but a two-month delay is required before the internship can start due to administrative constraints. The DataMove team is a friendly and stimulating environment that gathers professors, researchers, PhD and Master students, all leading research on high-performance computing. The city of Grenoble is a student-friendly city surrounded by the Alps, offering a high quality of life and all kinds of mountain-related outdoor activities.

The field of high-performance computing has reached a new milestone, with the world's most powerful supercomputers exceeding the exaflop threshold. These machines will make it possible to process unprecedented quantities of data, which can be used to simulate complex phenomena with superior precision in a wide range of application fields: astrophysics, particle physics, healthcare, genomics, etc. 

Without a significant change in practices, the increased computing capacity of the next generation of computers will lead to an explosion in the volume of data produced by numerical simulations. Managing this data, from production to analysis, is a major challenge.

The use of simulation results traditionally follows a compute-store-recompute protocol. The gap in capacity between compute and file systems makes it inevitable that the latter get clogged. For instance, the Gysela code in production mode can produce up to 5 TB of data per iteration. Storing 5 TB of data at high frequency is clearly not feasible, and loading this quantity of data for later analysis and visualization is just as difficult. To bypass this difficulty, we rely on the in situ data analysis approach.

We developed an in situ data processing approach, called Deisa, relying on Dask, a Python environment for distributed tasks. Dask defines tasks that are executed asynchronously on workers once their input data are available. The user defines a graph of tasks to be executed. This graph is then forwarded to the Dask scheduler, which is in charge of (1) optimizing the task graph and (2) distributing the tasks for execution on the different workers according to a scheduling algorithm that aims to minimize the graph execution time.
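Concretely, a Dask task graph is a plain Python dict mapping keys to literal values or to (function, arguments) tuples, where a string argument may name another key. The toy executor below is a simplified, dependency-free sketch of what Dask's synchronous `get` does (real users would go through `dask.delayed` or `dask.array`, and the real scheduler additionally runs independent tasks in parallel):

```python
from operator import add, mul

# A Dask-style task graph: keys map to literal values or to tasks,
# written as tuples (function, *arguments); string arguments that are
# also graph keys denote dependencies on other tasks.
graph = {
    "x": 1,
    "y": 2,
    "sum": (add, "x", "y"),      # depends on "x" and "y"
    "result": (mul, "sum", 10),  # depends on "sum"
}

def get(graph, key):
    """Execute the task for `key`, recursively resolving its inputs first."""
    task = graph[key]
    if isinstance(task, tuple):  # a task: (func, *args)
        func, *args = task
        return func(*(get(graph, a) if isinstance(a, str) and a in graph else a
                      for a in args))
    return task                  # a literal value

print(get(graph, "result"))  # 30
```

A scheduler improves on this naive depth-first walk by reordering and parallelizing tasks whose inputs are already available, which is exactly the degree of freedom Deisa exploits.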

Deisa extends Dask so that an MPI-based parallel simulation code can be coupled with Dask. Deisa enables the simulation code to send newly produced data directly into the workers' memories, and notifies the Dask scheduler that these data are available for analysis and that the associated tasks can be scheduled for execution.
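Deisa's actual API is not reproduced here, but the in situ idea itself can be sketched with only the standard library: a hypothetical simulation loop pushes each iteration's data straight into an analysis worker's memory (a queue standing in for a Dask worker's memory), and the analysis runs concurrently with the simulation instead of going through the file system:

```python
import queue
import threading

results = []

def analysis_worker(inbox):
    # Runs concurrently with the simulation: each chunk is reduced
    # as soon as it arrives, instead of being written to disk first.
    while True:
        it, data = inbox.get()
        if data is None:  # sentinel: simulation finished
            break
        results.append((it, sum(data) / len(data)))  # toy "analysis": mean

inbox = queue.Queue()
worker = threading.Thread(target=analysis_worker, args=(inbox,))
worker.start()

# Toy "simulation" loop: each iteration produces a field and pushes it
# directly into the analysis worker's memory.
for it in range(3):
    field = [float(it + i) for i in range(4)]  # stand-in for simulation output
    inbox.put((it, field))
inbox.put((None, None))
worker.join()

print(results)  # one (iteration, mean) pair per simulation step
```

In the real system the "queue" is the memory of distributed Dask workers fed over the network by the MPI ranks, and the "analysis" is an arbitrary Dask task graph.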

Compared to previous in situ approaches, which are typically MPI-based, our approach, relying on Python tasks, strikes a good balance between programming ease and runtime performance.

But Dask has one major limitation: its scheduler is centralized, creating a performance bottleneck at large scale. To circumvent this limitation we developed a variation of Deisa, Deisa-on-Ray (Doreisa), that relies on the Ray runtime. Ray is a framework for distributed tasks and actors that is very popular in the AI community. Ray is more flexible than Dask and supports a distributed task scheduler, making it a more suitable runtime when targeting large scale.

What Doreisa achieves is:

  • The Dask task graph is split into sub-graphs that are distributed to different Ray actors.
  • These Ray actors implement a local Dask scheduler. Each Dask task to be executed is turned into a Ray task and handed to the local Ray scheduler. The execution of the Dask task graph is thus distributed, showing significant performance gains.
  • If a task requires data produced by another task handled by another remote Ray scheduling actor, the Ray scheduler fetches it automatically by relying on the Ray reference mechanism (which can be seen as a kind of distributed smart pointer).

Doreisa has demonstrated significant performance improvements at scale (tested with up to 15,000 cores) compared to the pure Dask-based approach.
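To make the reference mechanism concrete, here is a stdlib-only sketch (this is not Ray's API: `ThreadPoolExecutor` instances stand in for the scheduling actors, and `concurrent.futures.Future` for Ray's `ObjectRef`). Each "actor" executes its own sub-graph partition, and a task that needs a value produced in the other partition simply blocks on that value's future:

```python
from concurrent.futures import ThreadPoolExecutor

# Shared table of "references": key -> Future, playing the role of Ray
# ObjectRefs (distributed smart pointers that can be awaited from anywhere).
refs = {}

def run_task(func, *dep_keys):
    # A task waits on the futures of its inputs, which may be produced by
    # another partition's scheduler, then computes its own value.
    return func(*(refs[k].result() for k in dep_keys))

# Two "scheduling actors", each owning one sub-graph partition.
actor_a = ThreadPoolExecutor(max_workers=2)
actor_b = ThreadPoolExecutor(max_workers=2)

# Partition A produces x and y; partition B consumes them.
refs["x"] = actor_a.submit(lambda: 2)
refs["y"] = actor_a.submit(lambda: 3)
refs["x+y"] = actor_b.submit(run_task, lambda a, b: a + b, "x", "y")
refs["(x+y)*10"] = actor_b.submit(run_task, lambda s: s * 10, "x+y")

print(refs["(x+y)*10"].result())  # 50
actor_a.shutdown()
actor_b.shutdown()
```

The key property mirrored here is that no central scheduler mediates the cross-partition exchange: the consumer resolves the reference itself.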

The goal of this internship is to investigate solutions for:

  • Further improving performance. In situ analytics often repeats the execution of the same task graph at different iterations. So far the task graph is processed, split and distributed anew at each iteration, while it could be kept in place across consecutive iterations, saving all the pre-processing steps. Ray has mechanisms that could be leveraged for that purpose, namely compiled graphs and streams.
  • Extending functionalities. The data the simulation pushes to the analysis is statically defined at initialization time, with no possibility for the analysis to change it during execution. Adding the capability to change the simulation behavior dynamically from the analytics would open the way to more advanced simulation/analytics patterns, such as changing the data extracted from the simulation based on analysis results, or changing some internal state of the simulation based on analytics, e.g. for assimilation of observation data.
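Ray's compiled graphs are the intended mechanism for the first direction, but the underlying idea can be sketched with the standard library alone: memoize the expensive partitioning step on the graph's structure, so that identical iterations trigger a single pre-processing pass (all function names below are hypothetical):

```python
preprocess_calls = 0

def partition_graph(graph, n_parts):
    # Stand-in for the expensive per-iteration pre-processing step:
    # splitting the task graph into sub-graphs for the scheduling actors.
    global preprocess_calls
    preprocess_calls += 1
    keys = sorted(graph)
    return [keys[i::n_parts] for i in range(n_parts)]

_cache = {}

def partition_cached(graph, n_parts):
    # The analysis graph is identical across iterations, so key the cache
    # on its structure and reuse the partitioning instead of recomputing it.
    key = (tuple(sorted(graph)), n_parts)
    if key not in _cache:
        _cache[key] = partition_graph(graph, n_parts)
    return _cache[key]

graph = {"load": (), "mean": ("load",), "plot": ("mean",)}
for iteration in range(5):           # same analysis graph at every iteration
    parts = partition_cached(graph, 2)

print(preprocess_calls)  # 1: pre-processing ran once, not five times
```

A compiled-graph approach goes further by also keeping the per-task dispatch machinery resident on the actors, but the amortization principle is the same.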

References

  • Deisa and Deisa-on-Ray repositories: https://github.com/deisa-project
  • Ray: https://github.com/ray-project/ray
  • Dask: https://www.dask.org/
  • Ownership: A Distributed Futures System for Fine-Grained Tasks. Stephanie Wang et al. NSDI 2021. https://www.usenix.org/conference/nsdi21/presentation/cheng
  • Ray: A Distributed Framework for Emerging AI Applications. Philipp Moritz et al. 2018. http://arxiv.org/abs/1712.05889
  • Deisa paper: Dask-Enabled In Situ Analytics. Amal Gueroudji, Julien Bigot, Bruno Raffin. HiPC 2021. https://hal.inria.fr/hal-03509198v1
  • Deisa paper: Dask-Extended External Tasks for HPC/ML In Transit Workflows. Amal Gueroudji, Julien Bigot, Bruno Raffin, Robert Ross. WORKS workshop at Supercomputing 23. https://hal.science/hal-04409157v1
  • Damaris: How to Efficiently Leverage Multicore Parallelism to Achieve Scalable, Jitter-free I/O. Matthieu Dorier, Gabriel Antoniu, Franck Cappello, Marc Snir, Leigh Orf. IEEE Cluster 2012. https://inria.hal.science/hal-00715252
  • Integrating External Resources with a Task-Based Programming Model. Zhihao Jia, Sean Treichler, Galen Shipman, Michael Bauer, Noah Watkins, Carlos Maltzahn, Patrick McCormick, Alex Aiken. HiPC 2017. https://legion.stanford.edu/pdfs/hipc2017.pdf
  • Visibility Algorithms for Dynamic Dependence Analysis and Distributed Coherence. Michael Bauer, Elliott Slaughter, Sean Treichler, Wonchan Lee, Michael Garland, Alex Aiken. PPoPP 2023. https://legion.stanford.edu/pdfs/visibility2023.pdf
After studying related work and getting familiar with the existing code, the candidate will start elaborating new solutions. The proposed approach will be iteratively refined through cycles of implementation, experimentation, result analysis, and design improvements. The candidate will have access to supercomputers for the experiments. If the results are promising, we may consider writing and submitting a publication.

     


    Where to apply
    Website
    https://jobs.inria.fr/public/classic/en/offres/2025-09447

    Requirements
    Skills/Qualifications

    Expected skills include

    • Knowledge of distributed computing, parallel computing and numerical simulations.
    • Python, Numpy, Parallel programming (MPI)
    • English (working language)

    Specific Requirements

    This internship could lead to a PhD (funding already secured), on this topic or closely related ones.


    Languages
    FRENCH
    Level
    Basic

    Languages
    ENGLISH
    Level
    Good

    Additional Information
    Benefits
    • Subsidized meals
    • Partial reimbursement of public transport costs
    • Leave: for annual work contract 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
    • Possibility of teleworking (90 days / year for an annual contract) and flexible organization of working hours at the condition of team leader approval
    • Social, cultural and sports events and activities

    Internship allowance: €4.35 per hour of actual presence (rate as of 1 January 2025), i.e. about €590 gross per month.

     


    Selection process

    CV + cover letter


    Applications must be submitted online via the Inria website. Processing of applications submitted via other channels is not guaranteed.


    Website for additional job details

    https://jobs.inria.fr/public/classic/en/offres/2025-09447

    Work Location(s)
    Number of offers available
    1
    Company/Institute
    Inria
    Country
    France
    City
    Saint Martin d'Hères


    Contact
    City

    LE CHESNAY CEDEX
    Website

    http://www.inria.fr
    Street

    Domaine de Voluceau - Rocquencourt
    Postal Code

    78153

    STATUS: EXPIRED
