I am currently a research scientist at Saclay AI, a startup that I founded in 2025 to conduct research in speech and language technologies.
Between December 2023 and March 2025, I was a postdoctoral researcher at GETALP, a joint research team of Université Grenoble Alpes and CNRS, where I worked on self-supervised learning for multimodal speech-text models under the supervision of Didier Schwab and Marco Dinarelli.
Between March 2020 and November 2023, I was a PhD candidate at GETALP, working on multilingual speech-to-text translation. I was advised by Didier Schwab and Benjamin Lecouteux, and during my first year also by Laurent Besacier. My PhD was funded by
Prior to my PhD, I worked on FlauBERT, a French language model, as part of my MSc internship.
Education
2020-2023 | PhD in Computer Science | Université Grenoble Alpes
2018-2019 | MSc in Data Science | CentraleSupélec & ESSEC
News
- 2026-02-12: Pantagruel has been accepted for an oral presentation at LREC 2026.
- 2026-01-09: New preprint: Pantagruel: Unified Self-Supervised Encoders for French Text and Speech.
- 2024-03-25: I have successfully defended my PhD.
- 2023-12-01: I have started my postdoc on self-supervised multimodal models with Didier Schwab and Marco Dinarelli.
- 2023-04-24: CTC Meets Optimal Transport has been accepted for an oral presentation at ICML 2023.
- 2023-01-27: New preprint: Pre-training for Speech Translation: CTC Meets Optimal Transport.
- 2021-10-11: Self-supervised learning with LeBenchmark has been accepted at NeurIPS 2021.
- 2021-06-02: LeBenchmark has been accepted at Interspeech 2021.
- 2021-05-06: Lightweight Adapter Tuning for Multilingual Speech Translation has been accepted at ACL 2021.
- 2020-10-22: Dual-decoder Transformer has been accepted for an oral presentation at COLING 2020.
- 2020-02-13: FlauBERT has been accepted at LREC 2020.
- 2019-12-19: New preprint: FlauBERT: Unsupervised Language Model Pre-training for French.