Jobs

Post-doctoral position M/F – Hierarchical goal-oriented data compression with AI

Posted on 15 September 2025

Location Rennes (35), France

Web site http://www.fr.mitsubishielectric-rce.eu/

Job reference SASFT046 

Contract Post-doc position – 12 months

Overall context

Mitsubishi Electric R&D Centre Europe (MERCE), located in the Rennes Atalante technology park, is a key player in the Mitsubishi Electric Group’s global research and development activities. 

Within MERCE, the Digital Information Systems (DIS) division hosts the Synergistic Autonomous Systems (SAS) team, which focuses its research on autonomous systems. The team places particular emphasis on the synergy between multiple systems and technological domains, including telecommunications, control, artificial intelligence, and autonomy. 

Research project

Connected systems generate massive volumes of data, making transmission increasingly costly in terms of bandwidth and energy. Traditional compression methods often aim to preserve reconstruction quality, but this is not always necessary. 

The Hierarchical Goal-Oriented Compression with AI approach focuses instead on maintaining the performance of downstream tasks—such as detection, classification, or control—by selectively compressing and transmitting only the most relevant information. 

Advances in Compression through AI and connections with generative AI 

The integration of AI has significantly transformed the field of data compression: 

. Latent Space Modeling: Works such as [1], in the field of standard image compression, improve the estimation of the probability distribution of the latent variable, which is essential for effective entropy coding. They also show how to design an encoder that accounts for the bit size of the latent variable. 

. Generative Models: State-of-the-art image generation models [2][3] rely on fine-grained probability estimation of the latent space and autoregressive sampling [2], demonstrating the power of transformer neural networks for encoding and modeling complex data distributions. 

These developments show that modern neural architectures offer powerful tools for both information encoding and probability estimation, which are central to efficient compression. 
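As a toy illustration of why accurate probability estimation is central to compression: the ideal entropy-coding cost of a quantized latent sequence is the sum of -log2 p(symbol) under the estimated model, so a better model directly yields a smaller bitstream. The function and distribution below are illustrative sketches, not taken from the cited works:

```python
import math

def rate_bits(latents, probs):
    """Ideal entropy-coding cost of a latent sequence:
    sum of -log2 p(symbol) under the estimated model."""
    return sum(-math.log2(probs[z]) for z in latents)

# Toy estimated distribution over 4 quantized latent symbols.
probs = {0: 0.5, 1: 0.25, 2: 0.125, 3: 0.125}
latents = [0, 0, 1, 2, 0, 3]
print(rate_bits(latents, probs))  # 11.0 bits
```

An entropy coder such as an arithmetic coder approaches this bound in practice, which is why learned compression systems optimize this quantity directly.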

In fact, recent work started to highlight strong connections between the field of compression and generative AI [4][5]. 
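The connection exploited in [4][5] can be sketched in a few lines: any predictive model is implicitly a compressor, because each token can in principle be arithmetic-coded with the model's predictive distribution at a cost of -log2 p(token | context) bits. The bigram model below is a purely hypothetical stand-in for a language model:

```python
import math

def code_length_bits(tokens, model):
    """Ideal compressed size of a token sequence when each token is
    arithmetic-coded with the model's predictive distribution."""
    bits = 0.0
    ctx = "<s>"
    for tok in tokens:
        bits += -math.log2(model[ctx][tok])
        ctx = tok
    return bits

# Hypothetical bigram predictive model (probabilities sum to 1 per context).
model = {
    "<s>": {"the": 0.5, "a": 0.5},
    "the": {"cat": 0.25, "dog": 0.75},
    "cat": {"sat": 1.0},
}
print(code_length_bits(["the", "cat", "sat"], model))  # 3.0 bits
```

The better the model predicts the data, the shorter the code, which is the core observation behind using large language models as general-purpose compressors.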

Hierarchical and Adaptive Encoding for Digital Twins 

In applications such as digital twin updates, a hierarchical goal-oriented strategy is crucial. For example, systems might first transmit coarse information—like object positions and sizes—and then send finer details as needed, depending on available resources and task requirements. The system should also identify information useful for several tasks and therefore learn to extract relevant common information [6]. 
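A minimal sketch of such a coarse-to-fine transmission policy, assuming each layer has a known bit cost and the system fills a bit budget in order of importance (the layer names and costs are illustrative, not from the project description):

```python
def transmit_hierarchical(layers, budget_bits):
    """Send layers in order (coarse -> fine) until the bit budget runs out.
    Each layer is a (name, cost_bits) pair."""
    sent, used = [], 0
    for name, cost in layers:
        if used + cost > budget_bits:
            break  # finer layers wait for more resources
        sent.append(name)
        used += cost
    return sent, used

layers = [("positions", 64), ("sizes", 32), ("shapes", 256), ("textures", 1024)]
print(transmit_hierarchical(layers, budget_bits=400))
# (['positions', 'sizes', 'shapes'], 352)
```

In the goal-oriented setting, the ordering and the per-layer bit allocation would themselves be learned from the downstream tasks rather than fixed by hand.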

Research Objectives 

This research aims to explore how best to leverage these recent technical advances to design compression strategies that are both hierarchical and goal-oriented, enabling efficient and intelligent data transmission tailored to specific tasks. We aim to improve on existing work in this field, such as [7][8][9][10], and to adapt it to the considered scenario. 


Detailed research objectives 

. Implement state-of-the-art compression and generative AI algorithms by reproducing training procedures across diverse datasets to validate and benchmark performance. 

. Develop and integrate technical solutions proposed by MERCE researchers into experimental or production environments. 

. Contribute to original ideas and innovations to advance research in goal-oriented compression and related AI-driven data processing techniques. 


Prerequisites 

. PhD in Artificial Intelligence or Information Processing, with a focus on data compression or information theory. 

. Proficiency in Python programming, including experience with relevant libraries and frameworks. 

. Hands-on experience with neural networks, including architecture design and training procedures. 

. Background in compression systems and entropy coding; knowledge of information theory is considered a strong asset. 

Supervisor

Vincent CORLAY, Senior Researcher 

Duration

12 months

Period

As soon as possible, from October 2025 

Contact

Magali BRANCHEREAU, HR Manager (jobs@fr.merce.mee.com) 

Please send us your application (CV and cover letter in PDF format), specifying the reference of the job offer. 

References

[1] D. Minnen, J. Ballé, and G. D. Toderici. Joint Autoregressive and Hierarchical Priors for Learned Image Compression. Advances in Neural Information Processing Systems, 31, 2018. 

[2] P. Sun, Y. Jiang, S. Chen, S. Zhang, B. Peng, P. Luo, Z. Yuan. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation. arXiv preprint arXiv:2406.06525, 2024. 

[3] K. Tian, Y. Jiang, Z. Yuan, B. Peng, L. Wang. Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction. Advances in Neural Information Processing Systems (NeurIPS), 2024. 

[4] C. S. K. Valmeekam, K. Narayanan, D. Kalathil, J.-F. Chamberland, S. Shakkottai. LLMZip: Lossless Text Compression using Large Language Models. arXiv preprint arXiv:2306.04050, 2023. 

[5] G. Delétang, A. Ruoss, P.-A. Duquenne, E. Catt, T. Genewein, C. Mattern, J. Grau-Moya, L. K. Wenliang, M. Aitchison, L. Orseau, M. Hutter, J. Veness. Language Modeling Is Compression. Proceedings of the International Conference on Learning Representations (ICLR), 2024. 

[6] M. Kleinman, A. Achille, S. Soatto, J. Kao. Gács-Körner Common Information Variational Autoencoder. Advances in Neural Information Processing Systems (NeurIPS), 2023. 

[7] M. Mortaheb, M. A. A. Khojastepour, S. T. Chakradhar, S. Ulukus. Semantic Multi-Resolution Communications. arXiv preprint arXiv:2308.11604, 2023. 

[8] W. Qian, B. Chen, Y. Zhang, G. Wen, F. Gechter. Multi-Task Variational Information Bottleneck. arXiv preprint arXiv:2007.00339, 2020. 

[9] J. Machado de Freitas, S. Berg, B. C. Geiger, M. Mücke. Compressed Hierarchical Representations for Multi-Task Learning and Task Clustering. Proceedings of the International Joint Conference on Neural Networks (IJCNN), arXiv preprint arXiv:2205.15882, 2022. 

[10] Z. Kang, K. Grauman, F. Sha. Learning with Whom to Share in Multi-task Feature Learning. Proceedings of the 28th International Conference on Machine Learning (ICML), Bellevue, WA, USA, 2011.