-
The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results
Authors:
Jingkai Wang,
Jue Gong,
Zheng Chen,
Kai Liu,
Jiatong Li,
Yulun Zhang,
Radu Timofte,
Jiachen Tu,
Yaokun Shi,
Guoyi Xu,
Yaoxin Jiang,
Jiajia Liu,
Yingsi Chen,
Yijiao Liu,
Hui Li,
Yu Wang,
Congchao Zhu,
Alexandru-Gabriel Lefterache,
Anamaria Radoi,
Chuanyue Yan,
Tao Lu,
Yanduo Zhang,
Kanghui Zhao,
Jiaming Wang,
Yuqi Li
, et al. (28 additional authors not shown)
Abstract:
This paper reviews the NTIRE 2026 challenge on real-world face restoration, highlighting the proposed solutions and the resulting outcomes. The challenge focuses on generating natural and realistic outputs while maintaining identity consistency. Its goal is to advance state-of-the-art solutions for perceptual quality and realism, without imposing constraints on computational resources or training data. Performance is evaluated using a weighted image quality assessment (IQA) score, with the AdaFace model serving as an identity checker. The competition attracted 96 registrants, with 10 teams submitting valid models; ultimately, 9 teams achieved valid scores in the final ranking. This collaborative effort advances the performance of real-world face restoration while offering an in-depth overview of the latest trends in the field.
Submitted 12 April, 2026;
originally announced April 2026.
-
Dialectal and Low-Resource Machine Translation for Aromanian
Authors:
Alexandru-Iulius Jerpelea,
Alina Rădoi,
Sergiu Nisioi
Abstract:
This paper presents the process of building a neural machine translation system with support for English, Romanian, and Aromanian, an endangered Eastern Romance language. The primary contribution of this research is twofold: (1) the creation of the most extensive Aromanian-Romanian parallel corpus to date, consisting of 79,000 sentence pairs, and (2) the development and comparative analysis of several machine translation models optimized for Aromanian. To accomplish this, we introduce a suite of auxiliary tools, including a language-agnostic sentence embedding model for text mining and automated evaluation, complemented by a diacritics conversion system for different writing standards. This research contributes to both computational linguistics and language preservation efforts by establishing essential resources for a historically under-resourced language. All datasets, trained models, and associated tools are publicly available: https://huggingface.co/aronlp and https://arotranslate.com
Submitted 7 January, 2025; v1 submitted 23 October, 2024;
originally announced October 2024.
-
Temporal aggregation of audio-visual modalities for emotion recognition
Authors:
Andreea Birhala,
Catalin Nicolae Ristea,
Anamaria Radoi,
Liviu Cristian Dutu
Abstract:
Emotion recognition plays a pivotal role in affective computing and in human-computer interaction. Current technological developments open up increasing possibilities for collecting data about the emotional state of a person. In general, human perception of the emotion transmitted by a subject is based on vocal and visual information collected in the first seconds of interaction with the subject. As a consequence, the integration of verbal (i.e., speech) and non-verbal (i.e., image) information is the preferred choice in most current approaches to emotion recognition. In this paper, we propose a multimodal fusion technique for emotion recognition that combines audio-visual modalities from a temporal window, with different temporal offsets for each modality. We show that our proposed method outperforms other methods from the literature as well as the human accuracy rating. The experiments are conducted on the open-access multimodal dataset CREMA-D.
Submitted 8 July, 2020;
originally announced July 2020.
-
Emotion Recognition System from Speech and Visual Information based on Convolutional Neural Networks
Authors:
Nicolae-Catalin Ristea,
Liviu Cristian Dutu,
Anamaria Radoi
Abstract:
Emotion recognition has become an important field of research in the human-computer interaction domain. The latest advancements in the field show that combining visual with audio information leads to better results than using either source of information separately. From a visual point of view, a human emotion can be recognized by analyzing the facial expression of the person. More precisely, the human emotion can be described through a combination of several Facial Action Units. In this paper, we propose a system, based on deep Convolutional Neural Networks, that recognizes emotions with a high accuracy rate and in real time. In order to increase the accuracy of the recognition system, we also analyze the speech data and fuse the information coming from both sources, i.e., visual and audio. Experimental results show the effectiveness of the proposed scheme for emotion recognition and the importance of combining visual with audio data.
Submitted 29 February, 2020;
originally announced March 2020.