
Prof. Abdelmalik TALEB-AHMED
IEMN UMR CNRS 8520, UPHF


Research Keywords & Expertise

Data Fusion
Deep Learning
Pattern Recognition
Computer Vision
Machine Learning


Feed

Journal article
Published: 31 August 2021 in Sensors

Since the appearance of the COVID-19 pandemic (first identified in Wuhan, China, at the end of 2019), the recognition of COVID-19 from medical imaging has become an active research topic for the machine learning and computer vision community. This paper is based on the results obtained from the 2021 COVID-19 SPGC challenge, which aims to classify volumetric CT scans as normal, COVID-19, or community-acquired pneumonia (CAP). To this end, we proposed a deep-learning-based approach (CNR-IEMN) that consists of two main stages. In the first stage, we trained four deep learning architectures with a multi-task strategy for slice-level classification. In the second stage, we used the previously trained models with an XGBoost classifier to classify the whole CT scan as normal, COVID-19, or CAP. Our approach achieved good results on the validation set, with an overall accuracy of 87.75% and sensitivities of 96.36%, 52.63%, and 95.83% for COVID-19, CAP, and normal, respectively. On the three SPGC test datasets, our approach ranked fifth overall in the COVID-19 challenge, achieved the best COVID-19 sensitivity, and placed second on two of the three test sets.
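The two-stage pipeline can be sketched as follows. This is a toy illustration, not the paper's implementation: the slice-level CNN outputs are simulated with a biased Dirichlet distribution, and scikit-learn's GradientBoostingClassifier stands in for the XGBoost classifier used in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
CLASSES = ["normal", "COVID-19", "CAP"]

def slice_probs(n_slices, true_class, n_models=4):
    """Stand-in for stage 1: per-slice class probabilities from each of
    four slice-level models, drawn from a Dirichlet biased toward the
    scan's true class."""
    alpha = np.ones(len(CLASSES))
    alpha[true_class] = 5.0
    return rng.dirichlet(alpha, size=(n_models, n_slices))

def scan_features(probs):
    """Stage 2 input: aggregate slice-level outputs into one scan-level
    feature vector (per-model, per-class mean and max over slices)."""
    return np.concatenate([probs.mean(axis=1).ravel(),
                           probs.max(axis=1).ravel()])

# Toy training set of 120 scans with 40 slices each
y = rng.integers(0, len(CLASSES), size=120)
X = np.stack([scan_features(slice_probs(40, c)) for c in y])

# Stage 2: boosted-tree classifier over the aggregated features
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
acc = clf.score(X, y)
```

The key design point is the aggregation step: slice-level probabilities of variable-length scans are reduced to a fixed-size vector before the scan-level classifier is applied.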

ACS Style

Fares Bougourzi; Riccardo Contino; Cosimo Distante; Abdelmalik Taleb-Ahmed. Recognition of COVID-19 from CT Scans Using Two-Stage Deep-Learning-Based Approach: CNR-IEMN. Sensors 2021, 21, 5878.

AMA Style

Fares Bougourzi, Riccardo Contino, Cosimo Distante, Abdelmalik Taleb-Ahmed. Recognition of COVID-19 from CT Scans Using Two-Stage Deep-Learning-Based Approach: CNR-IEMN. Sensors. 2021; 21(17):5878.

Chicago/Turabian Style

Fares Bougourzi; Riccardo Contino; Cosimo Distante; Abdelmalik Taleb-Ahmed. 2021. "Recognition of COVID-19 from CT Scans Using Two-Stage Deep-Learning-Based Approach: CNR-IEMN." Sensors 21, no. 17: 5878.

Journal article
Published: 11 August 2021 in Electronics

Automatic pain recognition from facial expressions is a challenging problem that has attracted significant attention from the research community. This article provides a comprehensive analysis of the topic by comparing several popular off-the-shelf CNN (Convolutional Neural Network) architectures, including MobileNet, GoogleNet, ResNeXt-50, ResNet18, and DenseNet-161. We use these networks in two distinct modes: standalone mode and feature-extractor mode. In standalone mode, the models (i.e., the networks) are used to estimate pain directly. In feature-extractor mode, the "values" of the middle layers are extracted and used as inputs to classifiers such as SVR (Support Vector Regression) and RFR (Random Forest Regression). We perform extensive experiments on the publicly available benchmark database UNBC-McMaster Shoulder Pain. The obtained results are interesting, as they give valuable insights into the usefulness of the hidden CNN layers for automatic pain estimation.
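Feature-extractor mode can be illustrated with a minimal sketch, under the assumption that mid-layer CNN activations have already been extracted (here they are simulated with random vectors, and the pain score is an arbitrary synthetic function of a few dimensions):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Simulated mid-layer activations (e.g. a 512-d pooled CNN feature per
# frame) and a synthetic pain score depending on a few dimensions
X = rng.normal(size=(200, 512))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=200)

# Feature-extractor mode: the CNN features feed classical regressors
svr = SVR(kernel="rbf", C=10.0).fit(X, y)
rfr = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)
```

In the article's setup, swapping which layer's activations populate `X` is what reveals how useful each hidden layer is for pain estimation.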

ACS Style

Safaa El Morabit; Atika Rivenq; Mohammed-En-Nadhir Zighem; Abdenour Hadid; Abdeldjalil Ouahabi; Abdelmalik Taleb-Ahmed. Automatic Pain Estimation from Facial Expressions: A Comparative Analysis Using Off-the-Shelf CNN Architectures. Electronics 2021, 10, 1926.

AMA Style

Safaa El Morabit, Atika Rivenq, Mohammed-En-Nadhir Zighem, Abdenour Hadid, Abdeldjalil Ouahabi, Abdelmalik Taleb-Ahmed. Automatic Pain Estimation from Facial Expressions: A Comparative Analysis Using Off-the-Shelf CNN Architectures. Electronics. 2021; 10(16):1926.

Chicago/Turabian Style

Safaa El Morabit; Atika Rivenq; Mohammed-En-Nadhir Zighem; Abdenour Hadid; Abdeldjalil Ouahabi; Abdelmalik Taleb-Ahmed. 2021. "Automatic Pain Estimation from Facial Expressions: A Comparative Analysis Using Off-the-Shelf CNN Architectures." Electronics 10, no. 16: 1926.

Journal article
Published: 03 May 2021 in Sensors

Most recent state-of-the-art anomaly detection methods are based on apparent-motion and appearance reconstruction networks, using the estimated error between generated and real information as detection features. These approaches achieve promising results while using only normal samples for training. In this paper, our contributions are two-fold. On the one hand, we propose a flexible multi-channel framework to generate multiple types of frame-level features. On the other hand, we study how detection performance can be improved by supervised learning. The multi-channel framework is based on four Conditional GANs (CGANs) that take various types of appearance and motion information as input and produce prediction information as output. These CGANs provide a better feature space in which to represent the distinction between normal and abnormal events. The difference between the generated and ground-truth information is then encoded by the Peak Signal-to-Noise Ratio (PSNR). We propose to classify these features in a classical supervised scenario by building a small training set from a few abnormal samples of each dataset's original test set. A binary Support Vector Machine (SVM) is applied for frame-level anomaly detection. Finally, we use Mask R-CNN as a detector to perform object-centric anomaly localization. Our solution is extensively evaluated on the Avenue, Ped1, Ped2, and ShanghaiTech datasets. Our experimental results demonstrate that PSNR features combined with a supervised SVM are better than the error maps computed by previous methods. We achieve state-of-the-art frame-level AUC on Ped1 and ShanghaiTech. Notably, on the most challenging dataset, ShanghaiTech, our supervised model outperforms the state-of-the-art unsupervised strategies by up to 9%.
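A minimal sketch of the frame-level detection step, under simplifying assumptions: the four CGAN channels are not trained here; their PSNR outputs are simulated directly (abnormal frames are predicted worse, hence lower PSNR), and a binary SVM separates normal from abnormal frames, as the paper describes.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def psnr(pred, real, max_val=1.0):
    """Peak Signal-to-Noise Ratio between a generated and a real frame."""
    mse = np.mean((np.asarray(pred) - np.asarray(real)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def channel_psnrs(n_frames, abnormal):
    """Simulated PSNR features from the four CGAN channels: abnormal
    frames reconstruct worse, so their PSNR is lower."""
    base = 22.0 if abnormal else 30.0
    return base + rng.normal(scale=2.0, size=(n_frames, 4))

X = np.vstack([channel_psnrs(150, abnormal=False),
               channel_psnrs(50, abnormal=True)])
y = np.array([0] * 150 + [1] * 50)   # 1 = abnormal frame

# Frame-level anomaly detection with a binary SVM over PSNR features
svm = SVC(kernel="rbf").fit(X, y)
```

Encoding each frame as one PSNR value per generative channel is what turns the reconstruction errors into a compact, classifier-friendly feature vector.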

ACS Style

Tuan-Hung Vu; Jacques Boonaert; Sebastien Ambellouis; Abdelmalik Taleb-Ahmed. Multi-Channel Generative Framework and Supervised Learning for Anomaly Detection in Surveillance Videos. Sensors 2021, 21, 3179.

AMA Style

Tuan-Hung Vu, Jacques Boonaert, Sebastien Ambellouis, Abdelmalik Taleb-Ahmed. Multi-Channel Generative Framework and Supervised Learning for Anomaly Detection in Surveillance Videos. Sensors. 2021; 21(9):3179.

Chicago/Turabian Style

Tuan-Hung Vu; Jacques Boonaert; Sebastien Ambellouis; Abdelmalik Taleb-Ahmed. 2021. "Multi-Channel Generative Framework and Supervised Learning for Anomaly Detection in Surveillance Videos." Sensors 21, no. 9: 3179.

Journal article
Published: 09 March 2021 in Journal of Imaging

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between cancer grades. The development of Whole Slide Images (WSIs) has provided the data required for creating automatic tissue-phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to assign the texture features to distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two Ensemble-CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases.
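The Mean-Ensemble-CNN idea can be sketched in a few lines: average the member CNNs' softmax probability vectors and take the arg-max class. The probability values below are hypothetical placeholders for real model outputs.

```python
import numpy as np

def mean_ensemble(prob_list):
    """Mean-Ensemble-CNN: average the member CNNs' softmax probability
    vectors, then take the arg-max class per sample."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Hypothetical softmax outputs of two member CNNs for 3 tissue patches
# over 4 classes
p1 = np.array([[0.7, 0.1, 0.1, 0.1],
               [0.2, 0.5, 0.2, 0.1],
               [0.1, 0.1, 0.2, 0.6]])
p2 = np.array([[0.6, 0.2, 0.1, 0.1],
               [0.1, 0.3, 0.5, 0.1],
               [0.2, 0.1, 0.1, 0.6]])
pred = mean_ensemble([p1, p2])   # one class index per patch
```

Averaging probabilities (rather than hard votes) lets a confident member outweigh an uncertain one, which is typically why mean ensembling beats the individual networks.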

ACS Style

Emanuela Paladini; Edoardo Vantaggiato; Fares Bougourzi; Cosimo Distante; Abdenour Hadid; Abdelmalik Taleb-Ahmed. Two Ensemble-CNN Approaches for Colorectal Cancer Tissue Type Classification. Journal of Imaging 2021, 7, 51.

AMA Style

Emanuela Paladini, Edoardo Vantaggiato, Fares Bougourzi, Cosimo Distante, Abdenour Hadid, Abdelmalik Taleb-Ahmed. Two Ensemble-CNN Approaches for Colorectal Cancer Tissue Type Classification. Journal of Imaging. 2021; 7(3):51.

Chicago/Turabian Style

Emanuela Paladini; Edoardo Vantaggiato; Fares Bougourzi; Cosimo Distante; Abdenour Hadid; Abdelmalik Taleb-Ahmed. 2021. "Two Ensemble-CNN Approaches for Colorectal Cancer Tissue Type Classification." Journal of Imaging 7, no. 3: 51.

Journal article
Published: 03 March 2021 in Sensors

The recognition of COVID-19 infection from X-ray images is an emerging field in the machine learning and computer vision community. Despite the great efforts made in this field since the appearance of COVID-19 in 2019, it still suffers from two drawbacks. First, the number of available X-ray scans labeled as COVID-19-infected is relatively small. Second, the works carried out in the field are fragmented: there are no unified data, classes, or evaluation protocols. In this work, based on public and newly collected data, we propose two X-ray COVID-19 databases: a three-class and a five-class COVID-19 dataset. For both databases, we evaluate different deep learning architectures. Moreover, we propose an Ensemble-CNNs approach that outperforms the individual deep learning architectures and shows promising results on both databases. Our proposed Ensemble-CNNs achieved high performance in the recognition of COVID-19 infection, with accuracies of 100% and 98.1% in the three-class and five-class scenarios, respectively, and overall recognition accuracies of 75.23% and 81.0%. We make our databases of COVID-19 X-ray scans publicly available to encourage other researchers to use them as a benchmark for their studies and comparisons.
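A rough sketch of the learned-fusion variant of the ensemble idea, with all member outputs simulated: the concatenated softmax vectors of the member CNNs are fed to a small fully connected network that learns how to combine them. This is an illustration only; scikit-learn's MLPClassifier stands in for the fusion network, and the member outputs are drawn from a biased Dirichlet rather than produced by real CNNs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
N_CLASSES, N_MODELS = 3, 3   # e.g. the three-class scenario

def member_probs(labels):
    """Simulated softmax outputs of the member CNNs, concatenated per
    sample and biased toward the true label."""
    rows = []
    for c in labels:
        alpha = np.ones(N_CLASSES)
        alpha[c] = 6.0
        rows.append(rng.dirichlet(alpha, size=N_MODELS).ravel())
    return np.array(rows)

y = rng.integers(0, N_CLASSES, size=300)
X = member_probs(y)   # shape (300, N_MODELS * N_CLASSES)

# A small fully connected network learns to fuse the member outputs
fuser = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=3).fit(X, y)
```

Unlike simple averaging, the learned fuser can down-weight a member that is systematically wrong on particular classes.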

ACS Style

Edoardo Vantaggiato; Emanuela Paladini; Fares Bougourzi; Cosimo Distante; Abdenour Hadid; Abdelmalik Taleb-Ahmed. COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases. Sensors 2021, 21, 1742.

AMA Style

Edoardo Vantaggiato, Emanuela Paladini, Fares Bougourzi, Cosimo Distante, Abdenour Hadid, Abdelmalik Taleb-Ahmed. COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases. Sensors. 2021; 21(5):1742.

Chicago/Turabian Style

Edoardo Vantaggiato; Emanuela Paladini; Fares Bougourzi; Cosimo Distante; Abdenour Hadid; Abdelmalik Taleb-Ahmed. 2021. "COVID-19 Recognition Using Ensemble-CNNs in Two New Chest X-ray Databases." Sensors 21, no. 5: 1742.

Review
Published: 23 July 2020 in Electronics

Face recognition is one of the most active research fields of computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interaction. However, identifying a face in a crowd raises serious questions about individual freedoms and poses ethical issues. Significant methods, algorithms, approaches, and databases have been proposed over recent years to study constrained and unconstrained face recognition. 2D approaches have reached some degree of maturity and report very high recognition rates. This performance is achieved in controlled environments where the acquisition parameters, such as lighting, viewing angle, and camera-to-subject distance, are fixed. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, this performance degrades dramatically. 3D approaches were proposed as an alternative solution to these problems. The advantage of 3D data lies in its invariance to pose and lighting conditions, which enhances the efficiency of recognition systems. 3D data, however, is somewhat sensitive to changes in facial expression. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions. We specifically concentrate on the most recent databases and on 2D and 3D face recognition methods. We also pay particular attention to deep learning approaches, as they represent the current state of the art in this field. Open issues are examined and potential directions for research in facial recognition are proposed, in order to provide the reader with a point of reference for topics that deserve consideration.

ACS Style

Insaf Adjabi; Abdeldjalil Ouahabi; Amir Benzaoui; Abdelmalik Taleb-Ahmed. Past, Present, and Future of Face Recognition: A Review. Electronics 2020, 9, 1188.

AMA Style

Insaf Adjabi, Abdeldjalil Ouahabi, Amir Benzaoui, Abdelmalik Taleb-Ahmed. Past, Present, and Future of Face Recognition: A Review. Electronics. 2020; 9(8):1188.

Chicago/Turabian Style

Insaf Adjabi; Abdeldjalil Ouahabi; Amir Benzaoui; Abdelmalik Taleb-Ahmed. 2020. "Past, Present, and Future of Face Recognition: A Review." Electronics 9, no. 8: 1188.

Journal article
Published: 14 July 2020 in Neural Networks

In the last few years, human age estimation from face images has attracted the attention of many researchers in the computer vision and machine learning fields, owing to its numerous applications. In this paper, we propose a new architecture for age estimation based on facial images. It is built on a cascade of ensembles of classification trees, recently known as a Deep Random Forest (DRF). Our architecture is composed of two types of DRF. The first type extends and enhances the feature representation of a given facial descriptor. The second type operates on the fused form of all enhanced representations in order to predict the age while taking into account the fuzziness of human age. While the proposed methodology can work with any kind of image feature, the face descriptors adopted in this work are off-the-shelf deep features, retaining both the richness of deep features and the powerful enhancement and decision stages provided by the proposed architecture. Experiments conducted on six public databases prove the superiority of the proposed architecture over other state-of-the-art methods.
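The cascade idea can be sketched with scikit-learn random forests: each level appends its forests' predictions to the representation, enhancing it for the next level. This is a loose illustration, not the paper's DRF: the descriptors and ages are synthetic, and for brevity the appended predictions are computed in-sample (a real DRF would use out-of-fold predictions to avoid leakage).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

# Synthetic facial descriptors and ages
X = rng.normal(size=(300, 64))
age = 20 + 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(size=300)

def drf_level(feats, y, n_forests=2, seed=0):
    """One cascade level: append each forest's prediction to the input
    representation, producing an enhanced representation."""
    preds = [RandomForestRegressor(n_estimators=50, random_state=seed + k)
             .fit(feats, y).predict(feats) for k in range(n_forests)]
    return np.column_stack([feats] + preds)

# Two enhancement levels, then a final forest on the fused representation
X1 = drf_level(X, age, seed=0)
X2 = drf_level(X1, age, seed=10)
final = RandomForestRegressor(n_estimators=100, random_state=99).fit(X2, age)
```

The depth here comes from stacking forest levels rather than neural layers, which is what makes the approach trainable without back-propagation.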

ACS Style

O. Guehairia; A. Ouamane; F. Dornaika; A. Taleb-Ahmed. Feature fusion via Deep Random Forest for facial age estimation. Neural Networks 2020, 130, 238-252.

AMA Style

O. Guehairia, A. Ouamane, F. Dornaika, A. Taleb-Ahmed. Feature fusion via Deep Random Forest for facial age estimation. Neural Networks. 2020; 130:238-252.

Chicago/Turabian Style

O. Guehairia; A. Ouamane; F. Dornaika; A. Taleb-Ahmed. 2020. "Feature fusion via Deep Random Forest for facial age estimation." Neural Networks 130: 238-252.

Journal article
Published: 27 March 2020 in Electronics

Reliable environment perception is a crucial task for autonomous driving, especially in dense traffic areas. Recent improvements and breakthroughs in scene understanding for intelligent transportation systems are mainly based on deep learning and the fusion of different modalities. In this context, we introduce OLIMP: A heterOgeneous Multimodal Dataset for Advanced EnvIronMent Perception. This is the first public, multimodal, and synchronized dataset that includes UWB radar data, acoustic data, narrow-band radar data, and images. OLIMP comprises 407 scenes and 47,354 synchronized frames covering four categories: pedestrian, cyclist, car, and tram. The dataset includes various challenges related to dense urban traffic, such as cluttered environments and different weather conditions. To demonstrate the usefulness of the introduced dataset, we propose a fusion framework that combines the four modalities for multi-object detection. The obtained results are promising and encourage future research.
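As a hypothetical illustration of how a late-fusion step over the four modalities might combine per-object detection confidences (the abstract does not detail the paper's actual fusion framework; the function, weights, and scores below are illustrative assumptions):

```python
import numpy as np

def late_fusion(scores, weights=None):
    """Weighted average of per-modality detection confidences for one
    candidate object (camera, UWB radar, narrow-band radar, acoustics)."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    return float(np.dot(weights, scores))

# Camera and both radars are confident; acoustics is unsure
fused = late_fusion([0.9, 0.8, 0.7, 0.4])   # ~0.7
```

Non-uniform weights would let a deployment favor modalities that are more reliable in the current conditions (e.g., radar over camera in fog).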

ACS Style

Amira Mimouna; Ihsen Alouani; Anouar Ben Khalifa; Yassin El Hillali; Abdelmalik Taleb-Ahmed; Atika Menhaj; Abdeldjalil Ouahabi; Najoua Essoukri Ben Amara. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception. Electronics 2020, 9, 560.

AMA Style

Amira Mimouna, Ihsen Alouani, Anouar Ben Khalifa, Yassin El Hillali, Abdelmalik Taleb-Ahmed, Atika Menhaj, Abdeldjalil Ouahabi, Najoua Essoukri Ben Amara. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception. Electronics. 2020; 9(4):560.

Chicago/Turabian Style

Amira Mimouna; Ihsen Alouani; Anouar Ben Khalifa; Yassin El Hillali; Abdelmalik Taleb-Ahmed; Atika Menhaj; Abdeldjalil Ouahabi; Najoua Essoukri Ben Amara. 2020. "OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception." Electronics 9, no. 4: 560.