
Dr. Anouar BEN KHALIFA
Université de Sousse, École Nationale d'Ingénieurs de Sousse, LATIS (Laboratory of Advanced Technology and Intelligent Systems), 4023 Sousse, Tunisia

Basic Info


Research Keywords & Expertise

Biometrics
Data Fusion
Deep Learning
Face Recognition
Gesture Recognition
Fingerprints
Action Recognition
Pedestrian Detection
Intelligent Transportation Systems
Activity Recognition


Short Biography

Anouar BEN KHALIFA received the engineering degree (2005) from the National Engineering School of Monastir, University of Monastir (Tunisia), and the M.Sc. (2007) and Ph.D. (2014) degrees in Electrical Engineering, Signal Processing, System Analysis and Pattern Recognition from the National Engineering School of Tunis, University of Tunis El Manar (Tunisia). He is currently an Associate Professor in Electrical and Computer Engineering at the National Engineering School of Sousse, University of Sousse (Tunisia). He is a founding member of the LATIS research laboratory (Laboratory of Advanced Technology and Intelligent Systems). He was the head of the Department of Industrial Electronic Engineering at the National Engineering School of Sousse from December 2016 to September 2019. His research interests are artificial intelligence, pattern recognition, image processing, machine learning and information fusion.


Feed

Journal article
Published: 30 June 2021 in Traitement du Signal

In recent years, the face recognition task has attracted the attention of researchers due to its efficiency in several domains such as surveillance and access control. Unfortunately, multiple challenges decrease the performance of face recognition. Partial occlusion is the most challenging one since it often causes a great loss of information. The main purpose of this paper is to prove that facial reconstruction improves the results of facial recognition compared to de-occlusion and full-face recognition in the presence of occlusion. Our objective is to achieve occluded-face recognition, de-occluded-face recognition, and reconstructed-face recognition. Regarding face reconstruction, we introduce two different methods based on Laplacian pyramid blending and CycleGANs. In order to validate our work, we perform two different feature extraction techniques: hand-crafted features and learned features exploiting the final layers of a pre-trained deep architecture model. The experimental results on the EURECOM Kinect Face Dataset (EKFD) and the IST-EURECOM Light Field Face Database (IST-EURECOM LFFD) show that the proposed face reconstruction approach, compared with the face de-occlusion and occluded-face recognition ones, clearly improves the face recognition task. Our method boosts the classification performance in comparison with the state-of-the-art methods, achieving 94.66% on EKFD and 72.35% on IST-EURECOM LFFD.
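As an illustration of the Laplacian-pyramid blending used in one of the reconstruction methods, the sketch below blends two images band by band with a mask pyramid. It is a minimal numpy rendering of the general technique only; the pyramid depth, filters and occlusion masks of the paper's actual pipeline are not specified here, and all function names are hypothetical.

```python
import numpy as np

def downsample(img):
    # 2x2 box filter, then keep every other pixel
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    # nearest-neighbour upsampling, trimmed to the target shape
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], np.asarray(img, float)
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))  # band-pass residual
        cur = down
    pyr.append(cur)                                  # coarsest Gaussian level
    return pyr

def blend(img_a, img_b, mask, levels=3):
    # Blend every frequency band separately, weighted by a mask pyramid
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    masks = [np.asarray(mask, float)]
    for _ in range(levels):
        masks.append(downsample(masks[-1]))
    bands = [m * a + (1 - m) * b for m, a, b in zip(masks, pa, pb)]
    out = bands[-1]
    for band in reversed(bands[:-1]):                # reconstruct fine-to-coarse
        out = upsample(out, band.shape) + band
    return out
```

With a mask of ones the reconstruction returns the first image exactly, which is a convenient sanity check on the pyramid code.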

ACS Style

Laila Ouannes; Anouar Ben Khalifa; Najoua Essoukri Ben Amara. Comparative Study Based on De-Occlusion and Reconstruction of Face Images in Degraded Conditions. Traitement du Signal 2021, 38, 573-585.

AMA Style

Laila Ouannes, Anouar Ben Khalifa, Najoua Essoukri Ben Amara. Comparative Study Based on De-Occlusion and Reconstruction of Face Images in Degraded Conditions. Traitement du Signal. 2021; 38 (3):573-585.

Chicago/Turabian Style

Laila Ouannes; Anouar Ben Khalifa; Najoua Essoukri Ben Amara. 2021. "Comparative Study Based on De-Occlusion and Reconstruction of Face Images in Degraded Conditions." Traitement du Signal 38, no. 3: 573-585.

Journal article
Published: 08 January 2021 in IEEE Sensors Journal

The development of safe intelligent transportation systems (ITS) has driven extensive research to come up with efficient environment perception techniques with a variety of sensors. In short-range settings, Ultra Wide-Band (UWB) radars represent a promising technology for building reliable obstacle detection systems as they are robust to environmental conditions. However, UWB radars suffer from a segmentation challenge: localizing relevant regions of interest (ROIs) within their signals. This paper proposes a segmentation approach to detect ROIs in an environment perception-dedicated UWB radar. Specifically, we implement a differential entropy analysis to detect ROIs. We evaluate our technique on a benchmark of more than 47 thousand samples. The obtained results show higher performance in terms of obstacle detection compared to state-of-the-art techniques, and a stable robustness even with low amplitude signals.
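The entropy-based segmentation idea can be sketched as follows: compute a sliding-window Gaussian differential entropy, 0.5·ln(2πeσ²), over a 1-D signal and threshold it to flag candidate ROIs. The window size and the mean-plus-k-sigma threshold rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sliding_differential_entropy(signal, win=64):
    # Differential entropy of a Gaussian fit per window: 0.5 * ln(2*pi*e*var)
    n = len(signal) - win + 1
    ent = np.empty(n)
    for i in range(n):
        var = np.var(signal[i:i + win]) + 1e-12   # avoid log(0)
        ent[i] = 0.5 * np.log(2 * np.pi * np.e * var)
    return ent

def detect_rois(signal, win=64, k=1.0):
    # Flag windows whose entropy exceeds mean + k * std (candidate echoes)
    ent = sliding_differential_entropy(signal, win)
    thr = ent.mean() + k * ent.std()
    return ent > thr                               # boolean mask per window start
```

Windows dominated by low-amplitude noise have low variance, hence low entropy; an obstacle echo raises the local variance and stands out against the threshold.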

ACS Style

Amira Mimouna; Anouar Ben Khalifa; Ihsen Alouani; Najoua Essoukri Ben Amara; Atika Rivenq; Abdelmalik Taleb-Ahmed. Entropy-Based Ultra-Wide Band Radar Signals Segmentation for Multi Obstacle Detection. IEEE Sensors Journal 2021, 21, 8142-8149.

AMA Style

Amira Mimouna, Anouar Ben Khalifa, Ihsen Alouani, Najoua Essoukri Ben Amara, Atika Rivenq, Abdelmalik Taleb-Ahmed. Entropy-Based Ultra-Wide Band Radar Signals Segmentation for Multi Obstacle Detection. IEEE Sensors Journal. 2021; 21 (6):8142-8149.

Chicago/Turabian Style

Amira Mimouna; Anouar Ben Khalifa; Ihsen Alouani; Najoua Essoukri Ben Amara; Atika Rivenq; Abdelmalik Taleb-Ahmed. 2021. "Entropy-Based Ultra-Wide Band Radar Signals Segmentation for Multi Obstacle Detection." IEEE Sensors Journal 21, no. 6: 8142-8149.

Journal article
Published: 25 November 2020 in Traitement du Signal

In the current era, automated security video surveillance systems are particularly needed for human violence recognition. Nevertheless, this task encounters various interlinked difficulties which require efficient solutions as well as feasible methods that provide a relevant distinction between normal human actions and abnormal ones. In this paper, we present an overview of these issues and a literature review of the related works and ongoing research efforts in this field, and we suggest a novel prediction model for violence recognition, based on a preliminary spatio-temporal feature extraction using the material derivative, which describes the rate of change of a particle while in motion with respect to time. The classification is then carried out using a deep learning LSTM technique to classify the generated features into eight specified violent and non-violent categories, and a prediction value for each class of action is calculated. The whole model is trained on a public dataset, and its classification capacity is evaluated with a confusion matrix which assembles all the predictions made by the system with their actual labels. The obtained results are promising and show that the proposed model can be potentially useful for detecting human violence.
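The material derivative mentioned above is Df/Dt = ∂f/∂t + u·∂f/∂x + v·∂f/∂y, evaluated per pixel once a velocity field is available. The sketch below is a generic numpy illustration under the assumption that the optical flow is pre-computed; the paper's exact feature construction may differ.

```python
import numpy as np

def material_derivative_features(frames, flow):
    # Df/Dt = df/dt + u * df/dx + v * df/dy, per pixel and per frame pair.
    # frames: (T, H, W) grey-level video
    # flow:   (T-1, H, W, 2) pixel velocities (u, v), assumed pre-computed
    dfdt = frames[1:] - frames[:-1]              # temporal derivative
    feats = []
    for t in range(len(frames) - 1):
        dfdy, dfdx = np.gradient(frames[t])      # spatial derivatives (rows=y)
        u, v = flow[t, ..., 0], flow[t, ..., 1]
        feats.append(dfdt[t] + u * dfdx + v * dfdy)
    return np.stack(feats)                       # (T-1, H, W) feature maps
```

A static scene with zero flow yields an all-zero response, which matches the intuition that the feature fires only on motion.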

ACS Style

Wafa Lejmi; Anouar Ben Khalifa; Mohamed Ali Mahjoub. A Novel Spatio-Temporal Violence Classification Framework Based on Material Derivative and LSTM Neural Network. Traitement du Signal 2020, 37, 687-701.

AMA Style

Wafa Lejmi, Anouar Ben Khalifa, Mohamed Ali Mahjoub. A Novel Spatio-Temporal Violence Classification Framework Based on Material Derivative and LSTM Neural Network. Traitement du Signal. 2020; 37 (5):687-701.

Chicago/Turabian Style

Wafa Lejmi; Anouar Ben Khalifa; Mohamed Ali Mahjoub. 2020. "A Novel Spatio-Temporal Violence Classification Framework Based on Material Derivative and LSTM Neural Network." Traitement du Signal 37, no. 5: 687-701.

Journal article
Published: 25 August 2020 in IEEE Sensors Journal

Driver behaviors and decisions are crucial factors for on-road driving safety. With a precise driver behavior monitoring system, traffic accidents and injuries can be significantly reduced. However, understanding human behaviors in real-world driving settings is a challenging task because of the uncontrolled conditions including illumination variation, occlusion, and dynamic and cluttered background. In this paper, a Kinect sensor, which provides multimodal signals, is adopted as a driver monitoring sensor to recognize safe driving and the most common distracting secondary in-vehicle actions. We propose a novel soft spatial attention-based network named the Depth-based Spatial Attention network (DSA), which adds a cognitive process to the deep network by selectively focusing on the driver’s silhouette and motion in the cluttered driving scene. In fact, at each time t, we introduce a new weighted RGB frame based on an attention model designed using a depth frame. The final classification accuracy is substantially enhanced compared to the state-of-the-art results with an achieved improvement of up to 27%.
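The idea of weighting each RGB frame by a depth-derived attention map can be sketched as follows. The exponential weighting and the normalisation are illustrative assumptions standing in for the DSA network's learned soft attention; the function name is hypothetical.

```python
import numpy as np

def depth_attention(rgb, depth, sigma=1.0):
    # Attention map from the depth frame: nearer pixels (the driver's
    # silhouette) receive larger weights than the cluttered background.
    att = np.exp(-depth / sigma)     # closer => larger weight
    att = att / att.max()            # rescale so the strongest weight is 1
    return rgb * att[..., None]      # reweight each RGB channel
```

The weighted frame can then be fed to any downstream classifier in place of the raw RGB frame.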

ACS Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. Soft Spatial Attention-Based Multimodal Driver Action Recognition Using Deep Learning. IEEE Sensors Journal 2020, 21, 1918-1925.

AMA Style

Imen Jegham, Anouar Ben Khalifa, Ihsen Alouani, Mohamed Ali Mahjoub. Soft Spatial Attention-Based Multimodal Driver Action Recognition Using Deep Learning. IEEE Sensors Journal. 2020; 21 (2):1918-1925.

Chicago/Turabian Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. 2020. "Soft Spatial Attention-Based Multimodal Driver Action Recognition Using Deep Learning." IEEE Sensors Journal 21, no. 2: 1918-1925.

Journal article
Published: 08 August 2020 in Signal Processing: Image Communication

Driver distraction and fatigue have become one of the leading causes of severe traffic accidents. Hence, driver inattention monitoring systems are crucial. Even with the growing development of advanced driver assistance systems and the introduction of third-level autonomous vehicles, this task is still trending and complex due to challenges such as the illumination change and the dynamic background. To reliably compare and validate driver inattention monitoring methods, a limited number of public datasets are available. In this paper, we put forward a public, well-structured and complete dataset, named Multiview, Multimodal and Multispectral Driver Action Dataset (3MDAD). The dataset is mainly composed of two sets: the first one recorded in daytime and the second one at nighttime. Each set consists of two synchronized data modalities, both from frontal and side views. More than 60 drivers are asked to execute 16 in-vehicle actions under a wide range of naturalistic driving settings. In contrast to other public datasets, 3MDAD presents multiple modalities, spectrums and views under different time and weather conditions. To highlight the utility of our dataset, we independently analyze the driver action recognition results adapted to each modality and those obtained from several combinations of modalities.

ACS Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. A novel public dataset for multimodal multiview and multispectral driver distraction analysis: 3MDAD. Signal Processing: Image Communication 2020, 88, 115960.

AMA Style

Imen Jegham, Anouar Ben Khalifa, Ihsen Alouani, Mohamed Ali Mahjoub. A novel public dataset for multimodal multiview and multispectral driver distraction analysis: 3MDAD. Signal Processing: Image Communication. 2020; 88:115960.

Chicago/Turabian Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. 2020. "A novel public dataset for multimodal multiview and multispectral driver distraction analysis: 3MDAD." Signal Processing: Image Communication 88: 115960.

Journal article
Published: 16 July 2020 in Future Generation Computer Systems

Recent advances in machine-learning, especially in deep neural networks have significantly accelerated the development and deployment of transport-oriented intelligent designs with increasingly high efficiency. While these technologies are exceptionally promising towards revolutionizing our current mobility and reducing the number of road accidents, the way to safe Intelligent Transportation Systems (ITS) remains long. Since pedestrians are the most vulnerable road users, designing accurate pedestrian detection methods is a priority task. However, traditional monocular pedestrian detection methods are limited, especially in occlusion handling. Hence, a collaborative perception scheme in which vehicles no longer restrict their input data to their immediate embedded sensors and rather exploit data from remote sensors is necessary to achieve a more comprehensive environment perception. In this work, we propose a novel public dataset: Infrastructure to Vehicle Multi-View Pedestrian Detection Database (I2V-MVPD) that combines synchronized images from both a mobile camera embedded in a car and a static camera in the road infrastructure. We also propose a new multi-view pedestrian detection framework based on collaborative intelligence between vehicles and infrastructure. Our results show a significant improvement in detection performance over monocular detection.

ACS Style

Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub; Atika Rivenq. A novel multi-view pedestrian detection database for collaborative Intelligent Transportation Systems. Future Generation Computer Systems 2020, 113, 506-527.

AMA Style

Anouar Ben Khalifa, Ihsen Alouani, Mohamed Ali Mahjoub, Atika Rivenq. A novel multi-view pedestrian detection database for collaborative Intelligent Transportation Systems. Future Generation Computer Systems. 2020; 113:506-527.

Chicago/Turabian Style

Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub; Atika Rivenq. 2020. "A novel multi-view pedestrian detection database for collaborative Intelligent Transportation Systems." Future Generation Computer Systems 113: 506-527.

Journal article
Published: 27 June 2020 in Medical Image Analysis

Existing graph analysis techniques, such as dimensionality reduction, filtering and embedding, generally focus on decreasing the dimensionality of graph data (i.e., removing nodes, edges, or both) in diverse predictive learning tasks in pattern recognition, computer vision, and medical data analysis. However, graph super-resolution is strikingly lacking, i.e., the concept of super-resolving low-resolution (LR) graphs with n_r nodes into high-resolution (HR) graphs with n_r' > n_r nodes. Particularly, learning how to automatically generate HR brain connectomes, without resorting to the computationally expensive MRI processing steps such as image registration and parcellation, remains unexplored. To fill this gap, we propose the first technique to super-resolve undirected fully connected graphs with application to brain connectomes. First, we root our brain graph super-resolution (BGSR) framework in learning how to estimate a centered LR population-based brain graph representation, coined as connectional brain template (CBT), acting as a proxy in the target BGSR task. Specifically, we hypothesize that the estimation of a well-representative and centered CBT would help better capture the individuality of each LR brain graph via its residual distance from the population-based CBT. This will eventually allow an accurate identification of the most similar individual graphs to a new testing graph in the LR domain for the target prediction task. Second, we leverage the estimated LR CBT (i.e., population mean) to derive residual LR brain graphs, capturing the deviation of all subjects from the estimated CBT. Third, we learn multi-topology LR graph manifolds using different graph topological measurements (e.g., degree, closeness, betweenness) by estimating residual LR similarity matrices modeling the relationship between pairs of residual LR graphs. These are then fused so we can effectively identify, for each testing LR subject, its K most similar training LR graphs.
Last, the missing testing HR graph is predicted by averaging the HR graphs of the K selected training subjects. Predicted HR from LR functional brain graphs boosted classification results for autistic subjects by 16.48% compared with LR functional graphs.
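A much-simplified sketch of the prediction step described above: the CBT as the element-wise mean of the training LR graphs, residual distances used to select the K most similar training subjects, and averaging of their HR graphs. A single Euclidean residual distance stands in for the paper's fused multi-topology manifolds, so this is an illustration of the workflow only.

```python
import numpy as np

def predict_hr_graph(test_lr, train_lr, train_hr, K=3):
    # CBT: element-wise mean of the training LR graphs (population template)
    cbt = train_lr.mean(axis=0)
    # Residuals: deviation of each LR graph from the CBT
    train_res = np.abs(train_lr - cbt).reshape(len(train_lr), -1)
    test_res = np.abs(test_lr - cbt).ravel()
    # Select the K training subjects whose residuals are closest to the test's
    d = np.linalg.norm(train_res - test_res, axis=1)
    nearest = np.argsort(d)[:K]
    # Predicted HR graph: average of the selected subjects' HR graphs
    return train_hr[nearest].mean(axis=0)
```

With K=1 and a test graph identical to a training graph, the prediction is that subject's own HR graph, a useful sanity check on the selection step.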

ACS Style

Islem Mhiri; Anouar Ben Khalifa; Mohamed Ali Mahjoub; Islem Rekik. Brain graph super-resolution for boosting neurological disorder diagnosis using unsupervised multi-topology connectional brain template learning. Medical Image Analysis 2020, 65, 101768.

AMA Style

Islem Mhiri, Anouar Ben Khalifa, Mohamed Ali Mahjoub, Islem Rekik. Brain graph super-resolution for boosting neurological disorder diagnosis using unsupervised multi-topology connectional brain template learning. Medical Image Analysis. 2020; 65:101768.

Chicago/Turabian Style

Islem Mhiri; Anouar Ben Khalifa; Mohamed Ali Mahjoub; Islem Rekik. 2020. "Brain graph super-resolution for boosting neurological disorder diagnosis using unsupervised multi-topology connectional brain template learning." Medical Image Analysis 65: 101768.

Journal article
Published: 06 June 2020 in Entertainment Computing

Due to the recent development of machine learning and sensor innovations, hand gesture recognition systems become promising for the digital entertainment field. In this paper, we propose a dynamic hand gesture recognition approach using touchless hand motions over a Leap Motion device. First, we analyze the sequential time series data gathered from Leap Motion using Long Short-Term Memory (LSTM) recurrent neural networks for recognition purposes. We exploit basic unidirectional LSTM and bidirectional LSTM separately. Then, we propose a novel architecture by combining the aforementioned models with additional components to give a final prediction network, named Hybrid Bidirectional Unidirectional LSTM (HBU-LSTM). The suggested network improves the model performance significantly by considering the spatial and temporal dependencies between the Leap Motion data and the network layers during the forward and backward pass. The recognition models are examined on two available benchmark datasets, named the LeapGestureDB dataset and the RIT dataset. Experiments demonstrate the potential of the proposed HBU-LSTM network for dynamic hand gesture recognition, with an average recognition rate reaching approximately 90%. Our suggested approach reaches superior performance, in terms of accuracy and computational complexity, over some existing methods for hand gesture recognition.
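A minimal numpy rendering of the idea of feeding bidirectional LSTM outputs into a unidirectional LSTM. This is a rough reading of the HBU-LSTM wiring, a sketch under stated assumptions: layer sizes, gating details and the exact combination scheme in the paper may differ, and the function names are hypothetical.

```python
import numpy as np

def lstm_forward(x, W, U, b):
    # Minimal LSTM over a sequence x of shape (T, d_in);
    # W: (4*d_h, d_in), U: (4*d_h, d_h), b: (4*d_h,). Returns (T, d_h).
    d_h = U.shape[1]
    h, c = np.zeros(d_h), np.zeros(d_h)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    hs = np.empty((len(x), d_h))
    for t in range(len(x)):
        i, f, o, g = np.split(W @ x[t] + U @ h + b, 4)   # four gates
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
        hs[t] = h
    return hs

def hbu_lstm(x, p_fw, p_bw, p_uni):
    # Bidirectional stage: a forward pass plus a pass over the reversed
    # sequence, concatenated per time step, then fed to a unidirectional
    # LSTM; the last hidden state serves as the sequence feature.
    hf = lstm_forward(x, *p_fw)
    hb = lstm_forward(x[::-1], *p_bw)[::-1]
    return lstm_forward(np.concatenate([hf, hb], axis=1), *p_uni)[-1]
```

The bidirectional stage sees both past and future context at every time step, while the final unidirectional layer compresses the combined sequence into one feature vector for classification.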

ACS Style

Safa Ameur; Anouar Ben Khalifa; Med Salim Bouhlel. A novel hybrid bidirectional unidirectional LSTM network for dynamic hand gesture recognition with Leap Motion. Entertainment Computing 2020, 35, 100373.

AMA Style

Safa Ameur, Anouar Ben Khalifa, Med Salim Bouhlel. A novel hybrid bidirectional unidirectional LSTM network for dynamic hand gesture recognition with Leap Motion. Entertainment Computing. 2020; 35:100373.

Chicago/Turabian Style

Safa Ameur; Anouar Ben Khalifa; Med Salim Bouhlel. 2020. "A novel hybrid bidirectional unidirectional LSTM network for dynamic hand gesture recognition with Leap Motion." Entertainment Computing 35: 100373.

Journal article
Published: 02 June 2020 in Journal of Visual Communication and Image Representation

Recently, Hand-Gesture-Recognition (HGR) systems have appreciably changed the way of interaction between humans and computers thanks to advanced sensor technologies like the Leap-Motion-Controller (LMC). Despite the success achieved by many state-of-the-art methods, they have not exploited the rich temporal information existing in the sequential hand gesture data and characterizing the discriminative representation of different hand gesture classes. In this paper, we suggest a novel Chronological-Pattern-Indexing (CPI) approach which encodes the temporal orders of patterns for hand gesture time series data acquired by the LMC sensor. We extract a set of temporal patterns from different optimized projections. Then, we compare their temporal order and we encode the whole sequence with the index of the first coming pattern. We repeat these steps until we generate an efficient feature vector modeling the chronological dynamics of the hand gesture. The experiments demonstrate the potential of the proposed CPI approach for HGR systems.
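One loose reading of the chronological-ordering idea is to rank a set of candidate patterns by the time of their first occurrence in the sequence. The sketch below illustrates only that ordering step; the paper's projection and iterative encoding stages are not reproduced, and the function name is hypothetical.

```python
import numpy as np

def chronological_pattern_index(seq, patterns):
    # Rank patterns by the time of their first occurrence in the sequence;
    # patterns that never occur sort last.
    def first_pos(pat):
        L = len(pat)
        for i in range(len(seq) - L + 1):
            if np.allclose(seq[i:i + L], pat):
                return i
        return len(seq)                       # absent pattern
    firsts = [first_pos(p) for p in patterns]
    # Stable argsort gives the chronological ordering of pattern indices
    return [int(i) for i in np.argsort(firsts, kind="stable")]
```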

ACS Style

Safa Ameur; Anouar Ben Khalifa; Med Salim Bouhlel. Chronological pattern indexing: An efficient feature extraction method for hand gesture recognition with Leap Motion. Journal of Visual Communication and Image Representation 2020, 70, 102842.

AMA Style

Safa Ameur, Anouar Ben Khalifa, Med Salim Bouhlel. Chronological pattern indexing: An efficient feature extraction method for hand gesture recognition with Leap Motion. Journal of Visual Communication and Image Representation. 2020; 70:102842.

Chicago/Turabian Style

Safa Ameur; Anouar Ben Khalifa; Med Salim Bouhlel. 2020. "Chronological pattern indexing: An efficient feature extraction method for hand gesture recognition with Leap Motion." Journal of Visual Communication and Image Representation 70: 102842.

Journal article
Published: 30 April 2020 in Traitement du Signal
ACS Style

Bilel Tarchoun; Anouar BEN Khalifa; Selma Dhifallah; Imen Jegham; Mohamed Mahjou. Hand-Crafted Features vs Deep Learning for Pedestrian Detection in Moving Camera. Traitement du Signal 2020, 37, 209-216.

AMA Style

Bilel Tarchoun, Anouar BEN Khalifa, Selma Dhifallah, Imen Jegham, Mohamed Mahjou. Hand-Crafted Features vs Deep Learning for Pedestrian Detection in Moving Camera. Traitement du Signal. 2020; 37 (2):209-216.

Chicago/Turabian Style

Bilel Tarchoun; Anouar BEN Khalifa; Selma Dhifallah; Imen Jegham; Mohamed Mahjou. 2020. "Hand-Crafted Features vs Deep Learning for Pedestrian Detection in Moving Camera." Traitement du Signal 37, no. 2: 209-216.

Journal article
Published: 27 March 2020 in Electronics

A reliable environment perception is a crucial task for autonomous driving, especially in dense traffic areas. Recent improvements and breakthroughs in scene understanding for intelligent transportation systems are mainly based on deep learning and the fusion of different modalities. In this context, we introduce OLIMP: A heterOgeneous Multimodal Dataset for Advanced EnvIronMent Perception. This is the first public, multimodal and synchronized dataset that includes UWB radar data, acoustic data, narrow-band radar data and images. OLIMP comprises 407 scenes and 47,354 synchronized frames, presenting four categories: pedestrian, cyclist, car and tram. The dataset includes various challenges related to dense urban traffic such as cluttered environment and different weather conditions. To demonstrate the usefulness of the introduced dataset, we propose a fusion framework that combines the four modalities for multi object detection. The obtained results are promising and spur future research.

ACS Style

Amira Mimouna; Ihsen Alouani; Anouar Ben Khalifa; Yassin El Hillali; Abdelmalik Taleb-Ahmed; Atika Menhaj; Abdeldjalil Ouahabi; Najoua Essoukri Ben Amara. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception. Electronics 2020, 9, 560 .

AMA Style

Amira Mimouna, Ihsen Alouani, Anouar Ben Khalifa, Yassin El Hillali, Abdelmalik Taleb-Ahmed, Atika Menhaj, Abdeldjalil Ouahabi, Najoua Essoukri Ben Amara. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception. Electronics. 2020; 9 (4):560.

Chicago/Turabian Style

Amira Mimouna; Ihsen Alouani; Anouar Ben Khalifa; Yassin El Hillali; Abdelmalik Taleb-Ahmed; Atika Menhaj; Abdeldjalil Ouahabi; Najoua Essoukri Ben Amara. 2020. "OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception." Electronics 9, no. 4: 560.

Review article
Published: 27 January 2020 in Forensic Science International: Digital Investigation

Within a large range of applications in computer vision, human action recognition has become one of the most attractive research fields. Ambiguities in recognizing actions do not only come from the difficulty of defining the motion of body parts, but also from many other challenges related to real-world problems such as camera motion, dynamic background, and bad weather conditions. There has been little research on human action recognition systems under real-world conditions, which encourages us to investigate this application domain seriously. Although a plethora of robust approaches have been introduced in the literature, they are still insufficient to fully cover the challenges. To quantitatively and qualitatively compare the performance of these methods, public datasets that present various actions under several conditions and constraints are recorded. In this paper, we present an overview of the existing methods according to the kind of issue they address. Moreover, we present a comparison of the existing datasets introduced for the human action recognition field.

ACS Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. Vision-based human action recognition: An overview and real world challenges. Forensic Science International: Digital Investigation 2020, 32, 200901.

AMA Style

Imen Jegham, Anouar Ben Khalifa, Ihsen Alouani, Mohamed Ali Mahjoub. Vision-based human action recognition: An overview and real world challenges. Forensic Science International: Digital Investigation. 2020; 32:200901.

Chicago/Turabian Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. 2020. "Vision-based human action recognition: An overview and real world challenges." Forensic Science International: Digital Investigation 32: 200901.

Journal article
Published: 16 December 2019 in Cognitive Systems Research

While background subtraction techniques have been widely applied to detect moving objects in a video stream captured by a static camera, detecting moving objects using a moving camera still represents a challenging task. In this context, pedestrian detection using a camera placed on the top of a vehicle’s windshield has been rarely investigated. This is mainly due to the background ego-motion. Since the scene captured by the camera seems in motion, it is very difficult to distinguish the moving pedestrians from the others that belong to the static part of the scene. For this reason, a compensation step is needed to suppress the ego-motion. This paper presents a study on the main challenges facing pedestrian detection systems as well as methods proposed to handle these challenges. A novel trajectory classification framework for detecting pedestrians even in challenging real-world environments is proposed. The proposed method models the background motion between two consecutive frames in order to compensate the camera motion. Then, it defines a classification process that differentiates between the background and the foreground in the frame. Using the defined foreground, we consequently identify the presence of pedestrians in the scene. The proposed method was validated on a public benchmark dataset: CVC-14 containing both visible and far infrared video sequences in day and night times. Experimental results confirm the effectiveness of the proposed approach in capturing the dynamic aspect between frames and therefore detecting the presence of pedestrians in the scene.
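The compensation step described above can be illustrated with a global-translation ego-motion model estimated by phase correlation. This is a deliberately crude numpy stand-in for the paper's background motion model (which would typically handle rotation and parallax as well); all names are hypothetical.

```python
import numpy as np

def estimate_translation(prev, cur):
    # Phase correlation: estimate the global (dy, dx) shift between frames,
    # a crude proxy for the background ego-motion between two frames.
    F = np.fft.fft2(prev) * np.conj(np.fft.fft2(cur))
    F /= np.abs(F) + 1e-9                      # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2: dy -= h                    # unwrap circular shifts
    if dx > w // 2: dx -= w
    return dy, dx

def compensate(cur, dy, dx):
    # Undo the estimated camera motion before foreground extraction
    return np.roll(cur, shift=(dy, dx), axis=(0, 1))

def foreground_mask(prev, cur, thr=0.2):
    # Pixels that still differ after compensation are candidate foreground
    dy, dx = estimate_translation(prev, cur)
    diff = np.abs(prev - compensate(cur, dy, dx))
    return diff > thr
```

After compensation, a purely ego-motion-shifted frame yields an empty foreground mask, while an independently moving pedestrian survives as residual difference.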

ACS Style

Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub; Najoua Essoukri Ben Amara. Pedestrian detection using a moving camera: A novel framework for foreground detection. Cognitive Systems Research 2019, 60, 77-96.

AMA Style

Anouar Ben Khalifa, Ihsen Alouani, Mohamed Ali Mahjoub, Najoua Essoukri Ben Amara. Pedestrian detection using a moving camera: A novel framework for foreground detection. Cognitive Systems Research. 2019; 60:77-96.

Chicago/Turabian Style

Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub; Najoua Essoukri Ben Amara. 2019. "Pedestrian detection using a moving camera: A novel framework for foreground detection." Cognitive Systems Research 60: 77-96.

Conference paper
Published: 22 August 2019 in Transactions on Petri Nets and Other Models of Concurrency XV

“Driver’s distraction is deadly!” Due to its crucial role in saving lives, driver action recognition is an important and trending topic in the field of computer vision. However, a very limited number of public datasets are available to validate proposed methods. This paper introduces a new public, well-structured and extensive dataset, named Multiview and Multimodal in-vehicle Driver Action Dataset (MDAD). MDAD consists of two temporally synchronised data modalities from side and frontal views. These modalities include RGB and depth data from different Kinect cameras. Many subjects with various body sizes, genders and ages are asked to perform 16 in-vehicle actions in several weather conditions. Each subject drives the vehicle on multiple trip routes in Sousse, Tunisia, at different times to cover a large range of head rotations, changes in lighting conditions and some occlusions. Our recorded dataset provides researchers with a testbed to develop new algorithms across multiple modalities and views under different illumination conditions. To demonstrate the utility of our dataset, we analyze driver action recognition results from each modality and every view independently, and then we combine modalities and views. This public dataset is of benefit to research activities in human driver action analysis.

ACS Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. MDAD: A Multimodal and Multiview in-Vehicle Driver Action Dataset. Transactions on Petri Nets and Other Models of Concurrency XV 2019, 518-529.

AMA Style

Imen Jegham, Anouar Ben Khalifa, Ihsen Alouani, Mohamed Ali Mahjoub. MDAD: A Multimodal and Multiview in-Vehicle Driver Action Dataset. Transactions on Petri Nets and Other Models of Concurrency XV. 2019:518-529.

Chicago/Turabian Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. 2019. "MDAD: A Multimodal and Multiview in-Vehicle Driver Action Dataset." Transactions on Petri Nets and Other Models of Concurrency XV: 518-529.

Conference paper
Published: 22 August 2019 in Transactions on Petri Nets and Other Models of Concurrency XV

This article presents a survey of the latest methods of violence detection in video sequences. Although many studies have described the approaches taken to detect violence, few surveys provide an exhaustive review of the available methods. We expose the main challenges in this area and classify the methods into five broad categories. We discuss each category and present the main techniques that propose improvements, as well as some performance measures using public datasets to evaluate the different existing techniques of violence detection.

ACS Style

Wafa Lejmi; Anouar Ben Khalifa; Mohamed Ali Mahjoub. Challenges and Methods of Violence Detection in Surveillance Video: A Survey. Transactions on Petri Nets and Other Models of Concurrency XV 2019, 62-73.

AMA Style

Wafa Lejmi, Anouar Ben Khalifa, Mohamed Ali Mahjoub. Challenges and Methods of Violence Detection in Surveillance Video: A Survey. Transactions on Petri Nets and Other Models of Concurrency XV. 2019:62-73.

Chicago/Turabian Style

Wafa Lejmi; Anouar Ben Khalifa; Mohamed Ali Mahjoub. 2019. "Challenges and Methods of Violence Detection in Surveillance Video: A Survey." Transactions on Petri Nets and Other Models of Concurrency XV: 62-73.

Conference paper
Published: 01 December 2018 in 2018 30th International Conference on Microelectronics (ICM)

Driver distraction is one of the main factors in fatal road traffic injuries. According to the National Highway Traffic Safety Administration (NHTSA), 3,450 people were killed by distracted driving in the USA in 2016. In order to save lives, Advanced Driver Assistance Systems (ADAS), more specifically systems for distracted driver action recognition, are introduced. Our method aims to extract, from each frame, a region of interest (ROI) that contains the body parts performing in-vehicle actions. These regions hold the most important keypoints after eliminating the common ones that are similar to the keypoints of safe driving actions. The proposed approach was evaluated on the distracted driver detection dataset. Experimental results illustrate the performance of the proposed approach.
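The keypoint-filtering idea (drop keypoints shared with the safe-driving pose, keep the bounding box of the rest as the action ROI) can be sketched on abstract keypoint coordinates. SURF detection itself is omitted here, the matching radius is an illustrative assumption, and the function name is hypothetical.

```python
import numpy as np

def action_roi(keypoints, safe_keypoints, radius=5.0):
    # Discard keypoints lying close to any 'safe driving' keypoint, then
    # return the bounding box (x0, y0, x1, y1) of the remaining ones.
    kps = np.asarray(keypoints, float)
    safe = np.asarray(safe_keypoints, float)
    # Pairwise distances between detected and safe-driving keypoints
    d = np.linalg.norm(kps[:, None, :] - safe[None, :, :], axis=2)
    keep = kps[d.min(axis=1) > radius]
    if len(keep) == 0:
        return None                  # nothing but safe-driving keypoints
    (x0, y0), (x1, y1) = keep.min(axis=0), keep.max(axis=0)
    return x0, y0, x1, y1
```

In a full system the keypoints would come from a detector such as SURF, and the ROI would be cropped and passed to the action classifier.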

ACS Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. Safe Driving: Driver Action Recognition using SURF Keypoints. 2018 30th International Conference on Microelectronics (ICM) 2018, 60-63.

AMA Style

Imen Jegham, Anouar Ben Khalifa, Ihsen Alouani, Mohamed Ali Mahjoub. Safe Driving: Driver Action Recognition using SURF Keypoints. 2018 30th International Conference on Microelectronics (ICM). 2018:60-63.

Chicago/Turabian Style

Imen Jegham; Anouar Ben Khalifa; Ihsen Alouani; Mohamed Ali Mahjoub. 2018. "Safe Driving: Driver Action Recognition using SURF Keypoints." 2018 30th International Conference on Microelectronics (ICM): 60-63.

Conference paper
Published: 01 October 2018 in 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA)

The identification of persons through partial fingerprints is one of the basic biometric problems, arising particularly in forensics. The main issue is the insufficiency of information contained in a partial fingerprint, which depends on the size of the fragment recovered from the crime scene. In this paper, we propose a novel approach for person identification through partial fingerprints that uses redefined characteristics of the minutiae extracted from the fingerprint image. To validate our work, we exploit two complete databases, FVC 2004 and PolyU HRF, to form our dataset. The proposed approach reaches recognition rates of up to 98.06% and 98.82% on the datasets extracted from the PolyU HRF and FVC 2004 databases respectively. It outperforms four state-of-the-art methods in terms of recognition rate at different percentages of fingerprint-image incompleteness.
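The abstract does not detail the redefined minutiae features, but the general idea of scoring a partial print against a full template can be sketched as follows. This is a generic, hypothetical illustration: the (x, y, angle) minutia representation, the `minutiae_match` helper, and its tolerances are assumptions, not the paper's method.

```python
import math

def minutiae_match(m1, m2, dist_tol=10.0, angle_tol=0.3):
    """Two minutiae (x, y, angle) match if they are close in both
    position and ridge orientation (angle difference wrapped to [-pi, pi])."""
    (x1, y1, a1), (x2, y2, a2) = m1, m2
    d = math.hypot(x1 - x2, y1 - y2)
    da = abs((a1 - a2 + math.pi) % (2 * math.pi) - math.pi)
    return d <= dist_tol and da <= angle_tol

def match_score(partial, full, **tol):
    """Fraction of the partial print's minutiae that find a match in the
    full template; usable as a simple identification score."""
    hits = sum(any(minutiae_match(p, f, **tol) for f in full) for p in partial)
    return hits / len(partial)
```

A real system would additionally align the two prints (rotation and translation) before matching; the sketch assumes pre-aligned coordinates.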

ACS Style

Sana Boujnah; Sami Jaballah; Anouar BEN Khalifa; Mohamed Lassaad Ammari. Person's Identification with Partial Fingerprint Based on a Redefinition of Minutiae Features. 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA) 2018, 1 -5.

AMA Style

Sana Boujnah, Sami Jaballah, Anouar BEN Khalifa, Mohamed Lassaad Ammari. Person's Identification with Partial Fingerprint Based on a Redefinition of Minutiae Features. 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA). 2018; ():1-5.

Chicago/Turabian Style

Sana Boujnah; Sami Jaballah; Anouar BEN Khalifa; Mohamed Lassaad Ammari. 2018. "Person's Identification with Partial Fingerprint Based on a Redefinition of Minutiae Features." 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA) , no. : 1-5.

Conference paper
Published: 01 March 2018 in 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD)

With the deployment of kinematic sensors, recognizing human activities with a triaxial accelerometer has become essential in various fields. In this work, we propose an approach based on selecting the most expressive signals describing an action. This selection relies on an entropy calculation, since entropy quantifies the amount of information contained in a signal. The extracted descriptors belong to the time-frequency domain. For classification, we use a support vector machine to identify and recognize the different actions. We demonstrate the effectiveness of our approach through experiments carried out on three public databases; the performance obtained is comparable to that reported in other works.
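The entropy-based selection described above can be sketched as follows. This is a minimal illustration under the assumption that Shannon entropy of each signal's amplitude histogram is the selection criterion; `signal_entropy` and `select_expressive_signals` are hypothetical helper names, not the paper's API.

```python
import math
from collections import Counter

def signal_entropy(signal, bins=16):
    """Shannon entropy (in bits) of the signal's amplitude histogram.
    A constant signal carries no information and scores 0."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # avoid zero width for flat signals
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in signal)
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_expressive_signals(signals, k=2):
    """Keep the k signals with the highest entropy, i.e. the most
    informative axes/windows for describing the action."""
    return sorted(signals, key=signal_entropy, reverse=True)[:k]
```

Time-frequency descriptors would then be extracted from the selected signals only and fed to the SVM classifier.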

ACS Style

Amira Mimouna; Anouar BEN Khalifa; Najoua Essoukri Ben Amara. Human Action Recognition Using Triaxial Accelerometer Data: Selective Approach. 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD) 2018, 491 -496.

AMA Style

Amira Mimouna, Anouar BEN Khalifa, Najoua Essoukri Ben Amara. Human Action Recognition Using Triaxial Accelerometer Data: Selective Approach. 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD). 2018; ():491-496.

Chicago/Turabian Style

Amira Mimouna; Anouar BEN Khalifa; Najoua Essoukri Ben Amara. 2018. "Human Action Recognition Using Triaxial Accelerometer Data: Selective Approach." 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD) , no. : 491-496.

Conference paper
Published: 01 March 2018 in 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD)

The development of autonomous vehicles is an important and active research area. In the last few years, pedestrian detection methods for a moving camera have been extensively developed. This field presents many challenges, as camera motion must be compensated for in order to recognize dynamic objects. This paper proposes a background compensation method for pedestrian detection with a moving camera. The method relies on motion compensation to transfer the background model from the current frame to the previous frame so as to detect dynamic obstacles. This motion compensation is carried out using different block-matching algorithms and the gradient information of the images to establish the motion of the background model. The proposed method was evaluated on a public benchmark, CVC-14, and achieved promising results, as shown in this article.
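The core of the motion-compensation step is block matching between consecutive frames. Below is a minimal exhaustive-search sketch using the sum of absolute differences (SAD) criterion, with hypothetical helper names; the paper evaluates several faster block-matching algorithms, which this illustration does not reproduce.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, top, left, size):
    """Extract a size x size sub-block from a 2-D intensity frame."""
    return [row[left:left + size] for row in frame[top:top + size]]

def match_block(prev_frame, cur_frame, top, left, size=4, radius=2):
    """Exhaustively search a (2*radius+1)^2 window in the previous frame
    for the displacement (dy, dx) minimising the SAD against the block
    at (top, left) in the current frame."""
    ref = get_block(cur_frame, top, left, size)
    h, w = len(prev_frame), len(prev_frame[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ty, tx = top + dy, left + dx
            if 0 <= ty <= h - size and 0 <= tx <= w - size:
                cost = sad(ref, get_block(prev_frame, ty, tx, size))
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
    return best
```

The per-block displacements estimate the global background motion; blocks whose residual after compensation stays high are candidate dynamic obstacles such as pedestrians.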

ACS Style

Khouloud Chebli; Anouar BEN Khalifa. Pedestrian Detection Based on Background Compensation with Block-Matching Algorithm. 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD) 2018, 497 -501.

AMA Style

Khouloud Chebli, Anouar BEN Khalifa. Pedestrian Detection Based on Background Compensation with Block-Matching Algorithm. 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD). 2018; ():497-501.

Chicago/Turabian Style

Khouloud Chebli; Anouar BEN Khalifa. 2018. "Pedestrian Detection Based on Background Compensation with Block-Matching Algorithm." 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD) , no. : 497-501.

Conference paper
Published: 01 October 2017 in 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA)

Our work presents an event detection system for video surveillance sequences whose main purpose is to distinguish acts of violence. The survey discusses the current methods and techniques applied to automated violence recognition in images derived from video surveillance sequences. To this end, we propose a fusion strategy: a variety of feature extraction algorithms are used to obtain points of interest from the input images, and each extracted feature vector is submitted to a classifier. In a decision fusion strategy, different classifiers each classify a feature vector, and the most suitable joint decision is established to label the input action as violent or non-violent. We study the performance of these approaches on 21 datasets of human interaction images. Experiments were implemented in the MATLAB computing environment. This paper aspires to be a contribution for researchers who wish to advance the study of violent activity recognition and to gather inspiration on the main challenges to tackle in this emerging field.
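The decision-fusion step can be illustrated with two simple combination rules, majority voting over classifier labels and weighted averaging of classifier scores. These are generic sketches with assumed names and a 0.5 threshold, not necessarily the exact rules evaluated in the paper.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse per-classifier labels by majority vote
    (ties resolved in favour of the first label seen)."""
    return Counter(decisions).most_common(1)[0][0]

def weighted_fusion(scores, weights, threshold=0.5):
    """Fuse per-classifier violence scores in [0, 1] by a weighted
    average, then threshold into a binary decision."""
    fused = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return "violent" if fused >= threshold else "non-violent"
```

Weighting lets more reliable classifiers (e.g. those trained on stronger features) dominate the fused decision, which is the usual motivation for decision-level fusion over a single classifier.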

ACS Style

Wafa Lejmi; Anouar Ben Khalifa; Mohamed Ali Mahjoub. Fusion strategies for recognition of violence actions. 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA) 2017, 178-183.

AMA Style

Wafa Lejmi, Anouar Ben Khalifa, Mohamed Ali Mahjoub. Fusion strategies for recognition of violence actions. 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA). 2017:178-183.

Chicago/Turabian Style

Wafa Lejmi; Anouar Ben Khalifa; Mohamed Ali Mahjoub. 2017. "Fusion strategies for recognition of violence actions." 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA): 178-183.