Ahmad Jalal
Department of Computer Science, Air University, Islamabad 44200, Pakistan





Feed

Journal article
Published: 21 June 2021 in Applied Sciences

This work presents the grouping of dependent tasks into clusters using a Bayesian analysis model to solve the affinity scheduling problem in heterogeneous multicore systems. Non-affinity scheduling of tasks has a negative impact, as the overall execution time of the tasks increases. It also limits the potential for data reuse in the caches, so the same data must be brought into the caches multiple times. In heterogeneous multicore systems, it is essential to address the load balancing problem, since the cores operate at varying frequencies. We propose two techniques to solve the load balancing issue: a "chunk-based scheduler" (CBS), which is applied to heterogeneous systems, and "quantum-based intra-core task migration" (QBICTM), in which each task is given a fair and equal chance to run on the fastest core. Results show a 30–55% improvement in the average execution time of the tasks when applying our CBS or QBICTM scheduler compared to other traditional schedulers under the same operating system.
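The chunk-based load-balancing idea lends itself to a short sketch. The code below is a hypothetical illustration, not the authors' implementation: contiguous chunks of the task list are sized in proportion to each core's frequency, so faster cores receive proportionally more work, and the makespan is the finish time of the slowest core.

```python
def chunk_based_schedule(task_costs, core_freqs):
    """Hypothetical chunk-based scheduler (CBS) sketch: split the
    task list into contiguous chunks sized proportionally to each
    core's operating frequency."""
    total_freq = sum(core_freqs)
    n = len(task_costs)
    assignment = {}
    start = 0
    for core, freq in enumerate(core_freqs):
        # the last core absorbs any rounding remainder
        if core == len(core_freqs) - 1:
            end = n
        else:
            end = min(n, start + round(n * freq / total_freq))
        assignment[core] = list(range(start, end))
        start = end
    return assignment

def makespan(assignment, task_costs, core_freqs):
    """Finish time of the slowest core: assigned work / frequency."""
    return max(sum(task_costs[t] for t in tasks) / core_freqs[core]
               for core, tasks in assignment.items())
```

For ten unit-cost tasks on cores running at relative frequencies 3 and 1, the fast core receives eight tasks and the slow core two, roughly balancing their finish times.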

ACS Style

Sohaib Abbasi; Shaharyar Kamal; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. Affinity-Based Task Scheduling on Heterogeneous Multicore Systems Using CBS and QBICTM. Applied Sciences 2021, 11, 5740.

AMA Style

Sohaib Abbasi, Shaharyar Kamal, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. Affinity-Based Task Scheduling on Heterogeneous Multicore Systems Using CBS and QBICTM. Applied Sciences. 2021; 11 (12):5740.

Chicago/Turabian Style

Sohaib Abbasi; Shaharyar Kamal; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "Affinity-Based Task Scheduling on Heterogeneous Multicore Systems Using CBS and QBICTM." Applied Sciences 11, no. 12: 5740.

Journal article
Published: 14 June 2021 in Applied Sciences

Automatic head tracking and counting using depth imagery has various practical applications in security, logistics, queue management, space utilization and visitor counting. However, no currently available system can clearly distinguish between a human head and other objects in order to track and count people accurately. For this reason, we propose a novel system that can track people by monitoring their heads and shoulders in complex environments and also count the number of people entering and exiting the scene. Our system is split into six phases. First, preprocessing is performed by converting videos of a scene into frames and removing the background from the video frames. Second, heads are detected using the Hough Circular Gradient Transform, and shoulders are detected by HOG-based symmetry methods. Third, three robust features, namely, fused joint HOG-LBP, energy-based point clouds and fused intra-inter trajectories, are extracted. Fourth, Apriori association is implemented to select the best features. Fifth, deep learning is used for accurate people tracking. Finally, heads are counted using cross-line judgment. The system was tested on three benchmark datasets, the PCDS dataset, the MICC people counting dataset and the GOTPD dataset, and counting accuracies of 98.40%, 98% and 99%, respectively, were achieved.
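The final cross-line judgment step can be sketched in a few lines. This is a minimal hypothetical version: the function name, the track format and the direction convention are assumptions, not the paper's code. A count is incremented whenever a tracked head centroid crosses a virtual line.

```python
def cross_line_count(tracks, line_y):
    """Count people entering/exiting by checking when each tracked
    head centroid crosses a horizontal virtual line at y = line_y.
    `tracks` maps a track id to its sequence of (x, y) centroids."""
    entering = exiting = 0
    for pts in tracks.values():
        for (_, y0), (_, y1) in zip(pts, pts[1:]):
            if y0 < line_y <= y1:      # crossed moving downward
                entering += 1
            elif y1 <= line_y < y0:    # crossed moving upward
                exiting += 1
    return entering, exiting
```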

ACS Style

Munkhjargal Gochoo; Syeda Rizwan; Yazeed Ghadi; Ahmad Jalal; KiBum Kim. A Systematic Deep Learning Based Overhead Tracking and Counting System Using RGB-D Remote Cameras. Applied Sciences 2021, 11, 5503.

AMA Style

Munkhjargal Gochoo, Syeda Rizwan, Yazeed Ghadi, Ahmad Jalal, KiBum Kim. A Systematic Deep Learning Based Overhead Tracking and Counting System Using RGB-D Remote Cameras. Applied Sciences. 2021; 11 (12):5503.

Chicago/Turabian Style

Munkhjargal Gochoo; Syeda Rizwan; Yazeed Ghadi; Ahmad Jalal; KiBum Kim. 2021. "A Systematic Deep Learning Based Overhead Tracking and Counting System Using RGB-D Remote Cameras." Applied Sciences 11, no. 12: 5503.

Journal article
Published: 18 May 2021 in Entropy

To prevent disasters and to control and supervise crowds, automated video surveillance has become indispensable. In today’s complex and crowded environments, manual surveillance and monitoring systems are inefficient, labor intensive, and unwieldy. Automated video surveillance systems offer promising solutions, but challenges remain. One of the major challenges is the extraction of true foregrounds of pixels representing humans only. Furthermore, to accurately understand and interpret crowd behavior, human crowd behavior (HCB) systems require robust feature extraction methods, along with powerful and reliable decision-making classifiers. In this paper, we describe our approach to these issues by presenting a novel Particles Force Model for multi-person tracking, a vigorous fusion of global and local descriptors, along with a robust improved entropy classifier for detecting and interpreting crowd behavior. In the proposed model, necessary preprocessing steps are followed by the application of a first distance algorithm for the removal of background clutter; true-foreground elements are then extracted via a Particles Force Model. The detected human forms are then counted by labeling and performing cluster estimation, using a K-nearest neighbors search algorithm. After that, the location of all the human silhouettes is fixed and, using the Jaccard similarity index and normalized cross-correlation as a cost function, multi-person tracking is performed. For HCB detection, we introduced human crowd contour extraction as a global feature and a particles gradient motion (PGD) descriptor, along with geometrical and speeded up robust features (SURF) for local features. After features were extracted, we applied bat optimization for optimal features, which also works as a pre-classifier. Finally, we introduced a robust improved entropy classifier for decision making and automated crowd behavior detection in smart surveillance systems. 
We evaluated the performance of our proposed system on the publicly available PETS2009 and UMN benchmark datasets. Experimental results show that our system outperformed existing well-known state-of-the-art methods, achieving higher accuracy rates. The proposed system can be deployed to great benefit in numerous public places, such as airports, shopping malls, city centers, and train stations, to control, supervise, and protect crowds.
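The Jaccard-similarity matching cost used for multi-person tracking can be illustrated with a small sketch. The box format (x, y, w, h) and the greedy association loop are assumptions of this illustration; the paper's full cost function also incorporates normalized cross-correlation, which is omitted here.

```python
def jaccard_index(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def associate(prev_boxes, new_boxes, threshold=0.3):
    """Greedy frame-to-frame association: each previous box is
    matched to the unclaimed new box with the highest overlap."""
    matches, used = {}, set()
    for i, pb in enumerate(prev_boxes):
        best, best_j = threshold, None
        for j, nb in enumerate(new_boxes):
            if j in used:
                continue
            iou = jaccard_index(pb, nb)
            if iou > best:
                best, best_j = iou, j
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```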

ACS Style

Faisal Abdullah; Yazeed Ghadi; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. Multi-Person Tracking and Crowd Behavior Detection via Particles Gradient Motion Descriptor and Improved Entropy Classifier. Entropy 2021, 23, 628.

AMA Style

Faisal Abdullah, Yazeed Ghadi, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. Multi-Person Tracking and Crowd Behavior Detection via Particles Gradient Motion Descriptor and Improved Entropy Classifier. Entropy. 2021; 23 (5):628.

Chicago/Turabian Style

Faisal Abdullah; Yazeed Ghadi; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "Multi-Person Tracking and Crowd Behavior Detection via Particles Gradient Motion Descriptor and Improved Entropy Classifier." Entropy 23, no. 5: 628.

Journal article
Published: 11 May 2021 in Sustainability

Based on the rapid increase in the demand for people counting and tracking systems for surveillance applications, there is a critical need for more accurate, efficient, and reliable systems. The main goal of this study was to develop an accurate, sustainable, and efficient system that is capable of error-free counting and tracking in public places. The major objective of this research is to develop a system that can perform well in different orientations, different densities, and different backgrounds. We propose an accurate and novel approach consisting of preprocessing, object detection, people verification, particle flow, feature extraction, self-organizing map (SOM) based clustering, people counting, and people tracking. Initially, filters are applied to preprocess images and detect objects. Next, random particles are distributed, and features are extracted. Subsequently, particle flows are clustered using a self-organizing map, and people counting and tracking are performed based on motion trajectories. Experimental results on the PETS-2009 dataset reveal an accuracy of 86.9% for people counting and 87.5% for people tracking, while experimental results on the TUD-Pedestrian dataset yield 94.2% accuracy for people counting and 94.5% for people tracking. The proposed system is a useful tool for medium-density crowds and can play a vital role in people counting and tracking applications.
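The SOM-based clustering of particle flows can be sketched with a generic 1-D self-organizing map. The map size, epoch count and neighbourhood schedule below are assumptions of this illustration, not the modified SOM used in the paper.

```python
import math, random

def train_som(vectors, grid=3, epochs=50, seed=0):
    """Train a 1-D self-organizing map of `grid` units on 2-D
    particle-flow vectors (generic sketch)."""
    rng = random.Random(seed)
    weights = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(grid)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                    # decaying learning rate
        radius = max(1.0, grid / 2 * (1 - epoch / epochs)) # shrinking neighbourhood
        for v in vectors:
            # best-matching unit = closest weight vector
            bmu = min(range(grid),
                      key=lambda i: (weights[i][0] - v[0]) ** 2
                                    + (weights[i][1] - v[1]) ** 2)
            for i, w in enumerate(weights):
                d = abs(i - bmu)
                if d <= radius:
                    h = math.exp(-d * d / (2 * radius * radius))
                    w[0] += lr * h * (v[0] - w[0])
                    w[1] += lr * h * (v[1] - w[1])
    return weights

def assign(vectors, weights):
    """Label each flow vector with its nearest SOM unit."""
    return [min(range(len(weights)),
                key=lambda i: (weights[i][0] - v[0]) ** 2
                              + (weights[i][1] - v[1]) ** 2)
            for v in vectors]
```

After training, flow vectors from the same motion pattern map to the same unit, which is what the counting and tracking stages consume.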

ACS Style

Mahwish Pervaiz; Yazeed Ghadi; Munkhjargal Gochoo; Ahmad Jalal; Shaharyar Kamal; Dong-Seong Kim. A Smart Surveillance System for People Counting and Tracking Using Particle Flow and Modified SOM. Sustainability 2021, 13, 5367.

AMA Style

Mahwish Pervaiz, Yazeed Ghadi, Munkhjargal Gochoo, Ahmad Jalal, Shaharyar Kamal, Dong-Seong Kim. A Smart Surveillance System for People Counting and Tracking Using Particle Flow and Modified SOM. Sustainability. 2021; 13 (10):5367.

Chicago/Turabian Style

Mahwish Pervaiz; Yazeed Ghadi; Munkhjargal Gochoo; Ahmad Jalal; Shaharyar Kamal; Dong-Seong Kim. 2021. "A Smart Surveillance System for People Counting and Tracking Using Particle Flow and Modified SOM." Sustainability 13, no. 10: 5367.

Original article
Published: 19 April 2021 in Journal of Electrical Engineering & Technology

To understand daily events accurately, adaptive pose estimation (APE) systems require a robust context-aware model and optimal feature selection methods. In this paper, we propose a novel gait event detection (GED) system that consists of saliency silhouette detection, a robust body parts model and a 2D stick model, followed by a hierarchical optimization algorithm. Furthermore, the most prominent context-aware features, such as energy, 0–180° intensity and distinct moveable features, are proposed by focusing on invariant and localized characteristics of human postures in different event classes. Finally, we apply Grey Wolf optimization and a genetic algorithm to discriminate complex postures and to provide appropriate labels for each event. To evaluate the performance of the proposed GED system, two public benchmark datasets, UCF101 and YouTube, are examined via the n-fold cross validation method. For the two benchmark datasets, our proposed method detects the human body key points with 82.4% and 83.2% accuracy, respectively. It then extracts the context-aware features and finally recognizes the gait events with 82.6% and 85.0% accuracy, respectively. Our proposed method outperforms other well-known statistical and state-of-the-art methods in terms of posture detection and recognition accuracy.
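As a rough illustration of the Grey Wolf optimization step, here is a minimal generic GWO that moves a pack toward its three best wolves (alpha, beta, delta). The population size, iteration count and the elitist choice of leaving the three leaders in place are assumptions of this sketch, not details from the paper.

```python
import random

def grey_wolf_optimize(f, dim, bounds, wolves=8, iters=60, seed=1):
    """Minimal Grey Wolf Optimizer sketch: minimize f over a box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)                       # leaders = three fittest wolves
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2 * (1 - t / iters)                # exploration factor shrinks to 0
        for w in pack[3:]:                     # leaders kept in place (elitism)
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = a * (2 * r1 - 1)
                    C = 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3))  # average of the three pulls
    return min(pack, key=f)
```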

ACS Style

Israr Akhter; Ahmad Jalal; KiBum Kim. Adaptive Pose Estimation for Gait Event Detection Using Context-Aware Model and Hierarchical Optimization. Journal of Electrical Engineering & Technology 2021, 1-9.

AMA Style

Israr Akhter, Ahmad Jalal, KiBum Kim. Adaptive Pose Estimation for Gait Event Detection Using Context-Aware Model and Hierarchical Optimization. Journal of Electrical Engineering & Technology. 2021:1-9.

Chicago/Turabian Style

Israr Akhter; Ahmad Jalal; KiBum Kim. 2021. "Adaptive Pose Estimation for Gait Event Detection Using Context-Aware Model and Hierarchical Optimization." Journal of Electrical Engineering & Technology: 1-9.

Article
Published: 16 March 2021 in Multimedia Tools and Applications

Automated human posture estimation (A-HPE) systems need delicate methods for detecting body parts and selecting cues based on marker-less sensors to effectively recognize complex activity motions. Recognition of human activities using vision sensors is a challenging issue due to variations in illumination conditions and complex movements during the monitoring of sports and fitness exercises. In this paper, we propose a novel A-HPE method that intelligently identifies human behaviours by utilizing saliency silhouette detection, a robust body parts model and multidimensional cues from full-body silhouettes, followed by an entropy Markov model. Initially, images are pre-processed and noise is removed to obtain a robust silhouette. Body parts models are then used to extract twelve key body parts. These key body parts are further optimized to assist the generation of multidimensional cues. These cues include energy, optical flow and distinctive values that are fed into quadratic discriminant analysis to discriminate the cues that help in the recognition of actions. Finally, these optimized patterns are processed by a maximum entropy Markov model as a recognizer engine, based on transition and emission probability values, for activity recognition. For evaluation, we used a leave-one-out cross validation scheme; the results outperformed existing well-known statistical state-of-the-art methods by achieving better body parts detection and higher recognition accuracy over four benchmark datasets. The proposed method will be useful for human–machine interactions such as 3D interactive games, virtual reality, service robots, e-health fitness, and security surveillance.
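The recognizer stage decodes an activity sequence from transition and emission probabilities. A true maximum entropy Markov model conditions its transitions on observations; as a simplified stand-in, the generic Viterbi decoder below shows how such probability tables yield the most likely label sequence. All tables in the test are toy values, not from the paper.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for a sequence of observations,
    given start, transition and emission probability tables."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor for state s at this step
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```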

ACS Style

Amir Nadeem; Ahmad Jalal; KiBum Kim. Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model. Multimedia Tools and Applications 2021, 80, 21465-21498.

AMA Style

Amir Nadeem, Ahmad Jalal, KiBum Kim. Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model. Multimedia Tools and Applications. 2021; 80 (14):21465-21498.

Chicago/Turabian Style

Amir Nadeem; Ahmad Jalal; KiBum Kim. 2021. "Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model." Multimedia Tools and Applications 80, no. 14: 21465-21498.

Journal article
Published: 09 March 2021 in Sustainability

Due to the constantly increasing demand for the automatic localization of landmarks in hand gesture recognition, there is a need for a more sustainable, intelligent, and reliable system for hand gesture recognition. The main purpose of this study was to develop an accurate hand gesture recognition system that is capable of error-free auto-landmark localization of any gesture detectable in an RGB image. In this paper, we propose a system based on landmark extraction from RGB images regardless of the environment. The extraction of gestures is performed via two methods, namely, fused and directional image methods; the fused method produced greater gesture recognition accuracy. In the proposed system, hand gesture recognition (HGR) is done via several different methods, namely, (1) HGR via point-based features, which consist of (i) distance features, (ii) angular features, and (iii) geometric features; and (2) HGR via full hand features, which are composed of (i) SONG mesh geometry and (ii) an active model. To optimize these features, we applied gray wolf optimization. After optimization, a reweighted genetic algorithm was used for classification and gesture recognition. Experimentation was performed on five challenging datasets: Sign Word, Dexter1, Dexter + Object, STB, and NYU. Experimental results proved that auto-landmark localization with the proposed feature extraction technique is an efficient approach towards developing a robust HGR system. The classification results of the reweighted genetic algorithm were compared with an Artificial Neural Network (ANN) and a decision tree. The developed system can play a significant role in healthcare muscle-exercise applications.
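The point-based distance and angular features can be computed directly from 2-D landmarks, as in the short sketch below. The choice of the wrist as reference point and this exact feature layout are assumptions; the paper's feature family is richer.

```python
import math

def point_features(landmarks, ref=0):
    """Distance and angular features from 2-D hand landmarks,
    measured against a reference landmark (e.g. the wrist)."""
    rx, ry = landmarks[ref]
    feats = []
    for i, (x, y) in enumerate(landmarks):
        if i == ref:
            continue
        dx, dy = x - rx, y - ry
        feats.append(math.hypot(dx, dy))   # distance feature
        feats.append(math.atan2(dy, dx))   # angular feature
    return feats
```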

ACS Style

Hira Ansar; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities. Sustainability 2021, 13, 2961.

AMA Style

Hira Ansar, Ahmad Jalal, Munkhjargal Gochoo, KiBum Kim. Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities. Sustainability. 2021; 13 (5):2961.

Chicago/Turabian Style

Hira Ansar; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. 2021. "Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities." Sustainability 13, no. 5: 2961.

Journal article
Published: 28 February 2021 in Remote Sensing

Advances in video capturing devices enable adaptive posture estimation (APE) and event classification of multiple human-based videos for smart systems. Accurate event classification and adaptive posture estimation are still challenging domains, although researchers work hard to find solutions. In this research article, we propose a novel method to classify stochastic remote sensing events and to perform adaptive posture estimation. We performed human silhouette extraction using the Gaussian Mixture Model (GMM) and saliency map. After that, we performed human body part detection and used a unified pseudo-2D stick model for adaptive posture estimation. Multifused data that include energy, 3D Cartesian view, angular geometric, skeleton zigzag and moveable body parts were applied. Using a charged system search, we optimized our feature vector and deep belief network. We classified complex events, which were performed over sports videos in the wild (SVW), Olympic sports, UCF aerial action dataset and UT-interaction datasets. The mean accuracy of human body part detection was 83.57% over the UT-interaction, 83.00% for the Olympic sports and 83.78% for the SVW dataset. The mean event classification accuracy was 91.67% over the UT-interaction, 92.50% for Olympic sports and 89.47% for SVW dataset. These results are superior compared to existing state-of-the-art methods.

ACS Style

Munkhjargal Gochoo; Israr Akhter; Ahmad Jalal; KiBum Kim. Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network. Remote Sensing 2021, 13, 912.

AMA Style

Munkhjargal Gochoo, Israr Akhter, Ahmad Jalal, KiBum Kim. Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network. Remote Sensing. 2021; 13 (5):912.

Chicago/Turabian Style

Munkhjargal Gochoo; Israr Akhter; Ahmad Jalal; KiBum Kim. 2021. "Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network." Remote Sensing 13, no. 5: 912.

Journal article
Published: 14 February 2021 in Electronics

The features and appearance of the human face are affected greatly by aging. A human face is an important aspect for human age identification from childhood through adulthood. Although many traits are used in human age estimation, this article discusses age classification using salient texture and facial landmark feature vectors. We propose a novel human age classification (HAC) model that can localize landmark points of the face. A robust multi-perspective view-based Active Shape Model (ASM) is generated and age classification is achieved using Convolution Neural Network (CNN). The HAC model is subdivided into the following steps: (1) at first, a face is detected using aYCbCr color segmentation model; (2) landmark localization is done on the face using a connected components approach and a ridge contour method; (3) an Active Shape Model (ASM) is generated on the face using three-sided polygon meshes and perpendicular bisection of a triangle; (4) feature extraction is achieved using anthropometric model, carnio-facial development, interior angle formulation, wrinkle detection and heat maps; (5) Sequential Forward Selection (SFS) is used to select the most ideal set of features; and (6) finally, the Convolution Neural Network (CNN) model is used to classify according to age in the correct age group. The proposed system outperforms existing statistical state-of-the-art HAC methods in terms of classification accuracy, achieving 91.58% with The Images of Groups dataset, 92.62% with the OUI Adience dataset and 94.59% with the FG-NET dataset. The system is applicable to many research areas including access control, surveillance monitoring, human–machine interaction and self-identification.

ACS Style

Syeda Rizwan; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. Robust Active Shape Model via Hierarchical Feature Extraction with SFS-Optimized Convolution Neural Network for Invariant Human Age Classification. Electronics 2021, 10, 465.

AMA Style

Syeda Rizwan, Ahmad Jalal, Munkhjargal Gochoo, KiBum Kim. Robust Active Shape Model via Hierarchical Feature Extraction with SFS-Optimized Convolution Neural Network for Invariant Human Age Classification. Electronics. 2021; 10 (4):465.

Chicago/Turabian Style

Syeda Rizwan; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. 2021. "Robust Active Shape Model via Hierarchical Feature Extraction with SFS-Optimized Convolution Neural Network for Invariant Human Age Classification." Electronics 10, no. 4: 465.

Journal article
Published: 04 February 2021 in Sustainability

The daily life-log routines of elderly individuals are susceptible to numerous complications in their physical healthcare patterns. Some of these complications can cause injuries, followed by extensive and expensive recovery stages. It is important to identify physical healthcare patterns that can describe and convey the exact state of an individual’s physical health while they perform their daily life activities. In this paper, we propose a novel Sustainable Physical Healthcare Pattern Recognition (SPHR) approach using a hybrid features model that is capable of distinguishing multiple physical activities based on a multiple wearable sensors system. Initially, we acquired raw data from well-known datasets, i.e., mobile health and human gait databases comprised of multiple human activities. The proposed strategy includes data pre-processing, hybrid feature detection, and feature-to-feature fusion and reduction, followed by codebook generation and classification, which can recognize sustainable physical healthcare patterns. Feature-to-feature fusion unites the cues from all of the sensors, and Gaussian mixture models are used for the codebook generation. For the classification, we recommend deep belief networks with restricted Boltzmann machines for five hidden layers. Finally, the results are compared with state-of-the-art techniques in order to demonstrate significant improvements in accuracy for physical healthcare pattern recognition. The experiments show that the proposed architecture attained improved accuracy rates for both datasets, and that it represents a significant sustainable physical healthcare pattern recognition (SPHR) approach. The anticipated system has potential for use in human–machine interaction domains such as continuous movement recognition, pattern-based surveillance, mobility assistance, and robot control systems.

ACS Style

Madiha Javeed; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. HF-SPHR: Hybrid Features for Sustainable Physical Healthcare Pattern Recognition Using Deep Belief Networks. Sustainability 2021, 13, 1699.

AMA Style

Madiha Javeed, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. HF-SPHR: Hybrid Features for Sustainable Physical Healthcare Pattern Recognition Using Deep Belief Networks. Sustainability. 2021; 13 (4):1699.

Chicago/Turabian Style

Madiha Javeed; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "HF-SPHR: Hybrid Features for Sustainable Physical Healthcare Pattern Recognition Using Deep Belief Networks." Sustainability 13, no. 4: 1699.

Original article
Published: 01 February 2021 in Journal of Electrical Engineering & Technology

In recent years, scene understanding has become a hot research topic due to its real-world use in perceiving, analyzing and recognizing dynamic scenes in GPS monitoring systems, drone targeting, autonomous driving and tourist guidance. The goal of scene understanding is to make machines see as humans do, that is, to accurately recognize the contents of scenes during location observations. Two operations follow: (1) perfectly describing the whole environment, and (2) describing what action is going on in the environment. Due to the complexity of scene analysis, recognizing multiple objects and the relations between them remains a challenging research problem. In this paper, we propose a novel approach to scene understanding that integrates multiple-object detection/segmentation and scene labeling using geometric features, histogram of oriented gradients and scale-invariant feature transform descriptors. The complete procedure of the proposed model includes resizing and denoising the dataset images, multiple-object segmentation and detection, feature extraction, and multiple-object recognition using a multi-layer kernel sliding perceptron. Scene recognition is then achieved using multi-class logistic regression. Finally, two datasets, MSRC and UIUC Sports, are used for the experimental evaluation of our proposed method. Our proposed method accurately handles complex physical exclusion and occlusion between objects, and therefore outperforms other state-of-the-art approaches in terms of accuracy.

ACS Style

Abrar Ahmed; Ahmad Jalal; KiBum Kim. Multi-objects Detection and Segmentation for Scene Understanding Based on Texton Forest and Kernel Sliding Perceptron. Journal of Electrical Engineering & Technology 2021, 16, 1143-1150.

AMA Style

Abrar Ahmed, Ahmad Jalal, KiBum Kim. Multi-objects Detection and Segmentation for Scene Understanding Based on Texton Forest and Kernel Sliding Perceptron. Journal of Electrical Engineering & Technology. 2021; 16 (2):1143-1150.

Chicago/Turabian Style

Abrar Ahmed; Ahmad Jalal; KiBum Kim. 2021. "Multi-objects Detection and Segmentation for Scene Understanding Based on Texton Forest and Kernel Sliding Perceptron." Journal of Electrical Engineering & Technology 16, no. 2: 1143-1150.

Journal article
Published: 19 January 2021 in Sustainability

Due to the constantly increasing demand for automatic tracking and recognition systems, there is a need for more proficient, intelligent and sustainable human activity tracking. The main purpose of this study is to develop an accurate and sustainable human action tracking system that is capable of error-free identification of human movements irrespective of the environment in which those actions are performed. Therefore, in this paper we propose a stereoscopic Human Action Recognition (HAR) system based on the fusion of RGB (red, green, blue) and depth sensors. These sensors give an extra depth of information which enables the three-dimensional (3D) tracking of each and every movement performed by humans. Human actions are tracked according to four features, namely, (1) geodesic distance; (2) 3D Cartesian-plane features; (3) joints Motion Capture (MOCAP) features and (4) way-points trajectory generation. In order to represent these features in an optimized form, Particle Swarm Optimization (PSO) is applied. After optimization, a neuro-fuzzy classifier is used for classification and recognition. Extensive experimentation is performed on three challenging datasets: A Nanyang Technological University (NTU) RGB+D dataset; a UoL (University of Lincoln) 3D social activity dataset and a Collective Activity Dataset (CAD). Evaluation experiments on the proposed system proved that a fusion of vision sensors along with our unique features is an efficient approach towards developing a robust HAR system, having achieved a mean accuracy of 93.5% with the NTU RGB+D dataset, 92.2% with the UoL dataset and 89.6% with the Collective Activity dataset. The developed system can play a significant role in many computer vision-based applications, such as intelligent homes, offices and hospitals, and surveillance systems.

ACS Style

Nida Khalid; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System. Sustainability 2021, 13, 970.

AMA Style

Nida Khalid, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System. Sustainability. 2021; 13 (2):970.

Chicago/Turabian Style

Nida Khalid; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System." Sustainability 13, no. 2: 970.

Journal article
Published: 10 December 2020 in Sustainability

Human behavior modeling (HBM) is a challenging classification task for researchers seeking to develop sustainable systems that precisely monitor and record human life-logs. In recent years, several models have been proposed; however, HBM remains an inspiring problem that is only partly solved. This paper proposes a novel framework of human behavior modeling based on wearable inertial sensors; the system framework is composed of data acquisition, feature extraction, optimization and classification stages. First, inertial data is filtered via three different filters, i.e., Chebyshev, Elliptic and Bessel filters. Next, six different features from time and frequency domains are extracted to determine the maximum optimal values. Then, the Probability Based Incremental Learning (PBIL) optimizer and the K-Ary tree hashing classifier are applied to model different human activities. The proposed model is evaluated on two benchmark datasets, namely DALIAC and PAMPA2, and one self-annotated dataset, namely, IM-LifeLog, respectively. For evaluation, we used a leave-one-out cross validation scheme. The experimental results show that our model outperformed existing state-of-the-art methods with accuracy rates of 94.23%, 94.07% and 96.40% over DALIAC, PAMPA2 and IM-LifeLog datasets, respectively. The proposed system can be used in healthcare, physical activity detection, surveillance systems and medical fitness fields.
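The time- and frequency-domain feature extraction stage can be sketched for a single inertial channel. The specific feature set below (mean, standard deviation, signal energy, dominant DFT bin) is illustrative; the abstract does not enumerate the six features actually used.

```python
import math, cmath

def inertial_features(signal):
    """Basic time- and frequency-domain features for one inertial
    sensor channel (illustrative sketch)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    energy = sum(x * x for x in signal) / n
    # naive DFT magnitude spectrum, skipping the DC bin
    mags = []
    for k in range(1, n // 2 + 1):
        c = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                for t, x in enumerate(signal))
        mags.append(abs(c))
    dominant_bin = 1 + max(range(len(mags)), key=lambda i: mags[i])
    return {"mean": mean, "std": math.sqrt(var),
            "energy": energy, "dominant_bin": dominant_bin}
```

For a pure sinusoid completing three cycles over the window, the dominant bin is 3, i.e. the feature recovers the motion's repetition rate.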

ACS Style

Ahmad Jalal; Mouazma Batool; KiBum Kim. Sustainable Wearable System: Human Behavior Modeling for Life-Logging Activities Using K-Ary Tree Hashing Classifier. Sustainability 2020, 12, 10324.

AMA Style

Ahmad Jalal, Mouazma Batool, KiBum Kim. Sustainable Wearable System: Human Behavior Modeling for Life-Logging Activities Using K-Ary Tree Hashing Classifier. Sustainability. 2020; 12 (24):10324.

Chicago/Turabian Style

Ahmad Jalal; Mouazma Batool; KiBum Kim. 2020. "Sustainable Wearable System: Human Behavior Modeling for Life-Logging Activities Using K-Ary Tree Hashing Classifier." Sustainability 12, no. 24: 10324.

Journal article
Published: 24 November 2020 in Sustainability

This paper suggests that human pose estimation (HPE) and sustainable event classification (SEC) require an advanced human skeleton and context-aware features extraction approach along with machine learning classification methods to recognize daily events precisely. Over the last few decades, researchers have found new mechanisms to make HPE and SEC applicable in daily human life-log events such as sports, surveillance systems, human monitoring systems, and in the education sector. In this research article, we propose a novel HPE and SEC system for which we designed a pseudo-2D stick model. To extract full-body human silhouette features, we proposed various features such as energy, sine, distinct body parts movements, and a 3D Cartesian view of smoothing gradients features. Features extracted to represent human key posture points include rich 2D appearance, angular point, and multi-point autocorrelation. After the extraction of key points, we applied a hierarchical classification and optimization model via ray optimization and a K-ary tree hashing algorithm over a UCF50 dataset, an hmdb51 dataset, and an Olympic sports dataset. Human body key points detection accuracy for the UCF50 dataset was 80.9%, for the hmdb51 dataset it was 82.1%, and for the Olympic sports dataset it was 81.7%. Event classification for the UCF50 dataset was 90.48%, for the hmdb51 dataset it was 89.21%, and for the Olympic sports dataset it was 90.83%. These results indicate better performance for our approach compared to other state-of-the-art methods.

ACS Style

Ahmad Jalal; Israr Akhtar; KiBum Kim. Human Posture Estimation and Sustainable Events Classification via Pseudo-2D Stick Model and K-ary Tree Hashing. Sustainability 2020, 12, 9814.

AMA Style

Ahmad Jalal, Israr Akhtar, KiBum Kim. Human Posture Estimation and Sustainable Events Classification via Pseudo-2D Stick Model and K-ary Tree Hashing. Sustainability. 2020; 12 (23):9814.

Chicago/Turabian Style

Ahmad Jalal; Israr Akhtar; KiBum Kim. 2020. "Human Posture Estimation and Sustainable Events Classification via Pseudo-2D Stick Model and K-ary Tree Hashing." Sustainability 12, no. 23: 9814.

Journal article
Published: 23 November 2020 in Symmetry

Object recognition in depth images is a challenging and persistent task in machine vision, robotics, and sustainable automation. Object recognition tasks are a challenging part of various multimedia technologies for video surveillance, human–computer interaction, robotic navigation, drone targeting, tourist guidance, and medical diagnostics. However, the symmetry that exists in real-world objects plays a significant role in the perception and recognition of objects by both humans and machines. With advances in depth sensor technology, numerous researchers have recently proposed RGB-D object recognition techniques. In this paper, we introduce a sustainable object recognition framework that remains consistent despite changes in the environment and can recognize and analyze RGB-D objects in complex indoor scenarios. First, after acquiring a depth image, the point cloud and the depth maps are extracted to obtain the planes. The plane fitting model and the proposed modified maximum likelihood estimation sampling consensus (MMLESAC) are then applied as a segmentation process. Next, depth kernel descriptors (DKDES) over the segmented objects are computed for single- and multiple-object scenarios separately. These DKDES are subsequently carried forward to isometric mapping (IsoMap) for feature space reduction. Finally, the reduced feature vector is forwarded to a kernel sliding perceptron (KSP) for the recognition of objects. Three datasets are used to evaluate four different experiments, employing a cross-validation scheme to validate the proposed model. The experimental results over the RGB-D object, RGB-D scene, and NYUDv1 datasets demonstrate overall accuracies of 92.2%, 88.5%, and 90.5% respectively. These results outperform existing state-of-the-art methods and verify the suitability of the method.
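The MMLESAC segmentation step builds on random-sampling plane fitting. The sketch below shows a plain RANSAC-style plane fit on a synthetic point cloud; it is a simplified stand-in for the paper's modified estimator, and the thresholds and data are assumptions.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.05, rng=None):
    """Fit a plane (normal . p + d = 0) to 3-D points by random sampling.
    A plain RANSAC loop, not the paper's MMLESAC variant."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic scene: a z = 0 floor plane plus scattered outliers
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(-1, 1, (100, 2)), np.zeros(100)]
noise = rng.uniform(-1, 1, (20, 3))
normal, d, inliers = fit_plane_ransac(np.vstack([floor, noise]))
```

The consensus set (`inliers`) then defines one segmented planar region; the remaining points can be re-fitted to extract further planes.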

ACS Style

Adnan Ahmed Rafique; Ahmad Jalal; KiBum Kim. Automated Sustainable Multi-Object Segmentation and Recognition via Modified Sampling Consensus and Kernel Sliding Perceptron. Symmetry 2020, 12, 1928.

AMA Style

Adnan Ahmed Rafique, Ahmad Jalal, KiBum Kim. Automated Sustainable Multi-Object Segmentation and Recognition via Modified Sampling Consensus and Kernel Sliding Perceptron. Symmetry. 2020; 12 (11):1928.

Chicago/Turabian Style

Adnan Ahmed Rafique; Ahmad Jalal; KiBum Kim. 2020. "Automated Sustainable Multi-Object Segmentation and Recognition via Modified Sampling Consensus and Kernel Sliding Perceptron." Symmetry 12, no. 11: 1928.

Journal article
Published: 21 November 2020 in Sensors

Nowadays, wearable technology can enhance physical human life-log routines by shifting goals from merely counting steps to tackling significant healthcare challenges. Such wearable technology modules have presented opportunities to acquire important information about human activities in real-life environments. The purpose of this paper is to report on recent developments and to project future advances regarding wearable sensor systems for the sustainable monitoring and recording of human life-logs. On the basis of this survey, we propose a model designed to retrieve better information during physical activities in indoor and outdoor environments in order to improve quality of life and reduce risks. This model uses a fusion of statistical and non-statistical features for the recognition of different activity patterns using wearable inertial sensors, i.e., triaxial accelerometers, gyroscopes, and magnetometers. These features include signal magnitude, positive/negative peaks, and position direction to explore signal orientation changes, position differentiation, temporal variation, and optimal changes among coordinates. The features are processed by a genetic algorithm for the selection and classification of inertial signals to learn and recognize abnormal human movement. Our model was experimentally evaluated on four benchmark datasets: Intelligent Media Wearable Smart Home Activities (IM-WSHA), a self-annotated physical activities dataset; Wireless Sensor Data Mining (WISDM); an IM-SB dataset with different sporting patterns; and an SMotion dataset with different physical activities. Experimental results show that the proposed feature extraction strategy outperformed others, achieving improved recognition accuracies of 81.92%, 95.37%, 90.17%, and 94.58% on the IM-WSHA, WISDM, IM-SB, and SMotion datasets respectively.
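To illustrate the kind of signal-magnitude and peak features described, the sketch below computes a small feature vector from a synthetic tri-axial accelerometer window. The feature set and test signal are illustrative assumptions; the survey's actual feature list is far richer.

```python
import numpy as np

def inertial_features(window):
    """Simple per-window features from a (T, 3) accelerometer signal:
    mean/std of the signal magnitude plus counts of strict positive and
    negative peaks on each axis. Illustrative only."""
    mag = np.linalg.norm(window, axis=1)       # signal magnitude per sample
    feats = [mag.mean(), mag.std()]
    for axis in range(3):
        s = window[:, axis]
        interior = s[1:-1]
        pos = np.sum((interior > s[:-2]) & (interior > s[2:]))  # local maxima
        neg = np.sum((interior < s[:-2]) & (interior < s[2:]))  # local minima
        feats += [int(pos), int(neg)]
    return np.array(feats)

# Synthetic window: oscillation on x/y, constant gravity on z (9.81 m/s^2)
t = np.linspace(0, 2 * np.pi, 50)
window = np.c_[np.sin(3 * t), np.cos(3 * t), np.full_like(t, 9.81)]
f = inertial_features(window)
```

Feature vectors like `f`, computed per window, are what a selection stage such as a genetic algorithm would then filter and classify.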

ACS Style

Ahmad Jalal; Majid Ali Khan Quaid; Sheikh Badar Ud Din Tahir; KiBum Kim. A Study of Accelerometer and Gyroscope Measurements in Physical Life-Log Activities Detection Systems. Sensors 2020, 20, 6670.

AMA Style

Ahmad Jalal, Majid Ali Khan Quaid, Sheikh Badar Ud Din Tahir, KiBum Kim. A Study of Accelerometer and Gyroscope Measurements in Physical Life-Log Activities Detection Systems. Sensors. 2020; 20 (22):6670.

Chicago/Turabian Style

Ahmad Jalal; Majid Ali Khan Quaid; Sheikh Badar Ud Din Tahir; KiBum Kim. 2020. "A Study of Accelerometer and Gyroscope Measurements in Physical Life-Log Activities Detection Systems." Sensors 20, no. 22: 6670.

Journal article
Published: 24 October 2020 in Symmetry

Recent developments in sensor technologies make physical activity recognition (PAR) an essential tool for smart health monitoring and fitness exercises. For efficient PAR, model representation and training are significant factors contributing to the ultimate success of recognition systems, because body parts and physical activities cannot be detected accurately if the system is not well trained. This paper provides a unified framework that explores multidimensional features through a fusion of body part models and quadratic discriminant analysis, and uses these features for markerless human pose estimation. Multilevel features are extracted as displacement parameters that serve as spatiotemporal properties, representing the respective positions of the body parts over time. Finally, these features are processed by a maximum entropy Markov model as a recognition engine based on transition and emission probability values. Experimental results demonstrate that the proposed model produces more accurate results than state-of-the-art methods for both body part detection and physical activity recognition. The accuracy of the proposed method for body part detection is 90.91% on the University of Central Florida (UCF) sports action dataset; for activity recognition, accuracy is 89.09% on the UCF YouTube action dataset and 88.26% on the IM-DailyRGBEvents dataset.
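The recognition engine described works from transition and emission probabilities. As a simplified stand-in for the maximum entropy Markov model, the sketch below runs Viterbi decoding over toy transition/emission tables for two activity states; all state names and numbers are illustrative assumptions.

```python
import numpy as np

# Toy tables: two activity states and two observation symbols (0 = slow, 1 = fast)
states = ["walk", "run"]
trans = np.log(np.array([[0.8, 0.2],    # P(next state | walk)
                         [0.3, 0.7]]))  # P(next state | run)
emit = np.log(np.array([[0.9, 0.1],     # P(obs | walk)
                        [0.2, 0.8]]))   # P(obs | run)
obs = [0, 0, 1, 1, 1]

def viterbi(obs, trans, emit, prior=None):
    """Most likely state sequence under log transition/emission scores."""
    n = trans.shape[0]
    prior = np.log(np.full(n, 1.0 / n)) if prior is None else prior
    score = prior + emit[:, obs[0]]
    back = []
    for o in obs[1:]:
        cand = score[:, None] + trans        # cand[i, j]: best path into j via i
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0) + emit[:, o]
    path = [int(score.argmax())]
    for b in reversed(back):                 # trace back-pointers
        path.append(int(b[path[-1]]))
    return path[::-1]

path = [states[i] for i in viterbi(obs, trans, emit)]
```

With these tables the decoder switches from "walk" to "run" as the observations turn fast, which is the behavior a transition/emission recognizer relies on.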

ACS Style

Amir Nadeem; Ahmad Jalal; KiBum Kim. Accurate Physical Activity Recognition using Multidimensional Features and Markov Model for Smart Health Fitness. Symmetry 2020, 12, 1766.

AMA Style

Amir Nadeem, Ahmad Jalal, KiBum Kim. Accurate Physical Activity Recognition using Multidimensional Features and Markov Model for Smart Health Fitness. Symmetry. 2020; 12 (11):1766.

Chicago/Turabian Style

Amir Nadeem; Ahmad Jalal; KiBum Kim. 2020. "Accurate Physical Activity Recognition using Multidimensional Features and Markov Model for Smart Health Fitness." Symmetry 12, no. 11: 1766.

Journal article
Published: 13 October 2020 in Applied Sciences

The classification of human activity is becoming one of the most important areas of human health monitoring and physical fitness. With the use of physical activity recognition applications, people suffering from various diseases can be efficiently monitored and medical treatment can be administered in a timely fashion. These applications could improve remote services for health care monitoring and delivery. However, the fixed health monitoring devices provided in hospitals limit the subjects' movement. In particular, our work reports on wearable sensors that provide remote monitoring, periodically checking human health through different postures and activities to give people timely and effective treatment. In this paper, we propose a novel human activity recognition (HAR) system with multiple combined features to monitor human physical movements from continuous sequences via tri-axial inertial sensors. The proposed HAR system filters 1D signals using a notch filter that examines the lower/upper cutoff frequencies to calculate the optimal wearable sensor data. It then calculates multiple combined features, i.e., statistical features, Mel Frequency Cepstral Coefficients, and Gaussian Mixture Model features. For the classification and recognition engine, a Decision Tree classifier optimized by the Binary Grey Wolf Optimization algorithm is proposed. The proposed system is applied and tested on three challenging benchmark datasets to assess the feasibility of the model. The experimental results show that our proposed system attained an exceptional level of performance compared to conventional solutions. We achieved accuracy rates of 88.25%, 93.95%, and 96.83% on the MOTIONSENSE, MHEALTH, and proposed self-annotated IM-AccGyro human-machine datasets, respectively.
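The notch-filtering step can be sketched with a standard second-order IIR notch biquad. The design below is the textbook formulation, not the authors' exact preprocessing; the sampling rate, notch frequency, and Q factor are assumed values.

```python
import numpy as np

def notch_filter(x, f0, fs, q=30.0):
    """Apply a second-order IIR notch at f0 Hz (sample rate fs) to a 1-D
    signal. Hand-rolled standard biquad design, applied as a direct-form
    difference equation."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    b, a = b / a[0], a / a[0]                # normalize so a[0] = 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b[0] * x[n]
        if n >= 1:
            y[n] += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            y[n] += b[2] * x[n - 2] - a[2] * y[n - 2]
    return y

fs = 100.0                                   # assumed 100 Hz sampling rate
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 25 * t)  # 2 Hz motion + 25 Hz noise
y = notch_filter(x, f0=25.0, fs=fs)
```

After the filter's transient settles, the 25 Hz interference is suppressed while the low-frequency motion component passes through almost unchanged.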

ACS Style

Ahmad Jalal; Mouazma Batool; KiBum Kim. Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors. Applied Sciences 2020, 10, 7122.

AMA Style

Ahmad Jalal, Mouazma Batool, KiBum Kim. Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors. Applied Sciences. 2020; 10 (20):7122.

Chicago/Turabian Style

Ahmad Jalal; Mouazma Batool; KiBum Kim. 2020. "Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors." Applied Sciences 10, no. 20: 7122.

Original article
Published: 02 October 2020 in Journal of Electrical Engineering & Technology

Wearable sensors in the smart home environment have been actively developed as assistive systems to detect behavioral anomalies. Smart wearable devices incorporated into daily life can respond immediately to anomalies, processing and dispatching important information in real time. Artificially intelligent monitoring of the user's daily activities and smart home ambience is especially useful in telehealthcare. In this paper, we propose a behavioral activity recognition framework which uses inertial devices (accelerometer and gyroscope) for activity detection within the home environment via multi-fused features and a reweighted genetic algorithm. The procedure begins with the segmentation and framing of data to enable efficient processing of useful information. Features are then extracted and transformed into a matrix. Finally, biogeography-based optimization and a reweighted genetic algorithm are used for the optimization and classification of the extracted features. For evaluation, we used the leave-one-out cross-validation scheme. The results outperformed existing state-of-the-art methods, achieving higher recognition accuracy rates of 88%, 88.75%, and 93.33% on the CMU-Multi-Modal Activity, WISDM, and IMSB datasets respectively.
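A minimal genetic-algorithm feature-selection loop, in the spirit of the reweighted GA described, might look as follows. The toy fitness function stands in for classifier accuracy, and every parameter and name here is an assumption rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fitness: reward selecting the "informative" features (indices 0-2)
# and lightly penalize subset size. In the real system this score would be
# the recognition accuracy achieved with the selected feature subset.
INFORMATIVE = {0, 1, 2}

def fitness(mask):
    hits = sum(1 for i, m in enumerate(mask) if m and i in INFORMATIVE)
    return hits - 0.1 * mask.sum()

def genetic_select(n_feats=10, pop_size=20, n_gens=40, p_mut=0.1):
    """Evolve binary feature masks: truncation selection, one-point
    crossover, bit-flip mutation; the top half survives each generation."""
    pop = rng.integers(0, 2, (pop_size, n_feats))
    for _ in range(n_gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feats)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feats) < p_mut        # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.vstack([parents, children])
    return max(pop, key=fitness)

best = genetic_select()
```

Because the top half of each generation survives unchanged, the best mask found never regresses; the loop converges toward the informative feature subset.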

ACS Style

Mouazma Batool; Ahmad Jalal; KiBum Kim. Telemonitoring of Daily Activity Using Accelerometer and Gyroscope in Smart Home Environments. Journal of Electrical Engineering & Technology 2020, 15, 2801-2809.

AMA Style

Mouazma Batool, Ahmad Jalal, KiBum Kim. Telemonitoring of Daily Activity Using Accelerometer and Gyroscope in Smart Home Environments. Journal of Electrical Engineering & Technology. 2020; 15 (6):2801-2809.

Chicago/Turabian Style

Mouazma Batool; Ahmad Jalal; KiBum Kim. 2020. "Telemonitoring of Daily Activity Using Accelerometer and Gyroscope in Smart Home Environments." Journal of Electrical Engineering & Technology 15, no. 6: 2801-2809.

Journal article
Published: 26 July 2020 in Entropy

Automatic identification of human interaction from video sequences is a challenging task, especially in dynamic environments with cluttered backgrounds. Advances in computer vision sensor technologies have substantially strengthened human interaction recognition (HIR) in routine daily life. In this paper, we propose a novel feature extraction method which incorporates robust entropy optimization and an efficient Maximum Entropy Markov Model (MEMM) for HIR via multiple vision sensors. The main objectives of the proposed methodology are: (1) to propose a hybrid of four novel features, i.e., spatio-temporal features, energy-based features, shape-based angular and geometric features, and a motion-orthogonal histogram of oriented gradients (MO-HOG); (2) to encode the hybrid feature descriptors using a codebook, a Gaussian mixture model (GMM), and Fisher encoding; (3) to optimize the encoded features using a cross-entropy optimization function; and (4) to apply a MEMM classification algorithm that examines empirical expectations and maximum entropy, measuring pattern variances to achieve superior HIR accuracy. Our system is tested on three well-known datasets: the SBU Kinect interaction, UoL 3D social activity, and UT-Interaction datasets. In extensive experiments, the proposed feature extraction algorithm, together with cross-entropy optimization, achieved average accuracy rates of 91.25% on SBU, 90.4% on UoL, and 87.4% on UT-Interaction. The proposed HIR system will be applicable to a wide variety of man–machine interfaces, such as public-place surveillance, future medical applications, virtual reality, fitness exercises, and 3D interactive gaming.
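The encoding step, mapping local descriptors to a fixed-length vector under a GMM with Fisher encoding, can be sketched as below. This is the standard mean-gradient Fisher vector with toy diagonal-GMM parameters, not the paper's trained codebook; all values are assumptions.

```python
import numpy as np

def fisher_vector(X, pi, mu, sigma):
    """Fisher-vector encoding of local descriptors X (T, D) under a diagonal
    GMM with weights pi (K,), means mu (K, D), std devs sigma (K, D).
    Gradients w.r.t. the means only, for brevity."""
    T = len(X)
    # Posterior responsibilities gamma (T, K) under the diagonal Gaussians,
    # computed in log space for numerical stability.
    log_p = (-0.5 * (((X[:, None, :] - mu) / sigma) ** 2).sum(-1)
             - np.log(sigma).sum(-1) + np.log(pi))
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)
    # Mean-gradient block of the Fisher vector, one D-dim block per component
    fv = (gamma[:, :, None] * (X[:, None, :] - mu) / sigma).sum(0)
    fv /= T * np.sqrt(pi)[:, None]
    return fv.ravel()

# Toy 2-component GMM over 2-D descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
pi = np.array([0.5, 0.5])
mu = np.array([[-1.0, 0.0], [1.0, 0.0]])
sigma = np.ones((2, 2))
fv = fisher_vector(X, pi, mu, sigma)
```

The resulting K*D vector (here 4-dimensional) is the fixed-length representation a classifier such as the MEMM would consume.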

ACS Style

Ahmad Jalal; Nida Khalid; KiBum Kim. Automatic Recognition of Human Interaction via Hybrid Descriptors and Maximum Entropy Markov Model Using Depth Sensors. Entropy 2020, 22, 817.

AMA Style

Ahmad Jalal, Nida Khalid, KiBum Kim. Automatic Recognition of Human Interaction via Hybrid Descriptors and Maximum Entropy Markov Model Using Depth Sensors. Entropy. 2020; 22 (8):817.

Chicago/Turabian Style

Ahmad Jalal; Nida Khalid; KiBum Kim. 2020. "Automatic Recognition of Human Interaction via Hybrid Descriptors and Maximum Entropy Markov Model Using Depth Sensors." Entropy 22, no. 8: 817.