Human-Object Interaction (HOI) recognition, due to its significance in many computer vision-based applications, requires in-depth and meaningful details from image sequences. Incorporating semantics into scene understanding has led to a deeper understanding of human-centric actions. Therefore, in this research work, we propose a semantic HOI recognition system based on multi-vision sensors. In the proposed system, RGB and depth images are de-noised via Bilateral Filtering (BLF) and then segmented into multiple clusters using the Simple Linear Iterative Clustering (SLIC) algorithm. The skeleton is then extracted from the segmented RGB and depth images via the Euclidean Distance Transform (EDT). Human joints, extracted from the skeleton, provide the annotations for accurate pixel-level labeling. An elliptical human model is then generated via a Gaussian Mixture Model (GMM). A Conditional Random Field (CRF) model is trained to assign a specific label to each pixel of the different human body parts and the interaction object. Two types of semantic features are extracted from each labeled human body part and labeled object: fiducial points and 3D point clouds. Feature descriptors are quantized using Fisher's Linear Discriminant Analysis (FLDA) and classified using K-ary Tree Hashing (KATH). In the experimentation phase, the recognition accuracy achieved with the Sports dataset is 92.88%, with the Sun Yat-sen University (SYSU) 3D HOI dataset 93.5% and with the Nanyang Technological University (NTU) RGB+D dataset 94.16%. The proposed system is validated via extensive experimentation and should be applicable to many computer vision-based applications such as healthcare monitoring, security systems and assisted living.
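The skeleton-extraction stage relies on the Euclidean Distance Transform over binary silhouettes. A brute-force sketch of the transform itself (illustrative only, not the paper's implementation; the toy mask below is an assumption):

```python
import math

def euclidean_distance_transform(mask):
    """For each foreground cell (1), return the distance to the
    nearest background cell (0). Brute force, for small masks only."""
    rows, cols = len(mask), len(mask[0])
    background = [(r, c) for r in range(rows) for c in range(cols)
                  if mask[r][c] == 0]
    dist = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1:
                dist[r][c] = min(math.hypot(r - br, c - bc)
                                 for br, bc in background)
    return dist

# A lone foreground cell surrounded by background is 1 unit away from it.
mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(euclidean_distance_transform(mask)[1][1])  # → 1.0
```

In practice the skeleton is then read off as the ridge of local maxima of this distance map inside the silhouette.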
Nida Khalid; Yazeed Yasin Ghadi; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. Semantic Recognition of Human-Object Interactions via Gaussian-based Elliptical Modelling and Pixel-Level Labeling. IEEE Access 2021, PP, 1-1.
AMA Style: Nida Khalid, Yazeed Yasin Ghadi, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. Semantic Recognition of Human-Object Interactions via Gaussian-based Elliptical Modelling and Pixel-Level Labeling. IEEE Access. 2021;PP(99):1-1.
Chicago/Turabian Style: Nida Khalid; Yazeed Yasin Ghadi; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "Semantic Recognition of Human-Object Interactions via Gaussian-based Elliptical Modelling and Pixel-Level Labeling." IEEE Access PP, no. 99: 1-1.
This work presents the grouping of dependent tasks into clusters using a Bayesian analysis model to solve the affinity scheduling problem in heterogeneous multicore systems. Non-affinity scheduling of tasks has a negative impact, as the overall execution time of the tasks increases. Furthermore, non-affinity-based scheduling also limits the potential for data reuse in the caches, so the same data must be brought into the caches multiple times. In heterogeneous multicore systems, it is essential to address the load balancing problem, as the cores operate at varying frequencies. We propose two techniques to solve the load balancing issue: one, designated the "chunk-based scheduler" (CBS), is applied to heterogeneous systems, while the other, "quantum-based intra-core task migration" (QBICTM), gives each task a fair and equal chance to run on the fastest core. Results show a 30–55% improvement in the average execution time of the tasks when applying our CBS or QBICTM scheduler compared to other traditional schedulers under the same operating system.
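The load-balancing idea behind chunk-based scheduling can be sketched as assigning work in proportion to core speed; this is an illustrative stand-in, not the paper's exact CBS algorithm, and the function name and frequencies are assumptions:

```python
def chunk_schedule(tasks, core_freqs):
    """Split a task list into chunks proportional to each core's
    clock frequency, so faster cores receive more work."""
    total = sum(core_freqs)
    assignment, start = [], 0
    for i, f in enumerate(core_freqs):
        size = round(len(tasks) * f / total)
        if i == len(core_freqs) - 1:      # last core takes the remainder
            size = len(tasks) - start
        assignment.append(tasks[start:start + size])
        start += size
    return assignment

# 12 tasks over cores at 1 GHz and 2 GHz → chunks of 4 and 8 tasks.
chunks = chunk_schedule(list(range(12)), [1.0, 2.0])
print([len(c) for c in chunks])  # → [4, 8]
```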
Sohaib Abbasi; Shaharyar Kamal; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. Affinity-Based Task Scheduling on Heterogeneous Multicore Systems Using CBS and QBICTM. Applied Sciences 2021, 11, 5740.
AMA Style: Sohaib Abbasi, Shaharyar Kamal, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. Affinity-Based Task Scheduling on Heterogeneous Multicore Systems Using CBS and QBICTM. Applied Sciences. 2021;11(12):5740.
Chicago/Turabian Style: Sohaib Abbasi; Shaharyar Kamal; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "Affinity-Based Task Scheduling on Heterogeneous Multicore Systems Using CBS and QBICTM." Applied Sciences 11, no. 12: 5740.
Automatic head tracking and counting using depth imagery has various practical applications in security, logistics, queue management, space utilization and visitor counting. However, no currently available system can clearly distinguish between a human head and other objects in order to track and count people accurately. For this reason, we propose a novel system that can track people by monitoring their heads and shoulders in complex environments and also count the number of people entering and exiting the scene. Our system is split into six phases: first, preprocessing is performed by converting videos of a scene into frames and removing the background from the video frames. Second, heads are detected using the Hough Circular Gradient Transform, and shoulders are detected by HOG-based symmetry methods. Third, three robust features, namely fused joint HOG-LBP, energy-based point clouds and fused intra-inter trajectories, are extracted. Fourth, the Apriori association algorithm is implemented to select the best features. Fifth, deep learning is used for accurate people tracking. Finally, heads are counted using cross-line judgment. The system was tested on three benchmark datasets, the PCDS dataset, the MICC people counting dataset and the GOTPD dataset, and achieved counting accuracies of 98.40%, 98%, and 99%, respectively. Our system thus obtained remarkable results.
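The final cross-line judgment step can be sketched as checking when each tracked head centroid crosses a virtual line; the trajectory values and the downward-equals-entering convention below are illustrative assumptions:

```python
def count_crossings(trajectories, line_y):
    """Count entries/exits by checking when each tracked head's
    centroid crosses a horizontal virtual line (cross-line judgment).
    Each trajectory is a list of (x, y) centroids over frames."""
    entering = exiting = 0
    for traj in trajectories:
        for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
            if y0 < line_y <= y1:
                entering += 1          # moved downward across the line
            elif y1 < line_y <= y0:
                exiting += 1           # moved upward across the line
    return entering, exiting

tracks = [[(5, 10), (5, 25), (5, 40)],   # crosses line_y=30 downward
          [(8, 50), (8, 20)]]            # crosses line_y=30 upward
print(count_crossings(tracks, 30))  # → (1, 1)
```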
Munkhjargal Gochoo; Syeda Rizwan; Yazeed Ghadi; Ahmad Jalal; KiBum Kim. A Systematic Deep Learning Based Overhead Tracking and Counting System Using RGB-D Remote Cameras. Applied Sciences 2021, 11, 5503.
AMA Style: Munkhjargal Gochoo, Syeda Rizwan, Yazeed Ghadi, Ahmad Jalal, KiBum Kim. A Systematic Deep Learning Based Overhead Tracking and Counting System Using RGB-D Remote Cameras. Applied Sciences. 2021;11(12):5503.
Chicago/Turabian Style: Munkhjargal Gochoo; Syeda Rizwan; Yazeed Ghadi; Ahmad Jalal; KiBum Kim. 2021. "A Systematic Deep Learning Based Overhead Tracking and Counting System Using RGB-D Remote Cameras." Applied Sciences 11, no. 12: 5503.
To prevent disasters and to control and supervise crowds, automated video surveillance has become indispensable. In today's complex and crowded environments, manual surveillance and monitoring systems are inefficient, labor intensive, and unwieldy. Automated video surveillance systems offer promising solutions, but challenges remain. One of the major challenges is the extraction of true foregrounds of pixels representing humans only. Furthermore, to accurately understand and interpret crowd behavior, human crowd behavior (HCB) systems require robust feature extraction methods, along with powerful and reliable decision-making classifiers. In this paper, we address these issues by presenting a novel Particles Force Model for multi-person tracking, a vigorous fusion of global and local descriptors, and a robust improved entropy classifier for detecting and interpreting crowd behavior. In the proposed model, the necessary preprocessing steps are followed by the application of a first distance algorithm for the removal of background clutter; true-foreground elements are then extracted via the Particles Force Model. The detected human forms are then counted by labeling and performing cluster estimation, using a K-nearest neighbors search algorithm. After that, the location of each human silhouette is fixed and, using the Jaccard similarity index and normalized cross-correlation as a cost function, multi-person tracking is performed. For HCB detection, we introduce human crowd contour extraction as a global feature and a particles gradient motion (PGD) descriptor, along with geometrical and speeded up robust features (SURF), as local features. After feature extraction, we apply bat optimization to select optimal features; this also works as a pre-classifier. Finally, we introduce a robust improved entropy classifier for decision making and automated crowd behavior detection in smart surveillance systems.
We evaluated the performance of our proposed system on the publicly available PETS2009 and UMN benchmark datasets. Experimental results show that our system performed better than existing well-known state-of-the-art methods, achieving higher accuracy rates. The proposed system can be deployed to great benefit in numerous public places, such as airports, shopping malls, city centers, and train stations, to control, supervise, and protect crowds.
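The Jaccard similarity index used in the tracking cost function is, for bounding boxes, the familiar intersection-over-union; a minimal sketch (box coordinates below are illustrative):

```python
def jaccard_index(box_a, box_b):
    """Jaccard similarity (intersection over union) of two
    axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two 2x2 boxes overlapping in a 1x2 strip: 2 / (4 + 4 - 2) = 1/3.
print(jaccard_index((0, 0, 2, 2), (1, 0, 3, 2)))  # → 0.3333333333333333
```

In a tracker, the index is evaluated between a silhouette's box in consecutive frames and combined with appearance cues (here, normalized cross-correlation) to score candidate matches.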
Faisal Abdullah; Yazeed Ghadi; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. Multi-Person Tracking and Crowd Behavior Detection via Particles Gradient Motion Descriptor and Improved Entropy Classifier. Entropy 2021, 23, 628.
AMA Style: Faisal Abdullah, Yazeed Ghadi, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. Multi-Person Tracking and Crowd Behavior Detection via Particles Gradient Motion Descriptor and Improved Entropy Classifier. Entropy. 2021;23(5):628.
Chicago/Turabian Style: Faisal Abdullah; Yazeed Ghadi; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "Multi-Person Tracking and Crowd Behavior Detection via Particles Gradient Motion Descriptor and Improved Entropy Classifier." Entropy 23, no. 5: 628.
The monitoring of human physical activities using wearable sensors, such as inertial sensors, plays a significant role in various current and potential applications, including physical health tracking, surveillance systems, and robotic assistive technologies. Despite this wide range of applications, the classification and recognition of human activities remains imprecise, which may contribute to unfavorable reactions and responses. To improve the recognition of human activities, we designed a dataset in which ten participants (five male and five female) performed 11 different activities while wearing three body-worn inertial sensors at different locations on the body. Our model extracts features via a hierarchical technique spanning the time, wavelet, and time-frequency domains. Stochastic gradient descent (SGD) is then introduced to optimize the selected features. The selected features with optimized patterns are further processed by a multi-layered kernel sliding perceptron to develop adaptive learning for the classification of physical human activities. Our proposed model was experimentally evaluated on three benchmark datasets: IM-WSHA, a self-annotated dataset; PAMAP2, a dataset comprising daily living activities; and HuGaDB, a dataset containing physical activities of aging people. The experimental results show that the proposed method outperforms others in terms of recognition accuracy, achieving accuracy rates of 83.18%, 94.16%, and 92.50% on the IM-WSHA, PAMAP2, and HuGaDB datasets, respectively.
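Time-domain feature extraction from inertial signals typically works over sliding windows; a minimal sketch of that idea (window length, step, and the mean/standard deviation/energy feature set are illustrative assumptions, not the paper's exact hierarchy):

```python
import math

def window_features(signal, win, step):
    """Extract simple time-domain features (mean, standard deviation,
    mean energy) over sliding windows of a 1-D inertial signal."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        mean = sum(w) / win
        std = math.sqrt(sum((x - mean) ** 2 for x in w) / win)
        energy = sum(x * x for x in w) / win
        feats.append((mean, std, energy))
    return feats

# A zero-mean oscillation: 3 half-overlapping windows, each with mean 0.
feats = window_features([0, 1, 0, -1] * 4, win=8, step=4)
print(len(feats), feats[0][0])  # → 3 0.0
```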
Munkhjargal Gochoo; Sheikh Badar Ud Din Tahir; Ahmad Jalal; KiBum Kim. Monitoring Real-Time Personal Locomotion Behaviors Over Smart Indoor-Outdoor Environments Via Body-Worn Sensors. IEEE Access 2021, 9, 70556-70570.
AMA Style: Munkhjargal Gochoo, Sheikh Badar Ud Din Tahir, Ahmad Jalal, KiBum Kim. Monitoring Real-Time Personal Locomotion Behaviors Over Smart Indoor-Outdoor Environments Via Body-Worn Sensors. IEEE Access. 2021;9:70556-70570.
Chicago/Turabian Style: Munkhjargal Gochoo; Sheikh Badar Ud Din Tahir; Ahmad Jalal; KiBum Kim. 2021. "Monitoring Real-Time Personal Locomotion Behaviors Over Smart Indoor-Outdoor Environments Via Body-Worn Sensors." IEEE Access 9: 70556-70570.
To understand daily events accurately, adaptive pose estimation (APE) systems require a robust context-aware model and optimal feature selection methods. In this paper, we propose a novel gait event detection (GED) system that consists of saliency silhouette detection, a robust body parts model and a 2D stick model, followed by a hierarchical optimization algorithm. Furthermore, the most prominent context-aware features, such as energy, 0–180° intensity and distinct moveable features, are proposed by focusing on invariant and localized characteristics of human postures in different event classes. Finally, we apply Grey Wolf optimization and a genetic algorithm to discriminate complex postures and to assign the appropriate label to each event. To evaluate the performance of the proposed GED system, two public benchmark datasets, UCF101 and YouTube, are examined via the n-fold cross-validation method. On the two benchmark datasets, our proposed method detects the human body key points with 82.4% and 83.2% accuracy, respectively; it then extracts the context-aware features and finally recognizes the gait events with 82.6% and 85.0% accuracy, respectively. Compared with well-known statistical and state-of-the-art methods, our proposed method performs better in terms of posture detection and recognition accuracy.
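Grey Wolf optimization drives candidate solutions toward the three best "wolves" (alpha, beta, delta) while an exploration coefficient decays. Below is a simplified, elitist sketch of that update rule on a toy sphere objective; all hyperparameters are illustrative and this is not the paper's tuned optimizer:

```python
import random

def grey_wolf_optimize(fitness, dim, n_wolves=12, iters=60,
                       lo=-5.0, hi=5.0, seed=1):
    """Minimal Grey Wolf Optimization (minimization): followers are
    pulled toward alpha, beta and delta while coefficient `a` decays
    from 2 to 0. The three leaders are kept as elites in this sketch."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=fitness)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters
        for w in wolves[3:]:
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    pos += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, pos / 3))
    return min(wolves, key=fitness)

best = grey_wolf_optimize(lambda x: sum(v * v for v in x), dim=2)
print(best)  # a point drawn toward the origin, the sphere minimum
```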
Israr Akhter; Ahmad Jalal; KiBum Kim. Adaptive Pose Estimation for Gait Event Detection Using Context-Aware Model and Hierarchical Optimization. Journal of Electrical Engineering & Technology 2021, 1-9.
AMA Style: Israr Akhter, Ahmad Jalal, KiBum Kim. Adaptive Pose Estimation for Gait Event Detection Using Context-Aware Model and Hierarchical Optimization. Journal of Electrical Engineering & Technology. 2021:1-9.
Chicago/Turabian Style: Israr Akhter; Ahmad Jalal; KiBum Kim. 2021. "Adaptive Pose Estimation for Gait Event Detection Using Context-Aware Model and Hierarchical Optimization." Journal of Electrical Engineering & Technology: 1-9.
Automated human posture estimation (A-HPE) systems need delicate methods for detecting body parts and selecting cues, based on marker-less sensors, to effectively recognize complex activity motions. Recognition of human activities using vision sensors is a challenging issue due to variations in illumination conditions and the complex movements involved in monitoring sports and fitness exercises. In this paper, we propose a novel A-HPE method that intelligently identifies human behaviours by utilizing saliency silhouette detection, a robust body parts model and multidimensional cues from full-body silhouettes, followed by an entropy Markov model. Initially, images are pre-processed and noise is removed to obtain a robust silhouette. Body parts models are then used to extract twelve key body parts. These key body parts are further optimized to assist the generation of multidimensional cues. These cues, which include energy, optical flow and distinctive values, are fed into quadratic discriminant analysis to discriminate the cues that help in the recognition of actions. Finally, these optimized patterns are further processed by a maximum entropy Markov model as a recognizer engine, based on transition and emission probability values, for activity recognition. For evaluation, we used a leave-one-out cross-validation scheme; the results outperformed existing well-known statistical state-of-the-art methods by achieving better body part detection and higher recognition accuracy over four benchmark datasets. The proposed method will be useful for man-machine interactions such as 3D interactive games, virtual reality, service robots, e-health fitness, and security surveillance.
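A recognizer engine scored by transition and emission probabilities decodes the most likely label sequence with Viterbi-style dynamic programming. A generic sketch follows; the activity labels, observations and probability values are toy assumptions, not the paper's trained model:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state sequence for `obs` under start, transition
    and emission probabilities (classic Viterbi decoding)."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("walk", "jump")  # hypothetical activity labels
start = {"walk": 0.6, "jump": 0.4}
trans = {"walk": {"walk": 0.8, "jump": 0.2},
         "jump": {"walk": 0.5, "jump": 0.5}}
emit = {"walk": {"low": 0.7, "high": 0.3},
        "jump": {"low": 0.2, "high": 0.8}}
print(viterbi(["low", "low", "high"], states, start, trans, emit))
# → ['walk', 'walk', 'walk']
```

A maximum entropy Markov model replaces the fixed emission table with feature-conditioned probabilities, but the decoding step is the same.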
Amir Nadeem; Ahmad Jalal; KiBum Kim. Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model. Multimedia Tools and Applications 2021, 80, 21465-21498.
AMA Style: Amir Nadeem, Ahmad Jalal, KiBum Kim. Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model. Multimedia Tools and Applications. 2021;80(14):21465-21498.
Chicago/Turabian Style: Amir Nadeem; Ahmad Jalal; KiBum Kim. 2021. "Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model." Multimedia Tools and Applications 80, no. 14: 21465-21498.
Due to the constantly increasing demand for the automatic localization of landmarks in hand gesture recognition, there is a need for a more sustainable, intelligent, and reliable hand gesture recognition system. The main purpose of this study was to develop an accurate hand gesture recognition system capable of error-free auto-landmark localization of any gesture detectable in an RGB image. In this paper, we propose a system based on landmark extraction from RGB images regardless of the environment. The extraction of gestures is performed via two methods, namely, fused and directional image methods; the fused method produced higher gesture recognition accuracy. In the proposed system, hand gesture recognition (HGR) is done via several different methods, namely, (1) HGR via point-based features, which consist of (i) distance features, (ii) angular features, and (iii) geometric features; and (2) HGR via full hand features, which are composed of (i) SONG mesh geometry and (ii) an active model. To optimize these features, we applied Grey Wolf optimization. After optimization, a reweighted genetic algorithm was used for classification and gesture recognition. Experimentation was performed on five challenging datasets: Sign Word, Dexter1, Dexter + Object, STB, and NYU. Experimental results proved that auto-landmark localization with the proposed feature extraction technique is an efficient approach towards developing a robust HGR system. The classification results of the reweighted genetic algorithm were compared with an Artificial Neural Network (ANN) and a decision tree. The developed system can play a significant role in healthcare muscle exercise monitoring.
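One of the point-based features named above, the angular feature, measures the angle formed at each landmark by its neighbours; a minimal sketch (the landmark coordinates are illustrative, not from any dataset):

```python
import math

def angular_features(landmarks):
    """Interior angle (degrees) at each landmark formed by its two
    neighbours, for an ordered chain of (x, y) landmark points."""
    angles = []
    for (ax, ay), (bx, by), (cx, cy) in zip(landmarks, landmarks[1:],
                                            landmarks[2:]):
        v1 = (ax - bx, ay - by)          # vector from joint b to a
        v2 = (cx - bx, cy - by)          # vector from joint b to c
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        angles.append(math.degrees(
            math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))))
    return angles

# A right-angle bend at the middle joint of a three-point finger chain.
print(round(angular_features([(0, 0), (1, 0), (1, 1)])[0], 6))  # → 90.0
```

Distance features are the analogous pairwise Euclidean distances between landmarks, so both feature families come almost for free once landmarks are localized.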
Hira Ansar; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities. Sustainability 2021, 13, 2961.
AMA Style: Hira Ansar, Ahmad Jalal, Munkhjargal Gochoo, KiBum Kim. Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities. Sustainability. 2021;13(5):2961.
Chicago/Turabian Style: Hira Ansar; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. 2021. "Hand Gesture Recognition Based on Auto-Landmark Localization and Reweighted Genetic Algorithm for Healthcare Muscle Activities." Sustainability 13, no. 5: 2961.
Advances in video capturing devices enable adaptive posture estimation (APE) and event classification of multiple human-based videos for smart systems. Accurate event classification and adaptive posture estimation remain challenging domains, although researchers continue to work towards solutions. In this research article, we propose a novel method to classify stochastic remote sensing events and to perform adaptive posture estimation. We performed human silhouette extraction using the Gaussian Mixture Model (GMM) and a saliency map. After that, we performed human body part detection and used a unified pseudo-2D stick model for adaptive posture estimation. Multifused data, including energy, 3D Cartesian view, angular geometric, skeleton zigzag and moveable body part features, were applied. Using a charged system search, we optimized our feature vector and deep belief network. We classified complex events performed in the sports videos in the wild (SVW), Olympic sports, UCF aerial action and UT-Interaction datasets. The mean accuracy of human body part detection was 83.57% over the UT-Interaction, 83.00% over the Olympic sports and 83.78% over the SVW dataset. The mean event classification accuracy was 91.67% over the UT-Interaction, 92.50% over the Olympic sports and 89.47% over the SVW dataset. These results are superior to those of existing state-of-the-art methods.
Munkhjargal Gochoo; Israr Akhter; Ahmad Jalal; KiBum Kim. Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network. Remote Sensing 2021, 13, 912.
AMA Style: Munkhjargal Gochoo, Israr Akhter, Ahmad Jalal, KiBum Kim. Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network. Remote Sensing. 2021;13(5):912.
Chicago/Turabian Style: Munkhjargal Gochoo; Israr Akhter; Ahmad Jalal; KiBum Kim. 2021. "Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network." Remote Sensing 13, no. 5: 912.
The features and appearance of the human face are affected greatly by aging. A human face is an important aspect of human age identification from childhood through adulthood. Although many traits are used in human age estimation, this article discusses age classification using salient texture and facial landmark feature vectors. We propose a novel human age classification (HAC) model that can localize landmark points of the face. A robust multi-perspective view-based Active Shape Model (ASM) is generated and age classification is achieved using a Convolutional Neural Network (CNN). The HAC model is subdivided into the following steps: (1) first, a face is detected using a YCbCr color segmentation model; (2) landmark localization is done on the face using a connected components approach and a ridge contour method; (3) an Active Shape Model (ASM) is generated on the face using three-sided polygon meshes and perpendicular bisection of a triangle; (4) feature extraction is achieved using an anthropometric model, cranio-facial development, interior angle formulation, wrinkle detection and heat maps; (5) Sequential Forward Selection (SFS) is used to select the most ideal set of features; and (6) finally, the CNN model is used to classify each face into the correct age group. The proposed system outperforms existing statistical state-of-the-art HAC methods in terms of classification accuracy, achieving 91.58% with The Images of Groups dataset, 92.62% with the OUI Adience dataset and 94.59% with the FG-NET dataset. The system is applicable to many research areas including access control, surveillance monitoring, human–machine interaction and self-identification.
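The YCbCr color segmentation step converts RGB to YCbCr chrominance and thresholds it; the sketch below uses the standard ITU-R BT.601 conversion with commonly cited skin-tone Cb/Cr ranges, which are heuristic assumptions rather than the paper's trained thresholds:

```python
def is_skin_pixel(r, g, b):
    """Classify an RGB pixel as skin via YCbCr chrominance thresholds
    (BT.601 conversion; Cb in [77, 127], Cr in [133, 173] is a common
    skin heuristic)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin_pixel(200, 150, 120))  # a light skin tone → True
print(is_skin_pixel(0, 0, 255))      # pure blue → False
```

The luma channel Y is deliberately ignored so the rule stays largely invariant to illumination changes.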
Syeda Rizwan; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. Robust Active Shape Model via Hierarchical Feature Extraction with SFS-Optimized Convolution Neural Network for Invariant Human Age Classification. Electronics 2021, 10, 465.
AMA Style: Syeda Rizwan, Ahmad Jalal, Munkhjargal Gochoo, KiBum Kim. Robust Active Shape Model via Hierarchical Feature Extraction with SFS-Optimized Convolution Neural Network for Invariant Human Age Classification. Electronics. 2021;10(4):465.
Chicago/Turabian Style: Syeda Rizwan; Ahmad Jalal; Munkhjargal Gochoo; KiBum Kim. 2021. "Robust Active Shape Model via Hierarchical Feature Extraction with SFS-Optimized Convolution Neural Network for Invariant Human Age Classification." Electronics 10, no. 4: 465.
With advances in machine vision systems (e.g., artificial eyes, unmanned aerial vehicles, surveillance monitoring), scene semantic recognition (SSR) technology has attracted much attention due to related applications such as autonomous driving, tourist navigation, intelligent traffic and remote aerial sensing. Although tremendous progress has been made in visual interpretation, several challenges remain (i.e., dynamic backgrounds, occlusion, lack of labeled data, and changes in illumination, direction, and size). Therefore, we propose a novel SSR framework that intelligently segments the locations of objects, generates a novel Bag of Features, and recognizes scenes via Maximum Entropy. First, denoising and smoothing are applied to the scene data. Second, a modified Fuzzy C-Means algorithm is integrated with super-pixels and a Random Forest for the segmentation of objects. Third, these segmented objects are used to extract a novel Bag of Features that concatenates different blobs, multiple orientations, Fourier transform and geometrical points over the objects. An Artificial Neural Network recognizes the multiple objects using their different patterns. Finally, labels are estimated via a Maximum Entropy model. During experimental evaluation, our proposed system achieved a remarkable mean accuracy rate of 90.07% over the MSRC dataset and 89.26% over Caltech 101 for object recognition, and 93.53% over the Pascal-VOC12 dataset for scene recognition. The proposed system should be applicable to various emerging technologies, such as augmented reality to represent the real-world environment for military training and engineering design, as well as entertainment, artificial eyes for visually impaired people, and traffic monitoring to avoid congestion or road accidents.
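The Fuzzy C-Means step assigns each pixel a soft membership in every cluster rather than a hard label. A minimal 1-D sketch of standard FCM (not the paper's modified, super-pixel-integrated variant; the sample values are illustrative):

```python
def fuzzy_c_means(points, c, m=2.0, iters=50):
    """Minimal 1-D Fuzzy C-Means: alternately update soft memberships
    u[i][k] and cluster centres; fuzzifier m controls softness."""
    sp = sorted(points)
    # deterministic spread initialization across the data range
    centres = [sp[i * (len(sp) - 1) // (c - 1)] for i in range(c)]
    for _ in range(iters):
        u = [[0.0] * len(points) for _ in range(c)]
        for k, x in enumerate(points):
            dists = [abs(x - ce) or 1e-12 for ce in centres]
            for i in range(c):
                u[i][k] = 1.0 / sum((dists[i] / dj) ** (2 / (m - 1))
                                    for dj in dists)
        centres = [sum(u[i][k] ** m * x for k, x in enumerate(points)) /
                   sum(w ** m for w in u[i]) for i in range(c)]
    return sorted(centres)

pts = [1.0, 1.1, 0.9, 8.0, 8.2, 7.8]
print(fuzzy_c_means(pts, 2))  # centres near 1 and 8
```

For segmentation, the same update runs on super-pixel color/position vectors instead of scalars, and each super-pixel takes the cluster with its highest membership.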
Ahmad Jalal; Abrar Ahmed; Adnan Ahmed Rafique; KiBum Kim. Scene Semantic Recognition Based on Modified Fuzzy C-Mean and Maximum Entropy Using Object-to-Object Relations. IEEE Access 2021, 9, 27758-27772.
AMA Style: Ahmad Jalal, Abrar Ahmed, Adnan Ahmed Rafique, KiBum Kim. Scene Semantic Recognition Based on Modified Fuzzy C-Mean and Maximum Entropy Using Object-to-Object Relations. IEEE Access. 2021;9:27758-27772.
Chicago/Turabian Style: Ahmad Jalal; Abrar Ahmed; Adnan Ahmed Rafique; KiBum Kim. 2021. "Scene Semantic Recognition Based on Modified Fuzzy C-Mean and Maximum Entropy Using Object-to-Object Relations." IEEE Access 9: 27758-27772.
The daily life-log routines of elderly individuals are susceptible to numerous complications in their physical healthcare patterns. Some of these complications can cause injuries, followed by extensive and expensive recovery stages. It is important to identify physical healthcare patterns that can describe and convey the exact state of an individual's physical health while they perform their daily life activities. In this paper, we propose a novel Sustainable Physical Healthcare Pattern Recognition (SPHR) approach using a hybrid features model that is capable of distinguishing multiple physical activities based on a multiple wearable sensors system. Initially, we acquired raw data from well-known datasets, i.e., the mobile health and human gait databases, comprising multiple human activities. The proposed strategy includes data pre-processing, hybrid feature detection, and feature-to-feature fusion and reduction, followed by codebook generation and classification, which can recognize sustainable physical healthcare patterns. Feature-to-feature fusion unites the cues from all of the sensors, and Gaussian mixture models are used for the codebook generation. For the classification, we recommend deep belief networks with restricted Boltzmann machines for five hidden layers. Finally, the results are compared with state-of-the-art techniques in order to demonstrate significant improvements in accuracy for physical healthcare pattern recognition. The experiments show that the proposed architecture attained improved accuracy rates for both datasets, and that it represents a significant sustainable physical healthcare pattern recognition (SPHR) approach. The anticipated system has potential for use in human–machine interaction domains such as continuous movement recognition, pattern-based surveillance, mobility assistance, and robot control systems.
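Codebook generation quantizes fused feature vectors into a small vocabulary of codewords. The paper uses Gaussian mixture models for this; as a simpler stand-in that illustrates the same quantization idea, here is a 1-D k-means codebook (the feature values are toy assumptions):

```python
def kmeans_codebook(features, k, iters=25):
    """Build a codebook by 1-D k-means and quantize each feature to
    its nearest codeword index. Simplified stand-in for GMM-based
    codebook generation (hard instead of soft assignments)."""
    sf = sorted(features)
    codebook = [sf[i * (len(sf) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for x in features:
            i = min(range(k), key=lambda j: abs(x - codebook[j]))
            buckets[i].append(x)
        codebook = [sum(b) / len(b) if b else codebook[i]
                    for i, b in enumerate(buckets)]
    codes = [min(range(k), key=lambda j: abs(x - codebook[j]))
             for x in features]
    return codebook, codes

feats = [0.1, 0.2, 5.0, 5.1, 9.9, 10.0]
book, codes = kmeans_codebook(feats, 3)
print(codes)  # → [0, 0, 1, 1, 2, 2]
```

With a GMM, each feature would instead receive a posterior probability over codewords, giving a soft histogram for the downstream classifier.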
Madiha Javeed; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. HF-SPHR: Hybrid Features for Sustainable Physical Healthcare Pattern Recognition Using Deep Belief Networks. Sustainability 2021, 13, 1699.
AMA Style: Madiha Javeed, Munkhjargal Gochoo, Ahmad Jalal, KiBum Kim. HF-SPHR: Hybrid Features for Sustainable Physical Healthcare Pattern Recognition Using Deep Belief Networks. Sustainability. 2021;13(4):1699.
Chicago/Turabian Style: Madiha Javeed; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. 2021. "HF-SPHR: Hybrid Features for Sustainable Physical Healthcare Pattern Recognition Using Deep Belief Networks." Sustainability 13, no. 4: 1699.
In recent years, scene understanding has become a hot research topic due to its use in perceiving, analyzing and recognizing dynamic scenes in GPS monitoring systems, drone targeting, autonomous driving and tourist guidance. The goal of scene understanding is to make machines see the world as humans do, meaning the accurate recognition of scene contents during location observations. Two operations are then performed: (1) describing the whole environment and (2) describing what action is going on in the environment. Due to the complexity of scene analysis, recognition of multiple objects and of the relations between objects remains a challenging part of this research. In this paper, we propose a novel approach to scene understanding that integrates multiple-object detection/segmentation and scene labeling using geometric features, Histogram of Oriented Gradients (HOG) and Scale-Invariant Feature Transform (SIFT) descriptors. The complete procedure of the proposed model includes resizing and de-noising images from the dataset, multiple-object segmentation and detection, feature extraction and multiple-object recognition using a multi-layer kernel sliding perceptron. After that, scene recognition is achieved using multi-class logistic regression. Finally, two datasets, MSRC and UIUC Sports, are used for the experimental evaluation of our proposed method. Our proposed method accurately handles physical exclusion between complex objects and object occlusion. Therefore, it outperforms other state-of-the-art approaches in terms of accuracy.
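The core of a HOG descriptor is a histogram of gradient orientations weighted by gradient magnitude. A minimal sketch over a small grayscale grid (one global histogram, omitting the cell/block normalization of full HOG; the image values are illustrative):

```python
import math

def orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of unsigned gradient orientations
    over a 2-D grayscale grid, via central differences."""
    hist = [0.0] * bins
    rows, cols = len(img), len(img[0])
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = img[r][c + 1] - img[r][c - 1]
            gy = img[r + 1][c] - img[r - 1][c]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.atan2(gy, gx) % math.pi   # unsigned, [0, pi)
            hist[min(int(angle / math.pi * bins), bins - 1)] += mag
    return hist

# A vertical edge: all gradient energy lands in the 0-radian bin.
img = [[0, 0, 10, 10]] * 4
h = orientation_histogram(img)
print(h.index(max(h)))  # → 0
```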
Abrar Ahmed; Ahmad Jalal; KiBum Kim. Multi-objects Detection and Segmentation for Scene Understanding Based on Texton Forest and Kernel Sliding Perceptron. Journal of Electrical Engineering & Technology 2021, 16, 1143-1150.
AMA Style: Abrar Ahmed, Ahmad Jalal, KiBum Kim. Multi-objects Detection and Segmentation for Scene Understanding Based on Texton Forest and Kernel Sliding Perceptron. Journal of Electrical Engineering & Technology. 2021;16(2):1143-1150.
Chicago/Turabian Style: Abrar Ahmed; Ahmad Jalal; KiBum Kim. 2021. "Multi-objects Detection and Segmentation for Scene Understanding Based on Texton Forest and Kernel Sliding Perceptron." Journal of Electrical Engineering & Technology 16, no. 2: 1143-1150.
Due to the constantly increasing demand for automatic tracking and recognition systems, there is a need for more proficient, intelligent and sustainable human activity tracking. The main purpose of this study is to develop an accurate and sustainable human action tracking system that is capable of error-free identification of human movements irrespective of the environment in which those actions are performed. Therefore, in this paper we propose a stereoscopic Human Action Recognition (HAR) system based on the fusion of RGB (red, green, blue) and depth sensors. These sensors provide additional depth information, which enables the three-dimensional (3D) tracking of every movement performed by humans. Human actions are tracked according to four features, namely, (1) geodesic distance; (2) 3D Cartesian-plane features; (3) joints Motion Capture (MOCAP) features and (4) way-points trajectory generation. To represent these features in an optimized form, Particle Swarm Optimization (PSO) is applied. After optimization, a neuro-fuzzy classifier is used for classification and recognition. Extensive experimentation was performed on three challenging datasets: the Nanyang Technological University (NTU) RGB+D dataset, the University of Lincoln (UoL) 3D social activity dataset and the Collective Activity Dataset (CAD). Evaluation experiments on the proposed system proved that the fusion of vision sensors along with our unique features is an efficient approach towards developing a robust HAR system, achieving a mean accuracy of 93.5% with the NTU RGB+D dataset, 92.2% with the UoL dataset and 89.6% with the Collective Activity dataset. The developed system can play a significant role in many computer vision-based applications, such as intelligent homes, offices and hospitals, and surveillance systems.
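Geodesic distance over a body surface or skeleton reduces to shortest paths on a weighted graph, computable with Dijkstra's algorithm; the joint names and edge weights below are hypothetical, chosen only to illustrate the idea:

```python
import heapq

def geodesic_distance(edges, src, dst):
    """Shortest-path (geodesic) distance over a weighted graph using
    Dijkstra. `edges` maps node -> list of (neighbour, weight)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nb, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                heapq.heappush(heap, (nd, nb))
    return float("inf")

# Hypothetical joint graph: the shoulder→elbow→hand path beats
# a longer direct shoulder→hand edge.
mesh = {"shoulder": [("elbow", 1.0), ("hand", 3.0)],
        "elbow": [("hand", 1.0)]}
print(geodesic_distance(mesh, "shoulder", "hand"))  # → 2.0
```

Unlike straight-line 3D distance, this path length follows the body's connectivity, which makes it robust to pose changes.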
Nida Khalid; Munkhjargal Gochoo; Ahmad Jalal; KiBum Kim. Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System. Sustainability 2021, 13, 970.
Human behavior modeling (HBM) is a challenging classification task for researchers seeking to develop sustainable systems that precisely monitor and record human life-logs. In recent years, several models have been proposed; however, HBM remains an inspiring problem that is only partly solved. This paper proposes a novel framework for human behavior modeling based on wearable inertial sensors; the system framework is composed of data acquisition, feature extraction, optimization and classification stages. First, inertial data are filtered via three different filters, i.e., Chebyshev, Elliptic and Bessel filters. Next, six different features from the time and frequency domains are extracted to determine the maximum optimal values. Then, the Population-Based Incremental Learning (PBIL) optimizer and the K-Ary tree hashing classifier are applied to model different human activities. The proposed model is evaluated on two benchmark datasets, namely DALIAC and PAMAP2, and one self-annotated dataset, namely IM-LifeLog. For evaluation, we used a leave-one-out cross-validation scheme. The experimental results show that our model outperformed existing state-of-the-art methods, with accuracy rates of 94.23%, 94.07% and 96.40% on the DALIAC, PAMAP2 and IM-LifeLog datasets, respectively. The proposed system can be used in healthcare, physical activity detection, surveillance systems and medical fitness fields.
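The three-filter de-noising step named in this abstract (Chebyshev, Elliptic, Bessel) can be sketched with SciPy's standard IIR designs; the sampling rate, cutoff and filter orders below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy import signal

def denoise(x, fs=50.0, cutoff=5.0, order=4):
    """Low-pass one inertial channel with Chebyshev-I, elliptic and Bessel filters."""
    wn = cutoff / (fs / 2.0)  # normalized cutoff frequency
    designs = {
        "chebyshev": signal.cheby1(order, 1, wn),     # 1 dB passband ripple
        "elliptic":  signal.ellip(order, 1, 40, wn),  # 40 dB stopband attenuation
        "bessel":    signal.bessel(order, wn),        # maximally flat group delay
    }
    # Zero-phase filtering so activity events are not shifted in time.
    return {name: signal.filtfilt(b, a, x) for name, (b, a) in designs.items()}

# A slow motion component buried in sensor noise.
t = np.linspace(0, 2, 100)
noisy = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(100)
filtered = denoise(noisy)
```

Each filter family trades passband ripple against roll-off steepness and phase linearity, which is presumably why the paper compares all three.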
Ahmad Jalal; Mouazma Batool; KiBum Kim. Sustainable Wearable System: Human Behavior Modeling for Life-Logging Activities Using K-Ary Tree Hashing Classifier. Sustainability 2020, 12, 10324.
This paper argues that human pose estimation (HPE) and sustainable event classification (SEC) require an advanced human skeleton and context-aware feature extraction approach, along with machine learning classification methods, to recognize daily events precisely. Over the last few decades, researchers have developed new mechanisms to make HPE and SEC applicable to daily human life-log events in sports, surveillance systems, human monitoring systems and the education sector. In this research article, we propose a novel HPE and SEC system for which we designed a pseudo-2D stick model. To extract full-body human silhouette features, we propose features such as energy, sine, distinct body part movements, and a 3D Cartesian view of smoothing-gradient features. Features extracted to represent human key posture points include rich 2D appearance, angular point, and multi-point autocorrelation. After the extraction of key points, we applied a hierarchical classification and optimization model via ray optimization and a K-ary tree hashing algorithm over the UCF50, HMDB51 and Olympic sports datasets. Human body key point detection accuracy was 80.9% for the UCF50 dataset, 82.1% for the HMDB51 dataset and 81.7% for the Olympic sports dataset. Event classification accuracy was 90.48% for the UCF50 dataset, 89.21% for the HMDB51 dataset and 90.83% for the Olympic sports dataset. These results indicate better performance for our approach compared to other state-of-the-art methods.
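One of the key-point descriptors named in this abstract, multi-point autocorrelation, measures how strongly a signal resembles shifted copies of itself at several lags. A minimal sketch (the lag set and normalization here are my assumptions, not the paper's):

```python
import numpy as np

def multi_point_autocorrelation(sig, lags=(1, 2, 4, 8)):
    """Normalized autocorrelation of a 1-D posture signal at several lags."""
    sig = np.asarray(sig, dtype=float)
    sig = sig - sig.mean()          # remove DC offset
    denom = np.dot(sig, sig)        # zero-lag energy, normalizes values into [-1, 1]
    return np.array([np.dot(sig[:-k], sig[k:]) / denom for k in lags])

# A periodic posture trace correlates strongly with itself near its period.
x = np.sin(np.linspace(0, 8 * np.pi, 64))
feat = multi_point_autocorrelation(x)
```

Concatenating such values across lags yields a fixed-length descriptor that captures periodicity around each key point.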
Ahmad Jalal; Israr Akhtar; KiBum Kim. Human Posture Estimation and Sustainable Events Classification via Pseudo-2D Stick Model and K-ary Tree Hashing. Sustainability 2020, 12, 9814.
Object recognition in depth images is a challenging and persistent task in machine vision, robotics and sustainable automation. Object recognition tasks are a challenging part of various multimedia technologies for video surveillance, human–computer interaction, robotic navigation, drone targeting, tourist guidance and medical diagnostics. However, the symmetry that exists in real-world objects plays a significant role in the perception and recognition of objects by both humans and machines. With advances in depth sensor technology, numerous researchers have recently proposed RGB-D object recognition techniques. In this paper, we introduce a sustainable object recognition framework that remains consistent despite changes in the environment and can recognize and analyze RGB-D objects in complex indoor scenarios. First, after acquiring a depth image, the point cloud and the depth maps are extracted to obtain the planes. Then, the plane fitting model and the proposed modified maximum likelihood estimation sampling consensus (MMLESAC) are applied as a segmentation process. Next, depth kernel descriptors (DKDES) over segmented objects are computed for single- and multiple-object scenarios separately. These DKDES are subsequently carried forward to isometric mapping (IsoMap) for feature space reduction. Finally, the reduced feature vector is forwarded to a kernel sliding perceptron (KSP) for the recognition of objects. Three datasets are used to evaluate four different experiments, employing a cross-validation scheme to validate the proposed model. The experimental results over the RGB-D object, RGB-D scene and NYUDv1 datasets demonstrate overall accuracies of 92.2%, 88.5% and 90.5%, respectively. These results outperform existing state-of-the-art methods and verify the suitability of the method.
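The plane-segmentation stage in this abstract is a sampling-consensus fit. A minimal RANSAC-style plane estimator conveys the idea; note that the paper's MMLESAC scoring is replaced here by plain inlier counting, and all thresholds are illustrative:

```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.05, seed=0):
    """Fit a plane (unit normal n, offset d with n.x + d = 0) to a 3-D point cloud."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:              # degenerate (collinear) sample, skip
            continue
        n /= norm
        d = -np.dot(n, p1)
        # Count points within `threshold` of the candidate plane.
        inliers = int(np.sum(np.abs(points @ n + d) < threshold))
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Noisy samples of the plane z = 0, plus some uniform outliers.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, (300, 2)), rng.normal(0, 0.01, 300)])
pts = np.vstack([pts, rng.uniform(-1, 1, (30, 3))])
(normal, d), count = ransac_plane(pts)
```

MMLESAC differs in how candidate planes are scored (a likelihood over residuals rather than a hard inlier count), but the hypothesize-and-verify loop is the same.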
Adnan Ahmed Rafique; Ahmad Jalal; KiBum Kim. Automated Sustainable Multi-Object Segmentation and Recognition via Modified Sampling Consensus and Kernel Sliding Perceptron. Symmetry 2020, 12, 1928.
Nowadays, wearable technology can enhance physical human life-log routines by shifting goals from merely counting steps to tackling significant healthcare challenges. Such wearable technology modules have presented opportunities to acquire important information about human activities in real-life environments. The purpose of this paper is to report on recent developments and to project future advances regarding wearable sensor systems for the sustainable monitoring and recording of human life-logs. On the basis of this survey, we propose a model that is designed to retrieve better information during physical activities in indoor and outdoor environments in order to improve the quality of life and to reduce risks. This model uses a fusion of both statistical and non-statistical features for the recognition of different activity patterns using wearable inertial sensors, i.e., triaxial accelerometers, gyroscopes and magnetometers. These features include signal magnitude, positive/negative peaks and position direction to explore signal orientation changes, position differentiation, temporal variation and optimal changes among coordinates. The features are processed by a genetic algorithm for the selection and classification of inertial signals to learn and recognize abnormal human movement. Our model was experimentally evaluated on four benchmark datasets: the Intelligent Media Wearable Smart Home Activities (IM-WSHA) dataset, a self-annotated physical activities dataset; the Wireless Sensor Data Mining (WISDM) dataset; an IM-SB dataset with different sporting patterns; and an SMotion dataset with different physical activities. Experimental results show that the proposed feature extraction strategy outperformed others, achieving improved recognition accuracies of 81.92%, 95.37%, 90.17% and 94.58% on the IM-WSHA, WISDM, IM-SB and SMotion datasets, respectively.
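The signal-magnitude feature listed first in this abstract is the standard orientation-invariant statistic for tri-axial inertial data. A minimal sketch (the summary statistics chosen here are my assumptions):

```python
import numpy as np

def signal_magnitude(acc):
    """Per-sample magnitude of a tri-axial accelerometer window of shape (N, 3)."""
    return np.linalg.norm(acc, axis=1)

def magnitude_features(acc):
    """Simple statistical descriptors computed over the magnitude signal."""
    m = signal_magnitude(acc)
    return {"mean": float(m.mean()), "std": float(m.std()), "peak": float(m.max())}

# A stationary sensor reads roughly 1 g on a single axis, whatever its orientation.
window = np.tile([0.0, 0.0, 1.0], (50, 1))
feats = magnitude_features(window)
```

Because the magnitude is independent of how the sensor is worn, such features stay stable across device orientations, which matters for real-life deployment.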
Ahmad Jalal; Majid Ali Khan Quaid; Sheikh Badar Ud Din Tahir; KiBum Kim. A Study of Accelerometer and Gyroscope Measurements in Physical Life-Log Activities Detection Systems. Sensors 2020, 20, 6670.
Recent developments in sensor technologies have made physical activity recognition (PAR) an essential tool for smart health monitoring and fitness exercises. For efficient PAR, model representation and training are significant factors in the ultimate success of recognition systems, because body parts and physical activities cannot be accurately detected and distinguished if the system is not well trained. This paper provides a unified framework that explores multidimensional features through a fusion of body part models and quadratic discriminant analysis, which uses these features for markerless human pose estimation. Multilevel features are extracted as displacement parameters that act as spatiotemporal properties; these properties represent the respective positions of the body parts over time. Finally, these features are processed by a maximum entropy Markov model as a recognition engine based on transition and emission probability values. Experimental results demonstrate that the proposed model produces more accurate results than state-of-the-art methods for both body part detection and physical activity recognition. The accuracy of the proposed method for body part detection is 90.91% on the University of Central Florida (UCF) sports action dataset; for activity recognition on the UCF YouTube action dataset and the IM-DailyRGBEvents dataset, accuracy is 89.09% and 88.26%, respectively.
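The recognition engine above decodes activity sequences from transition and emission probability values. A generic Viterbi decoder over such tables illustrates the idea (the probability numbers below are toy values, not the paper's learned model):

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Most likely hidden-state sequence given start/transition/emission tables."""
    logp = np.log(start) + np.log(emit[:, obs[0]])   # log-probability per state
    back = []                                        # backpointers per step
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans)       # (from_state, to_state)
        back.append(scores.argmax(axis=0))
        logp = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(logp.argmax())]
    for bp in reversed(back):                        # trace best path backwards
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Two hidden activities, two observation symbols (toy example).
start = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2], [0.3, 0.7]])           # row: from-state, col: to-state
emit = np.array([[0.9, 0.1], [0.2, 0.8]])            # row: state, col: observed symbol
states = viterbi([0, 0, 1, 1], start, trans, emit)   # → [0, 0, 1, 1]
```

A maximum entropy Markov model conditions its transition distribution on the observed features rather than using a fixed table, but the decoding step is this same dynamic program.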
Amir Nadeem; Ahmad Jalal; KiBum Kim. Accurate Physical Activity Recognition using Multidimensional Features and Markov Model for Smart Health Fitness. Symmetry 2020, 12, 1766.
The classification of human activity is becoming one of the most important areas of human health monitoring and physical fitness. With physical activity recognition applications, people suffering from various diseases can be efficiently monitored and medical treatment can be administered in a timely fashion. These applications could improve remote services for health care monitoring and delivery. However, the fixed health monitoring devices provided in hospitals limit the subjects' movement. In particular, our work reports on wearable sensors that provide remote monitoring, periodically checking human health through different postures and activities to give people timely and effective treatment. In this paper, we propose a novel human activity recognition (HAR) system with multiple combined features to monitor human physical movements from continuous sequences via tri-axial inertial sensors. The proposed HAR system filters 1D signals using a notch filter that examines the lower/upper cutoff frequencies to calculate the optimal wearable sensor data. It then calculates multiple combined features, i.e., statistical features, Mel-Frequency Cepstral Coefficients and Gaussian Mixture Model features. For the classification and recognition engine, a Decision Tree classifier optimized by the Binary Grey Wolf Optimization algorithm is proposed. The proposed system is applied and tested on three challenging benchmark datasets to assess the feasibility of the model. The experimental results show that our proposed system attained an exceptional level of performance compared to conventional solutions. We achieved accuracy rates of 88.25%, 93.95% and 96.83% over the MOTIONSENSE, MHEALTH and self-annotated IM-AccGyro human-machine datasets, respectively.
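The notch-filtering stage described in this abstract can be sketched with SciPy's standard IIR notch design; the sampling rate, notch frequency and quality factor below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy import signal

def notch_filter(x, fs=50.0, notch_hz=10.0, quality=30.0):
    """Suppress one narrow interfering frequency in an inertial channel."""
    b, a = signal.iirnotch(notch_hz, quality, fs=fs)
    # Zero-phase filtering so activity events keep their timing.
    return signal.filtfilt(b, a, x)

# A 1 Hz motion component contaminated by 10 Hz interference.
t = np.arange(0, 4, 1 / 50.0)
x = np.sin(2 * np.pi * 1.0 * t) + np.sin(2 * np.pi * 10.0 * t)
clean = notch_filter(x)
```

A higher quality factor narrows the rejected band, so the surrounding motion frequencies pass through nearly untouched.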
Ahmad Jalal; Mouazma Batool; KiBum Kim. Stochastic Recognition of Physical Activity and Healthcare Using Tri-Axial Inertial Wearable Sensors. Applied Sciences 2020, 10, 7122.