Sleep is a natural phenomenon controlled by the central nervous system. The sleep-wake pattern, an essential indicator of neurophysiological organization in the neonatal period, carries profound meaning for predicting cognitive disease and assessing brain maturity. In recent years, unobtrusive sleep monitoring and automatic sleep staging have been intensively studied for adults, but much less so for neonates. This work investigates a novel video-based, unobtrusive method for neonatal sleep-wake classification that analyzes behavioral changes in the neonatal facial region. A hybrid model is proposed to monitor the sleep-wake patterns of human neonates. The model combines two algorithms, a deep convolutional neural network (DCNN) and a support vector machine (SVM), where the DCNN works as a trainable feature extractor and the SVM as a classifier. Data were collected from nineteen Chinese neonates at the Children's Hospital of Fudan University, Shanghai, China. The classification results were compared with the gold standard of video-electroencephalography scored by pediatric neurologists. Validation indicates that the proposed hybrid DCNN-SVM model achieves reliable performance in classifying neonatal sleep and wake states in RGB video frames (with the face region detected), with an accuracy of 93.8 ± 2.2% and an F1-score of 0.93 ± 0.3.
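The classification stage of such a hybrid can be sketched as follows: an SVM trained on fixed-length feature vectors, with synthetic Gaussian data standing in for the DCNN activations extracted from face-cropped video frames (the feature dimension, kernel, and data here are illustrative assumptions, not the paper's actual configuration).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for DCNN feature vectors of face-cropped frames:
# two Gaussian clusters play the roles of "sleep" and "wake".
n_per_class, n_features = 100, 64
sleep = rng.normal(0.0, 1.0, (n_per_class, n_features))
wake = rng.normal(1.5, 1.0, (n_per_class, n_features))
X = np.vstack([sleep, wake])
y = np.array([0] * n_per_class + [1] * n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # SVM on fixed features
pred = clf.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred)
```

The DCNN's role is reduced here to producing the feature matrix `X`; in the paper, those features are learned end to end before being handed to the SVM.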
Muhammad Awais; Xi Long; Bin Yin; Saadullah Farooq Abbasi; Saeed Akhbarzadeh; Chunmei Lu; Xinhua Wang; Laishuan Wang; Jiong Zhang; Jeroen Dudink; Wei Chen. A Hybrid DCNN-SVM Model for Classifying Neonatal Sleep and Wake States Based on Facial Expression in Video. IEEE Journal of Biomedical and Health Informatics 2021, 25(5), 1-1.
Human movement is a significant factor in large-scale spatial-transmission models of contagious viruses. The proposed COUNTERACT system recognizes infectious sites by retrieving location data from a mobile phone linked to a particular infected subject. The approach computes an incubation phase for the subject's infection, back-tracks through the subject's location data to identify the places visited during the incubation period, classifies each such place as a contagious site, informs exposed suspects who have been to that site, and seeks real-time or near real-time feedback from suspects to confirm, discard, or refine the recognition of the infectious site. The technique relies on a mechanism that gathers location data for confirmed infected subjects and possible carrier suspects and correlates those locations over the incubation days. Security and privacy are a specific concern in the present research, and the system is accessible only through authentication and authorization. The proposed approach is intended primarily for healthcare officials, which distinguishes it from other existing systems in which all subjects must install an application. The global positioning system (GPS) location data associated with the cell phone are collected from the COVID-19 subjects.
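The back-tracking logic described above can be illustrated with a minimal sketch (the data layout, window length, and function names are hypothetical illustrations, not the COUNTERACT implementation):

```python
from datetime import date, timedelta

def contagious_sites(visits, diagnosis_day, incubation_days=14):
    """Back-track the infected subject's location log over the incubation
    window and mark every site visited in that window as contagious."""
    window_start = diagnosis_day - timedelta(days=incubation_days)
    return {site for day, site in visits if window_start <= day <= diagnosis_day}

def exposed_suspects(suspect_visits, sites):
    """Flag anyone whose own location log intersects a contagious site."""
    return {person for person, day, site in suspect_visits if site in sites}

# One confirmed subject's GPS-derived visit log: (date, site).
subject_visits = [(date(2020, 3, 1), "market"), (date(2020, 2, 1), "gym")]
sites = contagious_sites(subject_visits, diagnosis_day=date(2020, 3, 10))

suspects = exposed_suspects(
    [("alice", date(2020, 3, 2), "market"), ("bob", date(2020, 1, 5), "gym")],
    sites)
```

Only the visit to "market" falls inside the 14-day window, so only suspects who crossed that site would be notified for feedback.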
Hemant Ghayvat; Muhammad Awais; Prosanta Gope; Sharnil Pandya; Shubhankar Majumdar. ReCognizing SUspect and PredictiNg ThE SpRead of Contagion Based on Mobile Phone LoCation DaTa (COUNTERACT): A system of identifying COVID-19 infectious and hazardous sites, detecting disease outbreaks based on the internet of things, edge computing, and artificial intelligence. Sustainable Cities and Society 2021, 69, 102798.
Human Action Recognition (HAR) is the classification of an action performed by a human. The goal of this study was to recognize human actions in action video sequences. We present a novel feature descriptor for HAR that combines multiple features using a fusion technique. The major focus of the feature descriptor is to exploit action dissimilarities. The key contribution of the proposed approach is a robust feature descriptor that works across the underlying video sequences and various classification models. To achieve this objective, HAR is performed in the following manner. First, the moving object is detected and segmented from the background. Features are then calculated using the histogram of oriented gradients (HOG) from the segmented moving object. To reduce the descriptor size, we average the HOG features across non-overlapping video frames. For frequency-domain information, we calculate regional features from the Fourier HOG. Moreover, we also include the velocity and displacement of the moving object. Finally, we use a fusion technique to combine these features. Once the feature descriptor is prepared, it is provided to the classifier. Here we use well-known classifiers such as artificial neural networks (ANNs), support vector machines (SVMs), multiple kernel learning (MKL), the meta-cognitive neural network (McNN), and late fusion methods. The main objective of the proposed approach is to prepare a robust feature descriptor and to show its diversity. Although we use five different classifiers, our feature descriptor performs relatively well across all of them.
The proposed approach is evaluated and compared with state-of-the-art action recognition methods on two publicly available benchmark datasets (KTH and Weizmann) and via cross-validation on the UCF11, HMDB51, and UCF101 datasets. Results of control experiments, such as a change of SVM classifier and the effect of a second hidden layer in the ANN, are also reported. The results demonstrate that the proposed method performs reasonably well compared with the majority of existing state-of-the-art methods, including convolutional neural network-based feature extractors.
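As an illustration of the HOG-averaging and early-fusion steps, the following is a simplified sketch (a whole-frame orientation histogram without cell/block normalization, synthetic frames, and hypothetical motion values; not the paper's exact descriptor):

```python
import numpy as np

def hog_descriptor(frame, n_bins=9):
    """Orientation histogram of one frame (simplified HOG: whole-frame
    histogram, no cell/block normalization)."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned angle in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (hist.sum() + 1e-9)                # normalize to unit mass

def fused_descriptor(frames, velocity, displacement):
    """Average the per-frame HOGs across non-overlapping frames, then
    concatenate the motion features (early fusion)."""
    mean_hog = np.mean([hog_descriptor(f) for f in frames], axis=0)
    return np.concatenate([mean_hog, [velocity, displacement]])

rng = np.random.default_rng(1)
frames = rng.random((10, 32, 32))                    # 10 segmented frames
desc = fused_descriptor(frames, velocity=2.5, displacement=14.0)
```

Averaging across frames keeps the descriptor length independent of the clip length, which is why the resulting vector can be fed to any of the five classifiers unchanged.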
Chirag I. Patel; Dileep Labana; Sharnil Pandya; Kirit Modi; Hemant Ghayvat; Muhammad Awais. Histogram of Oriented Gradient-Based Fusion of Features for Human Action Recognition in Action Video Sequences. Sensors 2020, 20(24), 7299.
Objective: In this paper, we evaluate the use of pre-trained convolutional neural networks (CNNs) as a feature extractor, followed by principal component analysis (PCA) to find the most discriminant features, and a support vector machine (SVM) to classify neonatal sleep and wake states from Fluke® facial video frames. Using pre-trained CNNs as a feature extractor would greatly reduce the effort of collecting new neonatal data for training a neural network, which can be computationally expensive. Features are extracted after the fully connected layers (FCLs), and several pre-trained CNNs are compared, e.g., VGG16, VGG19, InceptionV3, GoogLeNet, ResNet, and AlexNet. Results: From around 2 h of Fluke® video recordings of seven neonates, we achieved a modest classification performance with an accuracy, sensitivity, and specificity of 65.3%, 69.8%, and 61.0%, respectively, with AlexNet using Fluke® (RGB) video frames. This indicates that using a pre-trained model as a feature extractor does not fully suffice for highly reliable sleep and wake classification in neonates. In future work, a dedicated neural network trained on neonatal data or a transfer learning approach will therefore be required.
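The extractor-PCA-SVM chain maps naturally onto a scikit-learn pipeline; the sketch below uses random vectors in place of the FCL activations (the feature dimension, number of components, kernel, and injected class separation are illustrative assumptions):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Random vectors stand in for FCL activations of a pre-trained CNN.
X = rng.normal(0.0, 1.0, (120, 256))
y = rng.integers(0, 2, 120)           # 0 = sleep, 1 = wake
X[y == 1] += 0.8                      # inject a synthetic class separation

# PCA compresses the features before the SVM sees them.
pipe = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
scores = cross_val_score(pipe, X, y, cv=5)
```

Wrapping PCA inside the pipeline ensures it is refit on each training fold, so the cross-validation estimate is not contaminated by the test fold.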
Muhammad Awais; Xi Long; Bin Yin; Chen Chen; Saeed Akbarzadeh; Saadullah Farooq Abbasi; Muhammad Irfan; Chunmei Lu; Xinhua Wang; Laishuan Wang; Wei Chen. Can pre-trained convolutional neural networks be directly used as a feature extractor for video-based neonatal sleep and wake classification? BMC Research Notes 2020, 13(1), 1-6.
Oral mucosal lesions (OML) and oral potentially malignant disorders (OPMDs) have been identified as having the potential to transform into oral squamous cell carcinoma (OSCC). This research focuses on a human-in-the-loop system named Healthcare Professionals in the Loop (HPIL) to support diagnosis through an advanced machine learning procedure. HPIL is a novel system based on the textural pattern of OML and OPMDs (anomalous regions), which differentiates them from standard regions of the oral cavity using autofluorescence imaging. This paper proposes a method based on pre-processing, e.g., the Deriche-Canny edge detector and circular Hough transform (CHT); a post-processing textural analysis approach using the gray-level co-occurrence matrix (GLCM); and a feature selection algorithm (linear discriminant analysis (LDA)), followed by a k-nearest neighbor (KNN) classifier to distinguish OPMDs from standard regions. The accuracy, sensitivity, and specificity in differentiating between standard and anomalous regions of the oral cavity are 83%, 85%, and 84%, respectively. Performance was evaluated by plotting the receiver operating characteristics of periodontist diagnosis with and without the HPIL system. This method of classifying OML and OPMD areas may help dental specialists identify anomalous regions, perform biopsies more efficiently, and predict the histological diagnosis of epithelial dysplasia.
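The GLCM texture features at the heart of this pipeline can be computed in a few lines of NumPy; the sketch below derives contrast and homogeneity from a single-offset co-occurrence matrix (the offset, number of gray levels, and feature choice are illustrative, not the paper's full configuration):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1       # count co-occurring pairs
    return m / m.sum()

def haralick_features(p):
    """Two common GLCM texture features: contrast and homogeneity."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    homogeneity = float(np.sum(p / (1.0 + (i - j) ** 2)))
    return contrast, homogeneity

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (32, 32))      # stand-in for an autofluorescence patch
p = glcm(img)
contrast, homogeneity = haralick_features(p)
```

A feature vector of several such statistics, computed per candidate region, is what the LDA/KNN stages would consume downstream.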
Muhammad Awais; Hemant Ghayvat; Anitha Krishnan Pandarathodiyil; Wan Maria Nabillah Ghani; Anand Ramanathan; Sharnil Pandya; Nicolas Walter; Mohamad Naufal Saad; Rosnah Binti Zain; Ibrahima Faye. Healthcare Professional in the Loop (HPIL): Classification of Standard and Oral Cancer-Causing Anomalous Regions of Oral Cavity Using Textural Analysis Technique in Autofluorescence Imaging. Sensors 2020, 20(20), 5780.
Objective: Classification of sleep-wake states using multichannel electroencephalography (EEG) data in a way that works reliably for neonates. Methods: A deep multilayer perceptron (MLP) neural network is developed to classify sleep-wake states using multichannel bipolar EEG signals; it takes an input vector of size 108 containing the joint features of 9 channels. The network avoids any post-processing step so that it can work as a full-fledged real-time application. For training and testing the model, EEG recordings of 3525 30-second segments from 19 neonates (postmenstrual age of 37 ± 5 weeks) are used. Results: For sleep-wake classification, the mean Cohen's kappa between the network estimate and the ground-truth annotation by human experts is 0.62. The maximum mean accuracy reaches 83%, which, to date, is the highest reported accuracy for neonatal sleep-wake classification.
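A minimal version of the described classifier, with random vectors standing in for the 108-dimensional joint feature input and Cohen's kappa as the agreement metric (the hidden-layer sizes, training split, and injected separation are assumptions, not the paper's architecture), could look like:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Random vectors stand in for the 108-dim joint features of 9 EEG channels.
n_segments = 400
X = rng.normal(0.0, 1.0, (n_segments, 108))
y = rng.integers(0, 2, n_segments)        # 0 = sleep, 1 = wake
X[y == 1, :20] += 1.0                     # inject a synthetic class separation

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X[:300], y[:300])                 # train on the first 300 segments
kappa = cohen_kappa_score(y[300:], clf.predict(X[300:]))
```

Cohen's kappa is preferred over raw accuracy here because sleep and wake segments are rarely balanced, and kappa corrects for chance agreement.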
Saadullah Farooq Abbasi; Jawad Ahmad; Ahsen Tahir; Muhammad Awais; Chen Chen; Muhammad Irfan; Hafiza Ayesha Siddiqa; Abu Bakar Waqas; Xi Long; Bin Yin; Saeed Akbarzadeh; Chunmei Lu; Laishuan Wang; Wei Chen. EEG-Based Neonatal Sleep-Wake Classification Using Multilayer Perceptron Neural Network. IEEE Access 2020, 8, 183025-183034.
Air pollution has been a looming issue of the 21st century, with a significant impact on the environment and societal health. Previous studies have conducted extensive research on air pollution and air quality monitoring; despite this, both fields remain plagued with unsolved problems. In this study, the Pollution Weather Prediction (PWP) system is proposed to predict air pollution at outdoor sites for various pollution parameters. The presented PWP system is configured with pollution-sensing units, such as the SDS021, MQ07-CO, NO2-B43F, and Aeroqual Ozone (O3) sensors, which were used to collect and measure pollutant levels, such as PM2.5, PM10, CO, NO2, and O3, for 90 days at Symbiosis International University, Pune, Maharashtra, India. Data collection was carried out from December 2019 to February 2020, during the winter. The investigation results validate the success of the presented PWP system. In the conducted experiments, linear regression and artificial neural network (ANN)-based AQI (air quality index) predictions were performed. The study also found that the customized linear regression methodology outperformed the other machine learning methods used in the experiments, such as linear, ridge, Lasso, Bayes, Huber, Lars, Lasso-Lars, stochastic gradient descent (SGD), and ElasticNet regression, as well as the customized ANN regression methodology. The overall AQI value was calculated as the summation of the AQI values of all the monitored air pollutants. Finally, web and mobile interfaces were developed to display the predicted pollution values for a variety of air pollutants.
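The linear-regression AQI prediction and the summation of per-pollutant sub-indices can be sketched as follows (the linear scaling of each pollutant to a sub-index is a hypothetical stand-in for the official piecewise AQI breakpoint tables, and the data are synthetic):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Synthetic readings for the five monitored pollutants: PM2.5, PM10, CO, NO2, O3.
max_levels = np.array([250.0, 430.0, 30.0, 400.0, 200.0])
X = rng.uniform(0.0, 1.0, (200, 5)) * max_levels

# Hypothetical linear sub-indices (0..500) standing in for breakpoint tables.
sub_aqi = X / max_levels * 500.0
# Overall AQI as the sum of sub-indices, following the paper's description,
# plus measurement noise.
y = sub_aqi.sum(axis=1) + rng.normal(0.0, 5.0, 200)

model = LinearRegression().fit(X[:150], y[:150])   # train on first 150 days
r2 = model.score(X[150:], y[150:])                 # evaluate on the rest
```

With a linear sub-index mapping, ordinary least squares recovers the relationship almost exactly; the paper's customized regression addresses the nonlinearities that the real breakpoint tables introduce.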
Sharnil Pandya; Hemant Ghayvat; Anirban Sur; Muhammad Awais; Ketan Kotecha; Santosh Saxena; Nandita Jassal; Gayatri Pingale. Pollution Weather Prediction System: Smart Outdoor Pollution Monitoring and Prediction for Healthy Breathing and Living. Sensors 2020, 20(18), 5448.
With the increasing penetration of ubiquitous connectivity, the amount of data describing the actions of end users has been growing dramatically, both within the domain of the Internet of Things (IoT) and other smart devices. This has led to greater awareness among users about protecting personal data. Within the IoT, there is a growing number of peer-to-peer (P2P) transactions, increasing the exposure to security vulnerabilities and the risk of cyberattacks. Blockchain technology has been explored as middleware in P2P transactions, but existing solutions have mainly focused on providing a safe environment for data trade without considering potential changes in interaction topologies. We present EdgeBoT, a proof-of-concept smart-contract-based platform for the IoT built on top of the Ethereum blockchain. With the Blockchain of Things (BoT) at the edge of the network, EdgeBoT enables a wider variety of interaction topologies between nodes in the network and external services while guaranteeing data ownership and end users' privacy. In EdgeBoT, edge devices trade their data directly with third parties, without the need for intermediaries. This opens the door to new interaction modalities in which data producers at the edge grant different third parties access to batches of their data. Leveraging the immutability properties of blockchains, together with the distributed nature of smart contracts, data owners can audit and are aware of all transactions that have occurred with their data. We report initial results demonstrating the potential of EdgeBoT within the IoT. We show that integrating our solutions on top of existing IoT systems has a relatively small footprint in terms of computational resource usage, but a significant impact on the protection of data ownership and the management of data trade.
Anum Nawaz; Jorge Peña Queralta; Jixin Guan; Muhammad Awais; Tuan Gia; Ali Bashir; Haibin Kan; Tomi Westerlund. Edge Computing to Secure IoT Data Ownership and Trade with the Ethereum Blockchain. Sensors 2020, 20(14), 3965.
In recent times, with the advancement of digital imaging, automatic facial recognition has been intensively studied for adults, but much less so for neonates. Owing to their miniature facial structure and facial attributes, newborn facial recognition remains a challenging area. In this paper, an automatic video-based Neonatal Face Attributes Recognition (NFAR) approach in a hierarchical framework is proposed by coalescing an intensity-based method, pose estimation, and a novel dedicated neonatal Face Feature Selection (FFS) algorithm. The intensity-based method is used for face detection, while the facial pose estimation algorithm and FFS are dedicated to neonatal pose estimation and face feature recognition, respectively. In this study, video data of 19 neonates were collected from the Children's Hospital affiliated to Fudan University, Shanghai, to evaluate the proposed NFAR approach. The results show promising performance for neonatal face detection, pose estimation (within -45° to 45°), and facial feature (nose, mouth, and eyes) recognition. The NFAR approach exhibits a sensitivity, accuracy, and specificity of 98.7%, 98.5%, and 95.7%, respectively, for newborn babies at the frontal (0°) facial region. Recognition of the neonatal face and its attributes can be expected to detect neonates' medical abnormalities unobtrusively by examining variations in the newborn facial texture pattern.
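The hierarchical gating of the framework, where the pose estimate decides whether the dedicated FFS stage runs at all, can be sketched with stub stages (all function bodies below are hypothetical placeholders, not the paper's algorithms):

```python
import numpy as np

def detect_face(frame, thresh=0.5):
    """Intensity-based stand-in: return a full-frame box when the mean
    intensity clears a threshold, otherwise report no face."""
    h, w = frame.shape
    return (0, 0, w, h) if frame.mean() > thresh else None

def estimate_pose(face):
    """Hypothetical yaw estimate in degrees, derived here from the
    left/right intensity asymmetry of the crop."""
    h, w = face.shape
    left, right = face[:, : w // 2].mean(), face[:, w // 2 :].mean()
    return float(np.clip((right - left) * 90.0, -90.0, 90.0))

def recognize_attributes(face):
    """Placeholder for the FFS step (eyes, nose, mouth localization)."""
    return {"eyes": True, "nose": True, "mouth": True}

def nfar_pipeline(frame):
    box = detect_face(frame)
    if box is None:                       # stage 1: face detection
        return None
    x, y, w, h = box
    face = frame[y : y + h, x : x + w]
    yaw = estimate_pose(face)             # stage 2: pose estimation
    if not -45.0 <= yaw <= 45.0:          # only near-frontal poses reach FFS
        return {"yaw": yaw, "attributes": None}
    return {"yaw": yaw, "attributes": recognize_attributes(face)}

frame = np.full((64, 64), 0.8)            # synthetic frontal "face"
result = nfar_pipeline(frame)
```

Cascading the stages this way means the expensive attribute-recognition step only runs on frames where a near-frontal face is actually present.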
Muhammad Awais; Chen Chen; Xi Long; Bin Yin; Anum Nawaz; Saadullah Farooq Abbasi; Saeed Akbarzadeh; Linkai Tao; Chunmei Lu; Laishuan Wang; Ronald M. Aarts; Wei Chen. Novel Framework: Face Feature Selection Algorithm for Neonatal Facial and Related Attributes Recognition. IEEE Access 2020, 8, 59100-59113.
Background: Ambiguities and anomalies in activity of daily living (ADL) patterns indicate deviations from wellness. Monitoring lifestyles could enable remote physicians or caregivers to gain insight into symptoms of disease and provide health improvement advice to residents. Objective: This research aims to apply lifestyle monitoring in an ambient assisted living (AAL) system by diagnosing behavior and distinguishing deviations from the norm with the fewest possible false alarms. In pursuing this aim, the main objective is to fill the knowledge gap of two contextual observations (i.e., day and time) in frequent behavior modeling for an individual in AAL. Each sensing category has its advantages and restrictions, and a single type of sensing unit may not handle composite states in practice, missing activities of daily living. To boost the efficiency of the system, we offer a dedicated sensor data fusion technique across different sensing modalities. Methods: As behaviors may also change according to other contextual observations, including season, weather (or temperature), and social interaction, we propose the design of a novel activity learning model that adds behavioral observations, which we name the wellness indices analysis model. Results: The ground-truth data were collected from four elderly houses, including daily activities, with a sample size of three hundred days of sensor activations. The investigation results validate the success of our method. The new feature set from sensor data fusion enhances the system accuracy from 80.81% ± 0.68 to 98.17% ± 0.95. The performance evaluation parameters of the proposed model for ADL recognition are recorded for the 14 selected activities: sensitivity (0.9852), specificity (0.9988), accuracy (0.9974), F1 score (0.9851), and false negative rate (0.0130).
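The fusion of sensor activations with the two contextual observations (day and time) can be sketched as a simple feature-vector construction (the encoding choices here, a one-hot day of week and a cyclic hour, are illustrative assumptions, not the study's exact feature set):

```python
import numpy as np
from datetime import datetime

def adl_feature_vector(sensor_counts, timestamp):
    """Fuse per-sensor activation counts with the two contextual
    observations highlighted in the study: day of week and time of day."""
    t = datetime.fromisoformat(timestamp)
    day = np.zeros(7)
    day[t.weekday()] = 1.0                            # one-hot day of week
    hour_angle = 2.0 * np.pi * t.hour / 24.0
    tod = [np.sin(hour_angle), np.cos(hour_angle)]    # cyclic time of day
    return np.concatenate([sensor_counts, day, tod])

# Four hypothetical sensing units (e.g. motion, door, bed, kitchen).
v = adl_feature_vector(np.array([3.0, 0.0, 1.0, 5.0]), "2019-02-04T07:30:00")
```

The cyclic sine/cosine encoding keeps 23:00 and 01:00 close in feature space, so the same routine at slightly shifted hours is not flagged as anomalous.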
Hemant Ghayvat; Muhammad Awais; Sharnil Pandya; Hao Ren; Saeed Akbarzadeh; Subhas Chandra Mukhopadhyay; Chen Chen; Prosanta Gope; Arpita Chouhan; Wei Chen. Smart Aging System: Uncovering the Hidden Wellness Parameter for Well-Being Monitoring and Anomaly Detection. Sensors 2019, 19(4), 766.
The proposed research methodology aims to design a generally implementable framework for providing a house owner/member with immediate notification of an ongoing theft (unauthorized access to their premises). For this purpose, a rigorous analysis of existing systems was undertaken to identify research gaps. The problems found with existing systems were that they can only identify the intruder after the theft, or cannot distinguish between human and non-human objects. Wireless sensor networks (WSNs), combined with the Internet of Things (IoT) and the Cognitive Internet of Things, are expanding smart home concepts, solutions, and their applications. The present research proposes a novel smart home anti-theft system that can detect an intruder even if they have partially or fully hidden their face using clothing, leather, fiber, or plastic materials. The proposed system can also detect an intruder in the dark using a CCTV camera without night vision capability. The fundamental idea was to design a cost-effective and efficient system that detects any kind of theft in real time and provides instant notification to the house owner. The system also promises to handle large volumes of video data in real time. The investigation results validate the success of the proposed system. System accuracy was enhanced from 85%, 64.13%, 56.70%, and 44.01% to 97.01%, 84.13%, 78.19%, and 66.5% in the scenarios where the detected intruder had not hidden his/her face, had partially hidden it, had fully hidden it, and was detected in the dark, respectively.
Sharnil Pandya; Hemant Ghayvat; Ketan Kotecha; Mohammed Awais; Saeed Akbarzadeh; Prosanta Gope; Subhas Chandra Mukhopadhyay; Wei Chen. Smart Home Anti-Theft System: A Novel Approach for Near Real-Time Monitoring and Smart Home Security for Wellness Protocol. Applied System Innovation 2018, 1(4), 42.