Human activity recognition using smartphones has been attracting great interest. Since collecting a large amount of labeled data is expensive and time-consuming for conventional machine learning techniques, transfer learning techniques have been proposed for activity recognition. However, existing transfer learning techniques typically rely on feature matching based on global domain shift and do not consider intra-class knowledge transfer. In this paper, a novel transfer learning technique is proposed for cross-domain activity recognition, which properly integrates feature matching and instance reweighting across the source and target domains in a principled dimensionality reduction procedure. Experiments using three real datasets demonstrate that the proposed scheme achieves much higher precision (92%), recall (91%), and F1-score (92%) than the existing schemes.
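Cross-domain feature matching of this kind is commonly quantified with a maximum mean discrepancy (MMD)-style distance between domains; the paper's exact objective is not reproduced here, but a minimal linear-kernel MMD sketch (all names illustrative) looks like:

```python
def mmd_linear(source, target):
    """Squared MMD with a linear kernel: ||mean(source) - mean(target)||^2.
    Each argument is a list of equal-length feature vectors; a small value
    means the two domains' feature distributions are well matched."""
    dim = len(source[0])
    mu_s = [sum(x[d] for x in source) / len(source) for d in range(dim)]
    mu_t = [sum(x[d] for x in target) / len(target) for d in range(dim)]
    return sum((a - b) ** 2 for a, b in zip(mu_s, mu_t))
```

Minimizing such a distance while reducing dimensionality is the usual way feature matching and projection are coupled.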
Xianyao Chen; Kyung Tae Kim; Hee Yong Youn. Feature matching and instance reweighting with transfer learning for human activity recognition using smartphone. The Journal of Supercomputing 2021, 1-28.
AMA Style: Xianyao Chen, Kyung Tae Kim, Hee Yong Youn. Feature matching and instance reweighting with transfer learning for human activity recognition using smartphone. The Journal of Supercomputing. 2021:1-28.
Chicago/Turabian Style: Xianyao Chen; Kyung Tae Kim; Hee Yong Youn. 2021. "Feature matching and instance reweighting with transfer learning for human activity recognition using smartphone." The Journal of Supercomputing: 1-28.
The popularity of diverse IoT-based applications and services continuously generating a tremendous amount of data has revealed the significance of data compression (DC). Principal component analysis (PCA) is one of the most commonly employed algorithms for DC. However, when dealing with large-scale matrices, the standard PCA takes a very long time and requires a large amount of memory. Therefore, this paper presents a novel distributed stochastic PCA algorithm (DSPCA) for hierarchical sensor networks based on gradient-based adaptive PCA (GA-PCA), where the standard PCA is reformulated in a single-pass stochastic setting to find the direction of approximately maximal variance. The step-size of each iteration is obtained by incorporating the stabilized Barzilai-Borwein method into the gradient optimization. This enables DSPCA to run with low computational complexity while maintaining a high convergence speed. Computer simulation with two types of datasets shows that the proposed scheme consistently outperforms the representative DC schemes in terms of reconstruction accuracy of the original data and explained variance.
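The single-pass distributed machinery is beyond a short example, but the core idea of a Barzilai-Borwein (BB) step-size inside gradient-based PCA can be sketched on a toy 2x2 covariance. This is an illustrative full-gradient version with the "stabilized" behavior approximated by capping the step at `eta_max` (an assumption, not the paper's exact rule):

```python
def bb_top_eigvec(C, iters=60, eta0=0.1, eta_max=1.0):
    """Gradient ascent on the Rayleigh quotient with a Barzilai-Borwein
    step-size; the cap eta_max plays the role of stabilization (sketch)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    matvec = lambda M, v: [dot(row, v) for row in M]
    w = [1.0, 0.0]
    w_prev = g_prev = None
    eta = eta0
    for _ in range(iters):
        g = matvec(C, w)                      # gradient direction C w
        if g_prev is not None:
            s = [a - b for a, b in zip(w, w_prev)]
            y = [a - b for a, b in zip(g, g_prev)]
            if abs(dot(s, y)) > 1e-12:
                eta = min(abs(dot(s, s) / dot(s, y)), eta_max)  # BB1 step, capped
        w_prev, g_prev = w, g
        w = [a + eta * b for a, b in zip(w, g)]
        norm = dot(w, w) ** 0.5
        w = [a / norm for a in w]             # stay on the unit sphere
    return w

w = bb_top_eigvec([[3.0, 1.0], [1.0, 2.0]])   # toy covariance matrix
```

The BB step reuses the last two iterates to approximate second-order curvature, which is what gives the fast convergence without tuning a learning-rate schedule.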
Pei Heng Li; Hee Yong Youn. Distributed stochastic principal component analysis using stabilized Barzilai-Borwein step-size for data compression with WSN. The Journal of Supercomputing 2021, 1-20.
AMA Style: Pei Heng Li, Hee Yong Youn. Distributed stochastic principal component analysis using stabilized Barzilai-Borwein step-size for data compression with WSN. The Journal of Supercomputing. 2021:1-20.
Chicago/Turabian Style: Pei Heng Li; Hee Yong Youn. 2021. "Distributed stochastic principal component analysis using stabilized Barzilai-Borwein step-size for data compression with WSN." The Journal of Supercomputing: 1-20.
A wireless sensor network (WSN) is used for data collection and transmission in the IoT environment. Since it consists of a large number of sensor nodes, a significant amount of redundant data and outliers are generated, which substantially deteriorates the network performance. Data aggregation is needed to reduce energy consumption and prolong the lifetime of the WSN. In this paper a novel data aggregation scheme is proposed which is based on a modified radial basis function neural network to classify the collected data at the cluster head and eliminate the redundant data and outliers. Additionally, cosine similarity is used to cluster the nodes having the most similar data. The radial basis function (RBF) is adapted with the Mahalanobis distance to support outlier detection and analysis in multivariate data. The data collected from the sensor nodes at the cluster head are processed by the Mahalanobis distance-based radial basis function neural network (MDRBF-NN) before being transferred to the base station. Extensive computer simulation with real datasets shows that the proposed scheme consistently outperforms the existing representative data aggregation schemes in terms of data classification, outlier detection, and energy efficiency.
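The key substitution above is replacing the Euclidean distance inside the RBF with a Mahalanobis distance, so that axes with large natural variance are penalized less. A minimal sketch, assuming a diagonal covariance for readability (the full method uses the sample covariance matrix):

```python
import math

def rbf_euclid(x, c):
    """Standard Gaussian RBF activation with Euclidean distance."""
    return math.exp(-0.5 * sum((a - b) ** 2 for a, b in zip(x, c)))

def rbf_mahalanobis(x, c, var):
    """Gaussian RBF with a (diagonal, illustrative) Mahalanobis distance:
    each squared deviation is scaled by that axis's variance."""
    d2 = sum((a - b) ** 2 / v for a, b, v in zip(x, c, var))
    return math.exp(-0.5 * d2)

center, variances = (0.0, 0.0), (4.0, 0.25)
# Same Euclidean offset, very different Mahalanobis activation:
a = rbf_mahalanobis((2.0, 0.0), center, variances)  # along high-variance axis
b = rbf_mahalanobis((0.0, 2.0), center, variances)  # along low-variance axis
```

A point two units out along the low-variance axis is a far stronger outlier candidate than the same offset along the high-variance axis, which the Euclidean RBF cannot distinguish.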
Ihsan Ullah; Hee Yong Youn; Youn-Hee Han. An efficient data aggregation and outlier detection scheme based on radial basis function neural network for WSN. Journal of Ambient Intelligence and Humanized Computing 2021, 1 -17.
AMA StyleIhsan Ullah, Hee Yong Youn, Youn-Hee Han. An efficient data aggregation and outlier detection scheme based on radial basis function neural network for WSN. Journal of Ambient Intelligence and Humanized Computing. 2021; ():1-17.
Chicago/Turabian StyleIhsan Ullah; Hee Yong Youn; Youn-Hee Han. 2021. "An efficient data aggregation and outlier detection scheme based on radial basis function neural network for WSN." Journal of Ambient Intelligence and Humanized Computing , no. : 1-17.
A wireless sensor network (WSN) is effective for monitoring the target environment, and it consists of a large number of sensor nodes of limited energy. An efficient medium access control (MAC) protocol is thus imperative to maximize the energy efficiency and performance of the WSN. Most existing MAC protocols are based on the scheduling of the sleep and active periods of the nodes, and do not consider the relationship between the load condition and performance. In this paper a novel scheme is proposed to properly determine the duty cycle of the WSN nodes according to the load, which employs the Q-learning technique and function approximation with linear regression. This allows low-latency energy-efficient scheduling for a wide range of traffic conditions, and effectively overcomes the limitation of Q-learning with continuous state-action spaces. NS3 simulation reveals that the proposed scheme significantly improves the throughput, latency, and energy efficiency compared to the existing fully active scheme and S-MAC.
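The Q-learning core of such a scheme can be sketched in tabular form. The states, actions, and reward model below are illustrative stand-ins (the paper additionally uses linear-regression function approximation over continuous states, omitted here):

```python
# Toy tabular Q-learning for duty-cycle selection (illustrative reward model).
states = ["low_traffic", "high_traffic"]
actions = [0.1, 0.5, 0.9]          # candidate duty cycles

def reward(s, a):
    # Assumption for the sketch: high traffic rewards a high duty cycle
    # (throughput), low traffic rewards a low one (energy saving).
    demand = 0.1 if s == "low_traffic" else 0.9
    return -abs(demand - a)

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.5, 0.9
for _ in range(200):
    for s in states:
        for a in actions:
            best_next = max(Q[(s, b)] for b in actions)   # traffic state persists
            Q[(s, a)] += alpha * (reward(s, a) + gamma * best_next - Q[(s, a)])

policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
```

After convergence the greedy policy picks a low duty cycle under light load and a high one under heavy load, which is exactly the load-adaptive behavior the scheme targets.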
Han Yao Huang; Kyung Tae Kim; Hee Yong Youn. Determining node duty cycle using Q-learning and linear regression for WSN. Frontiers of Computer Science 2020, 15, 1-7.
AMA Style: Han Yao Huang, Kyung Tae Kim, Hee Yong Youn. Determining node duty cycle using Q-learning and linear regression for WSN. Frontiers of Computer Science. 2020;15(1):1-7.
Chicago/Turabian Style: Han Yao Huang; Kyung Tae Kim; Hee Yong Youn. 2020. "Determining node duty cycle using Q-learning and linear regression for WSN." Frontiers of Computer Science 15, no. 1: 1-7.
Spatial and temporal correlation between sensor observations in an Internet of Things environment can be exploited to eliminate unnecessary transmissions. Transmitting less data certainly contributes to meeting the growing need for energy-saving and robust transmissions, thus prolonging the lifespan of the entire WSN. Spatiotemporal correlation-based dual prediction (DP) and data compression (DC) schemes aim to reduce the amount of data transmission while ensuring data accuracy. In practice, however, the existing methods limit the stability of the system when the model hyper-parameters are uncertain. Thus, adaptive models have lately attracted extensive attention for the development of resource-constrained WSNs. In this paper, we propose a gradient-based adaptive model that implements both schemes in a two-tier data reduction framework. To the best of our knowledge, the proposed scheme is the first attempt to introduce adaptiveness into both the DP and DC schemes by using a simple gradient optimization method. Gradient-based Optimal Step-size LMS (GO-LMS) is introduced to make the DP aspects adaptive, while a Gradient-based Adaptive PCA (GA-PCA) approach is used for the DC aspects. The Barzilai–Borwein method is incorporated into the gradient optimization to enable adaptive computation of the step-size in each iteration. Through extensive simulations, the developed framework was found to outperform other state-of-the-art schemes in terms of both the transmission reduction ratio and data recovery accuracy.
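The dual-prediction idea is that sensor and sink run identical predictors, so a reading is transmitted only when the model misses it. A minimal one-tap NLMS sketch (the paper adapts the step-size `mu` with the Barzilai–Borwein rule; a fixed `mu` and all parameter values here are illustrative):

```python
def dual_prediction(samples, mu=0.5, eps=0.5):
    """Dual-prediction sketch: sensor and sink run the same one-tap NLMS
    predictor; a reading is transmitted only when the prediction misses
    by more than eps."""
    w, last = 0.0, 0.0                # shared weight and last reconstructed value
    transmitted = []
    for x in samples:
        pred = w * last
        if abs(x - pred) > eps:       # model failed: send the real reading
            transmitted.append(x)
            value = x
        else:                         # sink uses the prediction instead
            value = pred
        err = value - pred            # identical NLMS update on both ends
        w += mu * err * last / (last * last + 1e-9)
        last = value
    return transmitted

sent = dual_prediction([20.0] * 40)   # near-constant signal: few transmissions
```

Because both ends update from the *reconstructed* value rather than the raw reading, their models never diverge, and the transmission count drops sharply once the predictor locks on.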
Pei Heng Li; Hee Yong Youn. Gradient-based adaptive modeling for IoT data transmission reduction. Wireless Networks 2020, 26, 1-14.
AMA Style: Pei Heng Li, Hee Yong Youn. Gradient-based adaptive modeling for IoT data transmission reduction. Wireless Networks. 2020;26(8):1-14.
Chicago/Turabian Style: Pei Heng Li; Hee Yong Youn. 2020. "Gradient-based adaptive modeling for IoT data transmission reduction." Wireless Networks 26, no. 8: 1-14.
Pipeline processing is applied to the multiple flow tables (MFT) in the switch of a software-defined network (SDN) to increase the throughput of the flows. However, the processing time of each flow increases as the size or number of flow tables gets larger. In this paper we propose a novel approach called PopFlow where a table keeping popular flow entries is located up front in the pipeline, and an express path is provided for the flows matching the table. A Markov model is employed for the selection of popular entries considering the match latency and match frequency, and queuing theory is used to model the flow processing time of the existing MFT-based schemes and the proposed scheme. Computer simulation reveals that the proposed scheme substantially reduces the flow processing time compared to the existing schemes, and the difference gets more significant as the flow arrival rate increases.
Cheng Wang; Kyung Tae Kim; Hee Yong Youn. PopFlow: a novel flow management scheme for SDN switch of multiple flow tables based on flow popularity. Frontiers of Computer Science 2020, 14, 1-12.
AMA Style: Cheng Wang, Kyung Tae Kim, Hee Yong Youn. PopFlow: a novel flow management scheme for SDN switch of multiple flow tables based on flow popularity. Frontiers of Computer Science. 2020;14(6):1-12.
Chicago/Turabian Style: Cheng Wang; Kyung Tae Kim; Hee Yong Youn. 2020. "PopFlow: a novel flow management scheme for SDN switch of multiple flow tables based on flow popularity." Frontiers of Computer Science 14, no. 6: 1-12.
Various dimensionality reduction (DR) schemes have been developed for projecting high-dimensional data into a low-dimensional representation. The existing schemes usually preserve either only the global structure or the local structure of the original data, but not both. To resolve this issue, a scheme called sparse locality for principal component analysis (SLPCA) is proposed. In order to effectively consider the trade-off between the complexity and efficiency, a robust L2,p-norm-based principal component analysis (R2P-PCA) is introduced for global DR, while sparse representation-based locality preserving projection (SR-LPP) is used for local DR. Sparse representation is also employed to construct the weighted matrix of the samples. Being parameter-free, this allows the construction of an intrinsic graph more robust against noise. In addition, simultaneous learning of the projection matrix and sparse similarity matrix is possible. Experimental results demonstrate that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy and data reconstruction error.
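The L2,p norm that underlies the robust global-DR term is simply the p-norm of the vector of row-wise L2 norms; with p < 2 it down-weights outlier rows relative to the squared Frobenius norm. A minimal sketch of just the norm (the optimization built on it is not reproduced):

```python
def l2p_norm(A, p):
    """L2,p norm of a matrix given as a list of rows: the p-norm of the
    vector of row-wise Euclidean norms."""
    row_norms = [sum(v * v for v in row) ** 0.5 for row in A]
    return sum(r ** p for r in row_norms) ** (1.0 / p)

A = [[3.0, 4.0], [0.0, 0.0], [5.0, 12.0]]   # row norms: 5, 0, 13
```

With p = 2 this reduces to the Frobenius norm; smaller p makes a single large-residual row dominate the objective less.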
Pei Heng Li; Taeho Lee; Hee Yong Youn. Dimensionality Reduction with Sparse Locality for Principal Component Analysis. Mathematical Problems in Engineering 2020, 2020, 1-12.
AMA Style: Pei Heng Li, Taeho Lee, Hee Yong Youn. Dimensionality Reduction with Sparse Locality for Principal Component Analysis. Mathematical Problems in Engineering. 2020;2020:1-12.
Chicago/Turabian Style: Pei Heng Li; Taeho Lee; Hee Yong Youn. 2020. "Dimensionality Reduction with Sparse Locality for Principal Component Analysis." Mathematical Problems in Engineering 2020: 1-12.
Efficient data collection and communication are key tasks in smart IoT environments consisting of a large number of devices. Here, imprecise data are generated due to interference between the devices and harsh operating conditions, and therefore data fusion is needed to gather and extract useful data from multiple sources. A number of approaches for data fusion have been proposed which are based on probability, artificial intelligence, or evidence theory to efficiently aggregate the data. These techniques allow the system to be cognitive and intelligent in terms of decision-making under the uncertainty of data and limited resources. In this paper a comprehensive survey on the data fusion techniques for smart IoT systems is presented. The challenges and opportunities with data fusion are also delineated. It will be useful for researchers developing applications and services based on smart IoT environments, which require intelligent decision making.
Ihsan Ullah; Hee Yong Youn. Intelligent Data Fusion for Smart IoT Environment: A Survey. Wireless Personal Communications 2020, 114, 409-430.
AMA Style: Ihsan Ullah, Hee Yong Youn. Intelligent Data Fusion for Smart IoT Environment: A Survey. Wireless Personal Communications. 2020;114(1):409-430.
Chicago/Turabian Style: Ihsan Ullah; Hee Yong Youn. 2020. "Intelligent Data Fusion for Smart IoT Environment: A Survey." Wireless Personal Communications 114, no. 1: 409-430.
The rapid evolution of the Internet of Things and cloud computing has fostered a novel computing paradigm called edge computing. Here tasks are processed by edge devices before being sent to the cloud to reduce the computational latency and overhead of the cloud server. In edge computing, efficient classification and distribution of the tasks among the constituent nodes is a challenging issue because of their resource limitation and heterogeneity. In this paper a novel scheme named KTCS (K-means Clustering-based Task Classification and Scheduling) is proposed which classifies each task based on the type of resource requirement, i.e., CPU, I/O, or COMM, before it is distributed to an edge node. Using the K-means algorithm modeled with M/M/c queuing theory, the proposed scheme efficiently schedules and assigns the tasks so that the utilization of the edge devices can be increased. The simulation results reveal that the proposed scheme significantly improves the performance of edge nodes in terms of task execution time and resource utilization.
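The classification step can be pictured as plain k-means over per-task resource-demand vectors. The task features and initial centroids below are illustrative, not the paper's:

```python
def kmeans(points, centroids, iters=20):
    """Plain k-means (sketch): tasks described by resource-demand vectors
    are grouped so each cluster can be dispatched to a matching edge node."""
    labels = []
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        labels = [min(range(len(centroids)),
                      key=lambda k: sum((p - c) ** 2
                                        for p, c in zip(pt, centroids[k])))
                  for pt in points]
        # update step: recompute each centroid as the mean of its members
        for k in range(len(centroids)):
            members = [pt for pt, lb in zip(points, labels) if lb == k]
            if members:
                centroids[k] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# toy tasks as (cpu_demand, io_demand): two CPU-bound, two I/O-bound
tasks = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
labels = kmeans(tasks, centroids=[[1.0, 0.0], [0.0, 1.0]])
```

Once tasks are labeled, each cluster can be fed to a separate M/M/c queue model to decide how many edge servers it needs.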
Ihsan Ullah; Hee Yong Youn. Task Classification and Scheduling Based on K-Means Clustering for Edge Computing. Wireless Personal Communications 2020, 113, 2611-2624.
AMA Style: Ihsan Ullah, Hee Yong Youn. Task Classification and Scheduling Based on K-Means Clustering for Edge Computing. Wireless Personal Communications. 2020;113(4):2611-2624.
Chicago/Turabian Style: Ihsan Ullah; Hee Yong Youn. 2020. "Task Classification and Scheduling Based on K-Means Clustering for Edge Computing." Wireless Personal Communications 113, no. 4: 2611-2624.
A wireless sensor network is effective for data aggregation and transmission in the IoT environment. Here, the sensor data often contain a significant amount of noise or redundancy, and thus the data are aggregated to extract meaningful information and reduce the transmission cost. In this paper, a novel data aggregation scheme is proposed based on clustering of the nodes and an extreme learning machine (ELM), which efficiently reduces redundant and erroneous data. A Mahalanobis distance-based radial basis function is applied to the projection stage of the ELM to reduce the instability of the training process. A Kalman filter is also used to filter the data at each sensor node before they are transmitted to the cluster head. Computer simulation with real datasets shows that the proposed scheme consistently outperforms the existing schemes in terms of clustering accuracy of the data and energy efficiency of the WSN.
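The per-node filtering step can be illustrated with a scalar Kalman filter for a (nearly) constant reading; the noise parameters `q` and `r` below are illustrative, not taken from the paper:

```python
def kalman_1d(measurements, q=1e-4, r=1.0, x0=0.0, p0=1e3):
    """Scalar Kalman filter for a nearly constant signal: q is process
    noise, r measurement noise; p0 is large so the first measurement
    dominates the prior x0. Returns the filtered estimates."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                        # predict: state constant, variance grows
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # update toward measurement z
        p *= (1 - k)
        estimates.append(x)
    return estimates

# noisy readings around a true value of 20.0
est = kalman_1d([20.4, 19.7, 20.1, 19.9, 20.2, 19.8, 20.0, 20.1])
```

Smoothing at the node means the cluster head sees fewer spurious values, so the downstream ELM-based aggregation works on cleaner inputs.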
Ihsan Ullah; Hee Yong Youn. Efficient data aggregation with node clustering and extreme learning machine for WSN. The Journal of Supercomputing 2020, 76, 10009-10035.
AMA Style: Ihsan Ullah, Hee Yong Youn. Efficient data aggregation with node clustering and extreme learning machine for WSN. The Journal of Supercomputing. 2020;76(12):10009-10035.
Chicago/Turabian Style: Ihsan Ullah; Hee Yong Youn. 2020. "Efficient data aggregation with node clustering and extreme learning machine for WSN." The Journal of Supercomputing 76, no. 12: 10009-10035.
In the Internet of things (IoT) environment consisting of various devices, the traffic condition dynamically changes. Failure to process the packets in compliance with the QoS requirement can significantly degrade the reliability and quality of the system. Therefore, the gateway collecting the data needs to quickly establish a new scheduling policy according to the changing traffic condition. The traditional packet scheduling schemes are not effective for IoT since the data transmission pattern is not identified in advance. Q-learning is a type of reinforcement learning that can establish a dynamic scheduling policy without any prior knowledge of the network condition. In this paper a novel Q-learning scheme is proposed which updates the Q-table and reward table based on the condition of the queues in the gateway. Computer simulation reveals that the proposed scheme significantly increases the number of packets satisfying the delay requirement while decreasing the processing time compared to the existing scheme based on Q-learning with a stochastic learning automaton. The processing time is further minimized by omitting unnecessary computation steps when selecting the action in the iterative Q-learning operations.
Donghyun Kim; Taeho Lee; SeJun Kim; Byungjun Lee; Hee Yong Youn. Adaptive packet scheduling in IoT environment based on Q-learning. Journal of Ambient Intelligence and Humanized Computing 2019, 11, 2225-2235.
AMA Style: Donghyun Kim, Taeho Lee, SeJun Kim, Byungjun Lee, Hee Yong Youn. Adaptive packet scheduling in IoT environment based on Q-learning. Journal of Ambient Intelligence and Humanized Computing. 2019;11(6):2225-2235.
Chicago/Turabian Style: Donghyun Kim; Taeho Lee; SeJun Kim; Byungjun Lee; Hee Yong Youn. 2019. "Adaptive packet scheduling in IoT environment based on Q-learning." Journal of Ambient Intelligence and Humanized Computing 11, no. 6: 2225-2235.
This paper presents a novel method of classifying heart conditions from an electrocardiography (ECG) signal. For this purpose, the R-R intervals of the ECG signal are analyzed with Gamma distribution parameters and classified into normal (NR) or abnormal (AN) ECG waves. For the normal ECG waves, the heart condition is further investigated by analyzing the dynamic behavior of heart activity based on the correlation between successive R-R intervals and long-term analysis. The classification of heart conditions is made by estimating the conditional class probabilities using class probability output networks (CPONs). The simulation for classifying heart conditions using the MIT-BIH data sets reveals that the proposed approach is effective for classifying heart conditions and allows more accurate classification than existing classifiers such as k-NN and SVM.
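Fitting Gamma distribution parameters to a sequence of R-R intervals can be done by the method of moments: shape k = mean²/variance and scale θ = variance/mean. This is a generic estimator sketch (the paper's exact estimator may differ), with made-up interval values in seconds:

```python
from statistics import fmean, pvariance

def gamma_moments(rr_intervals):
    """Method-of-moments Gamma fit: shape k = mean^2 / var,
    scale theta = var / mean (so that k * theta equals the mean)."""
    m = fmean(rr_intervals)
    v = pvariance(rr_intervals, mu=m)
    return m * m / v, v / m

# illustrative R-R intervals in seconds
k, theta = gamma_moments([0.8, 1.0, 1.2, 1.0])
```

A low shape parameter (high relative variability of R-R intervals) is the kind of feature such a classifier can threshold on.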
Han Bin Bae; Min Seop Park; Rhee Man Kil; Hee Yong Youn. Classifying heart conditions based on class probability output networks. Neurocomputing 2019, 360, 198-208.
AMA Style: Han Bin Bae, Min Seop Park, Rhee Man Kil, Hee Yong Youn. Classifying heart conditions based on class probability output networks. Neurocomputing. 2019;360:198-208.
Chicago/Turabian Style: Han Bin Bae; Min Seop Park; Rhee Man Kil; Hee Yong Youn. 2019. "Classifying heart conditions based on class probability output networks." Neurocomputing 360: 198-208.
The usage of multiple flow tables (MFT) has significantly extended the flexibility and applicability of software-defined networking (SDN). However, the size of the MFT is usually limited due to the use of expensive ternary content addressable memory (TCAM). Moreover, the pipeline mechanism of the MFT causes long flow processing time. In this paper a novel approach called Agg-ExTable is proposed to efficiently manage the MFT. Here the flow entries in the MFT are periodically aggregated by applying pruning and the Quine–McCluskey algorithm. Utilizing the memory space saved by the aggregation, a front-end ExTable is constructed, keeping popular flow entries for early match. Popular entries are decided by a hidden Markov model based on the match frequency and match probability. Computer simulation reveals that the proposed scheme is able to save about 45% of the space of the MFT, and efficiently decreases the flow processing time compared to the existing schemes.
Cheng Wang; Hee Yong Youn. Entry Aggregation and Early Match Using Hidden Markov Model of Flow Table in SDN. Sensors 2019, 19, 2341.
AMA Style: Cheng Wang, Hee Yong Youn. Entry Aggregation and Early Match Using Hidden Markov Model of Flow Table in SDN. Sensors. 2019;19(10):2341.
Chicago/Turabian Style: Cheng Wang; Hee Yong Youn. 2019. "Entry Aggregation and Early Match Using Hidden Markov Model of Flow Table in SDN." Sensors 19, no. 10: 2341.
Support vector machine (SVM) is an efficient machine learning technique widely applied to various classification problems due to its robustness. However, the training time grows dramatically as the number of training data increases. As a result, the applicability of SVM to large-scale datasets is somewhat limited. In SVM, only a few training samples called support vectors (SVs) affect the construction of the hyperplane. Therefore, removing the training data irrelevant to the SVs does not degrade the performance of SVM. In this paper the clustering-based convex hull (CBCH) scheme is introduced which efficiently removes insignificant data and thereby reduces the training time of SVM. The CBCH scheme initially applies the k-means clustering algorithm to the given training data points, and then the convex hull of each cluster is obtained. Only the vertices of the convex hulls and the data points relevant to the SVs are included as training data points. Computer simulation over various sizes and types of datasets reveals that the proposed scheme is considerably faster and more accurate than the existing SVM classifiers. The proposed algorithm is based on a geometric interpretation of the SVM and is applicable to both linearly separable and linearly inseparable datasets.
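The hull-extraction step per cluster can be sketched with a standard 2-D convex hull routine (Andrew's monotone chain, used here as an illustrative stand-in for however the paper computes hulls); interior points of each cluster are then dropped from the SVM training set:

```python
def convex_hull(points):
    """Andrew's monotone chain: returns the hull vertices of a 2-D point
    set; points strictly inside the hull can be discarded as training data."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for seq in (pts, reversed(pts)):          # build lower, then upper chain
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull += chain[:-1]                    # drop last point (repeated)
    return hull

# a toy cluster: four corners plus two interior points
cluster = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0.5)]
hull = convex_hull(cluster)
```

Since the separating hyperplane of an SVM is determined by the boundary of each class region, keeping only hull vertices preserves the candidates for support vectors while shrinking the training set.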
Pardis Birzhandi; Hee Yong Youn. CBCH (clustering-based convex hull) for reducing training time of support vector machine. The Journal of Supercomputing 2019, 75, 5261-5279.
AMA Style: Pardis Birzhandi, Hee Yong Youn. CBCH (clustering-based convex hull) for reducing training time of support vector machine. The Journal of Supercomputing. 2019;75(8):5261-5279.
Chicago/Turabian Style: Pardis Birzhandi; Hee Yong Youn. 2019. "CBCH (clustering-based convex hull) for reducing training time of support vector machine." The Journal of Supercomputing 75, no. 8: 5261-5279.
A wireless sensor network allows efficient data collection and transmission in the IoT environment. Since it usually consists of a large number of sensor nodes, a significant amount of redundant data and outliers are generated, which deteriorates the network performance. In this paper, a novel data aggregation scheme is proposed which is based on a self-organized map neural network to reduce redundant data and eliminate outliers. In addition, cosine similarity is used to improve the clustering of sensor nodes based on the density and similarity of the data, and interquartile analysis is adopted to remove outliers. This significantly reduces the energy consumption and enhances the network performance. Extensive simulation with real datasets shows that the proposed scheme consistently outperforms the existing representative data aggregation schemes in terms of data reduction rate, network lifetime, and energy efficiency.
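The similarity test used to group nodes with near-identical readings is plain cosine similarity over their reading vectors; the feature choice and threshold below are illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sensor-reading vectors; nodes whose
    readings score above a chosen threshold can be clustered together."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# illustrative readings: (temperature, humidity, pressure) per node
node_a = [21.0, 40.0, 1013.0]
node_b = [21.5, 41.0, 1012.0]
```

Because cosine similarity depends on direction rather than magnitude, two nodes reporting proportionally similar readings cluster together even if their sensors are scaled differently.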
Ihsan Ullah; Hee Yong Youn. A novel data aggregation scheme based on self-organized map for WSN. The Journal of Supercomputing 2019, 75, 3975-3996.
AMA Style: Ihsan Ullah, Hee Yong Youn. A novel data aggregation scheme based on self-organized map for WSN. The Journal of Supercomputing. 2019;75(7):3975-3996.
Chicago/Turabian Style: Ihsan Ullah; Hee Yong Youn. 2019. "A novel data aggregation scheme based on self-organized map for WSN." The Journal of Supercomputing 75, no. 7: 3975-3996.
Load balancing (LB) is one of the most important tasks required to maximize network performance, scalability, and robustness. Nowadays, with the emergence of Software-Defined Networking (SDN), LB for SDN has become a very important issue. SDN decouples the control plane from the data forwarding plane to implement centralized control of the whole network. LB assigns the network traffic to the resources in such a way that no single resource is overloaded and therefore the overall performance is maximized. Among several existing optimization algorithms, the Ant Colony Optimization (ACO) algorithm has been recognized to be effective for LB of SDN. Convergence latency and the ability to find an optimal solution are the key criteria of ACO. In this paper, a novel dynamic LB scheme that integrates the genetic algorithm (GA) with ACO for further enhancing the performance of SDN is proposed. It capitalizes on the fast global search of GA and the efficient search for an optimal solution of ACO. Computer simulation results show that the proposed scheme substantially outperforms the Round Robin and ACO algorithms in terms of the rate of finding the optimal path, round-trip time, and packet loss rate.
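The ACO half of the hybrid rests on pheromone dynamics: paths are chosen proportionally to pheromone, and shorter paths receive larger deposits, so preference concentrates on them. A deterministic expected-value sketch (the GA seeding and all parameter values are illustrative and omitted/assumed):

```python
def aco_path_preference(lengths, rho=0.3, q=1.0, iters=50):
    """Deterministic sketch of ACO pheromone dynamics over alternative
    paths: choice probability is proportional to pheromone tau; each path
    receives an expected deposit p * q / length, with evaporation rate rho."""
    tau = [1.0] * len(lengths)
    probs = [1.0 / len(lengths)] * len(lengths)
    for _ in range(iters):
        total = sum(tau)
        probs = [t / total for t in tau]              # choice probabilities
        tau = [(1 - rho) * t + p * q / l              # evaporate + deposit
               for t, p, l in zip(tau, probs, lengths)]
    return probs

probs = aco_path_preference([1.0, 2.0, 4.0])   # shortest path listed first
```

The GA's role in the hybrid is to supply good initial solutions so this positive-feedback loop converges faster; here the loop alone already ranks the shortest path highest.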
Hai Xue; Kyung Tae Kim; Hee Yong Youn. Dynamic Load Balancing of Software-Defined Networking Based on Genetic-Ant Colony Optimization. Sensors 2019, 19, 311.
AMA Style: Hai Xue, Kyung Tae Kim, Hee Yong Youn. Dynamic Load Balancing of Software-Defined Networking Based on Genetic-Ant Colony Optimization. Sensors. 2019;19(2):311.
Chicago/Turabian Style: Hai Xue; Kyung Tae Kim; Hee Yong Youn. 2019. "Dynamic Load Balancing of Software-Defined Networking Based on Genetic-Ant Colony Optimization." Sensors 19, no. 2: 311.
The rapid growth in social networking services has led to the generation of a massive volume of opinionated information in the form of electronic text. As a result, research on text sentiment analysis has drawn a great deal of interest. In this paper a novel feature weighting approach is proposed for the sentiment analysis of Twitter data. It properly measures the relative significance of each feature regarding both inter-category and intra-category distribution. A new statistical model called Category Discriminative Strength is introduced to characterize the discriminability of the features among various categories, and a modified Chi-square (χ2)-based measure is employed to measure the intra-category dependency of the features. Moreover, a fine-grained feature clustering strategy is proposed to maximize the accuracy of the analysis. Extensive experiments demonstrate that the proposed approach significantly outperforms four state-of-the-art sentiment analysis techniques in terms of accuracy, precision, recall, and F1 measure with various sizes and patterns of training and test datasets.
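The χ2 feature-category statistic underlying such weighting comes from a 2x2 contingency table; here is the textbook form (the paper's modified measure builds on it, and the cell counts are illustrative):

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic of a 2x2 feature/category contingency table:
    a = docs in the category containing the feature, b = in the category
    without it, c = other categories with it, d = other categories without it."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

A large value means the feature's presence is strongly associated with the category, so it deserves a high weight; a value of zero means the feature is uninformative.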
Yili Wang; Hee Yong Youn. Feature Weighting Based on Inter-Category and Intra-Category Strength for Twitter Sentiment Analysis. Applied Sciences 2018, 9, 92.
AMA Style: Yili Wang, Hee Yong Youn. Feature Weighting Based on Inter-Category and Intra-Category Strength for Twitter Sentiment Analysis. Applied Sciences. 2018;9(1):92.
Chicago/Turabian Style: Yili Wang; Hee Yong Youn. 2018. "Feature Weighting Based on Inter-Category and Intra-Category Strength for Twitter Sentiment Analysis." Applied Sciences 9, no. 1: 92.
Software-defined networking (SDN) decouples the control plane and the data forwarding plane to overcome the limitations of the traditional networking infrastructure. Among the several communication protocols employed for SDN, OpenFlow is the most widely used for the communication between the controller and switch. In this paper two packet scheduling schemes, FCFS-Pushout (FCFS-PO) and FCFS-Pushout-Priority (FCFS-PO-P), are proposed to effectively handle the overload issue of multiple-switch SDN targeting the edge computing environment. Analytical models of their operations are developed, and extensive experiments based on a testbed are carried out to evaluate the schemes. They reveal that both of them are better than the typical FCFS-Block (FCFS-BL) scheduling algorithm in terms of packet wait time. Furthermore, FCFS-PO-P is found to be more effective than FCFS-PO in the edge computing environment.
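The difference between blocking and pushout can be sketched with a bounded buffer: under blocking an arrival to a full buffer is refused, while under pushout it is admitted by evicting a waiting packet. Which packet is evicted is a policy detail; this sketch assumes the newest waiting packet is pushed out, which is illustrative rather than the paper's exact rule:

```python
from collections import deque

def fcfs_pushout(arrivals, capacity):
    """Bounded FCFS buffer with pushout (sketch): when the buffer is full,
    the newest waiting packet is evicted to admit the arrival, instead of
    blocking the arrival as FCFS-Block would."""
    buf, dropped = deque(), []
    for pkt in arrivals:
        if len(buf) == capacity:
            dropped.append(buf.pop())   # evict the tail (newest) packet
        buf.append(pkt)
    return list(buf), dropped

kept, dropped = fcfs_pushout(["p1", "p2", "p3", "p4"], capacity=2)
```

Pushout keeps the buffer full of the freshest traffic without ever turning an arrival away, which is why it lowers the wait time of admitted packets relative to blocking.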
Hai Xue; Kyung Tae Kim; Hee Yong Youn. Packet Scheduling for Multiple-Switch Software-Defined Networking in Edge Computing Environment. Wireless Communications and Mobile Computing 2018, 2018, 1-11.
AMA Style: Hai Xue, Kyung Tae Kim, Hee Yong Youn. Packet Scheduling for Multiple-Switch Software-Defined Networking in Edge Computing Environment. Wireless Communications and Mobile Computing. 2018;2018:1-11.
Chicago/Turabian Style: Hai Xue; Kyung Tae Kim; Hee Yong Youn. 2018. "Packet Scheduling for Multiple-Switch Software-Defined Networking in Edge Computing Environment." Wireless Communications and Mobile Computing 2018: 1-11.
This paper proposes a novel method of predicting daily peak power demand using the deep structure of Gaussian kernel function networks (GKFNs). For the prediction model, the whole time series is divided into multiple parts and each part is trained using a GKFN. Then, the trained GKFNs are combined using the deep structure of GKFNs to minimize the mean square error (MSE) of the prediction model. As a consequence, the proposed deep structure of GKFNs provides improved prediction accuracy compared with canonical GKFNs. The simulation for predicting daily peak power demands in Korea reveals that the proposed prediction model outperforms the GKFN model and also other prediction models such as k-NN and SVR.
Dae Hyeon Kim; Ye Jin Lee; Rhee Man Kil; Hee Yong Youn. Deep Structure of Gaussian Kernel Function Networks for Predicting Daily Peak Power Demands. Privacy Enhancing Technologies 2018, 116-126.
AMA Style: Dae Hyeon Kim, Ye Jin Lee, Rhee Man Kil, Hee Yong Youn. Deep Structure of Gaussian Kernel Function Networks for Predicting Daily Peak Power Demands. Privacy Enhancing Technologies. 2018:116-126.
Chicago/Turabian Style: Dae Hyeon Kim; Ye Jin Lee; Rhee Man Kil; Hee Yong Youn. 2018. "Deep Structure of Gaussian Kernel Function Networks for Predicting Daily Peak Power Demands." Privacy Enhancing Technologies: 116-126.
In the Internet of Things (IoT) environment consisting of various devices, the arrival rate of data packets dynamically changes. Failure to process them in compliance with the QoS requirement can significantly degrade the reliability and quality of the system. Therefore, the gateway collecting the data needs to quickly establish a new scheduling policy according to the changing traffic condition. The existing packet scheduling schemes are not effective for IoT since the data transmission pattern is not defined in advance. Q-learning is a type of reinforcement learning that can establish a dynamic scheduling policy according to the state of each queue without any prior knowledge of the network status. In this paper a novel Q-learning scheme is proposed which updates the Q-table and reward table based on the condition of the queues in the gateway and adjusts the reward value according to the time slot. Computer simulation reveals that the proposed scheme significantly reduces the scheduling time while maintaining high accuracy compared to the existing Q-learning scheme based on Stochastic Learning Automaton (SLA).
Donghyun Kim; Taeho Lee; SeJun Kim; Byungjun Lee; Hee Yong Youn. Adaptive Packet Scheduling in IoT Environment Based on Q-learning. Procedia Computer Science 2018, 141, 247-254.
AMA Style: Donghyun Kim, Taeho Lee, SeJun Kim, Byungjun Lee, Hee Yong Youn. Adaptive Packet Scheduling in IoT Environment Based on Q-learning. Procedia Computer Science. 2018;141:247-254.
Chicago/Turabian Style: Donghyun Kim; Taeho Lee; SeJun Kim; Byungjun Lee; Hee Yong Youn. 2018. "Adaptive Packet Scheduling in IoT Environment Based on Q-learning." Procedia Computer Science 141: 247-254.