
Prof. Dae-Ki Kang
Machine Learning/Deep Learning Research Labs, Department of Computer Engineering, Dongseo University, Busan 47011, Korea


Research Keywords & Expertise

Artificial Intelligence
Machine Learning
Domain Adaptation
Few-Shot Learning
Meta-Learning

Fingerprints

Machine Learning
Domain Adaptation




Feed

Journal article
Published: 06 June 2021 in Applied Sciences

CORrelation ALignment (CORAL) is an unsupervised domain adaptation method that uses a linear transformation to align the covariances of the source and target domains. Deep CORAL extends CORAL with a nonlinear transformation realized by a deep neural network and adds a CORAL loss term to the total loss to align the covariances. However, two problems remain in Deep CORAL: features extracted from AlexNet are not always a good representation of the original data, and joint training with both the classification and CORAL losses may not be efficient enough to align the source and target distributions. In this paper, we propose two strategies: attention, to improve the quality of the feature maps, and a p-norm loss function, to align the distributions of the source and target features and further reduce the offset caused by the classification loss. Experiments on the Office-31 dataset indicate that our proposed methodologies improve Deep CORAL in terms of performance.
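The covariance-alignment idea behind CORAL and the p-norm variant can be sketched in a few lines of NumPy. This is a minimal illustration assuming the standard 1/(4d²) normalization of the original CORAL loss; the paper's attention module and exact p-norm formulation are not reproduced here.

```python
import numpy as np

def coral_loss(source, target):
    # CORAL loss: squared Frobenius distance between feature covariances,
    # with the standard 1/(4 d^2) normalization
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)
    ct = np.cov(target, rowvar=False)
    return np.sum((cs - ct) ** 2) / (4 * d * d)

def p_norm_loss(source, target, p=2):
    # illustrative entrywise p-norm variant of the covariance alignment term
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)
    ct = np.cov(target, rowvar=False)
    return np.sum(np.abs(cs - ct) ** p) / (4 * d * d)

rng = np.random.default_rng(0)
src = rng.normal(size=(128, 16))            # source-domain features
tgt = rng.normal(loc=0.5, size=(128, 16))   # shifted target-domain features
print(coral_loss(src, src))   # identical features -> 0.0
print(p_norm_loss(src, tgt, p=3) > 0)
```

With p = 2 the p-norm term coincides with the CORAL loss; other p values change how strongly large covariance discrepancies are penalized.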

ACS Style

Zhi-Yong Wang; Dae-Ki Kang. P-Norm Attention Deep CORAL: Extending Correlation Alignment Using Attention and the P-Norm Loss Function. Applied Sciences 2021, 11, 5267.

AMA Style

Zhi-Yong Wang, Dae-Ki Kang. P-Norm Attention Deep CORAL: Extending Correlation Alignment Using Attention and the P-Norm Loss Function. Applied Sciences. 2021; 11 (11):5267.

Chicago/Turabian Style

Zhi-Yong Wang; Dae-Ki Kang. 2021. "P-Norm Attention Deep CORAL: Extending Correlation Alignment Using Attention and the P-Norm Loss Function." Applied Sciences 11, no. 11: 5267.

Journal article
Published: 12 April 2021 in Applied Sciences

Extinction has been studied frequently by evolutionary biologists and is known to play a significant role in evolution. The genetic algorithm (GA), one of the most popular evolutionary algorithms, is based on key concepts in natural evolution such as selection, crossover, and mutation. Although the GA has been widely studied and applied in many fields, little work has been done to enhance its performance through extinction. In this research, we propose the stagnation-driven extinction protocol for genetic algorithms (SDEP-GA), a novel algorithm inspired by the extinction phenomenon in nature, to enhance the performance of the classical GA. Experimental results on various benchmark test functions and their comparative analysis indicate the effectiveness of SDEP-GA in avoiding stagnation during the evolution process.
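The stagnation-driven extinction idea can be sketched as a small one-max GA. The trigger condition, survivor count, and operators below are illustrative assumptions, not the paper's exact protocol: when the best fitness stagnates for `patience` generations, an "extinction event" replaces all but a few elites with random genomes.

```python
import random

def sdep_ga_sketch(fitness, n_bits=20, pop_size=30, generations=200,
                   patience=15, survivors=2, seed=0):
    # GA with a stagnation-driven extinction event (illustrative sketch)
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best, stagnant = -1, 0
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > best:
            best, stagnant = fitness(pop[0]), 0
        else:
            stagnant += 1
        if stagnant >= patience:   # extinction: keep elites, respawn the rest
            pop = pop[:survivors] + [[rng.randint(0, 1) for _ in range(n_bits)]
                                     for _ in range(pop_size - survivors)]
            stagnant = 0
        # standard selection + one-point crossover + single-bit mutation
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        pop = parents + children
    return best

print(sdep_ga_sketch(sum))   # one-max fitness: best approaches n_bits
```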

ACS Style

Gan Ye; Dae-Ki Kang. Extended Evolutionary Algorithms with Stagnation-Based Extinction Protocol. Applied Sciences 2021, 11, 3461.

AMA Style

Gan Ye, Dae-Ki Kang. Extended Evolutionary Algorithms with Stagnation-Based Extinction Protocol. Applied Sciences. 2021; 11 (8):3461.

Chicago/Turabian Style

Gan Ye; Dae-Ki Kang. 2021. "Extended Evolutionary Algorithms with Stagnation-Based Extinction Protocol." Applied Sciences 11, no. 8: 3461.

Journal article
Published: 11 March 2021 in Applied Sciences

Location-based recommender systems have gained a lot of attention in both commercial domains and research communities, where various approaches have shown great potential for further study. However, previous research on location-based recommender systems has paid little attention to generating recommendations that consider the location of the target user; such systems sometimes recommend places that are far from the target user’s current location. In this paper, we explore the issues of generating location recommendations for users who are traveling overseas by taking into account the user’s social influence as well as the knowledge of natives or local experts. Accordingly, we propose a collaborative filtering recommendation framework, the Friend-And-Native-Aware Approach for Collaborative Filtering (FANA-CF), to generate reasonable location recommendations for users. We validate our approach by systematic and extensive experiments using real-world datasets collected from Foursquare. Comparing against conventional collaborative filtering approaches (item-based and user-based collaborative filtering) and the personalized mean approach, we show that our proposed approach slightly outperforms both.
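The friend-and-native blending idea can be sketched as a weighted combination of two rating pools. The weight `alpha`, the plain averaging, and the toy data below are illustrative assumptions; the paper's FANA-CF is a full collaborative-filtering pipeline, not this two-term formula.

```python
def fana_cf_score(place, friend_ratings, native_ratings, alpha=0.6):
    # blend the friends' average rating with the natives' (local experts')
    # average rating; alpha and plain averaging are illustrative assumptions
    def avg(ratings):
        vals = [r for p, r in ratings if p == place]
        return sum(vals) / len(vals) if vals else 0.0
    return alpha * avg(friend_ratings) + (1 - alpha) * avg(native_ratings)

# hypothetical (place, rating) check-ins in the style of Foursquare data
friends = [("cafe", 4.0), ("museum", 3.0), ("cafe", 5.0)]
natives = [("cafe", 3.5), ("harbor", 4.8)]
print(fana_cf_score("cafe", friends, natives))   # 0.6 * 4.5 + 0.4 * 3.5
```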

ACS Style

Aaron Yi; Dae-Ki Kang. Experimental Analysis of Friend-And-Native Based Location Awareness for Accurate Collaborative Filtering. Applied Sciences 2021, 11, 2510.

AMA Style

Aaron Yi, Dae-Ki Kang. Experimental Analysis of Friend-And-Native Based Location Awareness for Accurate Collaborative Filtering. Applied Sciences. 2021; 11 (6):2510.

Chicago/Turabian Style

Aaron Yi; Dae-Ki Kang. 2021. "Experimental Analysis of Friend-And-Native Based Location Awareness for Accurate Collaborative Filtering." Applied Sciences 11, no. 6: 2510.

Conference paper
Published: 06 February 2021 in Transactions on Petri Nets and Other Models of Concurrency XV

The training process of a generative adversarial network (GAN) can be regarded as a game in which the generator and the discriminator play against each other until they reach a state that cannot be improved further as long as the opponent does not change. At each step, gradient descent chooses a direction that reduces the defined loss, so the loss function plays a key role in the performance of the model: choosing the right loss function helps the model learn to focus on the correct set of features in the data and achieve optimal, faster convergence. In this work, we propose a novel loss function scheme, the Diminish Smooth L1 loss. We improve the robust Smooth L1 loss by lowering its threshold so that the network can converge to a lower minimum. Our experimental results on several benchmark datasets show that our algorithm often outperforms previous approaches.
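The threshold-lowering idea can be sketched directly from the Smooth L1 definition. The geometric decay schedule below is an illustrative assumption, not the paper's exact rule; the point is only that shrinking the threshold tightens the quadratic region around zero.

```python
import numpy as np

def smooth_l1(x, beta=1.0):
    # standard Smooth L1: quadratic near zero, linear in the tails
    a = np.abs(x)
    return np.where(a < beta, 0.5 * a ** 2 / beta, a - 0.5 * beta)

def diminish_smooth_l1(x, beta0=1.0, step=0, decay=0.9):
    # "diminish" sketch: the threshold shrinks as training proceeds so the
    # loss tightens around a lower minimum (decay schedule is an assumption)
    beta = max(beta0 * decay ** step, 1e-6)
    return smooth_l1(x, beta)

errs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(smooth_l1(errs))               # [1.5, 0.125, 0.0, 0.125, 1.5]
print(diminish_smooth_l1(errs, step=10))
```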

ACS Style

Arief Rachman Sutanto; Dae-Ki Kang. A Novel Diminish Smooth L1 Loss Model with Generative Adversarial Network. Transactions on Petri Nets and Other Models of Concurrency XV 2021, 361-368.

AMA Style

Arief Rachman Sutanto, Dae-Ki Kang. A Novel Diminish Smooth L1 Loss Model with Generative Adversarial Network. Transactions on Petri Nets and Other Models of Concurrency XV. 2021:361-368.

Chicago/Turabian Style

Arief Rachman Sutanto; Dae-Ki Kang. 2021. "A Novel Diminish Smooth L1 Loss Model with Generative Adversarial Network." Transactions on Petri Nets and Other Models of Concurrency XV: 361-368.

Journal article
Published: 17 November 2020 in Electronics

Deep neural networks have achieved high performance in image classification, image generation, voice recognition, natural language processing, etc.; however, they still face several open challenges, such as the incremental learning problem, overfitting, hyperparameter optimization, and lack of flexibility and multitasking. In this paper, we focus on the incremental learning problem, which concerns machine learning methodologies that continuously train an existing model with additional knowledge. To the best of our knowledge, the simplest direct solution to this challenge is to retrain the entire neural network after adding the new labels to the output layer. Alternatively, transfer learning can be applied, but only if the domain of the new labels is related to the domain of the labels that have already been trained into the network. In this paper, we propose a novel network architecture, the Brick Assembly Network (BAN), which allows a new label to be assembled into (or dismantled from) a trained neural network without retraining the entire network. In BAN, we train each label individually with a sub-network (i.e., a simple neural network) and then assemble the converged single-label sub-networks into a full neural network. For each label trained in a sub-network of BAN, we introduce a new loss function that minimizes the loss of the network using data of only one class. Applying one loss function per class label is unique and different from standard neural network architectures (e.g., AlexNet, ResNet, InceptionV3), which minimize the error of the network using loss values computed over multiple labels.
The difference between previous loss functions and ours is that we compute the loss values from the node values of the penultimate layer (which we call the characteristic layer) instead of the output layer, where loss values are computed between true and predicted labels. Experimental results on several benchmark datasets show that BAN has a strong capability to add (and remove) a new label to a trained network compared with standard neural networks and previous work.
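A minimal sketch of the assembly idea, under heavy assumptions: each sub-network here is just a linear map whose characteristic-layer output is pulled toward a fixed target vector using only one label's data, and the assembled network predicts the label whose sub-network fits best. BAN's actual sub-networks, characteristic-layer loss, and assembly procedure are more elaborate.

```python
import numpy as np

class SubNet:
    # one-label sub-network sketch: a linear map trained so its
    # characteristic-layer output matches a fixed target vector
    def __init__(self, dim_in, dim_char, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(dim_in, dim_char))
        self.target = np.ones(dim_char)   # fixed characteristic target

    def char_loss(self, X):
        # one-class loss on the characteristic layer, not on an output layer
        return np.mean((X @ self.W - self.target) ** 2)

    def fit(self, X, lr=0.02, steps=500):
        for _ in range(steps):
            grad = 2.0 * X.T @ (X @ self.W - self.target) / len(X)
            self.W -= lr * grad

def ban_predict(subnets, x):
    # assembled network: the label whose sub-network matches its target best
    # wins; sub-nets can be added or removed without retraining the others
    return int(np.argmin([net.char_loss(x[None, :]) for net in subnets]))

rng = np.random.default_rng(1)
X0 = rng.normal(loc=-1.0, size=(100, 8))   # class-0 data
X1 = rng.normal(loc=+1.0, size=(100, 8))   # class-1 data
nets = [SubNet(8, 4, seed=s) for s in (0, 1)]
nets[0].fit(X0)
nets[1].fit(X1)
print(ban_predict(nets, X0.mean(axis=0)), ban_predict(nets, X1.mean(axis=0)))
```

Adding a third class would only require training one more `SubNet` and appending it to the list, which is the incremental-learning property the abstract describes.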

ACS Style

Jiacang Ho; Dae-Ki Kang. Brick Assembly Networks: An Effective Network for Incremental Learning Problems. Electronics 2020, 9, 1929.

AMA Style

Jiacang Ho, Dae-Ki Kang. Brick Assembly Networks: An Effective Network for Incremental Learning Problems. Electronics. 2020; 9 (11):1929.

Chicago/Turabian Style

Jiacang Ho; Dae-Ki Kang. 2020. "Brick Assembly Networks: An Effective Network for Incremental Learning Problems." Electronics 9, no. 11: 1929.

Journal article
Published: 17 June 2020 in Electronics

Intelligent anomaly detection is a promising area, as manual inspection by humans is generally labor-intensive and time-consuming. An effective approach is to build a classifier system that reflects the condition of the infrastructure when it tends to behave abnormally, so that the appropriate course of action can be taken immediately. To achieve this objective, we propose a dual-staged cascade of one-class SVMs (OCSVMs) for water level monitoring systems. In the first stage of the cascade, an OCSVM learns directly on a single observation (1-gram) at a time to detect point anomalies. In the second stage, an OCSVM learns from n-gram feature vectors constructed from the historical data to discover collective anomalies, where an n-gram pattern fails to conform to the expected normal pattern. The experimental results show that our proposed dual-staged OCSVM detects point and collective anomalies effectively, attaining a remarkable F1-score of about 99%. We also compare the performance of our OCSVM algorithm with other algorithms.
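The two-stage structure can be sketched without any ML dependency. A simple nearest-neighbor novelty detector stands in for the OCSVM below (in a real pipeline `sklearn.svm.OneClassSVM` would play that role); the water-level series and radius are illustrative. The point is the cascade: stage one sees single readings (1-grams), stage two sees n-gram windows of the same series.

```python
def ngrams(series, n=3):
    # stage-two features: sliding windows of n consecutive readings
    return [tuple(series[i:i + n]) for i in range(len(series) - n + 1)]

class NoveltyDetector:
    # dependency-free stand-in for the one-class SVM: a vector is anomalous
    # when its nearest training vector lies beyond `radius`
    def __init__(self, radius):
        self.radius = radius
    def fit(self, X):
        self.train = [tuple(row) for row in X]
        return self
    def is_anomaly(self, x):
        dists = (sum((a - b) ** 2 for a, b in zip(x, row)) ** 0.5
                 for row in self.train)
        return min(dists) > self.radius

# a stable, cyclic water-level series (illustrative data, not the paper's)
levels = [10.0 + 0.1 * ((i * 2) % 5) for i in range(50)]
stage1 = NoveltyDetector(0.15).fit([[v] for v in levels])   # point anomalies
stage2 = NoveltyDetector(0.15).fit(ngrams(levels))          # collective anomalies

print(stage1.is_anomaly([25.0]))              # far-off single reading: True
print(stage1.is_anomaly([10.4]))              # normal single reading: False
print(stage2.is_anomaly((10.4, 10.4, 10.4)))  # each point normal, pattern not: True
```

The last case is the motivation for the second stage: every individual reading passes stage one, but the flat-line window never occurs in the training data, so stage two flags it as a collective anomaly.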

ACS Style

Fabian Hann Shen Tan; Jun Ryeol Park; Kyuil Jung; Jun Seoung Lee; Dae-Ki Kang. Cascade of One Class Classifiers for Water Level Anomaly Detection. Electronics 2020, 9, 1012.

AMA Style

Fabian Hann Shen Tan, Jun Ryeol Park, Kyuil Jung, Jun Seoung Lee, Dae-Ki Kang. Cascade of One Class Classifiers for Water Level Anomaly Detection. Electronics. 2020; 9 (6):1012.

Chicago/Turabian Style

Fabian Hann Shen Tan; Jun Ryeol Park; Kyuil Jung; Jun Seoung Lee; Dae-Ki Kang. 2020. "Cascade of One Class Classifiers for Water Level Anomaly Detection." Electronics 9, no. 6: 1012.

Journal article
Published: 21 May 2020 in Neural Networks

Deep neural networks have shown high performance in prediction, but they are defenseless against adversarial examples generated by adversarial attack techniques. In image classification, those techniques usually perturb the pixels of an image to fool deep neural networks. To improve the robustness of neural networks, many researchers have introduced defense techniques against these attacks. To the best of our knowledge, adversarial training is one of the most effective defenses against adversarial examples. However, it can fail against a semantic adversarial image, which applies arbitrary perturbations to fool the network while semantically representing the same object as the original image. Against this background, we propose a novel defense technique, the Uni-Image Procedure (UIP). UIP generates a universal image (uni-image) from a given image, which can be a clean image or an image perturbed by an attack. The generated uni-image preserves its own characteristics (e.g., color) regardless of transformations of the original image, including inverting the pixel values or modifying the saturation, hue, and value of the image. Our experimental results on several benchmark datasets show that our method not only defends against well-known adversarial attacks and a semantic adversarial attack but also boosts the robustness of the neural network.

ACS Style

Jiacang Ho; Byung-Gook Lee; Dae-Ki Kang. Uni-image: Universal image construction for robust neural model. Neural Networks 2020, 128, 279-287.

AMA Style

Jiacang Ho, Byung-Gook Lee, Dae-Ki Kang. Uni-image: Universal image construction for robust neural model. Neural Networks. 2020; 128:279-287.

Chicago/Turabian Style

Jiacang Ho; Byung-Gook Lee; Dae-Ki Kang. 2020. "Uni-image: Universal image construction for robust neural model." Neural Networks 128: 279-287.

Chapter
Published: 08 February 2020 in Blockchain Technology for IoT Applications

We propose a novel blockchain architecture that provides services with insertion and deletion features without compromising decentralization and integrity. In our proposed architecture, we construct a three-stage, multi-layered blockchain. In previous blockchain architectures, it is difficult to modify the contents and the content index because they reside in one public blockchain. In our architecture, an extra blockchain manages the link between contents and the content index to resolve this difficulty. We present various use cases of our multi-layered blockchain architecture to demonstrate its effectiveness.

ACS Style

Min-Gyu Han; Dae-Ki Kang. Toward Multiple Layered Blockchain Structure for Tracking of Private Contents and Right to Be Forgotten. Blockchain Technology for IoT Applications 2020, 99-114.

AMA Style

Min-Gyu Han, Dae-Ki Kang. Toward Multiple Layered Blockchain Structure for Tracking of Private Contents and Right to Be Forgotten. Blockchain Technology for IoT Applications. 2020:99-114.

Chicago/Turabian Style

Min-Gyu Han; Dae-Ki Kang. 2020. "Toward Multiple Layered Blockchain Structure for Tracking of Private Contents and Right to Be Forgotten." Blockchain Technology for IoT Applications: 99-114.

Article
Published: 02 February 2019 in Applied Intelligence

Restricted Boltzmann machines (RBMs) can be trained by applying stochastic gradient ascent to the objective function for maximum likelihood learning. However, this is a difficult task due to the intractability of the gradient of the marginalization function. Several methodologies adopting a Gibbs Markov chain have been proposed to approximate this intractable gradient, including Contrastive Divergence, Persistent Contrastive Divergence, and Fast Contrastive Divergence. In this paper, we propose an optimization that injects noise into the underlying Monte Carlo estimation. We introduce two novel learning algorithms: Noisy Persistent Contrastive Divergence (NPCD) and, further, Fast Noisy Persistent Contrastive Divergence (FNPCD). We prove that the NPCD and FNPCD algorithms on average approach the equilibrium state under satisfactory conditions. We performed an empirical investigation of diverse CD-based approaches and found that our proposed methods frequently obtain higher classification performance than traditional approaches on several standard image classification benchmarks such as the MNIST, basic, and rotation datasets.
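One PCD update with injected noise can be sketched as follows. This is a minimal, bias-free RBM with a single persistent chain; the noise scale, its placement in the gradient estimate, and the learning rate are illustrative assumptions rather than the paper's NPCD/FNPCD specifics.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def npcd_step(W, v_data, v_chain, lr=0.05, noise=0.01):
    # positive phase: hidden probabilities from a data vector
    h_data = sigmoid(v_data @ W)
    # negative phase: advance the persistent Gibbs chain one step
    h_chain = (rng.random(W.shape[1]) < sigmoid(v_chain @ W)).astype(float)
    v_chain = (rng.random(W.shape[0]) < sigmoid(h_chain @ W.T)).astype(float)
    h_model = sigmoid(v_chain @ W)
    # likelihood-gradient estimate with injected Gaussian noise (NPCD idea)
    grad = np.outer(v_data, h_data) - np.outer(v_chain, h_model)
    grad += rng.normal(scale=noise, size=grad.shape)
    return W + lr * grad, v_chain

W = rng.normal(scale=0.1, size=(6, 4))      # 6 visible x 4 hidden weights
v = np.array([1.0, 1, 0, 0, 1, 0])          # one binary training vector
chain = rng.integers(0, 2, size=6).astype(float)
for _ in range(100):
    W, chain = npcd_step(W, v, chain)
print(W.shape, np.isfinite(W).all())
```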

ACS Style

Prima Sanjaya; Dae-Ki Kang. Optimizing restricted Boltzmann machine learning by injecting Gaussian noise to likelihood gradient approximation. Applied Intelligence 2019, 49, 2723-2734.

AMA Style

Prima Sanjaya, Dae-Ki Kang. Optimizing restricted Boltzmann machine learning by injecting Gaussian noise to likelihood gradient approximation. Applied Intelligence. 2019; 49 (7):2723-2734.

Chicago/Turabian Style

Prima Sanjaya; Dae-Ki Kang. 2019. "Optimizing restricted Boltzmann machine learning by injecting Gaussian noise to likelihood gradient approximation." Applied Intelligence 49, no. 7: 2723-2734.

Research article
Published: 17 October 2018 in BioMed Research International

MapReduce is the preferred cloud computing framework for large-scale data analysis and application processing. MapReduce frameworks currently in place suffer performance degradation because they adopt sequential processing approaches with little modification and thus underutilize cloud resources. To overcome this drawback and reduce costs, we introduce a Parallel MapReduce (PMR) framework in this paper. We design a novel parallel execution strategy for Map and Reduce worker nodes. Our strategy enables further performance improvement and efficient utilization of cloud resources by executing Map and Reduce functions in the multicore environments available on computing nodes. We explain the makespan modeling and working principle of the PMR framework in detail. The performance of PMR is compared with Hadoop through experiments on three biomedical applications. Experiments on the BLAST, CAP3, and DeepBind applications report makespan reductions of 38.92%, 18.00%, and 34.62%, respectively, for the PMR framework against the Hadoop framework. The experimental results prove that the proposed PMR cloud computing platform is robust, cost-effective, and scalable, and that it sufficiently supports diverse applications on public and private cloud platforms. The results also indicate good agreement between the theoretical makespan model presented and the experimental values investigated.
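The core scheduling idea, overlapping the Reduce phase with the Map phase instead of running them strictly in sequence, can be sketched with a word-count toy using Python's standard thread pool. PMR's actual scheduling and makespan model are far more involved; this only shows Reduce folding each Map output as soon as it completes.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor, as_completed

def pmr_wordcount(chunks, workers=4):
    # map tasks run in a pool; the reduce fold consumes results as they
    # finish rather than waiting for the whole map phase to end
    totals = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(Counter, chunk.split()) for chunk in chunks]
        for fut in as_completed(futures):   # reduce overlaps with map
            totals.update(fut.result())
    return totals

chunks = ["map reduce map", "reduce reduce cloud", "cloud map"]
print(pmr_wordcount(chunks)["map"])   # 3
```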

ACS Style

Ahmed Abdulhakim Al-Absi; Najeeb Abbas Al-Sammarraie; Wael Mohamed Shaher Yafooz; Dae-Ki Kang. Parallel MapReduce: Maximizing Cloud Resource Utilization and Performance Improvement Using Parallel Execution Strategies. BioMed Research International 2018, 2018, 1-17.

AMA Style

Ahmed Abdulhakim Al-Absi, Najeeb Abbas Al-Sammarraie, Wael Mohamed Shaher Yafooz, Dae-Ki Kang. Parallel MapReduce: Maximizing Cloud Resource Utilization and Performance Improvement Using Parallel Execution Strategies. BioMed Research International. 2018; 2018:1-17.

Chicago/Turabian Style

Ahmed Abdulhakim Al-Absi; Najeeb Abbas Al-Sammarraie; Wael Mohamed Shaher Yafooz; Dae-Ki Kang. 2018. "Parallel MapReduce: Maximizing Cloud Resource Utilization and Performance Improvement Using Parallel Execution Strategies." BioMed Research International 2018: 1-17.

Journal article
Published: 01 August 2018 in Neural Networks

Training a deep neural network with a large number of parameters often leads to overfitting. Recently, Dropout has been introduced as a simple yet effective regularization approach to combat overfitting in such models. Although Dropout has shown remarkable results in many deep neural network cases, its actual effect on CNNs has not been thoroughly explored. Moreover, training a Dropout model significantly increases training time, as it takes longer to converge than a non-Dropout model with the same architecture. To deal with these issues, we introduce Biased Dropout and Crossmap Dropout, two novel extensions of Dropout based on the behavior of hidden units in a CNN model. Biased Dropout divides the hidden units in a given layer into two groups based on their magnitude and applies a different Dropout rate to each group. Hidden units with higher activation values, which contribute more to the network's final performance, are retained with a lower Dropout rate, while units with lower activation values are exposed to a higher Dropout rate to compensate. The second approach, Crossmap Dropout, is an extension of regular Dropout to convolution layers. Feature maps in a convolution layer are strongly correlated with one another, particularly at identical pixel locations. Crossmap Dropout maintains this important correlation while breaking the correlation between adjacent pixels across all feature maps, by applying the same Dropout mask to every feature map so that units at equivalent positions in each feature map are either all dropped or all active during training. Our experiments with various benchmark datasets show that our approaches provide better generalization than regular Dropout.
Moreover, Biased Dropout converges faster during the training phase, suggesting that assigning noise appropriately to hidden units can lead to effective regularization.
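Both masks can be sketched in NumPy. The median split and the concrete rates in `biased_dropout` are illustrative assumptions (the paper's grouping and rates may differ); `crossmap_dropout` shows the defining property, a single spatial mask shared by every feature map.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_dropout(h, low_rate=0.7, high_rate=0.3):
    # split units at the median activation magnitude; stronger units get the
    # lower drop rate (exact split and rates are illustrative assumptions)
    strong = np.abs(h) >= np.median(np.abs(h))
    rate = np.where(strong, high_rate, low_rate)
    keep = rng.random(h.shape) >= rate
    return h * keep / (1.0 - rate)        # inverted-dropout rescaling

def crossmap_dropout(fmaps, rate=0.5):
    # one spatial mask (H, W) shared by all C feature maps, so the same
    # pixel position is dropped or kept in every map
    c, hgt, wid = fmaps.shape
    mask = (rng.random((hgt, wid)) >= rate).astype(float)
    return fmaps * mask[None, :, :] / (1.0 - rate)

h = rng.normal(size=16)                   # fully connected activations
fmaps = rng.normal(size=(8, 4, 4))        # 8 feature maps of 4x4
out = crossmap_dropout(fmaps)
# every spatial position is dropped in all maps or kept in all maps
consistent = (out == 0).all(axis=0) | (out != 0).all(axis=0)
print(consistent.all())   # True
```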

ACS Style

Alvin Poernomo; Dae-Ki Kang. Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network. Neural Networks 2018, 104, 60-67.

AMA Style

Alvin Poernomo, Dae-Ki Kang. Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network. Neural Networks. 2018; 104:60-67.

Chicago/Turabian Style

Alvin Poernomo; Dae-Ki Kang. 2018. "Biased Dropout and Crossmap Dropout: Learning towards effective Dropout regularization in convolutional neural network." Neural Networks 104: 60-67.

Journal article
Published: 01 October 2017 in Pattern Recognition
ACS Style

Jiacang Ho; Dae-Ki Kang. Mini-batch bagging and attribute ranking for accurate user authentication in keystroke dynamics. Pattern Recognition 2017, 70, 139-151.

AMA Style

Jiacang Ho, Dae-Ki Kang. Mini-batch bagging and attribute ranking for accurate user authentication in keystroke dynamics. Pattern Recognition. 2017; 70:139-151.

Chicago/Turabian Style

Jiacang Ho; Dae-Ki Kang. 2017. "Mini-batch bagging and attribute ranking for accurate user authentication in keystroke dynamics." Pattern Recognition 70: 139-151.

Article
Published: 24 August 2017 in Applied Intelligence

Biometric-based approaches, including keystroke dynamics on keyboards, mice, and mobile devices, have incorporated machine learning algorithms to learn users’ typing behavior for authentication systems. Among these algorithms, one-class naïve Bayes (ONENB) has been shown to be effective in anomaly tests; however, there have been few studies applying the ONENB algorithm to keystroke dynamics-based authentication. We apply the ONENB algorithm to calculate the likelihood of attributes in keystroke dynamics data. Additionally, we propose the speed inspection in typing skills (SITS) algorithm, designed from the observation that every person has a different typing speed on specific keys. These specific characteristics, also known as the keystroke’s index order, can be used as essential patterns for authentication systems to distinguish between a genuine user and an impostor. To further evaluate the effectiveness of the SITS algorithm and examine the quality of each attribute type (e.g., dwell time and flight time), we investigated the influence of attribute types on the keystroke’s index order. From the experimental results of the proposed algorithms and their combination, we observed that the shortest/longest time attributes and the separation of attributes are useful for enhancing the performance of the proposed algorithms.
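The one-class likelihood idea can be sketched as a Gaussian naive Bayes model fitted on the genuine user's timings only. The Gaussian attribute model and the toy timing values are illustrative assumptions; the paper's ONENB formulation and the SITS index-order features are not reproduced here.

```python
from math import log, pi

def fit_onenb(samples):
    # learn per-attribute Gaussian mean/variance from the genuine user only
    n, d = len(samples), len(samples[0])
    mu = [sum(s[j] for s in samples) / n for j in range(d)]
    var = [max(sum((s[j] - mu[j]) ** 2 for s in samples) / n, 1e-6)
           for j in range(d)]
    return mu, var

def log_likelihood(x, model):
    # sum of per-attribute Gaussian log-densities (naive independence)
    mu, var = model
    return sum(-0.5 * (log(2 * pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mu, var))

# toy dwell/flight times in ms for a genuine user vs. an impostor attempt
genuine = [[100, 80, 120], [98, 82, 118], [103, 79, 121], [99, 81, 119]]
model = fit_onenb(genuine)
print(log_likelihood([100, 80, 120], model) >
      log_likelihood([160, 40, 200], model))   # genuine scores higher: True
```

Thresholding this log-likelihood turns the score into an accept/reject decision, which is the anomaly-test setting the abstract describes.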

ACS Style

Jiacang Ho; Dae-Ki Kang. One-class naïve Bayes with duration feature ranking for accurate user authentication using keystroke dynamics. Applied Intelligence 2017, 48, 1547-1564.

AMA Style

Jiacang Ho, Dae-Ki Kang. One-class naïve Bayes with duration feature ranking for accurate user authentication using keystroke dynamics. Applied Intelligence. 2017; 48 (6):1547-1564.

Chicago/Turabian Style

Jiacang Ho; Dae-Ki Kang. 2017. "One-class naïve Bayes with duration feature ranking for accurate user authentication using keystroke dynamics." Applied Intelligence 48, no. 6: 1547-1564.

Original paper
Published: 14 March 2017 in Journal of Computer Virology and Hacking Techniques

In this paper, we apply hidden Markov model (HMM) based sequence classification to misuse-based intrusion detection. An HMM is a statistical Markov model that regards the system as a group of observable states and hidden states. We apply HMMs to detect intrusive program traces in public benchmark datasets, including the University of New Mexico (UNM) and Massachusetts Institute of Technology Lincoln Laboratory (MIT LL) datasets. We compare the performance of the HMM with that of the naïve Bayes (NB) classification algorithm, support vector machines (SVMs), and other basic machine learning algorithms. Our experiments on the UNM and MIT LL datasets show that the HMM achieves performance comparable to previous methods.
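Scoring a program trace under an HMM reduces to the forward algorithm. The two-state model and "system call" alphabet below are toy assumptions for illustration; a real detector would be trained on the UNM / MIT LL traces, with low-likelihood traces flagged as possible intrusions.

```python
from math import exp, inf, log

def _logsumexp(xs):
    xs = list(xs)
    m = max(xs)
    return -inf if m == -inf else m + log(sum(exp(x - m) for x in xs))

def forward_log_likelihood(obs, start, trans, emit):
    # forward algorithm: log P(observation sequence | HMM)
    states = list(start)
    alpha = {s: log(start[s]) + log(emit[s].get(obs[0], 1e-9)) for s in states}
    for o in obs[1:]:
        alpha = {s2: log(emit[s2].get(o, 1e-9)) +
                     _logsumexp(alpha[s1] + log(trans[s1][s2]) for s1 in states)
                 for s2 in states}
    return _logsumexp(alpha.values())

# toy two-state model over a tiny "system call" alphabet (illustrative
# parameters, not trained on the benchmark datasets)
start = {"norm": 0.99, "intr": 0.01}
trans = {"norm": {"norm": 0.99, "intr": 0.01},
         "intr": {"norm": 0.10, "intr": 0.90}}
emit = {"norm": {"open": 0.4, "read": 0.4, "close": 0.2},
        "intr": {"exec": 0.5, "open": 0.3, "read": 0.2}}

normal_trace = ["open", "read", "read", "close"]
odd_trace = ["exec", "exec", "exec", "exec"]
print(forward_log_likelihood(normal_trace, start, trans, emit) >
      forward_log_likelihood(odd_trace, start, trans, emit))   # True
```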

ACS Style

Kyung-Hwan Cha; Dae-Ki Kang. Experimental analysis of hidden Markov model based secure misuse intrusion trace classification and hacking detection. Journal of Computer Virology and Hacking Techniques 2017, 13, 233-238.

AMA Style

Kyung-Hwan Cha, Dae-Ki Kang. Experimental analysis of hidden Markov model based secure misuse intrusion trace classification and hacking detection. Journal of Computer Virology and Hacking Techniques. 2017; 13 (3):233-238.

Chicago/Turabian Style

Kyung-Hwan Cha; Dae-Ki Kang. 2017. "Experimental analysis of hidden Markov model based secure misuse intrusion trace classification and hacking detection." Journal of Computer Virology and Hacking Techniques 13, no. 3: 233-238.

Journal article
Published: 27 December 2016 in Indian Journal of Science and Technology

Objectives: In machine learning based human activity monitoring, the algorithm needs to produce a prediction model with high accuracy. The support vector machine is one of the leading machine learning algorithms. Methods/Statistical Analysis: We propose an optimization approach for support vector machines that tunes the regularization parameter to further improve prediction accuracy in a human activity recognition application. To implement an efficient support vector machine model for a particular dataset that generalizes well and predicts accurately, a suitable regularization parameter has to be applied in the regularization term. Findings: To empirically evaluate the effectiveness of our proposed approach, we present and discuss the results of our implementation on support vector machine models. From our experiments, we obtained the best results when the regularization parameter is 1000. For the accuracy on the train/test dataset pair, we obtained a sufficiently high percentage for regularization parameter values of 10, 100, and 1000, and the best cross-validation accuracy is 98.8575, corresponding to a regularization parameter value of 1000. Additionally, the classification accuracy and the cross-validation accuracy move together; this is evident at the regularization parameter value of 0.0001, where both accuracies are significantly low. Improvements/Applications: Our idea is to replace the single parameter value with a vector of parameter values and compare their results. This shows promising performance improvement, especially if parallel programming can be applied.
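The logarithmic-increment tuning procedure amounts to sweeping the regularization parameter over powers of ten and keeping the value with the best cross-validation accuracy. The accuracy profile below is hypothetical, merely echoing the reported trend (peak at 1000); in a real pipeline `cv_accuracy` would wrap an SVM trainer and cross-validation loop.

```python
def log_grid(start_exp=-4, end_exp=3):
    # logarithmic sweep of the regularization parameter: 1e-4 ... 1e3
    return [10.0 ** e for e in range(start_exp, end_exp + 1)]

def tune(cv_accuracy, grid):
    # pick the parameter value with the best cross-validation accuracy
    return max(grid, key=cv_accuracy)

grid = log_grid()
# hypothetical accuracy profile echoing the reported trend (peak at C = 1000)
accs = dict(zip(grid, [51.2, 60.0, 71.5, 80.3, 88.9, 95.1, 97.4, 98.8575]))
best = tune(accs.get, grid)
print(best)   # 1000.0
```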

ACS Style

Ahmed El-Koka; Dae-Ki Kang. Logarithmic Incremental Parameter Tuning of Support Vector Machines for Human Activity Recognition. Indian Journal of Science and Technology 2016, 9, 1.

AMA Style

Ahmed El-Koka, Dae-Ki Kang. Logarithmic Incremental Parameter Tuning of Support Vector Machines for Human Activity Recognition. Indian Journal of Science and Technology. 2016; 9 (46):1.

Chicago/Turabian Style

Ahmed El-Koka; Dae-Ki Kang. 2016. "Logarithmic Incremental Parameter Tuning of Support Vector Machines for Human Activity Recognition." Indian Journal of Science and Technology 9, no. 46: 1.

Research article
Published: 01 December 2016 in Scientific Programming

One of the latest authentication methods is discerning human gestures. Previous research has shown that different people develop distinct gesture behaviours even when executing the same gesture. The hand gesture is one of the most commonly used gestures in both communication and authentication research, since it requires less room to perform than other bodily gestures. There are different types of hand gestures, and many have been researched, but the stationary hand gesture has yet to be thoroughly explored. General hand gesture authentication has a number of disadvantages and flaws, such as reliability, usability, and computational cost. Although the stationary hand gesture cannot solve all of these problems, it still provides benefits and advantages over other hand gesture authentication methods: it turns the gesture into a motion flow instead of trivial image capturing, requires less room to perform, needs fewer vision cues during performance, and so forth. In this paper, we introduce stationary hand gesture authentication by implementing edit distance on finger pointing direction intervals (ED-FPDI) to model a behaviour-based authentication system. The accuracy rate of the proposed ED-FPDI shows promising results.
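The core of ED-FPDI is an edit-distance comparison between two direction sequences, which can be sketched directly. The direction symbols ('U', 'D', 'L', 'R') and the acceptance threshold below are illustrative assumptions, not the paper's exact encoding.

```python
def edit_distance(a, b):
    # Levenshtein distance between two symbol sequences (single-row DP)
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

def authenticate(template, attempt, threshold=2):
    # accept when the attempted gesture is within `threshold` edits of the
    # enrolled template of quantized finger-pointing directions
    return edit_distance(template, attempt) <= threshold

enrolled = "UURDDLLU"
print(authenticate(enrolled, "UURDDLLU"))   # exact repeat -> True
print(authenticate(enrolled, "UURDDLU"))    # one slip -> True
print(authenticate(enrolled, "DDDDDDDD"))   # different gesture -> False
```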

ACS Style

Alex Ming Hui Wong; Dae-Ki Kang. Stationary Hand Gesture Authentication Using Edit Distance on Finger Pointing Direction Interval. Scientific Programming 2016, 2016, 1-15.

AMA Style

Alex Ming Hui Wong, Dae-Ki Kang. Stationary Hand Gesture Authentication Using Edit Distance on Finger Pointing Direction Interval. Scientific Programming. 2016; 2016:1-15.

Chicago/Turabian Style

Alex Ming Hui Wong; Dae-Ki Kang. 2016. "Stationary Hand Gesture Authentication Using Edit Distance on Finger Pointing Direction Interval." Scientific Programming 2016: 1-15.

Article
Published: 14 October 2016 in Applied Intelligence

Averaged one-dependence estimators (AODE) is a type of supervised learning algorithm that relaxes the conditional independence assumption governing standard naïve Bayes learners. AODE has demonstrated reasonable improvement in classification performance compared with a naïve Bayes learner. However, AODE does not consider the relationships between the super-parent attribute and the other, ordinary attributes. In this paper, we propose a novel AODE-based method, weighted AODE (WAODE), an attribute weighting method that uses the conditional mutual information metric to rank the relations among the attributes. We conducted experiments on University of California, Irvine (UCI) benchmark datasets and compared the accuracy of AODE and our proposed learner. The experimental results show that WAODE achieves higher accuracy than the original AODE.
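The weighting metric, conditional mutual information I(Xi; Xj | Y), can be estimated from co-occurrence counts as sketched below. How WAODE then normalizes and applies these weights inside the AODE ensemble is not reproduced here; the toy attributes are illustrative.

```python
from collections import Counter
from math import log

def cond_mutual_info(xi, xj, y):
    # I(Xi; Xj | Y) from counts: sum over (a, b, c) of
    # p(a,b,c) * log( p(a,b,c) p(c) / (p(a,c) p(b,c)) )
    n = len(y)
    p_ij_y = Counter(zip(xi, xj, y))
    p_i_y = Counter(zip(xi, y))
    p_j_y = Counter(zip(xj, y))
    p_y = Counter(y)
    mi = 0.0
    for (a, b, c), cnt in p_ij_y.items():
        p_abc = cnt / n
        mi += p_abc * log(p_abc * (p_y[c] / n) /
                          ((p_i_y[(a, c)] / n) * (p_j_y[(b, c)] / n)))
    return mi

# toy attributes: X2 copies X1; X3 is independent of X1 within each class
x1 = [0, 0, 1, 1, 0, 1, 0, 1]
x2 = x1[:]
x3 = [0, 1, 0, 1, 0, 0, 1, 1]
y = [0, 0, 0, 0, 1, 1, 1, 1]
print(cond_mutual_info(x1, x2, y) > cond_mutual_info(x1, x3, y))   # True
```

A super-parent/child pair with high conditional mutual information would receive a large weight, reflecting a strong dependence worth modeling.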

ACS Style

Zhong-Liang Xiang; Dae-Ki Kang. Attribute weighting for averaged one-dependence estimators. Applied Intelligence 2016, 46, 616-629.

AMA Style

Zhong-Liang Xiang, Dae-Ki Kang. Attribute weighting for averaged one-dependence estimators. Applied Intelligence. 2016;46(3):616-629.

Chicago/Turabian Style

Zhong-Liang Xiang; Dae-Ki Kang. 2016. "Attribute weighting for averaged one-dependence estimators." Applied Intelligence 46, no. 3: 616-629.

Journal article
Published: 30 September 2016 in International Journal of Advanced Smart Convergence
ACS Style

Jiacang Ho; Dae-Ki Kang. Mini-Batch Ensemble Method on Keystroke Dynamics based User Authentication. International Journal of Advanced Smart Convergence 2016, 5, 40-46.

AMA Style

Jiacang Ho, Dae-Ki Kang. Mini-Batch Ensemble Method on Keystroke Dynamics based User Authentication. International Journal of Advanced Smart Convergence. 2016;5(3):40-46.

Chicago/Turabian Style

Jiacang Ho; Dae-Ki Kang. 2016. "Mini-Batch Ensemble Method on Keystroke Dynamics based User Authentication." International Journal of Advanced Smart Convergence 5, no. 3: 40-46.

Research article
Published: 29 December 2015 in BioMed Research International

Genomic sequence alignment is an important technique for decoding genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data with longer reads. Cloud platforms have been adopted to address the storage and analysis problems that large genomic data pose. Existing gene sequencing tools for cloud platforms predominantly consider short-read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of the map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce the Burrows-Wheeler Aligner’s Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework, with a parallel execution strategy for the MapReduce phases and an optimization of the Smith-Waterman algorithm. Performance evaluation shows an average speed-up of 6.7 for BWASW-PMR over the state-of-the-art Bwasw-Cloud, and an average reduction of 30% in map-phase makespan is reported across all experiments. Optimization of the Smith-Waterman algorithm reduces its execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms.
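For readers unfamiliar with the dynamic programme being optimised, here is a minimal serial Smith-Waterman local-alignment scorer; the scoring parameters are illustrative defaults, and the MapReduce parallelisation described in the paper is not reproduced here.

```python
def smith_waterman(seq1, seq2, match=2, mismatch=-1, gap=-2):
    # Local alignment score via the Smith-Waterman dynamic programme.
    # Scores are clamped at 0 so the best local alignment can start anywhere.
    rows, cols = len(seq1) + 1, len(seq2) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if seq1[i - 1] == seq2[j - 1] else mismatch)
            H[i][j] = max(0, diag,
                          H[i - 1][j] + gap,   # gap in seq2
                          H[i][j - 1] + gap)   # gap in seq1
            best = max(best, H[i][j])
    return best

score = smith_waterman("ACACACTA", "AGCACACA")
```

The O(len(seq1) × len(seq2)) cost of filling this matrix is exactly what makes long-read alignment expensive and what motivates parallelising the computation across cloud workers.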

ACS Style

Ahmed Abdulhakim Al-Absi; Dae-Ki Kang. Long Read Alignment with Parallel MapReduce Cloud Platform. BioMed Research International 2015, 2015, 1-13.

AMA Style

Ahmed Abdulhakim Al-Absi, Dae-Ki Kang. Long Read Alignment with Parallel MapReduce Cloud Platform. BioMed Research International. 2015;2015:1-13.

Chicago/Turabian Style

Ahmed Abdulhakim Al-Absi; Dae-Ki Kang. 2015. "Long Read Alignment with Parallel MapReduce Cloud Platform." BioMed Research International 2015: 1-13.

Journal article
Published: 22 October 2015 in Applied Intelligence

Naïve Bayes learners are widely used, efficient, and effective supervised learning methods for labeled datasets in noisy environments. It has been shown that naïve Bayes learners produce reasonable performance compared with other machine learning algorithms. However, the conditional independence assumption of naïve Bayes learning imposes restrictions on the handling of real-world data. To relax the independence assumption, we propose a smooth kernel to augment weights for the likelihood estimation. We then select an attribute weighting method that uses the mutual information metric to cooperate with the proposed framework. A series of experiments are conducted on 17 UCI benchmark datasets to compare the accuracy of the proposed learner against that of other methods that employ a relaxed conditional independence assumption. The results demonstrate the effectiveness and efficiency of our proposed learning algorithm. The overall results also indicate the superiority of attribute-weighting methods over those that attempt to determine the structure of the network.
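A minimal sketch of attribute-weighted naïve Bayes, in which each attribute's log-likelihood contribution is scaled by a per-attribute weight; the toy dataset and weight values below are hypothetical, and the paper's smooth kernel density framework is not reproduced here.

```python
import math

def weighted_nb_log_posterior(x, y_val, data, labels, weights, alpha=1.0):
    # log P(y) + sum_i w_i * log P(x_i | y), with Laplace smoothing (alpha).
    n = len(labels)
    n_y = sum(1 for l in labels if l == y_val)
    logp = math.log((n_y + alpha) / (n + 2 * alpha))
    for i, xi in enumerate(x):
        n_xy = sum(1 for row, l in zip(data, labels)
                   if l == y_val and row[i] == xi)
        values = {row[i] for row in data}
        cond = (n_xy + alpha) / (n_y + alpha * len(values))
        logp += weights[i] * math.log(cond)
    return logp

# Toy dataset: attribute 0 predicts the class; attribute 1 is noise.
data   = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
weights = [1.0, 0.2]  # hypothetical weights, e.g. derived from mutual information

pred = max({0, 1},
           key=lambda c: weighted_nb_log_posterior((0, 1), c, data, labels, weights))
```

Setting every weight to 1.0 recovers plain naïve Bayes; down-weighting the noisy attribute lets the informative one dominate the posterior, which is the intuition behind attribute-weighting methods.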

ACS Style

Zhong-Liang Xiang; Xiang-Ru Yu; Dae-Ki Kang. Experimental analysis of naïve Bayes classifier based on an attribute weighting framework with smooth kernel density estimations. Applied Intelligence 2015, 44, 611-620.

AMA Style

Zhong-Liang Xiang, Xiang-Ru Yu, Dae-Ki Kang. Experimental analysis of naïve Bayes classifier based on an attribute weighting framework with smooth kernel density estimations. Applied Intelligence. 2015;44(3):611-620.

Chicago/Turabian Style

Zhong-Liang Xiang; Xiang-Ru Yu; Dae-Ki Kang. 2015. "Experimental analysis of naïve Bayes classifier based on an attribute weighting framework with smooth kernel density estimations." Applied Intelligence 44, no. 3: 611-620.