Ming-Gang Wen
Department of Information Management, National United University, Miaoli 36063, Taiwan



Feed

Journal article
Published: 29 October 2015 in Remote Sensing

In this paper, a general nearest feature line (NFL) embedding (NFLE) transformation, called fuzzy-kernel NFLE (FKNFLE), is proposed for hyperspectral image (HSI) classification, in which kernelization and fuzzification are considered simultaneously. Although NFLE has demonstrated its discriminative capability, the linear NFLE method cannot efficiently capture a non-linear manifold structure with linear scatters. In the proposed scheme, samples are projected into a kernel space and assigned fuzzy weights according to those of their neighbors. The within-class and between-class scatters are computed using these fuzzy weights, and the optimal transformation is obtained by maximizing the Fisher criterion in the kernel space. In this way, the kernelized manifold learning preserves the local manifold structure both in a Hilbert space and in the reduced low-dimensional space. The proposed method was compared with various state-of-the-art methods on three benchmark data sets; the experimental results show that FKNFLE outperformed the more conventional methods.
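The building block of any NFLE-style method is the nearest-feature-line distance: the distance from a sample to the line spanned by a pair of same-class prototypes. The minimal sketch below shows only that core geometric step in plain Python; the fuzzy weighting, kernelization, and Fisher-criterion optimization of FKNFLE are omitted, and the function name is illustrative rather than taken from the paper.

```python
import math

def nfl_distance(x, p1, p2):
    """Distance from sample x to the feature line through prototypes p1 and p2.

    All arguments are equal-length sequences of floats; p1 != p2.
    """
    d = [b - a for a, b in zip(p1, p2)]           # line direction p2 - p1
    v = [b - a for a, b in zip(p1, x)]            # vector from p1 to x
    t = sum(vi * di for vi, di in zip(v, d)) / sum(di * di for di in d)
    proj = [a + t * di for a, di in zip(p1, d)]   # projection of x onto the line
    return math.sqrt(sum((xi - pi) ** 2 for xi, pi in zip(x, proj)))
```

Note that the projection parameter t is not clipped to [0, 1], so the feature line extrapolates beyond the two prototypes, which is the usual NFL convention.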

Citation

Ying-Nong Chen, Cheng-Ta Hsieh, Ming-Gang Wen, Chin-Chuan Han, Kuo-Chin Fan. "A Dimension Reduction Framework for HSI Classification Using Fuzzy and Kernel NFLE Transformation." Remote Sensing 7, no. 11 (2015): 14292-14326.

Journal article
Published: 12 June 2014 in IEEE Systems Journal

In this paper, a visual face feature extraction scheme using image processing techniques for health management systems is proposed. Five visual face features, i.e., face contours, face colors, smile lines, hairlines, and melanocytes, are detected, extracted, stored, and retrieved for healthcare management. These operations are performed by software services in cloud computing environments. Users can periodically browse changes in their face shape and texture features, and Chinese medical doctors can also examine the extracted visual features and give suggestions to patients via the Internet.
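The store-and-browse workflow described above can be sketched as a minimal in-memory feature store: features are saved per user and date, and the history of one feature can be retrieved in time order for browsing changes. This is an illustrative stand-in, not the paper's cloud service; the class and method names are hypothetical.

```python
import datetime

class FaceFeatureStore:
    """Minimal in-memory stand-in for a cloud face-feature service (hypothetical API)."""

    # The five visual face features named in the abstract.
    FEATURES = {"contour", "color", "smile_lines", "hairline", "melanocytes"}

    def __init__(self):
        self._records = {}  # user_id -> list of (date, feature dict)

    def save(self, user_id, features, date=None):
        """Store a dict of feature values for one user on one date."""
        unknown = set(features) - self.FEATURES
        if unknown:
            raise ValueError(f"unknown features: {unknown}")
        date = date or datetime.date.today().isoformat()
        self._records.setdefault(user_id, []).append((date, features))

    def history(self, user_id, feature):
        """Time-ordered values of one feature, for browsing changes over time."""
        return [(d, f[feature])
                for d, f in sorted(self._records.get(user_id, []))
                if feature in f]
```

A real deployment would put this behind a web service with persistent storage, but the browse-by-feature access pattern is the same.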

Citation

Sheng-Bin Hsu, Chin-Chuan Han, Ming-Gang Wen, Yu-Chi Wu, Kuo-Chin Fan. "Extraction of Visual Facial Features for Health Management." IEEE Systems Journal 10, no. 3 (2014): 1-11.

Article
Published: 06 October 2010 in ETRI Journal

Due to the rapid development of mobile devices equipped with cameras, instant translation of any text seen in any context is possible. Mobile devices can serve as a translation tool by recognizing the text in captured scenes. However, images captured by cameras contain external or unwanted effects that traditional optical character recognition (OCR) does not need to handle. In this paper, we segment a text image captured by a mobile device into individual characters to facilitate OCR kernel processing. Before character segmentation, text detection and text line construction are performed. A novel character segmentation method that integrates touched-character filters is then applied to the camera-captured text images. In addition, periphery features are extracted from the segmented images of touched characters and fed to support vector machines to calculate confidence values. In our experiments, the accuracy rate of the proposed character segmentation system is 94.90%, which demonstrates the effectiveness of the proposed method.
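Periphery features of the kind mentioned above describe, for each scan line of a binary character image, how deep one must travel from the image border before hitting the first foreground pixel. The sketch below computes such a profile in plain Python; it is a simplified illustration of the general idea, not the paper's exact feature definition, and the function name is an assumption.

```python
def periphery_profile(img, side="left"):
    """Normalized depth of the first foreground pixel per row, scanned from one side.

    img: 2-D list of 0/1 values (1 = foreground); side: "left" or "right".
    A row with no foreground pixel gets the maximum depth of 1.0.
    """
    width = len(img[0])
    profile = []
    for row in img:
        cells = row if side == "left" else row[::-1]
        depth = next((i for i, v in enumerate(cells) if v), width)
        profile.append(depth / width)
    return profile
```

Concatenating the profiles from all four sides gives a fixed-length vector that can be fed to a support vector machine, whose decision values then serve as the confidence scores used to accept or reject a candidate segmentation.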

Citation

Hsin-Te Lue, Ming-Gang Wen, Hsu-Yung Cheng, Kuo-Chin Fan, Chih-Wei Lin, Chih-Chang Yu. "A Novel Character Segmentation Method for Text Images Captured by Cameras." ETRI Journal 32, no. 5 (2010): 729-739.