Mr. Hsin-Hung Tseng
Department of Civil Engineering, National Chung Hsing University

Research Keywords & Expertise

Remote Sensing
Edge Computing
Unmanned Aerial System, Photogrammetry, Geodesy, Transformations
Edge AI
Machine Learning and Applications


Feed

Data descriptor
Published: 01 April 2021 in Remote Sensing

Recently, unmanned aerial vehicles (UAVs) have been broadly applied in the remote sensing field. With the great number of UAV images now available, deep learning has been reinvigorated and has produced many results in agricultural applications. Popular image datasets for deep learning model training are generated for general-purpose use, in which the objects, views, and applications reflect ordinary scenarios. UAV images, however, exhibit different patterns, captured mostly from a look-down perspective. This paper provides a verified annotated dataset of UAV images, described in terms of data acquisition, data preprocessing, and a showcase CNN classification. The dataset was collected with a single multi-rotor UAV platform flying a planned scouting route over rice paddies. This paper introduces a semi-automatic annotation method based on the ExGR index to generate training data of rice seedlings. For demonstration, this study modified a classical CNN architecture, VGG-16, to perform patch-based rice seedling detection. K-fold cross-validation was employed with an 80/20 split of training/test data. The accuracy of the network increases with the number of epochs, and all divisions of the cross-validation dataset achieve 0.99 accuracy. The rice seedling dataset provides the training-validation data, patch-based detection samples, and the ortho-mosaic image of the field.
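The ExGR-based semi-automatic annotation could be sketched roughly as below. This is a minimal illustration, not the paper's actual pipeline: the zero threshold, the [0, 1] value range, and the array layout are all assumptions, using the standard ExG = 2g − r − b and ExR = 1.4r − g formulations on normalized chromatic coordinates.

```python
import numpy as np

def exgr_mask(rgb, threshold=0.0):
    """Compute the Excess Green minus Excess Red (ExGR) index from an
    RGB image and return a boolean vegetation mask.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    """
    # Normalized chromatic coordinates r, g, b
    total = rgb.sum(axis=-1, keepdims=True) + 1e-8
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2.0 * g - r - b          # Excess Green
    exr = 1.4 * r - g              # Excess Red
    exgr = exg - exr               # ExGR = ExG - ExR
    return exgr > threshold        # pixels above threshold -> vegetation

# Usage: a toy 2x2 image with green (seedling-like) and non-green pixels
img = np.array([[[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]],
                [[0.5, 0.5, 0.5], [0.1, 0.8, 0.1]]])
mask = exgr_mask(img)   # True where the pixel looks vegetated
```

Pixels flagged True by such a mask could then be grouped into seedling patches for human verification, which is what makes the annotation "semi-automatic".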

ACS Style

Ming-Der Yang; Hsin-Hung Tseng; Yu-Chun Hsu; Chin-Ying Yang; Ming-Hsin Lai; Dong-Hong Wu. A UAV Open Dataset of Rice Paddies for Deep Learning Practice. Remote Sensing 2021, 13, 1358.

AMA Style

Ming-Der Yang, Hsin-Hung Tseng, Yu-Chun Hsu, Chin-Ying Yang, Ming-Hsin Lai, Dong-Hong Wu. A UAV Open Dataset of Rice Paddies for Deep Learning Practice. Remote Sensing. 2021; 13(7):1358.

Chicago/Turabian Style

Ming-Der Yang; Hsin-Hung Tseng; Yu-Chun Hsu; Chin-Ying Yang; Ming-Hsin Lai; Dong-Hong Wu. 2021. "A UAV Open Dataset of Rice Paddies for Deep Learning Practice." Remote Sensing 13, no. 7: 1358.

Journal article
Published: 12 October 2020 in Computers and Electronics in Agriculture

Rice is a globally important crop that will continue to play an essential role in feeding the world as we grapple with climate change and population growth. Lodging is a primary threat to rice production, decreasing rice yield and quality. Lodging assessment is a tedious task that requires heavy labor over a long duration due to the vast land areas involved. Newly developed autonomous crop scouting techniques have shown promise in mapping crop fields without any human interaction. By combining autonomous scouting and lodged-rice detection with edge computing, it is possible to estimate rice lodging faster and at a much lower cost than with previous methods. This study presents an adaptive crop scouting mechanism for autonomous Unmanned Aerial Vehicles (UAVs). We simulate UAV crop scouting of rice fields at multiple levels using deep neural networks and real UAV energy profiles, focusing on areas with high lodging. Using the proposed method, we can scout rice fields 36% faster than conventional scouting methods at 99.25% accuracy.
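The core idea of adaptive scouting under an energy budget can be illustrated with a toy greedy planner. This is only a sketch of the concept, not the paper's mechanism: the grid-cell abstraction, the scalar lodging scores, and the uniform energy costs are all assumptions made for illustration.

```python
def plan_scouting_route(cells, energy_budget):
    """Greedy sketch of adaptive scouting: visit grid cells in order of
    predicted lodging score until the UAV's energy budget is exhausted.

    cells: list of (cell_id, lodging_score, energy_cost) tuples, where
    lodging_score is a coarse prediction (e.g. from a high-altitude pass).
    """
    route, remaining = [], energy_budget
    # Highest predicted lodging first; a conventional sweep visits every
    # cell regardless of how likely it is to contain lodged rice.
    for cell_id, score, cost in sorted(cells, key=lambda c: -c[1]):
        if cost <= remaining:
            route.append(cell_id)
            remaining -= cost
    return route

# Usage: four cells, but the budget only covers the two most lodged areas
cells = [("A", 0.1, 30), ("B", 0.9, 30), ("C", 0.7, 30), ("D", 0.3, 30)]
route = plan_scouting_route(cells, energy_budget=60)
```

Prioritizing high-lodging areas is what lets an adaptive scout cover the informative parts of a field faster than a uniform sweep, which is the intuition behind the reported 36% speedup.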

ACS Style

Ming-Der Yang; Jayson G. Boubin; Hui Ping Tsai; Hsin-Hung Tseng; Yu-Chun Hsu; Christopher C. Stewart. Adaptive autonomous UAV scouting for rice lodging assessment using edge computing with deep learning EDANet. Computers and Electronics in Agriculture 2020, 179, 105817.

AMA Style

Ming-Der Yang, Jayson G. Boubin, Hui Ping Tsai, Hsin-Hung Tseng, Yu-Chun Hsu, Christopher C. Stewart. Adaptive autonomous UAV scouting for rice lodging assessment using edge computing with deep learning EDANet. Computers and Electronics in Agriculture. 2020; 179:105817.

Chicago/Turabian Style

Ming-Der Yang; Jayson G. Boubin; Hui Ping Tsai; Hsin-Hung Tseng; Yu-Chun Hsu; Christopher C. Stewart. 2020. "Adaptive autonomous UAV scouting for rice lodging assessment using edge computing with deep learning EDANet." Computers and Electronics in Agriculture 179: 105817.

Journal article
Published: 14 February 2020 in Remote Sensing

A rapid and precise large-scale agricultural disaster survey is a basis for agricultural disaster relief and insurance but is labor-intensive and time-consuming. This study applies Unmanned Aerial Vehicle (UAV) images and deep-learning image processing to estimate rice lodging in paddies over a large area. The study establishes an image semantic segmentation model employing two neural network architectures, FCN-AlexNet and SegNet, whose effects on interpreting objects of various sizes and on computational efficiency are explored. High-resolution visible images of rice paddies captured by commercial UAVs are used to calculate three vegetation indicators to improve the applicability of visible images. The proposed model was trained and tested on a set of UAV images from 2017 and validated on a set of UAV images from 2019. For the identification of rice lodging in the 2017 UAV images, the F1-score reaches 0.80 and 0.79 for FCN-AlexNet and SegNet, respectively. The F1-score of FCN-AlexNet using the RGB + ExGR combination also reaches 0.78 on the 2019 validation images. The proposed model adopting semantic segmentation networks is shown to be more efficient, approximately 10 to 15 times faster, with a lower misinterpretation rate than the maximum likelihood method.
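Feeding vegetation indicators to a segmentation network alongside the visible bands can be sketched as a simple channel-stacking step. The abstract does not specify which three indicators are used, so treating them as ExG, ExR, and ExGR (the standard visible-band formulations) is an assumption here, as is the [0, 1] value range.

```python
import numpy as np

def add_vegetation_indices(rgb):
    """Derive three visible-band vegetation indicators (ExG, ExR, ExGR)
    and stack them with the RGB bands as extra model input channels.

    rgb: float array of shape (H, W, 3) with values in [0, 1];
    returns an array of shape (H, W, 6).
    """
    # Normalized chromatic coordinates r, g, b
    total = rgb.sum(axis=-1, keepdims=True) + 1e-8
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    exg = 2.0 * g - r - b        # Excess Green
    exr = 1.4 * r - g            # Excess Red
    exgr = exg - exr             # Excess Green minus Excess Red
    indices = np.stack([exg, exr, exgr], axis=-1)
    return np.concatenate([rgb, indices], axis=-1)

# Usage: a 64x64 visible-light tile becomes a 6-channel network input
tile = np.random.rand(64, 64, 3)
x = add_vegetation_indices(tile)   # channels: R, G, B, ExG, ExR, ExGR
```

Stacking indices this way lets a network such as FCN-AlexNet see the RGB + ExGR combination directly, without changing anything about the architecture except the number of input channels.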

ACS Style

Ming-Der Yang; Hsin-Hung Tseng; Yu-Chun Hsu; Hui Ping Tsai. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sensing 2020, 12, 633 .

AMA Style

Ming-Der Yang, Hsin-Hung Tseng, Yu-Chun Hsu, Hui Ping Tsai. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sensing. 2020; 12(4):633.

Chicago/Turabian Style

Ming-Der Yang; Hsin-Hung Tseng; Yu-Chun Hsu; Hui Ping Tsai. 2020. "Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images." Remote Sensing 12, no. 4: 633.

Conference paper
Published: 01 January 2020 in 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC)

In recent years, edge computing and deep learning have successfully performed processing and classification tasks in a variety of fields, including agriculture. This research therefore aims to use an unmanned aerial vehicle (UAV) for agricultural applications by integrating edge computing and deep learning techniques. The experiment was carried out at the NCHU Experimental Farm. This study uses a DJI Matrice 100 drone with an ASUS Tinker Board S embedded system connected to a Logitech C925e webcam for image capture. The ASUS Tinker Board S runs a folder-monitoring program and sends images over the 4G LTE network to the backend server whenever new images are captured and stored. The backend server runs a pre-trained image semantic segmentation model and provides an image inference service. The inference results, together with the associated segmented image, are sent to the mobile device of the drone controller, where a dynamic flight-control action can be triggered. The image semantic segmentation model adopts the SegNet network architecture. For comparison, another network architecture, FCN-AlexNet, was also trained and validated. The preliminary results show that SegNet outperformed FCN-AlexNet in image semantic segmentation in terms of training and validation evaluation. The average inference speed of the semantic segmentation model is 0.7 s, with a segmentation identification accuracy of 89%. The promising results shed light on many agricultural applications, such as crop growth condition assessment, fertilizer management, and yield prediction. Additionally, this research offers possible solutions to the agricultural labor shortage, a common challenge in aging societies such as Taiwan and many other countries worldwide.
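The folder-monitoring step on the Tinker Board S can be sketched as a simple polling routine. This is a minimal illustration, not the actual on-board program: the polling approach, file extensions, and the upload endpoint mentioned in the comment are all assumptions.

```python
import os

def find_new_images(folder, seen, exts=(".jpg", ".png")):
    """Return image files in `folder` that have not been handled yet.

    `seen` is a set of filenames already processed; it is updated in
    place so repeated calls only report newly captured images.
    """
    new = [f for f in sorted(os.listdir(folder))
           if f.lower().endswith(exts) and f not in seen]
    seen.update(new)
    return new

# Usage sketch of the on-board loop (the upload endpoint is hypothetical):
# seen = set()
# while True:
#     for name in find_new_images("/data/captures", seen):
#         # POST the image over 4G LTE to the backend inference server,
#         # e.g. with requests.post("http://backend.example/infer", ...)
#         pass
#     time.sleep(1)
```

Keeping the edge device's job this small (detect and forward new files) is what allows the heavyweight SegNet inference to run on the backend server instead of on the drone.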

ACS Style

Ming Der Yang; Hsin-Hung Tseng; Yu Chun Hsu; Wei Chen Tseng. Real-time Crop Classification Using Edge Computing and Deep Learning. 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC) 2020, 1-4.

AMA Style

Ming Der Yang, Hsin-Hung Tseng, Yu Chun Hsu, Wei Chen Tseng. Real-time Crop Classification Using Edge Computing and Deep Learning. 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC). 2020; 1-4.

Chicago/Turabian Style

Ming Der Yang; Hsin-Hung Tseng; Yu Chun Hsu; Wei Chen Tseng. 2020. "Real-time Crop Classification Using Edge Computing and Deep Learning." 2020 IEEE 17th Annual Consumer Communications & Networking Conference (CCNC): 1-4.