In this study, we use OpenPose to capture facial feature nodes, build and label a data set, and train a neural network model on it. The goal is to predict the direction of a person's line of sight from the face and its feature nodes, and then apply object detection to identify the object the person is observing. After implementing this method, we found that it can correctly estimate the human body's pose. Furthermore, when multiple lenses provide additional information, the results are better than with a single lens, and the observed object can be evaluated more accurately. We also found that the viewing direction can be judged from the head in the image. Finally, in tests with tilted faces, the facial nodes can still be captured up to a tilt angle of approximately 60 degrees; when the tilt angle exceeds 60 degrees, the facial nodes can no longer be used.
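As a rough illustration of inferring viewing direction from facial feature nodes, the sketch below estimates head yaw from three 2-D keypoints (the two eyes and the nose). This is a hand-rolled geometric heuristic with assumed thresholds, not the paper's neural network model:

```python
import numpy as np

def gaze_direction_from_keypoints(left_eye, right_eye, nose):
    """Rough head-yaw estimate from 2-D facial keypoints (a hypothetical
    simplification of keypoint-based gaze prediction). Returns 'left',
    'right', or 'center' based on where the nose sits relative to the
    midpoint between the eyes."""
    left_eye, right_eye, nose = map(np.asarray, (left_eye, right_eye, nose))
    eye_mid = (left_eye + right_eye) / 2.0
    eye_span = np.linalg.norm(right_eye - left_eye)
    # Normalised horizontal offset of the nose from the eye midpoint.
    offset = (nose[0] - eye_mid[0]) / eye_span
    if offset > 0.15:
        return "right"
    if offset < -0.15:
        return "left"
    return "center"
```

A frontal face gives a near-zero offset; turning the head shifts the nose toward one eye, pushing the offset past the (assumed) threshold.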
Yu-Shiuan Tsai; Nai-Chi Chen; Yi-Zeng Hsieh; Shih-Syun Lin. The Development of Long-Distance Viewing Direction Analysis and Recognition of Observed Objects Using Head Image and Deep Learning. Mathematics 2021, 9, 1880.
This study uses deep learning to model the discharge characteristic curve of a lithium-ion battery. A battery measurement instrument was used to charge and discharge the battery and establish the discharge characteristic curve. A parametric method was used to approximate the discharge characteristic curve, and it was improved with an MLP (multilayer perceptron), an RNN (recurrent neural network), LSTM (long short-term memory), and a GRU (gated recurrent unit); these methods produced their results as graphs. We then used a genetic algorithm (GA) to obtain the parameters of the discharge characteristic curve equation.
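The GA-based parameter search can be sketched as follows. The quadratic `discharge_model` is a placeholder form chosen only for illustration (the paper's actual curve equation is not reproduced here), and the truncation-selection GA is a minimal toy, not the study's implementation:

```python
import random

def discharge_model(q, a, b, c):
    # Hypothetical quadratic form of a discharge curve; stands in for the
    # paper's actual discharge characteristic curve equation.
    return a - b * q - c * q ** 2

def fit_curve_ga(data, generations=200, pop_size=40, seed=0):
    """Fit (a, b, c) to (charge, voltage) samples with a toy genetic
    algorithm: truncation selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 5) for _ in range(3)] for _ in range(pop_size)]

    def mse(p):
        return sum((discharge_model(q, *p) - v) ** 2 for q, v in data) / len(data)

    for _ in range(generations):
        pop.sort(key=mse)
        elite = pop[: pop_size // 4]              # keep the best quarter
        pop = elite + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return min(pop, key=mse)
```

Because the elite survive unchanged, the best fitness is monotone non-increasing over generations.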
Shih-Wei Tan; Sheng-Wei Huang; Yi-Zeng Hsieh; Shih-Syun Lin. The Estimation Life Cycle of Lithium-Ion Battery Based on Deep Learning Network and Genetic Algorithm. Energies 2021, 14, 4423.
This study, conducted under the auspices of China Steel Corporation, Taiwan, supports the national energy policy of the 2025 Non-Nuclear Home. Under this policy, an estimated 600 offshore wind turbines will be installed by 2025. In order to carry out the wind energy project effectively, a preliminary study must be conducted. In this article, we investigated the influence of the wake effect on the efficiency of the turbine layout in a windfarm. A distributed genetic algorithm is deployed to optimize the wind turbines' layout in order to alleviate the detrimental wake effect. In the current stage of this research, historical weather data from weather stations near the site of the 29th windfarm, Taiwan, were collected by Academia Sinica. Our wake-effect-resilient optimized windfarm showed superior performance over a conventional windfarm. Additionally, an operation cost minimization process is demonstrated and implemented using an ant colony optimization algorithm to minimize the total length of the power-carrying interconnecting cables for the turbines inside the optimized windfarm.
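A minimal sketch of population-based layout optimization: turbines are placed in a square site and a toy GA maximizes the minimum pairwise spacing, a crude proxy for reducing wake interference. The site size, mutation scale, and fitness function are all illustrative assumptions, not the paper's distributed GA or its wake model:

```python
import random, math

def min_spacing(layout):
    # Smallest pairwise turbine distance: a crude stand-in for wake loss
    # (closer turbines -> stronger wake interference).
    return min(math.dist(p, q) for i, p in enumerate(layout)
               for q in layout[i + 1:])

def layout_ga(n_turbines=5, side=1000.0, generations=300, pop_size=30, seed=1):
    """Toy genetic algorithm that places turbines in a square site so that
    the minimum spacing is maximised -- a simplified proxy for
    wake-effect-aware layout optimisation."""
    rng = random.Random(seed)

    def random_layout():
        return [(rng.uniform(0, side), rng.uniform(0, side))
                for _ in range(n_turbines)]

    pop = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=min_spacing, reverse=True)
        elite = pop[: pop_size // 5]              # elitist selection
        pop = elite + [
            [(min(side, max(0.0, x + rng.gauss(0, 20))),
              min(side, max(0.0, y + rng.gauss(0, 20))))
             for x, y in rng.choice(elite)]
            for _ in range(pop_size - len(elite))
        ]
    return max(pop, key=min_spacing)
```

A real objective would replace `min_spacing` with an energy-yield model that accounts for wind direction statistics and wake decay.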
Yi-Zeng Hsieh; Shih-Syun Lin; En-Yu Chang; Kwong-Kau Tiong; Shih-Wei Tan; Chiou-Yi Hor; Shyi-Chy Cheng; Yu-Shiuan Tsai; Chao-Rong Chen. Wind Technologies for Wake Effect Performance in Windfarm Layout Based on Population-Based Optimization Algorithm. Energies 2021, 14, 4125.
Exercise monitoring systems for rehabilitation are usually unable to pinpoint the exact body part involved in a patient's exercise. The research objective is to develop a projection-based motion recognition (PMR) algorithm, based on depth data and widely accepted methods, to solve this problem. We regard a motion trajectory as a combination of basic posture units and project these units onto a 2-D space via a projection mapping. Each motion trajectory is transformed into a 2-D trajectory map by sequentially connecting the basic posture units it contains. Finally, we employ a convolutional neural network (CNN)-based classifier to classify the trajectory maps. The classification accuracy reaches 95.21%. The originality of the PMR algorithm lies in two points: (1) it has some generalization capability, since it adopts only popular methods and contains an essential and comprehensive mechanism; (2) the resulting trajectory map may reveal how well a patient executes the rehabilitation assignments.
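The projection-mapping idea can be sketched as follows: posture vectors are projected to 2-D (here with PCA, an assumed choice of projection) and the visited cells are rasterized into a trajectory map that a CNN classifier could then consume:

```python
import numpy as np

def trajectory_map(postures, grid=32):
    """Project a sequence of high-dimensional posture vectors to 2-D via
    PCA and rasterise the visited cells into a binary trajectory map --
    a simplified stand-in for the paper's projection mapping."""
    X = np.asarray(postures, dtype=float)
    X = X - X.mean(axis=0)
    # Top-2 principal directions via SVD.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pts = X @ vt[:2].T
    # Normalise into [0, grid-1] and mark the visited cells.
    span = pts.max(axis=0) - pts.min(axis=0)
    span[span == 0] = 1.0                       # avoid division by zero
    ij = ((pts - pts.min(axis=0)) / span * (grid - 1)).astype(int)
    img = np.zeros((grid, grid), dtype=np.uint8)
    img[ij[:, 1], ij[:, 0]] = 1
    return img
```

Two motions with different posture sequences trace different paths through the grid, which is what makes the maps separable by an image classifier.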
Mu-Chun Su; Pang-Ti Tai; Jieh-Haur Chen; Yi-Zeng Hsieh; Shu-Fang Lee; Zhe-Fu Yeh. A Projection-based Human Motion Recognition Algorithm based on Depth Sensors. IEEE Sensors Journal 2021, PP, 1-1.
Neural networks have achieved great results in sound recognition, and many kinds of acoustic features have been tried as training input for such networks. However, there is still doubt about whether a neural network can efficiently extract features from a raw audio signal input. This study improves on the raw-signal-input networks of earlier studies by using deeper network architectures, so that the raw signals can be better analyzed. We also discuss several network settings; with a spectrogram-like conversion, our network reaches an accuracy of 73.55% on the open audio dataset "Dataset for Environmental Sound Classification 50" (ESC50). This study also proposes a network architecture that can combine network branches fed with different features. With the help of global pooling, a flexible fusion scheme is integrated into the network. Our experiment successfully combines two networks with different audio feature inputs (a raw audio signal and the log-mel spectrum). With these settings, the proposed ParallelNet reaches an accuracy of 81.55% on ESC50, which approaches the recognition level of human beings.
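The global-pooling fusion can be illustrated in a few lines: pooling collapses each branch's feature map to one value per channel, so branches with differently shaped inputs become comparable and can simply be concatenated before a shared classifier. This is a shape-level sketch only, not the ParallelNet architecture:

```python
import numpy as np

def global_avg_pool(feature_map):
    # Collapse all time/frequency axes, keeping one value per channel.
    return feature_map.mean(axis=tuple(range(1, feature_map.ndim)))

def fuse_branches(raw_feat, spec_feat):
    """Fuse features from a raw-signal branch (channels x time) and a
    log-mel branch (channels x freq x time): global pooling removes the
    incompatible axes, leaving two channel vectors to concatenate."""
    return np.concatenate([global_avg_pool(raw_feat),
                           global_avg_pool(spec_feat)])
```

Whatever the temporal resolution of each branch, the fused vector's length is just the sum of the two channel counts, so the classifier head never needs to change.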
Yu-Kai Lin; Mu-Chun Su; Yi-Zeng Hsieh. The Application and Improvement of Deep Neural Networks in Environmental Sound Recognition. Applied Sciences 2020, 10, 5965.
This study proposes a design for a wearable guide device for blind or visually impaired persons on the basis of video streaming and deep learning. This work mainly aims to provide supplementary assistance to the white canes used by visually impaired persons and to offer them increased freedom of movement and independence through the proposed wearable device. The considerable amount of environmental information provided by the device also ensures enhanced safety for its users. The proposed device uses an RGB camera instead of the RGB-D camera commonly used in computer vision. Deep learning is applied to convert RGB images into depth images and to calculate the plane for detecting indoor objects and safe walking routes. A convolutional neural network (CNN) is adopted; its structure, which resembles that of the human brain, simulates a neural transmission mechanism similar to that triggered in human learning. The system can therefore learn a large number of feature routes and then generate a model from the learning result. The proposed system can help blind or visually impaired persons identify flat and safe walking routes.
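One step described above, detecting a flat walking plane from depth-derived 3-D points, can be sketched with a toy RANSAC plane fit. RANSAC is an assumed technique here; the paper's plane calculation may differ:

```python
import numpy as np

def fit_floor_plane(points, iters=200, tol=0.02, seed=0):
    """Toy RANSAC plane fit over 3-D points (e.g. back-projected from a
    predicted depth map): repeatedly sample 3 points, form a plane, and
    keep the plane supported by the most inliers."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers, best = 0, None
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        dist = np.abs((pts - p0) @ n)     # point-to-plane distances
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best_inliers, best = inliers, (n, p0)
    return best, best_inliers
```

Points within `tol` of the dominant plane would be treated as walkable floor; everything else is a candidate obstacle.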
Yi-Zeng Hsieh; Shih-Syun Lin; Fu-Xiong Xu. Development of a wearable guide device based on convolutional neural network for blind or visually impaired persons. Multimedia Tools and Applications 2020, 79, 1-19.
In recent years, the breakthrough of neural networks and the rise of deep learning have led to the advancement of machine vision, which is now commonly used in practical applications of image recognition. Automobiles, drones, portable devices, behavior recognition, indoor positioning and many other industries also rely on such integrated applications and require the support of deep learning and machine vision. These technologies demand high accuracy in the recognition of portraits and objects. The recognition of human figures is likewise a research goal that has drawn great attention in various fields. However, a portrait is affected by factors such as height, weight, posture, angle, and whether it is occluded, all of which affect the accuracy of recognition. This paper applies deep learning to portraits with different poses and angles, and in particular estimates the actual distance of an occluded portrait from a single lens (depth estimation), so that the method can later be used for the automatic control of drones. Traditional image-based depth calculation methods fall into three types: single-lens estimation, dual-lens estimation, and optical-band estimation. Given that the second and third categories require relatively large and expensive equipment to perform distance calculations effectively, numerous methods for calculating distance with a single lens have recently been proposed. However, whether they use traditional "units of distance measurement calibration", "defocus distance measurement", or the "three-dimensional grid space messages distance measurement method", all of these face corresponding difficulties, must cope with outside disturbances, and must process occluded images.
Therefore, building on OpenPose, a recent method proposed by Carnegie Mellon University, this paper proposes a depth algorithm for single-lens occluded portraits to estimate the actual distance of a person under different poses, viewing angles and occlusions.
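The pinhole-camera intuition behind single-lens depth estimation can be shown in a few lines: if a body segment of known real-world length spans `h` pixels, the distance is roughly focal_length × real_length / h. The focal length and torso length below are assumed illustrative values, not the paper's calibration or algorithm:

```python
def estimate_distance(shoulder_px, hip_px, focal_px=600.0, torso_m=0.5):
    """Pinhole-model distance estimate from two body keypoints (e.g.
    OpenPose shoulder and hip): distance = focal * real_length / pixel_height.
    focal_px and torso_m are assumed values for illustration."""
    h = abs(shoulder_px[1] - hip_px[1])
    if h == 0:
        raise ValueError("keypoints coincide vertically")
    return focal_px * torso_m / h
```

Using a skeleton segment rather than the full body outline is what lets this style of estimate tolerate partial occlusion: only two keypoints need to be visible.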
Yu-Shiuan Tsai; Li-Heng Hsu; Yi-Zeng Hsieh; Shih-Syun Lin. The Real-Time Depth Estimation for an Occluded Person Based on a Single Image and OpenPose Method. Mathematics 2020, 8, 1333.
Under the vigorous development of global anticipatory computing in recent years, there have been numerous applications of artificial intelligence (AI) in people's daily lives. Learning analytics of big data can help students, teachers, and school administrators gain new knowledge and estimate learning information; in turn, the enhanced education contributes to the rapid development of science and technology. Education is sustainable lifelong learning, as well as the most important promoter of science and technology worldwide. In recent years, a large number of anticipatory computing applications based on AI have promoted the training of professional AI talent. This study therefore aims to design interactive robot-assisted teaching for the classroom to help students overcome academic difficulties. Teachers, students, and robots in the classroom can interact with each other through the ARCS motivation model in programming. The proposed method can help students develop motivation, relevance, and confidence in learning, thus enhancing their learning effectiveness. The robot, like a teaching assistant, can help students solve problems in the classroom by answering questions and evaluating students' answers in natural and responsive interactions. The natural interactive responses of the robot are achieved through the use of a database of emotional big data (the Google facial expression comparison dataset). The robot is loaded with an emotion recognition system that assesses the moods of the students through their expressions and voices and then offers corresponding emotional responses. The robot is thus able to communicate naturally with the students, attracting their attention, triggering their learning motivation, and improving their learning effectiveness.
Yi-Zeng Hsieh; Shih-Syun Lin; Yu-Cin Luo; Yu-Lin Jeng; Shih-Wei Tan; Chao-Rong Chen; Pei-Ying Chiang. ARCS-Assisted Teaching Robots Based on Anticipatory Computing and Emotional Big Data for Improving Sustainable Learning Efficiency and Motivation. Sustainability 2020, 12, 5605.
This study presents a stereo vision robotic arm assistance system in which the robot arm can perform grasping with five degrees of freedom in a single instance. The control algorithm is built on population-based optimization and is specifically aimed at assisting people with disabilities. The proposed stereo vision-based robot arm system enables users to manipulate objects through the robot's ability to aim at objects using computer vision. The stereo vision system computes its parameters from the real-world position of the object in the coordinate system. A trained deep fully connected network is then adopted to compensate for the location measurement errors caused by the inaccurate parameters obtained in the deep learning procedure. Subsequently, the proposed Q-learning-based swarm optimization algorithm is adopted to solve the kinematics problem and compute the angle of each servo. The performance of the robot arm is evaluated in several real-life experiments that test its ability to grip a target object in different positions.
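The idea of searching for servo angles with a population-based optimizer over repeated forward-kinematics evaluations can be sketched with a plain particle swarm on a 2-link planar arm. This is a simplified stand-in for the paper's Q-learning-based swarm optimization, with assumed link lengths and swarm constants:

```python
import math, random

def forward_kinematics(angles, lengths=(1.0, 1.0)):
    # End-effector position of a planar 2-link arm.
    t1, t2 = angles
    x = lengths[0] * math.cos(t1) + lengths[1] * math.cos(t1 + t2)
    y = lengths[0] * math.sin(t1) + lengths[1] * math.sin(t1 + t2)
    return x, y

def solve_angles(target, iters=150, n_particles=30, seed=0):
    """Toy particle swarm search for joint angles whose forward
    kinematics reach `target` (squared-distance cost)."""
    rng = random.Random(seed)

    def cost(a):
        x, y = forward_kinematics(a)
        return (x - target[0]) ** 2 + (y - target[1]) ** 2

    pos = [[rng.uniform(-math.pi, math.pi) for _ in range(2)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                # Inertia + pull toward personal and global bests.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i][:]
    return gbest
```

The optimizer never needs the inverse kinematics in closed form; it only evaluates the forward model, which is why population-based search suits arms whose analytic inverse is awkward.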
Yi-Zeng Hsieh; Shih-Syun Lin. Robotic Arm Assistance System Based on Simple Stereo Matching and Q-Learning Optimization. IEEE Sensors Journal 2020, 20, 10945-10954.
The human eye is a vital sensory organ that provides us with visual information about the world around us. It can also convey such information as our emotional state to people with whom we interact. In technology, eye tracking has become a hot research topic recently, and a growing number of eye-tracking devices have been widely applied in fields such as psychology, medicine, education, and virtual reality. However, most commercially available eye trackers are prohibitively expensive and require that the user’s head remain completely stationary in order to accurately estimate the direction of their gaze. To address these drawbacks, this paper proposes an inner corner-pupil center vector (ICPCV) eye-tracking system based on a deep neural network, which does not require that the user’s head remain stationary or expensive hardware to operate. The performance of the proposed system is compared with those of other currently available eye-tracking estimation algorithms, and the results show that it outperforms these systems.
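The ICPCV feature itself is simple to compute. As a minimal illustration, the sketch below pairs it with a least-squares affine calibration instead of the paper's deep neural network; the affine map is an assumed simplification shown only to make the vector-to-screen mapping concrete:

```python
import numpy as np

def icpc_vector(inner_corner, pupil_center):
    # The inner corner-pupil center vector (ICPCV): the pupil position
    # relative to a comparatively head-stable landmark, the inner eye corner.
    return np.asarray(pupil_center, float) - np.asarray(inner_corner, float)

def fit_gaze_map(vectors, screen_points):
    """Least-squares affine map from ICPCV to screen coordinates,
    fitted from calibration pairs (vector, known screen target)."""
    V = np.hstack([np.asarray(vectors, float),
                   np.ones((len(vectors), 1))])    # homogeneous coords
    S = np.asarray(screen_points, float)
    M, *_ = np.linalg.lstsq(V, S, rcond=None)
    return M

def predict_gaze(M, vector):
    # Apply the calibrated map to a new ICPCV measurement.
    return np.append(np.asarray(vector, float), 1.0) @ M
```

Anchoring the pupil to the inner eye corner, rather than to the image frame, is what gives the feature some tolerance to small head movements; the paper's deep network extends this tolerance further.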
Mu-Chun Su; Tat-Meng U; Yi-Zeng Hsieh; Zhe-Fu Yeh; Shu-Fang Lee; Shih-Syun Lin. An Eye-Tracking System based on Inner Corner-Pupil Center Vector and Deep Neural Network. Sensors 2019, 20, 25.
Magnetic resonance imaging (MRI) offers the most detailed brain structure images available today; it can identify tiny lesions and cerebral cortical abnormalities. The primary purpose of the procedure is to confirm whether there is a structural variation that causes epilepsy, such as hippocampal sclerosis, focal cortical dysplasia, or cavernous hemangioma. Cerebrovascular disease, the second most common cause of death in the world, is also the fourth leading cause of death in Taiwan, and among cerebrovascular diseases, stroke has the highest incidence. The most common forms include large-vessel atherosclerotic lesions, small-vessel lesions, and cardiac emboli. The purpose of this work is to establish a computer-aided diagnosis system for small blood vessel lesions in MRI images, using convolutional neural networks and deep learning to detect occluded blood-vessel blocks in brain MRI images; the detected blocks can help clinicians more quickly determine the probability and severity of stroke in patients. We analyzed MRI data from 50 patients, including 30 patients with stroke, 17 patients with occlusion but no stroke, and 3 patients with dementia. The system mainly helps doctors find cerebral small vessel lesions in brain MRI images and outputs the findings as labeled images. The marked contents include the position coordinates of the small blood vessel blockage, the block range, the area size, and whether it may cause a stroke. Finally, all MRI images of the patient are synthesized into a 3-D display of the small blood vessels in the brain to assist the doctor in making a diagnosis or to provide accurate lesion locations for the patient.
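The labeled output described above (blockage coordinates, block range, area size) can be illustrated with a toy connected-component pass over a binary lesion mask. The real system works on MRI volumes with a CNN, which this sketch does not attempt; it only shows how per-lesion records can be derived from a detection mask:

```python
from collections import deque

def label_lesions(mask):
    """Connected-component labelling of a binary lesion mask
    (4-neighbour BFS), returning one record per lesion with its
    bounding box (position/range) and pixel area (size)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    lesions = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                queue, cells = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = [y for y, _ in cells]
                xs = [x for _, x in cells]
                lesions.append({"bbox": (min(ys), min(xs), max(ys), max(xs)),
                                "area": len(cells)})
    return lesions
```

Stacking such per-slice records across a patient's slices is one plausible route to the 3-D lesion display the abstract describes.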
Yi-Zeng Hsieh; Yu-Cin Luo; Chen Pan; Mu-Chun Su; Chi-Jen Chen; Kevin Li-Chun Hsieh. Cerebral Small Vessel Disease Biomarkers Detection on MRI-Sensor-Based Image and Deep Learning. Sensors 2019, 19, 2573.
We analyzed student learning performance with the proposed Jacobian Matrix-based Learning Machine (JMLM). Establishing a machine learning model that predicts student learning performance is significant, and such a tool can help teachers analyze student data that would otherwise be difficult to analyze. The correct rates of our model are 87% and 86%, better than those of traditional machine learning models.
Yi-Zeng Hsieh; Mu-Chun Su; Yu-Lin Jeng. The Jacobian Matrix-Based Learning Machine in Student. Natural Computing Series 2017, 469-474.
Fall events are an important health issue in elderly living environments such as homes. Hence, a reliable real-time video surveillance system that watches for such events could improve residents' everyday lives. We propose an optical-flow feedback convolutional neural network operating on the video stream in a home environment. Our model applies rule-based filters before the input convolutional layer and uses the recorded optical flow to supervise the variation of the optical flow. Detecting human posture is a key factor, since fall events involve a falling posture; by sequencing the frames of an action, a fall can be recognized. Our system clearly distinguishes a normal lying posture from lying after a fall, and it efficiently detects both the motion and the posture of an action. We compared performance on standard benchmark datasets and deployed our model in a simulated real home situation; the correct ratios achieved were 82.7% and 98%, respectively.
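The "falling burst followed by lying still" pattern can be illustrated with a frame-difference heuristic. Here the mean inter-frame difference stands in for the optical-flow magnitude, and the thresholds are illustrative assumptions, not the paper's learned model:

```python
import numpy as np

def motion_magnitude(prev_frame, frame):
    # Mean absolute inter-frame difference: a cheap stand-in for the
    # optical-flow magnitude used by an optical-flow-based detector.
    return float(np.abs(np.asarray(frame, float)
                        - np.asarray(prev_frame, float)).mean())

def detect_fall(frames, burst_thresh=30.0, still_thresh=2.0, still_len=3):
    """Heuristic fall cue: a large motion burst followed by a sustained
    still period (lying after falling). Thresholds are illustrative."""
    mags = [motion_magnitude(a, b) for a, b in zip(frames, frames[1:])]
    for i, m in enumerate(mags):
        if m > burst_thresh:
            tail = mags[i + 1: i + 1 + still_len]
            if len(tail) == still_len and max(tail) < still_thresh:
                return True
    return False
```

Requiring the still period after the burst is what separates a fall from ordinary fast motion such as sitting down and standing up again.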
Yi-Zeng Hsieh; Yu-Lin Jeng. Development of Home Intelligent Fall Detection IoT System Based on Feedback Optical Flow Convolutional Neural Network. IEEE Access 2017, 6, 6048-6057.