Shih-Syun Lin
Department of Computer Science and Engineering, National Taiwan Ocean University, Keelung City 202301, Taiwan

Feed

Journal article
Published: 07 August 2021 in Mathematics

In this study, we use OpenPose to capture facial feature nodes, build and label a data set, and feed it into the neural network model we designed. The goal is to predict a person’s line of sight from the face and its feature nodes, and then to apply object detection to determine the object the person is observing. After implementing this method, we found that it can correctly estimate the human body’s pose. Furthermore, when multiple cameras provide more information, the results are better than with a single camera, and the observed objects are evaluated more accurately. We also found that the head region in the image is sufficient to judge the viewing direction. In addition, when testing with tilted faces, the facial nodes can still be captured up to a tilt angle of approximately 60 degrees; beyond 60 degrees, the facial nodes can no longer be captured.
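To illustrate the kind of geometry involved, a rough head-yaw estimate can be read off the horizontal asymmetry of face keypoints. The keypoint choice (nose and two ears) and the asymmetry-to-angle mapping below are assumptions of this sketch, not the paper's actual model; only the ±60-degree capture limit comes from the abstract:

```python
import math

def estimate_yaw(nose_x, left_ear_x, right_ear_x):
    """Rough head-yaw estimate (degrees) from the horizontal
    asymmetry of OpenPose-style face keypoints. The keypoint
    names and the asin mapping are illustrative assumptions."""
    # Distance from the nose to each ear along the image x-axis.
    d_left = abs(nose_x - left_ear_x)
    d_right = abs(right_ear_x - nose_x)
    if d_left + d_right == 0:
        return 0.0
    # Asymmetry in [-1, 1]: 0 means a frontal face.
    asym = (d_right - d_left) / (d_right + d_left)
    # Clamp to +/-60 degrees, the capture limit reported above.
    return max(-60.0, min(60.0, math.degrees(math.asin(asym))))
```

A frontal face (equal nose-to-ear spans) yields 0 degrees; a face turned so one span shrinks yields a proportionally larger yaw.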

ACS Style

Yu-Shiuan Tsai; Nai-Chi Chen; Yi-Zeng Hsieh; Shih-Syun Lin. The Development of Long-Distance Viewing Direction Analysis and Recognition of Observed Objects Using Head Image and Deep Learning. Mathematics 2021, 9, 1880.

AMA Style

Yu-Shiuan Tsai, Nai-Chi Chen, Yi-Zeng Hsieh, Shih-Syun Lin. The Development of Long-Distance Viewing Direction Analysis and Recognition of Observed Objects Using Head Image and Deep Learning. Mathematics. 2021; 9 (16):1880.

Chicago/Turabian Style

Yu-Shiuan Tsai; Nai-Chi Chen; Yi-Zeng Hsieh; Shih-Syun Lin. 2021. "The Development of Long-Distance Viewing Direction Analysis and Recognition of Observed Objects Using Head Image and Deep Learning." Mathematics 9, no. 16: 1880.

Journal article
Published: 22 July 2021 in Energies

This study uses deep learning to model the discharge characteristic curve of a lithium-ion battery. A battery measurement instrument was used to charge and discharge the battery and establish the discharge characteristic curve. A parametric method was used to fit the discharge characteristic curve and was improved with an MLP (multilayer perceptron), an RNN (recurrent neural network), LSTM (long short-term memory), and a GRU (gated recurrent unit), each of which outputs the fitted curve. A genetic algorithm (GA) was then used to obtain the parameters of the discharge characteristic curve equation.

ACS Style

Shih-Wei Tan; Sheng-Wei Huang; Yi-Zeng Hsieh; Shih-Syun Lin. The Estimation Life Cycle of Lithium-Ion Battery Based on Deep Learning Network and Genetic Algorithm. Energies 2021, 14, 4423.

AMA Style

Shih-Wei Tan, Sheng-Wei Huang, Yi-Zeng Hsieh, Shih-Syun Lin. The Estimation Life Cycle of Lithium-Ion Battery Based on Deep Learning Network and Genetic Algorithm. Energies. 2021; 14 (15):4423.

Chicago/Turabian Style

Shih-Wei Tan; Sheng-Wei Huang; Yi-Zeng Hsieh; Shih-Syun Lin. 2021. "The Estimation Life Cycle of Lithium-Ion Battery Based on Deep Learning Network and Genetic Algorithm." Energies 14, no. 15: 4423.

Journal article
Published: 08 July 2021 in Energies

This study, carried out under the auspices of China Steel Corporation, Taiwan, supports the national energy policy of a 2025 non-nuclear homeland, under which an estimated 600 offshore wind turbines will be installed by 2025. To carry out the wind energy project effectively, a preliminary study must be conducted. In this article, we investigate the influence of the wake effect on the efficiency of the turbine layout in a windfarm. A distributed genetic algorithm is deployed to optimize the turbine layout and alleviate the detrimental wake effect. At the current stage of this research, historical data from weather stations near the site of Taiwan's 29th windfarm, collected by Academia Sinica, were used. Our wake-effect-resilient optimized windfarm showed superior performance over a conventional windfarm layout. Additionally, an operation-cost minimization process is demonstrated, using an ant colony optimization algorithm to minimize the total length of the power-carrying cables interconnecting the turbines inside the optimized windfarm.
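For intuition on why layout matters, the wake effect is often approximated with the classic Jensen top-hat model, in which a downstream turbine sees a velocity deficit that decays with spacing and power scales with wind speed cubed. The abstract does not name the wake model used, so the model choice and all parameter values here are illustrative assumptions:

```python
def jensen_deficit(x, rotor_radius=40.0, a=1/3, k=0.075):
    """Velocity deficit a turbine sees at distance x (m) directly
    behind another, per the Jensen top-hat wake model. The axial
    induction a and decay constant k are illustrative defaults."""
    if x <= 0:
        return 0.0
    return 2 * a / (1 + k * x / rotor_radius) ** 2

def relative_power(spacings):
    """Relative power of a row of turbines, each waked only by its
    immediate upstream neighbour; power ~ (wind speed)**3."""
    total = 1.0  # the first turbine sees free-stream wind
    for x in spacings:
        total += (1 - jensen_deficit(x)) ** 3
    return total
```

A layout optimizer's fitness function is essentially this quantity: spreading turbines farther apart downstream raises `relative_power`, at the cost of longer interconnecting cables, which is what the ant colony step then minimizes.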

ACS Style

Yi-Zeng Hsieh; Shih-Syun Lin; En-Yu Chang; Kwong-Kau Tiong; Shih-Wei Tan; Chiou-Yi Hor; Shyi-Chy Cheng; Yu-Shiuan Tsai; Chao-Rong Chen. Wind Technologies for Wake Effect Performance in Windfarm Layout Based on Population-Based Optimization Algorithm. Energies 2021, 14, 4125.

AMA Style

Yi-Zeng Hsieh, Shih-Syun Lin, En-Yu Chang, Kwong-Kau Tiong, Shih-Wei Tan, Chiou-Yi Hor, Shyi-Chy Cheng, Yu-Shiuan Tsai, Chao-Rong Chen. Wind Technologies for Wake Effect Performance in Windfarm Layout Based on Population-Based Optimization Algorithm. Energies. 2021; 14 (14):4125.

Chicago/Turabian Style

Yi-Zeng Hsieh; Shih-Syun Lin; En-Yu Chang; Kwong-Kau Tiong; Shih-Wei Tan; Chiou-Yi Hor; Shyi-Chy Cheng; Yu-Shiuan Tsai; Chao-Rong Chen. 2021. "Wind Technologies for Wake Effect Performance in Windfarm Layout Based on Population-Based Optimization Algorithm." Energies 14, no. 14: 4125.

Article
Published: 08 September 2020 in Multimedia Tools and Applications

Filming stereoscopic videos has become easier with technological progress, and such videos now proliferate on the Internet, making video stabilization an important research topic. This study presents a method for stabilizing stereoscopic videos while preserving the disparities between objects in the frames. First, feature points are tracked and separated into groups. We posit that shaky motion is caused not only by translations but also by rotations, so directly smoothing the camera path will not produce a similar trajectory; we therefore remove the shakiness of turning before smoothing the path. To do so, we first estimate the rotation angles between adjacent frames. From the angle changes across all frames, we can identify the dominant rotation in a video; the inconsistent angular velocity can then be alleviated, and the shakiness of turning removed, by rotating each frame appropriately. Next, Bézier curves are used to smooth the trajectories. We split a trajectory into a set of subtrajectories and smooth each independently. Unlike previous research, we split the trajectory according to the feature tracking rate so that the subtrajectories stay similar to the original video path. After smoothing, we merge the subtrajectories into a single smoothed trajectory, replacing the joint of two subtrajectories by their interpolation. Finally, we optimize smoothness and context preservation to stabilize videos without requiring extensive clipping.
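The Bézier smoothing step can be sketched as follows: evaluate the curve defined by a sub-trajectory's points via de Casteljau's algorithm and resample it. Using the raw positions directly as control points is a simplifying assumption of this sketch; the paper's splitting and merging pipeline is not reproduced here:

```python
def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at t in [0, 1] with de Casteljau's
    algorithm; ctrl is a list of (x, y) control points."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

def smooth_subtrajectory(traj, samples=None):
    """Replace a shaky sub-trajectory (>= 2 points) with points
    sampled from the Bezier curve whose control points are the
    original positions; endpoints are preserved exactly."""
    n = samples or len(traj)
    return [bezier_point(traj, i / (n - 1)) for i in range(n)]
```

Because a Bézier curve interpolates its first and last control points, adjacent smoothed subtrajectories still meet at their shared joint, which the method then replaces by an interpolated value.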

ACS Style

Shih-Syun Lin; Thi Ngoc Hanh Le; Pang-Yu Wu; Tong-Yee Lee. Content-and-disparity-aware stereoscopic video stabilization. Multimedia Tools and Applications 2020, 80, 1545-1564.

AMA Style

Shih-Syun Lin, Thi Ngoc Hanh Le, Pang-Yu Wu, Tong-Yee Lee. Content-and-disparity-aware stereoscopic video stabilization. Multimedia Tools and Applications. 2020; 80 (1):1545-1564.

Chicago/Turabian Style

Shih-Syun Lin; Thi Ngoc Hanh Le; Pang-Yu Wu; Tong-Yee Lee. 2020. "Content-and-disparity-aware stereoscopic video stabilization." Multimedia Tools and Applications 80, no. 1: 1545-1564.

Article
Published: 11 August 2020 in Multimedia Tools and Applications

This study proposes a wearable guide device for blind or visually impaired persons based on video streaming and deep learning. The work mainly aims to supplement the white canes used by visually impaired persons and to offer users increased freedom of movement and independence, while the considerable amount of environmental information provided by the device ensures enhanced safety. The device uses an RGB camera instead of the RGBD camera commonly used in computer vision: deep learning converts the RGB images into depth images, from which the ground plane is calculated to detect indoor objects and safe walking routes. A convolutional neural network (CNN) is adopted, whose layered structure loosely mimics the neural transmission mechanism of human learning; the system can therefore learn a large number of route features and generate a model from the learning results. The proposed system helps blind or visually impaired persons identify flat, safe walking routes.

ACS Style

Yi-Zeng Hsieh; Shih-Syun Lin; Fu-Xiong Xu. Development of a wearable guide device based on convolutional neural network for blind or visually impaired persons. Multimedia Tools and Applications 2020, 79, 1-19.

AMA Style

Yi-Zeng Hsieh, Shih-Syun Lin, Fu-Xiong Xu. Development of a wearable guide device based on convolutional neural network for blind or visually impaired persons. Multimedia Tools and Applications. 2020; 79 (39-40):1-19.

Chicago/Turabian Style

Yi-Zeng Hsieh; Shih-Syun Lin; Fu-Xiong Xu. 2020. "Development of a wearable guide device based on convolutional neural network for blind or visually impaired persons." Multimedia Tools and Applications 79, no. 39-40: 1-19.

Journal article
Published: 10 August 2020 in Mathematics

In recent years, breakthroughs in neural networks and the rise of deep learning have advanced machine vision, which is now commonly used in practical image recognition. Automobiles, drones, portable devices, behavior recognition, indoor positioning, and many other industries rely on this integration and require the support of deep learning and machine vision, with high demands on the accuracy of portrait and object recognition. Human figure recognition has accordingly drawn great attention in various fields. However, a portrait is affected by factors such as height, weight, posture, viewing angle, and occlusion, all of which degrade recognition accuracy. This paper applies deep learning to portraits with different poses and angles, in particular estimating the actual distance of an occluded portrait from a single lens (depth estimation), so that the method can later be used for automatic drone control. Traditional image-based depth calculation methods fall into three types: single-lens estimation, dual-lens estimation, and optical-band estimation. Because the second and third categories require relatively large and expensive equipment to perform distance calculations effectively, numerous single-lens distance methods have recently been proposed. However, whether they use traditional distance-measurement calibration, defocus ranging, or three-dimensional grid-space ranging, all of these face corresponding difficulties: they must deal with outside disturbances and process occluded images. Therefore, building on OpenPose, a recent method proposed by Carnegie Mellon University, this paper proposes a depth algorithm for single-lens occluded portraits that estimates the actual distance to a person under different poses, viewing angles, and occlusions.
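The core idea behind single-lens depth from keypoints is the pinhole relation Z = f·W/w: a body segment of known real-world width W that spans w pixels at focal length f (in pixels) lies at distance Z. The segment choice (shoulders), its assumed width, and the focal length below are illustrative values, not calibrated numbers from the paper:

```python
def pinhole_depth(pixel_width, real_width_m=0.40, focal_px=800.0):
    """Estimate distance (m) to a person from the pixel span between
    two body keypoints (e.g. the shoulders) via the pinhole model
    Z = f * W / w. Defaults are illustrative, not calibrated."""
    if pixel_width <= 0:
        raise ValueError("keypoint span must be positive")
    return focal_px * real_width_m / pixel_width
```

Halving the pixel span doubles the estimated distance, which is why occlusion matters: if a keypoint on one side is hidden, the measured span shrinks and the naive estimate overshoots, motivating an occlusion-aware model.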

ACS Style

Yu-Shiuan Tsai; Li-Heng Hsu; Yi-Zeng Hsieh; Shih-Syun Lin. The Real-Time Depth Estimation for an Occluded Person Based on a Single Image and OpenPose Method. Mathematics 2020, 8, 1333.

AMA Style

Yu-Shiuan Tsai, Li-Heng Hsu, Yi-Zeng Hsieh, Shih-Syun Lin. The Real-Time Depth Estimation for an Occluded Person Based on a Single Image and OpenPose Method. Mathematics. 2020; 8 (8):1333.

Chicago/Turabian Style

Yu-Shiuan Tsai; Li-Heng Hsu; Yi-Zeng Hsieh; Shih-Syun Lin. 2020. "The Real-Time Depth Estimation for an Occluded Person Based on a Single Image and OpenPose Method." Mathematics 8, no. 8: 1333.

Journal article
Published: 12 July 2020 in Sustainability

With the vigorous development of anticipatory computing worldwide in recent years, artificial intelligence (AI) has seen numerous applications in people's daily lives. Learning analytics on big data can help students, teachers, and school administrators gain new knowledge and estimate learning status; in turn, the enhanced education contributes to the rapid development of science and technology. Education is sustainable lifelong learning, as well as the most important promoter of science and technology worldwide. In recent years, a large number of AI-based anticipatory computing applications have promoted the training of professional AI talent. This study therefore designs a set of interactive robot-assisted teaching activities for the classroom to help students overcome academic difficulties. Teachers, students, and robots in the classroom interact with each other through the ARCS motivation model in a programming course. The proposed method helps students develop motivation, relevance, and confidence in learning, thus enhancing their learning effectiveness. The robot, like a teaching assistant, helps students solve problems in the classroom by answering questions and evaluating students' answers in natural, responsive interactions. These natural interactive responses are achieved through a database of emotional big data (the Google facial expression comparison dataset): the robot is loaded with an emotion recognition system that assesses students' moods through their expressions and voices and then offers corresponding emotional responses. The robot can thus communicate naturally with students, attracting their attention, triggering their learning motivation, and improving their learning effectiveness.

ACS Style

Yi-Zeng Hsieh; Shih-Syun Lin; Yu-Cin Luo; Yu-Lin Jeng; Shih-Wei Tan; Chao-Rong Chen; Pei-Ying Chiang. ARCS-Assisted Teaching Robots Based on Anticipatory Computing and Emotional Big Data for Improving Sustainable Learning Efficiency and Motivation. Sustainability 2020, 12, 5605.

AMA Style

Yi-Zeng Hsieh, Shih-Syun Lin, Yu-Cin Luo, Yu-Lin Jeng, Shih-Wei Tan, Chao-Rong Chen, Pei-Ying Chiang. ARCS-Assisted Teaching Robots Based on Anticipatory Computing and Emotional Big Data for Improving Sustainable Learning Efficiency and Motivation. Sustainability. 2020; 12 (14):5605.

Chicago/Turabian Style

Yi-Zeng Hsieh; Shih-Syun Lin; Yu-Cin Luo; Yu-Lin Jeng; Shih-Wei Tan; Chao-Rong Chen; Pei-Ying Chiang. 2020. "ARCS-Assisted Teaching Robots Based on Anticipatory Computing and Emotional Big Data for Improving Sustainable Learning Efficiency and Motivation." Sustainability 12, no. 14: 5605.

Article
Published: 10 July 2020 in Multimedia Tools and Applications

This study aims at generating a long-strip stereoscopic panorama with a rectangular boundary from a stereoscopic video. The issues arising from this goal are how to automatically select appropriate frames to reduce geometric distortion in image stitching, how to preserve disparity under image warping, and how to generate a rectangular panoramic stereoscopic image without losing boundary information. To address these issues and generate a visually smooth stereoscopic panorama, a disparity-aware image warping is proposed. Moreover, the warping is performed on the irregular left and right panoramic images simultaneously with a hybrid control mesh, generating a rectangular panorama while preserving the spatial shape and disparity as much as possible. Experimental results on various stereo video contents show that the proposed method effectively preserves both spatial shapes and pixel disparity.

ACS Style

I-Cheng Yeh; Shih-Syun Lin; Shuo-Tse Hung; Tong-Yee Lee. Disparity-preserving image rectangularization for stereoscopic panorama. Multimedia Tools and Applications 2020, 79, 26123-26138.

AMA Style

I-Cheng Yeh, Shih-Syun Lin, Shuo-Tse Hung, Tong-Yee Lee. Disparity-preserving image rectangularization for stereoscopic panorama. Multimedia Tools and Applications. 2020; 79 (35-36):26123-26138.

Chicago/Turabian Style

I-Cheng Yeh; Shih-Syun Lin; Shuo-Tse Hung; Tong-Yee Lee. 2020. "Disparity-preserving image rectangularization for stereoscopic panorama." Multimedia Tools and Applications 79, no. 35-36: 26123-26138.

Journal article
Published: 08 May 2020 in IEEE Sensors Journal

This study presents a stereo vision robotic arm assistance system, in which the robot arm can perform a grasp with five degrees of freedom in a single pass. The control system's algorithm is built on population-based optimization and is specifically aimed at assisting people with disabilities. The proposed stereo vision-based robot arm system enables users to manipulate objects based on the robot's ability to aim at targets using computer vision. The stereo vision system computes the system parameters from the real-world position of the target in the coordinate system. A trained deep fully connected network is then adopted to compensate for the location measurement errors caused by the inaccurately measured parameters. Subsequently, the proposed Q-learning-based swarm optimization algorithm is adopted to solve the kinematics problem and compute the angle of each servo. The performance of the robot arm is evaluated in several real-life experiments that test its ability to grip a target object in different positions.
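The kinematics the optimizer must solve can be illustrated with a minimal planar serial arm: given joint angles, compute the end-effector position; the optimizer searches for angles whose forward kinematics reach the target. This two-link planar stand-in is an assumption of the sketch, not the geometry of the actual arm:

```python
import math

def forward_kinematics(angles, lengths):
    """End-effector (x, y) of a planar serial arm from joint angles
    (radians) and link lengths; angles accumulate along the chain."""
    x = y = 0.0
    total = 0.0
    for theta, length in zip(angles, lengths):
        total += theta
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y
```

An optimizer (swarm, GA, or otherwise) then minimizes the distance between `forward_kinematics(candidate_angles, lengths)` and the target position measured by the stereo vision system.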

ACS Style

Yi-Zeng Hsieh; Shih-Syun Lin. Robotic Arm Assistance System Based on Simple Stereo Matching and Q-Learning Optimization. IEEE Sensors Journal 2020, 20, 10945-10954.

AMA Style

Yi-Zeng Hsieh, Shih-Syun Lin. Robotic Arm Assistance System Based on Simple Stereo Matching and Q-Learning Optimization. IEEE Sensors Journal. 2020; 20 (18):10945-10954.

Chicago/Turabian Style

Yi-Zeng Hsieh; Shih-Syun Lin. 2020. "Robotic Arm Assistance System Based on Simple Stereo Matching and Q-Learning Optimization." IEEE Sensors Journal 20, no. 18: 10945-10954.

Journal article
Published: 19 December 2019 in Sensors

The human eye is a vital sensory organ that provides visual information about the world around us; it can also convey information such as our emotional state to the people with whom we interact. Eye tracking has recently become a hot research topic, and a growing number of eye-tracking devices are being applied in fields such as psychology, medicine, education, and virtual reality. However, most commercially available eye trackers are prohibitively expensive and require the user's head to remain completely stationary in order to accurately estimate gaze direction. To address these drawbacks, this paper proposes an inner corner-pupil center vector (ICPCV) eye-tracking system based on a deep neural network, which requires neither a stationary head nor expensive hardware. The performance of the proposed system is compared with that of other currently available eye-tracking estimation algorithms, and the results show that it outperforms them.
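The ICPCV feature itself is simple to state: the 2D vector from the inner eye corner to the pupil center, which a neural network then maps to gaze direction. Normalizing the vector by the eye's pixel width (to reduce sensitivity to camera distance) is an assumption of this sketch, not a detail given in the abstract:

```python
def icpc_vector(inner_corner, pupil_center, eye_width):
    """Inner corner-pupil center vector, normalized by the eye's
    pixel width; the normalization is an illustrative assumption."""
    (cx, cy), (px, py) = inner_corner, pupil_center
    return ((px - cx) / eye_width, (py - cy) / eye_width)
```

A sequence of such vectors (one per frame, possibly per eye) forms the input feature the deep network regresses to screen coordinates.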

ACS Style

Mu-Chun Su; Tat-Meng U; Yi-Zeng Hsieh; Zhe-Fu Yeh; Shu-Fang Lee; Shih-Syun Lin. An Eye-Tracking System based on Inner Corner-Pupil Center Vector and Deep Neural Network. Sensors 2019, 20, 25.

AMA Style

Mu-Chun Su, Tat-Meng U, Yi-Zeng Hsieh, Zhe-Fu Yeh, Shu-Fang Lee, Shih-Syun Lin. An Eye-Tracking System based on Inner Corner-Pupil Center Vector and Deep Neural Network. Sensors. 2019; 20 (1):25.

Chicago/Turabian Style

Mu-Chun Su; Tat-Meng U; Yi-Zeng Hsieh; Zhe-Fu Yeh; Shu-Fang Lee; Shih-Syun Lin. 2019. "An Eye-Tracking System based on Inner Corner-Pupil Center Vector and Deep Neural Network." Sensors 20, no. 1: 25.

Journal article
Published: 15 May 2019 in Computers & Graphics

Tile maps are a visualization tool for displaying geographic data without accurately representing geographic boundaries. Each region in a tile map is represented as a tile of identical shape and size. The tiles fit into a regular grid at positions approximating their geographic locations, so that large regions do not dominate the map visualization and information in small regions can be enhanced. In this study, the automatic generation of a tile map composed of puzzle tiles is proposed for spatial-temporal data visualization. A puzzle tile extends a standard square tile: a sequence of connected, directional pieces in a puzzle tile represents time-varying quantities in a geographic region. To generate a puzzle tile map, the proposed method includes algorithms for optimizing the district-to-tile mapping according to not only geographic positions but also region orientations, and for placing puzzle pieces in a tile. The proposed puzzle tile map can serve as a choropleth map in which the ordered pieces in a tile are shaded in proportion to the measurements of a statistical time variable, such as a time sequence of fertility rates, air pollution (PM2.5), or residential property transfers, displayed on a 2D map. Experimental demonstrations of various cases show that the proposed methods for district-to-tile mapping optimization and puzzle generation are feasible for automatic puzzle tile map generation. User studies show the capabilities of the puzzle tile map in terms of usability, readability, and comparability for spatial-temporal data visualization.
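The district-to-tile mapping can be illustrated with a greedy baseline: each region claims the nearest free grid cell to its normalized geographic centroid. The paper optimizes the mapping globally (and also considers region orientations); this greedy pass is only a sketch of the assignment problem:

```python
def assign_tiles(regions, grid_w, grid_h):
    """Greedy district-to-tile mapping. regions is a list of
    (name, (x, y)) pairs with centroids normalized to [0, 1];
    each region gets the nearest still-free grid cell."""
    free = [(c, r) for r in range(grid_h) for c in range(grid_w)]
    mapping = {}
    for name, (x, y) in regions:
        # Scale the normalized centroid to grid coordinates.
        target = (x * (grid_w - 1), y * (grid_h - 1))
        cell = min(free, key=lambda c: (c[0] - target[0]) ** 2
                                       + (c[1] - target[1]) ** 2)
        free.remove(cell)
        mapping[name] = cell
    return mapping
```

Greedy assignment depends on insertion order and can strand a region far from its centroid, which is precisely why a global optimization over the whole mapping, as in the paper, is preferable.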

ACS Style

Shih-Syun Lin; Juo-Yu Yang; Huang-Sin Syu; Chao-Hung Lin; Tun-Wen Pai. Automatic generation of puzzle tile maps for spatial-temporal data visualization. Computers & Graphics 2019, 82, 1-12.

AMA Style

Shih-Syun Lin, Juo-Yu Yang, Huang-Sin Syu, Chao-Hung Lin, Tun-Wen Pai. Automatic generation of puzzle tile maps for spatial-temporal data visualization. Computers & Graphics. 2019; 82:1-12.

Chicago/Turabian Style

Shih-Syun Lin; Juo-Yu Yang; Huang-Sin Syu; Chao-Hung Lin; Tun-Wen Pai. 2019. "Automatic generation of puzzle tile maps for spatial-temporal data visualization." Computers & Graphics 82: 1-12.