With intelligent big data, a variety of gesture-based recognition systems have been developed to enable intuitive interaction through machine learning algorithms. Achieving high gesture recognition accuracy is crucial, and current systems learn extensive gesture sets in advance to improve their recognition accuracy. However, accurately recognizing gestures relies on identifying and editing numerous gestures collected from the actual end users of the system, and this final end-user learning component remains troublesome for most existing gesture recognition systems. This paper proposes a method that facilitates end-user gesture learning and recognition by improving the editing process applied to intelligent big data collected from end-user gestures. The proposed method recognizes more complex and precise gestures by merging gestures collected from multiple sensors and processing them as a single gesture. To evaluate the proposed method, it was used in a shadow puppet performance that interacted with on-screen animations. An average gesture recognition rate of 90% was achieved in the experimental evaluation, demonstrating the efficacy and intuitiveness of the proposed method for editing visualized learning gestures.
Jisun Park; Yong Jin; Seoungjae Cho; Yunsick Sung; Kyungeun Cho. Advanced Machine Learning for Gesture Learning and Recognition Based on Intelligent Big Data of Heterogeneous Sensors. Symmetry 2019, 11(7), 929.
This paper proposes a method to reconstruct three-dimensional (3D) objects using real-time fusion and analysis of multiple sensor data. It aims to create a realistic 3D visualization with which a remote pilot can intuitively control an unmanned robot, exploiting the characteristics of massive sensor data. The proposed 3D reconstruction system comprises a 3D and two-dimensional (2D) data segmentation method, a 3D reconstruction method applied to each object, and a projective texture mapping method. Specifically, we propose applying both a 2D region extraction method and a 3D mesh modeling method to each object. The proposed schemes are implemented as a real-time application to verify real-time performance. This paper shows that 3D meshes can be modeled in real time using the proposed method. The proposed method allows the remote control of a robot through real-time 3D rendering of remote scenes, which is essential for various tasks in areas that cannot be easily accessed by humans.
Seoungjae Cho; Kyungeun Cho. Real-time 3D reconstruction method using massive multi-sensor data analysis and fusion. The Journal of Supercomputing 2019, 75(6), 3229-3248.
The data computing process is utilized in various areas such as autonomous driving. Autonomous vehicles must detect and track nearby moving objects while avoiding collisions, and navigate in complex situations such as heavy traffic and dense pedestrian areas. Object tracking is therefore a core technology in the environment perception systems of autonomous vehicles, requiring the monitoring of surrounding objects and the prediction of their moving states in real time. In this paper, a multiple object tracking method based on light detection and ranging (LiDAR) data is proposed using a Kalman filter and a data computing process. We assume that the movements of the tracked objects are captured consecutively as frames, so that model-based detection and tracking of dynamic objects are possible. A Kalman filter predicts the posterior state of a tracked object based on its anterior state, where the state denotes the position, shape, and size of the object. By computing the likelihood probability between predicted tracked objects and the clusters registered from them, the data association process for the tracked objects can be generated. Experimental results showed enhanced object tracking performance in a dynamic environment, with an average matching probability of the tracked objects greater than 92.9%.
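The predict-update cycle described above can be sketched as follows. This is a minimal constant-velocity illustration: the 1D state vector and the matrices F, H, Q, R are assumptions for the sketch, not the authors' exact formulation.

```python
import numpy as np

def kalman_predict(x, P, F, Q):
    """Predict the posterior state of a tracked object from its anterior state.
    x: state vector, P: state covariance, F: transition model, Q: process noise."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Correct the prediction with an associated measurement z."""
    y = z - H @ x_pred                   # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

# Illustrative constant-velocity model in 1D: state = [position, velocity]
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])               # only position is measured
Q = np.eye(2) * 1e-3
R = np.array([[0.01]])

x = np.array([0.0, 1.0])                 # start at 0, moving at 1 m/s
P = np.eye(2)
x_pred, P_pred = kalman_predict(x, P, F, Q)
x, P = kalman_update(x_pred, P_pred, np.array([0.12]), H, R)
```

In the paper's setting the state would also carry shape and size, and the measurement would come from the cluster associated with the track; the structure of the two steps stays the same.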
Weiqiang Zhang; Seoungjae Cho; Jeongsook Chae; Yunsick Sung; Kyungeun Cho. Object tracking method based on data computing. The Journal of Supercomputing 2018, 75(6), 3217-3228.
Nowadays, unmanned ground vehicles (UGVs) are widely used for many applications. UGVs carry sensors including multi-channel laser sensors, two-dimensional (2D) cameras, and Global Positioning System receivers with inertial measurement units (GPS–IMU). The multi-channel laser sensors and 2D cameras collect information about the environment surrounding the vehicle, while the GPS–IMU system determines the vehicle's position, acceleration, and velocity. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separated the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scenes. The ground part was used to create a dynamic triangular mesh based on the height map and vehicle position. Modeling the nonground parts of dynamic environments, which include moving objects, is more challenging than modeling the ground parts. In the first step, we applied our object segmentation algorithm to divide nonground points into separate objects. Next, an object tracking algorithm was implemented to detect dynamic objects. Subsequently, nonground objects other than large dynamic ones, such as cars, were separated into two groups: surface objects and non-surface objects. We employed colored particles to model the non-surface objects. To model the surface and large dynamic objects, we used two dynamic projection panels to generate 3D meshes. In addition, we applied two processes to optimize the modeling result. First, we removed any trace of the moving objects and collected the points on the dynamic objects in previous frames; these points were then merged with the nonground points in the current frame. We also applied sliding window and near-point projection techniques to fill the holes in the meshes. Finally, we applied texture mapping using 2D images captured by three cameras installed at the front of the robot. The results of the experiments show that our nonground modeling method can model photorealistic, real-time 3D scenes around a remote-controlled robot.
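The dynamic triangular mesh built from the height map can be illustrated with a minimal sketch. The regular grid, the `cell_size` parameter, and the two-triangles-per-cell split are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def ground_mesh_from_height_map(heights, cell_size=1.0):
    """Triangulate a regular height-map grid into a ground mesh.

    heights: 2D array of ground elevations sampled on a grid; each
    grid cell is split into two triangles."""
    rows, cols = heights.shape
    # One vertex per grid node: (x, y, z)
    verts = [(i * cell_size, j * cell_size, float(heights[i, j]))
             for i in range(rows) for j in range(cols)]
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j       # top-left vertex index of the cell
            b = a + 1              # top-right
            c = a + cols           # bottom-left
            d = c + 1              # bottom-right
            tris.append((a, c, b))  # first triangle of the cell
            tris.append((b, c, d))  # second triangle of the cell
    return verts, tris

verts, tris = ground_mesh_from_height_map(np.zeros((3, 3)))
```

In the paper the grid would be re-centered around the vehicle position each frame, which is why the mesh is called dynamic.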
Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho. Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System. Symmetry 2018, 10(4), 83.
Clustering plays an important role in processing light detection and ranging (LiDAR) points for the autonomous perception tasks of robots. Clustering usually occurs early in the processing of three-dimensional point clouds obtained from LiDAR for detection and classification; errors caused by clustering therefore directly affect detection and classification accuracy. In this article, a clustering method is presented that combines density-based spatial clustering of applications with noise (DBSCAN) with a two-dimensional range image composed of LiDAR scan lines ordered by generation time. The results show that the proposed method achieves state-of-the-art performance in terms of time efficiency and clustering accuracy. A scan line-based ground extraction method is also presented in this article, which has a strong ability to separate ground points from non-ground points.
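As a rough sketch of clustering over a range image, the following flood-fills 4-connected pixels whose range values are close. It is a simplified stand-in for the DBSCAN-based scheme in the paper; the `max_diff` threshold and the 4-neighborhood are assumptions for illustration.

```python
from collections import deque

def cluster_range_image(ranges, max_diff=0.5):
    """Label a 2D range image by flood-filling over 4-neighbors whose
    range values differ by less than max_diff. Missing returns are None."""
    rows, cols = len(ranges), len(ranges[0])
    labels = [[-1] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] != -1 or ranges[r][c] is None:
                continue
            # Breadth-first flood fill from an unlabeled pixel.
            queue = deque([(r, c)])
            labels[r][c] = next_label
            while queue:
                i, j = queue.popleft()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and labels[ni][nj] == -1
                            and ranges[ni][nj] is not None
                            and abs(ranges[ni][nj] - ranges[i][j]) < max_diff):
                        labels[ni][nj] = next_label
                        queue.append((ni, nj))
            next_label += 1
    return labels, next_label

labels, n = cluster_range_image([[1.0, 1.1, 5.0],
                                 [1.2, 1.1, 5.2]])
```

Working on the 2D image instead of raw 3D points is what makes this family of methods fast: neighbor lookups are constant-time array accesses rather than spatial searches.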
Mingyun Wen; Seoungjae Cho; Jeongsook Chae; Yunsick Sung; Kyungeun Cho. Range image-based density-based spatial clustering of application with noise clustering method of three-dimensional point clouds. International Journal of Advanced Robotic Systems 2018, 15(2), 1.
In order to navigate in an unknown environment, autonomous robots must distinguish traversable ground regions from impassable obstacles; ground segmentation is thus a crucial step. This study proposes a new ground segmentation method combining two different techniques: gradient threshold segmentation and mean height evaluation. Ground regions near the sensor center are segmented using the gradient threshold technique, while sparse regions are segmented using mean height evaluation. The main contribution of this study is a new ground segmentation algorithm that can be applied to various 3D point clouds. The processing time is acceptable and allows real-time processing of sensor data.
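A minimal sketch of the gradient threshold technique on a single radial scan line follows. The `(radial_distance, height)` input layout and the threshold value are assumptions for illustration, not the study's exact parameters.

```python
def segment_ground(scan, grad_thresh=0.2):
    """Label points on one radial scan line as ground/non-ground.

    A point is ground if the slope (height change over radial distance)
    from the last accepted ground point stays below grad_thresh.
    scan: list of (radial_distance, height) pairs ordered outward."""
    is_ground = [True]           # assume the innermost point is ground
    last_r, last_h = scan[0]
    for r, h in scan[1:]:
        slope = abs(h - last_h) / max(r - last_r, 1e-6)
        if slope < grad_thresh:
            is_ground.append(True)
            last_r, last_h = r, h    # advance the ground reference
        else:
            is_ground.append(False)  # obstacle: keep the old reference
    return is_ground

flags = segment_ground([(1.0, 0.0), (2.0, 0.05), (2.5, 0.8), (3.0, 0.1)])
```

Keeping the reference at the last ground point, rather than the immediately preceding point, is what lets the scan recover to ground after passing over an obstacle.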
Hoang Vu; Hieu Trong Nguyen; Phuong Chu; Seoungjae Cho; Kyungeun Cho. A Ground Segmentation Method Based on Gradient Fields for 3D Point Clouds. Lecture Notes in Electrical Engineering 2017, 388-393.
For an autonomous mobile robot operating in an unknown environment, distinguishing obstacles from the traversable ground region is an essential step in determining whether the robot can traverse an area. Ground segmentation thus plays a critical role in autonomous mobile robot navigation in challenging environments, especially in real time. In this article, a ground segmentation method is proposed that combines three techniques: gradient threshold, adaptive break point detection, and mean height evaluation. Based on three-dimensional (3D) point clouds obtained from a Velodyne HDL-32E sensor, and by exploiting the structure of a two-dimensional reference image, the 3D data are represented as a graph data structure. This process serves as both a preprocessing step and a visualization of very large, mobile-generated data sets for segmentation and for building maps of the area. Various types of 3D data, such as ground regions near the sensor center, uneven regions, and sparse regions, need to be represented and segmented. For the ground regions, we apply the gradient threshold technique for segmentation; we address the uneven regions using adaptive break points; and for the sparse regions, we segment the ground using mean height evaluation.
Hoang Vu; Hieu Trong Nguyen; Phuong Minh Chu; Weiqiang Zhang; Seoungjae Cho; Yong Woon Park; Kyungeun Cho. Adaptive ground segmentation method for real-time mobile robot control. International Journal of Advanced Robotic Systems 2017, 14(6), 1.
Highlights
• We propose a framework to simulate a network robot in a virtual smart home.
• A network robot agent identifies the daily routines of a resident and executes services.
• The framework shows that a network robot can help a human agent and reduce their tasks.
• The simulator verified that the framework reduces the costs of developing network robots.

Abstract
Smart homes provide residents with services that offer convenience using sensor networks and a variety of ubiquitous instruments. Network robots based on such networks can perform direct services for these residents. Information from the various ubiquitous instruments and sensors located in a smart home is shared with network robots, which can effectively help residents in their daily routines by accessing this information. However, developing network robots in an actual environment requires significant time, space, labor, and money, and a network robot that has not been fully developed may cause physical damage in unexpected situations. In this paper, we propose a framework that allows the design and simulation of network robot avatars and a variety of smart homes in a virtual environment to address these problems. This framework activates a network robot avatar based on information obtained from various sensors mounted in the smart home; these sensors identify the daily routine of the human avatar residing in the smart home. Algorithms including reinforcement learning and action planning are integrated to enable the network robot avatar to serve the human avatar. Further, this paper develops a network robot simulator to verify whether the network robot functions effectively using the framework.
Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho. Simulation framework of ubiquitous network environments for designing diverse network robots. Future Generation Computer Systems 2017, 76, 468-473.
With the aim of providing a fast and effective segmentation method based on the flood-fill algorithm, in this study we propose a new approach to segment a 3D point cloud acquired by a 3D multi-channel laser range sensor into different objects. First, we divide the point cloud into two groups: ground and nonground points. Next, we segment clusters in each scanline dataset from the group of nonground points. Each scanline cluster is then joined with other scanline clusters using the flood-fill algorithm, so that each group of scanline clusters represents an object in the 3D environment. Finally, we obtain each object separately. Experiments show that our method segments objects accurately and in real time.
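The two-stage idea above, segmenting each scan line and then joining scan-line clusters into objects, can be sketched as follows. The union-find merge is one possible realization of the flood-fill join, and the `gap` threshold is an assumed parameter.

```python
def segment_scanline(points, gap=0.5):
    """Split one scan line into clusters wherever the Euclidean gap
    between consecutive 3D points exceeds `gap`."""
    clusters, current = [], [points[0]]
    for prev, pt in zip(points, points[1:]):
        dist = sum((a - b) ** 2 for a, b in zip(prev, pt)) ** 0.5
        if dist > gap:
            clusters.append(current)
            current = []
        current.append(pt)
    clusters.append(current)
    return clusters

def merge_clusters(lines, gap=0.5):
    """Join scan-line clusters across lines: two clusters merge when
    any pair of their points is within `gap` (a union-find version of
    the flood-fill join)."""
    all_clusters = [c for line in lines for c in line]
    parent = list(range(len(all_clusters)))  # tiny union-find

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    for i in range(len(all_clusters)):
        for j in range(i + 1, len(all_clusters)):
            close = any(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 <= gap
                        for p in all_clusters[i] for q in all_clusters[j])
            if close:
                parent[find(j)] = find(i)
    return [find(i) for i in range(len(all_clusters))]

line1 = segment_scanline([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 0.0, 0.0)])
line2 = segment_scanline([(0.0, 0.1, 0.0), (5.0, 0.1, 0.0)])
roots = merge_clusters([line1, line2])  # equal roots = same object
```

The all-pairs comparison here is quadratic; the paper's real-time performance relies on only comparing clusters in adjacent scan lines.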
Phuong Minh Chu; Seoungjae Cho; Yong Woon Park; Kyungeun Cho. Fast point cloud segmentation based on flood-fill algorithm. 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) 2017, 656-659.
In this paper, a method for modeling three-dimensional scenes from a LiDAR point cloud and a billboard calibration approach for remote mobile robot control applications are presented as a combined two-step approach. First, by projecting a local three-dimensional point cloud onto a two-dimensional coordinate system, we obtain a list of colored points. Based on this list, we apply a proposed ground segmentation algorithm to separate ground and non-ground areas. For the ground part, a dynamic triangular mesh is created by means of a height map and the vehicle position. The non-ground part is divided into small groups, and a local voxel map is applied to model each group; as a result, all inner surfaces are eliminated. Second, for billboard calibration, we implement three stages in each frame: at the billboard location, an average ground point is estimated; the distortion angle is calculated; and the billboard is updated in the final stage to correspond to the terrain gradient.
Phuong Minh Chu; Seoungjae Cho; Hieu Trong Nguyen; Sungdae Sim; Kiho Kwak; Kyungeun Cho. Real-time 3D scene modeling using dynamic billboard for remote robot control systems. 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI) 2017, 354-358.
In this paper, a convergent multimedia application for filtering traces of dynamic objects from accumulated point cloud data is presented. First, a fast ground segmentation algorithm is designed by dividing each frame's data into small groups. Each group is a vertical line limited by two points: the first is orthogonally projected from the sensor's position to the ground, and the second is a point in the outermost data circle. Two voxel maps are employed to save information on the previous and current frames. The position and occupancy status of each voxel are considered for detecting the voxels containing past data of moving objects. To increase detection accuracy, the trace data are sought only in the nonground group. Typically, verifying the intersection between a line segment and a voxel is repeated numerous times, which is time-consuming. To increase the speed, a method is proposed that relies on the three-dimensional Bresenham's line algorithm. Experiments were conducted, and the results showed the effectiveness of the proposed filtering system. For both static and moving sensors, the system immediately eliminated trace data while maintaining other static data, operating three times faster than the sensor rate.
Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Kyungeun Cho. Convergent application for trace elimination of dynamic objects from accumulated lidar point clouds. Multimedia Tools and Applications 2017, 77(22), 29991-30009.
This paper proposes a cloud-based framework that optimizes the three-dimensional (3D) reconstruction of multiple types of sensor data captured from multiple remote robots. A working environment with multiple remote robots requires massive amounts of data processing in real time, which cannot be achieved using a single computer. In the proposed framework, reconstruction is carried out on cloud-based servers via distributed data processing. Consequently, users do not need to consider computing resources even when utilizing multiple remote robots. The sensors' bulk data are transferred to a master server that divides the data and allocates the processing to a set of slave servers, where the segmentation and reconstruction tasks are implemented. The reconstructed 3D space is created by fusing all the results in a visualization server, and the results are saved in a database that users can access and visualize in real time. The results of the experiments conducted verify that the proposed system is capable of providing real-time 3D scenes of the surroundings of remote robots.
Phuong Minh Chu; Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho. 3D Reconstruction Framework for Multiple Remote Robots on Cloud System. Symmetry 2017, 9(4), 55.
Obstacle avoidance and available-road identification technologies have been investigated for the autonomous driving of unmanned vehicles. In order to apply research results to autonomous driving in real environments, it is necessary to consider moving objects. This article proposes a preprocessing method to identify the dynamic zones where moving objects exist around an unmanned vehicle. The method accumulates three-dimensional points from a light detection and ranging (LiDAR) sensor mounted on an unmanned vehicle in a voxel space. Features are then identified from the cumulative data at high speed, and zones with significant feature changes are estimated to be zones where dynamic objects exist. The proposed approach can identify dynamic zones even from a moving vehicle and processes data quickly using several features based on the geometry, height map, and distribution of the three-dimensional space data. The performance of the proposed approach was evaluated using ground-truth data on both simulated and real-environment data sets.
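A minimal sketch of the cumulative-voxel idea follows, using per-voxel point counts as the feature and flagging voxels whose counts change sharply between frames. The voxel size, the count feature, and the change threshold are illustrative assumptions; the paper uses several richer geometry and height-map features.

```python
from collections import Counter

def voxelize(points, size=0.5):
    """Quantize 3D points into integer voxel indices and count the
    number of points falling in each voxel."""
    return Counter((int(x // size), int(y // size), int(z // size))
                   for x, y, z in points)

def dynamic_voxels(prev_frame, curr_frame, min_change=3):
    """Flag voxels whose occupancy changed sharply between consecutive
    frames; clusters of such voxels approximate the dynamic zones."""
    keys = set(prev_frame) | set(curr_frame)
    return {v for v in keys
            if abs(curr_frame[v] - prev_frame[v]) >= min_change}

prev = voxelize([(0.1, 0.1, 0.0)] * 5 + [(2.0, 2.0, 0.0)] * 5)
curr = voxelize([(2.0, 2.0, 0.0)] * 5)   # the object near the origin moved away
moved = dynamic_voxels(prev, curr)
```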
Seongjo Lee; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Yong Woon Park; Kyungeun Cho. A dynamic zone estimation method using cumulative voxels for autonomous driving. International Journal of Advanced Robotic Systems 2017, 14(1), 1.
The objective of this study is to solve the problems of user data not being precisely received from sensors because of sensing-region limitations in invoked reality (IR) space, distortion of colors or patterns by lighting, and blocking or overlapping of a user by other users. The sensing range is expanded using multiple sensors in the IR space, and user feature data are accurately identified by user sensing. Specifically, multiple sensors are employed when not all user data are sensed because they overlap with the data of other users. In the proposed approach, all clients share the user feature data from the multiple sensors; accordingly, each client recognizes that the user is the same individual on the basis of the shared data. Furthermore, identification accuracy is improved by identifying user features based on colors and patterns that are less affected by lighting, enabling accurate identification of user feature data even under lighting changes. The proposed system was implemented based on system performance analysis standards, and its practicality and performance in identifying the same person were verified through an experiment.
Yunji Jung; Yulong Xi; Seoungjae Cho; Wei Song; Simon Fong; Kyungeun Cho. Design and implementation of a same-user identification system in invoked reality space. Multimedia Tools and Applications 2016, 76(9), 11429-11447.
In order to evaluate the quality of Internet of Things (IoT) environments in smart houses, large datasets containing interactions between people and ubiquitous environments are essential for hardware and software testing. Collecting such datasets requires a substantial amount of time and volunteer resources; consequently, the ability to simulate these ubiquitous environments has recently increased in importance. To create an easy-to-use simulator for designing ubiquitous environments, we propose a simulator and autonomous agent generator that simulates human activity in smart houses. The simulator provides a three-dimensional (3D) graphical user interface (GUI) that enables spatial configuration, along with virtual sensors that simulate actual sensors. In addition, the simulator provides an artificial intelligence agent that automatically interacts with virtual smart houses using a motivation-driven behavior planning method. The virtual sensors are designed to detect the states of the smart house and its living agents. The sensed datasets simulate long-term interaction results for ubiquitous computing researchers, reducing the testing costs associated with smart house architecture evaluation.
WonSik Lee; Seoungjae Cho; Phuong Chu; Hoang Vu; Sumi Helal; Wei Song; Young-Sik Jeong; Kyungeun Cho. Automatic agent generation for IoT-based smart house simulator. Neurocomputing 2016, 209, 14-24.
This paper presents a method for removing past data of dynamic objects when a Velodyne LiDAR sensor is employed to accumulate points. In the first step, a fixed voxel map is created with the sensor position as its center. In the next step, we employ Bresenham's line algorithm to create three-dimensional line segments from the sensor position to all points in the current frame; each element in a line segment is a voxel, so each line segment is a list of voxels. Finally, past data of moving objects are removed by deleting all points obtained in previous frames from the voxels of each line segment.
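The voxel-line step can be sketched as below. Note that this uses a simple rounding-based line walk as a stand-in for the 3D Bresenham algorithm named above, and the dict-based voxel-map layout is an assumption for illustration.

```python
def line_voxels(p0, p1):
    """Walk the integer voxels from p0 to p1 by stepping along the
    dominant axis and rounding the other two coordinates (a simple
    stand-in for 3D Bresenham; visits one voxel per dominant-axis step)."""
    x0, y0, z0 = p0
    x1, y1, z1 = p1
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    n = max(abs(dx), abs(dy), abs(dz))
    if n == 0:
        return [p0]
    return [(x0 + round(dx * i / n),
             y0 + round(dy * i / n),
             z0 + round(dz * i / n)) for i in range(n + 1)]

def clear_trace(voxel_map, sensor, hits):
    """Drop stale points from every voxel the ray from the sensor to a
    current hit passes through (the hit voxel itself is kept)."""
    for hit in hits:
        for v in line_voxels(sensor, hit)[:-1]:
            voxel_map.pop(v, None)
    return voxel_map

vm = {(1, 0, 0): ["stale point"], (3, 1, 0): ["current hit"]}
clear_trace(vm, (0, 0, 0), [(3, 1, 0)])
```

The rationale is visibility: if a current ray passes through a voxel, that voxel must now be empty, so any points stored there belong to an object that has since moved.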
Phuong Minh Chu; Seoungjae Cho; Sungdae Sim; Kiho Kwak; Yong Woon Park; Kyungeun Cho. Removing past data of dynamic objects using static Velodyne LiDAR sensor. 2016 16th International Conference on Control, Automation and Systems (ICCAS) 2016, 1637-1640.
Recently, the recognition of postures and gestures has been widely used in fields such as medical treatment and human–computer interaction. Previous research has mainly used human skeletons and an RGB-D camera, with recognition methods utilizing skeleton models with different numbers of joints. Processing the resulting large amounts of feature data needed to recognize a gesture delays recognition. To overcome this issue, we designed and developed a system for learning and recognizing postures and gestures. This paper proposes a gesture recognition method with enhanced generality and processing speed. The proposed method consists of a feature collection part, a feature optimization part, and a posture and gesture recognition part. We verified the proposed solution through the learning and subsequent recognition of 29 postures and 8 gestures.
Yulong Xi; Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho. Gesture Recognition Method Using Sensing Blocks. Wireless Personal Communications 2016, 91(4), 1779-1797.
To develop cooperative content based on the hand gestures of multiple users, separate frameworks must typically be used for communication among computers and for hand gesture recognition. In this paper, we propose a framework that enables users who are far apart to interact using hand gestures in the same virtual environment, making remote cooperative educational content possible. We present the design of a server and client, as well as techniques for managing multiple users, handling collisions among them, and managing gesture data for running the proposed framework on clouds. To verify that the framework enables multiple users to interact, we developed a virtual chemistry experiment based on the proposed design and simulated it. The framework can be used for a variety of educational content involving interactions among multiple users.
Yeji Kim; Seoungjae Cho; Simon Fong; Yong Woon Park; Kyungeun Cho. Design of hand gesture interaction framework on clouds for multiple users. The Journal of Supercomputing 2016, 73(7), 2851–2866.
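The server role described above (tracking multiple users and relaying each user's gesture data to the others) can be sketched in a few lines. The class, method names, and payload format are illustrative assumptions, not the framework's actual API:

```python
class GestureInteractionServer:
    """Minimal sketch: per-user gesture state plus fan-out to other clients."""

    def __init__(self):
        self.clients = {}  # user_id -> latest gesture payload (or None)

    def connect(self, user_id):
        self.clients[user_id] = None

    def disconnect(self, user_id):
        self.clients.pop(user_id, None)

    def submit_gesture(self, user_id, gesture):
        """Store the user's gesture and return the update that every
        OTHER connected client should receive (the relay step)."""
        self.clients[user_id] = gesture
        return {uid: gesture for uid in self.clients if uid != user_id}

server = GestureInteractionServer()
server.connect("alice")
server.connect("bob")
updates = server.submit_gesture("alice", {"hand": "grab"})
```

Keeping relay logic on the server, rather than peer-to-peer, matches the cloud deployment the paper targets: clients only ever talk to one endpoint.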
Dana Wang; Simon Fong; Seoungjae Cho; Yongwoon Park. Diabetes Therapy Prognosis through Data Stream Mining Methods and Technologies. Biomedical Engineering 2016, 1.
Automated understanding and recognition of human activities and behaviors in a smart space (e.g., a smart house) is of paramount importance to many critical human-centered applications. Recognized activities are the input to the pervasive computer (the smart space), which intelligently interacts with the users to pursue the application's goal, be it assistance, safety, child development, entertainment, or another goal. Research in this area is fascinating but severely lacks adequate validation, which often relies on datasets that contain sensory data representing the activities. Providing adequate datasets that can be used in a large variety of spaces, for different user groups, and aimed at different goals is very challenging. This is due to the prohibitive cost and the human capital needed to instrument physical spaces and to recruit human subjects to perform the activities and generate data. Simulation of human activities in smart spaces has therefore emerged as an alternative approach to bridge this deficit. Traditional event-driven approaches have been proposed. However, the complexity of human activity simulation proved challenging for these initial simulation efforts. In this paper, we present Persim 3D, an alternative context-driven approach to simulating human activities capable of supporting complex activity scenarios. We present the context-activity-action nexus and show how our approach combines modeling and visualization of actions with context and activity simulation. We present the Persim 3D architecture and algorithms, and describe a detailed validation study of our approach to verify the accuracy and realism of the simulation output (datasets and visualizations) and the scalability of the human effort in using Persim 3D to simulate complex scenarios. We show positive and promising results that validate our approach.
Jae Woong Lee; Seoungjae Cho; Sirui Liu; Kyungeun Cho; Sumi Helal. Persim 3D: Context-Driven Simulation and Modeling of Human Activities in Smart Spaces. IEEE Transactions on Automation Science and Engineering 2015, 12(4), 1243–1256.
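The context-activity-action nexus described above can be captured as a small data structure: a context selects an activity, and the activity expands into a sequence of actions, each emitting a simulated sensor-like event. The mappings and event format below are illustrative assumptions, not Persim 3D's actual schema:

```python
# Hypothetical nexus tables: context -> activity, activity -> action sequence.
CONTEXT_ACTIVITY = {
    "morning": "make_coffee",
    "evening": "watch_tv",
}

ACTIVITY_ACTIONS = {
    "make_coffee": ["walk_to_kitchen", "use_coffee_maker", "pour_cup"],
    "watch_tv":    ["walk_to_livingroom", "sit_on_couch", "use_remote"],
}

def simulate(context):
    """Expand a context into an ordered stream of simulated sensor events,
    the kind of synthetic dataset the simulation approach aims to produce."""
    activity = CONTEXT_ACTIVITY[context]
    return [{"t": step, "activity": activity, "action": action}
            for step, action in enumerate(ACTIVITY_ACTIONS[activity])]

events = simulate("morning")
```

Because the whole scenario is driven by declarative tables rather than hand-coded event sequences, extending the simulation to a new context only means adding rows, which reflects the scalability-of-human-effort argument in the abstract.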