
Sungdae Sim
2nd R&D Institute, Agency for Defense Development, P.O. Box 35, Yuseong-gu, Daejeon 34186, Korea

Publications

Journal article
Published: 28 March 2018 in Symmetry

Unmanned ground vehicles (UGVs) are now widely used in many applications. UGVs carry sensors including multi-channel laser sensors, two-dimensional (2D) cameras, Global Positioning System (GPS) receivers, and inertial measurement units (IMUs). The multi-channel laser sensors and 2D cameras collect information about the environment surrounding the vehicle, while the GPS–IMU system determines the position, acceleration, and velocity of the vehicle. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separate the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scene. The ground part is used to create a dynamic triangular mesh based on the height map and vehicle position. Modeling the nonground parts in dynamic environments that include moving objects is more challenging than modeling the ground parts. In the first step, we apply our object segmentation algorithm to divide the nonground points into separate objects. Next, an object tracking algorithm detects the dynamic objects. Nonground objects other than large dynamic ones, such as cars, are then separated into two groups: surface objects and non-surface objects. We employ colored particles to model the non-surface objects. To model the surface objects and the large dynamic objects, we use two dynamic projection panels to generate 3D meshes. In addition, we apply two processes to optimize the modeling result. First, we remove any trace of the moving objects and collect the points on the dynamic objects from previous frames. These points are then merged with the nonground points in the current frame. We also apply sliding-window and near-point projection techniques to fill the holes in the meshes.
Finally, we apply texture mapping using 2D images captured by three cameras mounted on the front of the robot. The experimental results show that our nonground modeling method can produce photorealistic, real-time 3D scenes around a remote-controlled robot.
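The ground/nonground separation step described above can be sketched with a simple per-cell height map: a point is treated as ground if it lies close to the lowest point in its grid cell. This is only a minimal illustration of the idea, not the paper's implementation; the cell size and height tolerance below are illustrative values.

```python
import numpy as np

def split_ground(points, cell=0.5, height_tol=0.2):
    """Split an N x 3 point cloud into (ground, nonground) arrays.

    A point counts as 'ground' if its height is within height_tol
    of the lowest point observed in its 2D grid cell.
    """
    keys = [tuple(k) for k in np.floor(points[:, :2] / cell).astype(np.int64)]
    # lowest height observed in each grid cell
    min_z = {}
    for k, z in zip(keys, points[:, 2]):
        min_z[k] = min(min_z.get(k, np.inf), z)
    mask = np.array([z - min_z[k] < height_tol
                     for k, z in zip(keys, points[:, 2])])
    return points[mask], points[~mask]
```

A point 1.5 m above the floor of its cell would be routed to the nonground set, which then feeds the segmentation and tracking stages.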

ACS Style

Chu, P.M.; Cho, S.; Sim, S.; Kwak, K.; Cho, K. Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System. Symmetry 2018, 10, 83.

AMA Style

Chu PM, Cho S, Sim S, Kwak K, Cho K. Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System. Symmetry. 2018;10(4):83.

Chicago/Turabian Style

Chu, Phuong Minh, Seoungjae Cho, Sungdae Sim, Kiho Kwak, and Kyungeun Cho. 2018. "Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System." Symmetry 10, no. 4: 83.

Research article
Published: 01 January 2017 in International Journal of Advanced Robotic Systems

Obstacle avoidance and available-road identification technologies have been investigated for the autonomous driving of unmanned vehicles. To apply these research results to autonomous driving in real environments, moving objects must be taken into account. This article proposes a preprocessing method that identifies the dynamic zones where moving objects exist around an unmanned vehicle. The method accumulates three-dimensional points from a light detection and ranging (LiDAR) sensor mounted on the vehicle in a voxel space. Features are then extracted from the cumulative data at high speed, and zones with significant feature changes are estimated to be zones containing dynamic objects. The proposed approach can identify dynamic zones even while the vehicle is moving, and it processes data quickly using several features based on the geometry, height map, and distribution of the three-dimensional space data. Experiments evaluating the performance of the proposed approach were conducted using ground-truth data on both simulated and real-environment datasets.
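The accumulate-then-compare idea can be sketched as follows: occupy a voxel grid frame by frame and flag voxels whose occupancy toggles repeatedly as candidate dynamic zones. This uses occupancy change as a stand-in for the paper's richer geometry, height-map, and distribution features; the voxel size and change threshold are illustrative.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel=0.4):
    """Map an N x 3 point cloud to its set of occupied voxel keys."""
    return set(map(tuple, np.floor(points / voxel).astype(np.int64)))

def dynamic_zones(frames, voxel=0.4, min_changes=2):
    """Accumulate voxel occupancy over a sequence of frames and flag
    voxels whose occupancy changes at least min_changes times as
    candidate dynamic zones."""
    changes = defaultdict(int)
    prev = voxelize(frames[0], voxel)
    for pts in frames[1:]:
        cur = voxelize(pts, voxel)
        for v in prev ^ cur:  # voxels that appeared or disappeared
            changes[v] += 1
        prev = cur
    return {v for v, c in changes.items() if c >= min_changes}
```

A voxel occupied by static structure in every frame never enters the symmetric difference, so only voxels swept by moving objects accumulate changes.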

ACS Style

Lee, S.; Cho, S.; Sim, S.; Kwak, K.; Park, Y.W.; Cho, K. A dynamic zone estimation method using cumulative voxels for autonomous driving. International Journal of Advanced Robotic Systems 2017, 14, 1.

AMA Style

Lee S, Cho S, Sim S, Kwak K, Park YW, Cho K. A dynamic zone estimation method using cumulative voxels for autonomous driving. International Journal of Advanced Robotic Systems. 2017;14(1):1.

Chicago/Turabian Style

Lee, Seongjo, Seoungjae Cho, Sungdae Sim, Kiho Kwak, Yong Woon Park, and Kyungeun Cho. 2017. "A dynamic zone estimation method using cumulative voxels for autonomous driving." International Journal of Advanced Robotic Systems 14, no. 1: 1.

Journal article
Published: 22 June 2016 in Sensors

LiDAR sensors and cameras are broadly used in computer vision and autonomous vehicle applications. However, to convert data between their local coordinate systems, the rigid-body transformation between the sensors must be estimated. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has a small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane; the features are edge and centerline features on a v-shaped calibration target. The proposed algorithm improves calibration accuracy in two ways. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, building on our robust calibration approach for a single LiDAR–camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing a single objective function with loop-closing constraints. We conducted several experiments to evaluate the performance of the extrinsic calibration algorithm; the results show that our calibration method outperforms the other approaches.
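The core optimization can be sketched as a weighted nonlinear least-squares problem over the six extrinsic parameters. This is a simplified stand-in for the paper's method: it minimizes weighted point reprojection distances rather than the point-to-line distances on the v-shaped target, and it uses a Huber loss in place of the paper's penalizing function; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(K, R, t, pts):
    """Project 3D LiDAR points into the image using intrinsics K
    and extrinsics (R, t)."""
    cam = pts @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def calibrate(K, lidar_pts, img_pts, weights):
    """Estimate the LiDAR-to-camera rigid transform by minimizing
    weighted reprojection distances; the Huber loss downweights
    outlier correspondences."""
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        err = project(K, R, x[3:], lidar_pts) - img_pts
        return (weights[:, None] * err).ravel()
    x0 = np.zeros(6)
    x0[5] = 1.0  # start with a unit translation along the optical axis
    sol = least_squares(residuals, x0, loss='huber', f_scale=2.0)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

With noise-free synthetic correspondences the recovered rotation and translation match the ground-truth transform; per-feature weights let more reliable correspondences dominate the fit, echoing the paper's first contribution.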

ACS Style

Sim, S.; Sock, J.; Kwak, K. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera. Sensors 2016, 16, 933.

AMA Style

Sim S, Sock J, Kwak K. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera. Sensors. 2016;16(6):933.

Chicago/Turabian Style

Sim, Sungdae, Juil Sock, and Kiho Kwak. 2016. "Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera." Sensors 16, no. 6: 933.