Bong-Soo Sohn
Department of Computer Engineering, Chung-Ang University, 84 Heukseok-Ro, Seoul 06974, Korea


Feed

Journal article
Published: 19 August 2020 in Sustainability

Building Information Modeling (BIM) refers to 3D-based digital modeling of buildings and infrastructure for efficient design, construction, and management. Governments have recognized and encouraged BIM as a primary method for enabling advanced construction technologies. However, BIM is not universally employed in industry, and most designers still use Computer-Aided Design (CAD) drawings, which have been in use for several decades, because the initial cost of setting up a BIM work environment and the maintenance costs of BIM software are substantial. With this motivation, we propose a novel software system that automatically generates BIM models from two-dimensional (2D) CAD drawings. This is highly significant because only 2D CAD drawings are available for most existing buildings; such buildings can benefit from BIM technology through our low-cost conversion system. A common problem in existing methods is the possible loss of information during the conversion from CAD to BIM, because they focus mainly on creating 3D geometric models for BIM from floor plans alone. The proposed method has the advantage of generating BIM that contains property information in addition to the 3D models, by analyzing the floor plans together with the member lists in the input design drawings. Experimental results show that our method can quickly and accurately generate BIM models from 2D CAD drawings.
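The paper's conversion pipeline can be illustrated with a toy sketch: extruding a 2D wall centerline taken from a floor plan into a 3D element while attaching property information drawn from the member lists. All names and fields below are hypothetical illustrations, not the actual ABGS data model.

```python
# Hypothetical sketch of 2D-to-BIM conversion for a single wall element.
# The function name, output schema, and property fields are illustrative.

def wall_to_bim(x1, y1, x2, y2, height, thickness, properties):
    """Extrude a 2D wall centerline into a 3D box and attach properties."""
    # Direction and unit normal of the wall centerline.
    dx, dy = x2 - x1, y2 - y1
    length = (dx * dx + dy * dy) ** 0.5
    nx, ny = -dy / length, dx / length
    h = thickness / 2.0
    # Four footprint corners, offset by half the thickness on each side.
    footprint = [
        (x1 + nx * h, y1 + ny * h),
        (x2 + nx * h, y2 + ny * h),
        (x2 - nx * h, y2 - ny * h),
        (x1 - nx * h, y1 - ny * h),
    ]
    # Extrude the footprint to the wall height: 8 vertices of a box.
    vertices = [(x, y, z) for z in (0.0, height) for (x, y) in footprint]
    # Unlike a purely geometric conversion, property data is kept alongside.
    return {"type": "Wall", "vertices": vertices, "properties": dict(properties)}

wall = wall_to_bim(0, 0, 5, 0, height=3.0, thickness=0.2,
                   properties={"material": "concrete"})
print(len(wall["vertices"]))  # 8 corner vertices
```

The point of the sketch is the last line of the return value: geometry and property information travel together, which is the information the authors note is lost when only floor-plan geometry is converted.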

ACS Style

Youngsoo Byun; Bong-Soo Sohn. ABGS: A System for the Automatic Generation of Building Information Models from Two-Dimensional CAD Drawings. Sustainability 2020, 12, 6713.

AMA Style

Youngsoo Byun, Bong-Soo Sohn. ABGS: A System for the Automatic Generation of Building Information Models from Two-Dimensional CAD Drawings. Sustainability. 2020; 12(17):6713.

Chicago/Turabian Style

Youngsoo Byun; Bong-Soo Sohn. 2020. "ABGS: A System for the Automatic Generation of Building Information Models from Two-Dimensional CAD Drawings." Sustainability 12, no. 17: 6713.

Journal article
Published: 23 July 2020 in Applied Sciences

The reduction of unnecessary details is important in a variety of imaging tasks. Image denoising can generally be formulated as a diffusion process that iteratively suppresses undesirable image features with high variance. We propose a recursive diffusion process that simultaneously computes the local geometrical properties of the image features and determines the size and shape of the diffusion kernel, leading to an anisotropic scale-space. In the construction of the proposed anisotropic scale-space, image features due to undesirable noise are suppressed, while significant geometrical image features such as edges and corners are preserved across the scale-space. The diffusion kernels are iteratively determined based on the local geometrical properties of the image features. We demonstrate the effectiveness and robustness of the proposed algorithm in the detection of curvilinear features on synthetic and real images. The algorithm is quantitatively evaluated based on the identification of fissures in lung CT imagery. The experimental results indicate that the proposed algorithm can be used for the detection of linear or curvilinear structures in a variety of images, ranging from satellite to medical images.
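The core idea of gradient-driven, edge-preserving diffusion can be sketched with a classic Perona-Malik-style step on a 1D signal. This is a deliberately simplified scalar version, not the authors' recursive anisotropic scale-space with adaptive kernel size and shape; it only shows the shared principle that diffusion strength is modulated by the local gradient.

```python
import math

def diffuse(signal, k=10.0, dt=0.2, steps=20):
    """Edge-preserving diffusion: smooth low-contrast noise, keep strong edges."""
    s = list(signal)
    for _ in range(steps):
        nxt = s[:]
        for i in range(1, len(s) - 1):
            # Forward/backward differences approximate the local gradient.
            gp = s[i + 1] - s[i]
            gm = s[i - 1] - s[i]
            # Conductance decays with gradient magnitude: strong edges
            # diffuse very little (preserved), weak noise is smoothed away.
            cp = math.exp(-(gp / k) ** 2)
            cm = math.exp(-(gm / k) ** 2)
            nxt[i] = s[i] + dt * (cp * gp + cm * gm)
        s = nxt
    return s

# Weak noise on a flat region, plus one strong edge at index 4/5.
noisy = [0, 1, -1, 2, 0, 100, 101, 99, 102, 100]
out = diffuse(noisy)
```

After diffusion, the small fluctuations on each plateau are flattened while the 0-to-100 edge survives, which is the behavior the paper's anisotropic scale-space generalizes to 2D with spatially adaptive kernels.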

ACS Style

Numonov Sardorbek; Bong-Soo Sohn; Byung-Woo Hong. Coherence Enhancement Based on Recursive Anisotropic Scale-Space with Adaptive Kernels. Applied Sciences 2020, 10, 5079.

AMA Style

Numonov Sardorbek, Bong-Soo Sohn, Byung-Woo Hong. Coherence Enhancement Based on Recursive Anisotropic Scale-Space with Adaptive Kernels. Applied Sciences. 2020; 10(15):5079.

Chicago/Turabian Style

Numonov Sardorbek; Bong-Soo Sohn; Byung-Woo Hong. 2020. "Coherence Enhancement Based on Recursive Anisotropic Scale-Space with Adaptive Kernels." Applied Sciences 10, no. 15: 5079.

Journal article
Published: 18 June 2020 in Applied Sciences

This paper provides a new approach that improves collaborative filtering results in recommendation systems. In particular, we aim to ensure the reliability of the collected data set by gathering users' cognition of item similarity; in this work, we collect users' cognitive similarity judgments about similar movies. We also introduce a three-layered architecture that consists of the network between items (item layer), the network between the cognitive similarities of users (cognition layer), and the network between users derived from their cognitive similarity (user layer). For instance, the similarity in the cognitive network can be extracted from a similarity measure on the item network. To evaluate our method, we conducted experiments in the movie domain. For performance evaluation, we use the F-measure, which combines the two criteria Precision and Recall. Compared with the Pearson correlation, our method is more accurate and achieves an improvement over the baseline of 11.1% in the best case. The results show that our method achieves a consistent improvement of 1.8% to 3.2% for various neighborhood sizes in the MAE calculation, and of 2.0% to 4.1% in the RMSE calculation. This indicates that our method improves recommendation performance.
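The F-measure used in the evaluation is the harmonic mean of Precision and Recall. A minimal sketch of that computation over a set of recommended items versus the items the user actually liked (the item names and function signature are illustrative):

```python
def f_measure(recommended, relevant):
    """Return (precision, recall, F1) for a recommendation list."""
    rec, rel = set(recommended), set(relevant)
    hits = len(rec & rel)  # recommended items the user actually liked
    if hits == 0:
        return 0.0, 0.0, 0.0
    precision = hits / len(rec)   # fraction of recommendations that were relevant
    recall = hits / len(rel)      # fraction of relevant items that were recommended
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f = f_measure(["m1", "m2", "m3", "m4"], ["m2", "m3", "m5"])
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.67 0.57
```

Because the harmonic mean penalizes imbalance, a method cannot score well by inflating only one of the two criteria, which is why the paper combines them.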

ACS Style

Luong Vuong Nguyen; Min-Sung Hong; Jason J. Jung; Bong-Soo Sohn. Cognitive Similarity-Based Collaborative Filtering Recommendation System. Applied Sciences 2020, 10, 4183.

AMA Style

Luong Vuong Nguyen, Min-Sung Hong, Jason J. Jung, Bong-Soo Sohn. Cognitive Similarity-Based Collaborative Filtering Recommendation System. Applied Sciences. 2020; 10(12):4183.

Chicago/Turabian Style

Luong Vuong Nguyen; Min-Sung Hong; Jason J. Jung; Bong-Soo Sohn. 2020. "Cognitive Similarity-Based Collaborative Filtering Recommendation System." Applied Sciences 10, no. 12: 4183.

Research article
Published: 9 May 2018 in Mobile Information Systems

3D maps such as Google Earth and Apple Maps (3D mode), in which users can see and navigate in 3D models of real worlds, are widely available in current mobile and desktop environments. Users usually use a monitor for display and a keyboard/mouse for interaction. Head-mounted displays (HMDs) are currently attracting great attention from industry and consumers because they can provide an immersive virtual reality (VR) experience at an affordable cost. However, conventional keyboard and mouse interfaces decrease the level of immersion because the manipulation method does not resemble actual actions in reality, which often makes the traditional interface method inappropriate for the navigation of 3D maps in virtual environments. From this motivation, we design immersive gesture interfaces for the navigation of 3D maps which are suitable for HMD-based virtual environments. We also describe a simple algorithm to capture and recognize the gestures in real-time using a Kinect depth camera. We evaluated the usability of the proposed gesture interfaces and compared them with conventional keyboard and mouse-based interfaces. Results of the user study indicate that our gesture interfaces are preferable for obtaining a high level of immersion and fun in HMD-based virtual environments.
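One way such a gesture interface can map skeleton joints to navigation commands is to compare the hand position against the shoulder with a dead-zone threshold, so small unintentional movements are ignored. The sketch below is an illustrative classifier in that spirit, not the authors' exact recognition algorithm; the joint coordinates mimic what a Kinect depth camera reports.

```python
# Hypothetical gesture classifier for 3D-map navigation.
# hand/shoulder: (x, y, z) joint positions in meters, camera-facing frame.

def classify_gesture(hand, shoulder, threshold=0.15):
    dx = hand[0] - shoulder[0]   # lateral offset  -> turn left/right
    dy = hand[1] - shoulder[1]   # vertical offset -> move up/down
    dz = shoulder[2] - hand[2]   # hand pushed toward camera -> move forward
    # Dead zone: offsets within the threshold are treated as no gesture.
    if dz > threshold and abs(dx) <= threshold and abs(dy) <= threshold:
        return "forward"
    if dx > threshold:
        return "turn_right"
    if dx < -threshold:
        return "turn_left"
    if dy > threshold:
        return "up"
    if dy < -threshold:
        return "down"
    return "idle"

print(classify_gesture(hand=(0.05, 0.02, 1.6), shoulder=(0.0, 0.0, 2.0)))  # forward
```

A real-time system would run a classifier like this on every skeleton frame and feed the resulting command into the 3D-map camera controller.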

ACS Style

Yea Som Lee; Bong-Soo Sohn. Immersive Gesture Interfaces for Navigation of 3D Maps in HMD-Based Mobile Virtual Environments. Mobile Information Systems 2018, 2018, 1-11.

AMA Style

Yea Som Lee, Bong-Soo Sohn. Immersive Gesture Interfaces for Navigation of 3D Maps in HMD-Based Mobile Virtual Environments. Mobile Information Systems. 2018; 2018:1-11.

Chicago/Turabian Style

Yea Som Lee; Bong-Soo Sohn. 2018. "Immersive Gesture Interfaces for Navigation of 3D Maps in HMD-Based Mobile Virtual Environments." Mobile Information Systems 2018: 1-11.

Article
Published: 28 October 2017 in Multimedia Tools and Applications

This paper describes a new algorithm that generates a cartoon-style bas-relief surface from photographs of general scenes. Most previous methods for bas-relief generation have focused on accurate restoration of input 3D models on a background plane. The generation of bas-reliefs with artistic effects has rarely been studied. Considering that non-photorealistic rendering (NPR) techniques are currently very popular and 3D printing technology is developing rapidly, extending NPR techniques to the generation of a bas-relief surface with artistic effects is natural and valuable. Furthermore, cartoon is a basic non-realistic and artistic style familiar to general users. From this motivation, our method focuses on generating a cartoon-style bas-relief surface. We use the lens blur function of Google Camera, which is a smartphone application, to obtain a photograph and its depth map as inputs. Using coherent line drawing and histogram-based quantization methods, we construct a depth map that contains the salient features of given input scenes in abstract form. Displacement mapping from the depth map onto a thin plane generates a cartoon-style bas-relief. Experimental results show that our method generates bas-relief surfaces that contain the characteristics of cartoons, such as coherent border lines and quantized layers.
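One step of the pipeline is quantizing the depth map into a small number of flat layers, which produces the stepped, cartoon-like look. Below is a minimal uniform-bin sketch of that idea; the paper uses a histogram-based quantization, so the exact bin placement there may differ.

```python
# Illustrative depth quantization: snap each depth to the midpoint of its
# bin, producing a small set of flat layers (the cartoon "steps").

def quantize_depths(depths, levels=4):
    lo, hi = min(depths), max(depths)
    if hi == lo:
        return list(depths)  # flat input: nothing to quantize
    step = (hi - lo) / levels
    out = []
    for d in depths:
        bin_idx = min(int((d - lo) / step), levels - 1)  # clamp the top value
        out.append(lo + (bin_idx + 0.5) * step)          # bin midpoint
    return out

q = quantize_depths([0.0, 0.1, 0.45, 0.5, 0.9, 1.0], levels=4)
print(sorted(set(q)))  # [0.125, 0.375, 0.625, 0.875]
```

Applied to a full depth map, the quantized layers combined with coherent border lines are what give the resulting bas-relief its cartoon character.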

ACS Style

Seungchan Lee; Bong-Soo Sohn. Generation of cartoon-style bas-reliefs from photographs. Multimedia Tools and Applications 2017, 78, 28391-28407.

AMA Style

Seungchan Lee, Bong-Soo Sohn. Generation of cartoon-style bas-reliefs from photographs. Multimedia Tools and Applications. 2017; 78(20):28391-28407.

Chicago/Turabian Style

Seungchan Lee; Bong-Soo Sohn. 2017. "Generation of cartoon-style bas-reliefs from photographs." Multimedia Tools and Applications 78, no. 20: 28391-28407.

Journal article
Published: 11 March 2017 in Sensors

This paper describes a new method to automatically generate digital bas-reliefs with depth-of-field effects from general scenes. Most previous methods for bas-relief generation take input in the form of 3D models. However, obtaining 3D models of real scenes or objects is often difficult, inaccurate, and time-consuming. From this motivation, we developed a method that takes as input a set of photographs that can be quickly and ubiquitously captured by ordinary smartphone cameras. A depth map is computed from the input photographs. The value range of the depth map is compressed and used as a base map representing the overall shape of the bas-relief. However, the resulting base map contains little information on details of the scene. Thus, we construct a detail map using pixel values of the input image to express the details. The base and detail maps are blended to generate a new depth map that reflects both overall depth and scene detail information. This map is selectively blurred to simulate the depth-of-field effects. The final depth map is converted to a bas-relief surface mesh. Experimental results show that our method generates a realistic bas-relief surface of general scenes with no expensive manual processing.
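The core depth-map construction blends a range-compressed base map (overall shape) with an intensity-derived detail map (fine scene detail). The sketch below shows that blending on flat lists of values; the compression factor and detail weight are illustrative placeholders, not the paper's actual parameters.

```python
# Illustrative base + detail blending for a bas-relief depth map.
# depth_map: per-pixel depths; intensity: per-pixel brightness in [0, 255].

def build_relief_depth(depth_map, intensity, compress=0.3, detail_weight=0.2):
    # Base map: compress the depth range so the relief stays shallow.
    lo, hi = min(depth_map), max(depth_map)
    span = (hi - lo) or 1.0
    base = [compress * (d - lo) / span for d in depth_map]
    # Detail map: normalized image intensity carries fine scene detail.
    detail = [detail_weight * (v / 255.0) for v in intensity]
    # Blend: base gives the overall depth ordering, detail adds texture.
    return [b + t for b, t in zip(base, detail)]

relief = build_relief_depth([1.0, 2.0, 4.0], [0, 128, 255])
```

In the full method this blended map is then selectively blurred for the depth-of-field effect before being converted to a surface mesh.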

ACS Style

Bong-Soo Sohn. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones. Sensors 2017, 17, 572.

AMA Style

Bong-Soo Sohn. Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones. Sensors. 2017; 17(3):572.

Chicago/Turabian Style

Bong-Soo Sohn. 2017. "Ubiquitous Creation of Bas-Relief Surfaces with Depth-of-Field Effects Using Smartphones." Sensors 17, no. 3: 572.

Article
Published: 14 September 2016 in Multimedia Tools and Applications

This paper describes a novel method for generating a bas-relief surface from the photographic image of a human face. One of the simplest methods is to take each pixel brightness as a depth value and use it to elevate the resulting surface. Although this approach can generate a bas-relief surface with realistic textures, it has the disadvantage of generating erroneous 3D depth. This problem is especially serious in the areas of facial features, such as hair, eyes, eyebrows, nose, and lips, because they are often composed of dark pixel values, and hence make the corresponding area sunken on the resulting surface. Our main contribution is to resolve this problem by detecting the facial features and making them protrude by adjusting the brightness values of the areas. The experimental results show that our method generates realistic and natural looking bas-relief surfaces that represent more accurate 3D depth, especially in the areas of facial features.
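The key idea can be sketched in a few lines: plain brightness-as-depth makes dark facial features sink, so brightness inside a detected feature mask is boosted before the depth conversion to make those regions protrude. The mask and boost value below are illustrative stand-ins for the paper's facial feature detection step.

```python
# Illustrative brightness-to-depth conversion with feature protrusion.
# brightness: per-pixel values in [0, 255]; feature_mask: True where a
# facial feature (eye, eyebrow, lip, ...) was detected.

def relief_depth(brightness, feature_mask, boost=80):
    out = []
    for b, is_feature in zip(brightness, feature_mask):
        if is_feature:
            # Raise dark feature pixels so they protrude instead of sinking.
            b = min(255, b + boost)
        out.append(b / 255.0)  # brightness -> normalized depth
    return out

# A dark eyebrow pixel (40) inside the mask, and a skin pixel (180) outside.
adjusted = relief_depth([40, 180], [True, False])
plain = relief_depth([40, 180], [False, False])
```

Comparing `adjusted` with `plain` shows the effect: the masked eyebrow pixel is elevated relative to the naive conversion, while unmasked skin is unchanged.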

ACS Style

Hai Thien To; Bong-Soo Sohn. Bas-relief generation from face photograph based on facial feature enhancement. Multimedia Tools and Applications 2016, 76, 10407-10423.

AMA Style

Hai Thien To, Bong-Soo Sohn. Bas-relief generation from face photograph based on facial feature enhancement. Multimedia Tools and Applications. 2016; 76(8):10407-10423.

Chicago/Turabian Style

Hai Thien To; Bong-Soo Sohn. 2016. "Bas-relief generation from face photograph based on facial feature enhancement." Multimedia Tools and Applications 76, no. 8: 10407-10423.

Journal article
Published: 7 May 2012 in Computerized Medical Imaging and Graphics

Maximum intensity projection (MIP) is an important visualization method that has been widely used for the diagnosis of enhanced vessels or bones by rotating or zooming MIP images. With the rapid spread of multidetector-row computed tomography (MDCT) scanners, MDCT scans of a patient generate a large data set. However, previous acceleration methods for MIP rendering of such data sets failed to generate MIP images at interactive rates. In this paper, we propose novel culling methods in both object and image space for interactive MIP rendering of large medical data sets. In object space, for the visibility test of a block, we propose an initial occluder derived from the preceding image, which exploits temporal coherence and greatly increases the block culling ratio. In addition, we propose a hole-filling method based on mesh generation and rendering to improve the culling performance during the generation of the initial occluder. In image space, we find that there is a trade-off between the block culling ratio in object space and the culling efficiency in image space. We classify the visible blocks into two types by their visibility and propose a balanced culling method that applies a different image-space culling algorithm to each type, exploiting this trade-off to improve the rendering speed. Experimental results on twenty CT data sets showed that our method achieved an average speedup of 3.85 times over the conventional bricking method without any loss of image quality. Using our visibility culling method, we achieved interactive GPU-based MIP rendering of large medical data sets.
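The core operation is maximum intensity projection: each output pixel is the maximum sample along its viewing ray. The sketch below also shows the flavor of block-based culling the paper accelerates: the ray is split into blocks with precomputed maxima, and a block is skipped when its maximum cannot exceed the value found so far. This is a simplified 1D-per-ray illustration; the paper's method additionally uses image-space occluders and GPU rendering.

```python
# Illustrative MIP along one ray with block-level culling.

def mip_ray(samples, block_size=4):
    # Precompute per-block maxima (done once per volume in practice).
    blocks = [max(samples[i:i + block_size])
              for i in range(0, len(samples), block_size)]
    best = float("-inf")
    for b, block_max in enumerate(blocks):
        if block_max <= best:
            continue  # culled: this block cannot change the result
        for s in samples[b * block_size:(b + 1) * block_size]:
            if s > best:
                best = s
    return best

ray = [10, 20, 5, 7, 90, 3, 2, 1, 50, 60, 40, 30]
print(mip_ray(ray))  # 90
```

In this example the third block (maximum 60) is skipped entirely because 90 was already found, which is the kind of work avoidance the paper's visibility culling scales up to large MDCT volumes.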

ACS Style

Heewon Kye; Bong-Soo Sohn; Jeongjin Lee. Interactive GPU-based maximum intensity projection of large medical data sets using visibility culling based on the initial occluder and the visible block classification. Computerized Medical Imaging and Graphics 2012, 36, 366-374.

AMA Style

Heewon Kye, Bong-Soo Sohn, Jeongjin Lee. Interactive GPU-based maximum intensity projection of large medical data sets using visibility culling based on the initial occluder and the visible block classification. Computerized Medical Imaging and Graphics. 2012; 36(5):366-374.

Chicago/Turabian Style

Heewon Kye; Bong-Soo Sohn; Jeongjin Lee. 2012. "Interactive GPU-based maximum intensity projection of large medical data sets using visibility culling based on the initial occluder and the visible block classification." Computerized Medical Imaging and Graphics 36, no. 5: 366-374.