Although breast ultrasonography is the mainstay modality for differentiating between benign and malignant breast masses, it has intrinsic problems with false positives and substantial interobserver variability. Artificial intelligence (AI), particularly with deep learning models, is expected to improve workflow efficiency and serve as a second opinion. AI is highly useful for performing three main clinical tasks in breast ultrasonography: detection (localization/segmentation), differential diagnosis (classification), and prognostication (prediction). This article provides a current overview of AI applications in breast ultrasonography, with a discussion of methodological considerations in the development of AI models and an up-to-date literature review of potential clinical applications.
Jaeil Kim; Hye Jung Kim; Chanho Kim; Won Hwa Kim. Artificial intelligence in breast ultrasonography. Ultrasonography 2021, 40(2), 183-190.
The use of three-dimensional face-scanning systems to obtain facial models is of increasing interest; however, systematic assessments of the reliability of portable face-scan devices have not been widely conducted. Therefore, a systematic review and meta-analysis were performed on the accuracy of facial models obtained by portable face-scanners compared with those obtained by stationary face-scanning systems. A systematic literature search was conducted in electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines for articles published from 1 January 2009 to 18 March 2020. A total of 2806 articles were identified, with 21 articles available for the narrative review and nine studies available for meta-analysis. The meta-analysis revealed that the accuracy of the digital face models generated by the portable scanners was not significantly different from that of the stationary face-scanning systems (standardized mean difference (95% confidence interval) = −0.325 mm (−1.186 to 0.536); z = −0.74; p = 0.459). Within the comparison of the portable systems, no statistically significant difference was found in the accuracy of the facial models among scanning methods (p = 0.063). Overall, portable face-scan devices can be considered reliable for obtaining facial models. However, caution is needed when applying face-scanners with respect to scanning device settings, control of involuntary facial movements, landmark and facial region identification, and scanning protocols.
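The pooled effect size reported above is a standardized mean difference. As a minimal sketch of how such an effect size is computed for a single study (Cohen's d with a pooled standard deviation; the measurement values below are invented for illustration and are not data from the review):

```python
import numpy as np

def standardized_mean_difference(a, b):
    """Cohen's d: difference of group means divided by the pooled
    standard deviation. This is the effect-size form (SMD) used when
    pooling accuracy measurements across scanner studies."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                        / (n1 + n2 - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Illustrative deviation measurements (mm) from a portable and a
# stationary scanner -- made-up numbers, not data from the review.
portable = [0.42, 0.35, 0.50, 0.47, 0.39]
stationary = [0.40, 0.33, 0.48, 0.45, 0.41]
d = standardized_mean_difference(portable, stationary)
```

A meta-analysis then combines one such d (with its confidence interval) per study into the pooled estimate quoted above.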
Hang-Nga Mai; Jaeil Kim; Youn-Hee Choi; Du-Hyeong Lee. Accuracy of Portable Face-Scanning Devices for Obtaining Three-Dimensional Face Models: A Systematic Review and Meta-Analysis. International Journal of Environmental Research and Public Health 2020, 18(1), 94.
We propose an unsupervised network with adversarial learning, the Raindrop-aware GAN, which enhances the quality of coastal video images contaminated by raindrops. Raindrop removal from coastal videos faces two main difficulties: converting the degraded image into a clean one by visually removing the raindrops, and restoring the background coastal wave information in the raindrop regions. The components of the proposed network, a generator and a discriminator for adversarial learning, are trained on unpaired sets of raindrop-degraded images and clean, raindrop-free images. By generating raindrop masks and background-restored images, the generator restores the background information only in the raindrop regions, preserving the rest of the input as much as possible. The proposed network was trained and tested on an open-access dataset and a dataset collected directly from a coastal area. It was then evaluated with three metrics: peak signal-to-noise ratio, structural similarity, and a naturalness image quality evaluator. These metrics were 8.2% (+2.012), 0.2% (+0.002), and 1.6% (−0.196) better than those of the state-of-the-art method, respectively. In the visual assessment of the enhanced video image quality, our method restored the image patterns of steep wave crests and wave breaking better than the other methods. In both quantitative and qualitative experiments, the proposed method removed raindrops from coastal video and recovered the damaged background wave information more effectively than state-of-the-art methods.
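Of the three evaluation metrics above, the peak signal-to-noise ratio is the simplest to state precisely. A minimal numpy sketch of PSNR on toy 8-bit frames (not the paper's evaluation code; the frames here are synthetic):

```python
import numpy as np

def psnr(reference, restored, max_value=255.0):
    """Peak signal-to-noise ratio in dB between a clean reference
    frame and a restored frame. Higher is better."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(restored, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy 8-bit "frames": a good restoration sits closer to the clean
# frame than the degraded input, so its PSNR is higher.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(32, 32)).astype(float)
degraded = np.clip(clean + rng.normal(0, 20, size=clean.shape), 0, 255)
restored = np.clip(clean + rng.normal(0, 5, size=clean.shape), 0, 255)
```

The +2.012 figure quoted above is a gain in exactly these dB units over the prior method.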
Jinah Kim; Dong Huh; Taekyung Kim; Jaeil Kim; Jeseon Yoo; Jae-Seol Shim. Raindrop-Aware GAN: Unsupervised Learning for Raindrop-Contaminated Coastal Video Enhancement. Remote Sensing 2020, 12(20), 3461.
The early and accurate diagnosis of skin cancer is crucial for providing patients with advanced treatment by focusing medical personnel on specific parts of the skin. Networks based on encoder–decoder architectures have been applied effectively in numerous computer-vision tasks. U-Net, a CNN architecture based on the encoder–decoder design, has achieved successful performance for skin-lesion segmentation. However, this network has several drawbacks caused by its upsampling method and activation function. In this paper, we propose a fully convolutional network based on a modified U-Net, in which bilinear interpolation is used for upsampling, followed by a block of convolution layers with parametric rectified linear unit (PReLU) non-linearity. To avoid overfitting, dropout is applied after each convolution block. The results demonstrate that the proposed technique achieves state-of-the-art performance for skin-lesion segmentation, with 94% pixel accuracy and an 88% Dice coefficient.
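The two scores reported above, pixel accuracy and the Dice coefficient, are standard segmentation metrics; a minimal numpy sketch of both on toy binary masks (not the authors' evaluation code):

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the ground truth."""
    pred, target = np.asarray(pred), np.asarray(target)
    return np.mean(pred == target)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between binary masks: 2*|A and B| / (|A| + |B|).
    Unlike pixel accuracy, it ignores the (usually dominant) background."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy lesion masks: the prediction overlaps most of the target but is
# shifted one column to the right.
target = np.zeros((8, 8), dtype=int); target[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int);   pred[2:6, 3:7] = 1
```

On these toy masks pixel accuracy is inflated by the large background, which is why lesion-segmentation papers report Dice alongside it.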
Karshiev Sanjar; Olimov Bekhzod; Jaeil Kim; Jaesoo Kim; Anand Paul; Jeonghong Kim. Improved U-Net: Fully Convolutional Network Model for Skin-Lesion Segmentation. Applied Sciences 2020, 10(10), 3658.
Kim, J. and Kim, J., 2020. Estimation of water surface flow velocity using coastal video imagery by visual tracking with deep learning. In: Malvárez, G. and Navas, F. (eds.), Global Coastal Issues of 2020. Journal of Coastal Research, Special Issue No. 95, pp. 522-526. Coconut Creek (Florida), ISSN 0749-0208.
This paper describes a method for estimating the flow velocity of the water surface in video imagery by tracking waves with a deep neural network for visual object tracking, trained with unsupervised learning. The network consists of two stages, scene separation and image registration, which extract only the waves and track their propagation, respectively. A video dataset acquired at Anmok Beach in South Korea was used to train the model, which learns the behavior of propagating waves. The model's performance was evaluated by measuring image similarity on a test dataset, and the estimated surface flow velocity of the propagating waves was compared with the flow obtained by the conventional image-processing method of particle image velocimetry. The results show that the proposed deep learning approach is very promising for measuring and predicting coastal waves, especially in the surf zone.
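Once the registration stage yields a per-frame pixel displacement for a tracked wave crest, converting it to a physical surface velocity is a unit conversion. A sketch with hypothetical numbers (the ground sampling distance and frame rate below are assumptions for illustration, not values from the paper):

```python
def surface_velocity(displacement_px, metres_per_pixel, fps):
    """Convert a per-frame pixel displacement of a tracked wave crest
    into a physical surface flow velocity in metres per second."""
    return displacement_px * metres_per_pixel * fps

# Hypothetical numbers: a crest moves 3 px between consecutive frames,
# the rectified video has a 0.1 m ground sampling distance, and the
# camera runs at 10 frames per second.
v = surface_velocity(3, 0.1, 10)  # roughly 3 m/s
```

In practice the video must first be georectified so that `metres_per_pixel` is constant across the frame; the displacement itself is what the registration network estimates.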
Jinah Kim; Jaeil Kim. Estimation of Water Surface Flow Velocity in Coastal Video Imagery by Visual Tracking with Deep Learning. Journal of Coastal Research 2020, 95(sp1), 522-526.
Activation functions play important roles in determining the depth and non-linearity of deep learning models. Since the Rectified Linear Unit (ReLU) was introduced, many modifications, in which noise is intentionally injected, have been proposed to avoid overfitting. The Exponential Linear Unit (ELU) and its variants, with trainable parameters, have been proposed to reduce the bias-shift effect that is often observed in ReLU-type activation functions. In this paper, we propose a novel activation function, called the Elastic Exponential Linear Unit (EELU), which combines the advantages of both types of activation functions in a generalized form. EELU has an elastic slope in the positive part, and preserves the negative signal by using a small non-zero gradient. We also present a new strategy to inject neuronal noise into the activation function using a Gaussian distribution to improve generalization. We demonstrated how EELU can represent a wider variety of features with random noise than other activation functions, by visualizing the latent features of convolutional neural networks. We evaluated the effectiveness of the EELU approach through extensive experiments with image classification using the CIFAR-10/CIFAR-100, ImageNet, and Tiny ImageNet datasets. Our experimental results show that EELU achieved better generalization performance and improved classification accuracy over conventional activation functions, such as ReLU, ELU, ReLU- and ELU-like variants, Scaled ELU, and Swish. EELU produced performance improvements in image classification using a smaller number of training samples, owing to its noise injection strategy, which allows significant variation in function outputs, including deactivation.
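As a simplified numpy sketch of the idea described above, not the paper's exact formulation: an EELU-like activation draws an elastic slope for the positive part during training (Gaussian noise injection) and follows the ELU curve on the negative part, keeping a small non-zero gradient there.

```python
import numpy as np

def eelu_like(x, alpha=1.0, sigma=0.1, training=True, rng=None):
    """Simplified EELU-style activation (illustrative, not the paper's
    exact definition).

    Positive inputs get an 'elastic' slope drawn around 1 during
    training; negative inputs follow the ELU curve alpha*(exp(x)-1),
    preserving the negative signal with a small non-zero gradient.
    At inference the slope is fixed to 1, like ReLU on the positive side.
    """
    x = np.asarray(x, dtype=float)
    rng = rng or np.random.default_rng()
    slope = 1.0 + sigma * rng.standard_normal() if training else 1.0
    return np.where(x > 0, slope * x, alpha * np.expm1(x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
y_eval = eelu_like(x, training=False)  # deterministic at inference
```

Because the slope is resampled per call during training, the same pre-activation can map to different outputs, which is the source of the regularizing output variation the abstract describes.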
Daeho Kim; Jinah Kim; Jaeil Kim. Elastic exponential linear units for convolutional neural networks. Neurocomputing 2020, 406, 253-266.
In this paper, we propose a series of procedures for coastal wave-tracking using coastal video imagery with deep neural networks. The framework consists of three stages: video enhancement, hydrodynamic scene separation, and wave-tracking. First, a generative adversarial network, trained using paired raindrop and clean videos, is applied to remove image distortions caused by raindrops and to restore the background information of coastal waves. Next, a hydrodynamic scene of propagated wave information is separated from the surrounding environmental information in the enhanced coastal video imagery using a deep autoencoder network. Finally, propagating waves are tracked by registering consecutive images in the quality-enhanced and scene-separated coastal video imagery using a spatial transformer network. The instantaneous wave speed of each individual wave crest and breaker in the video domain is successfully estimated by learning the behavior of transformed and propagated waves in the surf zone with deep neural networks. Since the framework enables the acquisition of spatio-temporal information of the surf zone through the characterization of wave breakers, including wave run-up, we expect that it will lead to an improved understanding of nearshore wave dynamics.
Jinah Kim; Jaeil Kim; Taekyung Kim; Dong Huh; Sofia Caires. Wave-Tracking in the Surf Zone Using Coastal Video Imagery with Deep Neural Networks. Atmosphere 2020, 11(3), 304.
The Journal of the Korea Computer Graphics Society is the journal of the Korea Computer Graphics Society (KCGS), a forum for presenting new findings and state-of-the-art research results in computer graphics and related research areas. The society's flagship journal began publication in 1995; since 2000 it has been issued four times a year (March, June, September, and December), and since 2015 a fifth issue, a special issue for the KCGS conference, has been published each July.
Dong Huh; Jaeil Kim; Jinah Kim. Raindrop Removal and Background Information Recovery in Coastal Wave Video Imagery using Generative Adversarial Networks. Journal of the Korea Computer Graphics Society 2019, 25(5), 1-9.
Taekyung Kim; Jaeil Kim; Jinah Kim. Hydrodynamic scene separation from video imagery of ocean wave using autoencoder. Journal of the Korea Computer Graphics Society 2019, 25(4), 9-16.
Missing data is a common problem in longitudinal studies due to subject dropouts and failed scans. We present a graph-based convolutional neural network to predict missing diffusion MRI data. In particular, we consider the relationships between sampling points in the spatial domain and the diffusion wave-vector domain to construct a graph. We then use a graph convolutional network to learn the non-linear mapping from available data to missing data. Our method harnesses a multi-scale residual architecture with adversarial learning for prediction with greater accuracy and perceptual quality. Experimental results show that our method is accurate and robust in the longitudinal prediction of infant brain diffusion MRI data.
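The core operation of the network described above is graph convolution over the constructed sampling-point graph. A minimal numpy sketch of one layer in the common Kipf–Welling form (a generic illustration, not the paper's multi-scale residual architecture; the toy graph and feature sizes are invented):

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph convolution: add self-loops, symmetrically normalise
    the adjacency, aggregate neighbour features, then apply a linear
    map followed by ReLU."""
    a_hat = adjacency + np.eye(adjacency.shape[0])     # A + I (self-loops)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt           # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weights, 0)  # ReLU

# Toy graph: 4 sampling points on a chain, 3 input features per point,
# mapped to 2 output features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(1)
out = gcn_layer(adj, rng.standard_normal((4, 3)), rng.standard_normal((3, 2)))
```

In the paper's setting the graph nodes encode relationships in both the spatial domain and the diffusion wave-vector domain, and stacked layers of this kind learn the non-linear mapping from available to missing measurements.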
Yoonmi Hong; Jaeil Kim; Geng Chen; Weili Lin; Pew-Thian Yap; Dinggang Shen. Longitudinal Prediction of Infant Diffusion MRI Data via Graph Convolutional Adversarial Networks. IEEE Transactions on Medical Imaging 2019, 38(12), 2717-2725.