Jyun-Ping Jhan
Department of Geomatics, National Cheng Kung University, 34912 Tainan, Taiwan

Feed

Journal article
Published: 12 May 2021 in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

The original multispectral (MS) images obtained from multi-lens multispectral cameras (MSCs) exhibit significant misregistration errors, which require image registration for precise spectral measurement. However, due to the non-linear intensity differences among MS images, it is difficult for image matching to find sufficient correct matches (CMs) for image registration, which results in complex coarse-to-fine solutions. Based on a modification of Speeded-Up Robust Features (SURF), we propose a normalized SURF (N-SURF) that significantly increases the number of CMs among different pairs of MS images and makes one-step image registration possible. In this study, we first introduce N-SURF and adopt MS datasets acquired from three representative MSCs (MCA-12, Altum, and Sequoia) to evaluate its matching ability. Meanwhile, we use three image transformation models: Affine Transform (AT), Projective Transform (PT), and an Extended Projective Transform (EPT), to correct the misregistration errors of MSCs and evaluate their co-registration correctness. The results show that N-SURF obtains 6-20 times more CMs than SURF and successfully matches all pairs of MS images, while SURF fails in cases of significant spectral differences. Moreover, visual comparison, accuracy assessment, and residual analysis show that EPT corrects the viewpoint and lens distortion differences of MSCs more accurately than AT and PT, achieving a co-registration accuracy of 0.2-0.4 pixels. Finally, combining N-SURF matching with the EPT model, we developed an automatic MS image registration tool suitable for various multi-lens MSCs.
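As a rough illustration of the simplest of the three transformation models, the Affine Transform maps matched pixel coordinates between two bands with six parameters that can be fitted to the correct matches by linear least squares. The sketch below is a minimal NumPy version; the function names and data are illustrative, not the paper's actual tool:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform (6 parameters) from matched points.

    src, dst: (N, 2) arrays of corresponding pixel coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ A = dst
    return A.T                                    # (2, 3)

def apply_affine(A, pts):
    """Apply a 2x3 affine transform to (N, 2) pixel coordinates."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T
```

With exact correspondences the fit recovers the transform to machine precision; with noisy matches it gives the least-squares estimate over all of them.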

ACS Style

Jyun-Ping Jhan; Jiann-Yeou Rau. A Generalized Tool for Accurate and Efficient Image Registration of UAV Multi-lens Multispectral Cameras by N-SURF Matching. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2021, PP, 1-1.

AMA Style

Jyun-Ping Jhan, Jiann-Yeou Rau. A Generalized Tool for Accurate and Efficient Image Registration of UAV Multi-lens Multispectral Cameras by N-SURF Matching. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2021; PP (99):1-1.

Chicago/Turabian Style

Jyun-Ping Jhan; Jiann-Yeou Rau. 2021. "A Generalized Tool for Accurate and Efficient Image Registration of UAV Multi-lens Multispectral Cameras by N-SURF Matching." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing PP, no. 99: 1-1.

Journal article
Published: 14 January 2021 in IEEE Geoscience and Remote Sensing Letters

The effectiveness of damaged building investigation relies on rapid data collection, and jointly applying an unmanned aerial vehicle (UAV) and a backpack panoramic imaging system can quickly and comprehensively record the damage status. Integrating the two for generating complete 3-D point clouds (3DPCs) is important for further assisting the 3-D measurement of the damaged areas. During the 2016 Meinong earthquake (Taiwan), the system collected multiview aerial images (MVAIs) and ground panoramic images of two collapsed buildings. However, because the spatial offsets of the spherical camera's lenses result in nonideal panoramic images (NIPIs), an appropriate spherical radius has to be chosen to reduce the distance-related stitching errors. To evaluate the impact of using NIPIs for 3-D mapping, the geometric accuracy of the 3-D scene reconstruction (3DSR) and the usability of the 3DPCs were assessed. This study introduces the stitching errors of panoramic images, uses sky masks for successful 3DSR, and obtains clean point clouds. It then analyzes the usability of point clouds obtained from only NIPIs, only MVAIs, and their integration. The analysis shows that NIPIs can be processed faster than their unstitched original images and can increase the completeness of point clouds at the building's lower floors, while MVAIs can reduce the stitching errors of NIPIs to an acceptable range. Therefore, integrating both image types is necessary to achieve rapid and complete point cloud generation.
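The distance-related stitching error mentioned above can be illustrated with a first-order parallax approximation (my own simplification, not the paper's formula): a lens offset `o` from the nominal projection centre, stitched onto a sphere of radius `R`, displaces a scene point at distance `d` by roughly `o * |1/R - 1/d|` radians, which vanishes exactly when `d == R`:

```python
import numpy as np

def stitching_error_rad(offset, radius, distance):
    """Approximate angular stitching error (radians) for a multi-lens
    spherical camera whose lenses sit `offset` metres from the nominal
    projection centre, when images are stitched onto a sphere of
    `radius` metres and the scene point lies at `distance` metres.

    First-order parallax approximation; illustrative only.
    """
    return offset * np.abs(1.0 / radius - 1.0 / distance)
```

This is why the radius must be chosen to suit the dominant scene depth: points much nearer or farther than the chosen radius stitch with visible seams.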

ACS Style

Jyun-Ping Jhan; Norman Kerle; Jiann-Yeou Rau. Integrating UAV and Ground Panoramic Images for Point Cloud Analysis of Damaged Building. IEEE Geoscience and Remote Sensing Letters 2021, PP, 1-5.

AMA Style

Jyun-Ping Jhan, Norman Kerle, Jiann-Yeou Rau. Integrating UAV and Ground Panoramic Images for Point Cloud Analysis of Damaged Building. IEEE Geoscience and Remote Sensing Letters. 2021; PP (99):1-5.

Chicago/Turabian Style

Jyun-Ping Jhan; Norman Kerle; Jiann-Yeou Rau. 2021. "Integrating UAV and Ground Panoramic Images for Point Cloud Analysis of Damaged Building." IEEE Geoscience and Remote Sensing Letters PP, no. 99: 1-5.

Journal article
Published: 12 August 2020 in Remote Sensing

The Zengwen desilting tunnel project installed an Elephant Trunk Steel Pipe (ETSP) at the bottom of the reservoir that is designed to connect the new bypass tunnel and reach downward to the sediment surface. Since the ETSP is huge and its underwater installation is an unprecedented construction method, there are several uncertainties in its dynamic motion during installation. To ensure construction safety, a 1:20 ETSP scale model was built to simulate the underwater installation procedure, and its six-degrees-of-freedom (6-DOF) motion parameters were monitored by offline underwater 3D rigid object tracking and photogrammetry. Three cameras formed a multicamera system, and several auxiliary devices, such as waterproof housings, tripods, and a waterproof LED, were adopted to protect the cameras and obtain clear images in the underwater environment. However, since it was difficult for the divers to position the cameras and ensure that their fields of view overlapped, each camera could observe only the head, middle, or tail part of the ETSP, respectively, leading to a small overlap area among the images. Therefore, the traditional approach of multi-image forward intersection, in which the cameras' positions and orientations must be calibrated and fixed in advance, could not be applied. Instead, by tracking the 3D coordinates of the ETSP and obtaining the camera orientation information via space resection, we propose a multicamera coordinate transformation and adopt a single-camera relative orientation transformation to calculate the 6-DOF motion parameters. The offline procedure first acquires the 3D coordinates of the ETSP by taking multiposition images with a precalibrated camera in the air, and then uses these 3D coordinates as control points to perform space resection of the calibrated underwater cameras. Finally, we calculated the 6-DOF of the ETSP from the camera orientation information through both multi- and single-camera approaches. In this study, we show the results of camera calibration in the air and in the underwater environment, present the 6-DOF motion parameters of the ETSP underwater installation and the reconstructed 4D animation, and compare the differences between the multi- and single-camera approaches.
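The paper derives the 6-DOF from space resection and camera coordinate transformations; as a generic illustration of the final step, turning tracked 3-D target coordinates at two epochs into rotation and translation parameters, here is a standard SVD-based (Kabsch) rigid-body fit. This is a stand-in sketch, not the authors' multicamera procedure:

```python
import numpy as np

def rigid_6dof(P, Q):
    """Best-fit rotation R and translation t with Q ~= P @ R.T + t,
    via the SVD-based Kabsch/Procrustes solution.

    P, Q: (N, 3) corresponding 3-D points (e.g. tracked targets on the
    pipe at two epochs). Returns (R, t); Euler angles can be extracted
    from R to complete the 6-DOF description.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given noise-free correspondences the recovered rotation and translation match the true motion to machine precision.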

ACS Style

Jyun-Ping Jhan; Jiann-Yeou Rau; Chih-Ming Chou. Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation. Remote Sensing 2020, 12, 2600.

AMA Style

Jyun-Ping Jhan, Jiann-Yeou Rau, Chih-Ming Chou. Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation. Remote Sensing. 2020; 12 (16):2600.

Chicago/Turabian Style

Jyun-Ping Jhan; Jiann-Yeou Rau; Chih-Ming Chou. 2020. "Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation." Remote Sensing 12, no. 16: 2600.

Journal article
Published: 28 November 2018 in GIScience & Remote Sensing

The fraction of absorbed photosynthetically active radiation (fAPAR) is an important plant physiological index used to assess the ability of vegetation to absorb PAR, which is utilized to sequester carbon in the atmosphere. This index is also important for monitoring plant health and productivity; it has been widely used to monitor low-stature crops and is a crucial metric for food security assessment. fAPAR has commonly been correlated with greenness indices derived from spaceborne optical imagery, but their relatively coarse spatial or temporal resolution may prohibit application on complex land surfaces. In addition, the relationship between fAPAR and remotely sensed greenness data may be influenced by the heterogeneity of canopies. Multispectral and hyperspectral unmanned aerial vehicle (UAV) imaging systems, in contrast, can provide several spectral bands at sub-meter resolutions, permitting precise estimation of fAPAR using chemometrics. However, the data pre-processing procedures are cumbersome, which makes large-scale mapping challenging. In this study, we applied a set of well-verified image processing protocols and a chemometric model to a lightweight, frame-based, narrow-band (10 nm) UAV imaging system to estimate fAPAR over a relatively large cultivated area containing a variety of low-stature tropical crops along with native and non-native grasses. A principal component regression was applied to 12 bands of spectral reflectance data to minimize the collinearity issue and compress the data variation. Stepwise regression was employed to reduce the data dimensionality, and the first, third, and fifth components were selected to estimate fAPAR. Our results indicate that 77% of the fAPAR variation was explained by the model.
All bands that are sensitive to foliar pigment concentrations, canopy structure, and/or leaf water content may contribute to the estimation, especially those located close to (720 nm) or within (750 nm and 780 nm) the near-infrared spectral region. This study demonstrates that this narrow-band, frame-based UAV system would be useful for vegetation monitoring. With proper pre-flight planning and hardware improvement, the mapping capability of a narrow-band multispectral UAV system could be comparable to that of a manned aircraft system.
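The principal component regression step described above (PCA on the 12 reflectance bands, then ordinary least squares on a subset of component scores) can be sketched as follows; the function names and component-selection interface are illustrative, not the study's actual pipeline:

```python
import numpy as np

def pcr_fit(X, y, components):
    """Principal component regression: project mean-centred reflectance
    X (n_samples, n_bands) onto the selected PCA components, then fit
    ordinary least squares (with intercept) in the reduced space.

    components: indices of the principal components to keep,
    e.g. [0, 2, 4] for the first, third, and fifth.
    Returns a predict(X_new) function.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[components].T                          # kept loadings
    Z = Xc @ V                                    # component scores
    Z1 = np.hstack([Z, np.ones((len(y), 1))])     # add intercept column
    beta, *_ = np.linalg.lstsq(Z1, y, rcond=None)

    def predict(Xn):
        Zn = (Xn - mu) @ V
        return np.hstack([Zn, np.ones((Xn.shape[0], 1))]) @ beta

    return predict
```

Restricting the regression to a few leading components is what tames the strong collinearity among narrow, adjacent spectral bands.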

ACS Style

Cho-Ying Huang; Hsin-Lin Wei; Jiann-Yeou Rau; Jyun-Ping Jhan. Use of principal components of UAV-acquired narrow-band multispectral imagery to map the diverse low stature vegetation fAPAR. GIScience & Remote Sensing 2018, 56, 605-623.

AMA Style

Cho-Ying Huang, Hsin-Lin Wei, Jiann-Yeou Rau, Jyun-Ping Jhan. Use of principal components of UAV-acquired narrow-band multispectral imagery to map the diverse low stature vegetation fAPAR. GIScience & Remote Sensing. 2018; 56 (4):605-623.

Chicago/Turabian Style

Cho-Ying Huang; Hsin-Lin Wei; Jiann-Yeou Rau; Jyun-Ping Jhan. 2018. "Use of principal components of UAV-acquired narrow-band multispectral imagery to map the diverse low stature vegetation fAPAR." GIScience & Remote Sensing 56, no. 4: 605-623.

Journal article
Published: 01 March 2018 in ISPRS Journal of Photogrammetry and Remote Sensing
ACS Style

Jyun-Ping Jhan; Jiann-Yeou Rau; Norbert Haala. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera. ISPRS Journal of Photogrammetry and Remote Sensing 2018, 137, 47-60.

AMA Style

Jyun-Ping Jhan, Jiann-Yeou Rau, Norbert Haala. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera. ISPRS Journal of Photogrammetry and Remote Sensing. 2018; 137:47-60.

Chicago/Turabian Style

Jyun-Ping Jhan; Jiann-Yeou Rau; Norbert Haala. 2018. "Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera." ISPRS Journal of Photogrammetry and Remote Sensing 137: 47-60.

Journal article
Published: 01 April 2016 in ISPRS Journal of Photogrammetry and Remote Sensing

The MiniMCA (Miniature Multiple Camera Array) is a lightweight, frame-based, multi-lens multispectral sensor suitable for mounting on an unmanned aerial system (UAS) to acquire high spatial and temporal resolution imagery for various remote sensing applications. Since the MiniMCA has a significant band misregistration effect, an automatic and precise band-to-band registration (BBR) method is proposed in this study. Based on the principle of sensor plane-to-plane projection, a modified projective transformation (MPT) model is developed, in which all MPT coefficients are estimated from indoor camera calibration together with the correction of two systematic errors, so that all bands can be transferred into the same image space. Quantitative error analysis shows that the proposed BBR scheme is scene independent and can achieve an accuracy of 0.33 pixels, demonstrating that the proposed method is accurate and reliable. Meanwhile, it is difficult to mark ground control points (GCPs) on MiniMCA images, as their spatial resolution is low when the flight height exceeds 400 m. In this study, a higher-resolution RGB camera is therefore adopted to produce a digital surface model (DSM) and assist MiniMCA ortho-image generation. After precise BBR, only one reference band of the MiniMCA imagery is necessary for aerial triangulation, because all bands share the same exterior and interior orientation parameters; consequently, all MiniMCA imagery can be ortho-rectified using the orientation parameters of the reference band. The results of the proposed ortho-rectification procedure show that the co-registration error between the MiniMCA reference band and the RGB ortho-images is less than 0.6 pixels.
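The modified projective transformation extends a plain projective (homography) model with additional correction terms; the sketch below shows only the plain projective part, i.e. mapping pixel coordinates through a 3x3 homography and estimating it from matches via the direct linear transform. This is an illustrative baseline, not the MPT itself:

```python
import numpy as np

def apply_projective(H, pts):
    """Map (N, 2) pixel coordinates through a 3x3 projective transform H
    (with homogeneous divide), as when resampling one band into the
    geometry of a reference band."""
    Xh = np.hstack([pts, np.ones((pts.shape[0], 1))]) @ H.T
    return Xh[:, :2] / Xh[:, 2:3]

def estimate_projective(src, dst):
    """Direct linear transform (DLT) estimate of H from >= 4 matches:
    each match contributes two homogeneous linear constraints, and the
    null vector of the stacked system gives H up to scale."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # fix the scale ambiguity
```

In practice the DLT estimate is refined and, in the paper's scheme, the coefficients come from indoor calibration rather than per-scene matching, which is what makes the registration scene independent.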

ACS Style

Jyun-Ping Jhan; Jiann-Yeou Rau; Cho-Ying Huang. Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS. ISPRS Journal of Photogrammetry and Remote Sensing 2016, 114, 66-77.

AMA Style

Jyun-Ping Jhan, Jiann-Yeou Rau, Cho-Ying Huang. Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS. ISPRS Journal of Photogrammetry and Remote Sensing. 2016; 114:66-77.

Chicago/Turabian Style

Jyun-Ping Jhan; Jiann-Yeou Rau; Cho-Ying Huang. 2016. "Band-to-band registration and ortho-rectification of multilens/multispectral imagery: A case study of MiniMCA-12 acquired by a fixed-wing UAS." ISPRS Journal of Photogrammetry and Remote Sensing 114: 66-77.

Journal article
Published: 01 August 2014 in IEEE Transactions on Geoscience and Remote Sensing

In addition to aerial imagery, point clouds are important remote sensing data in urban environment studies. It is essential to extract semantic information from both images and point clouds for such purposes; thus, this study aims to automatically classify 3-D point clouds generated using oblique aerial imagery (OAI)/vertical aerial imagery (VAI) into various urban object classes, such as roof, facade, road, tree, and grass. A multicamera airborne imaging system that can simultaneously acquire VAI and OAI is suggested. The acquired small-format images contain only three RGB spectral bands and are used to generate photogrammetric point clouds through a multiview-stereo dense matching technique. To assign each 3-D point cloud to a corresponding urban object class, we first analyzed the original OAI through object-based image analyses. A rule-based hierarchical semantic classification scheme that utilizes spectral information and geometry- and topology-related features was developed, in which the object height and gradient features were derived from the photogrammetric point clouds to assist in the detection of elevated objects, particularly for the roof and facade. Finally, the photogrammetric point clouds were classified into the aforementioned five classes. The classification accuracy was assessed on the image space, and four experimental results showed that the overall accuracy is between 82.47% and 91.8%. In addition, visual and consistency analyses were performed to demonstrate the proposed classification scheme's feasibility, transferability, and reliability, particularly for distinguishing elevated objects from OAI, which has a severe occlusion effect, image-scale variation, and ambiguous spectral characteristics.
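The rule-based hierarchical scheme described above can be caricatured as a small decision function over per-point features: separate elevated objects from ground first (using the height derived from the photogrammetric point clouds), then apply geometry and spectral rules within each branch. The thresholds and feature names below are invented for illustration and are not the paper's rules:

```python
def classify_point(height, gradient_deg, greenness):
    """Toy hierarchical rule-based classifier into the five urban
    classes named above. Inputs (all illustrative): height above
    ground in metres, local surface gradient in degrees, and a
    normalized greenness index in [0, 1]."""
    if height > 2.5:                  # elevated object branch
        if gradient_deg > 60.0:       # near-vertical surface
            return "facade"
        return "tree" if greenness > 0.3 else "roof"
    # ground-level branch
    return "grass" if greenness > 0.3 else "road"
```

The hierarchy matters because spectral information alone is ambiguous in oblique imagery: height and gradient disambiguate roof from road and facade from both, which plain per-pixel classification cannot do.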

ACS Style

Jiann-Yeou Rau; Jyun-Ping Jhan; Ya-Ching Hsu. Analysis of Oblique Aerial Images for Land Cover and Point Cloud Classification in an Urban Environment. IEEE Transactions on Geoscience and Remote Sensing 2014, 53, 1304-1319.

AMA Style

Jiann-Yeou Rau, Jyun-Ping Jhan, Ya-Ching Hsu. Analysis of Oblique Aerial Images for Land Cover and Point Cloud Classification in an Urban Environment. IEEE Transactions on Geoscience and Remote Sensing. 2014; 53 (3):1304-1319.

Chicago/Turabian Style

Jiann-Yeou Rau; Jyun-Ping Jhan; Ya-Ching Hsu. 2014. "Analysis of Oblique Aerial Images for Land Cover and Point Cloud Classification in an Urban Environment." IEEE Transactions on Geoscience and Remote Sensing 53, no. 3: 1304-1319.