6DOF camera relocalization is an important component of autonomous driving and navigation. Deep learning has recently emerged as a promising technique to tackle this problem. In this paper, we present a novel relative geometry-aware Siamese neural network that enhances deep learning-based methods by explicitly exploiting the relative geometry constraints between images. We perform multi-task learning and predict the absolute and relative poses simultaneously. We regularize the shared-weight twin networks in both the pose and feature domains to ensure that the estimated poses are both globally and locally correct. We employ metric learning and design a novel adaptive metric distance loss to learn features capable of distinguishing poses of visually similar images from different locations. We evaluate the proposed method on public indoor and outdoor benchmarks, and the experimental results demonstrate that our method can significantly improve localization performance. Furthermore, extensive ablation evaluations are conducted to demonstrate the effectiveness of the different terms of the loss function.
Qing Li; Jiasong Zhu; Rui Cao; Ke Sun; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. Relative geometry-aware siamese neural network for 6DOF camera relocalization. Neurocomputing 2020, 426, 134-146.
AMA Style: Qing Li, Jiasong Zhu, Rui Cao, Ke Sun, Jonathan M. Garibaldi, Qingquan Li, Bozhi Liu, Guoping Qiu. Relative geometry-aware siamese neural network for 6DOF camera relocalization. Neurocomputing. 2020; 426:134-146.
Chicago/Turabian Style: Qing Li; Jiasong Zhu; Rui Cao; Ke Sun; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. 2020. "Relative geometry-aware siamese neural network for 6DOF camera relocalization." Neurocomputing 426: 134-146.
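As a minimal sketch of the multi-task idea described above (hypothetical helper names; the paper's full loss also handles orientation and adaptive weighting, which are not shown), the translation part of a joint absolute-plus-relative pose objective could look like this:

```python
import numpy as np

def relative_pose_loss(pred_a, pred_b, gt_a, gt_b):
    """Penalize the gap between predicted and ground-truth relative
    translation of an image pair (translation component only)."""
    pred_rel = pred_b - pred_a          # predicted relative translation
    gt_rel = gt_b - gt_a                # ground-truth relative translation
    return float(np.linalg.norm(pred_rel - gt_rel))

def total_loss(pred_a, pred_b, gt_a, gt_b, beta=0.5):
    """Multi-task objective: absolute pose errors plus a weighted
    relative-geometry regularization term."""
    abs_err = np.linalg.norm(pred_a - gt_a) + np.linalg.norm(pred_b - gt_b)
    return float(abs_err + beta * relative_pose_loss(pred_a, pred_b, gt_a, gt_b))
```

Note that the relative term alone cannot fix a shared global offset of both predictions, which is why the absolute term is kept in the objective.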
Accurately locating the fovea is a prerequisite for developing computer-aided diagnosis (CAD) of retinal diseases. In colour fundus images of the retina, the fovea is a fuzzy region lacking prominent visual features, which makes it difficult to locate directly. While traditional methods rely on explicitly extracting image features from surrounding structures such as the optic disc and various vessels to infer the position of the fovea, deep learning-based regression techniques can implicitly model the relation between the fovea and other nearby anatomical structures to determine its location in an end-to-end fashion. Although promising, using deep learning for fovea localisation also has many unsolved challenges. In this paper, we present a new end-to-end fovea localisation method based on a hierarchical coarse-to-fine deep regression neural network. The innovative features of the new method include a multi-scale feature fusion technique and a self-attention technique to exploit location, semantic, and contextual information in an integrated framework, a multi-field-of-view (multi-FOV) feature fusion technique for context-aware feature learning, and a Gaussian-shift-cropping method for augmenting effective training data. We present extensive experimental results on two public databases and show that our new method achieves state-of-the-art performance. We also present a comprehensive ablation study and analysis to demonstrate the technical soundness and effectiveness of the overall framework and its various constituent components.
Ruitao Xie; Jingxin Liu; Rui Cao; Connor S. Qiu; Jiang Duan; Jon Garibaldi; Guoping Qiu. End-to-End Fovea Localisation in Colour Fundus Images With a Hierarchical Deep Regression Network. IEEE Transactions on Medical Imaging 2020, 40, 116-128.
AMA Style: Ruitao Xie, Jingxin Liu, Rui Cao, Connor S. Qiu, Jiang Duan, Jon Garibaldi, Guoping Qiu. End-to-End Fovea Localisation in Colour Fundus Images With a Hierarchical Deep Regression Network. IEEE Transactions on Medical Imaging. 2020; 40(1):116-128.
Chicago/Turabian Style: Ruitao Xie; Jingxin Liu; Rui Cao; Connor S. Qiu; Jiang Duan; Jon Garibaldi; Guoping Qiu. 2020. "End-to-End Fovea Localisation in Colour Fundus Images With a Hierarchical Deep Regression Network." IEEE Transactions on Medical Imaging 40, no. 1: 116-128.
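The Gaussian-shift-cropping augmentation mentioned in the abstract can be sketched roughly as follows (a hedged illustration with assumed function and parameter names, not the paper's implementation): crop a training window around the annotated fovea, jittered by a Gaussian offset so the target is not always at the crop centre.

```python
import numpy as np

def gaussian_shift_crop(image, fovea_xy, crop_size, sigma=10.0, rng=None):
    """Crop a square window around the fovea with a Gaussian-distributed
    centre shift; returns the crop and the fovea position inside it."""
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    dx, dy = rng.normal(0.0, sigma, size=2)     # random shift of crop centre
    half = crop_size // 2
    cx = int(np.clip(fovea_xy[0] + dx, half, w - half))
    cy = int(np.clip(fovea_xy[1] + dy, half, h - half))
    crop = image[cy - half: cy + half, cx - half: cx + half]
    new_xy = (fovea_xy[0] - (cx - half), fovea_xy[1] - (cy - half))
    return crop, new_xy
```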
Predicting depth from a single image is an attractive research topic since it provides one more dimension of information to enable machines to better perceive the world. Recently, deep learning has emerged as an effective approach to monocular depth estimation. As obtaining labeled data is costly, there is a recent trend to move from supervised learning to unsupervised learning to obtain monocular depth. However, most unsupervised learning methods capable of achieving high depth prediction accuracy require a deep network architecture that is too heavy and complex to run on embedded devices with limited storage and memory. To address this issue, we propose a new powerful network with a recurrent module to achieve the capability of a deep network while maintaining an extremely lightweight size for real-time, high-performance unsupervised monocular depth prediction from video sequences. In addition, a novel efficient upsample block is proposed to fuse the features from the associated encoder layer and recover the spatial size of the features with a small number of model parameters. We validate the effectiveness of our approach via extensive experiments on the KITTI dataset. Our new model can run at a speed of about 110 frames per second (fps) on a single GPU, 37 fps on a single CPU, and 2 fps on a Raspberry Pi 3. Moreover, it achieves higher depth accuracy with nearly 33 times fewer model parameters than state-of-the-art models. To the best of our knowledge, this work is the first extremely lightweight neural network trained on monocular video sequences for real-time unsupervised monocular depth estimation, which opens up the possibility of implementing deep learning-based real-time unsupervised monocular depth prediction on low-cost embedded devices.
Jun Liu; Qing Li; Rui Cao; Wenming Tang; Guoping Qiu. MiniNet: An extremely lightweight convolutional neural network for real-time unsupervised monocular depth estimation. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 166, 255-267.
AMA Style: Jun Liu, Qing Li, Rui Cao, Wenming Tang, Guoping Qiu. MiniNet: An extremely lightweight convolutional neural network for real-time unsupervised monocular depth estimation. ISPRS Journal of Photogrammetry and Remote Sensing. 2020; 166:255-267.
Chicago/Turabian Style: Jun Liu; Qing Li; Rui Cao; Wenming Tang; Guoping Qiu. 2020. "MiniNet: An extremely lightweight convolutional neural network for real-time unsupervised monocular depth estimation." ISPRS Journal of Photogrammetry and Remote Sensing 166: 255-267.
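The upsample-and-fuse pattern the abstract describes (recover spatial size, then fuse with the matching encoder features) can be sketched with a parameter-free nearest-neighbour upsample; this is an assumed simplification, not MiniNet's actual block:

```python
import numpy as np

def upsample_and_fuse(decoder_feat, encoder_feat):
    """Nearest-neighbour 2x upsample of a (C, H, W) decoder feature map,
    then channel-wise concatenation with the matching encoder features."""
    up = decoder_feat.repeat(2, axis=1).repeat(2, axis=2)  # (C, 2H, 2W)
    return np.concatenate([up, encoder_feat], axis=0)      # (C + C_enc, 2H, 2W)
```

Nearest-neighbour interpolation adds no learnable parameters, which matches the lightweight design goal; a real block would follow the concatenation with a small convolution.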
Monocular depth estimation plays a crucial role in understanding 3D scene geometry and is a challenging computer vision task. Recently, deep convolutional neural networks have been applied to solve this problem. However, existing methods either directly exploit RGB pixels, which can introduce considerable noise into the depth map, or utilize over-smoothed internal representation features, which can blur the depth map. In this paper, we propose a contextual CRF network (CCN) to tackle these issues. The new CCN adopts the popular encoder-decoder architecture with a new contextual CRF module (CCM) that is guided by the depth features and regularizes the information flow from each encoder layer to the corresponding layer in the decoder; it can thus reduce the mismatch between RGB pixels and depth map cues while retaining detailed features to output a fine-grained depth map. Moreover, we propose a depth-guided loss function that pays balanced attention to near and far pixels, addressing the long-tailed distribution of depth information. We have conducted extensive experiments on three public datasets for monocular depth estimation. Results demonstrate that our proposed CCN achieves superior visual quality and competitive quantitative results when compared with state-of-the-art methods.
Jun Liu; Qing Li; Rui Cao; Wenming Tang; Guoping Qiu. A contextual conditional random field network for monocular depth estimation. Image and Vision Computing 2020, 98, 103922.
AMA Style: Jun Liu, Qing Li, Rui Cao, Wenming Tang, Guoping Qiu. A contextual conditional random field network for monocular depth estimation. Image and Vision Computing. 2020; 98:103922.
Chicago/Turabian Style: Jun Liu; Qing Li; Rui Cao; Wenming Tang; Guoping Qiu. 2020. "A contextual conditional random field network for monocular depth estimation." Image and Vision Computing 98: 103922.
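One common way to balance near and far pixels against a long-tailed depth distribution is to compute the loss in log-depth space, so relative errors count equally regardless of depth magnitude. This is a generic sketch of that idea, not the paper's exact depth-guided loss:

```python
import numpy as np

def log_depth_l1(pred, gt, eps=1e-6):
    """L1 loss in log-depth space: a 10% error at 2 m contributes the
    same as a 10% error at 50 m, countering the long-tailed distribution."""
    return float(np.mean(np.abs(np.log(pred + eps) - np.log(gt + eps))))
```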
Urban region function recognition is key to rational urban planning and management. Due to the complex socioeconomic nature of functional land use, recognizing urban region function in high-density cities using remote sensing images alone is difficult. The inclusion of social sensing has the potential to improve the function classification performance. However, effectively integrating the multi-source and multi-modal remote and social sensing data remains technically challenging. In this paper, we have proposed a novel end-to-end deep learning-based remote and social sensing data fusion model to address this issue. Two neural network based methods, one based on a 1-dimensional convolutional neural network (CNN) and the other based on a long short-term memory (LSTM) network, have been developed to automatically extract discriminative time-dependent social sensing signature features, which are fused with remote sensing image features extracted via a residual neural network. One of the major difficulties in exploiting social and remote sensing data is that the two data sources are asynchronous. We have developed a deep learning-based strategy to address this missing modality problem by enforcing cross-modal feature consistency (CMFC) and cross-modal triplet (CMT) constraints. We train the model in an end-to-end manner by simultaneously optimizing three costs, including the classification cost, the CMFC cost and the CMT cost. Extensive experiments have been conducted on publicly available datasets to demonstrate the effectiveness of the proposed method in fusing remote and social sensing data for urban region function recognition. The results show that the seemingly unrelated physically sensed image data and social activities sensed signatures can indeed complement each other to help enhance the accuracy of urban region function recognition.
Rui Cao; Wei Tu; Cuixin Yang; Qing Li; Jun Liu; Jiasong Zhu; Qian Zhang; Qingquan Li; Guoping Qiu. Deep learning-based remote and social sensing data fusion for urban region function recognition. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 163, 82-97.
AMA Style: Rui Cao, Wei Tu, Cuixin Yang, Qing Li, Jun Liu, Jiasong Zhu, Qian Zhang, Qingquan Li, Guoping Qiu. Deep learning-based remote and social sensing data fusion for urban region function recognition. ISPRS Journal of Photogrammetry and Remote Sensing. 2020; 163:82-97.
Chicago/Turabian Style: Rui Cao; Wei Tu; Cuixin Yang; Qing Li; Jun Liu; Jiasong Zhu; Qian Zhang; Qingquan Li; Guoping Qiu. 2020. "Deep learning-based remote and social sensing data fusion for urban region function recognition." ISPRS Journal of Photogrammetry and Remote Sensing 163: 82-97.
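The cross-modal feature consistency (CMFC) cost named in the abstract pulls the two modality embeddings of the same region together. A minimal sketch (assumed squared-Euclidean form and function name; the paper's exact formulation may differ):

```python
import numpy as np

def cmfc_cost(remote_feat, social_feat):
    """Cross-modal feature consistency: mean squared Euclidean distance
    between paired remote-sensing and social-sensing feature vectors,
    so either modality can stand in when the other is missing."""
    return float(np.mean(np.sum((remote_feat - social_feat) ** 2, axis=1)))
```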
Image localization is an important supplement to GPS-based methods, especially in indoor scenes. Traditional methods depending on image retrieval or structure from motion (SfM) techniques either suffer from low accuracy or even fail to work due to texture-less or repetitive indoor surfaces. With the development of range sensors, 3D colourless maps are easily constructed in indoor scenes. How to utilize such a 3D colourless map to improve single image localization performance is a timely but unsolved research problem. In this paper, we present a new approach to addressing this problem by inferring the 3D geometry from a single image with an initial 6DOF pose estimated by a neural network-based method. In contrast to previous methods that rely on multiple overlapping images or videos to generate sparse point clouds, our new approach can produce a dense point cloud from only a single image. We achieve this by estimating the depth map of the input image and performing geometry matching in 3D space. We have developed a novel depth estimation method utilizing both the 3D map and RGB images, where we use the RGB image to estimate a dense depth map and use the 3D map to guide the depth estimation. We show that our new method significantly outperforms current RGB image-based depth estimation methods for both indoor and outdoor datasets. We also show that utilizing the depth map predicted by the new method for single indoor image localization can improve both position and orientation localization accuracy over state-of-the-art methods.
Qing Li; Jiasong Zhu; Jun Liu; Rui Cao; Hao Fu; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. 3D map-guided single indoor image localization refinement. ISPRS Journal of Photogrammetry and Remote Sensing 2020, 161, 13-26.
AMA Style: Qing Li, Jiasong Zhu, Jun Liu, Rui Cao, Hao Fu, Jonathan M. Garibaldi, Qingquan Li, Bozhi Liu, Guoping Qiu. 3D map-guided single indoor image localization refinement. ISPRS Journal of Photogrammetry and Remote Sensing. 2020; 161:13-26.
Chicago/Turabian Style: Qing Li; Jiasong Zhu; Jun Liu; Rui Cao; Hao Fu; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. 2020. "3D map-guided single indoor image localization refinement." ISPRS Journal of Photogrammetry and Remote Sensing 161: 13-26.
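Producing a dense point cloud from a single image's depth map, as described above, is standard pinhole-camera back-projection. A self-contained sketch (illustrative only; the paper's pipeline additionally matches the cloud against the 3D map):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a dense (H, W) depth map into a camera-frame point
    cloud using the pinhole model: X=(u-cx)Z/fx, Y=(v-cy)Z/fy, Z=depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```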
Image-based geolocalization is an important alternative to GPS-based localization in GPS-denied situations. Ground-to-aerial geolocalization is particularly promising but also difficult due to drastic viewpoint and appearance differences between ground and aerial images. In this paper, we propose a novel spatial-aware Siamese-like network to address the issue, exploiting a spatial transformer layer to effectively alleviate the large view variation and learn location-discriminative embeddings from the cross-view images. Furthermore, we propose to combine the triplet ranking loss with a simple and effective location identity loss to further enhance performance. We test our method on a publicly available dataset, and the results show that the proposed method outperforms the state of the art by a large margin.
Rui Cao; Jiasong Zhu; Qing Li; Qian Zhang; Qingquan Li; Bozhi Liu; Guoping Qiu. Learning Spatial-Aware Cross-View Embeddings for Ground-to-Aerial Geolocalization. Transactions on Petri Nets and Other Models of Concurrency XV 2019, 57-67.
AMA Style: Rui Cao, Jiasong Zhu, Qing Li, Qian Zhang, Qingquan Li, Bozhi Liu, Guoping Qiu. Learning Spatial-Aware Cross-View Embeddings for Ground-to-Aerial Geolocalization. Transactions on Petri Nets and Other Models of Concurrency XV. 2019; 57-67.
Chicago/Turabian Style: Rui Cao; Jiasong Zhu; Qing Li; Qian Zhang; Qingquan Li; Bozhi Liu; Guoping Qiu. 2019. "Learning Spatial-Aware Cross-View Embeddings for Ground-to-Aerial Geolocalization." Transactions on Petri Nets and Other Models of Concurrency XV: 57-67.
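The triplet ranking loss mentioned above has a standard hinge form: push the anchor-negative distance at least a margin beyond the anchor-positive distance. A minimal single-triplet sketch (the paper combines this with a location identity loss, not shown):

```python
import numpy as np

def triplet_ranking_loss(anchor, positive, negative, margin=0.3):
    """Hinge on the gap between anchor-positive and anchor-negative
    Euclidean distances; zero once the negative is `margin` further away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return float(max(0.0, d_pos - d_neg + margin))
```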
Rui Cao; Qian Zhang; Jiasong Zhu; Qing Li; Qingquan Li; Bozhi Liu; Guoping Qiu. Enhancing remote sensing image retrieval using a triplet deep metric learning network. International Journal of Remote Sensing 2019, 41, 740-751.
AMA Style: Rui Cao, Qian Zhang, Jiasong Zhu, Qing Li, Qingquan Li, Bozhi Liu, Guoping Qiu. Enhancing remote sensing image retrieval using a triplet deep metric learning network. International Journal of Remote Sensing. 2019; 41(2):740-751.
Chicago/Turabian Style: Rui Cao; Qian Zhang; Jiasong Zhu; Qing Li; Qingquan Li; Bozhi Liu; Guoping Qiu. 2019. "Enhancing remote sensing image retrieval using a triplet deep metric learning network." International Journal of Remote Sensing 41, no. 2: 740-751.
With the rapid growth of remotely sensed imagery data, there is a high demand for effective and efficient image retrieval tools to manage and exploit such data. In this letter, we present a novel content-based remote sensing image retrieval method based on a Triplet deep metric learning convolutional neural network (CNN). By constructing a Triplet network with a metric learning objective function, we extract representative features of the images in a semantic space in which images from the same class are close to each other while those from different classes are far apart. In such a semantic space, simple metric measures such as Euclidean distance can be used directly to compare the similarity of images and effectively retrieve images of the same class. We also investigate supervised and unsupervised learning methods for reducing the dimensionality of the learned semantic features. We present comprehensive experimental results on two publicly available remote sensing image retrieval datasets and show that our method significantly outperforms the state of the art.
Rui Cao; Qian Zhang; Jiasong Zhu; Qing Li; Qingquan Li; Bozhi Liu; Guoping Qiu. Enhancing Remote Sensing Image Retrieval with Triplet Deep Metric Learning Network. 2019, 1.
AMA Style: Rui Cao, Qian Zhang, Jiasong Zhu, Qing Li, Qingquan Li, Bozhi Liu, Guoping Qiu. Enhancing Remote Sensing Image Retrieval with Triplet Deep Metric Learning Network. 2019; 1.
Chicago/Turabian Style: Rui Cao; Qian Zhang; Jiasong Zhu; Qing Li; Qingquan Li; Bozhi Liu; Guoping Qiu. 2019. "Enhancing Remote Sensing Image Retrieval with Triplet Deep Metric Learning Network." 1.
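Once images live in the learned semantic space, retrieval reduces to ranking by Euclidean distance, as the abstract notes. A minimal sketch of that retrieval step (hypothetical function name; embeddings would come from the trained Triplet network):

```python
import numpy as np

def retrieve(query, gallery, k=5):
    """Rank gallery embeddings by Euclidean distance to the query
    embedding and return the indices of the k nearest images."""
    dists = np.linalg.norm(gallery - query, axis=1)
    return np.argsort(dists)[:k].tolist()
```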
This paper presents a novel indoor topological localization method based on mobile phone videos. Conventional methods suffer from dynamic indoor environmental changes and scene ambiguity. The proposed Visual Landmark Sequence-based Indoor Localization (VLSIL) method is capable of addressing these problems by taking steady indoor objects as landmarks. Unlike many feature- or appearance-matching-based localization methods, our method utilizes highly abstracted landmark semantic information to represent locations and is thus invariant to illumination changes, temporal variations, and occlusions. We match consistently detected landmarks against the topological map based on their occurrence order in the videos. The proposed approach contains two components: a convolutional neural network (CNN)-based landmark detector and a topological matching algorithm. The proposed detector is capable of reliably and accurately detecting landmarks. The matching algorithm is built on a second-order hidden Markov model and can successfully handle environmental ambiguity by fusing semantic and connectivity information of landmarks. To evaluate the method, we conduct extensive experiments on a real-world dataset collected in two indoor environments. The results show that our deep neural network-based indoor landmark detector accurately detects all landmarks, is expected to be usable in similar environments without retraining, and that VLSIL can effectively localize indoor landmarks.
Jiasong Zhu; Qing Li; Rui Cao; Ke Sun; Tao Liu; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. Indoor Topological Localization Using a Visual Landmark Sequence. Remote Sensing 2019, 11, 73.
AMA Style: Jiasong Zhu, Qing Li, Rui Cao, Ke Sun, Tao Liu, Jonathan M. Garibaldi, Qingquan Li, Bozhi Liu, Guoping Qiu. Indoor Topological Localization Using a Visual Landmark Sequence. Remote Sensing. 2019; 11(1):73.
Chicago/Turabian Style: Jiasong Zhu; Qing Li; Rui Cao; Ke Sun; Tao Liu; Jonathan M. Garibaldi; Qingquan Li; Bozhi Liu; Guoping Qiu. 2019. "Indoor Topological Localization Using a Visual Landmark Sequence." Remote Sensing 11, no. 1: 73.
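Matching a detected landmark sequence against a topological map is a hidden Markov decoding problem. The paper uses a second-order HMM; as a simplified, first-order illustration of the idea (assumed array shapes and names), Viterbi decoding over landmark classes could be sketched as:

```python
import numpy as np

def viterbi(obs_probs, trans, init):
    """First-order Viterbi decoding of a landmark sequence.
    obs_probs: (T, N) detector likelihood of each landmark per frame.
    trans:     (N, N) transition probabilities from map connectivity.
    init:      (N,)   prior over the starting landmark."""
    T, N = obs_probs.shape
    score = np.log(init + 1e-12) + np.log(obs_probs[0] + 1e-12)
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + np.log(trans + 1e-12)   # [prev, cur]
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(N)] + np.log(obs_probs[t] + 1e-12)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Connectivity enters through `trans`: transitions between landmarks not adjacent in the topological map get (near-)zero probability, which is how map structure disambiguates visually similar landmarks.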
Urban land use is key to rational urban planning and management. Traditional land use classification methods rely heavily on domain experts, which is both expensive and inefficient. In this paper, deep neural network-based approaches are presented to label urban land use at the pixel level using high-resolution aerial images and ground-level street view images. We use a deep neural network to extract semantic features from sparsely distributed street view images and interpolate them in the spatial domain to match the spatial resolution of the aerial images; the two are then fused through a deep neural network for classifying land use categories. Our methods are tested on a large publicly available dataset of aerial and street view images of New York City, and the results show that using aerial images alone can achieve relatively high classification accuracy, that ground-level street view images contain useful information for urban land use classification, and that fusing street image features with aerial images can improve classification accuracy. Moreover, we present experimental studies showing that street view images add more value when the resolution of the aerial images is lower, and case studies illustrating how street view images provide useful auxiliary information to aerial images to boost performance.
Rui Cao; Jiasong Zhu; Wei Tu; Qingquan Li; Jinzhou Cao; Bozhi Liu; Qian Zhang; Guoping Qiu. Integrating Aerial and Street View Images for Urban Land Use Classification. Remote Sensing 2018, 10, 1553.
AMA Style: Rui Cao, Jiasong Zhu, Wei Tu, Qingquan Li, Jinzhou Cao, Bozhi Liu, Qian Zhang, Guoping Qiu. Integrating Aerial and Street View Images for Urban Land Use Classification. Remote Sensing. 2018; 10(10):1553.
Chicago/Turabian Style: Rui Cao; Jiasong Zhu; Wei Tu; Qingquan Li; Jinzhou Cao; Bozhi Liu; Qian Zhang; Guoping Qiu. 2018. "Integrating Aerial and Street View Images for Urban Land Use Classification." Remote Sensing 10, no. 10: 1553.
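Spatially interpolating sparse street-view features to match the aerial grid, as described above, can be done with inverse-distance weighting. A hedged sketch (the paper's interpolation scheme may differ; names are illustrative):

```python
import numpy as np

def idw_interpolate(points, feats, query, power=2, eps=1e-9):
    """Inverse-distance-weighted interpolation: estimate a feature vector
    at `query` from feature vectors located at sparse `points`."""
    d = np.linalg.norm(points - query, axis=1)
    w = 1.0 / (d ** power + eps)     # closer points dominate
    w /= w.sum()
    return w @ feats
```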
Understanding urban public ridership is essential for promoting public transportation. However, limited efforts have been made to reveal the spatial variations of multi-modal public ridership (such as buses, metro systems, and taxis) and the underlying controlling factors. This study explores multi-modal public ridership and compares the similarities and differences of the associated factors. Daily bus, metro, and taxi ridership patterns are first extracted from multiple sources of big transportation data, including vehicle (bus and taxi) GPS trajectories and smart card data. Multivariate regression analysis and geographically weighted regression analysis are used to reveal the associations between these data and demographic, land use, and transportation factors. An empirical study in Shenzhen, China, suggests that employment, mixed land use, and road density have significant effects on the ridership of each mode; however, some effects vary from negative to positive across the city. The results also indicate that road density, income, and metro accessibility do not have significant effects on metro, transit or bus ridership. These findings suggest that the effects of the associated factors vary depending on the mode of travel being considered and that the city should carefully consider which factors to emphasize in formulating future transport policy.
Wei Tu; Rui Cao; Yang Yue; Baoding Zhou; Qiuping Li; Qingquan Li. Spatial variations in urban public ridership derived from GPS trajectories and smart card data. Journal of Transport Geography 2018, 69, 45-57.
AMA Style: Wei Tu, Rui Cao, Yang Yue, Baoding Zhou, Qiuping Li, Qingquan Li. Spatial variations in urban public ridership derived from GPS trajectories and smart card data. Journal of Transport Geography. 2018; 69:45-57.
Chicago/Turabian Style: Wei Tu; Rui Cao; Yang Yue; Baoding Zhou; Qiuping Li; Qingquan Li. 2018. "Spatial variations in urban public ridership derived from GPS trajectories and smart card data." Journal of Transport Geography 69: 45-57.
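The geographically weighted regression mentioned in the ridership study fits a separate weighted least-squares model at each location, with weights decaying with distance. A minimal single-location sketch using a Gaussian kernel (assumed names and kernel choice; the study's exact specification is not reproduced here):

```python
import numpy as np

def gwr_coefficients(X, y, coords, target, bandwidth):
    """Local coefficients at `target`: weighted least squares where
    observations nearer to the target get larger Gaussian weights."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-(d ** 2) / (2 * bandwidth ** 2))
    A = X.T @ (w[:, None] * X)       # X^T W X
    b = X.T @ (w * y)                # X^T W y
    return np.linalg.solve(A, b)
```

Fitting this at every zone yields coefficient surfaces, which is how effects can "vary from negative to positive across the city."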
Meng Zhou; Donggen Wang; Qingquan Li; Yang Yue; Wei Tu; Rui Cao. Impacts of weather on public transport ridership: Results from mining data from different sources. Transportation Research Part C: Emerging Technologies 2017, 75, 17-29.
AMA Style: Meng Zhou, Donggen Wang, Qingquan Li, Yang Yue, Wei Tu, Rui Cao. Impacts of weather on public transport ridership: Results from mining data from different sources. Transportation Research Part C: Emerging Technologies. 2017; 75:17-29.
Chicago/Turabian Style: Meng Zhou; Donggen Wang; Qingquan Li; Yang Yue; Wei Tu; Rui Cao. 2017. "Impacts of weather on public transport ridership: Results from mining data from different sources." Transportation Research Part C: Emerging Technologies 75: 17-29.
Quantifying human movements is difficult because traditional data are sparse and the data collection process is labour-intensive. Recently, abundant spatial-temporal data have given us an opportunity to observe human movement. This research investigates the relationship between city-wide human movements inferred from two types of spatial-temporal data at the traffic analysis zone (TAZ) level. The first type of human movement is inferred from long-term smart card transaction data recording boarding actions. The second type is extracted from city-wide, time-sequenced mobile phone data at 30-minute intervals. Travel volume, travel distance, and travel time are used to measure aggregated human movements in the city. To further examine the relationship between the two types of inferred movements, linear correlation analysis is conducted on the hourly travel volume. The results show that human movements inferred from smart card data and mobile phone data have a correlation of 0.635. However, there are still non-negligible differences in some areas. This research not only reveals city-wide spatial-temporal human dynamics but also aids understanding of the reliability of inferring human movements from big spatial-temporal data.
Rui Cao; Wei Tu; Jinzhou Cao; Qingquan Li. COMPARISON OF URBAN HUMAN MOVEMENTS INFERRING FROM MULTI-SOURCE SPATIAL-TEMPORAL DATA. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2016, XLI-B2, 471-476.
AMA Style: Rui Cao, Wei Tu, Jinzhou Cao, Qingquan Li. COMPARISON OF URBAN HUMAN MOVEMENTS INFERRING FROM MULTI-SOURCE SPATIAL-TEMPORAL DATA. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2016; XLI-B2:471-476.
Chicago/Turabian Style: Rui Cao; Wei Tu; Jinzhou Cao; Qingquan Li. 2016. "COMPARISON OF URBAN HUMAN MOVEMENTS INFERRING FROM MULTI-SOURCE SPATIAL-TEMPORAL DATA." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2: 471-476.
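The hourly-volume comparison reported above (correlation of 0.635 between the two data sources) boils down to a Pearson correlation between two time series. A minimal sketch (illustrative function name; the study's series are aggregated per TAZ and hour):

```python
import numpy as np

def hourly_correlation(volumes_a, volumes_b):
    """Pearson correlation between two hourly travel-volume series,
    e.g. smart-card-inferred vs mobile-phone-inferred movements."""
    a = np.asarray(volumes_a, dtype=float)
    b = np.asarray(volumes_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])
```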
The purpose of this report was to demonstrate the effect of amphiphilic polysaccharide-based self-assembling micelles on enhancing the oral absorption of low molecular weight chondroitin sulfate (LMCS) in vitro and in vivo, and to identify the transepithelial transport mechanism of LMCS micelles across the intestinal barrier. α-Linolenic acid-low molecular weight chondroitin sulfate polymers (α-LNA-LMCS) were successfully synthesized and characterized by FTIR, (1)HNMR, TGA/DSC, TEM, laser light scattering, and zeta potential. The significant oral absorption enhancement and elimination half-life (t₁/₂) extension of LNA-LMCS2 in rats were evidenced by intragastric administration in comparison with CS and LMCS. Caco-2 transport studies demonstrated that the apparent permeability coefficient (Papp) of LNA-LMCS2 was significantly higher than that of CS and LMCS (p<0.001), and no significant effects on the overall integrity of the monolayer were observed during the transport process. In addition, α-LNA-LMCS micelles accumulated around the cell membrane and intercellular space, as observed by confocal laser scanning microscopy (CLSM). Furthermore, evident alterations in the F-actin cytoskeleton were detected by CLSM following treatment of the cell monolayers with α-LNA-LMCS micelles, which further confirmed the capacity of α-LNA-LMCS micelles to open the intercellular tight junctions rather than disrupt the overall integrity of the monolayer. Therefore, LNA-LMCS2, with low cytotoxicity and high bioavailability, might be a promising substitute for CS in clinical use, such as in treating osteoarthritis, atherosclerosis, etc.
Yuliang Xiao; Pingli Li; Yanna Cheng; Xinke Zhang; Juzheng Sheng; Decai Wang; Juan Li; Qian Zhang; Chuanqing Zhong; Rui Cao; Fengshan Wang. Enhancing the intestinal absorption of low molecular weight chondroitin sulfate by conjugation with α-linolenic acid and the transport mechanism of the conjugates. International Journal of Pharmaceutics 2014, 465, 143-158.
AMA Style: Yuliang Xiao, Pingli Li, Yanna Cheng, Xinke Zhang, Juzheng Sheng, Decai Wang, Juan Li, Qian Zhang, Chuanqing Zhong, Rui Cao, Fengshan Wang. Enhancing the intestinal absorption of low molecular weight chondroitin sulfate by conjugation with α-linolenic acid and the transport mechanism of the conjugates. International Journal of Pharmaceutics. 2014; 465(1-2):143-158.
Chicago/Turabian Style: Yuliang Xiao; Pingli Li; Yanna Cheng; Xinke Zhang; Juzheng Sheng; Decai Wang; Juan Li; Qian Zhang; Chuanqing Zhong; Rui Cao; Fengshan Wang. 2014. "Enhancing the intestinal absorption of low molecular weight chondroitin sulfate by conjugation with α-linolenic acid and the transport mechanism of the conjugates." International Journal of Pharmaceutics 465, no. 1-2: 143-158.
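The apparent permeability coefficient (Papp) reported in the Caco-2 studies above is conventionally computed as Papp = (dQ/dt) / (A · C0), where dQ/dt is the steady-state transport rate, A the monolayer area, and C0 the initial donor concentration. A one-line sketch of that standard formula (illustrative units and names; not the paper's analysis code):

```python
def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp = (dQ/dt) / (A * C0); with dQ/dt in amount/s, A in cm^2 and
    C0 in amount/cm^3, the result is in cm/s."""
    return dq_dt / (area_cm2 * c0)
```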