Ruiying Shen
Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China


Feed

Regular contributed paper
Published: 11 January 2021 in Journal of the Society for Information Display

During an immersive virtual reality experience, users' visual senses are completely enclosed in the virtual environment, so they cannot perceive changes in the real world; this produces a feeling of insecurity that degrades the experience. To improve users' sense of security during immersive virtual reality experiences, this paper designs four ways to interact with the real world from within a virtual environment, so that users can obtain real-world information without leaving it. The paper measures how users' sense of security changes under each interaction method and conducts a psychological analysis. Twenty-one volunteers were recruited for the experiment, and their sense of personal safety and psychological responses were tested. The experimental results show that segmenting the captured camera images by category with the YOLACT neural network and then fusing the segmented images into the virtual environment can improve the user's sense of security without destroying immersion.
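The segmentation-and-fusion step the abstract describes can be sketched as a simple mask-based composite. This is a minimal illustration, not the authors' implementation; `class_masks` stands in for the per-class boolean masks an instance-segmentation network such as YOLACT would produce.

```python
import numpy as np

def fuse_real_into_virtual(virtual_frame, camera_frame, class_masks, keep_classes):
    """Composite real-world pixels of selected classes into the virtual frame.

    virtual_frame, camera_frame: (H, W, 3) uint8 arrays of the same size.
    class_masks: dict mapping class name -> (H, W) boolean mask, as produced
                 by a segmentation network (assumed available).
    keep_classes: iterable of class names to pass through into VR.
    """
    out = virtual_frame.copy()
    combined = np.zeros(virtual_frame.shape[:2], dtype=bool)
    for cls in keep_classes:
        if cls in class_masks:
            combined |= class_masks[cls]
    # Show real-world pixels wherever a kept class was detected.
    out[combined] = camera_frame[combined]
    return out
```

In a real pipeline the fused frame would be rendered into the headset each frame, so only the selected real-world objects (e.g., approaching people) break into the virtual scene.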

ACS Style

Mingwei Hu; Dongdong Weng; Jie Guo; Yongtian Wang. The influence of fusion display mode on the user's sense of personal security in virtual immersion system. Journal of the Society for Information Display 2021, 29, 254-263.

AMA Style

Mingwei Hu, Dongdong Weng, Jie Guo, Yongtian Wang. The influence of fusion display mode on the user's sense of personal security in virtual immersion system. Journal of the Society for Information Display. 2021; 29 (4):254-263.

Chicago/Turabian Style

Mingwei Hu; Dongdong Weng; Jie Guo; Yongtian Wang. 2021. "The influence of fusion display mode on the user's sense of personal security in virtual immersion system." Journal of the Society for Information Display 29, no. 4: 254-263.

Journal article
Published: 02 November 2020 in Sustainability

This study demonstrates how playing a well-designed multitasking motion video game in a virtual reality (VR) environment can positively impact the cognitive and physical health of older players. We developed a video game that combines cognitive and physical training in a VR environment. The impact of playing the game was measured through a four-week longitudinal experiment. Twenty healthy older adults were randomly assigned to either an intervention group (i.e., game training) or a control group (i.e., no contact). Participants played three 45-minute sessions per week and completed cognitive tests for attention, working memory, and reasoning, as well as a physical balance test, before and after the intervention. Results showed that, compared to the control group, the game group showed significant improvements in working memory and a potential for enhanced reasoning and balance ability. Furthermore, while the older adults enjoyed playing the video game, ability enhancements were associated with their intrinsic motivation to play. Overall, cognitive training with multitasking VR motion video games has positive impacts on the cognitive and physical health of older adults.

ACS Style

Xiaoxuan Li; Kavous Niksirat; Shanshan Chen; Dongdong Weng; Sayan Sarcar; Xiangshi Ren. The Impact of a Multitasking-Based Virtual Reality Motion Video Game on the Cognitive and Physical Abilities of Older Adults. Sustainability 2020, 12, 9106.

AMA Style

Xiaoxuan Li, Kavous Niksirat, Shanshan Chen, Dongdong Weng, Sayan Sarcar, Xiangshi Ren. The Impact of a Multitasking-Based Virtual Reality Motion Video Game on the Cognitive and Physical Abilities of Older Adults. Sustainability. 2020; 12 (21):9106.

Chicago/Turabian Style

Xiaoxuan Li; Kavous Niksirat; Shanshan Chen; Dongdong Weng; Sayan Sarcar; Xiangshi Ren. 2020. "The Impact of a Multitasking-Based Virtual Reality Motion Video Game on the Cognitive and Physical Abilities of Older Adults." Sustainability 12, no. 21: 9106.

Conference paper
Published: 28 November 2019 in Transactions on Petri Nets and Other Models of Concurrency XV

To give a virtual human rich and realistic facial expressions during film production, a good blendshape model is needed. However, selecting and capturing base expressions for a blendshape model requires substantial manual work, time, and effort, and the resulting model can still lack expressiveness. This paper proposes a method for automatically selecting a set of base expressions from a sequence of facial motions. In this method, Procrustes analysis is used to estimate the difference between face meshes and to determine the composition of the base expressions, which are then used to build a local blendshape model with enhanced expressiveness. The paper shows results of reconstructing facial expressions with the local blendshape model. With this method, base expressions can be selected automatically from the expression sequence, reducing manual operation.
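The core measurement the abstract names, a Procrustes disparity between face meshes, is directly available in SciPy. The greedy farthest-point selection wrapped around it below is an assumption used for illustration, not necessarily the paper's exact selection rule.

```python
import numpy as np
from scipy.spatial import procrustes

def select_base_expressions(meshes, k):
    """Greedily pick k base expressions from a sequence of face meshes.

    meshes: list of (V, 3) vertex arrays sharing the same topology.
    Procrustes disparity (shape difference after removing translation,
    scale, and rotation) measures how different two expressions are.
    """
    def disparity(a, b):
        _, _, d = procrustes(a, b)
        return d

    selected = [0]  # assume frame 0 is the neutral expression
    while len(selected) < k:
        # Pick the mesh whose nearest selected base is farthest away.
        best_i, best_d = None, -1.0
        for i in range(len(meshes)):
            if i in selected:
                continue
            d = min(disparity(meshes[i], meshes[j]) for j in selected)
            if d > best_d:
                best_i, best_d = i, d
        selected.append(best_i)
    return selected
```

Duplicated or near-duplicate frames have near-zero disparity to an already-selected base, so they are never chosen, which is the manual-work reduction the method targets.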

ACS Style

Ziqi Tu; Dongdong Weng; Dewen Cheng; Yihua Bao; Bin Liang; Le Luo. An Automatic Base Expression Selection Algorithm Based on Local Blendshape Model. Transactions on Petri Nets and Other Models of Concurrency XV 2019, 220-231.

AMA Style

Ziqi Tu, Dongdong Weng, Dewen Cheng, Yihua Bao, Bin Liang, Le Luo. An Automatic Base Expression Selection Algorithm Based on Local Blendshape Model. Transactions on Petri Nets and Other Models of Concurrency XV. 2019:220-231.

Chicago/Turabian Style

Ziqi Tu; Dongdong Weng; Dewen Cheng; Yihua Bao; Bin Liang; Le Luo. 2019. "An Automatic Base Expression Selection Algorithm Based on Local Blendshape Model." Transactions on Petri Nets and Other Models of Concurrency XV: 220-231.

Journal article
Published: 26 September 2019 in Remote Sensing

Establishing the spatial relationship between 2D images captured by real cameras and 3D models of the environment (2D and 3D space) is one way to achieve virtual–real registration for Augmented Reality (AR) in outdoor environments. In this paper, we propose to match the 2D images captured by real cameras against images rendered from the 3D image-based point cloud, thereby indirectly establishing the spatial relationship between 2D and 3D space. We call these two kinds of images cross-domain images, because their imaging mechanisms and nature are quite different. Unlike real camera images, however, the images rendered from the 3D image-based point cloud are inevitably contaminated by image distortion, blurred resolution, and obstructions, which makes matching them with handcrafted descriptors or existing feature-learning neural networks very challenging. Thus, we first propose a novel end-to-end network, AE-GAN-Net, consisting of two AutoEncoders (AEs) with Generative Adversarial Network (GAN) embedding, to learn invariant feature descriptors for cross-domain image matching. Second, a domain-consistent loss function, which balances image content and consistency of feature descriptors for cross-domain image pairs, is introduced to optimize AE-GAN-Net. AE-GAN-Net effectively captures domain-specific information, which is embedded into the learned feature descriptors, making them robust against image distortion and variations in viewpoint, spatial resolution, rotation, and scaling. Experimental results show that AE-GAN-Net achieves state-of-the-art performance for image patch retrieval on a cross-domain image patch dataset built from real camera images and images rendered from the 3D image-based point cloud.
Finally, by evaluating virtual–real registration for AR on a campus by using the cross-domain image matching results, we demonstrate the feasibility of applying the proposed virtual–real registration to AR in outdoor environments.
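The idea of a domain-consistent loss, balancing image content against descriptor consistency across domains, can be illustrated with a toy two-term objective. The exact formulation in AE-GAN-Net is not reproduced here; the weights `alpha` and `beta` and the mean-squared-error terms are assumptions for illustration only.

```python
import numpy as np

def domain_consistent_loss(real_img, real_recon, render_img, render_recon,
                           desc_real, desc_render, alpha=1.0, beta=1.0):
    """Toy loss balancing image content and cross-domain descriptor consistency.

    Content term: each autoencoder should reconstruct images from its own
    domain. Consistency term: a matching real/rendered patch pair should
    yield similar feature descriptors despite the domain gap.
    """
    content = np.mean((real_img - real_recon) ** 2) + \
              np.mean((render_img - render_recon) ** 2)
    consistency = np.mean((desc_real - desc_render) ** 2)
    return alpha * content + beta * consistency
```

Balancing the two terms is what keeps the descriptors both informative (they still encode image content) and invariant (matching pairs land close together despite distortion and resolution differences).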

ACS Style

Weiquan Liu; Cheng Wang; Xuesheng Bian; Shuting Chen; Wei Li; Xiuhong Lin; Yongchuan Li; Dongdong Weng; Shang-Hong Lai; Jonathan Li. AE-GAN-Net: Learning Invariant Feature Descriptor to Match Ground Camera Images and a Large-Scale 3D Image-Based Point Cloud for Outdoor Augmented Reality. Remote Sensing 2019, 11, 2243.

AMA Style

Weiquan Liu, Cheng Wang, Xuesheng Bian, Shuting Chen, Wei Li, Xiuhong Lin, Yongchuan Li, Dongdong Weng, Shang-Hong Lai, Jonathan Li. AE-GAN-Net: Learning Invariant Feature Descriptor to Match Ground Camera Images and a Large-Scale 3D Image-Based Point Cloud for Outdoor Augmented Reality. Remote Sensing. 2019; 11 (19):2243.

Chicago/Turabian Style

Weiquan Liu; Cheng Wang; Xuesheng Bian; Shuting Chen; Wei Li; Xiuhong Lin; Yongchuan Li; Dongdong Weng; Shang-Hong Lai; Jonathan Li. 2019. "AE-GAN-Net: Learning Invariant Feature Descriptor to Match Ground Camera Images and a Large-Scale 3D Image-Based Point Cloud for Outdoor Augmented Reality." Remote Sensing 11, no. 19: 2243.

Conference paper
Published: 20 July 2019 in Communications in Computer and Information Science

We present MMRPet, a modular mixed reality pet system based on passive props. In addition to superimposing virtual pets onto pet entities, which combines the physical interaction afforded by pet entities with the personalized appearance and rich expressive capabilities of virtual pets, the key idea behind MMRPet is the modular design of the pet entities. The user can reconfigure a limited set of modules to construct pet entities of various forms and structures. These modular pet entities provide flexible haptic feedback and allow the system to render virtual pets with personalized form and structure. By integrating tracking information from the user's head and hands, as well as from each module of the pet entities, MMRPet can infer rich interaction intents and support rich human-pet interactions when the user touches, moves, rotates, or gazes at each module. We explore the design space for the construction of modular pet entities and the design space of the human-pet interaction enabled by MMRPet. Furthermore, a series of prototypes demonstrates the advantages of using modular entities in a mixed reality pet system.

ACS Style

YaQiong Xue; Dongdong Weng; Haiyan Jiang; Qing Gao. MMRPet: Modular Mixed Reality Pet System Based on Passive Props. Communications in Computer and Information Science 2019, 645-658.

AMA Style

YaQiong Xue, Dongdong Weng, Haiyan Jiang, Qing Gao. MMRPet: Modular Mixed Reality Pet System Based on Passive Props. Communications in Computer and Information Science. 2019:645-658.

Chicago/Turabian Style

YaQiong Xue; Dongdong Weng; Haiyan Jiang; Qing Gao. 2019. "MMRPet: Modular Mixed Reality Pet System Based on Passive Props." Communications in Computer and Information Science: 645-658.

Conference paper
Published: 20 July 2019 in Communications in Computer and Information Science

As working at a video display terminal (VDT) for a long time can induce visual fatigue, this paper proposes a method, based on accommodative training, that applies dynamic disparity to video watching in head-mounted displays (HMDs). An experiment was designed to evaluate whether the method can alleviate visual fatigue. Subjective and objective measures were combined under different disparity conditions: the objective measure was the subjects' blink frequency, recorded with an eye tracker, and the subjective measure was a questionnaire. The conclusion is that dynamic disparity produced by moving the left- and right-eye images in the HMD cannot effectively alleviate visual fatigue. The change in visual fatigue over time was also analyzed from the change in the subjects' average blink-frequency ratio during the experiment.
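The stimulus the abstract describes, moving the left- and right-eye copies of a 2D video to change disparity, can be modeled as opposite horizontal shifts of one frame. This is an assumed minimal model, not the study's implementation; `np.roll` wraps pixels at the border, which a real system would crop or pad instead.

```python
import numpy as np

def stereo_pair_with_disparity(frame, disparity_px):
    """Produce left/right eye images from one 2D frame by horizontal shifts.

    Shifting the two eyes' images in opposite directions changes the screen
    disparity of the content, and hence its perceived depth in the HMD.
    """
    half = disparity_px // 2
    left = np.roll(frame, half, axis=1)    # shift left-eye image right
    right = np.roll(frame, -half, axis=1)  # shift right-eye image left
    return left, right
```

Varying `disparity_px` over time produces the "dynamic disparity" condition tested against the static baseline.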

ACS Style

Ruiying Shen; Dongdong Weng; Jie Guo; Hui Fang; Haiyan Jiang. Effects of Dynamic Disparity on Visual Fatigue Caused by Watching 2D Videos in HMDs. Communications in Computer and Information Science 2019, 310-321.

AMA Style

Ruiying Shen, Dongdong Weng, Jie Guo, Hui Fang, Haiyan Jiang. Effects of Dynamic Disparity on Visual Fatigue Caused by Watching 2D Videos in HMDs. Communications in Computer and Information Science. 2019:310-321.

Chicago/Turabian Style

Ruiying Shen; Dongdong Weng; Jie Guo; Hui Fang; Haiyan Jiang. 2019. "Effects of Dynamic Disparity on Visual Fatigue Caused by Watching 2D Videos in HMDs." Communications in Computer and Information Science: 310-321.

Journal article
Published: 11 July 2019 in Sensors

We present HiFinger, an eyes-free, one-handed wearable text entry technique for immersive virtual environments based on thumb-to-finger touch. The technique enables users to input text quickly, accurately, and comfortably using the sense of touch and a two-step input mode, making it especially suitable for mobile scenarios in which users need to move (e.g., walk) in virtual environments. Input signals are triggered by moving the thumb onto ultra-thin pressure sensors placed on the other fingers. After measuring the comfortable touch range between the thumb and the other fingers, six placement modes for text entry were designed and tested, yielding an optimal placement mode that uses six pressure sensors for text entry and two for control functions. A three-day study evaluated the proposed technique; experimental results show that novices achieved an average text entry rate of 9.82 words per minute (WPM) in head-mounted-display virtual environments after a training period of 25 minutes.
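A two-step input mode over six sensors can address 36 codes, enough for a full alphabet plus punctuation. The decoder below is a hypothetical sketch in the spirit of HiFinger; the specific sensor-to-letter layout in `GROUPS` is an assumption, not the paper's design.

```python
# First touch selects a group of characters; second touch selects one
# character within that group. Layout is illustrative only.
GROUPS = ["abcde", "fghij", "klmno", "pqrst", "uvwxy", "z.,!?"]

def decode(first_sensor, second_sensor):
    """Map a pair of sensor indices (each 0-5) to a character, or None."""
    group = GROUPS[first_sensor]
    return group[second_sensor] if second_sensor < len(group) else None

def decode_sequence(presses):
    """Decode a flat list of sensor presses, consumed two at a time."""
    chars = []
    for i in range(0, len(presses) - 1, 2):
        c = decode(presses[i], presses[i + 1])
        if c is not None:
            chars.append(c)
    return "".join(chars)
```

Because each step is a discrete thumb-to-finger touch felt through the skin, the scheme needs no visual keyboard, which is what makes it eyes-free.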

ACS Style

Haiyan Jiang; Dongdong Weng; Zhenliang Zhang; Feng Chen. HiFinger: One-Handed Text Entry Technique for Virtual Environments Based on Touches between Fingers. Sensors 2019, 19, 3063.

AMA Style

Haiyan Jiang, Dongdong Weng, Zhenliang Zhang, Feng Chen. HiFinger: One-Handed Text Entry Technique for Virtual Environments Based on Touches between Fingers. Sensors. 2019; 19 (14):3063.

Chicago/Turabian Style

Haiyan Jiang; Dongdong Weng; Zhenliang Zhang; Feng Chen. 2019. "HiFinger: One-Handed Text Entry Technique for Virtual Environments Based on Touches between Fingers." Sensors 19, no. 14: 3063.

Regular contributed paper
Published: 26 December 2018 in Journal of the Society for Information Display

During continuous use of displays, a short rest can relax users' eyes and relieve visual fatigue. As one of the most important virtual reality devices, head-mounted displays (HMDs) can create an immersive 3D virtual world. When users take a short rest while using an HMD, they experience a transition from the virtual world to the real world. To investigate how this change affects users' eye condition, we designed a 2 × 2 experiment exploring the effects of short rests during continuous HMD use and compared the results with those for 2D displays. The Visual Fatigue Scale, critical flicker frequency, visual acuity, pupillary diameter, and accommodation response of 80 participants were measured to assess performance. The experimental results indicated that a short rest during continuous use of 2D displays significantly reduced users' visual fatigue. For HMDs, however, a short rest during continuous use induced more severe subjective visual discomfort but reduced objective visual fatigue.

ACS Style

Jie Guo; Dongdong Weng; Zhenliang Zhang; Yue Liu; Henry B.‐L. Duh; Yongtian Wang. Subjective and objective evaluation of visual fatigue caused by continuous and discontinuous use of HMDs. Journal of the Society for Information Display 2018, 27, 108-119.

AMA Style

Jie Guo, Dongdong Weng, Zhenliang Zhang, Yue Liu, Henry B.‐L. Duh, Yongtian Wang. Subjective and objective evaluation of visual fatigue caused by continuous and discontinuous use of HMDs. Journal of the Society for Information Display. 2018; 27 (2):108-119.

Chicago/Turabian Style

Jie Guo; Dongdong Weng; Zhenliang Zhang; Yue Liu; Henry B.‐L. Duh; Yongtian Wang. 2018. "Subjective and objective evaluation of visual fatigue caused by continuous and discontinuous use of HMDs." Journal of the Society for Information Display 27, no. 2: 108-119.

Regular contributed paper
Published: 19 November 2018 in Journal of the Society for Information Display

Building a human-centered editable world can be fully realized in a virtual environment. Both mixed reality (MR) and virtual reality (VR) are feasible ways to support such editing. Based on the current development of MR and VR, we present a vision-tangible interactive display method and its implementation in both MR and VR; we address the two together because the proposed method applies to them similarly. The resulting editable mixed- and virtual-reality system is useful as a platform for further studies. In this paper, we construct a VR environment based on the Oculus Rift and an MR system based on a binocular optical see-through head-mounted display. In the MR system, used for manipulating a Rubik's cube, and in the VR system, used for deforming virtual objects, the proposed vision-tangible interactive display method provides users with a more immersive environment. Experimental results indicate that the method improves the user experience and is a promising way to make virtual environments better.

ACS Style

Zhenliang Zhang; Yue Li; Jie Guo; Dongdong Weng; Yongtian Wang. Vision‐tangible interactive display method for mixed and virtual reality: Toward the human‐centered editable reality. Journal of the Society for Information Display 2018, 27, 72-84.

AMA Style

Zhenliang Zhang, Yue Li, Jie Guo, Dongdong Weng, Yongtian Wang. Vision‐tangible interactive display method for mixed and virtual reality: Toward the human‐centered editable reality. Journal of the Society for Information Display. 2018; 27 (2):72-84.

Chicago/Turabian Style

Zhenliang Zhang; Yue Li; Jie Guo; Dongdong Weng; Yongtian Wang. 2018. "Vision‐tangible interactive display method for mixed and virtual reality: Toward the human‐centered editable reality." Journal of the Society for Information Display 27, no. 2: 72-84.

Regular contributed paper
Published: 12 August 2018 in Journal of the Society for Information Display

Calibration accuracy is one of the most important factors affecting the user experience in mixed reality applications. For a typical mixed reality system built on an optical see-through head-mounted display, a key problem is how to guarantee the accuracy of hand–eye coordination by reducing the instability of the eye and the head-mounted display over long-term use. In this paper, we propose a real-time latent active correction algorithm to decrease hand–eye calibration errors accumulated over time. Experimental results show that the proposed algorithm guarantees an effective calibration result and improves the user experience. Based on the proposed system, experiments with virtual buttons are also designed, and the interactive performance for different scales of virtual buttons is presented. Finally, a direct physics-inspired input method is constructed, which performs similarly to a gesture-based input method but has a lower learning cost due to its naturalness.

ACS Style

Zhenliang Zhang; Yue Liu; Jie Guo; Dongdong Weng; Yongtian Wang. Task-driven latent active correction for physics-inspired input method in near-field mixed reality applications. Journal of the Society for Information Display 2018, 26, 496-509.

AMA Style

Zhenliang Zhang, Yue Liu, Jie Guo, Dongdong Weng, Yongtian Wang. Task-driven latent active correction for physics-inspired input method in near-field mixed reality applications. Journal of the Society for Information Display. 2018; 26 (8):496-509.

Chicago/Turabian Style

Zhenliang Zhang; Yue Liu; Jie Guo; Dongdong Weng; Yongtian Wang. 2018. "Task-driven latent active correction for physics-inspired input method in near-field mixed reality applications." Journal of the Society for Information Display 26, no. 8: 496-509.

Journal article
Published: 22 October 2017 in Sensors

In 2015, HTC and Valve launched a virtual reality headset empowered with Lighthouse, a cutting-edge spatial positioning technology. Although Lighthouse is superior in accuracy, latency, and refresh rate, its algorithms do not support base-station expansion and handle occlusion of moving targets poorly: they cannot calculate a target's pose from a small set of sensors, resulting in the loss of optical tracking data. In view of these problems, this paper proposes an improved pose estimation algorithm for cases involving occlusion. Our algorithm calculates the pose of a given object from a unified dataset comprising inputs from sensors recognized by all base stations, as long as three or more sensors detect a signal in total, regardless of which base station they belong to. To verify the algorithm, official HTC base stations and self-developed receivers were used for prototyping. The experimental results show that our pose calculation algorithm achieves precise positioning even when only a few sensors detect the signal.
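Once the detected sensors' world positions are pooled across base stations, recovering the object's pose is a rigid registration problem that needs only three or more non-collinear correspondences. The Kabsch-style least-squares solver below is a generic sketch of that idea, assuming each sensor's 3D position has already been triangulated; it is not Lighthouse's actual algorithm.

```python
import numpy as np

def estimate_pose(model_pts, world_pts):
    """Least-squares rigid pose (R, t) from >= 3 sensor correspondences.

    model_pts: (N, 3) sensor positions in the tracked object's local frame.
    world_pts: (N, 3) positions of the same sensors, pooled from whichever
               base stations detected them. Solves world = R @ model + t.
    """
    mc, wc = model_pts.mean(axis=0), world_pts.mean(axis=0)
    H = (model_pts - mc).T @ (world_pts - wc)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = wc - R @ mc
    return R, t
```

Because the solver only needs three sensors in total, it keeps tracking through partial occlusion where a per-station algorithm would drop out.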

ACS Style

Yi Yang; Dongdong Weng; Dong Li; Hang Xun. An Improved Method of Pose Estimation for Lighthouse Base Station Extension. Sensors 2017, 17, 2411.

AMA Style

Yi Yang, Dongdong Weng, Dong Li, Hang Xun. An Improved Method of Pose Estimation for Lighthouse Base Station Extension. Sensors. 2017; 17 (10):2411.

Chicago/Turabian Style

Yi Yang; Dongdong Weng; Dong Li; Hang Xun. 2017. "An Improved Method of Pose Estimation for Lighthouse Base Station Extension." Sensors 17, no. 10: 2411.