Region Warping in a Virtual Reality System with Priority Rendering
Yang-Wai Chow, Ronald Pose, Matthew Regan
School of Computer Science and Software Engineering, Monash University
Overview
- Background: Address Recalculation Pipeline; Priority Rendering
- Description of the challenges/problems: large object segmentation; tearing
- Solution to the problem: Region Priority Rendering; Region Warping
- Experimental Results
- Future Work
Background
- Address Recalculation Pipeline
- Priority Rendering
The latency problem
Latency is a major factor that plagues the design of immersive Head Mounted Display (HMD) virtual reality systems. End-to-end latency is defined as the time between a user's action and when that action is reflected by the display. The Address Recalculation Pipeline (ARP) was designed to reduce the end-to-end latency due to user head rotations in immersive HMD virtual reality systems.
[Diagram: user actions, delay, actions reflected by display; the delay is the end-to-end latency]
Lengthy delays in immersive HMD virtual reality systems can have adverse effects on the user. Latency can completely destroy the illusion of reality that the virtual reality system attempts to present to the user.
[Image: Head Mounted Display (HMD)]
Conventional virtual reality display systems attempt to shorten the end-to-end latency by reducing scene complexity and/or by using faster rendering engines. Even with the fast graphics accelerators available today, which can render over 100 frames per second (fps), the end-to-end latency remains a factor to be contended with. The update cycle is still bound by the need to obtain up-to-date head orientation information (where the user is looking) before any form of rendering can commence.
[Diagram: normal sequence of events (head tracking, image creation, buffer swap, image valid); conventional systems attempt to shorten the image creation and buffer swap stages]
Conventional virtual reality system
On conventional graphics systems, the rendering process is bound by the need to obtain up-to-date head orientation information prior to rendering.
[Diagram: conventional pipeline: database traversal, geometric transform, face classification, lighting, clipping, viewport mapping (driven by head orientation), scan conversion, pixel addressing, image composition, display buffer, display image]
The Address Recalculation Pipeline (ARP) virtual reality system
The ARP is fundamentally different from conventional systems in that it implements delayed viewport mapping, a concept whereby viewport mapping is performed post rendering.
[Diagram: ARP pipeline: database traversal, geometric transform, face classification, lighting, clipping, scan conversion, image composition, display buffer; then, driven by head orientation: locate pixel, viewport mapping, wide angle correction, anti-aliasing, pixel addressing, display image]
The ARP effectively decouples viewport orientation mapping from the rendering process, and in this manner removes the usually lengthy rendering time and buffer-swapping delays from the latency. With viewport orientation mapping separated from rendering, latency is bound only by the HMD unit's update rate and the time required to fetch pixels from display memory. The system is far less dependent on the rendering frame rate and is therefore fairly independent of scene complexity.
[Diagram: average latency to head rotations without the pipeline (head tracking, image creation, buffer swap, image valid) versus with the pipeline (head tracking, image valid)]
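The shift in the latency bound can be sketched as simple arithmetic. All figures below are illustrative assumptions, not measurements from the ARP system:

```python
def conventional_latency_ms(tracking=2.0, render=16.7, swap=8.3):
    """End-to-end latency when rendering and buffer swap sit on the
    critical path between head tracking and a valid image."""
    return tracking + render + swap

def arp_latency_ms(tracking=2.0, pixel_fetch=1.0):
    """End-to-end latency when viewport mapping is delayed until pixel
    fetch from display memory, as in the ARP."""
    return tracking + pixel_fetch
```

With these assumed figures the conventional path costs about 27 ms while the ARP path costs about 3 ms; the key point is that scene complexity affects only the `render` term, which the ARP removes from the latency path.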
To implement delayed viewport mapping, the ARP requires the scene that encapsulates the user's head to be pre-rendered onto display memory. The surface of a cube was chosen as the rendering surface surrounding the user's head, mainly because of its rendering simplicity. The rendering surface of a cube contains six standard viewport mappings, each orthogonal to the others, and there are standard algorithms for cube surface rendering. The use of such rendering can be found in the computer graphics technique known as cube environment mapping.
[Diagram: cube faces labelled top, bottom, front, back, left, right]
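The core of a cube environment lookup is picking which of the six faces a view direction exits through, then projecting onto that face. A minimal sketch follows; the face names and u/v orientation conventions here are assumptions for illustration (real cube-map APIs fix specific axis flips per face):

```python
def cube_face_uv(x, y, z):
    """Map a view direction onto one of six cube faces.

    The axis with the largest magnitude selects the face; the other two
    components, divided by that magnitude, give (u, v) in [-1, 1].
    """
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)
    if m == 0.0:
        raise ValueError("zero-length direction")
    if m == ax:
        face = 'right' if x > 0 else 'left'
        u, v = y / m, z / m
    elif m == ay:
        face = 'top' if y > 0 else 'bottom'
        u, v = x / m, z / m
    else:
        face = 'front' if z > 0 else 'back'
        u, v = x / m, y / m
    return face, u, v
```

Because the six faces tile all directions around the head, a post-rendering viewport mapper can resolve any head orientation to pixels already present in display memory.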
Image Composition
A rendering method known as Priority Rendering was developed to be used in conjunction with the ARP system, for the purpose of reducing the overall rendering load. Priority Rendering is based on the concept of image composition: different sections of the scene can be rendered onto separate display memories before being combined to form an image of the whole scene.
Priority Rendering
Priority Rendering allows different sections of the scene surrounding the user's head to be rendered onto separate display memories, which can therefore be updated at different rates. In the ARP system, most objects in the scene remain valid upon user head rotations. Because of perspective foreshortening, objects closer to the user appear larger than distant objects; likewise, upon user translations, closer objects appear to move by larger amounts than distant objects.
Description of the challenges/problems
- Large object segmentation
- Tearing
Large object segmentation
It is conceivable that the use of large object segmentation in conjunction with Priority Rendering could further reduce the overall rendering load. Fractal terrain example: a fractal terrain typically consists of thousands of polygons. If the terrain were segmented for priority rendering, different sections of it could be updated at different rates.
The tearing problem
Implementing object segmentation with Priority Rendering gives rise to a potential scene tearing problem. Tearing can occur when different sections of the same object are rendered at different update rates while the user is translating through the scene.
Scene tearing artefacts completely destroy the illusion of reality, and therefore have to be addressed before object segmentation can be used effectively.
[Image: fractal terrain tearing example]
Solution to the problem
- Region Priority Rendering
- Region Warping
Region Priority Rendering
Region Priority Rendering was devised to implicitly sort objects spatially and to provide a criterion for object segmentation. This methodology involved dividing the virtual world's objects into equal-sized clusters, or regions.
By dividing the virtual world into square-based regions, object segments could be assigned to different display memories with different update rates. Objects in the regions were assigned to the display memories based on spatial locality, and large objects were segmented along region boundaries. In this way tearing would be predictable, and the size of the tearing could be computed.
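The region assignment above can be sketched in a few lines. The region-distance metric, the rate values, and their thresholds below are illustrative assumptions, not the figures used in the actual system:

```python
def region_index(x, z, region_size):
    """Square region a point falls in, tiling the ground plane."""
    return (int(x // region_size), int(z // region_size))

def update_rate_hz(obj_region, user_region, rates=(60.0, 30.0, 15.0, 7.5)):
    """Assign a display-memory update rate by how many regions away an
    object is from the user's region (Chebyshev distance in region units):
    nearby regions are redrawn often, distant regions rarely."""
    d = max(abs(obj_region[0] - user_region[0]),
            abs(obj_region[1] - user_region[1]))
    return rates[min(d, len(rates) - 1)]
```

Segmenting a large object along region boundaries then amounts to binning its triangles by `region_index`, so each segment inherits its region's update rate, and any tear can only appear on a known region boundary.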
Region Warping
Region Warping was designed to hide the scene tearing artefacts resulting from object segmentation with Priority Rendering. It essentially involves the perturbation of object vertices in order to hide the tearing artefacts.
Experimental Method
Normalization
Before region warping could be performed, the vertices had to be normalized in order to determine the exact amount of perturbation required for each vertex. Normalization was performed using what can be seen as concentric squares centered on the region the user is currently located in.
The warping
All vertices in the regions had to be perturbed in order to avoid the potential problem of objects looking out of place. Region warping forces the vertices of the different regions to align, thus hiding the tearing from the user. Two interpolation methods were used in the experiments: linear interpolation and squared interpolation. Analysis was conducted to determine which form of interpolation produced better results.
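The normalization and warping steps above can be sketched as follows. Treating the "concentric squares" as rings of constant Chebyshev distance from the user, and the specific inner/outer boundaries, are assumptions for illustration:

```python
def normalize_distance(vx, vz, ux, uz, inner, outer):
    """Normalize a vertex's distance from the user to [0, 1]: 0 on the
    inner concentric square, 1 on the outer one, clamped outside."""
    d = max(abs(vx - ux), abs(vz - uz))   # concentric-square (Chebyshev) ring
    t = (d - inner) / (outer - inner)
    return min(max(t, 0.0), 1.0)

def warp_offset(full_offset, t, squared=False):
    """Scale the full perturbation needed at the region boundary by a
    linear (t) or squared (t*t) weight, so vertices of adjacent regions
    meet exactly at the boundary and the tear is hidden."""
    w = t * t if squared else t
    return full_offset * w
```

The squared weight keeps vertices near the user almost unperturbed and concentrates the distortion toward the far boundary, which matches the later observation that squared interpolation pushes the bulk of the error away from the viewpoint.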
Gallery scene
[Image: scene used for the experiments]
Experimental Results
Scene tearing
[Image: an example of a single frame showing the scene tearing effect]
Region Warping results
[Image: the exact same frame, this time with Region Warping]
Analysis
The level of distortion caused by linear interpolation and squared interpolation region warping was analyzed. The Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) error metrics were used to compare the two warping methods mathematically. These error metrics are commonly used to measure the level of distortion in video or image compression techniques.
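Both metrics have standard definitions; a minimal sketch over flat pixel sequences (representing a warped frame versus the normally rendered original) might look like this:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    assert len(a) == len(b) and len(a) > 0
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak Signal-to-Noise Ratio in decibels; infinite when the frames
    are identical (zero error)."""
    e = mse(a, b)
    return float('inf') if e == 0.0 else 10.0 * math.log10(peak * peak / e)
```

Lower MSE and higher PSNR both mean the warped frame is closer to the original, which is exactly how the two result slides that follow are read.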
Mean Squared Error (MSE) results
A lower MSE value indicates fewer errors in the frames relative to the original (normal rendering).
Peak Signal-to-Noise Ratio (PSNR) results
A higher PSNR value means the frames are closer to the original (normal rendering).

PSNR (dB)
Frame   Linear Interpolation   Squared Interpolation
890     29.91794               30.44601
891     30.28992               32.71463
892     80.82784
893     29.41706               31.64415
894     24.58903               26.82375
895     30.09526               30.47705
896     30.52503               33.17345
897     85.69346
898     29.78135               31.54816
899     24.23855               27.04248

(Frames 892 and 897 report only a single value in the source.)
Difference images
Observation of the difference images showed that squared interpolation region warping is more attractive, as it pushes the bulk of the errors further away from the user's point of view.
[Image: difference between normal rendering and linear interpolation region warping]
[Image: difference between normal rendering and squared interpolation region warping]
Future work
Where to from here…
- Human visual perception experiments
- Relationships between level of distortion, region sizes, speed of user translations, etc.
- Computational and rendering load
- Dynamic shadow generation and shaders
Questions or Suggestions?