1
Gregory Fotiades
2
Global illumination techniques are highly desirable for realistic interaction due to their accuracy and photorealism
Mobile devices are typically too weak computationally to perform global illumination themselves
3
For real-time performance we need rendering speeds on the order of < 40 ms per frame (25 fps)
Low-powered devices by their nature do not have access to high-performance hardware
4
Pre-rendering the scene with tools such as “RenderMan on Demand” [5] is not feasible, since we do not know what commands the user will send
Lorio et al. [6] used cloud computational power to model urban environments in a realistic time frame
5
The low-powered device offloads intensive global illumination computations to the cloud
The cloud performs the computations on a distributed parallel architecture
Rendered scenes are sent back to the client
7
The renderer in the cloud must operate in real time (< 40 ms, ideally even less)
The renderer is hardware accelerated to reach that real-time target
The renderer will need to scale in future work
9
Raw formats are too large even if we convert color spaces to take advantage of chroma subsampling

| Height | Width | Format | Bytes Per Pixel (BPP) | Frames Per Second (FPS) | Required Bandwidth |
|--------|-------|--------|-----------------------|-------------------------|--------------------|
| 432    | 432   | RGBA   | 4                     | 24                      | 137 Mbps           |
| 1080   | 1920  | RGBA   | 4                     | 24                      | 1519 Mbps          |
| 432    | 432   | YUV444 | 3                     | 24                      | 103 Mbps           |
| 432    | 432   | YUV420 | 1.5                   | 24                      | 52 Mbps            |
| 432    | 432   | YUV420 | 1.5                   | 20                      | 43 Mbps            |
| 1080   | 1920  | YUV420 | 1.5                   | 20                      | 475 Mbps           |
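The bandwidth figures follow directly from pixel count × bytes per pixel × frame rate. A minimal sketch, assuming 1 Mbit = 2^20 bits (which matches the rounding in the table):

```python
def required_bandwidth_mbps(width, height, bytes_per_pixel, fps):
    """Raw (uncompressed) stream bandwidth in Mbit/s, using 1 Mbit = 1,048,576 bits."""
    bits_per_second = width * height * bytes_per_pixel * fps * 8
    return bits_per_second / (1024 * 1024)

# RGBA at 24 fps: ~137 Mbps even at 432x432, ~1519 Mbps at 1080p
print(round(required_bandwidth_mbps(432, 432, 4, 24)))     # 137
print(round(required_bandwidth_mbps(1920, 1080, 4, 24)))   # 1519
# YUV420 chroma subsampling (1.5 B/px) at 20 fps still needs ~43 Mbps
print(round(required_bandwidth_mbps(432, 432, 1.5, 20)))   # 43
```

Even the cheapest row is an order of magnitude beyond typical mobile downlinks, which is what forces video compression on the next slide.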
10
Video compression must be utilized
◦ The specific codec matters little, but VP8/VP9 come to mind first [29][30]
Real-time encoding/decoding in software is common (e.g. video conferencing)
Network transmission takes a significant share of the frame time
11
The renderer is the ray tracing core
The renderer uses an Nvidia GTX 280 graphics card
The CUDA API [24] is used for hardware acceleration
A 432x432 resolution is used
◦ 16x16 threads per block across a 27x27 grid of blocks
Only modified objects are re-copied to the GPU, lowering scene-transfer overhead
The Qt toolkit [27] is used for the UI
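The launch configuration works out exactly: 16 threads × 27 blocks per dimension covers the 432-pixel dimension with one thread per pixel. A small sketch mirroring CUDA's `blockIdx * blockDim + threadIdx` mapping (the helper name is illustrative, not from the project):

```python
THREADS_PER_DIM = 16   # 16x16 threads per block
BLOCKS_PER_DIM = 27    # 27x27 blocks per grid
RESOLUTION = THREADS_PER_DIM * BLOCKS_PER_DIM  # 16 * 27 = 432

def pixel_index(block_x, block_y, thread_x, thread_y):
    """Python mirror of CUDA's blockIdx * blockDim + threadIdx pixel mapping."""
    return (block_x * THREADS_PER_DIM + thread_x,
            block_y * THREADS_PER_DIM + thread_y)

# Every (block, thread) pair maps to exactly one pixel of the 432x432 frame
covered = {pixel_index(bx, by, tx, ty)
           for bx in range(BLOCKS_PER_DIM) for by in range(BLOCKS_PER_DIM)
           for tx in range(THREADS_PER_DIM) for ty in range(THREADS_PER_DIM)}
print(len(covered) == RESOLUTION ** 2)  # True: 186,624 pixels, one thread each
```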
12
Recursion is not supported by the graphics card
◦ No stack pointer or concept of stack frames
◦ The recursive tracer was converted to an iterative one
A static frame stack was implemented and used
◦ Newer hardware supports recursion; future work may investigate porting back to a recursive tracer
A stack/recursion depth of 8 was used
◦ Depths above 8 produced no noticeable improvement
Some basic optimizations were performed, but all were trivial
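The recursion-to-iteration conversion can be sketched as follows. This is not the project's code: `intersect` and `spawn_secondary` are toy stand-ins for the real intersection and shading logic (every ray hits a surface contributing 0.5 and spawns one secondary ray at half weight), but the explicit frame stack with a static depth cap is the technique described above:

```python
from dataclasses import dataclass

MAX_DEPTH = 8  # depths beyond 8 showed no noticeable improvement

@dataclass
class Ray:
    depth: int = 0

# Hypothetical stand-ins for the real intersection/shading code
def intersect(ray):
    return 0.5  # local shading contribution of the hit

def spawn_secondary(ray):
    return Ray(ray.depth + 1)  # e.g. a reflection ray

def trace_iterative(root):
    """Whitted-style tracing with an explicit frame stack instead of recursion."""
    color = 0.0
    stack = [(root, 1.0)]                  # (ray, accumulated weight) frames
    while stack:
        ray, weight = stack.pop()
        shade = intersect(ray)
        if shade is None:                  # ray escaped the scene
            continue
        color += weight * shade
        if ray.depth < MAX_DEPTH:          # static depth cap bounds the stack size
            stack.append((spawn_secondary(ray), weight * 0.5))
    return color

print(trace_iterative(Ray()))  # geometric series: 0.5 * (1 + 1/2 + ... + 1/256)
```

Because the stack depth is bounded statically, the storage can be preallocated per GPU thread, which is exactly what a hardware target without a stack pointer requires.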
13
Triangle intersection detection is not implemented, so complex meshes cannot be rendered
◦ This is a topic for future work, as triangular meshes will drastically increase render time
◦ A multi-client implementation may be necessary
The ray tracer is based on the CG2 ray tracer, so it lacks advanced shading techniques
The ray tracer uses static scene intersection detection; future work needs some form of dynamic detection
14
Implementing video compression is outside the scope of this project
RDP clients fill the transmission niche, assuming they perform well enough for a smooth user experience, since they also relay commands to the server
Kinoni Remote Desktop [33] is used for transmission
Future work should explore in-house compression, or direct control of compression and color space conversion
16
|                      | CPU    | GPU    |
|----------------------|--------|--------|
| Min render time (ms) | 87.177 | 14.089 |
| Max render time (ms) | 92.629 | 17.576 |
| Avg render time (ms) | 89.441 | 15.989 |

Rendering speedup of ~5.6x
◦ More important than the raw speedup is scalability
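The ~5.6x figure follows directly from the averages; assuming the times are in milliseconds, only the GPU path fits the < 40 ms real-time budget set earlier:

```python
cpu_avg_ms, gpu_avg_ms = 89.441, 15.989   # average render times from the table
speedup = cpu_avg_ms / gpu_avg_ms
print(f"speedup: {speedup:.1f}x")          # ~5.6x
# Only the GPU renderer meets the < 40 ms (25 fps) real-time budget
print(gpu_avg_ms < 40.0, cpu_avg_ms < 40.0)  # True False
```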
17
~200 Kbit/s bandwidth usage for the rendered stream
A 4.4 Mbit/s download is easily feasible over 4G
◦ Not enough for raw transfers

|                  | Commands to server | Rendering to client |
|------------------|--------------------|---------------------|
| Avg bytes/second | 2454               | 26967               |
| Avg Kbit/second  | 19.172             | 210.680             |
| Avg Mbit/second  | 0.019              | 0.206               |

|                  | Upload  | Download  |
|------------------|---------|-----------|
| Avg bytes/second | 919.625 | 577540.72 |
| Avg Kbit/second  | 7.185   | 4512.037  |
| Avg Mbit/second  | 0.007   | 4.406     |
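The unit rows are straight conversions of the measured byte rates (again using 1 Kbit = 1024 bits). A quick sketch checking the headline numbers:

```python
def kbits_per_sec(bytes_per_sec):
    return bytes_per_sec * 8 / 1024

def mbits_per_sec(bytes_per_sec):
    return kbits_per_sec(bytes_per_sec) / 1024

# Rendering-to-client stream: ~211 Kbit/s, the "~200 Kbit/s" figure above
print(round(kbits_per_sec(26967), 3))      # 210.68
# Measured download capacity: ~4.4 Mbit/s, ample for the compressed stream
print(round(mbits_per_sec(577540.72), 3))  # 4.406
```

The compressed stream uses roughly 1/20th of the available 4G download, whereas the raw-format table earlier needed at least 43 Mbit/s.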
18
Real-time interactive global illumination is feasible when parallel resources and a network connection are available
Video compression is necessary to transmit the rendered images
19
Scale across multiple systems and multiple GPUs
Implement triangular mesh intersection detection
Implement dynamic object management
Implement in-house video compression