1
Lazy Solid Texture Synthesis
Eurographics Symposium on Rendering 2008
Yue Dong, Sylvain Lefebvre, Xin Tong, George Drettakis
2
We introduce a new algorithm with the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism
◦ Only a thick layer around the surface needs to be synthesized
Synthesize a volume from a set of pre-computed 3D candidates
◦ Carefully select, in a pre-process, only those candidates forming consistent triples
3
Runs efficiently on the GPU
◦ Generates high-resolution solid textures on surfaces within seconds
◦ Memory usage and synthesis time depend only on the output textured surface area
◦ Our method rapidly synthesizes new textures for the surfaces appearing when interactively breaking or cutting objects
4
Solid textures define the texture content directly in 3D
◦ Removes the need for a planar parameterization
◦ Gives the unique feeling that the object has been carved out of a block of matter
5
Implicit volume
◦ Color = f(x, y, z)
◦ Procedural texturing
  Texturing and Modeling: A Procedural Approach. EBERT D., MUSGRAVE K., PEACHEY D., PERLIN K., WORLEY. Academic Press, 1994
◦ Spectral analysis
  Spectral analysis for automatic 3D texture generation. GHAZANFARPOUR D., DISCHLER J.-M. Computers & Graphics, 1995
  Generation of 3D texture using multiple 2D models analysis. GHAZANFARPOUR D., DISCHLER J.-M. Computers & Graphics, 1996
Low memory usage
Limited range of materials
6
Explicit volume
◦ Color = g[x, y, z]
◦ Histogram matching
  Pyramid-based texture analysis/synthesis. HEEGER D. J., BERGEN J. R. SIGGRAPH, 1995
◦ Stereological technique
  Stereological techniques for solid textures. JAGNOW R., DORSEY J., RUSHMEIER H. SIGGRAPH, 2004
◦ Neighborhood matching
  Texture synthesis by fixed neighborhood searching. WEI L.-Y. PhD thesis, Stanford University, 2002
  Aura 3D textures. QIN X., YANG Y.-H. IEEE Transactions on Visualization and Computer Graphics, 2007
  Solid texture synthesis from 2D exemplars. KOPF J., FU C.-W., COHEN-OR D., DEUSSEN O., LISCHINSKI D., WONG T.-T. SIGGRAPH, 2007
Good quality
Can synthesize various materials
Takes a long time to compute
7
Pre-computation
◦ 3D candidates from 2D exemplars
Multi-resolution pyramid synthesis
◦ Upsample
◦ Jitter
◦ Correction
8
Pixel: 2D / Voxel: 3D
Triple: a set of three 2D coordinates
Crossbar: the set of pixels where the three neighborhoods of size N cross each other (N = 5)
9
We select candidate triples following two important properties
◦ A good triple must have matching colors along the crossbar of its three neighborhoods
  To provide color consistency
◦ A good triple must have good coherence across all three exemplars
  So that it is likely to form coherent patches with other neighboring candidates
10
A suitable candidate should be consistent across the crossbar
◦ Minimize the color difference along the crossbar
◦ Compute the L2 color difference between each pair of neighborhoods
◦ The sum of the differences over the three pairs defines the crossbar error CB
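A minimal sketch of how such a crossbar error could be computed, assuming each of the three N×N neighborhoods is an RGB array; which rows/columns form the shared strips is an assumption for illustration, not taken from the paper:

```python
import numpy as np

def crossbar_error(nx, ny, nz, N=5):
    """Crossbar error CB for one candidate triple (illustrative sketch).

    nx, ny, nz: the three N x N x 3 color neighborhoods, one per exemplar.
    The crossbar is assumed to be the central row/column that each pair of
    crossing slices shares; CB is the sum of the L2 color differences over
    the three pairs of shared strips.
    """
    c = N // 2
    pairs = [
        (nx[c, :, :], ny[c, :, :]),   # strip assumed shared by the E_x and E_y slices
        (ny[:, c, :], nz[:, c, :]),   # strip assumed shared by the E_y and E_z slices
        (nz[c, :, :], nx[:, c, :]),   # strip assumed shared by the E_z and E_x slices
    ]
    return float(sum(np.sum((a - b) ** 2) for a, b in pairs))

# Example: three random 5x5 RGB neighborhoods
rng = np.random.default_rng(0)
nx, ny, nz = (rng.random((5, 5, 3)) for _ in range(3))
print(crossbar_error(nx, ny, nz))
```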
11
For each pixel of each exemplar
◦ Form triples using the pixel itself and two neighborhoods from the other two exemplars
◦ Select the triples producing the smallest crossbar error
To speed up the process
◦ Extract the S most-similar pixel strips from each of the other two exemplars, using the ANN library
◦ Form the S² triples, then keep the 100 best
◦ S = 65
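A sketch of this per-pixel selection loop; a brute-force strip search stands in for the ANN library, and the strips are assumed to be pre-computed as one flattened color vector per pixel:

```python
import numpy as np

def select_candidate_triples(strips_x, strips_y, strips_z, p, S=65, keep=100):
    """Candidate triples for pixel p of E_x (illustrative sketch).

    strips_*: (num_pixels, strip_len) arrays of flattened crossbar-strip
    colors, one row per exemplar pixel; integer indices stand in for the
    2D pixel coordinates. The paper accelerates the strip search with the
    ANN library; plain brute force is used here instead.
    """
    target = strips_x[p]
    # S most-similar strips in E_y and in E_z
    cand_y = np.argsort(np.sum((strips_y - target) ** 2, axis=1))[:S]
    cand_z = np.argsort(np.sum((strips_z - target) ** 2, axis=1))[:S]
    # Form the S^2 triples and score each one by its crossbar error,
    # approximated here as the sum of pairwise strip differences.
    scored = []
    for qy in cand_y:
        for qz in cand_z:
            cb = (np.sum((strips_x[p] - strips_y[qy]) ** 2)
                  + np.sum((strips_y[qy] - strips_z[qz]) ** 2)
                  + np.sum((strips_z[qz] - strips_x[p]) ** 2))
            scored.append((cb, p, int(qy), int(qz)))
    scored.sort(key=lambda t: t[0])
    return [(px, qy, qz) for _, px, qy, qz in scored[:keep]]
```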
12
Check whether a candidate may form coherent patches in all directions with candidates from neighboring pixels
For each coordinate within a candidate triple
◦ Verify that at least one candidate from a neighboring pixel has a continuous coordinate
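One possible reading of this coherence test, sketched below; the exact continuity criterion (in particular, how a neighbor offset maps into each exemplar's 2D coordinates) is an assumption for illustration:

```python
import numpy as np

def is_coherent(p, triple, candidates,
                neighbor_offsets=((1, 0), (-1, 0), (0, 1), (0, -1))):
    """Coherence test for one candidate triple at pixel p (illustrative sketch).

    candidates: dict mapping a pixel (tuple) to its list of candidate triples;
    a triple is a tuple of three 2D coordinates, one per exemplar.
    The triple is considered coherent if, for each of its three coordinates,
    at least one candidate of a neighboring pixel continues it, i.e. stores
    the same coordinate shifted by the same offset as the pixels themselves.
    """
    for axis in range(3):
        coord = np.asarray(triple[axis])
        continued = False
        for d in neighbor_offsets:
            q = (p[0] + d[0], p[1] + d[1])
            for other in candidates.get(q, []):
                if np.array_equal(np.asarray(other[axis]), coord + np.asarray(d)):
                    continued = True
                    break
            if continued:
                break
        if not continued:
            return False
    return True
```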
13
[Figure: pixels p and p+1 and the exemplars E_x, E_y, E_z]
14
^xC – candidates for E_x
^xC_k[p] – k-th candidate triple for pixel p in E_x
^xC_k[p].y – E_y coordinate of the triple ^xC_k[p]
15
Iterate until no more than 12 candidates remain per pixel
◦ Typically requires 2 iterations
If more candidates remain, keep the 12 with the smallest crossbar error
It is possible to have no candidate at all
◦ Rare in practice
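A sketch of this pruning loop, under the assumption that removing incoherent candidates can make other candidates incoherent in turn (hence the iteration); it reuses the is_coherent test sketched earlier and a per-triple crossbar error function:

```python
def prune_candidates(candidates, crossbar_error_of, max_keep=12, max_iters=10):
    """Iteratively discard incoherent candidates (illustrative sketch).

    candidates: dict pixel -> list of candidate triples.
    crossbar_error_of: function triple -> CB value.
    Relies on the is_coherent() sketch above.
    """
    for _ in range(max_iters):
        pruned = {p: [t for t in triples if is_coherent(p, t, candidates)]
                  for p, triples in candidates.items()}
        converged = all(len(pruned[p]) == len(candidates[p]) for p in candidates)
        candidates = pruned
        if converged or all(len(ts) <= max_keep for ts in candidates.values()):
            break
    # If more than max_keep candidates remain, keep the ones with the
    # smallest crossbar error.
    return {p: sorted(ts, key=crossbar_error_of)[:max_keep]
            for p, ts in candidates.items()}
```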
16
Candidates are not only useful for neighborhood matching; they also provide a very good initialization for the synthesis process
For each pixel
◦ One 2D neighborhood lies in the plane of the exemplar
◦ The two others are orthogonal to it and intersect along a line of N voxels (the neighborhood size)
17
To initialize synthesis, we create such a slab using the best (first) candidate at each pixel
Using the slab directly as a 3D exemplar would be very limiting
◦ This would ignore all other candidates
◦ We therefore use the slab only for initialization
18
Extends ‘Parallel Controllable Texture Synthesis’ [SIGGRAPH 2005]
Same overall structure
◦ Upsample
◦ Jitter
◦ Correction
19
Contrary to the original scheme, we perturb the result through jitter only once, after initialization
◦ If finer control is desired, jitter could be explicitly added after each upsampling step
20
To reduce synthesis time, multi-resolution synthesis algorithms can start from an intermediate level of the image pyramid
A good initialization is key to achieving high-quality synthesis
We simply choose one of the candidate slabs and tile it in the volume
◦ Three levels above the finest (maximum level L − 3)
◦ Using the candidate slab from the corresponding level
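A minimal sketch of the tiling initialization, assuming the candidate slab stores one coordinate triple per cell and is simply repeated periodically to fill the coarse volume:

```python
import numpy as np

def init_from_slab(slab_triples, volume_shape):
    """Tile a candidate slab to initialize the coarse volume (illustrative sketch).

    slab_triples: (sx, sy, sz, 3, 2) array of per-cell coordinate triples.
    volume_shape: (vx, vy, vz) shape of the coarse level to initialize.
    Returns a (vx, vy, vz, 3, 2) array of initial triples.
    """
    sx, sy, sz = slab_triples.shape[:3]
    vx, vy, vz = volume_shape
    ix, iy, iz = np.meshgrid(np.arange(vx) % sx,
                             np.arange(vy) % sy,
                             np.arange(vz) % sz, indexing="ij")
    return slab_triples[ix, iy, iz]
```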
21
[Figure: comparison of random initialization vs. slab initialization]
22
We explicitly introduce variation to generate variety in the result
◦ Perturb the initial result by applying a continuous deformation, similar to a random warp
23
J – jittered volume
v – voxel coordinate
c_i – random point in space
d_i – normalized random direction
G = 200, A_i = 0.1 to 0.3, σ_i = 0.01 to 0.05
24
It is important for A_i to have larger magnitude when σ_i is smaller
◦ Adds stronger perturbation at small scales, while adding only subtle distortions at coarser scales
◦ Small-scale distortions are corrected by synthesis, introducing variety
The overall magnitude of the jitter is directly controllable by the user
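The jitter is described here only by the legend above (J, v, c_i, d_i, G, A_i, σ_i). The sketch below shows one plausible form of such a continuous random warp, Gaussian kernels displacing voxels along random directions; it is not the paper's exact formula:

```python
import numpy as np

def jitter_warp(coords, G=200, A_range=(0.1, 0.3), sigma_range=(0.01, 0.05), seed=0):
    """Continuous random warp used as jitter (plausible sketch, assumed form).

    coords: (..., 3) voxel coordinates normalized to [0, 1]^3.
    Each of the G kernels displaces nearby voxels along a normalized random
    direction d_i, with amplitude A_i and a Gaussian falloff of width sigma_i
    around a random center c_i. Amplitudes are coupled to widths so that the
    strongest perturbations act at the smallest scales, as the slide suggests.
    """
    rng = np.random.default_rng(seed)
    c = rng.random((G, 3))                               # random centers c_i
    d = rng.normal(size=(G, 3))
    d /= np.linalg.norm(d, axis=1, keepdims=True)        # normalized directions d_i
    sigma = rng.uniform(*sigma_range, size=G)
    # Larger amplitude for smaller sigma (linear coupling, an assumption)
    t = (sigma_range[1] - sigma) / (sigma_range[1] - sigma_range[0])
    A = A_range[0] + t * (A_range[1] - A_range[0])

    flat = coords.reshape(-1, 3).astype(float)
    disp = np.zeros_like(flat)
    for i in range(G):
        r2 = np.sum((flat - c[i]) ** 2, axis=1)
        disp += (A[i] * np.exp(-r2 / (2.0 * sigma[i] ** 2)))[:, None] * d[i]
    return (flat + disp).reshape(coords.shape)
```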
25
Each of the eight child volume cells inherits three coordinates from its parent, one for each direction
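A sketch of the upsampling rule this suggests, assuming the same rule as 2D parallel controllable texture synthesis: each child doubles its parent's exemplar coordinates and adds its own offset within the 2×2×2 block; how the 3D child offset maps into each exemplar's two coordinates is an assumption:

```python
import numpy as np

def upsample(parent_triples):
    """Upsample coordinate triples to the next-finer level (illustrative sketch).

    parent_triples: (X, Y, Z, 3, 2) integer array; each voxel stores three 2D
    exemplar coordinates. Each of the 8 children inherits its parent's triple,
    scaled to the finer level and offset by the child's position in the
    2x2x2 block.
    """
    X, Y, Z = parent_triples.shape[:3]
    child = np.zeros((2 * X, 2 * Y, 2 * Z, 3, 2), dtype=parent_triples.dtype)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                block = 2 * parent_triples
                block[..., 0, :] += (dy, dz)   # E_x slice: offset in (y, z) (assumed)
                block[..., 1, :] += (dx, dz)   # E_y slice: offset in (x, z) (assumed)
                block[..., 2, :] += (dx, dy)   # E_z slice: offset in (x, y) (assumed)
                child[dx::2, dy::2, dz::2] = block
    return child
```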
26
Performed on all synthesized voxels simultaneously, in parallel
We compute a color for each voxel by averaging the three corresponding colors from the exemplars
We visit each of the voxel's direct neighbors, and use the stored coordinate triples to gather the candidate sets
27
P_x – a 3×2 matrix transforming a 2D offset from E_x to volume space
28
Search for the best-matching candidate by the distance between the voxel neighborhood and the 3D candidate
◦ Distance is measured by the L2 norm on color differences
◦ A PCA projection can be used to speed up the process
Replace the triple with the best-matching candidate
◦ Triples have been pre-computed and optimized to guarantee that the color disparity between the three colors in each voxel is low
Two correction passes per level
◦ Using the sub-pass mechanism of PCTS (8 sub-passes)
29
We gather 12 candidates from the 3³ = 27 direct neighbors, for a total of 324 candidates per voxel
◦ Too many candidates
Search for the best-matching 2D candidates in each of the three directions, then gather the 3D candidates only from these three best-matching pixels
◦ Still a lot
◦ In practice we keep 4 2D and 12 3D candidates per exemplar pixel at coarse levels
  27 × 4 = 108 2D candidates
  3 × 12 = 36 3D candidates
◦ 2 2D and 4 3D candidates at the finest level
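A compact sketch of the decision at the heart of the correction pass: given the candidate triples gathered from a voxel's direct neighbors and their pre-computed (optionally PCA-projected) neighborhood vectors, pick the one closest to the voxel's current neighborhood in the L2 sense. The array interfaces are assumptions:

```python
import numpy as np

def correct_voxel(voxel_nbhd, gathered_triples, gathered_nbhds):
    """One correction decision for one voxel (illustrative sketch).

    voxel_nbhd: (D,) feature vector of the voxel's current 3D neighborhood
        (the three crossing 2D neighborhoods, possibly PCA-projected).
    gathered_triples: (K, 3, 2) candidate triples gathered from the voxel's
        direct neighbors.
    gathered_nbhds: (K, D) pre-computed feature vectors of those candidates.
    Returns the candidate triple with the smallest L2 distance.
    """
    d2 = np.sum((gathered_nbhds - voxel_nbhd[None, :]) ** 2, axis=1)
    return gathered_triples[int(np.argmin(d2))]
```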
30
Determine the entire dependency chain throughout the volume pyramid, from a requested set of voxels, so as to synthesize the smallest number of voxels
◦ Compute a synthesis mask
Mask_l^p ⊗ NeighborhoodShape – dilation of the mask by the shape of the neighborhoods
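One way to realize this synthesis-mask idea, sketched below under the assumption that each correction pass enlarges the required region by one neighborhood-shaped dilation and that the mask is then reduced to the next-coarser pyramid level; it uses scipy.ndimage.binary_dilation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dependency_masks(request_mask, levels=3, passes=2, N=5):
    """Synthesis masks from a requested set of voxels (illustrative sketch).

    Assumption: each correction pass enlarges the set of required voxels by a
    dilation with the N^3 neighborhood shape, and the dilated mask is then
    reduced to the next-coarser level of the pyramid. Shapes are assumed to
    be divisible by 2 at every level.
    """
    struct = np.ones((N, N, N), dtype=bool)           # neighborhood shape
    masks = [np.asarray(request_mask, dtype=bool)]
    for _ in range(levels - 1):
        m = masks[-1]
        for _ in range(passes):
            m = binary_dilation(m, structure=struct)  # Mask ⊗ NeighborhoodShape
        # A coarse voxel is needed if any of its 8 fine children is needed
        coarse = m.reshape(m.shape[0] // 2, 2,
                           m.shape[1] // 2, 2,
                           m.shape[2] // 2, 2).any(axis=(1, 3, 5))
        masks.append(coarse)
    return masks                                      # finest level first
```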
31
To compute a single voxel, with N = 5, 2 passes and synthesis of the 3 last levels, our scheme requires a dependency chain of 6778 voxels
◦ The size of the dependency chain grows quadratically with the number of passes
32
Runs entirely in software, using the GPU to accelerate the actual synthesis
Intel Core 2 6400 (2.13 GHz) CPU and an NVIDIA GeForce 8800 Ultra
We sometimes add a feature distance
33
Most results in the paper are computed from a single example image repeated three times
◦ Pre-computed candidates may be shared, depending on the orientation chosen for the image
◦ Typically 7 seconds for 64² exemplars
◦ 25 to 35 seconds for 128² exemplars
◦ This includes building the exemplar pyramids, computing the PCA bases and building the candidate sets
◦ 231 KB of memory required for a 64² exemplar
34
Implemented in fragment shaders, using the OpenGL Shading Language
◦ Unfold volumes into tiled 2D textures, using three 2-channel 16-bit render targets to store the synthesized triples
◦ Pre-compute and reduce the dimensionality of all candidate 3-neighborhoods using PCA, keeping between 8 and 12 dimensions
  More terms are kept at coarser levels, since less variance is captured by the first dimensions
◦ Quantize the neighborhoods to 8 bits to reduce bandwidth
  Stored in RGBA 8-bit textures
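A sketch of this compression step on the CPU side, assuming a plain SVD-based PCA followed by per-dimension 8-bit quantization; the 8-to-12 dimension budget comes from the slide, everything else is illustrative:

```python
import numpy as np

def pca_project_quantize(nbhds, dims=12):
    """Compress candidate 3-neighborhoods for GPU storage (illustrative sketch).

    nbhds: (M, D) matrix, one flattened candidate 3-neighborhood per row.
    dims: number of PCA dimensions to keep (8 to 12 in the slide, with more
    kept at coarser levels).
    """
    mean = nbhds.mean(axis=0)
    centered = nbhds - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:dims]                                  # principal directions
    proj = centered @ basis.T                          # (M, dims) PCA coordinates
    # Per-dimension 8-bit quantization to reduce bandwidth
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    q = np.round(255.0 * (proj - lo) / np.maximum(hi - lo, 1e-12)).astype(np.uint8)
    return q, (basis, mean, lo, hi)                    # keep what is needed to decode
```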
35
To minimize memory consumption, we perform synthesis into a TileTree data structure
◦ LEFEBVRE S., DACHSBACHER C.: TileTrees. In Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (2007)
36
When interactively cutting an object, synthesis occurs only once, for the newly appearing surfaces
◦ The TileTree cannot be updated interactively, so we store the result in a 2D texture map for display
◦ Our implementation only allows planar cuts: the new surfaces are planar and are trivially parameterized onto the 2D texture synthesized when the cut occurs
39
7.22 seconds for synthesizing a 64³ volume from a 64² exemplar
◦ 7 seconds for pre-computation and 220 milliseconds for synthesis
◦ Memory requirement during synthesis is 3.5 MB
28.7 seconds for synthesizing a 128³ volume from a 128² exemplar
◦ 27 seconds for pre-computation and 1.7 seconds for synthesis
◦ ‘Solid texture synthesis from 2D exemplars’ [SIGGRAPH 2007] takes 10 to 90 minutes
41
4.1 seconds (dragon) to 17 seconds (complex structure), excluding pre-computation
Storage of the texture data requires between 17.1 MB (statue) and 54 MB (complex structure)
◦ The equivalent volume resolution is 1024³, which would require 3 GB
Slower than state-of-the-art pure surface texture synthesis approaches
◦ But inherits all the properties of solid texturing
42
On-demand synthesis when cutting or breaking objects (Fig. 10)
◦ Resolution of 256³
◦ Initially requires 1.3 MB
◦ The average time for synthesizing a 256² texture for a new cut is 8 ms
◦ Synthesizing a 256² slice of texture content requires 14.4 MB
  Due to the padding necessary to ensure spatial determinism
45
We also implemented our synthesis algorithm using only standard 2D candidates
◦ It takes roughly twice the number of iterations to obtain a result of equivalent visual quality
◦ Due to the increased number of iterations, the size of the dependency chain for computing a single voxel grows from 6778 voxels with 3D candidates to 76812 voxels with 2D candidates
  A factor of 11.3 in both memory usage and speed
47
A new algorithm for solid texture synthesis
◦ With the unique ability to restrict synthesis to a subset of the voxels, while enforcing spatial determinism
◦ Synthesizes a volume from a set of pre-computed 3D candidates
◦ The GPU implementation is fast enough to provide on-demand synthesis when interactively cutting or breaking objects