Slide 1: Capture of Hair Geometry from Multiple Images
Sylvain Paris, Hector M. Briceño, François X. Sillion
ARTIS is a team of the GRAVIR-IMAG research lab (UMR 5527, joint between CNRS, INPG, INRIA and UJF) and a project of INRIA.
Slide 2: Motivation
- Digital humans are more and more common (movies, games…)
- Hairstyle is important: it is a characteristic feature
- Goal: duplicating a real hairstyle
(Dusk demo image © 2004 NVIDIA Corporation. All rights reserved.)
Slide 3: Motivation
User-based duplication of hair:
- creation from scratch
- editing at a fine level
Image-based capture:
- automatic creation
- copies the original features
- editing still possible [Kim02]
Slide 4: Our approach
- A digital copy of a real hairstyle
- Static geometry only (animation and appearance are future work)
- A dense set of 3D strands from multiple images
Slide 5: Outline
- Previous work
- Definitions and overview
- Details of the hair capture
- Results
- Conclusions
Slide 6: Previous work: shape reconstruction
- Computer vision techniques: shape from motion, shading, specularities
- 3D scanners
- Both have difficulties with the complexity of hair, and recover only surfaces
Slide 7: Previous work: light-field approach
Matusik et al. 2002 [Matusik02]: new images from different viewpoints and lights, using alpha mattes.
- Duplicates the hairstyle
- But no 3D strands: not editable
Slide 8: Previous work: editing packages
Hadap and Magnenat-Thalmann 2001 [Hadap01] (MIRALab, University of Geneva); Kim et al. 2002 [Kim02]: dedicated tools to help the user.
- 3D strands; total control
- But time-consuming, and duplication is very hard
Slide 9: Previous work: procedural and image-based
Kong et al. 1997 [Kong97]: hair volume from images, filled procedurally.
- 3D strands; duplicates the hair volume
- But no duplication of the hairstyle, and a new procedure is needed for each hair type
Slide 10: Previous work: image-based
Grabli et al. 2002 [Grabli02]: fixed camera, moving light; 3D from shading.
- 3D strands; duplicates the hairstyle
- But only a partial reconstruction (holes)
We build upon their approach. (Figures: sample input image, captured geometry.)
Slide 11: Our approach
1. Dense and reliable 2D data: robust image analysis
2. From 2D to 3D: reflection variation analysis (the light moves, the camera is fixed; several light sweeps cover all hair orientations)
3. Complete hairstyle: the above process repeated from several viewpoints
Slide 12: Outline
- Previous work
- Definitions and overview
- Details of the hair capture
- Results
- Conclusions
Slide 13: Definitions
- Fiber: a single hair
- Strand: the visible entity (a small group of fibers)
- Segment: the part of a strand that projects onto 1 pixel (~1 mm), characterized by its orientation
Slide 14: Setup and input (figure)
Slide 15: Input summary
We use:
- 4 viewpoints
- 2 light sweeps per viewpoint
- 50 to 100 images per sweep
- known camera and light positions
- a known hair region (binary mask)
Slide 16: Main steps
Cameras one by one:
1. Image analysis → 2D orientation
2. Highlight analysis → 3D orientation
All cameras together:
3. Segment chaining → 3D strands
Slide 17: Main steps (recap; current step: 1. image analysis → 2D orientation)
Slide 18: Measure of 2D orientation
Difficult points:
- fibers smaller than a pixel → aliasing
- complex light interactions: scattering, self-shadowing…
- varying image properties
→ Select the measurement method per pixel.
Slide 19: Measure of 2D orientation
Useful information: many images are available.
→ Select the light position per pixel.
Slide 20: Our approach
Based on oriented filters: try several options and use the "best" one,
θ* = argmax over θ of |K_θ ∗ I|,
evaluated for orientations θ from 0° to 180°. The most reliable choice is the most discriminant one: the response with the lowest variance.
(Figure: filter response vs. angle, peaking at the fiber orientation, here 90°.)
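The selection rule above can be made concrete with a small sketch. This is a minimal illustration, not the paper's exact filter bank: it tests a single oriented second-derivative-of-Gaussian profile (one of the families named on slide 23) over a set of angles and keeps, per pixel, the angle with the strongest absolute response. The function name, kernel size and sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def best_orientation(image, angles_deg, size=9, sigma=1.0):
    """Per pixel, keep the angle maximizing |K_theta * I|.
    K_theta is an oriented second-derivative-of-Gaussian kernel;
    size and sigma are assumed values."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    best_resp = np.full(image.shape, -np.inf)
    best_angle = np.zeros(image.shape)
    for theta in angles_deg:
        t = np.deg2rad(theta)
        u = -xs * np.sin(t) + ys * np.cos(t)   # across the orientation
        v = xs * np.cos(t) + ys * np.sin(t)    # along the orientation
        kern = (u**2 / sigma**2 - 1.0) * np.exp(-u**2 / (2 * sigma**2))
        kern *= np.exp(-v**2 / (2 * (2 * sigma)**2))  # elongate along theta
        kern -= kern.mean()                           # zero-mean
        resp = np.abs(convolve(image.astype(float), kern, mode='nearest'))
        better = resp > best_resp
        best_angle[better] = theta
        best_resp[better] = resp[better]
    return best_angle, best_resp

# e.g.: angle_map, resp_map = best_orientation(img, np.arange(0, 180, 5))
```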
Slide 21: Filter selection (figure)
Slide 22: Implementation
1. Pre-processing: filter the images
2. For each pixel, test: filter profiles, filter parameters, light positions; pick the option with the lowest variance
3. Post-processing: smooth the orientations (bilateral filter)
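A hedged sketch of the per-pixel selection in step 2. The layout of the responses dictionary and the use of circular variance as the "lowest variance" criterion are assumptions; the paper's precise reliability measure may differ.

```python
import numpy as np

def select_per_pixel(responses):
    """responses: dict mapping a configuration key
    (filter profile, filter parameters, light position) to an array
    of shape (n_angles, H, W) holding |K * I| per tested angle.
    Per pixel, keep the configuration whose angular response curve
    is the most peaked (lowest circular variance) and return the
    corresponding angle."""
    best_var, best_angle = None, None
    for resp in responses.values():
        n_angles = resp.shape[0]
        angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
        w = resp / (resp.sum(axis=0, keepdims=True) + 1e-12)
        # orientations are pi-periodic: use doubled angles
        c = (w * np.cos(2 * angles)[:, None, None]).sum(axis=0)
        s = (w * np.sin(2 * angles)[:, None, None]).sum(axis=0)
        circ_var = 1.0 - np.hypot(c, s)          # 0 = perfectly peaked
        angle = np.rad2deg(angles)[np.argmax(resp, axis=0)]
        if best_var is None:
            best_var, best_angle = circ_var, angle
        else:
            better = circ_var < best_var
            best_var = np.where(better, circ_var, best_var)
            best_angle = np.where(better, angle, best_angle)
    return best_angle, best_var
```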
Slide 23: Per-pixel selection
(Figure legend: 4 Canny, 2 Gabor, 4 second-derivative Gaussian profiles.)
Slide 24: 2D results
Sobel [Grabli02] vs. our result (8 filter profiles, 3 filter parameters, 9 light positions). More results in the paper.
Slide 25: Main steps (recap; current step: 2. highlight analysis → 3D orientation)
Slide 26: Mirror reflection
- For each pixel: at which light position does the highlight occur?
- From the mirror reflection, the segment normal is computed with ~3° accuracy [Marschner03].
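A minimal sketch of the mirror-reflection geometry: under an ideal mirror model, the reflecting normal is the bisector (half-vector) of the light and view directions. Since the true 3D position is only determined later (slide 28), the sketch assumes an approximate point, e.g. on the head ellipsoid; the function name is hypothetical.

```python
import numpy as np

def highlight_normal(light_pos, cam_pos, point):
    """Segment normal under an ideal mirror model: the bisector
    (half-vector) of the light and view directions at 'point'.
    'point' is an approximate 3D position, e.g. on the head
    ellipsoid, since the true position is recovered later."""
    L = light_pos - point
    V = cam_pos - point
    L = L / np.linalg.norm(L)
    V = V / np.linalg.norm(V)
    H = L + V
    return H / np.linalg.norm(H)
```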
Slide 27: Practical measure (figure)
Slide 28: Orientation from 2 planes
Each sweep constrains the segment to one plane; the intersection of the 2 planes gives the 3D orientation. (The 3D position is determined later.)
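In code, intersecting the two planes reduces to a cross product of their normals (the two half-vectors, one per sweep). A minimal sketch, with an assumed function name:

```python
import numpy as np

def tangent_from_two_sweeps(h1, h2):
    """Each sweep constrains the fiber tangent to the plane with
    normal h_i (the highlight half-vector); the intersection of the
    two planes is along cross(h1, h2). The sign of the result is
    arbitrary at this stage."""
    t = np.cross(h1, h2)
    n = np.linalg.norm(t)
    if n < 1e-8:
        raise ValueError("degenerate configuration: parallel planes")
    return t / n
```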
Slide 29: Main steps (recap; current step: 3. segment chaining → 3D strands)
Slide 30: Starting point of a strand
Head approximation: a 3D ellipsoid.
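A hedged sketch of seeding strand roots on such an ellipsoid. The slide does not specify how roots are placed, so the sampling below (uniform sphere directions scaled onto the ellipsoid, upper half kept as the scalp) is purely illustrative:

```python
import numpy as np

def seed_on_ellipsoid(center, radii, n, seed=0):
    """Illustrative strand-root seeding on the head ellipsoid:
    sample directions uniformly on the sphere, scale them onto the
    ellipsoid and keep the upper half as the 'scalp'. The actual
    root placement is not specified on this slide."""
    center = np.asarray(center, float)
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # uniform directions
    pts = center + v * np.asarray(radii, float)     # onto the ellipsoid
    return pts[pts[:, 2] > center[2]]               # upper half only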
Slide 31: Chaining the segments (figure: which segment comes next?)
Slide 32: Blending weights (figure)
Slide 33: Ending criterion
A strand grows until:
- it reaches the length limit (a user setting), or
- it leaves the hair volume (visual hull).
(A sketch of the full chaining loop follows.)
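Putting slides 31-33 together, a hedged sketch of the growth loop: step along the blended 3D orientation field and stop on either criterion. The callables orientation_at (standing in for merging the per-camera estimates with the blending weights of slide 32) and inside_hull are assumptions, not the paper's actual interfaces.

```python
import numpy as np

def grow_strand(start, orientation_at, inside_hull,
                step=1.0, max_length=300.0):
    """Chain segments into a strand: from 'start', repeatedly step
    along the blended 3D orientation returned by orientation_at(p).
    Growth stops at the user length limit or when the strand leaves
    the visual hull."""
    pts = [np.asarray(start, float)]
    length, prev_dir = 0.0, None
    while length < max_length:
        d = orientation_at(pts[-1])
        if d is None:                       # no reliable orientation here
            break
        d = d / np.linalg.norm(d)
        # 3D orientations are sign-ambiguous: keep the strand coherent
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d
        nxt = pts[-1] + step * d
        if not inside_hull(nxt):            # out of the hair volume
            break
        pts.append(nxt)
        prev_dir, length = d, length + step
    return np.array(pts)
```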
Slide 34: Outline
- Previous work
- Definitions and overview
- Details of the hair capture
- Results
- Conclusions
Slide 35: Results (figures)
Slide 36: Result summary
- Reflection patterns similar to the input images
- Duplication of the hairstyle
- Curly, wavy and tangled hair
- Blond, auburn and black hair
- Medium-length, long and short hair
Slide 37: Conclusions
General contributions:
- Dense 2D orientation (filter selection)
- 3D from highlights on hair
System:
- A proof of concept, sufficient to validate the approach
- Captures a whole head of hair
- Handles different hair types
Slide 38: Limitations
- Image-based approach: only the visible part is captured; occlusions (e.g. inside curls) are not handled
- Head: the ellipsoid is a poor approximation
- Setup: it makes the subject move, both during a light sweep and between viewpoints
Slide 39: Future work
Short term:
- Better setup and better head approximation
- Data structures for editing and animation
- Reflectance
Long term:
- Hair motion capture
- Extended study of filter selection
Slide 40: Thanks… Questions?
The authors thank Stéphane Grabli, Steven Marschner, Laure Heïgéas, Stéphane Guy, Marc Lapierre, John Hughes, and Gilles Debunne. The previous-work images appear courtesy of NVIDIA, Tae-Yong Kim, Wojciech Matusik, Nadia Magnenat-Thalmann, Hiroki Takahashi, and Stéphane Grabli. Rendering uses deep shadow maps kindly provided by Florence Bertails.
Slide 41: Questions (bonus slides)
- Visual hull
- Grazing angle
- 2D validation
- Pre-processing
- Post-processing
- Comparisons
Slide 42: Reference image
- Four radial sines, with discontinuities
- Wavelength: 2 pixels → aliasing
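A sketch of one such radial sine, useful because the ground-truth 2D orientation at every pixel is tangential to the circles, so estimates can be scored against a known answer. The exact layout of the paper's four-sine reference image is not given on this slide, so this is illustrative only:

```python
import numpy as np

def radial_sine(size=256, wavelength=2.0, center=(128.0, 128.0)):
    """One radial sine test pattern: the ground-truth orientation is
    tangential to the circles around 'center'. A 2-pixel wavelength
    deliberately provokes aliasing, as on the slide."""
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    r = np.hypot(xs - center[0], ys - center[1])
    return 0.5 + 0.5 * np.sin(2.0 * np.pi * r / wavelength)
```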
Slide 43: Results on the reference image
Our result: mean error 2.3°. (Figure: estimated orientations and variance map.)
Slide 44: Comparison with Sobel
Our result: mean error 2.3°; Sobel filter: mean error 17°.
Slide 45: Visual hull
90° between the viewpoints → sharp edges.
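For reference, a minimal voxel-style visual-hull test: a 3D point is kept only if it projects inside the binary hair mask of every viewpoint. The project callable and the mask format are assumptions; with only four views 90° apart, the resulting hull is coarse and shows the sharp edges mentioned above.

```python
import numpy as np

def visual_hull_points(masks, project, grid_pts):
    """Keep a 3D point only if it projects inside the binary hair
    mask (bool array) of every viewpoint. 'project(i, pts)' is an
    assumed callable mapping Nx3 points to Nx2 pixel coordinates
    (x, y) in camera i."""
    keep = np.ones(len(grid_pts), dtype=bool)
    for i, mask in enumerate(masks):
        uv = np.round(project(i, grid_pts)).astype(int)
        h, w = mask.shape
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < h))
        keep &= inside                       # outside the image: discard
        idx = np.flatnonzero(keep)
        keep[idx] &= mask[uv[idx, 1], uv[idx, 0]]
    return grid_pts[keep]
```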
Slide 46: Reliable regions
(Figure: front-facing view vs. grazing view.)
Slide 47: Frequency selection
Band filtering (difference of Gaussians) applied to the input image.
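Difference-of-Gaussians band filtering is standard and short; a sketch with assumed sigmas:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_pass(image, sigma_lo=1.0, sigma_hi=2.0):
    """Difference of Gaussians: subtracting a strongly blurred image
    from a lightly blurred one keeps the frequency band where the
    fibers live. The two sigmas are assumed values."""
    image = np.asarray(image, float)
    return gaussian_filter(image, sigma_lo) - gaussian_filter(image, sigma_hi)
```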
Slide 48: Bilateral filtering
Weighted mean over adjacent pixels (Gaussian weights), accounting for:
- spatial distance
- filter reliability (variance)
- appearance similarity (color)
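A hedged sketch of this smoothing step applied to a 2D orientation map. The three weights match the list above (spatial, variance-based reliability, color similarity); the sigmas, the wrap-around border handling via np.roll, and the grayscale color map are assumptions. Orientations are pi-periodic, so the averaging uses the doubled-angle trick to keep opposite angles from cancelling:

```python
import numpy as np

def smooth_orientations(theta, variance, color, radius=3,
                        sigma_s=2.0, sigma_v=0.1, sigma_c=10.0):
    """Bilateral-style smoothing of a 2D orientation map 'theta'
    (radians, pi-periodic). Neighbors are weighted by spatial
    distance, filter reliability (low variance = high weight) and
    color similarity ('color' assumed grayscale, same shape)."""
    cs, sn = np.cos(2 * theta), np.sin(2 * theta)
    acc_c = np.zeros_like(theta)
    acc_s = np.zeros_like(theta)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sh = lambda a: np.roll(np.roll(a, dy, axis=0), dx, axis=1)
            w = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s**2))
            w = w * np.exp(-sh(variance) / sigma_v)          # reliability
            dc = color - sh(color)
            w = w * np.exp(-dc * dc / (2 * sigma_c**2))      # similarity
            acc_c += w * sh(cs)
            acc_s += w * sh(sn)
    return 0.5 * np.arctan2(acc_s, acc_c)
```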
Slide 49: Comparison
(Figure: without variance selection vs. reference.)
Slide 50: Comparison
(Figure: without bilateral filtering vs. reference.)