
1 MURI review meeting, 09/21/2004: Dynamic Scene Modeling. Video and Image Processing Lab, University of California, Berkeley. Christian Frueh, Avideh Zakhor

2 Dynamic Scene Modeling
- 4D capture of a dynamic scene: 3D geometry/depth + time
- Applications:
  - Battlefield scenario
  - Event analysis, modeling, and visualization
  - Action classification and recognition

3 Battlefield Scenario

4 (Battlefield scenario illustration)

5 Objectives
- Minimal interference with objects in the scene, especially in the visible domain (humans)
- Capture 3D depth as well as intensity
- Capture, model, and reconstruct a time-varying scene at video rate
- Off-the-shelf components, low cost: e.g. camcorders, halogen lamp
- Experiments:
  - Indoors
  - Offline processing

6 Proposed Acquisition Setup: rotating mirror, IR line laser, IR camera, vertical IR line projector, VIS-light camera

7 Proposed Approach
- Active system
- Structured infrared (IR) light for depth estimation, invisible to the human eye
  - Project a static pattern of vertical IR stripes
  - Sweep a horizontal IR line vertically
  - Capture with a camcorder + IR filter
  - Depth via triangulation
- Synchronized video camera for texture acquisition
- 3D arena equipped with stationary cameras/projectors

8 Prototype System: rotating mirror, IR line laser, digital camcorder with IR filter, halogen lamp with IR filter, VIS-light camera, PC, sync electronics, reference object for H-line, roast with vertical slices

9 Prototype System: H-laser with polygonal mirror, stripe pattern for V-lines, halogen lamp with IR filter, control PC, video sync generator, video camera, camcorder with IR filter

10 Depth From Structured Light
- Principle: triangulation (laser ray / light plane, camera, baseline, object)
- A single light plane yields depth along one line only
- How can we get dense depth? Use multiple parallel lines
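The ray/light-plane triangulation can be written in a few lines. A minimal sketch, assuming a pinhole camera with intrinsic matrix K and a calibrated light plane n·X = d in camera coordinates (all names are illustrative):

```python
import numpy as np

def depth_from_light_plane(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the laser light plane.

    pixel   : (u, v) image coordinates of a point lit by the laser
    K       : 3x3 camera intrinsic matrix
    plane_n : plane normal in camera coordinates (3-vector)
    plane_d : plane offset, so the plane is  plane_n . X = plane_d
    Returns the 3D point X in camera coordinates (camera center at the origin).
    """
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction of the viewing ray
    # Points on the ray are X = t * ray; substitute into the plane equation.
    t = plane_d / (plane_n @ ray)
    return t * ray                                   # depth is the Z component of X
```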

11 Depth From Structured Light. Problem: how to identify/distinguish the individual lines?

12 Identify V-lines Via the Horizontal Line
- Sweep a horizontal laser line across the scene (rotating mirror + line laser), e.g. at 1 Hz
- There is only one horizontal line, so it is easy to identify, and depth along it can be computed
- Depth at intersections of the horizontal (H) and vertical (V) lines is therefore known
- 2 points + vertical direction -> V-plane equation -> depth (see the sketch below)
- Intra-frame tracking: track V-lines within the frame
- Problem: depth only along some V-lines
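One way to read the "2 points + vertical" step: the stripe plane contains the known H/V intersection point, a second known point, and the vertical direction. Treating the second point as the stripe projector's center of projection is an assumption here (the slide only says "2 points"):

```python
import numpy as np

def v_plane_from_points(p1, p2, up=np.array([0.0, 1.0, 0.0])):
    """Plane equation n . X = d for a vertical stripe plane.

    p1, p2 : two known 3D points lying in the plane
             (e.g. an H/V intersection point and the projector center)
    up     : vertical direction contained in the plane
    """
    n = np.cross(p2 - p1, up)     # normal is perpendicular to both in-plane directions
    n /= np.linalg.norm(n)
    d = n @ p1
    return n, d
```

Depth for any pixel on that V-line then follows from the ray/plane intersection sketched above.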

13 Track V-lines Across Frames
- The H-line sweeps across the scene, so every V-line intersects the H-line in some frame
- Track V-lines across frames: for each V-line, search for identified V-lines in the previous/future frame around the same location
- Use the V-line plane equation from the previous/future frame
- Inter-frame tracking (example: 8 frames later)

14 Captured Video Streams
- IR video stream: frame rate 30 Hz (NTSC)
- VIS video stream: frame rate 10 Hz, synchronized with the IR video stream

15 Overview of Processing Steps (block diagram)
- IR video stream: H-line detection, V-line detection, foreground identification, intra-frame tracking, inter-frame tracking
- VIS video stream: foreground identification & VIS projection
- Depth inter-/extrapolation -> dense depth frames

16 H-Line Detection (1). How to determine the current H-light plane equation? Find the H-line spot on a reference object.

17 H-Line Detection (2). Apply a horizontal edge filter to the IR frame. Problem: some wrinkles appear like H-lines.
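A horizontal edge filter could be a Sobel derivative in the y direction; a minimal OpenCV sketch (not necessarily the exact filter used here), with an illustrative threshold:

```python
import cv2

def h_line_candidates(ir, thresh=60):
    """Horizontal-edge response of a grayscale IR frame, as a binary candidate mask."""
    # The derivative in y responds strongly to horizontal intensity edges such as the H-line.
    gy = cv2.Sobel(ir, cv2.CV_32F, dx=0, dy=1, ksize=5)
    gy = cv2.convertScaleAbs(gy)
    _, mask = cv2.threshold(gy, thresh, 255, cv2.THRESH_BINARY)
    return mask
```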

18 H-Line Detection (3)
- The H-line is at a different location in every frame
- Wrinkles are roughly at the same location across two frames: limited motion
- Solution: an H-feature is accepted as an H-line only if its location changes (example: frames 371 and 372); a sketch of this test follows below
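The "location must change" test might look like this sketch, which suppresses candidates that overlap their position in the previous frame (the dilation tolerance is an assumption):

```python
import cv2
import numpy as np

def moving_h_candidates(cand_prev, cand_curr, dilate_px=3):
    """Keep only H-line candidates that changed location since the last frame.

    cand_prev, cand_curr : binary candidate masks from the horizontal edge filter
    Static responses (wrinkles, folds) overlap their previous position and are removed.
    """
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    static = cv2.dilate(cand_prev, kernel)            # tolerate a few pixels of jitter
    return cv2.bitwise_and(cand_curr, cv2.bitwise_not(static))
```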

19 H-Line Detection (4): before vs. after filtering

20 H-Line Detection: Result

21 V-Line Detection (1): start with the infrared image

22 V-Line Detection (2): apply a vertical edge filter

23 V-Line Detection (3): thin out the vertical edges

24 V-Line Detection (4): track the vertical edges
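A rough sketch of the three V-line steps (vertical edge filter, per-row thinning, row-to-row tracking); the peak-per-row thinning and the column-linking tolerance are illustrative choices, not necessarily the exact procedure:

```python
import cv2
import numpy as np

def detect_v_lines(ir, grad_thresh=40, link_tol=2):
    """Detect thin vertical stripe candidates in a grayscale IR frame.

    Returns a list of V-line candidates, each a list of (row, col) pixels.
    """
    # 1) Vertical edge filter: the derivative in x responds to vertical stripes.
    gx = cv2.convertScaleAbs(cv2.Sobel(ir, cv2.CV_32F, dx=1, dy=0, ksize=5))

    lines, open_lines = [], []          # open_lines: [last_col, pixel_list]
    for r in range(gx.shape[0]):
        row = gx[r].astype(np.float32)
        # 2) Thinning: keep only local maxima in this row above the threshold.
        cols = [c for c in range(1, len(row) - 1)
                if row[c] >= grad_thresh and row[c] >= row[c - 1] and row[c] > row[c + 1]]

        # 3) Tracking: extend an open line if a peak lies within link_tol columns of it.
        still_open = []
        for c in cols:
            hit = next((ln for ln in open_lines if abs(ln[0] - c) <= link_tol), None)
            if hit is not None:
                open_lines.remove(hit)
            else:
                hit = [c, []]
            hit[0] = c
            hit[1].append((r, c))
            still_open.append(hit)
        lines.extend(ln[1] for ln in open_lines)   # lines with no continuation end here
        open_lines = still_open
    lines.extend(ln[1] for ln in open_lines)
    return lines
```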

25 Clip V-lines to “Active Area” (1): background differencing

26 Clip V-lines to “Active Area” (2): difference thresholding

27 Clip V-lines to “Active Area” (3): region defragmentation via segmentation & majority voting => IR-active regions

28 Clip V-lines to “Active Area” (4): clipping of V-lines to the IR-active regions
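A sketch of the active-area pipeline; the slides' segmentation and majority-voting defragmentation is replaced here by a simple minimum-area filter on connected components, with illustrative thresholds:

```python
import cv2
import numpy as np

def ir_active_mask(ir_frame, ir_background, diff_thresh=25, min_area=500):
    """Background differencing -> thresholding -> region cleanup (IR-active regions)."""
    diff = cv2.absdiff(ir_frame, ir_background)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Keep only sufficiently large connected regions to remove speckle.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    clean = np.zeros_like(mask)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            clean[labels == i] = 255
    return clean

def clip_v_lines(v_lines, active_mask):
    """Discard V-line pixels that fall outside the IR-active regions."""
    return [[(r, c) for (r, c) in line if active_mask[r, c] > 0] for line in v_lines]
```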

29 Clip V-lines to “Active Area”: result (V-lines)

30 Depth Estimation for V-lines
- Search for the intersection point with the H-line: for every point on a V-line, search for an H-line point in its proximity
- Choose the closest H-line point for the light plane computation
- Intra-frame tracking: track the V-line in the image and compute depth for each of its pixels
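Putting the pieces together for one identified V-line, reusing the depth_from_light_plane and v_plane_from_points helpers sketched above; treating the stripe projector's center as the second plane point is an assumption:

```python
import numpy as np

def v_line_depth(v_line, h_line_pixels, h_line_points3d, projector_center, K):
    """Depth for every pixel of one identified V-line (intra-frame step).

    v_line           : list of (row, col) pixels of the V-line
    h_line_pixels    : Nx2 array of (row, col) H-line pixels in the same frame
    h_line_points3d  : Nx3 array of the corresponding triangulated 3D points
    projector_center : assumed 3D center of projection of the vertical stripe projector
    K                : IR camera intrinsic matrix
    """
    # Closest H-line pixel to any pixel of this V-line -> intersection candidate.
    v = np.asarray(v_line, dtype=np.float32)
    d = np.linalg.norm(h_line_pixels[None, :, :] - v[:, None, :], axis=2)
    _, hi = np.unravel_index(np.argmin(d), d.shape)

    # Vertical stripe plane through the intersection point and the projector center.
    n, off = v_plane_from_points(h_line_points3d[hi], projector_center)

    # Depth for every V-line pixel by intersecting its viewing ray with that plane.
    return [depth_from_light_plane((c, r), K, n, off) for (r, c) in v_line]
```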

31 Intra-Frame Tracking: depth from intersection with H-lines

32 Inter-Frame Tracking
- Object moves forward -> lines shift right; object moves backwards -> lines shift left
- If the V-line pattern on the object shifts by less than half the line spacing, V-lines can be tracked across frames
- For each unidentified V-line, search within half the line spacing for an identified V-line in the previous or subsequent frame
- If found, use its light plane equation
(Figure: vertical laser plane, moving object, camera, at times t0, t1, t2)
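A sketch of the inter-frame search; the half-spacing threshold follows the slide, while representing each V-line by its mean image column and the data layout are illustrative choices:

```python
def propagate_v_line_ids(unidentified, identified_other, line_spacing_px):
    """Inter-frame tracking: give an unidentified V-line the plane of a nearby
    identified V-line from the previous (or subsequent) frame.

    unidentified     : list of dicts {'col': mean column, 'plane': None}
    identified_other : list of dicts {'col': mean column, 'plane': (n, d)}
    """
    max_shift = 0.5 * line_spacing_px
    for line in unidentified:
        best = min(identified_other,
                   key=lambda ln: abs(ln['col'] - line['col']),
                   default=None)
        if best is not None and abs(best['col'] - line['col']) <= max_shift:
            line['plane'] = best['plane']      # reuse the light plane equation
    return unidentified
```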

33 Inter-Frame Tracking, Forward Direction: + depth inferred from previous frames' V-lines

34 Inter-Frame Tracking, Forward + Backward: + depth inferred from future frames' V-lines

35 Inter-Frame Tracking: intra-frame only vs. inter-frame forward only vs. inter-frame forward and backward

36 Resulting Depth for V-lines

37 Dense Depth From Sparse V-Lines
- Depth lines are sparse: no values between lines, areas without depth information, inaccurate silhouette
- Ideally: a depth value for every pixel in the VIS image
- Depth frame -> VIS frame: project the depth lines into the visible image
- Accurate silhouette from the VIS image

38 Projected V-Lines onto VIS Frames: use the depth information to project the V-lines into the visible domain
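Projecting the triangulated V-line points into the VIS frame is a standard rigid transform plus pinhole projection; R, t and K_vis denote assumed IR-to-VIS calibration parameters:

```python
import numpy as np

def project_to_vis(points3d_ir_cam, R, t, K_vis):
    """Project 3D points (in the IR camera frame) into the VIS camera image.

    R, t  : rotation / translation from the IR camera frame to the VIS camera frame
    K_vis : intrinsic matrix of the VIS camera
    Returns Nx2 pixel coordinates (u, v).
    """
    P = np.asarray(points3d_ir_cam) @ R.T + t   # transform into the VIS camera frame
    uvw = P @ K_vis.T                           # apply the intrinsics
    return uvw[:, :2] / uvw[:, 2:3]             # perspective division
```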

39 Foreground/Background Separation in VIS Frames: background subtraction followed by morphological operations/segmentation
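A minimal version of the VIS foreground step (background subtraction, thresholding, morphological open/close); thresholds and kernel size are illustrative:

```python
import cv2

def vis_foreground_mask(vis_frame, vis_background, diff_thresh=30):
    """Foreground mask in the visible video via background subtraction and
    simple morphological cleanup (open removes speckle, close fills holes)."""
    diff = cv2.absdiff(cv2.cvtColor(vis_frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(vis_background, cv2.COLOR_BGR2GRAY))
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```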

40 Movies: projected V-lines / VIS-active areas

41 Dense Depth: interpolate/extrapolate to dense depth within the marked foreground area
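One way to realize the inter-/extrapolation, using SciPy's griddata as a stand-in: linear interpolation inside the sample hull and nearest-neighbour fill elsewhere, restricted to the VIS foreground mask:

```python
import numpy as np
from scipy.interpolate import griddata

def dense_depth(sparse_uv, sparse_z, fg_mask):
    """Fill the foreground mask with depth by interpolating the sparse V-line depths.

    sparse_uv : Nx2 pixel coordinates (u, v) of projected V-line points
    sparse_z  : N depth values at those pixels
    fg_mask   : binary VIS foreground mask (H x W)
    """
    ys, xs = np.nonzero(fg_mask)
    targets = np.stack([xs, ys], axis=1)
    # Linear interpolation inside the convex hull of the samples ...
    z = griddata(sparse_uv, sparse_z, targets, method='linear')
    # ... and nearest-neighbour values as a crude extrapolation elsewhere.
    z_near = griddata(sparse_uv, sparse_z, targets, method='nearest')
    z = np.where(np.isnan(z), z_near, z)

    depth = np.full(fg_mask.shape, np.nan, dtype=np.float32)
    depth[ys, xs] = z
    return depth
```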

42 Dense Depth: depth along V-lines vs. dense depth

43 Results: depth video / visible video

44 Overview of Processing Steps (recap of slide 15)
- IR video stream: H-line detection, V-line detection, foreground identification, intra-frame tracking, inter-frame tracking
- VIS video stream: foreground identification & VIS projection
- Depth inter-/extrapolation -> dense depth frames

45 System Parameters and Trade-offs
Camera:
- Ideally: short shutter time to avoid motion blur
- Limit: sensitivity -> noise; brightness and stripe contrast
H-line:
- Ideally: fast sweep, for a small delay in V-line identification
- Limit: camera shutter time -> motion blurring -> wide H-line

46 System Parameters and Trade-offs
V-lines:
- Ideally: many V-lines, for dense depth reconstruction
- Limits: (a) camera resolution -> intra-frame tracking; (b) maximum object velocity -> inter-frame tracking
Light:
- Ideally: monochromatic IR light with narrow bandwidth to reduce noise
- Limits: cheap halogen lamp as light source; camera sensitivity

47 Future Work
- Extension to outdoors
- Multiple capturing stations: scene from all sides; potential interference of projected patterns
- Extension to a portable system
- Improvements in processing: consistency, object constraints, code optimization for speed-up
- Rendering: dynamic VRML model? custom renderer for interactive exploration?

