Change Detection in Rolling Shutter Cameras

Presentation on theme: "Change Detection in Rolling Shutter Cameras" — Presentation transcript:

1 Change Detection in Rolling Shutter Cameras
Ph.D. seminar talk – I. Vijay Rengarajan, EE11D035. Guides: Prof. A.N. Rajagopalan and Prof. R. Aravind. September 23, 2016. Good afternoon, everyone. The title of my first Ph.D. seminar talk is "Change Detection in Rolling Shutter Cameras". It covers my work with Prof. A.N. Rajagopalan and Prof. R. Aravind. Vijay Rengarajan A P, EE11D035, Image Processing and Computer Vision Lab, Department of Electrical Engineering, IIT Madras.

2 Change Detection Find regions of change between two images
Aerial imagery: an aircraft hovering over an area to monitor changes. The observed image is compared with a reference image. The scene is considered to be flat. Main challenge: camera motion. Change detection is the task of detecting changes between two images. As you know, an image is an array of pixel values recorded by a camera. Given two images, we need to tell which pixel locations differ between them. I will discuss my work in the context of aerial imagery. An aircraft equipped with a camera maps a selected location by imaging it from above. This reference map is collected beforehand, and the images are taken carefully so that they contain no artifacts. For applications such as surveillance or environmental mapping, aircraft hover over the same region of interest to look for changes. The observed images are sent to a server to be compared with the reference image, and regions of change are detected. In this scenario the scene is considered flat, since the aircraft flies high enough that the height differences of structures on the ground can be ignored. There are quite a few challenges in such change detection. One major challenge is camera motion: it causes artifacts in the captured images, making the change detection process difficult. ---- Challenges: illumination changes, camera motion, noise. Show a simple viewpoint-change illustration: two images, one of them rotated. Show that one of the images has to be warped to "fit onto" the other image. Perhaps show a grid as well.

3 Global Shutter Cameras
All pixels expose at the same time. Diagram: sensor plane, all pixels, exposure time te, exposure open/close. To understand the artifacts produced by camera motion, we need to understand the nature of exposure in cameras. A commonly used mechanism is the global shutter. For a specified exposure time te, the sensor pixels are opened up to collect light, which is then converted to digital values that we view as an image. Here, all pixels are exposed to light at the same time. -- Show a rendered image of the aerial scene. Show a video of a handheld mobile capturing a screen exhibiting the RS effect. Show a video/GIF of an aircraft panning the ground, now with both RS and MB.

4 Rolling Shutter Cameras
Exposure of the rows starts sequentially. Diagram: sensor plane, exposure time te, top row to bottom row, total line delay Td, exposure open/close. In recent times, CMOS sensors have become prevalent. They use a shutter mechanism called rolling shutter. Unlike global shutter, each row of pixels starts its exposure sequentially. This allows the light-to-intensity conversion circuitry to be shared across rows, reducing its size. -- Show a rendered image of the aerial scene. Show a video of a handheld mobile capturing a screen exhibiting the RS effect. Show a video/GIF of an aircraft panning the ground, now with both RS and MB.
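To make the row timing concrete, here is a small sketch in symbols. The indexing convention and the uniform per-row delay are my assumptions, not stated on the slide: with M rows, exposure time te, and total line delay Td,

```latex
% Sketch of rolling shutter timing (assumed convention: M rows, uniform line delay).
t_i = (i-1)\,\frac{T_d}{M-1}, \qquad
\text{row } i \text{ is exposed over } [\,t_i,\; t_i + t_e\,], \qquad i = 1,\dots,M .
```

Every row therefore integrates light over an interval of the same length te, just shifted in time, which is what lets a moving camera imprint a different pose on each row.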

5 Rolling Shutter Cameras
Exposure during no camera motion: captured image. Exposure during camera motion: captured image. Rolling shutter cameras pose no problem when the camera is stationary: the scene is captured just as it would be with a global shutter camera. When the camera moves during the exposure of the rows, each row sees a different version of the scene, since the camera is moving during that time. This results in a skewing or wobbling effect in the captured image: what you see with your eyes is not what you get. The figure illustrates what is called the rolling shutter effect. -- Show a rendered image of the aerial scene. Show a video of a handheld mobile capturing a screen exhibiting the RS effect. Show a video/GIF of an aircraft panning the ground, now with both RS and MB.

6 Rolling Shutter Cameras
The type of distortion depends on the ratio of the total line delay and the exposure period. Diagram: exposure of top row to bottom row under camera motion. Based on the length of the row exposure compared to the total line delay, the type of distortion varies. When the exposure is very small, we see only the rolling shutter effect. If the exposure is about the same as the total line delay, we see a combination of the rolling shutter effect and motion blur. When the exposure period is very large, the line delay becomes negligible, all rows experience almost the same camera motion, and the result is the traditional global shutter motion blur. -- Show a rendered image of the aerial scene. Show a video of a handheld mobile capturing a screen exhibiting the RS effect. Show a video/GIF of an aircraft panning the ground, now with both RS and MB. Rolling shutter only (RS). Rolling shutter and motion blur (RSMB). Global shutter motion blur (GSMB).
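In terms of a single ratio (my shorthand for the slide's "ratio of total line delay and exposure period"), the three regimes can be summarised as:

```latex
% Regimes of distortion, with r = t_e / T_d (shorthand assumed, not from the slide).
r = \frac{t_e}{T_d}, \qquad
\begin{cases}
r \ll 1, & \text{rolling shutter effect only (RS)}\\[2pt]
r \approx 1, & \text{rolling shutter effect and motion blur (RSMB)}\\[2pt]
r \gg 1, & \text{rows see nearly the same poses: global shutter motion blur (GSMB)}
\end{cases}
```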

7 Rolling shutter effect
Spot the difference! Reference image, distorted image (rolling shutter effect, motion blur, change). Under such distortions, change detection is challenging. Given a reference image and a distorted image, we humans can easily spot the difference. To make a signal processor do the same, it is essential to estimate the camera motion, in this case the row-wise rolling shutter camera motion, in order to register the two images, that is, to align them, and then to detect changes. In the process, it is important to ignore the valid geometric changes of the rolling shutter effect and the photometric changes of motion blur. Registration of the two images: estimate camera motion for every row. Detection of changes: ignore valid geometric and photometric changes.

8 Simple differencing will not work
Reference image, distorted image. Simple differencing obviously will not work: pixel-to-pixel differencing results in large errors. In this example, even a simple geometric registration, such as translating or rotating one of the images to align it with the other, will not work. Registration of the two images: estimate camera motion for every row. Detection of changes: ignore valid geometric and photometric changes.

9 Simple differencing will not work
Reference image, distorted image.

10 Simple differencing will not work
Reference image, distorted image, difference image.

11 Simple differencing will not work
Reference image, distorted image, detected changes.

12 Traditional works model global motion blur
Blurred image, reference image. Traditional works assume global shutter cameras and model motion blur as an averaging of different warps, or geometric transformations, of a latent image during the exposure time, based on the camera motion. f is the latent reference image; g is the average of its warped versions. -- Equation of motion blur, then row-wise motion blur (RSMB); mention the special cases of RS only and MB only. Camera motion, exposure time. O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, "Non-uniform deblurring for shaken images", International Journal of Computer Vision, 2012.
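Written out, the global shutter blur model referred to here averages warped copies of the latent image over the exposure. This is a sketch in my notation, with p(t) the camera pose at time t and f_{p(t)} the reference image warped by that pose:

```latex
% Global shutter motion blur: average of pose-warped copies of the latent image f.
g = \frac{1}{t_e} \int_{0}^{t_e} f_{p(t)} \, dt .
```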

13 We introduce row-wise motion blur for rolling shutter
Rolling shutter motion blur over rows. We introduce a row-wise averaging model for motion blur to account for the local exposure differences in rolling shutter cameras. Each row sees a unique set of camera poses, since the exposure intervals of the rows are different. The superscript i denotes the ith row of an image: the ith row of g is the average of the ith rows of the warped versions of f, and each row i has a unique exposure interval. -- Equation of motion blur, then row-wise motion blur (RSMB); mention the special cases of RS only and MB only.
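A sketch of the row-wise model just described, in the same notation, with the exposure interval of row i written as [t_i, t_i + t_e]:

```latex
% Row-wise motion blur: row i averages only the poses seen during its own exposure.
g^{(i)} = \frac{1}{t_e} \int_{t_i}^{t_i + t_e} f^{(i)}_{p(t)} \, dt , \qquad i = 1,\dots,M .
```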

14 We introduce row-wise motion blur for rolling shutter
Rolling shutter motion blur over rows; ith row exposure; discretised model; pose space S; weighted sum of warped versions. The continuous-time formulation can be written in discrete form by discretising the camera pose space. A camera can move in six dimensions, three translations and three rotations, and this motion is continuous. We discretise each of the six dimensions and restrict the pose space to a finite set of discrete 6D pose vectors, denoted S. Each row takes its camera poses from this set S. The averaging can then be written in matrix form: each observed blurred row is a weighted combination of rows of warped versions of f. Now that we have modelled row-wise blur, how do we model the changes for our application? -- Equation of motion blur, then row-wise motion blur (RSMB); mention the special cases of RS only and MB only.

15 We introduce row-wise motion blur for rolling shutter
Rolling shutter motion blur over rows; ith and (i+1)th row exposures; discretised model; weighted sum of warped versions; pose space S. Rows i and i+1 have shifted exposure intervals, so each draws its own pose weights from the same pose space S.

16 We introduce row-wise motion blur for rolling shutter
Rolling shutter motion blur over rows; discretised model; weighted sum of warped versions. Each distorted image row is obtained by a matrix multiplication: the camera pose weight vector applied to the rows of the warped reference image. How do we model changes in addition to row-wise blur?
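Before turning to changes, the discretised blur-only model itself can be sketched as follows (the weight and matrix names are my own): ω^(i) holds one non-negative weight per pose τ in S, and F^(i) collects, as its columns, the ith rows of the reference image warped by each pose:

```latex
% Discretised row-wise blur: each distorted row is a weighted sum of warped rows.
g^{(i)} = \sum_{\tau \in S} \omega^{(i)}_{\tau} \, f^{(i)}_{\tau}
        = F^{(i)} \, \boldsymbol{\omega}^{(i)},
\qquad \omega^{(i)}_{\tau} \ge 0 .
```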

17 Model change as an additive component
Distorted row = blur-registered reference row + changes. Camera pose weight vector, distorted image row, rows of the warped reference image, identity basis for changes, change weight vector. We model the change as an additive component χ; each row i has its own χ. The system matrix is augmented with an identity matrix to accommodate the χ vector. The first part of the weight vector (in orange) is the camera pose vector, which weights each camera pose by the amount of time the camera stays in that pose. The second part is the change vector, which encodes the locations and intensities of the changes. Given the reference image and the RSMB image, our task is to estimate the camera poses and the changes for each row, that is, to solve the row-wise linear system for the stacked pose-and-change weight vector. -- Equation of motion blur, then row-wise motion blur (RSMB); mention the special cases of RS only and MB only. How to estimate the pose and change weights, given the distorted and reference images?
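One way to write the augmented row-wise system (my notation; the slide's own symbols are not preserved in the transcript) is with the change vector χ^(i) entering through an identity basis:

```latex
% Row-wise system with an additive change term chi^{(i)} (identity basis I_N over
% the N pixels of the row); the stacked unknown is estimated per row.
g^{(i)} = F^{(i)} \boldsymbol{\omega}^{(i)} + I_N \, \boldsymbol{\chi}^{(i)}
        = \big[\, F^{(i)} \;\; I_N \,\big]
          \begin{bmatrix} \boldsymbol{\omega}^{(i)} \\ \boldsymbol{\chi}^{(i)} \end{bmatrix}
        = B^{(i)} \, \boldsymbol{\xi}^{(i)} .
```

Estimating ξ^(i) = [ω^(i); χ^(i)] from g^(i) and the reference image then gives both the row's camera motion and its changes.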

18 Exploit sparsity of camera motion and changes
Data cost: photometric and geometric registration. Priors: sparsity of camera motion, sparsity of changes, non-negativity of pose weights. The system of equations is underdetermined, so to restrict the solution space we impose priors: the camera motion is sparse in the pose space, the changes are sparse in the image, and the pose weights are non-negative.
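As a concrete illustration of how one row's estimate could be computed, here is a minimal proximal-gradient (ISTA-style) sketch in Python. It is not the solver used in the talk; the function name, the regularisation weights, and the choice of algorithm are my own assumptions. It only shows an l2 data cost with l1 sparsity priors and the non-negativity constraint on the pose weights.

```python
import numpy as np

def solve_row(B, g, n_pose, lam_pose=1e-2, lam_change=1e-2, n_iter=500):
    """Hypothetical sketch: minimise
    0.5*||g - B @ xi||^2 + lam_pose*||omega||_1 + lam_change*||chi||_1
    subject to omega >= 0, where xi = [omega; chi] and omega is xi[:n_pose]."""
    xi = np.zeros(B.shape[1])
    # Step size 1/L, with L an upper bound on the Lipschitz constant of the gradient.
    step = 1.0 / (np.linalg.norm(B, 2) ** 2 + 1e-12)
    for _ in range(n_iter):
        grad = B.T @ (B @ xi - g)            # gradient of 0.5*||g - B xi||^2
        z = xi - step * grad
        omega, chi = z[:n_pose], z[n_pose:]
        # Proximal steps: non-negative soft-threshold for the pose weights,
        # plain soft-threshold for the signed change intensities.
        omega = np.maximum(omega - step * lam_pose, 0.0)
        chi = np.sign(chi) * np.maximum(np.abs(chi) - step * lam_change, 0.0)
        xi = np.concatenate([omega, chi])
    return xi[:n_pose], xi[n_pose:]          # estimated pose weights and change row
```

Calling `solve_row` once per row, with B built from the warped reference rows and the identity block, would yield a per-row camera pose weight vector and a per-row change map, which mirrors the joint estimation described on this slide.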

19 Results
Reference image, RSMB distorted image, registered image, detected changes.

20 Results – reference image
Reference image, RSMB image, registered image, detected changes.

21 Results – distorted image
Reference image, RSMB image, registered image, detected changes.

22 Results – registered image
Reference image, RSMB image, registered image, detected changes.

23 Results – detected changes
Reference image, RSMB image, registered image, detected changes.

24 Our algorithm estimates a non-uniformly warped grid
Reference image Estimated camera motion Registered image

25 Results
Reference image, distorted image, registered image, detected changes.

26 Results
Reference image, distorted image, registered image, detected changes.

27 Comparison with sequential framework
Deblur-and-Register framework: (1) global motion deblurring of the RSMB image to obtain a deblurred RSMB image, then (2) register the rolling shutter effect against the reference image and detect changes.

28 Comparison with sequential framework
Deblur-and-Register framework: (1) global motion deblurring of the RSMB image (Whyte et al. 2012) to obtain a deblurred image, then (2) register the rolling shutter effect against the reference image (Liang et al. 2008; Ringaby and Forssen 2012) and detect changes, compared with our method. O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, "Non-uniform deblurring for shaken images", International Journal of Computer Vision, 2012. C. Liang, L. Chang, and H. Chen, "Analysis and compensation of rolling shutter effect", IEEE Transactions on Image Processing, 2008. E. Ringaby and P.E. Forssen, "Efficient video rectification and stabilisation for cell-phones", International Journal of Computer Vision, 2012.

29 Comparison with sequential framework
Register-and-Reblur framework: (1) register the rolling shutter effect in the RSMB image to obtain an RS-rectified image, then (2) perform global motion blur registration against the reference image and detect changes.

30 Comparison with sequential framework
Register-and-Reblur framework: (1) register the rolling shutter effect in the RSMB image (Grundmann et al. 2012) to obtain an RS-rectified image, then (2) perform global motion blur registration against the reference image (Whyte et al. 2012) to obtain a reblurred image and detect changes, compared with our method. M. Grundmann, V. Kwatra, D. Castro, and I. Essa, "Calibration-free rolling shutter removal", International Conference on Computational Photography, 2012. O. Whyte, J. Sivic, A. Zisserman, and J. Ponce, "Non-uniform deblurring for shaken images", International Journal of Computer Vision, 2012.

31 Comparison with RSMB deblurring framework
Su and Heidrich (2015) deblur under the assumption of a 2D parametric camera motion, ignoring in-plane rotations. RSMB distorted image, RSMB deblurred image, detected changes by Su and Heidrich, our method, reference image. S. Su and W. Heidrich, "Rolling shutter motion deblurring", IEEE Conference on Computer Vision and Pattern Recognition, 2015.

32 So far: model row-wise motion blur for rolling shutter cameras; model change as an additive component; jointly estimate camera motion and changes. Only for flat 2D scenes!

33 Aerial Imagery of 3D Scenes
Imaging from drones is now prevalent. The scene is no longer flat. Challenge: motion registration at all depths. -- Show an animation: a ground plane, a few buildings, perhaps a well on the ground. Show a video or GIF panning the ground from a drone; the drone is not far above the building tops. The rendered video should show different velocities for different objects based on their heights.

34 Layered 3D Scene Model
Rolling shutter motion blur model for a 3D scene: the image contributed by each layer for a camera pose; the 2D image formed from all layers for that camera pose; and the row-wise averaging over all camera poses seen during the exposure.
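A possible way to write the layered model down, reconstructed from the slide text (the talk's actual symbols are not in the transcript), is to composite the pose-warped layers into a single image per pose and then reuse the row-wise averaging from before:

```latex
% Layered RSMB model (hedged reconstruction). f_{l,tau}: layer l warped by pose tau;
% alpha_{l,tau}: its visibility mask, chosen so each pixel takes the nearest occupied
% layer; f_tau: the composite 2D image for pose tau; g^{(i)}: row i of the RSMB image.
f_{\tau} = \sum_{\ell=0}^{L} \alpha_{\ell,\tau} \, f_{\ell,\tau},
\qquad
g^{(i)} = \sum_{\tau \in S} \omega^{(i)}_{\tau} \, f^{(i)}_{\tau} .
```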

35 Motion of a 3D Scene Captured image at one camera pose

36 Motion of a 3D Scene Images sensed during camera motion

37 Motion of a 3D Scene Blurred image
Different blur lengths at different layers

38 Spot the difference! – 3D Scene Version
Reference image, distorted image. Registration of the two images: estimate camera motion for every row and at every depth layer. Detection of changes: ignore valid geometric and photometric changes.

39 Change Detection for 3D Scenes
(1) Background registration, assuming a flat scene: the resultant changes include both 3D objects and the actual changes. (2) Object filling: identify the 3D object regions. (3) Layer registration: register the resultant change regions using a scaled pose space of the background; regions that do not register at any relative depth correspond to the final changes. -- Detect 3D objects + changes, fill the objects, register them one by one; what is left over is the final change.

40 Change Detection for 3D Scenes
(1) Background registration assuming a flat scene; (2) object filling to identify 3D object regions; (3) layer registration at a scaled pose space of the background; regions that do not register at any relative depth are the final changes.

41 Change Detection for 3D Scenes
(1) Background registration assuming a flat scene; (2) object filling to identify 3D object regions; (3) layer registration at a scaled pose space of the background; regions that do not register at any relative depth are the final changes.

42 Change Detection for 3D Scenes
(1) Background registration assuming a flat scene; (2) object filling to identify 3D object regions; (3) layer registration at a scaled pose space of the background; regions that do not register at any relative depth are the final changes.

43 Scaled motion at different depth layers
All depths see the same camera trajectory, but their motions on the image plane differ. Motion at the image plane: for the background, translations and rotations; for layer ℓ, the translations are multiplied by 1/dℓ while the rotations remain the same, where dℓ < 1 is the relative depth of layer ℓ (with the background at relative depth 1).
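In symbols (a sketch in my notation for fronto-parallel layers): if a background pose consists of an in-plane rotation R and an image-plane translation (t_x, t_y), the corresponding pose for a layer at relative depth dℓ is

```latex
% Scaled image-plane motion for a layer at relative depth d_l < 1 (background at 1):
% translations are magnified by 1/d_l, rotations are unchanged.
(R,\, t_x,\, t_y) \;\longmapsto\; \Big( R,\; \tfrac{t_x}{d_\ell},\; \tfrac{t_y}{d_\ell} \Big).
```

This is why step 3 of the pipeline only needs to sweep a scalar relative depth and rescale the already-estimated background pose space, rather than re-estimating a full 6D motion per layer.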

44 Change Detection for 3D Scenes
(1) Background registration assuming a flat scene; (2) object filling to identify 3D object regions; (3) layer registration at scaled versions of the background motion; regions that do not register at any relative depth are the final changes.

45 Change Detection for 3D Scenes - Example
Clean image

46 Change Detection for 3D Scenes - Example
Ground-truth depth map. The lighter the gray, the closer to the camera.

47 Change Detection for 3D Scenes - Example
RSMB distorted image

48 Change Detection for 3D Scenes - Example
Reference image

49 Change Detection for 3D Scenes - Example
RSMB distorted image

50 Change Detection for 3D Scenes - Example
1 Background registered image

51 Change Detection for 3D Scenes - Example
Detected changes after background registration

52 Change Detection for 3D Scenes - Example
2 Extracted object regions

53 Change Detection for 3D Scenes - Example
Extracted object regions

54 Change Detection for 3D Scenes - Example
Reference image Extracted object regions

55 Change Detection for 3D Scenes - Example
Registration error at different relative depths Reference image Extracted object regions

56 Change Detection for 3D Scenes - Example
Reference image Extracted object regions Final changes

57 Results Reference image Distorted image Final changes
The lighter the gray, the closer to the camera. The red regions denote changes.

58 Results
(1) Background registered image and changes after background registration; (2) object filling; (3) detected changes. Reference image, distorted image.

59 Comparison with locally adaptive registration
Reference image, distorted image. Locally adaptive registration methods: Linger and Goshtasby (2015), Zaragoza et al. (2014), compared with our method. M. Linger and A. Goshtasby, "Aerial image registration for tracking", IEEE Transactions on Geoscience and Remote Sensing, April 2015. J. Zaragoza, T.J. Chin, Q.H. Tran, M. Brown, and D. Suter, "As-projective-as-possible image stitching with moving DLT", IEEE Transactions on Pattern Analysis and Machine Intelligence, July 2014.

60 Results – Drone Imaging
Reference image, distorted image, registered image, depth map and detected changes.

61 Results – Drone Imaging (reference image)
Reference image, distorted image, registered image, detected changes.

62 Results – Drone Imaging (distorted image)
Reference image, distorted image, registered image, detected changes.

63 Results – Drone Imaging (background registered image)
Reference image, distorted image, registered image, detected changes.

64 Results – Drone Imaging (depth map and detected changes)
Reference image, distorted image, registered image, detected changes.

65 Results – Drone Imaging
Reference image, distorted image, registered image, depth map and detected changes.

66 Results – Non-layered 3D scene
Reference image Distorted image Registered image Detected changes

67 Results – Non-layered 3D scene
Reference image Distorted image Registered image Detected changes

68 Rolling shutter cameras need special motion model
Row-wise motion blur model. Sparsity priors for joint motion estimation and change detection. Layered depth model to tackle 3D scenes. Journals: Vijay Rengarajan, A.N. Rajagopalan, R. Aravind, and Guna Seetharaman, "Image Registration and Change Detection under Rolling Shutter Motion Blur", submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2016. Conferences: Vijay Rengarajan, A.N. Rajagopalan, and R. Aravind, "Change Detection in the Presence of Motion Blur and Rolling Shutter Effect", European Conference on Computer Vision (ECCV), Zurich, Switzerland, September 2014. Vijay Rengarajan, Sheetal B. Gupta, A.N. Rajagopalan, and Guna Seetharaman, "Illumination Robust Change Detection with CMOS Imaging Sensors", SPIE Defense + Security Symposium, International Society for Optics and Photonics, Baltimore, Maryland, USA, April 2015.

69 Comparing straight and curved roads
Reference image RSMB image Registered image Detected changes

70 Results Reference image Distorted image Final changes

71 Backup

72 Optimization speedup: exploit the continuity of camera motion; build each row's pose space from its neighbouring row's estimate.
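One way to formalise the neighbour-based pose space (my formalisation; the slide only names the idea):

```latex
% Speedup sketch: because camera motion is continuous, restrict row (i+1)'s pose
% space to a small box around the pose estimated for row i (delta is a search radius).
S^{(i+1)} = \big\{ \tau \;:\; \|\tau - \hat{\tau}^{(i)}\|_{\infty} \le \delta \big\} .
```

Each row then searches only a small neighbourhood of the previous row's estimate instead of the full discretised 6D pose space.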

73 Background Registration
3D scene model with changes: model the scene as background and non-background layers, and register only the background layer. -- Detect 3D objects + changes, fill the objects, register them one by one; what is left over is the final change.

74 Change Detection for 3D Scenes
(1) Background registration; (2) object filling to identify 3D object regions; (3) layer registration at a scaled pose space of the background; regions that do not register at any relative depth are the final changes.

75 Results – Non-layered 3D scene
Reference image

76 Results – Non-layered 3D scene
Distorted image

77 Results – Non-layered 3D scene
Registered image

