EEC-693/793 Applied Computer Vision with Depth Cameras

EEC-693/793 Applied Computer Vision with Depth Cameras Lecture 7 Wenbing Zhao wenbing@ieee.org

Outline
- Depth data stream
- How depth measurement is done
- Capture and process depth data
- Classes related to depth data
- Getting the distance from a particular pixel

Depth Data Stream
- The depth data stream consists of a sequence of depth image frames.
- Each depth image frame is a 16-bit greyscale image.
- Viewable range: 43 degrees vertical, 57 degrees horizontal.
- A depth pixel contains the following: the (x, y) coordinates give the location of the object in the depth sensor's view, and the value of the pixel at (x, y) gives the distance between the object and the Kinect sensor in millimeters.

Depth Data Stream
- The depth data stream supports 640x480 at 30 fps, 320x240 at 30 fps, and 80x60 at 30 fps.
- Kinect depth sensing range: up to 13.1 feet (4 meters), and as close as 40 cm.

How the Depth Data is Obtained
- Depth frames can be obtained through either the event model (attach a handler to DepthFrameReady, as in the rest of this lecture) or the polling model.
- In the polling model, the application opens a channel for the stream and, whenever it needs a frame, sends a request to get the next frame (see the sketch below).
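A minimal polling sketch, assuming a started KinectSensor named sensor with its depth stream enabled and no DepthFrameReady handler attached; OpenNextFrame waits up to the given number of milliseconds for the next frame and returns null on timeout:

    // Poll for the next depth frame instead of relying on the event model.
    using (DepthImageFrame frame = sensor.DepthStream.OpenNextFrame(100))
    {
        if (frame != null)
        {
            // Copy the raw 16-bit samples out of the frame for processing.
            short[] pixels = new short[frame.PixelDataLength];
            frame.CopyPixelDataTo(pixels);
        }
    }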

Stereo Triangulation
- Stereo triangulation is an analysis technique for computing the 3D position of points in an image frame.
- In general stereo triangulation, two images provide two different views of a scene, in a fashion similar to human binocular vision; by comparing these two images, the relative depth information is calculated.
- Kinect stereo triangulation uses one real (IR) image and a virtual image (more accurately, the line of each projected point).
- http://www.youtube.com/watch?v=uq9SEJxZiUg
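To make the idea concrete, a toy triangulation helper is sketched below. It is illustrative only, not the sensor's internal code, and the names focalLengthPixels, baselineMeters, and disparityPixels are hypothetical:

    // depth = f * b / d: a point whose projection shifts by d pixels between
    // the two views lies at this distance from the camera pair.
    static double DepthFromDisparity(double focalLengthPixels,
                                     double baselineMeters,
                                     double disparityPixels)
    {
        return focalLengthPixels * baselineMeters / disparityPixels; // meters
    }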

Depth Measurement
- For a pixel seen in one of the diagonal (off-axis) views of the sensor, the sensor internally projects the point onto a line perpendicular to the sensor plane and reports that perpendicular distance; in other words, the depth value is measured along the sensor's axis, not as the straight-line distance to the object (see the sketch below).
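As a hedged illustration of the difference between the reported (perpendicular) depth and the straight-line distance, here is a hypothetical helper for a pixel whose viewing ray makes angle theta with the optical axis:

    // The sensor reports z (distance along the optical axis); the straight-line
    // distance to the same point is longer by a factor of 1 / cos(theta).
    static double RadialFromPerpendicular(double zMillimeters, double thetaRadians)
    {
        return zMillimeters / Math.Cos(thetaRadians);
    }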

Capturing and Processing Depth Data
- Enable the depth stream channel with the desired depth image format.
- Attach the event handler to the stream channel.
- Process the incoming depth frames.
- Render the frames on the UI.

    this.sensor = KinectSensor.KinectSensors[0];
    this.sensor.DepthStream.Enable();
    sensor.DepthFrameReady += depthFrameReady;

    void depthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
    {
    }

Processing Depth Data

    void depthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
    {
        using (DepthImageFrame imageFrame = e.OpenDepthImageFrame())
        {
            if (null == imageFrame)
            {
                return;
            }
            imageFrame.CopyPixelDataTo(depthPixels);
            // Write the pixel data into our bitmap
            int stride = imageFrame.Width * imageFrame.BytesPerPixel;
            this.depthBitmap.WritePixels(
                new Int32Rect(0, 0, this.depthBitmap.PixelWidth, this.depthBitmap.PixelHeight),
                this.depthPixels, stride, 0);
        }
    }

This is very similar to the code for color image processing: color => depth.

Classes Related to Depth Data

Depth Data and Distance
- The DepthImageStream and DepthImageFrame classes have MaxDepth and MinDepth properties, which return the maximum and minimum depth range for that particular stream or captured image frame, in millimeters.
- These values indicate the range within which distance can be measured reliably.
- They change automatically based on the DepthRange selected for the stream.
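One common use of MinDepth and MaxDepth is to scale raw distances into a grayscale intensity for display. A minimal sketch follows; the helper name is an assumption, not part of the SDK:

    // Map a distance in millimeters into 0..255, clamping values outside
    // the reliable range reported by the frame.
    static byte IntensityFromDepth(int distanceMm, int minDepth, int maxDepth)
    {
        if (distanceMm < minDepth || distanceMm > maxDepth)
            return 0; // outside the measurable range
        return (byte)(255 * (distanceMm - minDepth) / (maxDepth - minDepth));
    }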

Depth Range
- The Range property of the depth stream sets the sensing range of the Kinect sensor; its type is the DepthRange enumeration.
- The DepthRange enumeration has two values: Default and Near.

    sensor.DepthStream.Range = DepthRange.Near;
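Near mode is only supported by Kinect for Windows hardware; on an Xbox 360 sensor the assignment fails at run time, so a guarded sketch (the failure-handling shown here is an assumption, not part of the slide) looks like:

    try
    {
        sensor.DepthStream.Range = DepthRange.Near;
    }
    catch (InvalidOperationException)
    {
        // Near mode is not supported on this sensor; keep the default range.
        sensor.DepthStream.Range = DepthRange.Default;
    }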

Special Depth Range Values
- DepthImageStream defines three additional read-only properties:
- TooFarDepth: the depth value reported when the object is beyond the range of the sensor.
- TooNearDepth: the depth value reported when the object is too close to the sensor.
- UnknownDepth: the value (zero) reported when the depth is completely indeterminate.
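A short sketch of how these properties might be used while scanning the depth array; it assumes a started sensor named sensor and a raw 16-bit sample named raw taken from the depth array:

    int depth = raw >> DepthImageFrame.PlayerIndexBitmaskWidth;
    if (depth == sensor.DepthStream.UnknownDepth)
    {
        // depth could not be determined for this pixel
    }
    else if (depth == sensor.DepthStream.TooFarDepth)
    {
        // object is beyond the sensor's range
    }
    else if (depth == sensor.DepthStream.TooNearDepth)
    {
        // object is closer than the sensor can measure
    }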

Default Depth Range: the default range covers roughly 800 mm to 4000 mm; the near range covers roughly 400 mm to 3000 mm.

Depth Data Representation
- The Kinect sensor returns 16-bit raw depth frame data.
- The lowest three bits identify the tracked players, and the remaining 13 bits give the measured distance in millimeters.
- To retrieve the 13-bit depth value, perform a bitwise right shift (>>) to move the player-index bits out of the way:

    int depth = depthFrame[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;
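A small sketch that pulls both fields out of one raw sample (the variable names are illustrative):

    short raw = depthPixels[depthIndex];
    int playerIndex = raw & DepthImageFrame.PlayerIndexBitmask;       // 0 means no player
    int distanceMm = raw >> DepthImageFrame.PlayerIndexBitmaskWidth;  // millimeters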

Getting the Distance from a Particular Pixel
- Add this feature to our DepthCam app.
- Very important: make the image control exactly the same size as the chosen resolution: the image should be 320x240 for Resolution320x240Fps30, and 640x480 for Resolution640x480Fps30.
- Use the mouse to select a pixel in the image => display the depth for that pixel.
- Need to add an event handler for MouseDown:

    Point currentPoint = e.GetPosition(depthImageControl);
    int pixelIndex = (int)(currentPoint.X + ((int)currentPoint.Y * this.frame.Width));
    int distancemm = this.pixelData[pixelIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

Build the DepthCam App
- Create a new WPF project named DepthCam.
- Draw the GUI for the app.
- Modify the XAML file to add WindowLoaded and WindowClosing, as before.
- Add the Microsoft.Kinect reference.
- Add member variables for displaying depth images.
- Start the Kinect sensor, enable the depth data stream, register the depth stream handler, and connect the image control to the bitmap used for display.
- Process each depth image.
- Add code for displaying the depth data for the pixel clicked.

GUI Design
- Image control.
- Group control for depth information: two labels and two textboxes (use meaningful names for the textboxes).
- Group control for pixel information: four labels and four textboxes.
- A possible XAML layout is sketched below.
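A possible XAML layout matching the control names used in the code-behind (image1, maxDepthField, minDepthField, pixelXField, pixelYField, depthIndexField, depthField); the exact arrangement is an assumption, not the layout shown in the lecture:

    <!-- Control names match the code-behind; the layout itself is an assumption. -->
    <Window x:Class="DepthCam.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="DepthCam" Loaded="WindowLoaded" Closing="WindowClosing">
        <StackPanel Orientation="Horizontal">
            <Image Name="image1" Width="320" Height="240" MouseDown="image1_MouseDown"/>
            <StackPanel Width="200">
                <GroupBox Header="Depth Information">
                    <StackPanel>
                        <Label Content="Max Depth"/>
                        <TextBox Name="maxDepthField"/>
                        <Label Content="Min Depth"/>
                        <TextBox Name="minDepthField"/>
                    </StackPanel>
                </GroupBox>
                <GroupBox Header="Pixel Information">
                    <StackPanel>
                        <Label Content="X"/>
                        <TextBox Name="pixelXField"/>
                        <Label Content="Y"/>
                        <TextBox Name="pixelYField"/>
                        <Label Content="Depth Index"/>
                        <TextBox Name="depthIndexField"/>
                        <Label Content="Distance (mm)"/>
                        <TextBox Name="depthField"/>
                    </StackPanel>
                </GroupBox>
            </StackPanel>
        </StackPanel>
    </Window>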

Adding Code
Add member variables:

    KinectSensor sensor;
    private WriteableBitmap depthBitmap;
    private short[] depthPixels;
    private int frameWidth; // to calculate depthIndex

The WindowLoaded method (WindowClosing() is the same as before):

    private void WindowLoaded(object sender, RoutedEventArgs e)
    {
        if (KinectSensor.KinectSensors.Count > 0)
        {
            this.sensor = KinectSensor.KinectSensors[0];
            if (this.sensor != null && !this.sensor.IsRunning)
            {
                this.sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
                this.depthPixels = new short[this.sensor.DepthStream.FramePixelDataLength];
                this.depthBitmap = new WriteableBitmap(this.sensor.DepthStream.FrameWidth,
                    this.sensor.DepthStream.FrameHeight, 96.0, 96.0, PixelFormats.Gray16, null);
                this.image1.Source = this.depthBitmap;
                this.sensor.DepthFrameReady += this.depthFrameReady;
                this.sensor.Start();
            }
        }
        else
        {
            MessageBox.Show("No device is connected!");
            this.Close();
        }
    }

Adding Code
Event handler for depth frames:

    void depthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
    {
        using (DepthImageFrame imageFrame = e.OpenDepthImageFrame())
        {
            if (null == imageFrame)
            {
                return;
            }
            this.frameWidth = imageFrame.Width;
            this.maxDepthField.Text = "" + imageFrame.MaxDepth;
            this.minDepthField.Text = "" + imageFrame.MinDepth;
            imageFrame.CopyPixelDataTo(depthPixels);
            int stride = imageFrame.Width * imageFrame.BytesPerPixel;
            this.depthBitmap.WritePixels(
                new Int32Rect(0, 0, this.depthBitmap.PixelWidth, this.depthBitmap.PixelHeight),
                this.depthPixels, stride, 0);
        }
    }

Adding Code
- Add a mouse down event on the image control: find the MouseDown line and double-click it to get the event handler template.
- Event handler for the mouse click:

    private void image1_MouseDown(object sender, MouseButtonEventArgs e)
    {
        Point currentPoint = e.GetPosition(image1);
        this.pixelXField.Text = currentPoint.X.ToString();
        this.pixelYField.Text = currentPoint.Y.ToString();
        int pixelIndex = (int)(currentPoint.X + ((int)currentPoint.Y * this.frameWidth));
        this.depthIndexField.Text = "" + pixelIndex;
        int distancemm = this.depthPixels[pixelIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        this.depthField.Text = "" + distancemm;
    }