EEC-693/793 Applied Computer Vision with Depth Cameras
Lecture 4
Wenbing Zhao


Outline
- Working with the Kinect color stream
- Types of color images
- Retrieving color images: event and polling models
- Steps for handling color images
- Event handlers: C# delegates
- Building the KinectCam app

Working with Kinect Streams
The Kinect SDK supports two image stream formats:
- Color image stream
- Depth image stream
- Both are children of the ImageStream class
Image frames are stored in a buffer for use by the application. If there is any delay in reading the buffer data and rendering it as images, the buffer is overwritten and the old frame is lost.
Main steps in handling image streams:
- Enabling the stream
- Capturing the stream frame by frame
- Processing the image frames

Types of Color Images
RGB: red-green-blue color space
- Each pixel is an array of four bytes: blue, green, red, alpha
- Alpha: degree of transparency
- 32 bits per pixel at 640x480 @ 30 FPS or 1280x960 @ 12 FPS
YUV:
- Y: luminance channel; U: blue chrominance channel; V: red chrominance channel
- 16 bits per pixel at 640x480 @ 15 FPS
- Uses less memory than RGB
Bayer: raw Bayer color image format, captured through a Bayer color filter array (Bayer filter):
- 50% green, 25% red, 25% blue
- 640x480 @ 30 FPS or 1280x960 @ 12 FPS
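As a rough sanity check of the "uses less memory" claim, the per-frame sizes follow directly from the bits-per-pixel figures above. A minimal sketch, using only the constants from this slide (not SDK calls):

// Approximate raw frame sizes for the 640x480 formats listed above
const int width = 640, height = 480;
int rgbBytesPerFrame = width * height * 4;  // 32 bpp => 4 bytes/pixel => ~1.2 MB per frame
int yuvBytesPerFrame = width * height * 2;  // 16 bpp => 2 bytes/pixel => ~0.6 MB per frame
Console.WriteLine("RGB frame: {0} bytes, YUV frame: {1} bytes", rgbBytesPerFrame, yuvBytesPerFrame);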

Retrieving the Color Image Stream
There are two ways: the event model and the polling model.
Event model:
- The Kinect sends a frame to the application whenever a new frame is captured by the sensor
- The application needs to register an event handler for the event

Polling Model
- The application sends a request to the sensor whenever it needs an image frame
- The application passes the time interval it is willing to wait for the sensor to return the image frame
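A minimal polling sketch, assuming a started KinectSensor named sensor with its color stream enabled (OpenNextFrame is the polling counterpart of the ColorFrameReady event in the Kinect for Windows SDK 1.x):

// Poll for the next color frame, waiting at most 30 ms
using (ColorImageFrame frame = sensor.ColorStream.OpenNextFrame(30))
{
    if (frame != null)   // null means no frame arrived within the timeout
    {
        byte[] pixels = new byte[frame.PixelDataLength];
        frame.CopyPixelDataTo(pixels);
        // ... render or process the pixel data here
    }
}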

Capturing Color Images
Main steps (a skeleton combining them is sketched below):
- Start the Kinect sensor
- Enable the color stream channel with the desired image format
- Subscribe (register) an event handler (a method) with the stream channel
- Process each image frame
- Render the image frame on the user interface
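A minimal sketch of how these steps line up in code, assuming a KinectSensor field named sensor and an event handler named colorFrameReady (both defined later in these slides):

// 1. Enable the color stream with the desired format
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
// 2. Register the event handler that will process and render each frame
sensor.ColorFrameReady += colorFrameReady;
// 3. Start the sensor; frames now begin arriving at colorFrameReady
sensor.Start();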

Enable the Color Stream Channel
The ColorImageStream class provides two Enable methods:
public void Enable();
- By default, uses RgbResolution640x480Fps30
public void Enable(ColorImageFormat format);
The ColorImageFormat values:
- RgbResolution640x480Fps30
- RgbResolution1280x960Fps12
- YuvResolution640x480Fps15
- RawYuvResolution640x480Fps15
- InfraredResolution640x480Fps30
- RawBayerResolution640x480Fps30
- RawBayerResolution1280x960Fps12
- Undefined
You can enable only one type of color stream at a time (see the example below).
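For example, to enable a non-default format, a minimal sketch (assuming sensor is an initialized KinectSensor):

// Enable the color stream in the YUV format instead of the default RGB format
sensor.ColorStream.Enable(ColorImageFormat.YuvResolution640x480Fps15);
if (sensor.ColorStream.IsEnabled)
    Console.WriteLine("Color stream enabled with format: " + sensor.ColorStream.Format);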

Registering an Event Handler
this.sensor.ColorFrameReady += colorFrameReady;
What is colorFrameReady?
- It is the event handler that you will implement
- You can use any method name you wish
- The method must have the following signature:
void colorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
- sender: the object that fires the event
- e: the object that holds the data to be retrieved (i.e., the image frame)
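The same handler can later be detached with -= (for example, when the window is closing), using the same delegate syntax:

// Detach the handler when it is no longer needed (e.g., in WindowClosing)
this.sensor.ColorFrameReady -= colorFrameReady;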

Event Handlers
- An event handler is similar to a function pointer in C/C++
- In C# (and Objective-C), an event handler is a delegate with a predefined signature
- General format:
public delegate void MyEventHandler(object sender, MyEventArgs e);
- sender: the object that fires the event
- e: holds the data to be handled
- A delegate allows you to pass methods of one class to objects of other classes, which can then call those methods

Delegate in C# (handling-in-net-using-c/2/)
using System;

// Step 1. Declare a delegate with the signature of the encapsulated method
public delegate void MyDelegate(string input);

// Step 2. Define methods that match the signature of the delegate declaration
class MyClass1
{
    public void delegateMethod1(string input)
    {
        Console.WriteLine("delegateMethod1 and input to the method is {0}", input);
    }
    public void delegateMethod2(string input)
    {
        Console.WriteLine("delegateMethod2 and input to the method is {0}", input);
    }
}

Delegate in C# (continued)
// Step 3. Create delegate objects and plug in the methods
class MyClass2
{
    public MyDelegate createDelegate()
    {
        MyClass1 c2 = new MyClass1();
        MyDelegate d1 = new MyDelegate(c2.delegateMethod1);
        MyDelegate d2 = new MyDelegate(c2.delegateMethod2);
        MyDelegate d3 = d1 + d2;
        return d3;
    }
}

Delegate in C# (continued)
// Step 4. Call the encapsulated methods through the delegate
class MyClass3
{
    public void callDelegate(MyDelegate d, string input)
    {
        d(input);
    }
}

class Driver
{
    static void Main(string[] args)
    {
        MyClass2 c2 = new MyClass2();
        MyDelegate d = c2.createDelegate();
        MyClass3 c3 = new MyClass3();
        c3.callDelegate(d, "Calling the delegate");
    }
}
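Events build directly on this delegate mechanism: an event is essentially a delegate field that outside code can only attach to with += or detach from with -=. A minimal sketch that reuses MyDelegate and MyClass1 from the example above (the Publisher class and OnInput method are illustrative names, not part of the slides):

class Publisher
{
    // An event whose handlers must match the MyDelegate signature
    public event MyDelegate InputReceived;

    public void OnInput(string input)
    {
        // Raise the event if anyone has subscribed
        if (InputReceived != null)
            InputReceived(input);
    }
}

// Usage:
//   Publisher p = new Publisher();
//   MyClass1 c = new MyClass1();
//   p.InputReceived += c.delegateMethod1;
//   p.OnInput("hello");

The Kinect SDK's ColorFrameReady event on the next slide follows the same pattern, using the generic EventHandler<TEventArgs> delegate.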

Event Handler Delegate for ColorImage
The generic EventHandler delegate is defined in the System namespace:
namespace System
{
    public delegate void EventHandler<TEventArgs>(object sender, TEventArgs e);
}
The KinectSensor class declares its event in terms of this delegate:
public event EventHandler<ColorImageFrameReadyEventArgs> ColorFrameReady;
That is why we can do this:
this.sensor.ColorFrameReady += colorFrameReady;

Retrieve the Color Image Frame
void colorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame imageFrame = e.OpenColorImageFrame())
    {
        // ...
    }
}
What is using (a new object) { ... }?
- It defines the scope of the new object
- Once the last statement in the {} block is executed, the object is disposed (its Dispose method is called), releasing the frame's resources

Handling/Displaying a Color Image
// Copy the image to a byte array
imageFrame.CopyPixelDataTo(colorPixels);

// Width in bytes of a single row of pixel data, including padding
int stride = imageFrame.Width * imageFrame.BytesPerPixel;

// Write the pixel data into our bitmap
this.colorBitmap.WritePixels(
    new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
    this.colorPixels,
    stride,
    0);

Color Image Handling
public void WritePixels(Int32Rect srcRect, Array pixels, int stride, int offset);
Updates the pixels in the specified region of the bitmap.
- srcRect: the rectangle of the System.Windows.Media.Imaging.WriteableBitmap to update
- pixels: the pixel array used to update the bitmap
- stride: the stride of the update region in bytes (here, width x bytes per pixel)
- offset: the input buffer offset

Build the KinectCam App
- Create a new WPF project named KinectCam
- Draw the GUI for the app
- Modify the XAML file to add WindowLoaded and WindowClosing handlers, as before
- Add member variables for displaying color images
- Start the Kinect sensor, enable the color stream, register the color stream handler, and connect the image control to the bitmap for display
- Process each color image

Design the GUI
- Add a label for the title
- Add an image control

Add Member Variables
KinectSensor sensor;

// Bitmap that will hold color information
private WriteableBitmap colorBitmap;

// Intermediate storage for the color data received from the camera
private byte[] colorPixels;

WriteableBitmap Class
Inherits from BitmapSource.
Constructor:
public WriteableBitmap(int pixelWidth, int pixelHeight, double dpiX, double dpiY, PixelFormat pixelFormat, BitmapPalette palette);
- pixelWidth: width of the bitmap
- pixelHeight: height of the bitmap
- dpiX: horizontal dots per inch (dpi)
- dpiY: vertical dots per inch
- pixelFormat: pixel format
- palette: the bitmap's palette (a finite set of colors); can be null for formats such as Bgr32

System.Windows.Media.PixelFormats
- Bgr32: blue, green, and red channels (3 bytes); the last byte is set to 0
- Bgra32
- Bgr24
- Gray2
- Gray16
- etc.

Initialization (in WindowLoaded())
// Turn on the color stream to receive color frames
this.sensor.ColorStream.Enable();

// Allocate space to put the pixels we'll receive
this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];

// This is the bitmap we'll display on-screen
this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth,
    this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);

// Set the image we display to point to the bitmap
this.image1.Source = this.colorBitmap;

// Add an event handler to be called whenever there is new color frame data
this.sensor.ColorFrameReady += this.colorFrameReady;

// Start the sensor!
this.sensor.Start();

Handling/Displaying Color Images
void colorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame imageFrame = e.OpenColorImageFrame())
    {
        if (null == imageFrame)
            return;

        imageFrame.CopyPixelDataTo(colorPixels);
        int stride = imageFrame.Width * imageFrame.BytesPerPixel;

        // Write the pixel data into our bitmap
        this.colorBitmap.WritePixels(
            new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
            this.colorPixels, stride, 0);
    }
}
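To shut down cleanly, the WindowClosing handler mentioned earlier should stop the sensor. A minimal sketch, assuming the same sensor field and the standard WPF Window.Closing signature (the handler name matches however WindowClosing was wired up in the XAML):

private void WindowClosing(object sender, System.ComponentModel.CancelEventArgs e)
{
    if (this.sensor != null)
    {
        // Stop the sensor so the Kinect is released for other applications
        this.sensor.Stop();
    }
}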