Kinect SDK Crash Course (In 12 slides or less) Elliot Babchick

What Do You Get?

Important! The Kinect shines brightest when you use it in a wide, open space. The supported range spans a few meters, and the sweet spot is in the middle (~2.5 m). If the sensor can't see your entire body, it can't track you, so make sure your entire body is in the frame!

Setting it Up
We did this for you in the skeleton code, but quickly:
Add a reference to the Kinect SDK assembly in your project and include 'using Microsoft.Research.Kinect.Nui;' (NUI = Natural User Interface).
The NUI is exposed as a "Runtime" object; tell it which sensor streams you want by combining RuntimeOptions flags with the bitwise OR operator ('|'). You must specify these up front (no asking for more after it is initialized).

nui = new Runtime();
try
{
    nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex |
                   RuntimeOptions.UseSkeletalTracking |
                   RuntimeOptions.UseColor);
}
catch (InvalidOperationException)
{
    return 42;
}
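The starter code presumably also opens the underlying streams and releases the sensor on shutdown. A minimal sketch of what that looks like with the beta SDK (the method name OpenStreams and the Window_Closed handler are just illustrative):

private void OpenStreams()
{
    // Opening the streams is what actually makes the frame-ready events fire.
    nui.VideoStream.Open(ImageStreamType.Video, 2,
                         ImageResolution.Resolution640x480, ImageType.Color);
    nui.DepthStream.Open(ImageStreamType.Depth, 2,
                         ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex);
}

private void Window_Closed(object sender, EventArgs e)
{
    // Release the sensor so other applications can use it.
    nui.Uninitialize();
}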

Event-driven data streams
Register a handler for each stream you care about, then let the events drive your code:

nui.DepthFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
nui.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(nui_SkeletonFrameReady);
nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_ColorFrameReady);

An example handler, taking the RGB video and putting it into a WPF Image element named "video":

void nui_ColorFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    PlanarImage Image = e.ImageFrame.Image;
    video.Source = BitmapSource.Create(
        Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null,
        Image.Bits, Image.Width * Image.BytesPerPixel);
}

What's In A... ImageFrame
We'll cover this in more detail next week. For now, just know that you have access to the raw bytes (misnamed "Bits") that make up the pixels.
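As a rough illustration (assuming the color stream is in Bgr32 format, as in the handler above, and that x and y are pixel coordinates you pick), you can index straight into those bytes:

// Minimal sketch: look up the color of pixel (x, y) in a color ImageFrame.
// Bgr32 stores 4 bytes per pixel in blue, green, red, (unused) order.
PlanarImage image = e.ImageFrame.Image;
int index = (y * image.Width + x) * image.BytesPerPixel;
byte blue  = image.Bits[index];
byte green = image.Bits[index + 1];
byte red   = image.Bits[index + 2];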

What's In A... DepthFrame
Look familiar? It's the same ImageFrame type, but with a different Type field value (it's a depth image, not a color image).

Making (quick) sense of a depth image
The raw data lives in ImageFrame.Image.Bits, an array of bytes (public byte[] Bits;). There are 2 bytes per pixel, and the array runs left to right, then top to bottom. Each pair of bytes tells you how far away that particular pixel is, in millimeters. But you can't just read the bytes straight out: you need to bit-shift differently depending on whether you're tracking depth and skeletons or just depth. More on this next week; see the "Working with Depth Data" quickstart for more detail if you need it sooner.
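As a quick preview, here's a minimal sketch of that bit-shifting (the helper names are just illustrative; this assumes the depth stream was opened with player index, as in the initialization shown earlier):

// With RuntimeOptions.UseDepthAndPlayerIndex, the low 3 bits of the first byte
// hold the player index and the remaining 13 bits hold the distance in mm.
int GetDistanceWithPlayerIndex(byte firstByte, byte secondByte)
{
    return (firstByte >> 3) | (secondByte << 5);
}

int GetPlayerIndex(byte firstByte)
{
    return firstByte & 0x07;   // 0 = no player at this pixel
}

// With depth only (no player index), the 2 bytes are just a little-endian distance in mm.
int GetDistance(byte firstByte, byte secondByte)
{
    return firstByte | (secondByte << 8);
}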

What's In A... SkeletonFrame
A collection of skeletons, each with a collection of joints.

Skeleton Data In Detail
You get all of the joints shown above. Z values get larger as you move away from the sensor. Moving to your right gives you larger X values. Moving up is left to you as an exercise (get it?). Units are in meters (note that the raw depth data was in millimeters).
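For example, a minimal SkeletonFrameReady handler that reads the right hand of each tracked skeleton might look like this (a sketch; what you do with the position is up to you):

void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    foreach (SkeletonData skeleton in e.SkeletonFrame.Skeletons)
    {
        // Skip skeleton slots the sensor isn't actually tracking.
        if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
            continue;

        Joint rightHand = skeleton.Joints[JointID.HandRight];

        // Positions are in meters: X grows to your right, Z grows away from the sensor.
        float x = rightHand.Position.X;
        float y = rightHand.Position.Y;
        float z = rightHand.Position.Z;
    }
}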

Mapping coordinates to the UI
The Coding4Fun library extends the Joint object with:
ScaleTo(int x, int y, float maxSkeletonX, float maxSkeletonY)
x and y describe the rectangular space of pixels you'd like to scale a joint to. The last two arguments specify how far you need to move to traverse that scaled range. For example, skeleton.Joints[JointID.HandRight].ScaleTo(640, 480, .5f, .5f) means that your right hand only needs to travel one meter (-.5 to .5) to cover the full 640-pixel-wide distance on screen.
It also adds a convenient function for converting an ImageFrame's byte data to an actual image: ImageFrame.ToBitmapSource().
The Coding4Fun library is already included in the starter code project.
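Putting it together, a sketch of driving a WPF element with your right hand (here "cursor" is assumed to be an element you've placed on a Canvas in your XAML):

using System.Linq;
using Coding4Fun.Kinect.Wpf;   // brings in the ScaleTo / ToBitmapSource extensions

void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    // Grab the first tracked skeleton, if any.
    SkeletonData skeleton = e.SkeletonFrame.Skeletons
        .FirstOrDefault(s => s.TrackingState == SkeletonTrackingState.Tracked);
    if (skeleton == null) return;

    // +/- 0.5 m of hand travel spans the full 640x480 pixel space.
    Joint hand = skeleton.Joints[JointID.HandRight].ScaleTo(640, 480, .5f, .5f);

    // "cursor" is a hypothetical element on a Canvas; move it to follow the hand.
    Canvas.SetLeft(cursor, hand.Position.X);
    Canvas.SetTop(cursor, hand.Position.Y);
}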

This is Slide #12
I had a slide to spare. Now let's look at the skeleton code.