EEC-693/793 Applied Computer Vision with Depth Cameras


EEC-693/793 Applied Computer Vision with Depth Cameras Lecture 15 Wenbing Zhao wenbing@ieee.org

Outline
Approaches to gesture recognition:
- Rules based
- Single pose based (this lecture)
- Multiple poses based
- Machine learning based

Gesture Recognition Engine
The recognition engine typically performs the following tasks:
- It accepts user actions in the form of skeleton data
- It matches the data points with predefined logic for a specific gesture
- It executes actions if the gesture is recognized
- It responds to the users

Gesture Recognition Engine [diagram slides]

Recognizing Two Gestures [diagram slide]

Gesture Recognition Engine
Key types: GestureType, RecognitionResult, GestureEventArgs

public enum GestureType { HandsClapping, TwoHandsRaised }

public enum RecognitionResult { Unknown, Failed, Success }

public class GestureEventArgs : EventArgs
{
    public GestureType gsType { get; internal set; }
    public RecognitionResult Result { get; internal set; }

    public GestureEventArgs(GestureType t, RecognitionResult result)
    {
        this.Result = result;
        this.gsType = t;
    }
}

Gesture Recognition Engine

public class GestureRecognitionEngine
{
    public GestureRecognitionEngine() { }

    public event EventHandler<GestureEventArgs> GestureRecognized;
    public Skeleton Skeleton { get; set; }
    public GestureType GestureType { get; set; }

    public void StartRecognize(GestureType t)
    {
        this.GestureType = t;
        switch (t)
        {
            case GestureType.HandsClapping:
                this.MatchHandClappingGesture(this.Skeleton);
                break;
            case GestureType.TwoHandsRaised:
                this.MatchTwoHandsRaisedGesture(this.Skeleton);
                break;
            default:
                break;
        }
    }
    // The gesture-matching methods follow on the next slides

Gesture Recognition Engine

    float previousDistance = 0.0f;

    private void MatchHandClappingGesture(Skeleton skeleton)
    {
        if (skeleton == null)
            return;

        if (skeleton.Joints[JointType.HandRight].TrackingState == JointTrackingState.Tracked &&
            skeleton.Joints[JointType.HandLeft].TrackingState == JointTrackingState.Tracked)
        {
            float currentDistance = GetJointDistance(
                skeleton.Joints[JointType.HandRight],
                skeleton.Joints[JointType.HandLeft]);
            if (currentDistance < 0.1f && previousDistance > 0.1f)
            {
                if (this.GestureRecognized != null)
                    this.GestureRecognized(this,
                        new GestureEventArgs(GestureType.HandsClapping, RecognitionResult.Success));
            }
            previousDistance = currentDistance;
        }
    }
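The heart of MatchHandClappingGesture is a threshold-crossing test: the gesture fires only on the frame where the hand distance drops below 0.1 m, having been above it on the previous frame. That test can be isolated and checked without a sensor; this is a minimal sketch (the ClapLogic class and method names are illustrative, not part of the lecture's engine):

```csharp
using System;

// Isolates the clap-detection condition: fire only when the hand
// distance crosses below the threshold on this frame.
static class ClapLogic
{
    public const float Threshold = 0.1f; // meters

    public static bool ClapCrossed(float previousDistance, float currentDistance)
        => currentDistance < Threshold && previousDistance > Threshold;
}
```

Requiring a crossing (rather than just currentDistance < 0.1f) prevents the event from firing on every frame while the hands stay together.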

Gesture Recognition Engine

    private void MatchTwoHandsRaisedGesture(Skeleton skeleton)
    {
        if (skeleton == null)
            return;

        float threshold = 0.3f;
        if (skeleton.Joints[JointType.HandRight].Position.Y >
                skeleton.Joints[JointType.Head].Position.Y + threshold &&
            skeleton.Joints[JointType.HandLeft].Position.Y >
                skeleton.Joints[JointType.Head].Position.Y + threshold)
        {
            if (this.GestureRecognized != null)
                this.GestureRecognized(this,
                    new GestureEventArgs(GestureType.TwoHandsRaised, RecognitionResult.Success));
        }
    }

Gesture Recognition Engine

    private float GetJointDistance(Joint firstJoint, Joint secondJoint)
    {
        float distanceX = firstJoint.Position.X - secondJoint.Position.X;
        float distanceY = firstJoint.Position.Y - secondJoint.Position.Y;
        float distanceZ = firstJoint.Position.Z - secondJoint.Position.Z;
        return (float)Math.Sqrt(Math.Pow(distanceX, 2) + Math.Pow(distanceY, 2) + Math.Pow(distanceZ, 2));
    }
} // end of GestureRecognitionEngine
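The same Euclidean distance can be checked without a sensor by passing raw coordinates instead of SDK Joint objects; this is a self-contained sketch (the GestureMath class name is illustrative):

```csharp
using System;

// Mirrors GetJointDistance() above, but takes raw coordinates so it
// runs without the Kinect SDK: straight-line distance between two
// 3D points.
static class GestureMath
{
    public static float Distance(
        float x1, float y1, float z1,
        float x2, float y2, float z2)
    {
        float dx = x1 - x2, dy = y1 - y2, dz = z1 - z2;
        return (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```

For example, Distance(0, 0, 0, 3, 4, 0) is 5, the familiar 3-4-5 triangle.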

Vectors, Dot Product, Angles
Segments of the body can be represented using vectors. You can create a class/struct for Vector3 or use the Unity Vector3 type.

Vector along a body segment, from one joint to another:

Vector3 seg;
seg.X = Joint1.Position.X - Joint2.Position.X;
seg.Y = Joint1.Position.Y - Joint2.Position.Y;
seg.Z = Joint1.Position.Z - Joint2.Position.Z;

Dot product of two vectors:

Vector3 seg1, seg2;
float dotproduct = seg1.X*seg2.X + seg1.Y*seg2.Y + seg1.Z*seg2.Z;

Angle formed by two vectors, in degrees:

float seg1magnitude = (float)Math.Sqrt(seg1.X*seg1.X + seg1.Y*seg1.Y + seg1.Z*seg1.Z);
float seg2magnitude = (float)Math.Sqrt(seg2.X*seg2.X + seg2.Y*seg2.Y + seg2.Z*seg2.Z);
float angle = (float)(Math.Acos(dotproduct/(seg1magnitude*seg2magnitude))*180/Math.PI);
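The angle computation above can be packaged into a small self-contained sketch; the minimal Vector3 struct here stands in for the Unity/SDK type, and the VectorMath class name is illustrative:

```csharp
using System;

// Minimal stand-in for the Unity Vector3 type.
struct Vector3
{
    public float X, Y, Z;
    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }
}

static class VectorMath
{
    public static float Dot(Vector3 a, Vector3 b)
        => a.X * b.X + a.Y * b.Y + a.Z * b.Z;

    public static float Magnitude(Vector3 v)
        => (float)Math.Sqrt(v.X * v.X + v.Y * v.Y + v.Z * v.Z);

    // Angle between two vectors, in degrees, via the dot-product formula:
    // cos(theta) = (a . b) / (|a| |b|)
    public static float AngleDeg(Vector3 a, Vector3 b)
        => (float)(Math.Acos(Dot(a, b) / (Magnitude(a) * Magnitude(b)))
                   * 180.0 / Math.PI);
}
```

As a sanity check, the angle between the X axis (1, 0, 0) and the Y axis (0, 1, 0) comes out to 90 degrees.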

Build a Gesture Recognition App
The app can recognize two gestures:
- Hands clapping
- Two hands raised

Setting up the project:
- Create a new C# WPF project named GestureRecognitionBasic
- Add the Microsoft.Kinect reference, import the namespace, etc.
- Add the GUI components
- Create a new C# file named GestureRecognitionEngine.cs and copy the engine code into this class (in Solution Explorer, right-click, then Add => New Item)

Build a Gesture Recognition App
User interface: a TextBox, a Canvas, and an Image control

Build a Gesture Recognition App
Add member variables:

KinectSensor sensor;
private WriteableBitmap colorBitmap;
private byte[] colorPixels;
Skeleton[] totalSkeleton = new Skeleton[6];
Skeleton skeleton;
GestureRecognitionEngine recognitionEngine;

Modify the constructor:

public MainWindow()
{
    InitializeComponent();
    Loaded += new RoutedEventHandler(WindowLoaded);
}

Build a Gesture Recognition App

private void WindowLoaded(object sender, RoutedEventArgs e)
{
    if (KinectSensor.KinectSensors.Count > 0)
    {
        this.sensor = KinectSensor.KinectSensors[0];
        if (this.sensor != null && !this.sensor.IsRunning)
        {
            this.sensor.Start();
            this.sensor.ColorStream.Enable();
            this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];
            this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth,
                this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
            this.image1.Source = this.colorBitmap;
            this.sensor.ColorFrameReady += this.colorFrameReady;
            this.sensor.SkeletonStream.Enable();
            this.sensor.SkeletonFrameReady += skeletonFrameReady;
            recognitionEngine = new GestureRecognitionEngine();
            recognitionEngine.GestureRecognized += gestureRecognized;
        }
    }
}

Build a Gesture Recognition App
Gesture recognized event handler (colorFrameReady(), DrawSkeleton(), drawBone(), and ScalePosition() are the same as before):

void gestureRecognized(object sender, GestureEventArgs e)
{
    textBox1.Text = e.gsType.ToString();
}

Build a Gesture Recognition App
Handle the skeleton frame ready event:

void skeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    canvas1.Children.Clear();
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame == null)
            return;
        skeletonFrame.CopySkeletonDataTo(totalSkeleton);
        skeleton = (from trackskeleton in totalSkeleton
                    where trackskeleton.TrackingState == SkeletonTrackingState.Tracked
                    select trackskeleton).FirstOrDefault();
        if (skeleton == null)
            return;
        DrawSkeleton(skeleton);
        recognitionEngine.Skeleton = skeleton;
        recognitionEngine.StartRecognize(GestureType.HandsClapping);
        recognitionEngine.StartRecognize(GestureType.TwoHandsRaised);
    }
}

Challenge Tasks
- Add recognition of two more gestures: right hand raised, left hand raised
- Add GestureRecognitionEngine.cs to a Unity+Kinect app, and add visual feedback for the recognized gestures
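As a starting point for the first challenge, the single-hand test reduces to one comparison on the Y coordinates. This hedged sketch takes raw Y values so it runs without the SDK; the class and method names are illustrative, and the 0.3 m threshold simply mirrors the one in MatchTwoHandsRaisedGesture:

```csharp
using System;

// Illustrative predicate for a "hand raised" gesture: the hand must be
// at least `threshold` meters above the head along the Y axis. Apply it
// to the right or left hand's Position.Y inside the engine.
static class RaisedHandLogic
{
    public static bool IsHandRaised(float handY, float headY, float threshold = 0.3f)
        => handY > headY + threshold;
}
```

In the engine this would become a MatchRightHandRaisedGesture/MatchLeftHandRaisedGesture pair, each checking its hand's TrackingState first and then firing GestureRecognized, just as the two-hands version does.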