EEC-693/793 Applied Computer Vision with Depth Cameras


EEC-693/793 Applied Computer Vision with Depth Cameras Lecture 16 Wenbing Zhao wenbing@ieee.org

Outline
- Algorithmic gesture recognition overview
- Swipe-to-left gesture
- New object-oriented design for the recognition engine
- Implementation details
- Build an app to test it

Algorithmic Gesture Recognition
- Uses a set of predefined conditions and parameters to detect a gesture and validate it against each of them
- Validates a gesture as it is being performed, by ensuring that the start points, constraints, parameters, and end points remain valid
- The algorithmic approach not only recognizes gestures, it also tracks whether a gesture is performed correctly
- Example gestures that can be handled algorithmically:
  - A hand moving in one direction
  - A swipe to the right or the left
  - Zooming in and out
  - Waving hands

Algorithmic Gesture Recognition: Start → Condition → Validation → Finish
To start any gesture, there is always an initial position; we call it the "start" position. This is the entry point for any gesture and has to be validated before validating anything else. Once the start position is validated and the gesture is being performed by the end user, every single frame has to be validated against the predefined "conditions" for that particular gesture type. If any of these conditions fails at any point during the execution cycle, we stop tracking the gesture and wait for it to start again. Finally, there should be a condition that triggers the end of the gesture and "validates" the final position, which indicates that gesture recognition is "finished".

SwipeToLeft Example: Start → Condition & Validation → Finish
- Start:
  - The left hand joint should be below the left elbow and the spine joint
  - The right hand joint should be below the right shoulder joint and above the right elbow joint
- Condition & validation:
  - The user should move the right hand from right to left while maintaining the right hand and left hand joint positions
- Finish:
  - Within a specific number of frames, when the last condition is validated, check whether the distance between the right hand joint and the left shoulder has decreased compared to the starting point

Implementation of Algorithmic Gesture Recognition
The GestureType enum and the event argument class:

public enum GestureType
{
    SwipeToRight,
    SwipeToLeft,
    ZoomIn,
    ZoomOut
}

public class GestureEventArgs : EventArgs
{
    public RecognitionResult Result { get; internal set; }
    public GestureType GestureType { get; internal set; }

    public GestureEventArgs(RecognitionResult result, GestureType type)
    {
        this.Result = result;
        this.GestureType = type;
    }
}
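GestureEventArgs refers to a RecognitionResult type that is not shown on these slides. A minimal definition consistent with how it is used later (GestureRecognitionEngine passes RecognitionResult.Success when a gesture is detected) could look like the following; the other member names are assumptions, not taken from the slides:

public enum RecognitionResult
{
    Unknown,   // recognition has not produced a result yet (assumed name)
    Failed,    // the gesture was abandoned (assumed name)
    Success    // the gesture completed; this member is used in GestureRecognitionEngine
}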


Implementation of Algorithmic Gesture Recognition

Implementation of Algorithmic Gesture Recognition
- Add a new C# file called GestureBase.cs
- It captures the operations common to all gesture types that are suitable for algorithmic gesture recognition

using Microsoft.Kinect;

public abstract class GestureBase
{
    public GestureBase(GestureType type)
    {
        this.CurrentFrameCount = 0;
        this.GestureType = type;
    }

    public bool IsRecognitionStarted { get; set; }
    private int CurrentFrameCount { get; set; }
    public GestureType GestureType { get; set; }

    // maximum number of frames used to track a single gesture
    protected virtual int MaximumNumberOfFrameToProcess { get { return 15; } }

GestureBase.cs (continued)

    public long GestureTimeStamp { get; set; }

    protected abstract bool ValidateGestureStartCondition(Skeleton skeleton);
    protected abstract bool ValidateGestureEndCondition(Skeleton skeleton);
    protected abstract bool ValidateBaseCondition(Skeleton skeleton);
    protected abstract bool IsGestureValid(Skeleton skeleton);

GestureBase.cs (continued)

    // called once per skeleton frame; returns true when the gesture completes
    public virtual bool CheckForGesture(Skeleton skeleton)
    {
        if (this.IsRecognitionStarted == false)
        {
            // wait for the starting pose
            if (this.ValidateGestureStartCondition(skeleton))
            {
                this.IsRecognitionStarted = true;
                this.CurrentFrameCount = 0;
            }
        }
        else
        {
            if (this.CurrentFrameCount == this.MaximumNumberOfFrameToProcess)
            {
                // out of frames: check whether the finishing pose has been reached
                this.IsRecognitionStarted = false;
                if (ValidateBaseCondition(skeleton) && ValidateGestureEndCondition(skeleton))
                {
                    return true;
                }
            }
            this.CurrentFrameCount++;
            // abandon the gesture if an intermediate frame violates the conditions
            if (!IsGestureValid(skeleton) && !ValidateBaseCondition(skeleton))
            {
                return false;
            }
        }
        return false;
    }
}

SwipeToLeftGesture.cs

using System;
using Microsoft.Kinect;

public class SwipeToLeftGesture : GestureBase
{
    // intermediate right hand position used for validation of the gesture
    private SkeletonPoint validatePosition;

    // starting right hand position when the gesture start condition is met (starting pose)
    private SkeletonPoint startingPostion;

    // distance between the right hand and the left shoulder
    private float shoulderDiff;

    // constructor
    public SwipeToLeftGesture() : base(GestureType.SwipeToLeft) { }

    // check to see if the starting pose is seen
    // called for every skeleton frame received
    protected override bool ValidateGestureStartCondition(Skeleton skeleton)
    {
    }

    // …
}

SwipeToLeftGesture.cs (continued)

    protected override bool ValidateGestureStartCondition(Skeleton skeleton)
    {
        var handRightPoisition = skeleton.Joints[JointType.HandRight].Position;
        var handLeftPosition = skeleton.Joints[JointType.HandLeft].Position;
        var shoulderRightPosition = skeleton.Joints[JointType.ShoulderRight].Position;
        var spinePosition = skeleton.Joints[JointType.Spine].Position;

        // Starting pose:
        //   right hand lower than right shoulder && right hand higher than right elbow
        //   && left hand lower than spine
        if ((handRightPoisition.Y < shoulderRightPosition.Y) &&
            (handRightPoisition.Y > skeleton.Joints[JointType.ElbowRight].Position.Y) &&
            (handLeftPosition.Y < spinePosition.Y))
        {
            shoulderDiff = GestureHelper.GetJointDistance(skeleton.Joints[JointType.HandRight],
                skeleton.Joints[JointType.ShoulderLeft]);
            validatePosition = skeleton.Joints[JointType.HandRight].Position;
            startingPostion = skeleton.Joints[JointType.HandRight].Position;
            return true;
        }
        return false;
    }

SwipeToLeftGesture.cs (continued)

    // called for every skeleton frame
    protected override bool IsGestureValid(Skeleton skeletonData)
    {
        // current right hand position
        var currentHandRightPoisition = skeletonData.Joints[JointType.HandRight].Position;

        // the current right hand should be to the left of the previous right hand position,
        // i.e., the right hand is moving to the left
        if (validatePosition.X < currentHandRightPoisition.X)
        {
            // the right hand is moving to the right: stop doing gesture recognition
            return false;
        }

        // update validatePosition using the current right hand position
        validatePosition = currentHandRightPoisition;

        // the gesture is so far so good
        return true;
    }

SwipeToLeftGesture.cs (continued)

    // check if the final pose has been reached
    protected override bool ValidateGestureEndCondition(Skeleton skeleton)
    {
        // distance between the starting right hand position and
        // the last right hand position
        double distance = Math.Abs(startingPostion.X - validatePosition.X);

        // the distance between the current right hand and the left shoulder
        float currentshoulderDiff = GestureHelper.GetJointDistance(skeleton.Joints[JointType.HandRight],
            skeleton.Joints[JointType.ShoulderLeft]);

        // the right hand has moved at least 0.1 m from its starting position and
        // the right hand is getting closer to the left shoulder => we are done!
        if (distance > 0.1 && currentshoulderDiff < shoulderDiff)
            return true;

        // otherwise, the right hand has not yet moved far enough
        return false;
    }

SwipeToLeftGesture.cs (continued)

    protected override bool ValidateBaseCondition(Skeleton skeleton)
    {
        var handRightPoisition = skeleton.Joints[JointType.HandRight].Position;
        var handLeftPosition = skeleton.Joints[JointType.HandLeft].Position;
        var shoulderRightPosition = skeleton.Joints[JointType.ShoulderRight].Position;
        var spinePosition = skeleton.Joints[JointType.Spine].Position;

        // right hand is lower than the right shoulder, and
        // right hand is higher than the right elbow, and
        // left hand is lower than the spine
        if ((handRightPoisition.Y < shoulderRightPosition.Y) &&
            (handRightPoisition.Y > skeleton.Joints[JointType.ElbowRight].Position.Y) &&
            (handLeftPosition.Y < spinePosition.Y))
        {
            // the swipe to the left is still ongoing, so far so good
            return true;
        }

        // the condition is not met, terminate
        return false;
    }
}

GestureRecognitionEngine.cs
- Add a new C# file called GestureRecognitionEngine.cs to the project
- It resembles the previous recognition engine, but uses inheritance
- Add GestureType, RecognitionResult, and GestureEventArgs to the new file, or to three separate files

using System;
using System.Collections.Generic;
using Microsoft.Kinect;

class GestureRecognitionEngine
{
    int SkipFramesAfterGestureIsDetected = 0;
    public event EventHandler<GestureEventArgs> GestureRecognized;
    public GestureType GestureType { get; set; }
    public Skeleton Skeleton { get; set; }
    public bool IsGestureDetected { get; set; }

    // list of gestures to be detected
    private List<GestureBase> gestureCollection = null;

    public GestureRecognitionEngine()
    {
        this.InitilizeGesture();
    }

    // …

GestureRecognitionEngine.cs (continued)

    private void InitilizeGesture()
    {
        this.gestureCollection = new List<GestureBase>();
        //this.gestureCollection.Add(new ZoomInGesture());
        //this.gestureCollection.Add(new ZoomOutGesture());
        //this.gestureCollection.Add(new SwipeToRightGesture());

        // add the SwipeToLeftGesture recognizer to the list
        this.gestureCollection.Add(new SwipeToLeftGesture());
    }

    // reset data structures for a new round of gesture recognition
    private void RestGesture()
    {
        this.gestureCollection = null;
        this.InitilizeGesture();
        this.SkipFramesAfterGestureIsDetected = 0;
        this.IsGestureDetected = false;
    }

GestureRecognitionEngine.cs (continued)

    public void StartRecognize()
    {
        if (this.IsGestureDetected)
        {
            // create a short break once we are done with one round of gesture recognition
            while (this.SkipFramesAfterGestureIsDetected <= 30)
            {
                this.SkipFramesAfterGestureIsDetected++;
            }

            // reset our data structures for a new round of gesture recognition
            this.RestGesture();
            return;
        }

        // perform gesture recognition for every gesture recognizer in our list
        foreach (var item in this.gestureCollection)
        {
            if (item.CheckForGesture(this.Skeleton))
            {
                if (this.GestureRecognized != null)
                {
                    // fire a gesture event when a gesture is recognized
                    this.GestureRecognized(this, new GestureEventArgs(RecognitionResult.Success, item.GestureType));
                    this.IsGestureDetected = true;
                }
            }
        }
    }
}

GestureHelper.cs
- Add a GestureHelper.cs file to your project
- The class has only the following static method, which computes the Euclidean distance between two joints

public static float GetJointDistance(Joint firstJoint, Joint secondJoint)
{
    float distanceX = firstJoint.Position.X - secondJoint.Position.X;
    float distanceY = firstJoint.Position.Y - secondJoint.Position.Y;
    float distanceZ = firstJoint.Position.Z - secondJoint.Position.Z;

    return (float)Math.Sqrt(Math.Pow(distanceX, 2) + Math.Pow(distanceY, 2) + Math.Pow(distanceZ, 2));
}

Build a Gesture Recognition App for the SwipeToLeft Gesture
- User interface (a possible XAML layout is sketched below):
  - TextBox (displays the name of the recognized gesture)
  - Canvas (for drawing the skeleton)
  - Image (for the color video stream)
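The slides do not show the XAML. A minimal layout consistent with the control names used in the code-behind (image1, canvas1, textBox1) might look like the following; the window class name, sizes, and positions are assumptions:

<Window x:Class="GestureRecognitionApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="SwipeToLeft Gesture Recognizer" Height="540" Width="680">
    <Grid>
        <!-- color video stream -->
        <Image Name="image1" Width="640" Height="480" />
        <!-- skeleton overlay drawn on top of the video -->
        <Canvas Name="canvas1" Width="640" Height="480" />
        <!-- displays the name of the recognized gesture -->
        <TextBox Name="textBox1" Height="24" Width="200"
                 HorizontalAlignment="Left" VerticalAlignment="Bottom" Margin="10" />
    </Grid>
</Window>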

Build a Gesture Recognition App
- Add member variables
- Modify the constructor

KinectSensor sensor;
private WriteableBitmap colorBitmap;
private byte[] colorPixels;
Skeleton[] totalSkeleton = new Skeleton[6];
Skeleton skeleton;
GestureRecognitionEngine recognitionEngine;

public MainWindow()
{
    InitializeComponent();
    Loaded += new RoutedEventHandler(WindowLoaded);
}

Build a Gesture Recognition App

private void WindowLoaded(object sender, RoutedEventArgs e)
{
    if (KinectSensor.KinectSensors.Count > 0)
    {
        this.sensor = KinectSensor.KinectSensors[0];
        if (this.sensor != null && !this.sensor.IsRunning)
        {
            this.sensor.Start();
            this.sensor.ColorStream.Enable();
            this.colorPixels = new byte[this.sensor.ColorStream.FramePixelDataLength];
            this.colorBitmap = new WriteableBitmap(this.sensor.ColorStream.FrameWidth,
                this.sensor.ColorStream.FrameHeight, 96.0, 96.0, PixelFormats.Bgr32, null);
            this.image1.Source = this.colorBitmap;
            this.sensor.ColorFrameReady += this.colorFrameReady;

            this.sensor.SkeletonStream.Enable();
            this.sensor.SkeletonFrameReady += skeletonFrameReady;

            // create the recognition engine and subscribe to its gesture event
            recognitionEngine = new GestureRecognitionEngine();
            recognitionEngine.GestureRecognized += gestureRecognized;
        }
    }
}

Build a Gesture Recognition App
- Gesture recognized event handler
- colorFrameReady(), DrawSkeleton(), drawBone(), and ScalePosition() are the same as before

void gestureRecognized(object sender, GestureEventArgs e)
{
    textBox1.Text = e.GestureType.ToString();
}
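The helpers listed above are not repeated on these slides. For reference, a typical colorFrameReady() handler in the Kinect SDK 1.x style looks roughly like this; it is a sketch of the standard pattern, not necessarily the exact code from the earlier lecture:

void colorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame == null)
        {
            return;
        }

        // copy the raw BGR32 pixel data and push it into the WriteableBitmap shown by image1
        colorFrame.CopyPixelDataTo(this.colorPixels);
        this.colorBitmap.WritePixels(
            new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
            this.colorPixels,
            this.colorBitmap.PixelWidth * sizeof(int),
            0);
    }
}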

Build a Gesture Recognition App
- Handle the skeleton frame ready event

void skeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    canvas1.Children.Clear();
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame == null)
        {
            return;
        }
        skeletonFrame.CopySkeletonDataTo(totalSkeleton);

        // pick the first tracked skeleton
        skeleton = (from trackskeleton in totalSkeleton
                    where trackskeleton.TrackingState == SkeletonTrackingState.Tracked
                    select trackskeleton).FirstOrDefault();
        if (skeleton == null)
            return;

        DrawSkeleton(skeleton);

        // feed the skeleton to the recognition engine and run one recognition step
        recognitionEngine.Skeleton = skeleton;
        recognitionEngine.StartRecognize();
    }
}

Challenge Tasks
- Implement the recognition of zoom-in and zoom-out gestures, and use the two gestures to manipulate a large image of your choice
- The following are the rules for zoom-in; you can design the rules for zoom-out in a similar fashion (a sketch of a possible start-condition check follows this list)
  - Start condition:
    - Right hand lower than the right shoulder, left hand lower than the right shoulder
    - Right hand higher than the hip center
    - Left hand higher than the hip center
    - Distance between the two hands smaller than 0.5 m; this distance is recorded in a variable for later comparison
  - Gesture validity check:
    - The current distance between the two hands must be larger than the initial value
  - End condition:
    - The distance between the two hands exceeds 1.0 m
  - Base condition:
    - Right hand lower than the right shoulder
    - Left hand lower than the right shoulder
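As a starting point, a ZoomInGesture start condition following the rules above might look like the sketch below. Only the start condition is filled in; the class name, field name, and the stubbed methods are assumptions, and the remaining checks are left as part of the exercise.

using Microsoft.Kinect;

public class ZoomInGesture : GestureBase
{
    // distance between the two hands when the starting pose is detected (assumed field name)
    private float initialHandDistance;

    public ZoomInGesture() : base(GestureType.ZoomIn) { }

    protected override bool ValidateGestureStartCondition(Skeleton skeleton)
    {
        var handRight = skeleton.Joints[JointType.HandRight].Position;
        var handLeft = skeleton.Joints[JointType.HandLeft].Position;
        var shoulderRight = skeleton.Joints[JointType.ShoulderRight].Position;
        var hipCenter = skeleton.Joints[JointType.HipCenter].Position;

        float handDistance = GestureHelper.GetJointDistance(
            skeleton.Joints[JointType.HandRight], skeleton.Joints[JointType.HandLeft]);

        // both hands below the right shoulder, above the hip center,
        // and closer together than 0.5 m
        if (handRight.Y < shoulderRight.Y && handLeft.Y < shoulderRight.Y &&
            handRight.Y > hipCenter.Y && handLeft.Y > hipCenter.Y &&
            handDistance < 0.5f)
        {
            // record the initial distance for later comparison
            initialHandDistance = handDistance;
            return true;
        }
        return false;
    }

    // The remaining checks (hand distance growing, exceeding 1.0 m, hands still
    // below the right shoulder) follow the rules above and are left as the exercise;
    // the bodies here are placeholders only.
    protected override bool IsGestureValid(Skeleton skeleton) { return true; }
    protected override bool ValidateGestureEndCondition(Skeleton skeleton) { return false; }
    protected override bool ValidateBaseCondition(Skeleton skeleton) { return true; }
}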