EEC-693/793 Applied Computer Vision with Depth Cameras

Presentation transcript:

EEC-693/793 Applied Computer Vision with Depth Cameras Lecture 14 Wenbing Zhao wenbing@ieee.org

Outline Unity3D + ZDK http://www.zigfu.com/en/zdk/unity3d/

Creating a New Unity Project
Open Unity and create a new project, choosing a location folder of your choice
Import the ZDK plugin:
Go to the Assets menu, select "Import Package", then "Custom Package…"
In the dialog, find and select the ZDK plugin and click OK: ZDK_Unity40_1.1_trial.unitypackage
In the "Import Package" dialog, keep the default selection and click the Import button

Project Layout after ZDK is imported

Constructing the Scene
Add the following items to the scene:
Avatar: Dana (from ZDK)
In Assets in the Project panel, go to the ZigFu folder, then the _Data folder
Drag and drop Dana@t-pose_3 to the Hierarchy panel
Rotate it 180 degrees so that it faces the front
Floor
Add a Cube game object and change its size to a thin layer
Scale: x=30, y=0.1, z=30
Change its material to Floor (imported from ZDK)
In the Inspector, go to the Mesh Renderer section, then Materials => Element 0 => browse => choose Floor
Directional light
Empty object: to enable bootstrapping with ZDK
From the GameObject menu, choose Create Empty
Rename it to InitZDK (right-click the object, then Rename)
Adjust the main camera so that Dana almost fills the entire game view
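The floor and bootstrap-object steps above can also be sketched in code. This is only a hypothetical alternative to the manual editor steps; the class name SceneSetup is invented for illustration, and assigning the Floor material is still done in the editor as described.

```csharp
using UnityEngine;

// Sketch only: creates the thin floor cube and the empty InitZDK object
// from the steps above in code rather than through the editor menus.
public class SceneSetup : MonoBehaviour
{
    void Start()
    {
        GameObject floor = GameObject.CreatePrimitive(PrimitiveType.Cube);
        floor.name = "Floor";
        // Thin layer, as in the slide: x=30, y=0.1, z=30
        floor.transform.localScale = new Vector3(30f, 0.1f, 30f);

        // Empty object used to bootstrap ZDK
        GameObject init = new GameObject("InitZDK");
    }
}
```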

Connecting Existing Scripts
Change the Project panel to one-column layout
Expand the ZigFu folder, then expand the Scripts folder and all subfolders
Locate the Zig script and drag and drop it onto InitZDK in the Hierarchy panel
Drag and drop the ZigDepthViewer script (under the Viewers subfolder) onto InitZDK
Drag and drop the ZigUsersRadar script (under the Viewers subfolder) onto InitZDK
Drag and drop the ZigEngageSingleUser script (under the UserEngagers subfolder) onto InitZDK
Drag and drop Dana from the Hierarchy panel to the Engaged Users section under Zig Engage Single User in the Inspector
Drag and drop the kinectSpecific script (under the _Internal subfolder) onto InitZDK
Drag and drop the ZigSkeleton script (under the UserControls subfolder) onto Dana
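The same wiring can be expressed with AddComponent calls; a minimal sketch, assuming the ZDK component types named above and that dana references the avatar GameObject (the ZdkWiring class itself is hypothetical):

```csharp
using UnityEngine;

// Sketch of the drag-and-drop wiring above, done in code.
// Assumes the ZDK types (Zig, ZigDepthViewer, etc.) from the imported package.
public static class ZdkWiring
{
    public static void Wire(GameObject dana)
    {
        GameObject init = new GameObject("InitZDK");
        init.AddComponent<Zig>();
        init.AddComponent<ZigDepthViewer>();
        init.AddComponent<ZigUsersRadar>();
        var engage = init.AddComponent<ZigEngageSingleUser>();
        engage.EngagedUsers.Add(dana);   // same as dragging Dana to Engaged Users
        dana.AddComponent<ZigSkeleton>();
    }
}
```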

After Connecting Existing Scripts

Connecting Joints for Dana the Avatar
Expand the entire hierarchy of the Dana mesh
Drag and drop matching joints from the Dana mesh in the Hierarchy panel to the appropriate joints under the Zig Skeleton section in the Inspector:
Head => Head
Neck => Neck
Torso => Spine1
Waist => Spine
Left Shoulder => LeftArm
Left Elbow => LeftForeArm
Left Wrist => LeftHand
Right Shoulder => RightArm
Right Elbow => RightForeArm
Right Wrist => RightHand
Left Hip => LeftUpLeg
Left Knee => LeftLeg
Left Ankle => LeftFoot
Right Hip => RightUpLeg
Right Knee => RightLeg
Right Ankle => RightFoot
Check the Mirror checkbox in the Dana inspector

Connecting Joints for Dana the Avatar

What Do the Scripts Do?
Zig.cs
Takes user settings on what to update, smoothing, and smoothing parameters
Public member variables can be set via the Inspector

public class Zig : MonoBehaviour {
    public ZigInputType inputType = ZigInputType.Auto;
    public ZigInputSettings settings = new ZigInputSettings();
    public List<GameObject> listeners = new List<GameObject>();
    public bool Verbose = true;
    // ...
}

Note: you can't select objects in the Game view and, unlike the Scene view, it has no default lighting. You have to use the Main Camera to see objects in the Game window.

ZigDepthViewer.cs
OnGUI(): called for rendering and handling GUI events

static void DrawTexture(Rect position, Texture image, ScaleMode scaleMode = ScaleMode.StretchToFill, bool alphaBlend = true, float imageAspect = 0);

position: rectangle on the screen to draw the texture within
image: texture to display
scaleMode: how to scale the image when its aspect ratio doesn't fit the aspect ratio to be drawn within
alphaBlend: whether to apply alpha blending when drawing the image (enabled by default)
imageAspect: aspect ratio to use for the source image

void OnGUI() {
    if (null == target) {
        GUI.DrawTexture(new Rect(Screen.width - texture.width - 10,
                                 Screen.height - texture.height - 10,
                                 texture.width, texture.height), texture);
    }
}

ZigDepthViewer.cs
Texture: the visual and especially tactile quality of a surface; in Unity, Texture is the parent class of Texture2D
Texture2D reference: http://docs.unity3d.com/Documentation/ScriptReference/Texture2D.html
You can also add a button:

void OnGUI() {
    if (GUI.Button(new Rect(10, 10, 150, 100), "I am a button"))
        print("You clicked the button!");
}

ZigDepthViewer.cs
Getting input from Kinect

void Zig_Update(ZigInput input) {
    if (UseHistogram) {
        UpdateHistogram(ZigInput.Depth); // ZigInput.Depth contains the depth data
    } else {
        depthToColor[0] = Color.black;
        for (int i = 1; i < MaxDepth; i++) {
            float intensity = 1.0f - (i / (float)MaxDepth);
            depthToColor[i].r = (byte)(BaseColor.r * intensity);
            depthToColor[i].g = (byte)(BaseColor.g * intensity);
            depthToColor[i].b = (byte)(BaseColor.b * intensity);
            depthToColor[i].a = 255;
        }
    }
    UpdateTexture(ZigInput.Depth);
}

ZigDepthViewer.cs
Updating the depth image

void UpdateTexture(ZigDepth depth) {
    short[] rawDepthMap = depth.data;
    int depthIndex = 0;
    int factorX = depth.xres / textureSize.Width;
    int factorY = ((depth.yres / textureSize.Height) - 1) * depth.xres;
    // invert Y axis while doing the update
    for (int y = textureSize.Height - 1; y >= 0; --y, depthIndex += factorY) {
        int outputIndex = y * textureSize.Width;
        for (int x = 0; x < textureSize.Width; ++x, depthIndex += factorX, ++outputIndex) {
            outputPixels[outputIndex] = depthToColor[rawDepthMap[depthIndex]];
        }
    }
    texture.SetPixels32(outputPixels);
    texture.Apply();
}

ZigUsersRadar.cs: tracks where the user is

void OnGUI() {
    if (!ZigInput.Instance.ReaderInited) return;
    int width = (int)((float)PixelsPerMeter * (RadarRealWorldDimensions.x / 1000.0f));
    int height = (int)((float)PixelsPerMeter * (RadarRealWorldDimensions.y / 1000.0f));
    GUI.BeginGroup(new Rect(Screen.width - width - 20, 20, width, height));
    Color oldColor = GUI.color;
    GUI.color = boxColor;
    GUI.Box(new Rect(0, 0, width, height), "Users Radar", style);
    GUI.color = oldColor;
    foreach (ZigTrackedUser currentUser in ZigInput.Instance.TrackedUsers.Values) {
        // normalize the center of mass to radar dimensions
        Vector3 com = currentUser.Position;
        Vector2 radarPosition = new Vector2(com.x / RadarRealWorldDimensions.x,
                                            -com.z / RadarRealWorldDimensions.y);
        // X axis: 0 in real world is actually 0.5 in radar units (middle of field of view)
        radarPosition.x += 0.5f;
        radarPosition.x = Mathf.Clamp(radarPosition.x, 0.0f, 1.0f);
        radarPosition.y = Mathf.Clamp(radarPosition.y, 0.0f, 1.0f);
        Color orig = GUI.color;
        GUI.color = (currentUser.SkeletonTracked) ? Color.blue : Color.red;
        GUI.Box(new Rect(radarPosition.x * width - 10, radarPosition.y * height - 10, 20, 20),
                currentUser.Id.ToString());
        GUI.color = orig;
    }
    GUI.EndGroup();
}

ZigEngageSingleUser.cs
Connects ZigInput to the avatar: this is why you must drag and drop Dana to the Engaged Users field

public class ZigEngageSingleUser : MonoBehaviour {
    public bool SkeletonTracked = true;
    public bool RaiseHand;
    public List<GameObject> EngagedUsers;

    void Start() {
        // make sure we get zig events
        ZigInput.Instance.AddListener(gameObject);
    }

    void Zig_Update(ZigInput zig) {
        if (SkeletonTracked && null == engagedTrackedUser) {
            foreach (ZigTrackedUser trackedUser in zig.TrackedUsers.Values) {
                if (trackedUser.SkeletonTracked) {
                    EngageUser(trackedUser);
                }
            }
        }
    }
}

ZigEngageSingleUser.cs

void EngageUser(ZigTrackedUser user) {
    if (null == engagedTrackedUser) {
        engagedTrackedUser = user;
        foreach (GameObject go in EngagedUsers) user.AddListener(go);
        SendMessage("UserEngaged", this, SendMessageOptions.DontRequireReceiver);
    }
}

Component.SendMessage:
void SendMessage(string methodName, object value = null, SendMessageOptions options = SendMessageOptions.RequireReceiver);
// Calls the method named methodName on every MonoBehaviour in this game object
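To illustrate the SendMessage mechanism, any MonoBehaviour attached to the same game object could receive the "UserEngaged" message simply by defining a method of that name. The receiver class below is hypothetical, not part of ZDK:

```csharp
using UnityEngine;

// Hypothetical receiver: a component on the same GameObject whose
// "UserEngaged" method is invoked via SendMessage in EngageUser above.
public class EngagementLogger : MonoBehaviour
{
    void UserEngaged(ZigEngageSingleUser engager)
    {
        Debug.Log("A user has been engaged");
    }
}
```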

kinectSpecific.cs: Kinect-specific settings

void OnGUI() {
    longWord = GUI.TextField(new Rect(10, 10, 200, 30),
                             readingAngle ? getAngle().ToString() : longWord, 20);
    if (GUI.Button(new Rect(10, 40, 200, 30), "SetElevation")) {
        angle = int.Parse(longWord);
        NuiWrapper.NuiCameraElevationSetAngle(angle);
        t = new Thread(setAngle); // attempted a parameterized Thread to no avail
        t.Start();
        Thread.Sleep(0);
    }
    readingAngle = GUI.Toggle(new Rect(10, 80, 200, 30), readingAngle, "Read Angle");
    bool nNearMode = GUI.Toggle(new Rect(10, 160, 200, 20), NearMode, "Near Mode");
    if (nNearMode != NearMode) {
        NearMode = nNearMode;
        ZigInput.Instance.SetNearMode(NearMode);
    }
    bool nSeatedMode = GUI.Toggle(new Rect(10, 190, 200, 20), SeatedMode, "Seated Mode");
    bool nTrackSkeletonInNearMode = GUI.Toggle(new Rect(10, 220, 200, 20),
                                               TrackSkeletonInNearMode, "Track Skeleton In NearMode");
    if ((nSeatedMode != SeatedMode) || (TrackSkeletonInNearMode != nTrackSkeletonInNearMode)) {
        SeatedMode = nSeatedMode;
        TrackSkeletonInNearMode = nTrackSkeletonInNearMode;
        ZigInput.Instance.SetSkeletonTrackingSettings(SeatedMode, TrackSkeletonInNearMode);
    }
}

ZigSkeleton.cs

public class ZigSkeleton : MonoBehaviour {
    public Transform Head;
    public Transform Neck;
    public Transform Torso;
    public Transform Waist;
    public Transform LeftCollar;
    public Transform LeftShoulder;
    public Transform LeftElbow;
    public Transform LeftWrist;
    public Transform LeftHand;
    public Transform LeftFingertip;
    public Transform RightCollar;
    public Transform RightShoulder;
    public Transform RightElbow;
    public Transform RightWrist;
    public Transform RightHand;
    public Transform RightFingertip;
    public Transform LeftHip;
    public Transform LeftKnee;
    public Transform LeftAnkle;
    public Transform LeftFoot;
    public Transform RightHip;
    public Transform RightKnee;
    public Transform RightAnkle;
    public Transform RightFoot;

    public bool mirror = false;
    public bool UpdateJointPositions = false;
    public bool UpdateRootPosition = false;
    public bool UpdateOrientation = true;
    public bool RotateToPsiPose = false;
    public float RotationDamping = 30.0f;
    public float Damping = 30.0f;
    public Vector3 Scale = new Vector3(0.001f, 0.001f, 0.001f);
    public Vector3 PositionBias = Vector3.zero;

    private Transform[] transforms;
    private Quaternion[] initialRotations;
    private Vector3 rootPosition;
}

Quaternions are used to represent rotations; they are based on complex numbers and are not easy to understand intuitively.
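Because quaternion components are hard to interpret directly, Unity lets you build rotations from Euler angles instead; a small sketch of the standard API (the QuaternionDemo class is invented for illustration):

```csharp
using UnityEngine;

// Quaternions encode rotations compactly; in practice you construct them
// from Euler angles or axis-angle rather than setting x/y/z/w by hand.
public class QuaternionDemo : MonoBehaviour
{
    void Start()
    {
        // 180 degrees about the Y axis, e.g. to make an avatar face the camera
        Quaternion aboutFace = Quaternion.Euler(0f, 180f, 0f);
        transform.rotation = aboutFace * transform.rotation;
    }
}
```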

ZigSkeleton.cs

ZigJointId mirrorJoint(ZigJointId joint) {
    switch (joint) {
        case ZigJointId.LeftCollar: return ZigJointId.RightCollar;
        case ZigJointId.LeftShoulder: return ZigJointId.RightShoulder;
        case ZigJointId.LeftElbow: return ZigJointId.RightElbow;
        case ZigJointId.LeftWrist: return ZigJointId.RightWrist;
        case ZigJointId.LeftHand: return ZigJointId.RightHand;
        case ZigJointId.LeftFingertip: return ZigJointId.RightFingertip;
        case ZigJointId.LeftHip: return ZigJointId.RightHip;
        // ... map right to left
        default: return joint;
    }
}

ZigSkeleton.cs

public void Awake() {
    int jointCount = Enum.GetNames(typeof(ZigJointId)).Length;
    transforms = new Transform[jointCount];
    initialRotations = new Quaternion[jointCount];
    transforms[(int)ZigJointId.Head] = Head;
    transforms[(int)ZigJointId.Neck] = Neck;
    transforms[(int)ZigJointId.Torso] = Torso;
    transforms[(int)ZigJointId.Waist] = Waist;
    // ...
    // save all initial rotations
    // NOTE: assumes skeleton model is in "T" pose since all rotations are relative to that pose
    foreach (ZigJointId j in Enum.GetValues(typeof(ZigJointId))) {
        if (transforms[(int)j]) {
            // we will store the relative rotation of each joint from the gameobject rotation;
            // we need this since we will be setting the joint's rotation (not localRotation) but we
            // still want the rotations to be relative to our game object
            initialRotations[(int)j] = Quaternion.Inverse(transform.rotation) * transforms[(int)j].rotation;
        }
    }
}

ZigSkeleton.cs

void Zig_UpdateUser(ZigTrackedUser user) {
    UpdateRoot(user.Position);
    if (user.SkeletonTracked) {
        foreach (ZigInputJoint joint in user.Skeleton) {
            if (joint.GoodPosition) UpdatePosition(joint.Id, joint.Position);
            if (joint.GoodRotation) UpdateRotation(joint.Id, joint.Rotation);
        }
    }
}

void UpdateRoot(Vector3 skelRoot) {
    // +Z is backwards in OpenNI coordinates, so reverse it
    rootPosition = Vector3.Scale(new Vector3(skelRoot.x, skelRoot.y, skelRoot.z),
                                 doMirror(Scale)) + PositionBias;
    if (UpdateRootPosition) {
        transform.localPosition = (transform.rotation * rootPosition);
    }
}

Vector3.Scale(Vector3 a, Vector3 b): multiplies two vectors component-wise.

ZigSkeleton.cs

void UpdateRotation(ZigJointId joint, Quaternion orientation) {
    joint = mirror ? mirrorJoint(joint) : joint;
    // make sure something is hooked up to this joint
    if (!transforms[(int)joint]) { return; }
    if (UpdateOrientation) {
        Quaternion newRotation = transform.rotation * orientation * initialRotations[(int)joint];
        if (mirror) {
            newRotation.y = -newRotation.y;
            newRotation.z = -newRotation.z;
        }
        transforms[(int)joint].rotation = Quaternion.Slerp(transforms[(int)joint].rotation,
                                                           newRotation,
                                                           Time.deltaTime * RotationDamping);
    }
}

// static Quaternion Slerp(Quaternion from, Quaternion to, float t);
// Spherically interpolates between from and to by t.

void UpdatePosition(ZigJointId joint, Vector3 position) {
    joint = mirror ? mirrorJoint(joint) : joint;
    if (!transforms[(int)joint]) { return; }
    if (UpdateJointPositions) {
        Vector3 dest = Vector3.Scale(position, doMirror(Scale)) - rootPosition;
        // Vector3.Lerp: linearly interpolates between two vectors
        transforms[(int)joint].localPosition = Vector3.Lerp(transforms[(int)joint].localPosition,
                                                            dest,
                                                            Time.deltaTime * Damping);
    }
}