File Format for Representing Live Actor and Entity in MAR

Kwan-Hee Yoo, Chungbuk National University
ISO/IEC JTC 1 SC24 WG9 Meeting, 24 August 2015

Augmented Reality Continuum
[Figure: Paul Milgram's Reality-Virtuality Continuum (1994), running from the Real Environment through Augmented Reality and Augmented Virtuality to the Virtual Environment; everything between the two extremes is Mixed Reality. Image credits: HITLab, KBS]

System Framework for representing LAEs in a MAR world
[Figure: block diagram] The user and the physical world are connected through sensors (2D/3D camera, GPS, gyro, etc.), a recognizer/tracker, a spatial mapper, an event mapper, a renderer/simulator, and a display/UI, all driven by the MAR scene (spatial scene, events, targets, etc.).

Live Actor and Entity in a MAR world
A live actor and entity (LAE) exists both in the real world and in the MAR world. Two needs follow:
- How to sense LAEs in the real world
- How to model objects for LAEs

Characteristics of Live Actor and Entity in a MAR world
- Sensing LAEs
- Tracking LAEs
- Recognizing LAEs
- Spatial mapping of LAEs into the MAR world
- Event mapping between LAEs and the MAR world
Together these call for a file format for representing the live actor and entity.

File Format for Representing Live Actor and Entity
- Use X3D files or other scene formats
- Use HTML5
- Define object models for live actors and entities
- Define the motion volume of live actors and entities in the real world
- Define the motion volume of live actors and entities in the virtual world
- Define the spatial mapper
- Define the event scene
- Define the event mapper
A sketch combining these parts follows this list.
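
The slides introduce these nodes one at a time; as a minimal sketch, a complete description might compose them as follows. The enclosing MARScene element is a hypothetical wrapper, not defined in this proposal; the node and attribute names come from the definitions below.

<MARScene>
  <RCSensingDeviceNode id="cam0" type="camera" fov="45" framerate="20"/>
  <RCBBRSNode id="bbrs0" sensingDevice="cam0"
      startpoint="0 0 0" endpoint="320 240 0"/>
  <RCBBVSNode id="bbvs0" virtualspace="scene.3ds"
      startpoint="0 0 0" endpoint="1 1 1"/>
  <RCSpatialMapper id="sm0" realspace="bbrs0" virtualspace="bbvs0"
      direction="0 0 1" scale="1 1 1" up="0 1 0" vcObject="plane.3ds"/>
  <!-- Event scene and event mapper nodes are listed above
       but not yet defined in these slides. -->
</MARScene>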

Sensing Live Actor and Entity
RCSensingDeviceNode {
  SFString [in]  id               ""
  SFString [in]  description      ""
  SFString [in]  type             "camera"  // "camera" or "depthcamera"
  SFFloat  [in]  fov              45.0      // field of view (degrees)
  SFInt    [in]  framerate        20        // frames per second
  SFImage  [out] image            0 0 0     // captured image pixel values
  MFString [in]  jointtypes       ""        // names of joint positions (H-Anim)
  MFVector [out] values           0 0 0     // depth coordinates of the joint positions
  SFBool   [in]  usedChromakeying false     // whether chroma keying is applied (true/false)
}
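
For instance, restricting tracking to a few named joints might look like the following. This is a hypothetical variation; the examples later in these slides simply use jointtypes="all", and the joint names here assume the H-Anim naming convention.

<RCSensingDeviceNode id="depthcam0" type="depthcamera" fov="50" framerate="30"
    jointtypes="HumanoidRoot l_shoulder r_shoulder l_wrist r_wrist"
    usedChromakeying="false">
</RCSensingDeviceNode>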

Bounding Box Node for Real World
RCBBRSNode {
  SFString [in]     id            ""
  SFString [in]     description   ""
  SFVector [in,out] startpoint    0 0 0      // one corner of the box (image x, y; sensor depth z)
  SFVector [in,out] endpoint      320 240 0  // opposite corner of the box
  RCSensingDeviceNode [in] sensingDevice ""  // sensing device that observes this space
}

Bounding Box Node for Virtual World
RCBBVSNode {
  SFString [in]     id           ""
  SFString [in]     description  ""
  SFVector [in,out] startpoint   0 0 0
  SFVector [in,out] endpoint     1 1 1
  SFString [in]     virtualspace NULL   // virtual-space object to load (e.g., a .3ds scene)
  SFVector [out]    vsStartpoint 0 0 0  // resulting corner in virtual-space coordinates
  SFVector [out]    vsEndpoint   1 1 1
}

Spatial Mapper
RCSpatialMapper {
  SFString [in]       id            ""
  SFString [in]       description   ""
  RCBBRSNode [in,out] realspace     ""     // bounding box in real space
  RCBBVSNode [in,out] virtualspace  ""     // bounding box in virtual space
  SFVector [in,out]   direction     0 0 1  // facing direction of the mapped LAE
  SFVector [in,out]   scale         1 1 1  // size adjustment
  SFVector [in,out]   up            0 1 0  // up vector
  SFBool   [in]       usedCollision false  // whether collisions with virtual objects are handled
  SFString [in,out]   vcObject      NULL   // virtual object the LAE is composited onto
}
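
How the mapper combines these fields is not spelled out on the slide; one plausible reading (an assumption, not part of the proposal) is a component-wise box-to-box mapping: a sensed point p_r is normalized within the real-space box and re-expanded into the virtual-space box, with scale applied on top:

\[
p_n = (p_r - s_r) \oslash (e_r - s_r), \qquad
p_v = s_v + \mathrm{scale} \odot p_n \odot (e_v - s_v)
\]

where (s_r, e_r) and (s_v, e_v) are the startpoint/endpoint pairs of the RCBBRSNode and RCBBVSNode, \oslash and \odot denote component-wise division and multiplication, and direction and up then orient the composited LAE inside the virtual scene.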

From General Camera Sensor
<RCSensingDeviceNode id="cam0" type="camera" fov="50" framerate="30"
    usedChromakeying="false" jointtypes="all">
</RCSensingDeviceNode>
Get two points for a bounding box in real space:
<RCBBRSNode id="bbrs1" description="movable space of real characters in real space"
    startpoint="0 0 8000" endpoint="640 480 30000" sensingDevice="cam0">
</RCBBRSNode>
Get two points for a bounding box in virtual space:
<RCBBVSNode id="bbvs1" description="movable space of real characters in virtual space"
    startpoint="0 0 0" endpoint="1 1 1" virtualspace="classroom.3ds">
</RCBBVSNode>
<RCSpatialMapper id="rcsp1" realspace="bbrs1" virtualspace="bbvs1"
    direction="0 0 1" vcObject="plane.3ds">
</RCSpatialMapper>

From Depth Camera Sensor
<RCSensingDeviceNode id="depthcam1" type="depthcamera" fov="50" framerate="30"
    jointtypes="all">
</RCSensingDeviceNode>
Get two points for a bounding box in real space:
<RCBBRSNode id="bbrs2" description="movable space of real characters in real space"
    startpoint="0 0 8000" endpoint="640 480 30000" sensingDevice="depthcam1">
</RCBBRSNode>
Get two points for a bounding box in virtual space:
<RCBBVSNode id="bbvs2" description="movable space of real characters in virtual space"
    startpoint="0 0 0" endpoint="1 1 1" virtualspace="artRoom.3ds">
</RCBBVSNode>
<RCSpatialMapper id="rcsp2" realspace="bbrs2" virtualspace="bbvs2"
    direction="0 0 1" vcObject="person.3ds">
</RCSpatialMapper>

Size Scale
<RCSensingDeviceNode id="cam0" type="depthcamera" fov="50" framerate="30"
    usedChromakeying="false">
</RCSensingDeviceNode>
Get two points for a bounding box in real space:
<RCBBRSNode id="bbrs3" description="movable space of real characters in real space"
    startpoint="0 0 0" endpoint="640 480 30000" sensingDevice="cam0">
</RCBBRSNode>
Get two points for a bounding box in virtual space:
<RCBBVSNode id="bbvs3" description="movable space of real characters in virtual space"
    startpoint="0 0 0" endpoint="1 1 1" virtualspace="classroom.3ds">
</RCBBVSNode>
<RCSpatialMapper id="rcsp3" realspace="bbrs3" virtualspace="bbvs3"
    direction="0 0 1" scale="1.5 1 1" vcObject="person.3ds">
</RCSpatialMapper>

Direction update
<RCSensingDeviceNode id="cam0" type="depthcamera" fov="50" framerate="30"
    usedChromakeying="false">
</RCSensingDeviceNode>
Get two points for a bounding box in real space:
<RCBBRSNode id="bbrs3" description="movable space of real characters in real space"
    startpoint="0 0 0" endpoint="640 480 30000" sensingDevice="cam0">
</RCBBRSNode>
Get two points for a bounding box in virtual space:
<RCBBVSNode id="bbvs3" description="movable space of real characters in virtual space"
    startpoint="0 0 0" endpoint="1 1 1" virtualspace="classroom.3ds">
</RCBBVSNode>
<RCSpatialMapper id="rcsp3" realspace="bbrs3" virtualspace="bbvs3"
    direction="-1 0 1" scale="1.5 1 1" vcObject="person.3ds">
</RCSpatialMapper>

Chromakeying-1
<RCSensingDeviceNode id="cam0" type="depthcamera" fov="50" framerate="30"
    usedChromakeying="true">
</RCSensingDeviceNode>
Get two points for a bounding box in real space:
<RCBBRSNode id="bbrs4" description="movable space of real characters in real space"
    startpoint="0 0 0" endpoint="640 480 30000" sensingDevice="cam0">
</RCBBRSNode>
Get two points for a bounding box in virtual space:
<RCBBVSNode id="bbvs4" description="movable space of real characters in virtual space"
    startpoint="0 0 0" endpoint="1 1 1" virtualspace="classRoom.3ds">
</RCBBVSNode>
<RCSpatialMapper id="rcsp4" realspace="bbrs4" virtualspace="bbvs4"
    direction="0 0 1" scale="1 1 1" vcObject="plane.3ds">
</RCSpatialMapper>

Chromakeying-2
<RCSensingDeviceNode id="cam0" type="depthcamera" fov="50" framerate="30"
    usedChromakeying="true">
</RCSensingDeviceNode>
Get two points for a bounding box in real space:
<RCBBRSNode id="bbrs4" description="movable space of real characters in real space"
    startpoint="0 0 15000" endpoint="640 480 30000" sensingDevice="cam0">
</RCBBRSNode>
Get two points for a bounding box in virtual space:
<RCBBVSNode id="bbvs4" description="movable space of real characters in virtual space"
    startpoint="0 0 0.2" endpoint="1 1 0.8" virtualspace="artRoom.3ds">
</RCBBVSNode>
<RCSpatialMapper id="rcsp4" realspace="bbrs4" virtualspace="bbvs4"
    direction="0 0 1" vcObject="plane.3ds">
</RCSpatialMapper>
Compared with Chromakeying-1, the real-space box starts farther from the camera (z = 15000) and the virtual-space box is confined to the middle of the room (z from 0.2 to 0.8).