Augmented Reality Services based on Embedded Metadata

Byoung-Dai Lee
Department of Computer Science, Kyonggi University, Suwon, Korea


Abstract. Feature extraction and tracking is one of the core AR technologies. In particular, markerless AR provides natural synthesis of real-world scenes and virtual objects, as it identifies objects directly within the video and obtains the relevant information. However, the downside of markerless AR is that it may not be appropriate for resource-constrained devices such as mobile phones, because it requires a considerable amount of computation. In this paper, we propose a method that addresses this problem by embedding into the multimedia content the metadata necessary to provide AR services, such as virtual object information and the supplementary information required for on-screen display.

Keywords: Augmented Reality, Feature Extraction and Tracking, ISO Base Media File, Metadata.

1 Introduction

Augmented Reality (AR) is a technology that provides augmented information services by synthesizing real-time image/voice with virtual objects or supplementary information. Recently, mobile devices with various built-in sensors, such as cameras and GPS, have become widely available, enabling diverse convergence services over high-speed mobile Internet and rapidly spreading mobile AR services. Most existing AR services use real-time image recognition results to provide virtual objects or supplementary information: the video being played is analyzed in real time, and the area where the virtual object will be rendered is identified. However, extracting features by accurately recognizing objects within the image in real time requires a considerable amount of computation, which means that the quality of AR services depends on both the complexity of the feature extraction algorithms and the resource capability of the device.
Along with the difficulty of extracting features, the existing AR services have another weakness: the tight coupling between AR application programs and the augmented information shown to users. For instance, in the case of a service that shows a corporate logo image in the middle of a video to promote a product, when the logo is presented to the users is determined by the logic of the application program. Thus, a corporate logo unrelated to the video content actually being played may appear on screen, reducing the advertising effect.

SoftTech 2013, ASTL Vol. 19, © SERSC

Proceedings, The 2nd International Conference on Software Technology

Fig. 1. Logical structure of the proposed media file with embedded metadata for AR services.

AR in multimedia services based on stored media differs from AR based on general real-time video in that the service provider can edit the multimedia content in advance. Therefore, this paper aims to solve the aforementioned problems by embedding into the multimedia content the metadata necessary to provide AR services, such as virtual object information and the supplementary information required for on-screen display. In particular, this paper proposes a method to construct media files as containers that include only the metadata needed to display AR contents on screen, so that the media files are not tightly coupled with particular AR technologies. Constructing media files with embedded metadata, as proposed in this paper, offers the following advantages. (1) No complicated feature extraction processing is required on the receiving device, enabling easy use on mobile devices, which are typically resource constrained. (2) Deterioration of the multimedia content can be prevented by determining in advance the most suitable location in each scene of the video at which the virtual object will be displayed. (3) The image and the virtual object are not tightly coupled, so the AR service most appropriate for the user context (e.g., user location, device performance) can be provided.

2 Metadata for Augmented Reality

Fig. 1 shows the logical structure of the media files proposed in this paper. In addition to the Audio/Video Track, the media files include a Location Track, which stores the location information indicating where the virtual object provided in AR is displayed, and a 3D Object Track, which stores the actual virtual object information to be displayed at the corresponding location. The main role of the Location Track is to store the location of the space within the image where the virtual object is displayed.
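The track layout of Fig. 1 can be sketched as a minimal data model. This is an illustrative sketch only: the class and field names (`ObjectTrack3D`, `ARMediaFile`, `complexity`, `select_object_track`) are assumptions for exposition, not identifiers from the paper or from the ISO Base Media File Format, and the complexity-based selection is one plausible stand-in for choosing the track most suitable for the device.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ObjectTrack3D:
    """One 3D Object Track: an opaque container for a virtual object,
    regardless of its internal representation format."""
    track_id: str
    fmt: str           # e.g. "x3d" or "collada"; the track itself is format-neutral
    complexity: int    # illustrative capability hint (e.g., polygon count)
    payload: bytes = b""

@dataclass
class ARMediaFile:
    """Logical view of the proposed media file (Fig. 1)."""
    av_tracks: List[str] = field(default_factory=list)        # audio/video track ids
    location_track: List[dict] = field(default_factory=list)  # location samples
    object_tracks: List[ObjectTrack3D] = field(default_factory=list)

    def select_object_track(self, max_complexity: int) -> ObjectTrack3D:
        """Pick the richest 3D Object Track the device can handle,
        mirroring the selection of the track most suitable for user conditions."""
        candidates = [t for t in self.object_tracks if t.complexity <= max_complexity]
        if not candidates:
            raise ValueError("no 3D Object Track fits the device capability")
        return max(candidates, key=lambda t: t.complexity)
```

Because the media file may carry several 3D Object Tracks for the same location, a low-end device can pick a lightweight representation while a high-end device picks a richer one, without either knowing the tracks' internal formats.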
In particular, for natural synthesis between the image and the virtual object, the virtual object must move in step with the image along the time scale as the image is played. For this purpose, this paper defines AR regions on the time scale based on the rate of movement of the virtual object. Fig. 2 depicts an example of defining AR regions on the time scale based on the rate of movement of the virtual object.

Fig. 2. An example of Location Track.

According to Fig. 2, the virtual object is displayed on screen from 15:10 to 15:22 after the image begins playing and moves from the upper-left to the lower-right corner of the screen. In AR Region #1 (15:10:00-15:15:00), it moves from the upper-left to the lower-right corner at a constant rate (e.g., α m/sec); in AR Region #2 (15:15:00-15:20:00), it moves from the lower-left to the lower-right corner at the same rate as in AR Region #1. However, in AR Region #3 (15:20:00-15:22:00), the trajectory of the object is the same as in AR Region #2 (that is, from the lower left to the lower right), whereas the rate of movement differs (e.g., β m/sec); it is therefore defined as a separate AR region, and the object's location information is stored accordingly. As stated above, an AR region stores the location information of an object that moves in a straight line at a constant speed along the time scale. Therefore, the narrower the AR regions, the smoother the superimposition of the virtual object onto the image.

The 3D Object Track describes the actual virtual object to be displayed at the location within the image specified in the Location Track. Multiple 3D Object Tracks can exist in the media files to support various AR contents, and the 3D Object Track most suitable for the user's conditions is selected using the reference data on the virtual object included in the Location Track. Another characteristic of the 3D Object Track is that it is neutral with respect to the representation technique of virtual objects. Virtual objects used in AR can be represented using various techniques; the 3D Object Track is not tied to any particular representation but acts as a container regardless of the internal representation, thereby providing neutrality in the representation technique of virtual objects.
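Since each AR region describes straight-line motion at a constant rate, a player only needs the region's endpoints to recover the object's position at any playback time by linear interpolation. The sketch below assumes this reading; the names (`ARRegion`, `object_position`), seconds-based timestamps, and normalized screen coordinates are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ARRegion:
    """One Location Track entry: a timeline span in which the virtual
    object moves in a straight line at a constant rate."""
    start: float                    # playback time, seconds
    end: float
    start_pos: Tuple[float, float]  # normalized screen coordinates at region start
    end_pos: Tuple[float, float]    # normalized screen coordinates at region end

def object_position(regions: List[ARRegion], t: float) -> Optional[Tuple[float, float]]:
    """Return the object's interpolated screen position at time t,
    or None when the object is not displayed at t."""
    for r in regions:
        if r.start <= t <= r.end:
            f = (t - r.start) / (r.end - r.start)   # fraction of the region elapsed
            return (r.start_pos[0] + f * (r.end_pos[0] - r.start_pos[0]),
                    r.start_pos[1] + f * (r.end_pos[1] - r.start_pos[1]))
    return None
```

This also makes the paper's closing remark concrete: a curved or accelerating trajectory is approximated by a chain of such regions, so narrower regions yield a smoother superimposition at the cost of more Location Track entries.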
3 Conclusion

Feature extraction and tracking, one of the core AR technologies, is critical for determining the area on screen in which the virtual object is to be displayed, and it remains one of the most difficult fields of study. Many related technologies exist, but they are of limited use on resource-constrained mobile devices because, in most cases, they require a significant amount of computation. Moreover, heavy computation ultimately leads to high battery consumption, making such technologies even more difficult to apply to mobile devices. In this paper, we proposed a method to construct AR-enabled media files by embedding into the multimedia content the metadata necessary to provide AR services, such as virtual object information and the supplementary information required for on-screen display.

Acknowledgments. This work was supported by a Kyonggi University Research Grant.