Application of AR to Water System Isolation Task in NPP
Md. Khalaquzzaman
7th April 2008


Outline
- Introduction
- Description of the water isolation system (WIS) task
- Application of AR to the WIS
- Maintenance support system using AR
- Tracking methods for AR applications
- Design of the tracking method
- Evaluation of the tracking method
- Conclusion and future work

Characteristics of the water system isolation task
The environment where the water system isolation task is conducted has the following characteristics:
- There is a huge number of valves, and they are distributed all over the plant.
- There are many metal obstacles in the working environment.
- It is difficult to install many new apparatuses because of lack of space.
- There are many types of equipment in the plant, and the working environment is very complicated.
The limitations concerning the workers who conduct the water system isolation task are as follows:
- A worker must be equipped with a helmet.
- A worker cannot carry equipment that restricts his movement.

Water system isolation task in NPP
In periodic maintenance of an NPP, some water systems need to be isolated by valve operation in order to disassemble and check plant equipment.
- There is a huge number of valves in an NPP.
- Although some of them are controlled remotely, most of them are operated manually.
- The task needs a pair of workers to perform.
Manual water system isolation is performed as follows:
Step 1: A paper-based instruction sheet is supplied at the central control room, describing the valve operations, such as the procedure of work and the IDs of the valves.
Step 2: Walk to the valve location according to the instruction sheet.
Step 3: Identify the specified valve and confirm its status.
Step 4: Operate the valve and mark the work list.
Step 5: Walk to the next location (repeat steps 2 to 4).
Step 6: Input the performed valve operations at the central control room.

Application of Augmented Reality
An AR system consists of the following major components:
- Display: head-mounted or handheld
- Viewpoint tracking: magnetic, ultrasonic, computer vision, ...
- Image generation

Comparison of Tracking Methods

Development of a tracking method for the maintenance support system
The hybrid tracking method combines a vision sensor with inertial sensors.
Vision sensor methods:
- Artificial marker method: software calculates the relative position and orientation of the camera against artificial markers.
- Natural feature method: the algorithm recognizes an object's corners and lines.

Artificial Markers
Visual code markers have evolved as a bridge connecting the physical world to the cyber world, a tool that contributes to stronger human-computer interaction and ubiquitous computing. The visual code marker comprises three salient parts: the fixed guide bars, the fixed corner elements, and the data area, as illustrated in Figure 1.

Algorithm for marker-based tracking
- The camera captures video of the real world and sends it to the computer.
- Software on the computer searches each video frame for square shapes.
- If a square is found, the software calculates the position of the camera relative to the black square.
- Once the position of the camera is known, a computer graphics model is drawn from that same position.
- This model is drawn on top of the video of the real world, so it appears stuck on the square marker.
- The final output is shown in the handheld display, so when the user looks through the display they see graphics overlaid on the real world.
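The square-search step of the loop above can be sketched in NumPy. This is a toy illustration only (real trackers such as ARToolKit use contour analysis): it treats the frame as a binary image and checks whether the black region forms a solid square, returning its bounding corners.

```python
import numpy as np

def find_square(binary):
    """Return corner coordinates (x0, y0, x1, y1) of a solid black
    square in a binary image (0 = black, 1 = white), or None."""
    ys, xs = np.nonzero(binary == 0)          # locate black pixels
    if len(xs) == 0:
        return None                           # nothing to track
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    w, h = x1 - x0 + 1, y1 - y0 + 1
    region = binary[y0:y1 + 1, x0:x1 + 1]
    if w == h and np.all(region == 0):        # square and solid
        return (x0, y0, x1, y1)               # candidate marker corners
    return None

frame = np.ones((10, 10), int)
frame[2:6, 3:7] = 0                           # a 4x4 black square
print(find_square(frame))                     # (3, 2, 6, 5)
```

In a full pipeline the returned corners would feed the camera-pose calculation described in the next steps.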

Algorithm for marker-based tracking
Figure 3: (a) original image, (b) binarization, (c) fiducial edge detection, (d) fiducial corner detection.
- The outer black band of the marker allows a candidate fiducial to be located in a captured image, and the interior image allows the candidate to be identified from a set of expected images.
- Image binarization: the program uses an adaptive threshold to binarize the video image (figure 3-b). Binary images contain only the important information and can be processed very rapidly.
- The four corners of the located fiducial allow the unambiguous determination of the position and orientation of the fiducial relative to a calibrated camera. Furthermore, in order to estimate the location of a moving camera in the world coordinate system, fiducials are placed in the fixed physical environment.
- Connected-region analysis: the system looks for connected regions of black pixels (figure 3-c) and selects only the quadrilateral ones. These regions become candidates for the square marker. For each candidate found, the system segregates the contour chains (figure 3-d) into the four sides of the proposed marker and fits a straight line to each. Finally, the coordinates of the four corners are found by intersecting these lines (figure 3-e) and are stored for the next processes.
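The adaptive-threshold binarization step can be illustrated with a minimal NumPy sketch. The windowed-mean scheme below is an assumption for illustration; the actual system may use a different adaptive rule.

```python
import numpy as np

def binarize(gray, win=3, offset=0):
    """Adaptive threshold: a pixel is white (1) if it is brighter than
    the mean of its local window minus an offset, else black (0)."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=np.uint8)
    r = win // 2
    for y in range(h):
        for x in range(w):
            block = gray[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            out[y, x] = 1 if gray[y, x] > block.mean() - offset else 0
    return out

gray = np.full((5, 5), 200)   # bright background
gray[1:4, 1:4] = 50           # dark marker region
print(binarize(gray)[2, 2])   # 0 (marker pixel classified as black)
```

A local (rather than global) threshold keeps marker detection robust under uneven plant lighting, which is the motivation for the adaptive scheme.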

Algorithm for marker-based tracking
- Marker extraction: the portions of the image that contain the visual code markers are detected and isolated from the rest of the image.
- Marker characterization: the parameters that characterize the stance of the visual code marker within the original image are extracted at this stage. These parameters are the rotation angle of the code marker and the position of its four corners.
- Marker analysis: given the rotation angle and the location of the four corners, the location of the code marker in the original image can be found and the binary data within can be extracted.

Position and Pose Estimation of Markers

2D Geometrical Transformations
- Translate
- Rotate
- Scale

Transformations
Translate: P' = P + T
Scale: P' = S P
Rotate: P' = R P
A translation moves the point P(x, y) to P'(x', y') by offsets dx and dy: x' = x + dx, y' = y + dy. In matrix form, if we define the translation vector T = (dx, dy), then P' = P + T.
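The three basic 2D transformations above can be written directly in NumPy, as a small sketch: translation is additive, while scaling and rotation are matrix products.

```python
import numpy as np

def translate(p, dx, dy):
    """P' = P + T, with T = (dx, dy)."""
    return p + np.array([dx, dy])

def scale(p, sx, sy):
    """P' = S P, with S = diag(sx, sy)."""
    S = np.array([[sx, 0.0], [0.0, sy]])
    return S @ p

def rotate(p, theta):
    """P' = R P, rotation by angle theta about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ p

p = np.array([1.0, 0.0])
print(translate(p, 2, 3))    # [3. 3.]
print(scale(p, 2, 2))        # [2. 0.]
print(rotate(p, np.pi / 2))  # approximately [0. 1.]
```

Note that translation is not a matrix product here, which is exactly the asymmetry that homogeneous coordinates (next slide) remove.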

Homogeneous Coordinates
For given 2D coordinates (x, y), a third dimension is introduced: [x, y, 1].
In general, a homogeneous coordinate for a 2D point has the form [x, y, W].
Two homogeneous coordinates [x, y, W] and [x', y', W'] are said to be the same (or equivalent) if x = kx', y = ky', W = kW' for some k ≠ 0.
Example: [4, 6, 12] = [2, 3, 6], where k = 2.
Therefore any [x, y, W] can be normalised by dividing each element by W: [x/W, y/W, 1].
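The normalisation and equivalence rules above can be checked with a short NumPy sketch:

```python
import numpy as np

def normalize(h):
    """Normalise homogeneous [x, y, W] to [x/W, y/W, 1]."""
    x, y, w = h
    assert w != 0
    return np.array([x / w, y / w, 1.0])

def equivalent(h1, h2):
    """Two homogeneous coordinates are equivalent if one is a
    nonzero scalar multiple of the other, i.e. they normalise
    to the same point."""
    return np.allclose(normalize(np.asarray(h1, float)),
                       normalize(np.asarray(h2, float)))

print(equivalent([2, 3, 6], [4, 6, 12]))       # True (k = 2)
print(normalize(np.array([4.0, 6.0, 12.0])))   # [0.333... 0.5 1.]
```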

Homogeneous Transformations
Now redefine the translation using homogeneous coordinates:
  [x']   [1 0 dx] [x]
  [y'] = [0 1 dy] [y]
  [1 ]   [0 0  1] [1]
Similarly, scaling and rotation become matrix products:
Scaling: P' = S · P
Rotation: P' = R · P
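With homogeneous coordinates all three transformations are 3x3 matrix products, so they compose by matrix multiplication. A minimal NumPy sketch:

```python
import numpy as np

def T(dx, dy):
    """Homogeneous 2D translation matrix."""
    return np.array([[1, 0, dx], [0, 1, dy], [0, 0, 1]], float)

def S(sx, sy):
    """Homogeneous 2D scaling matrix."""
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)

def R(theta):
    """Homogeneous 2D rotation matrix (about the origin)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

p = np.array([1.0, 0.0, 1.0])   # point (1, 0) in homogeneous form
M = T(5, 0) @ R(np.pi / 2)      # rotate 90 degrees, then translate
print(M @ p)                    # approximately [5. 1. 1.]
```

Composing transforms into a single matrix M is what makes homogeneous coordinates convenient for camera-pose chains like the marker-to-camera transformation on the next slides.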

Estimation of Transformation Matrix
Square markers of known size are used as the base of the coordinate frame. The transformation matrix T_m from these marker coordinates to the camera coordinates, represented in the equation, is estimated by image analysis.
- V (3x3) is the rotation component of the transformation matrix.
- From the four vertex coordinates of the marker in the marker coordinate frame and the corresponding coordinates in the camera screen coordinate frame, eight equations including the translation components Wx, Wy, Wz are generated, and the values of these translation components can be obtained from these equations.

Tracking Method Employing Inertial Sensors
- The orientation of the worker is measured by gyro sensors.
- The acceleration of gravity is eliminated based on the measured orientation, and the position variation of the worker is measured by acceleration sensors.
- The drift errors of the gyro sensor and acceleration sensor come from fluctuations in environmental factors such as temperature.
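The pipeline above can be sketched in 2D for illustration (the real system works in 3D; the update scheme here is a simple assumed dead-reckoning loop): integrate the gyro rate to get orientation, rotate the measured acceleration into the world frame, remove gravity, then integrate twice for position.

```python
import numpy as np

G = 9.81    # gravitational acceleration [m/s^2]
dt = 0.01   # sample interval [s]

def dead_reckon(gyro_rates, accels_body):
    """2D inertial dead reckoning: returns final (orientation, position)."""
    theta, vel, pos = 0.0, np.zeros(2), np.zeros(2)
    for omega, a_body in zip(gyro_rates, accels_body):
        theta += omega * dt                        # orientation from gyro
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])            # body -> world rotation
        a_world = R @ a_body - np.array([0.0, G])  # eliminate gravity
        vel += a_world * dt                        # integrate acceleration
        pos += vel * dt                            # integrate velocity
    return theta, pos

# A level, stationary sensor measures only the gravity reaction [0, G]:
theta, pos = dead_reckon([0.0] * 100, [np.array([0.0, G])] * 100)
print(theta, pos)   # orientation and position stay at zero
```

The double integration is why the drift errors mentioned above are so damaging: a constant bias in acceleration grows quadratically in position.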

Flow chart of Hybrid Tracking
- The position and orientation are measured in the hybrid method by inertial sensors and artificial markers.
- If the vision sensor can recognize at least one marker, the output from the artificial marker method is used, because the marker method is more accurate and reliable.
- In the case of the natural feature method, three-dimensional positions are calculated.
- In case neither artificial markers nor natural features can be recognized, the output from the inertial sensors is used.

Evaluation of the tracking method employing inertial sensors
The authors conducted two experiments in order to evaluate the effect of the proposed drift cancel method.
Gyro sensors: gyro sensors were mounted symmetrically on a tripod, and the tripod was rotated by hand. The output from each sensor was recorded separately by an oscilloscope for 5 seconds at 10 ms intervals; the sensors were rotated from 1.6 seconds to 3.4 seconds. The figure shows the output from a gyro sensor. During the periods of 0 – 1.6 seconds and 3.4 – 5.0 seconds, the sensors were in a steady state, so their output should have been constant. However, because of the drift error, the output decreased as time passed.

Evaluation of the tracking method employing inertial sensors
Acceleration sensors: two acceleration sensors (C and D) were packed symmetrically into a sensor unit, and the unit was moved along a straight gauge by hand. The output from the sensors was recorded separately with a digital oscilloscope for 2 seconds at 10 ms intervals; the sensors traveled from 0.3 seconds to 1.2 seconds. In this experiment, it was confirmed that the proposed drift cancel method can cancel the sensor drift.
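One way the symmetric mounting can cancel drift is illustrated by the toy NumPy sketch below. It assumes (the slides do not spell this out) that the two sensors see the true signal with opposite signs while the drift is a common-mode term, so half the difference of their outputs recovers the drift-free signal.

```python
import numpy as np

t = np.linspace(0, 2, 200)           # 2-second record
signal = np.sin(2 * np.pi * t)       # true motion signal
drift = 0.5 * t                      # slow common-mode drift

sensor_c = signal + drift            # sensor C: +signal plus drift
sensor_d = -signal + drift           # sensor D (mounted opposite): -signal plus drift
recovered = (sensor_c - sensor_d) / 2  # drift cancels in the difference

print(np.max(np.abs(recovered - signal)))  # 0.0 (drift fully removed)
```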

Conclusions
In this study, the existing tracking methods employing a vision sensor and inertial sensors were improved and combined into one tracking method, and experiments were conducted to evaluate each improved method.
Concerning the natural feature method, it was confirmed that the position and orientation of the camera can be calculated well using 6 natural features, and that the new method can select reliable natural features. However, the number of natural features that the proposed method can select is too small, so it cannot be applied to the maintenance support system yet.
The artificial marker method is rather accurate and stable and has been applied in several applications. It is, however, necessary to paste the artificial markers in advance, and if the work area is large, a huge number of artificial markers have to be pasted.
Concerning the inertial sensor method, the proposed drift cancel method worked well to improve the accuracy of the inertial sensors.

References
1. Hirokazu Kato and Mark Billinghurst, "Marker Tracking and HMD Calibration for a Video-based Augmented Reality Conferencing System", Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR '99).
2. Ahmet Altay and Emre Oto, "An Algorithm for Visual Code Marker Extraction and Processing".
3. Fakhr-eddine Ababsa and Malik Mallem, "Robust Camera Pose Estimation Using 2D Fiducials Tracking for Real-Time Augmented Reality Systems", Proceedings of the 2004 ACM SIGGRAPH International Conference on Virtual Reality Continuum and its Applications in Industry.
4. Hirotake Ishii, Koji Matsui, Misa Kawauchi, Hiroshi Shimoda and Hidekazu Yoshikawa, "Development of an Augmented Reality System for Plant Maintenance Support".