Construction of Ego-Model
Prepared by: Swati
Guided by: Dr. Amitabha Mukerjee
CS365: Artificial Intelligence Project



Aim
Given a two-dimensional planar robot arm with a camera at its tip and touch sensors on its skin (using synthetic data), we have to:
 Vision: generate the images seen by the camera.
 Sensor: display the part of the robot that touches an obstacle.
 Manifold path planning.
We assume the world is a two-dimensional plane.
Fig. 1: Robotic arm with camera at tip. Fig. 2: 2-D image of the box containing the robot arm, with each wall in a different colour and the corners in black.

Vision  Algorithm is straight forward  Vision of the camera is 180 degrees Algorithm Throw the rays from the camera towards the walls and display the image seen by it. Each points on the walls are assigned with colors which are stored in an array. Fig.3 Fig.4 Array[0,…,90] displaying images seen by robot Bunch of vertical images of blue color Bunch of vertical images of green color Corner of the box

Sensor  Using sensors on the skin we can show whether obstacle touches the robot or not and glow that part which touches the obstacle. Algorithm  Overlap the robot arm images with the obstacle call it “A”.  Apply following to get : if (robot arm image = = (A - obstacle image) print “Obstacle doesn’t hit the robot” else display the part hit by obstacle Fig.5 Fig.6

Manifold Motion Planning
 The scree plot for the given robot-arm images will look like Fig. 7.
 Path planning is also possible by removing from the graph all nodes that hit the obstacle; the resulting plot will look like Fig. 8.
Fig. 7, Fig. 8
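Pruning the roadmap can be sketched as follows. The graph, node names, and the use of breadth-first search are illustrative assumptions (the slides say only that colliding nodes are removed before planning):

```python
from collections import deque

def prune_and_plan(adj, colliding, start, goal):
    """Drop colliding nodes from the roadmap, then BFS for a path.

    adj: dict mapping each node to a list of neighbour nodes.
    colliding: set of nodes whose arm configuration hits the obstacle.
    Returns a list of nodes from start to goal, or None if disconnected.
    """
    # Keep only collision-free nodes and edges.
    free = {n: [m for m in nbrs if m not in colliding]
            for n, nbrs in adj.items() if n not in colliding}
    if start not in free or goal not in free:
        return None
    prev = {start: None}
    queue = deque([start])
    while queue:
        n = queue.popleft()
        if n == goal:                    # walk back to recover the path
            path = []
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for m in free[n]:
            if m not in prev:
                prev[m] = n
                queue.append(m)
    return None                          # goal unreachable after pruning
```

For example, on a four-node ring a–b–d and a–c–d, removing node b as colliding forces the planner onto the a–c–d detour.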

References
 D. Tsetserukou, N. Kawakami, S. Tachi, "Obstacle Avoidance Control of Humanoid Robot Arm through Tactile Interaction".
 R. Horaud, R. Mohr, F. Dornaika, B. Boufama, "The Advantages of Mounting a Camera on a Robot Arm".
 A. Briggs, Y. Li, D. Scharstein, M. Wilder, "Robot Navigation Using 1-D Panoramic Images", in Proc. 2006 IEEE Intl. Conference on Robotics and Automation (ICRA 2006), Orlando, FL, May 2006.
 Robotic Arm Camera (RAC) built by the University of Arizona and the Max Planck Institute, Germany.