Combining Human and Machine Capabilities for Improved Accuracy and Speed in Visual Recognition Tasks. Research Experiment Design Sprint: IVS Flower Recognition System.

Presentation transcript:

Combining Human and Machine Capabilities for Improved Accuracy and Speed in Visual Recognition Tasks. Research Experiment Design Sprint: IVS Flower Recognition System. Amir Schur.

Problem Statement
Investigate human-computer interaction in applications of pattern recognition where:
– higher accuracy is required than is currently achievable by automated systems, and
– there is enough time for a limited amount of human interaction.
Measure the degree of accuracy and speed improvement attributable to the human interaction.

Background
Most existing automated visual recognition systems yield results far from production level.
Fully manual identification is too cumbersome and time consuming.
We need to find a middle ground where speed is greatly increased and accuracy is still acceptable.

Interactive Visual System (IVS) for Flower Identification
Originally developed at RPI as a desktop PC system called "Caviar".
Later developed into a handheld application, called IVS, in a joint RPI/Pace project.

Overview of Interactive Tasks
The IVS flower identification system involves three task stages:
1. Object segmentation
2. Feature extraction (numerous tasks)
3. Matching/classifying
Each task can be done by a human only, automated only, or by a combination of human and computer; a sketch of this delegation follows.
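The following is a minimal Java sketch of that delegation idea, assuming hypothetical names (Agent, Segmenter, FeatureExtractor, Matcher, IdentificationPipeline); it illustrates the architecture only and is not the original IVS code.

```java
// Hypothetical sketch of the IVS task pipeline: each stage (segmentation,
// feature extraction, matching) can be delegated to the human, the machine,
// or a combination of both. All names are illustrative assumptions.
import java.awt.image.BufferedImage;
import java.util.List;

enum Agent { HUMAN, MACHINE, COMBINED }

interface Segmenter        { BufferedImage isolateFlower(BufferedImage photo); }
interface FeatureExtractor { double[] extractFeatures(BufferedImage flower); }
interface Matcher          { List<String> rankSpecies(double[] features, int topK); }

class IdentificationPipeline {
    private final Segmenter segmenter;
    private final FeatureExtractor extractor;
    private final Matcher matcher;

    // Each stage is supplied by either a human-driven or an automated implementation.
    IdentificationPipeline(Segmenter s, FeatureExtractor f, Matcher m) {
        this.segmenter = s;
        this.extractor = f;
        this.matcher = m;
    }

    /** Runs the three stages in order and returns the top-ranked species names. */
    List<String> identify(BufferedImage photo, int topK) {
        BufferedImage flower = segmenter.isolateFlower(photo);
        double[] features = extractor.extractFeatures(flower);
        return matcher.rankSpecies(features, topK);
    }
}
```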

Detail of IVS Interactive Tasks
IVS for flower identification has six system activities:
1. Determining the dominant color of the flower petal
2. Determining the secondary (less dominant) color of the flower petal
3. Determining the color of the stamen or center portion of the flower
4. Counting the number of petals
5. Getting the horizontal and vertical bounds of the target flower
6. Getting the horizontal and vertical bounds of a flower petal
The original software was developed in Java for desktop and Palm Pilot.
Matching uses k-Nearest Neighbor (so accurate training data is required); color determination uses the RGB color schema. A sketch of such a classifier follows.
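As an illustration only (not the original Caviar/IVS source), a k-Nearest Neighbor classifier over hand-entered features of this kind might look like the sketch below; the feature layout and the example values are assumptions.

```java
// Hypothetical k-NN sketch over IVS-style flower features (RGB colors,
// petal count, bounding-box proportions). Not the original Caviar/IVS code.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class FlowerKnn {
    /** One training example: a feature vector plus its species label. */
    record Example(String species, double[] features) {}

    private final List<Example> trainingSet = new ArrayList<>();

    void addTrainingExample(String species, double[] features) {
        trainingSet.add(new Example(species, features));
    }

    /** Returns the species of the k nearest training photos, closest first. */
    List<String> classify(double[] query, int k) {
        return trainingSet.stream()
                .sorted(Comparator.comparingDouble((Example e) -> distance(e.features(), query)))
                .limit(k)
                .map(Example::species)
                .toList();
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;              // squared Euclidean distance
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        FlowerKnn knn = new FlowerKnn();
        // Assumed feature layout: dominant R,G,B; secondary R,G,B; stamen R,G,B;
        // petal count; flower width/height ratio; petal width/height ratio.
        knn.addTrainingExample("Daisy",     new double[]{240, 240, 235, 250, 220, 60, 255, 200, 0, 21, 1.0, 2.5});
        knn.addTrainingExample("Buttercup", new double[]{250, 215, 30, 245, 200, 20, 230, 180, 10, 5, 1.1, 1.3});
        double[] unknown =                  new double[]{235, 238, 230, 248, 215, 70, 250, 195, 5, 20, 1.0, 2.4};
        System.out.println(knn.classify(unknown, 1));   // expected: [Daisy]
    }
}
```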

Design Sprint: Overview
Three separate experiments:
1. Human only
2. Machine only (no human subjects necessary)
3. Human and machine combined
Currently IVS has:
– 75 training photos (three photos each of 25 flowers)
– 20 test photos
Half the subjects will start with human-only identification, followed by machine + human, using the existing 20 test images.
The other half will proceed in the reverse order: human + machine first, then human only. This group will also collect new flower images.
(Balanced experimental design: no subjects get an unfair advantage.)

Design Sprint: Human-Only Scenario
Capture time and accuracy.
Ideas:
– Use the IVS test photos.
– Use a good flower guide to identify the photos.
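Time and accuracy could be captured per trial with a small harness like the sketch below (the class and method names are assumptions); the same harness would serve the machine-only and combined scenarios.

```java
// Hypothetical per-trial logger for capturing time and accuracy in any of
// the three scenarios (human only, machine only, human + machine).
import java.util.ArrayList;
import java.util.List;

class TrialLog {
    private record Trial(String photoId, boolean correct, long elapsedMillis) {}

    private final List<Trial> trials = new ArrayList<>();

    /** Call once per test photo with the outcome and the measured duration. */
    void record(String photoId, boolean correct, long elapsedMillis) {
        trials.add(new Trial(photoId, correct, elapsedMillis));
    }

    double accuracy() {
        return trials.stream().filter(Trial::correct).count() / (double) trials.size();
    }

    double meanTimeSeconds() {
        return trials.stream().mapToLong(Trial::elapsedMillis).average().orElse(0) / 1000.0;
    }

    public static void main(String[] args) {
        TrialLog log = new TrialLog();
        long start = System.nanoTime();
        // ... subject identifies hypothetical test photo "IVS_test_01" here ...
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        log.record("IVS_test_01", true, elapsedMs);
        System.out.printf("accuracy=%.2f, mean time=%.1fs%n", log.accuracy(), log.meanTimeSeconds());
    }
}
```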

Design Sprint: Machine-Only Scenario
Capture time and accuracy.
Ideas:
– First iteration: use the existing 20 test photos.
– Record the top 10 choices.
– Acquire more correctly identified digital images to enlarge the training data.

Design Sprint: Human + Machine
Capture the time and accuracy of combined human + machine activity.
Ideas:
– Segregate each available automated task: run everything automated except for one task, which is done by human input (see the sketch below).
– Segregate groups of tasks (e.g., color determination, background segmentation): perform one group with the computer and another with the human.
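One way to enumerate the "all automated except one" configurations is sketched below; the task list mirrors the six IVS activities, and the class and enum names are hypothetical.

```java
// Hypothetical enumeration of human + machine configurations in which every
// IVS task is automated except one, which is handed to the human subject.
import java.util.Arrays;
import java.util.List;

class AblationPlan {
    enum Task {
        DOMINANT_PETAL_COLOR, SECONDARY_PETAL_COLOR, STAMEN_COLOR,
        PETAL_COUNT, FLOWER_BOUNDS, PETAL_BOUNDS
    }

    public static void main(String[] args) {
        for (Task humanTask : Task.values()) {
            // All remaining tasks stay automated in this configuration.
            List<Task> automated = Arrays.stream(Task.values())
                    .filter(t -> t != humanTask)
                    .toList();
            System.out.println("Human does " + humanTask + "; machine does " + automated);
            // ... run the pipeline with this split and capture time and accuracy ...
        }
    }
}
```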

Anticipated Experimental Outcomes
[Chart: time vs. accuracy in visual recognition tasks, comparing the human-only, machine-only, and machine + human conditions.]

Analysis of Results
The time required by humans will dictate the need for machine assistance: how much time is saved by using the human + machine tool?
The accuracy level of human + machine will dictate the need for such a tool: can it achieve the same level of accuracy as human only?
What is the maximum capability of machine only in terms of time and accuracy?

Possible Extensions
Many functions can be extended:
– Utilize different automated methods for color recognition (HSB, Lab, YCbCr, etc.); see the sketch below.
– Utilize automated texture-based methods (Gabor and GIST).
– Utilize automated contour-based pattern recognition (distance vs. angle, distance projections, min/max ratio, area ratio, automated petal counting).
– More seamless integration of human and machine input. Currently it is one or the other: the user cannot update the machine's cropping and outlining result, and cannot update the machine's color determination.
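For the color-space extension, 8-bit RGB values can be converted with standard formulas; the sketch below uses java.awt.Color.RGBtoHSB for HSB and the BT.601 equations for YCbCr (the helper class and method names are assumptions).

```java
// Sketch of converting an RGB petal color into alternative color spaces that
// an extended IVS matcher could use. Only HSB and YCbCr are shown here.
import java.awt.Color;

class ColorSpaces {
    /** HSB via the standard library: hue, saturation, brightness in [0, 1]. */
    static float[] toHsb(int r, int g, int b) {
        return Color.RGBtoHSB(r, g, b, null);
    }

    /** YCbCr (BT.601, full range) computed from 8-bit RGB. */
    static double[] toYCbCr(int r, int g, int b) {
        double y  =       0.299    * r + 0.587    * g + 0.114    * b;
        double cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b;
        double cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b;
        return new double[]{y, cb, cr};
    }

    public static void main(String[] args) {
        float[] hsb  = toHsb(250, 215, 30);     // a yellow-ish petal color
        double[] ycc = toYCbCr(250, 215, 30);
        System.out.printf("HSB:   %.2f %.2f %.2f%n", hsb[0], hsb[1], hsb[2]);
        System.out.printf("YCbCr: %.0f %.0f %.0f%n", ycc[0], ycc[1], ycc[2]);
    }
}
```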