WEEK 7: WEB-ASSISTED OBJECT DETECTION ALEJANDRO TORROELLA & AMIR R. ZAMIR
GEOMETRY METHOD RESULTS
- Made the detection threshold extremely low for each class. Needed to make sure that the true positives were detected.
- Sifted through the many resulting bounding boxes using the GIS arrangement to get rid of obviously spatially incorrect detections. Ex: a trash can was detected on the left side of the image when, according to the GIS data, there aren't any there.
- Got rid of bounding boxes that were smaller or larger than a certain percentage of the image.
- Had to set the GIS data manually for each image; the crude methods tried for obtaining it didn't give good results.
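The size-based sifting step described above can be sketched as a simple filter on box area relative to image area. This is a hypothetical illustration: the function name, box format, and the 1%/40% bounds are assumptions, not the values used in the project.

```python
# Illustrative sketch of the size-based sifting step: discard bounding
# boxes whose area falls outside a fixed fraction range of the image area.
# The bounds (min_frac, max_frac) are assumed values for illustration.

def sift_boxes_by_size(boxes, img_w, img_h, min_frac=0.01, max_frac=0.40):
    """Keep boxes whose area is within [min_frac, max_frac] of the image area.

    Each box is (x, y, w, h) in pixels.
    """
    img_area = img_w * img_h
    kept = []
    for (x, y, w, h) in boxes:
        frac = (w * h) / img_area
        if min_frac <= frac <= max_frac:
            kept.append((x, y, w, h))
    return kept
```

With a very low detector threshold, a filter like this prunes many of the spurious boxes (tiny speckle detections and near-full-image boxes) before the GIS arrangement is consulted.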

GEOMETRY METHOD RESULTS: IMAGE 1
GIS fusion with three classes: fire hydrants, street lights, and traffic lights.
- Got 1/4 street lights. The other street lights didn't come up in the detectors at all: too small in the image, or too occluded.
- Got 2/3 traffic signals.
- Got 0/1 fire hydrants. The detector didn't find the true positive in the image.

Before GIS fusion

After GIS fusion

GEOMETRY METHOD RESULTS: IMAGE 2
GIS fusion with two classes: trash cans and street lights.
- Got 2/3 street lights. The last street light didn't come up in the detectors at all: too small in the image and/or the threshold wasn't low enough.
- Got 1/2 trash cans. The other trash can didn't come up in the detectors at all: too small in the image and/or the threshold wasn't low enough.

Before GIS fusion

After GIS fusion

GEOMETRY METHOD RESULTS: IMAGE 3
GIS fusion with two classes: traffic signals and street lights.
- Got 2/3 street lights.
- Got 3/6 traffic signals. The remaining traffic signals were so close together that the detector was thrown off.

Before GIS fusion

After GIS fusion

GEOMETRY METHOD: CONCLUSIONS
- Sifting the bounding boxes using the GIS data gave better results than sifting them only by their size relative to the image. Lowering the threshold helped a lot too.
- The detectors aren't finding all the true positives, which makes the GIS fusion fail: the fusion can't recover detections the detectors already missed.
- Need to implement some sort of vertical constraint to further improve results.
- Had to set the GIS data manually to get the best results.
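The proposed vertical constraint could take the form of a per-class band on where a box's center is allowed to sit in the image. This is only a sketch of the idea: the class names, band values, and function names below are assumptions for illustration, not part of the implemented system.

```python
# Illustrative sketch of a y-direction (vertical) constraint: each class
# is only allowed in a vertical band of the image. Band values below are
# assumed for illustration (0.0 = top of image, 1.0 = bottom).

VERTICAL_BANDS = {                  # (min_frac, max_frac) for the box center
    "street_light":   (0.0, 0.5),  # lights hang in the upper half
    "traffic_signal": (0.0, 0.6),
    "trash_can":      (0.5, 1.0),  # trash cans sit near the ground
    "fire_hydrant":   (0.5, 1.0),
}

def passes_vertical_constraint(cls, box, img_h):
    """box is (x, y, w, h) in pixels; test the box center's vertical position."""
    lo, hi = VERTICAL_BANDS[cls]
    center_y = (box[1] + box[3] / 2.0) / img_h
    return lo <= center_y <= hi
```

A filter like this would complement the left/right reasoning from the GIS arrangement by also rejecting detections at implausible heights, e.g. a street light near the bottom of the frame.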

GOALS FOR NEXT WEEK
- Look into more automatic methods for obtaining:
  - the field of view
  - the range of visibility
  - the orientation of the camera
- Look into some sort of y-direction constraint.
- Look into using early fusion as well as late fusion.
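One way to put the camera orientation and field of view to use is to check whether a GIS object should be visible at all: compute the compass bearing from the camera to the object and test it against the camera heading and horizontal FOV. The sketch below is a hypothetical illustration using a flat-earth approximation (adequate at street-level distances); the function names and the 60-degree FOV default are assumptions.

```python
# Hypothetical sketch: is a GIS object inside the camera's horizontal
# field of view? Flat-earth approximation; the 60-degree FOV is assumed.
import math

def bearing_deg(cam_lat, cam_lon, obj_lat, obj_lon):
    """Approximate compass bearing (degrees, 0 = north) from camera to object."""
    dlat = obj_lat - cam_lat
    # Scale longitude difference by cos(latitude) so both axes are in
    # comparable units before taking the angle.
    dlon = (obj_lon - cam_lon) * math.cos(math.radians(cam_lat))
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def in_fov(cam_heading_deg, obj_bearing_deg, fov_deg=60.0):
    """True if the object's bearing lies within the camera's horizontal FOV."""
    # Wrap the angular difference into [-180, 180) before comparing.
    diff = (obj_bearing_deg - cam_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0
```

Given the camera heading, a check like this would automate part of what is currently done by setting the GIS data manually: only objects whose bearing falls inside the FOV need to be matched against detections.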

THANK YOU FIN.