Motion Detection and Processing Performance Analysis
Thomas Eggers, Mark Rosenberg
Department of Electrical and Systems Engineering, Washington University in St. Louis

Abstract

This project involved designing motion detection software. The detection was implemented with LabVIEW's NI Vision and Motion software, and a FireWire camera fed the video to the computer. The software can read either a live camera feed or any AVI file. It detects motion, overlays a red circle at the center of the region of detected motion, and saves the motion footage to the hard drive. The camera operates at 60 fps; while no motion is detected, the software processes 45 fps, and when motion is detected it slows to writing roughly 30 frames per second. A GUI shows the live feed and displays where the files will be saved. Detection is also customizable: the user can select the threshold values that determine when motion is detected, depending on the specific application.

Methods

1. The VI has two modes, AVI and CAM, which are selected with a Boolean switch.
2. First, a path is designated for the processed video output.
3. The Vision Acquisition subVI grabs a frame from either the AVI or the CAM on each loop cycle.
4. An option allows the user to write the CAM feed to an unprocessed AVI for comparison with the processed AVI. This pair of unprocessed/processed files can also be compared with the corresponding pair from AVI input.
5. The frame is copied and converted to grayscale (U8) so that differences between frames reduce to a single parameter.
6. The previous frame is copied and compared to the current frame; the pixels of the resulting image have grayscale values that reflect the difference between the current and previous frames.
7. The difference image is filtered by a threshold, and each pixel is represented by a black or red bit.
8. The centroid of all pixels above the threshold is calculated, along with the coordinates of the circle that will be overlaid onto the output images.
9. The number of pixels showing motion above the threshold is calculated and stored.
10. If the last 8 frames average at least 150 changed pixels, a Boolean is toggled to process the motion.
11. The original frame is cleared of any overlays, and a timestamp is overlaid at constant coordinates.
12. The circle is overlaid at the centroid of the motion.
13. The processed frame is written to an AVI output file.
14. After the input AVI file has been fully processed, or the CAM is stopped, the VI ends.
15. The frame rate of the output AVI is set to match the input for AVI mode and to 20 fps for CAM mode.
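The detection itself was built as a LabVIEW block diagram, so there is no textual source to quote; as a rough illustration only, the sketch below re-creates the same frame-differencing pipeline (Methods steps 1-15) in Python with OpenCV. It is an assumption of this write-up rather than the authors' implementation: the function name process_stream, the MJPG codec, the default difference threshold, and the circle radius are illustrative, while the grayscale (U8) conversion, the 8-frame / 150-changed-pixel trigger, the timestamp overlay, and the red centroid circle follow the list above.

```python
# Hypothetical re-creation of the detection pipeline (Methods steps 1-15)
# using OpenCV instead of LabVIEW's NI Vision; parameter values and output
# settings below are assumptions, not the authors' measured configuration.
from collections import deque
from datetime import datetime

import cv2
import numpy as np

def process_stream(source, out_path, diff_threshold=30,
                   trigger_pixels=150, window=8, cam_fps=20.0):
    cap = cv2.VideoCapture(source)               # steps 1/3: CAM index or AVI path
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read from source")

    h, w = frame.shape[:2]
    in_fps = cap.get(cv2.CAP_PROP_FPS)
    fps = in_fps if in_fps > 0 else cam_fps      # step 15: match AVI fps, else 20
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"MJPG"),
                             fps, (w, h))        # steps 2/13: processed output file

    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # step 5: grayscale (U8)
    history = deque(maxlen=window)                   # step 10: last 8 pixel counts

    while True:
        ok, frame = cap.read()
        if not ok:                                   # step 14: end of AVI / CAM stop
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)               # step 6: difference image
        _, mask = cv2.threshold(diff, diff_threshold, 255,
                                cv2.THRESH_BINARY)   # step 7: binary motion mask
        changed = int(np.count_nonzero(mask))        # step 9: changed-pixel count
        history.append(changed)

        # step 11: timestamp overlaid at constant coordinates
        cv2.putText(frame, datetime.now().isoformat(timespec="seconds"),
                    (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (255, 255, 255), 1)

        # step 10: trigger if the last `window` frames average >= trigger_pixels
        if len(history) == window and sum(history) / window >= trigger_pixels:
            m = cv2.moments(mask, binaryImage=True)  # step 8: centroid of motion
            if m["m00"] > 0:
                cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
                cv2.circle(frame, (cx, cy), 20, (0, 0, 255), 2)  # step 12: red circle

        writer.write(frame)                          # step 13: write processed frame
        prev = gray

    cap.release()
    writer.release()
```

Calling process_stream(0, "motion.avi") would run the sketch on the first attached camera, while process_stream("input.avi", "out.avi") would process a file, mirroring the VI's CAM and AVI modes.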
Histograms

16. The VI is separated with a sequence structure.
17. Timers flank each region whose performance is being analyzed.
18. The difference in time between the two timers is calculated.
19. If the difference is greater than the current maximum or less than the current minimum, it replaces that value.
20. On each loop cycle, the differences are added to an array.
21. The array is represented with a histogram.
22. Averages and standard deviations are calculated from the array.

[Histograms of per-frame processing time for each component (Acquisition, Subtract, Threshold, Centroid, Quantify, Write AVI), shown separately for AVI and CAM input, with and without motion.]

Results

[Table: average time (ms) and standard deviation for each component (Acquisition, Subtract, Threshold, Centroid, Quantify, Write AVI, and their Sum) under four conditions: AVI with motion, AVI without motion, CAM with motion, CAM without motion.]

This table shows a summary of the average times for each part of the analysis.

Acknowledgements

We thank Dr. Robert Morley, Ed Richter, Kristen Heck, National Instruments, the Department of Electrical and Systems Engineering, and Washington University in St. Louis.
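The timing instrumentation in the Histograms section is likewise a LabVIEW sequence structure with flanking timers; the short sketch below is a hedged Python approximation of the same idea (steps 16-22): timers flank a region of interest, each elapsed time updates running minimum and maximum values and is appended to an array, and a histogram plus average and standard deviation are computed from that array. The helper names timed and report and the dummy workload are hypothetical, not part of the original VI.

```python
# Hypothetical timing harness mirroring Histograms steps 16-22: timers flank
# each analyzed region, min/max are tracked, per-frame differences accumulate
# in an array, and a histogram, average, and standard deviation are computed.
import time
import numpy as np

def timed(fn, samples, extremes, *args, **kwargs):
    """Run fn, record elapsed ms (steps 17-18, 20), update min/max (step 19)."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    dt_ms = (time.perf_counter() - t0) * 1000.0
    samples.append(dt_ms)
    extremes[0] = min(extremes[0], dt_ms)
    extremes[1] = max(extremes[1], dt_ms)
    return result

def report(name, samples, bins=20):
    """Steps 21-22: histogram of the array plus average and standard deviation."""
    arr = np.asarray(samples)
    counts, edges = np.histogram(arr, bins=bins)
    print(f"{name}: avg {arr.mean():.2f} ms, std dev {arr.std(ddof=1):.2f} ms")
    return counts, edges

if __name__ == "__main__":
    # Toy demonstration on a dummy workload standing in for one pipeline stage.
    samples, extremes = [], [float("inf"), 0.0]
    for _ in range(200):
        timed(sum, samples, extremes, range(10000))
    report("Dummy stage", samples)
    print(f"min {extremes[0]:.3f} ms, max {extremes[1]:.3f} ms")
```

In a full port, one samples/extremes pair would be kept per component (Acquisition, Subtract, Threshold, Centroid, Quantify, Write AVI) and per input/motion condition, which is how the averages and standard deviations in the Results table were gathered.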