
Camera Phone Color Appearance Utility: Finding a Way to Identify Color
Phillip Lachman, Robert Prakash, Elston Tochip

Outline
- Motivation
- Goal
- Methodology
  - Image Scaling via Edge Detection
  - Color Identification
  - Color Selection & Differentiation
- Results
- Lessons Learned
- Future Work

Motivation
- Phones are becoming the portable digital platform for a variety of imaging applications, e.g. pictures, video, organizers, etc.
- Approximately 10 million blind people within the U.S.
  - 55,200 legally blind children
  - 5.5 million elderly individuals
- Color blind people within the U.S.
  - ~8% of males
  - ~0.4%-2% of females

Goal
- Develop a software application able to:
  1) Receive a camera-quality image
  2) Identify the predominant color region(s) within the image
  3) Estimate the color name for the predominant region
  4) Audibly transmit the predominant color to the user

Locating the Target

General Guidelines and Suggestions
- Use a white card
  - Provides a white reference to baseline lighting conditions
  - Required for computing the color of the target
  - Suggested by fellow classmates and Bob Dougherty
- Detecting the white card: use an edge detection algorithm
  - Many image-processing edge detection methods are available; they identify edges by computing changes in gradient around pixels
  - Chose the Canny edge detection algorithm: fundamentally easy to understand and implement

Finding the Target: Discrimination
- How does the algorithm discriminate the target being photographed? Background clutter and scenery complicate the image.
- Discrimination solution: the white card "scope"
  - Use a rectangular white card with a square target hole that lets the object's color show through
  - Use edge detection image processing to find the white card
  - Find the white card, find the target!
(Diagram labels: target hole, card, background, target color)

Finding the Target: Discrimination (cont.)
- The white card problem: white or light-colored backgrounds cause edge detection problems for the Canny algorithm
(Figure: original image vs. after edge detection; where is the card?)

Finding the Target: Discrimination (cont.)
- The white card problem (cont.): adding a black outline to the card edges and the target hole greatly improves detection
(Figure: original image vs. after edge detection; there's the card!)

Finding the Target: Aiming the Camera
- How does a blind person aim the camera to photograph the target?
  - Photographs may not include the target at all
  - Photographing the target too closely may not allow enough lighting to determine color
- Aiming solution: white card holder
  - A phone attachment holds the white card and attaches to the phone camera
  - Guarantees the white card and target are in the camera's field of view
  - Guarantees the camera is not directly on top of the object, providing ample lighting for color detection
(Diagram labels: camera, white card mount, hinge assembly to allow folding, baseboard)

Finding the Target: Aiming the Camera
- Additional benefits of the card-holding device
  - Fixes the orientation of the card: the card is positioned vertically with its edges parallel to the photo edges, which simplifies edge detection and increases speed
  - Removes excess background scenery: the device maintains a fixed 6-8 inches between camera and white card, so the scene is dominated by the white card and the number of pixels covering the target is maximized

Finding the Target: Examples with and without device

Finding the Target: Edge Detection Algorithm
- Phase 1: Blurring and sharpening edges
  - Preprocess images with Gaussian blurring to eliminate noisy pixels
  - Apply a 3x3 Laplacian kernel to the resulting image; the kernel approximates the second derivative, highlighting changes in intensity
  - (3x3 Laplacian kernel matrix shown on slide)
  - Adding the results of the Gaussian blurring and the Laplacian yields an image with cleaner and more distinct lines
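A minimal Python sketch of this preprocessing step. The exact Laplacian kernel and blur parameters are not reproduced in the transcript, so a standard positive-center 3x3 kernel and an arbitrary sigma are assumed here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, convolve

# A common positive-center 3x3 Laplacian kernel (assumed; the slide's exact
# kernel is not shown in the transcript).
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def blur_and_sharpen(gray, sigma=1.0):
    """Phase 1 sketch: Gaussian blur to suppress noisy pixels, then add the
    Laplacian response back to emphasize intensity changes."""
    blurred = gaussian_filter(gray.astype(float), sigma=sigma)
    laplace = convolve(blurred, LAPLACIAN, mode='nearest')
    return np.clip(blurred + laplace, 0, 255)
```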

Edge Detection: Blurring and Sharpening
- Applying smoothing and the Laplacian removes noise and produces cleaner, more distinct lines
(Figure: original image vs. after smoothing and Laplacian)

Finding the Target: Edge Detection Algorithm
- Phase 2: Apply the Canny edge detection algorithm
  - Step 1: Apply 2-D Gaussian smoothing to the image via convolution (mask size = 20x20, sigma = 5)
  - Step 2: Compute the resulting gradient of the image intensities
  - Step 3: Threshold the norm of the gradient image to isolate edge pixels
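A sketch of the three listed steps in Python. This is a simplified Canny (the full algorithm adds non-maximum suppression and hysteresis thresholding), and the threshold value below is an assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def canny_like_edges(gray, sigma=5.0, thresh=30.0):
    """Sketch of the slide's three steps. The 20x20 mask with sigma = 5 is
    approximated by gaussian_filter's truncated Gaussian."""
    smoothed = gaussian_filter(gray.astype(float), sigma=sigma)   # Step 1
    gx = sobel(smoothed, axis=1)                                  # Step 2
    gy = sobel(smoothed, axis=0)
    grad_norm = np.hypot(gx, gy)
    return grad_norm > thresh                                     # Step 3
```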

Edge Detection: Applying the Canny Edge Detection Algorithm
(Figure: original image vs. after Canny and thresholding)

Finding the Target: Edge Detection Algorithm
- Phase 3: Finding the card edges in the photo
  - Step 1: Recursively search, row by row, from the outer left/right edges of the image toward the center, covering one quarter of the image width from each side
(Diagram labels: target hole, card, left fourth)

Finding the Target: Edge Detection Algorithm
- Phase 3 (cont.):
  - Step 2: Bin the detected edge positions and choose the outer edges from the bins closest to the center; accept a bin as the left/right edge only if it holds at least 10% of the total edge pixels available on that side
  - Step 3: Compute the top and bottom edges by averaging the rows at which the left and right edges begin/end; this gives a rough estimate of the top/bottom of the white card
(Diagram labels: left side, right side, first bin from center exceeding 10%)
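A rough Python sketch of the binning idea for the left/right edges. Only the "first bin from the center holding at least 10% of the edge pixels" rule comes from the slide; the bin count, helper names, and exact scan order are assumptions:

```python
import numpy as np

def find_left_right_edges(edge_mask, n_bins=20, min_frac=0.10):
    """Sketch of Phase 3: bin edge-pixel columns in the outer quarters and
    keep the first bin, counted from the image center, holding >= 10% of
    that side's edge pixels."""
    h, w = edge_mask.shape
    quarter = w // 4

    def first_bin_from_center(cols, from_high_side):
        if cols.size == 0:
            return None
        counts, bins = np.histogram(cols, bins=n_bins)
        order = reversed(range(n_bins)) if from_high_side else range(n_bins)
        for i in order:
            if counts[i] >= min_frac * cols.size:
                # Return the bin boundary nearest the image center.
                return int(bins[i + 1] if from_high_side else bins[i])
        return None

    left_cols  = np.nonzero(edge_mask[:, :quarter])[1]
    right_cols = np.nonzero(edge_mask[:, w - quarter:])[1] + (w - quarter)
    # Left quarter: the center lies to the right, so scan bins high-to-low.
    left  = first_bin_from_center(left_cols, from_high_side=True)
    # Right quarter: the center lies to the left, so scan bins low-to-high.
    right = first_bin_from_center(right_cols, from_high_side=False)
    return left, right
```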

Finding the Target: Target Hole Detection Algorithm
- With the outer white card edges located, proceed to locate the inner target hole edges
- Phase 4: Identifying the inner target hole edges
  - Step 1: Crop the Canny-thresholded image to the dimensions obtained for the outer edges
  - Step 2: Perform a recursive row-by-row, outside-to-inside search until a high threshold is found on both sides
  - Step 3: Bin and compute the left/right edges as before
  - Step 4: Compute the top and bottom edges as before

Finding the Target: Output to Color Detection
- Phase 5: Compute the overall target hole position in the ORIGINAL image
  - Sum the inner and outer edge offsets computed previously
  - Crop the original image to these dimensions and pass it to the color detection module
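A small sketch of the offset bookkeeping in Python; the (top, bottom, left, right) box format is an assumption:

```python
def crop_to_target(original, outer_box, inner_box):
    """Phase 5 sketch: the inner-hole box was found in the cropped coordinate
    frame, so add the outer offsets back before cropping the original image."""
    o_top, _, o_left, _ = outer_box
    i_top, i_bottom, i_left, i_right = inner_box
    top, bottom = o_top + i_top, o_top + i_bottom
    left, right = o_left + i_left, o_left + i_right
    return original[top:bottom, left:right]
```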

Color Detection: Original Idea
- Keep it simple: use the brightest point on the white card as the white point, and normalize R, G, and B separately
- Good results, but with a slight reddish tinge
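A sketch of this white-point normalization, assuming a boolean mask of white-card pixels is already available (the slides do not show how the card pixels are selected):

```python
import numpy as np

def white_point_normalize(image, card_mask):
    """Scale each channel so the brightest white-card pixel maps to 255.
    `card_mask` is an assumed input: a boolean (H, W) mask of card pixels."""
    img = image.astype(float)
    card = img[card_mask]                      # pixels on the white card
    white = card[card.sum(axis=1).argmax()]    # brightest card pixel (R, G, B)
    scaled = img * (255.0 / np.maximum(white, 1e-6))
    return np.clip(scaled, 0, 255).astype(np.uint8)
```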

Color Detection: "Gray World"
- Use the "gray world" assumption: normalize the mean of each RGB channel to 128
- Results are slightly better in low lighting conditions, but less effective under good lighting
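A sketch of the gray-world variant as described, scaling each channel so its mean becomes 128 (the clipping to [0, 255] is an added detail):

```python
import numpy as np

def gray_world_normalize(image, target_mean=128.0):
    """Gray-world sketch: rescale each RGB channel so its mean equals 128."""
    img = image.astype(float)
    means = img.reshape(-1, 3).mean(axis=0)
    scaled = img * (target_mean / np.maximum(means, 1e-6))
    return np.clip(scaled, 0, 255).astype(np.uint8)
```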

Color Identification: Original Idea
- Keep it simple: bin colors using RGB values
- Problem: no clear grouping, and small changes in one channel change the perceived color dramatically
- Attempted solution: identify groups using the maximum of the R, G, and B values; the groups still overlapped
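For illustration only, a tiny sketch of the max-channel grouping that was tried:

```python
import numpy as np

def dominant_channel_group(rgb):
    """Group a pixel by whichever of R, G, B is largest. As the slide notes,
    these coarse groups still overlap for many real colors, which is why the
    project moved on to HSV."""
    return ("red", "green", "blue")[int(np.argmax(rgb))]
```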

Color Identification: CIE L*a*b*
- Convert RGB to CIELAB
- Benefit: device independent
- Problems: the conversion formulas are complicated and processor intensive, and light-source information is required
- Solution: use HSV instead

Color Identification: Use HSV
- Convert RGB to HSV; the HSV (Hue, Saturation, Value) model is a simple transformation from RGB
  - Hue: the color type (red, blue, etc.); ranges from 0 to 360 degrees
  - Saturation: the "vibrancy" of the color; ranges from 0-100%
  - Value: the brightness of the color; ranges from 0-100%

Color Identification: RGB to HSV
- Equations used for conversion (shown on slide)
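The slide's equations are not reproduced in the transcript; the standard RGB-to-HSV conversion (here via Python's built-in colorsys, rescaled to the ranges listed above) looks like:

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Standard RGB-to-HSV conversion, rescaled so hue is in degrees and
    saturation/value are percentages."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

# Example: a saturated red pixel.
print(rgb_to_hsv(255, 0, 0))   # -> (0.0, 100.0, 100.0)
```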

Color Selection & Differentiation
- The code currently identifies 24 colors based on the HSV color system
- Color identification is acceptable, but starts to fail in low lighting conditions
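The project's 24-color table is not given in the transcript, so the sketch below uses a hypothetical, much smaller set of hue bins just to show the shape of HSV-based naming:

```python
def name_color(h, s, v):
    """Illustrative only: bins and names are hypothetical, not the project's
    24-color table. Low value/saturation maps to black/gray/white first,
    then hue (0-360 degrees) is binned into named ranges."""
    if v < 20:
        return "black"
    if s < 15:
        return "white" if v > 80 else "gray"
    hue_names = [(15, "red"), (45, "orange"), (70, "yellow"), (160, "green"),
                 (200, "cyan"), (260, "blue"), (320, "purple"), (360, "red")]
    for upper, name in hue_names:
        if h <= upper:
            return name
    return "red"
```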

Results
(Example outputs shown on slide: white, light green, white, indian red)

Lessons Learned
- Edge detection: think about the big picture
  - User feasibility is critical: if a soldier cannot aim a gun, how accurate is his shot?
  - Simplicity is essential: presetting the card orientation led to efficiency and shortcuts for edge detection, while a slanted orientation requires much more processing time and development; original code variants tried and failed to account for all orientations

Lessons Learned
- Color detection:
  - HSV is a compromise between simply binning on RGB values and converting to L*a*b*
  - Normalization using the white point is more effective than "gray world"
  - A minimum level of lighting is required, since the camera is low quality


Future Work
- Implement the processing on an actual camera phone
- Decrease the processing time to audibly deliver the color to the user
- Increase the color library
- Refine the overall algorithm to distinguish more detailed backgrounds: patches, patterns, color designs