Blind Contrast Restoration Assessment by Gradient Ratioing at Visible Edges Nicolas Hautière 1, Jean-Philippe Tarel 1, Didier Aubert 1-2, Eric Dumont 1.


Blind Contrast Restoration Assessment by Gradient Ratioing at Visible Edges
Nicolas Hautière 1, Jean-Philippe Tarel 1, Didier Aubert 1,2, Eric Dumont 1
1 Laboratoire Central des Ponts et Chaussées, Paris, France
2 Institut National de REcherche sur les Transports et leur Sécurité, Versailles, France

Presentation Overview
1. Problem Statement
2. Visibility Model
3. Visible Edges Ratioing
4. Visual Properties of Fog
5. Contrast Restoration
6. Visible Edges Segmentation
7. Contrast Restoration Assessment
8. Conclusion

Problem Statement
 There is no established methodology to assess the performance of methods that restore fog-degraded images.
 Since fog effects are volumetric, fog cannot be treated as a classical image noise or degradation that is simply added and then removed.
 Consequently, unlike in image quality assessment or image restoration, there is no easy way to obtain a reference image, apart from synthetic images rendered from 3D models.
 We propose such an assessment methodology.

Visibility Model
 Visibility can be related to the contrast C, defined by C = ΔL / L_b, where ΔL is the luminance difference between a target and its background of luminance L_b.
 For suprathreshold contrasts, the Visibility Level (VL) of a target can be quantified by the ratio VL = C / C_threshold.
 As L_b is the same for both conditions, this equation reduces to VL = ΔL / ΔL_threshold.
 ΔL_threshold depends on many parameters and can be estimated using Adrian's empirical target visibility model (Adrian, 1989).
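As a small worked illustration of the two formulas above (not code from the talk; the threshold value is an invented stand-in for Adrian's model):

```python
def weber_contrast(delta_L, L_b):
    """Weber contrast C = ΔL / L_b: luminance difference over background luminance."""
    return delta_L / L_b

def visibility_level(delta_L, delta_L_threshold):
    """VL = ΔL / ΔL_threshold (the background luminance L_b cancels out)."""
    return delta_L / delta_L_threshold

# A target 8 cd/m^2 brighter than a 100 cd/m^2 background, with an
# illustrative 0.5 cd/m^2 detection threshold:
C = weber_contrast(8.0, 100.0)     # C = 0.08
VL = visibility_level(8.0, 0.5)    # VL = 16, well above threshold
```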

Visible Edges Ratioing
 To assess the performance of a contrast restoration method, we compute, for each pixel belonging to a visible edge in the restored image, the ratio r = ΔI_r / ΔI_o.
 ΔI_o is the gradient in the original image.
 ΔI_r is the gradient in the restored image.
 Assuming a linear camera response function, image gradients are proportional to luminance gradients, so r = ΔL_r / ΔL_o.
 Since an object is perceived through its edges, r expresses the gain in visibility level, VL_r = r · VL_o, where ΔL_threshold would be given by Adrian's model.
Hautière N, Dumont E (2007). Assessment of visibility in complex road scenes using digital imaging. In: The 26th session of the CIE (CIE'07), Beijing, China.
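A rough NumPy sketch of this ratioing (an assumption, not the authors' code): the Sobel operator stands in for whatever gradient the method uses, and the geometric mean summarizes r over the edge pixels.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via a plain Sobel cross-correlation (no SciPy needed)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    gy = sum(ky[i, j] * pad[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def gradient_ratio(original, restored, edge_mask, eps=1e-6):
    """r = ΔI_r / ΔI_o at pixels flagged as visible edges; the geometric mean
    summarizes the visibility gain over the whole image."""
    r = sobel_magnitude(restored) / (sobel_magnitude(original) + eps)
    return np.exp(np.mean(np.log(r[edge_mask] + eps)))
```

Doubling every intensity doubles every gradient, so on such a pair `gradient_ratio` returns approximately 2.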

Visual Properties of Fog
 Koschmieder's law gives the apparent luminance L of an object located at distance d as a function of the luminance L_0 measured close to this object: L = L_0 e^(−βd) + L_∞ (1 − e^(−βd)), where L_∞ is the atmospheric luminance and β is the extinction coefficient of fog. The first term is the direct transmission; the second is the atmospheric veil produced by scattered daylight.
 Duntley derived a contrast attenuation law: C = C_0 e^(−βd).
 The CIE defined a standard dimension called the "meteorological visibility distance", the distance at which the contrast of a black object drops to 5%: V_met = −(1/β) ln(0.05) ≈ 3/β.
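Koschmieder's law and the CIE definition translate directly into code; this sketch (an assumption, not from the talk) fogs an intensity map given a per-pixel depth map:

```python
import numpy as np

def apply_fog(R, d, beta, A_inf=255.0):
    """Koschmieder's law in intensity terms: I = R e^(-beta d) + A_inf (1 - e^(-beta d))."""
    t = np.exp(-beta * d)          # direct transmission factor
    return R * t + A_inf * (1.0 - t)

def meteorological_visibility(beta):
    """CIE meteorological visibility distance: contrast of a black object
    falls to 5%, i.e. e^(-beta V) = 0.05, so V = -ln(0.05)/beta (about 3/beta)."""
    return -np.log(0.05) / beta
```

At d = 0 the object keeps its intrinsic intensity; as d grows, every pixel fades toward the sky intensity A_inf.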

Contrast Restoration: Fog Density Estimation
 Assuming a linear camera response function, Koschmieder's law becomes, in the image plane: I = R e^(−βd) + A_∞ (1 − e^(−βd)), where R is the intrinsic intensity of the scene point and A_∞ the intensity of the sky.
 Assuming a flat world scene, the depth of a pixel on row v is d = λ/(v − v_h), where λ depends on camera parameters and v_h denotes the horizon line. It is then possible to estimate (β, A_∞) thanks to the existence of an inflection point v_i on the resulting vertical intensity curve I(v), which yields β = 2(v_i − v_h)/λ.
Hautière N, Tarel JP, Lavenant J, Aubert D (2006b). Automatic Fog Detection and Estimation of Visibility Distance through use of an Onboard Camera. Machine Vision and Applications Journal 17:8–20.
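A simplified sketch of the inflection-point idea (an assumption, not the published algorithm): substituting d = λ/(v − v_h) into Koschmieder's law and setting the second derivative of I(v) to zero gives v_i − v_h = βλ/2, so locating the sign change of the second difference of a vertical intensity profile recovers β.

```python
import numpy as np

def estimate_beta(profile, v_h, lam):
    """profile[v]: intensity of image row v (v grows downward) along the road.
    Returns the extinction coefficient beta, or None if no inflection is found."""
    d2 = np.diff(np.asarray(profile, dtype=float), n=2)
    # The curve is concave above the inflection row and convex below it,
    # so look for the first negative-to-positive crossing of the second difference.
    cross = np.where((d2[:-1] < 0) & (d2[1:] >= 0))[0]
    if len(cross) == 0:
        return None
    v_i = cross[0] + 1.5          # d2[i] measures curvature at row i + 1
    if v_i <= v_h:
        return None
    return 2.0 * (v_i - v_h) / lam
```

On a synthetic profile generated with a known β, the estimate lands within a few percent, limited only by the one-row resolution of the inflection search.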

Contrast Restoration: Principle
 To restore the contrast, we propose to reverse Koschmieder's law. In this way, R can be estimated directly for all scene points from: R = I e^(βd) + A_∞ (1 − e^(βd)).
 The remaining problem is the depth d of each pixel. For pixels not belonging to the sky region, i.e. I < A_∞, a scene model is proposed: d_1 models the depth of pixels belonging to the road plane and d_2 models the depth of the vertical surroundings, where c is a clipping plane and a weighting parameter controls the relative importance of the flat world against the vertical surroundings.
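A minimal sketch of the inversion (an assumption: it keeps only the flat-world depth d_1 clipped at plane c, omitting the vertical-surroundings component d_2 of the full scene model):

```python
import numpy as np

def restore_flat_world(I, beta, A_inf, v_h, lam, c):
    """Reverse Koschmieder's law: R = I e^(beta d) + A_inf (1 - e^(beta d)),
    with depth d = min(lam/(v - v_h), c) below the horizon and d = c elsewhere."""
    h, w = I.shape
    v = np.arange(h, dtype=float)[:, None]          # row index, broadcast over columns
    d = np.where(v > v_h, np.minimum(lam / (v - v_h + 1e-9), c), c)
    e = np.exp(beta * d)
    return np.clip(I * e + A_inf * (1.0 - e), 0.0, 255.0)
```

With the same depth map, fogging followed by this restoration is an exact round trip, which is easy to check on a synthetic image.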

Contrast Restoration: Algorithm
 The method aims at restoring the contrast of the road surface, while enhancing the contrast of vertical objects without distorting them.
 We seek the scene model that maximizes the contrast and minimizes the number of distorted pixels, i.e. the optimal values of the weighting parameter and of the clipping plane c. The problem can be formulated as a minimization process.
 Q is an image quality attribute: the norm of the local normalized correlation between the original image I and the restored image R.
Hautière N, Tarel JP, Aubert D (2007). Towards fog-free in-vehicle vision systems through contrast restoration. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'07), Minneapolis, USA.
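The quality attribute Q can be sketched as follows (an assumption: a plain-loop mean of absolute normalized correlations over non-overlapping k x k windows, not the paper's exact definition):

```python
import numpy as np

def quality_Q(I, R, k=5):
    """Mean |normalized correlation| between I and R over k x k windows.
    Q close to 1 means the restoration preserved local structure."""
    h, w = I.shape
    vals = []
    for y in range(0, h - k + 1, k):
        for x in range(0, w - k + 1, k):
            a = I[y:y + k, x:x + k].astype(float).ravel()
            b = R[y:y + k, x:x + k].astype(float).ravel()
            a -= a.mean()
            b -= b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 1e-9:                       # skip flat windows
                vals.append(abs((a * b).sum() / denom))
    return float(np.mean(vals)) if vals else 0.0
```

Any affine intensity change (R = 2I + 10, say) keeps every window perfectly correlated, so Q stays at 1; distortions such as clipped or inverted regions pull it down.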

Contrast Restoration: Results

Visible Edges Segmentation: Principle and Implementation
 In fog, the visible edges are the set of edges having a local contrast above 5%.
 The LIP model (Jourlin and Pinoli, 2001) defines the contrast associated with a border F separating two adjacent regions from C_(x,y)(f), the LIP contrast between two pixels x and y of the image f.
 To implement this definition of contrast, Köhler's segmentation method is used (Köhler, 1981).
 Instead of using this method to binarize images, we use it to measure the contrast locally.
Hautière N, Aubert D, Jourlin M (2006a). Measurement of local contrast in images, application to the measurement of visibility distance through use of an onboard camera. Traitement du Signal 23:145–58.
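As a simplified stand-in for Köhler's method (an assumption: local contrast is approximated by the Michelson ratio (max − min)/(max + min) over a 3x3 window, not the LIP definition), the 5% visible-edge test might look like:

```python
import numpy as np

def local_contrast(img):
    """Michelson contrast (max - min) / (max + min) over each 3x3 neighborhood."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1, mode="edge")
    stack = np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])
    mx, mn = stack.max(axis=0), stack.min(axis=0)
    return (mx - mn) / np.maximum(mx + mn, 1e-6)

def visible_edges(img, threshold=0.05):
    """Boolean map of pixels whose local contrast exceeds the 5% threshold."""
    return local_contrast(img) > threshold
```

On a step edge between gray levels 100 and 120, the contrast at the boundary is 20/220 ≈ 9%, so only the pixels touching the step are flagged.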

Visible Edges Segmentation: Results

Restoration Assessment: Final Results
 The computation of r thus makes it possible to measure the increase in visibility level VL produced by the contrast restoration method.
 e denotes the percentage of new visible edges, i.e. edges with C > 5% after restoration.
 [Results shown for histogram stretching vs. the proposed method.]
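Given boolean visible-edge maps before and after restoration, the e indicator reduces to a few lines (a sketch; the function name is an assumption):

```python
import numpy as np

def new_visible_edges_ratio(mask_before, mask_after):
    """Percentage of edge pixels visible after restoration that were not
    visible before (mask_*: boolean visible-edge maps)."""
    new = mask_after & ~mask_before
    n_after = mask_after.sum()
    return 100.0 * new.sum() / n_after if n_after else 0.0
```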

Conclusion
 In this paper, we proposed:
 An efficient contrast restoration method,
 A methodology to assess its performance by gradient ratioing at visible edges,
 A method, based on the LIP model, to extract edges having a local contrast above 5%.
 In the future, we want to tackle:
 The detection of other meteorological phenomena, such as rain and night fog,
 The restoration of other types of image degradation.