
On the Efficiency of Image Metrics for Evaluating the Visual Quality of 3D Models. Guillaume Lavoué (Université de Lyon, LIRIS), Mohamed Chaker Larabi (Université de Poitiers, XLIM-SIC), Libor Vasa (University of West Bohemia).

An illustration: five distortions of the same model, all with the same maximum Root Mean Square Error (1.05 × 10⁻³) but very different visual quality. Original; Watermarking [Wang et al. 2011] (0.14); Smoothing [Taubin 2000] (0.40); Watermarking [Cho et al. 2006] (0.51); Simplification [Lindstrom, Turk 2000] (0.62); Noise addition (0.84).

Quality metrics for static meshes: MSDM [Lavoué et al. 2006], MSDM2 [Lavoué 2011], [Torkhani et al. 2012]. The pipeline: the original and distorted models are matched; local curvature statistics are computed on each; local differences of these statistics yield a local distortion map; spatial pooling produces a global distortion score.

Our previous works. Why not use Image Quality Metrics instead? Render the model into 2D views, apply an image metric, and combine the per-view results into a distortion score. Such an image-based approach has already been used for driving simplification [Lindstrom, Turk 2000][Qu, Meyer 2008].

Our study. Determine the best set of parameters for such an image-based quality assessment approach, and compare this approach to the best-performing model-based metrics.

Many parameters: Which 2D metric to use? How many views, and which views? How to combine the 2D scores? Which rendering and lighting? In our study, we consider 6 image metrics, 2 rendering algorithms, 9 lighting conditions, 5 ways of combining image metric results, and 4 databases to evaluate the results: around 100,000 images in total.

Image Quality Metrics. Simple: PSNR and Root Mean Square Error. State-of-the-art algorithms: MSSIM (multi-scale SSIM) [Wang et al. 2003], VIF (visual information fidelity) [Sheikh and Bovik 2006], IWSSIM (information content weighted SSIM) [Wang and Li 2011], FSIM (feature similarity index) [Zhang et al. 2011].
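The two simple baselines above can be written in a few lines; a sketch assuming grayscale images passed as flat lists of pixel values (the function name and the 8-bit `max_val` default are illustrative, not from the paper):

```python
import math

def rmse_psnr(img_a, img_b, max_val=255.0):
    """RMSE and PSNR between two equal-size images given as flat
    lists of pixel values; max_val assumes 8-bit pixels."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    rmse = math.sqrt(mse)
    psnr = float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)
    return rmse, psnr
```

The perceptual metrics listed (MSSIM, VIF, IWSSIM, FSIM) are far more involved and are best taken from their authors' reference implementations.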

Generation of 2D views and lighting conditions. 42 cameras are placed uniformly around the object; rendering uses a single white directional light source. The light is fixed either with respect to the camera or with respect to the object, at 3 positions (front, top, top-right), so we have 3 × 2 = 6 lighting conditions. We also consider averages over the object-light conditions, over the camera-light conditions, and over all of them, giving 9 conditions.
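The slides do not say how the 42 cameras are distributed; one standard way to place points near-uniformly on a sphere is the Fibonacci lattice. A sketch (the function name and radius parameter are illustrative, not from the paper):

```python
import math

def fibonacci_sphere(n=42, radius=1.0):
    """Place n points near-uniformly on a sphere (Fibonacci lattice)."""
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # heights uniform in (-1, 1)
        r = math.sqrt(max(0.0, 1.0 - z * z))   # ring radius at height z
        theta = golden_angle * i               # rotate by golden angle each step
        points.append((radius * r * math.cos(theta),
                       radius * r * math.sin(theta),
                       radius * z))
    return points

cameras = fibonacci_sphere(42)
```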

Image Rendering Protocols. We consider 2 ways of computing the normals: with or without averaging over the neighborhood.
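A sketch of the averaged variant, under the assumption that "averaging over the neighborhood" means smooth-shaded vertex normals obtained by averaging incident face normals; the non-averaged variant would use each face normal directly (flat shading):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return tuple(c / length for c in v) if length > 0 else (0.0, 0.0, 0.0)

def face_normal(vertices, face):
    a, b, c = (vertices[i] for i in face)
    ab = tuple(b[i] - a[i] for i in range(3))
    ac = tuple(c[i] - a[i] for i in range(3))
    return normalize(cross(ab, ac))

def vertex_normals(vertices, faces):
    """Averaged variant: each vertex normal is the normalized sum of
    the normals of its incident faces."""
    acc = [[0.0, 0.0, 0.0] for _ in vertices]
    for face in faces:
        n = face_normal(vertices, face)
        for vi in face:
            for k in range(3):
                acc[vi][k] += n[k]
    return [normalize(a) for a in acc]
```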

Pooling algorithms. How to combine the per-image quality scores into a single one? The Minkowski norm is popular: S = ((1/N) Σᵢ sᵢᵖ)^(1/p). We also consider image importance weights [Secord et al. 2011], based on a perceptual model of viewpoint preference and surface visibility.
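A minimal sketch of Minkowski pooling with optional per-image weights (the weighted generalized-mean form shown here is an assumption; the actual weights in the study follow [Secord et al. 2011]):

```python
def minkowski_pool(scores, p=1.0, weights=None):
    """Weighted Minkowski pooling: (sum_i w_i * s_i**p / sum_i w_i)**(1/p).
    p = 1 is a plain (weighted) average; larger p emphasizes the
    worst-scoring views."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(weights)
    acc = sum(w * s ** p for s, w in zip(scores, weights))
    return (acc / total) ** (1.0 / p)
```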

The MOS databases.
The LIRIS/EPFL General-Purpose Database: 88 models (from 40K to 50K vertices) from 4 reference objects; non-uniform noise addition and smoothing.
The LIRIS Masking Database: 26 models (from 9K to 40K vertices) from 4 reference objects; noise addition on smooth or rough regions.
The IEETA Simplification Database: 30 models (from 2K to 25K vertices) from 5 reference objects; three simplification algorithms.
The UWB Compression Database: 68 models from 5 reference objects; different kinds of artefacts from compression.

Results and analysis. We basically have a full factorial experiment, a design heavily used in statistics to study the effect of different factors on a response variable. We consider 4 factors: the metric (6 possible values), the lighting (9 possible values), the pooling (5 possible values), and the rendering (2 possible values), giving 540 possible combinations. We consider two response variables: Spearman correlation over all the objects, and Spearman correlation averaged per object.
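Spearman correlation is the Pearson correlation of the ranks, so it can be computed without any statistics library; a self-contained sketch with average ranks for ties (function names are illustrative):

```python
import math

def ranks(values):
    """Ranks starting at 1; ties receive the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over the tie group
        avg_rank = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            out[order[k]] = avg_rank
        i = j + 1
    return out

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```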

Results and analysis. For a given factor with n possible values, we have n sets of paired Spearman coefficients. To estimate the effect of a given factor on objective metric performance, we conduct pairwise comparisons between each pair of its values (i.e. n(n-1)/2 comparisons). Since the values are paired, we can do better than a simple comparison of means: a statistical significance test (the non-parametric Wilcoxon signed-rank test rather than Student's t-test). We study the median of the paired differences, as well as the 25th and 75th percentiles.
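A sketch of the paired-difference summary described above: median and 25th/75th percentiles of the differences between two configurations scored on the same cases. The linear-interpolation percentile convention is an assumption about the authors' exact choice; the significance test itself would use a Wilcoxon signed-rank implementation such as scipy.stats.wilcoxon:

```python
def percentile(data, q):
    """q-th percentile with linear interpolation between sorted samples."""
    s = sorted(data)
    pos = (len(s) - 1) * q / 100.0
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def paired_summary(scores_a, scores_b):
    """Summarize paired differences between two configurations evaluated
    on the same cases: 25th percentile, median, 75th percentile."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    return (percentile(diffs, 25), percentile(diffs, 50), percentile(diffs, 75))
```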

Influence of the metrics. IWSSIM provides the best results. FSIM and MSSIM are second best, significantly better than MSE and PSNR. VIF provides unstable results (see the percentiles).

Influence of the lighting. Indirect illumination provides better results. The light has to be linked to the camera. Object-front is not so bad, but its performance is not stable.

Influence of the pooling. Low values of the Minkowski exponent p are better. Importance weights do not bring significant improvements.

Comparisons with 3D metrics. For easy scenarios, 2D metrics are excellent. However, when the task becomes more difficult, 3D metrics are better. Still, simple image-based metrics outperform simple geometric ones.