An Interesting Question

An Interesting Question How generally applicable is the Backwards approach to PCA? An Attractive Answer: James Damon, UNC Mathematics Key Idea: Express Backwards PCA as a Nested Series of Constraints

General View of Backwards PCA Define Nested Spaces via Constraints E.g. SVD: $S_k = \{ x : x = \sum_{j=1}^{k} c_j u_j \}$ Now Define: $S_{k-1} = \{ x \in S_k : \langle x, u_k \rangle = 0 \}$ Constraint Gives Nested Reduction of Dim'n
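A minimal numerical sketch of this constraint view, assuming hypothetical NumPy data: $S_k$ is the span of the first $k$ right singular vectors, and $S_{k-1}$ is recovered from $S_k$ by adding the single constraint $\langle x, u_k \rangle = 0$.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))          # 20 cases, 5 variables (toy data)
X = X - X.mean(axis=0)                    # center

# SVD: rows of Vt are the directions u_1, ..., u_5
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def project_onto_S(X, Vt, k):
    """Project each row of X onto S_k = span of the first k directions."""
    B = Vt[:k]                            # basis of S_k, shape (k, 5)
    return X @ B.T @ B

k = 3
P_k = project_onto_S(X, Vt, k)
P_km1 = project_onto_S(X, Vt, k - 1)

# Backwards step: impose the extra constraint <x, u_k> = 0 inside S_k
u_k = Vt[k - 1]
constrained = P_k - np.outer(P_k @ u_k, u_k)

# The constrained version of the S_k projection equals the direct S_{k-1} projection
print(np.allclose(constrained, P_km1))    # True
```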

Vectors of Angles Vectors of Angles as Data Objects: $(\theta_1, \ldots, \theta_d)^T \in (S^1)^d$ Slice space $(S^1)^d$ with hyperplanes???? (ala Principal Nested Spheres)

Vectors of Angles E.g. $d=2$, Data w/ “Single Mode of Var’n” Best Fitting Planar Slice gives Bimodal Dist’n Special Thanks to Eduardo García-Portugués

Torus Space $S^1 \times S^1$ Try To Fit A Geodesic Challenge: Can Get Arbitrarily Close

Torus Space $S^1 \times S^1$ Fit Nested Sub-Manifold

PNS Main Idea Data Objects: $X_1, \cdots, X_n \in S_d \subseteq \mathbb{R}^{d+1}$ Where $S_d$ is a $d$-dimensional manifold Consider a nested series of sub-manifolds: $\mathcal{S} = \{ S_0, \cdots, S_{d-1} \}$ where for $j = 0, \cdots, d-1$: $\dim(S_j) = j$ and $S_j \subseteq S_{j+1}$ Goal: Fit all of $\mathcal{S}$ simultaneously to $X_1, \cdots, X_n$

General Background Call each $S_j$ a stratum, so $\mathcal{S}$ is a manifold stratification, To be fit to $X_1, \cdots, X_n$ New Approach: Simultaneously fit $\mathcal{S} = \{ S_0, \cdots, S_{d-1} \}$ Nested Submanifold (NS)

Projection Notation For $k = 0, \cdots, d-1$ let $P^{(k)}$ denote the telescoping projection onto $S_k$, I.e. for $X \in S_d$: $P^{(k)}(X) = P_k P_{k+1} \cdots P_{d-1} X$ (each $P_j$ projecting from $S_{j+1}$ onto $S_j$) Note: This projection is fundamental to Backwards PCA methods

PNS Components For a given $\mathcal{S}$, represent a point $X \in S_d$ by its Nested Submanifold components: $c_1(X), \cdots, c_d(X)$ where for $j = 1, \cdots, d$: $c_j(X) = P^{(j)}(X) - P^{(j-1)}(X)$ In the sense that “$A - B$” means the shortest geodesic arc between $A$ & $B$

Nested Submanifold Fits Simultaneous Fit Criteria? Based on Stratum-Wise Sums of Squares For $j = 1, \cdots, d$ define $SS_j = \sum_{i=1}^{n} d\left( P^{(j)}(X_i), P^{(j-1)}(X_i) \right)^2$ Uses “lengths” of NS Components: $c_j(X) = P^{(j)}(X) - P^{(j-1)}(X)$
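A small sketch, assuming hypothetical Euclidean toy data, of the stratum-wise sums of squares when $P^{(j)}$ is ordinary projection onto the first $j$ PC directions and $S_0$ is the sample mean. It also checks the Pythagorean identity used on the next slides: the $SS_j$ add up to the total sum of squares, so $SS_1 + SS_2$ is constant in $\mathbb{R}^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 2)) @ np.array([[2.0, 0.3], [0.3, 0.5]])
Xc = X - X.mean(axis=0)                       # S_0 = sample mean, so work centered

_, _, Vt = np.linalg.svd(Xc, full_matrices=False)

def P(Xc, j):
    """Euclidean telescoping projection onto S_j (span of first j PC directions)."""
    if j == 0:
        return np.zeros_like(Xc)              # everything collapses to the mean
    B = Vt[:j]
    return Xc @ B.T @ B

d = Xc.shape[1]
SS = [np.sum((P(Xc, j) - P(Xc, j - 1)) ** 2) for j in range(1, d + 1)]

# Pythagorean identity: the SS_j sum to the total sum of squares about the mean
print(SS, sum(SS), np.sum(Xc ** 2))
```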

NS Components in $\mathbb{R}^2$ NS Candidate 2 (Shifted $S_0$ to Sample Mean) Note: Both $SS_1$ & $SS_2$ Decrease

NS Components in $\mathbb{R}^2$ NS based On PC1 Note: $SS_1 \uparrow$, $SS_2 \downarrow$ Yet $SS_1 + SS_2$ is Constant (Pythagorean Thm)

NS Components in $\mathbb{R}^2$ NS based On PC2 Note: $SS_1 \downarrow$, $SS_2 \uparrow$ $SS_1 + SS_2$ is Constant (Pythagorean Thm)

NS Components in $\mathbb{R}^2$ NS Candidate 1

NS Components in $\mathbb{R}^2$ NS Candidate 2 $0.3 \times SS_1 + 0.7 \times SS_2 \downarrow$

NS Components in $\mathbb{R}^2$ NS based On PC1 $0.3 \times SS_1 + 0.7 \times SS_2 \downarrow$

NS Components in $\mathbb{R}^2$ NS based On PC2 $0.3 \times SS_1 + 0.7 \times SS_2 \uparrow$

Nested Submanifold Fits Simultaneously fit $\mathcal{S} = \{ S_0, \cdots, S_{d-1} \}$ Simultaneous Fit Criterion? $w_1 SS_1 + w_2 SS_2 + \cdots + w_d SS_d$ Above Suggests Want: $w_1 < w_2 < \cdots < w_d$ Works for Euclidean PCA (?)

Nested Submanifold Fits Simultaneous Fit Criterion? $w_1 SS_1 + w_2 SS_2 + \cdots + w_d SS_d$ Above Suggests Want: $w_1 < w_2 < \cdots < w_d$ Important Predecessor Pennec (2016) AUC Criterion: $w_j \propto (j-1)$

Pennec’s Area Under the Curve Based on Scree Plot (plot of $SS_1, SS_2, SS_3, SS_4$ vs. Component Index, scaled to 100%)

Pennec’s Area Under the Curve Based on Cumulative Scree Plot (cumulative $SS_1, \ldots, SS_4$ vs. Component Index)

Pennec’s Area Under the Curve Based on Cumulative Scree Plot Area $= SS_2 + 2\,SS_3 + 3\,SS_4$
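A tiny arithmetic check, with hypothetical scree values, that the area formula on this slide is exactly the weighted criterion with $w_j \propto (j-1)$.

```python
# Hypothetical scree values, largest component first: SS_1, ..., SS_4
SS = [10.0, 5.0, 3.0, 1.0]

# Area formula from the slide: SS_2 + 2*SS_3 + 3*SS_4,
# i.e. sum over j of (j - 1) * SS_j
area = sum(i * ss for i, ss in enumerate(SS))
print(area)   # 0*10 + 1*5 + 2*3 + 3*1 = 14.0
```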

Torus Space $S^1 \times S^1$ Fit Nested Sub-Manifold Choice of $w_1$ & $w_2$ in: $w_1 SS_1 + w_2 SS_2$ ???

Torus Space $S^1 \times S^1 \times \cdots \times S^1$ Tiled $(-\pi, \pi]^d$ embedding is complicated (maybe OK for low rank approx.) Instead Consider Nested Sub-Tori Work in Progress with García, Wood, Le Key Factor: Important Modes of Variation

OODA Big Picture New Topic: Curve Registration Main Reference: Srivastava et al (2011)

Collaborators Anuj Srivastava (Florida State U.) Wei Wu (Florida State U.) Derek Tucker (Florida State U.) Xiaosun Lu (U. N. C.) Inge Koch (U. Adelaide) Peter Hoffmann (U. Adelaide) J. O. Ramsay (McGill U.) Laura Sangalli (Milano Polytech.)

Context Functional Data Analysis Curves as Data Objects Toy Example:

Context Functional Data Analysis Curves as Data Objects Toy Example: How Can We Understand Variation?

Context Functional Data Analysis Curves as Data Objects Toy Example: How Can We Understand Variation?

Context Functional Data Analysis Curves as Data Objects Toy Example: How Can We Understand Variation?

Functional Data Analysis Insightful Decomposition

Functional Data Analysis Insightful Decomposition Horiz’l Var’n

Functional Data Analysis Insightful Decomposition Vertical Variation Horiz’l Var’n

Challenge Fairly Large Literature Many (Diverse) Past Attempts Limited Success (in General) Surprisingly Slippery (even mathematical formulation)

Challenge (Illustrated) Thanks to Wei Wu

Challenge (Illustrated) Thanks to Wei Wu

Functional Data Analysis Appropriate Mathematical Framework? Vertical Variation Horiz’l Var’n

Landmark Based Shape Analysis Approach: Identify objects that are: Translations Rotations Scalings of each other Mathematics: Equivalence Relation Results in: Equivalence Classes Which become the Data Objects

Landmark Based Shape Analysis Equivalence Classes become Data Objects a.k.a. “Orbits” Mathematics: Called “Quotient Space”

Curve Registration What are the Data Objects? Vertical Variation Horiz’l Var’n

Curve Registration What are the Data Objects? Consider “Time Warpings” $\gamma: [0,1] \to [0,1]$ (smooth) More Precisely: Diffeomorphisms

Curve Registration Diffeomorphisms $\gamma: [0,1] \to [0,1]$: $\gamma$ is 1 to 1, $\gamma$ is onto (thus $\gamma$ is invertible), $\gamma$ is Differentiable, $\gamma^{-1}$ is Differentiable
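A minimal sketch of one such time warp, assuming the simple one-parameter family $\gamma_a(x) = x + a \sin(2\pi x)/(2\pi)$ chosen only for illustration (it is not the class used in these slides): for $|a| < 1$ it fixes the endpoints and has positive derivative, so it is a diffeomorphism of $[0,1]$.

```python
import numpy as np

def make_warp(a):
    """Toy one-parameter warp of [0,1]; |a| < 1 keeps it a diffeomorphism."""
    def gamma(x):
        return x + a * np.sin(2 * np.pi * x) / (2 * np.pi)
    return gamma

gamma = make_warp(0.8)
x = np.linspace(0.0, 1.0, 201)

print(gamma(0.0), gamma(1.0))            # endpoints fixed: 0.0 and 1.0
print(np.all(np.diff(gamma(x)) > 0))     # strictly increasing: True
```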

Time Warping Intuition Elastically Stretch & Compress Axis

Time Warping Intuition Elastically Stretch & Compress Axis $\gamma(x) = x$ (identity)

Time Warping Intuition Elastically Stretch & Compress Axis $\gamma(x)$

Time Warping Intuition Elastically Stretch & Compress Axis $\gamma(x)$

Time Warping Intuition Elastically Stretch & Compress Axis $\gamma(x)$

Curve Registration Say curves $f_1(x)$ and $f_2(x)$ are equivalent, $f_1 \approx f_2$, When $\exists \gamma$ so that $f_1(\gamma(x)) = (f_1 \circ \gamma)(x) = f_2(x)$

Curve Registration Toy Example: Starting Curve, $f(x)$

Curve Registration Toy Example: Equivalent Curves, $f(x)$

Curve Registration Toy Example: Warping Functions

Curve Registration Toy Example: Non-Equivalent Curves Cannot Warp Into Each Other

Data Objects I Equivalence Classes of Curves (parallel to Kendall shape analysis)

Data Objects I Equivalence Classes of Curves (Set of All Warps of Given Curve) Notation: $[f] = \{ f \circ \gamma : \gamma \in \Gamma \}$ for a “representor” $f(x)$

Data Objects I Equivalence Classes of Curves (Set of All Warps of Given Curve) Next Task: Find Metric on Space of Curves

Metrics in Curve Space Find Metric on Equivalence Classes Start with Warp Invariant Metric on Curves & Extend

Metrics in Curve Space Traditional Approach to Curve Registration: Align curves, say $f_1$ and $f_2$, By finding optimal time warp, $\gamma$, so: $\inf_{\gamma} \| f_1 - f_2 \circ \gamma \|$ Vertical var’n: PCA after alignment Horizontal var’n: PCA on $\gamma$s

Metrics in Curve Space Problem: Don’t have proper metric Since: $d(f_1, f_2) \neq d(f_2, f_1)$ Because: $\inf_{\gamma} \| f_1 - f_2 \circ \gamma \| \neq \inf_{\gamma} \| f_2 - f_1 \circ \gamma \|$
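A rough numerical illustration of this asymmetry, assuming two hypothetical bump curves and restricting the search to the toy warp family sketched above (a grid search stands in for a proper optimizer): the two orderings of the naive criterion generally give different values, so it is not a metric.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 400)
dt = t[1] - t[0]

def warp(x, a):
    # same toy warp family as in the earlier sketch
    return x + a * np.sin(2 * np.pi * x) / (2 * np.pi)

# Two hypothetical curves: a narrow bump and a broad bump
f1 = lambda x: np.exp(-((x - 0.3) / 0.05) ** 2)
f2 = lambda x: np.exp(-((x - 0.6) / 0.20) ** 2)

def naive_aligned_dist(f, g):
    """min over the toy warp family of the L2 norm ||f - g o gamma|| (grid search)."""
    best = np.inf
    for a in np.linspace(-0.95, 0.95, 191):
        diff = f(t) - g(warp(t, a))
        best = min(best, np.sqrt(np.sum(diff ** 2) * dt))
    return best

# The two orderings generally disagree
print(naive_aligned_dist(f1, f2), naive_aligned_dist(f2, f1))
```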

Metrics in Curve Space $\inf_{\gamma} \| f_1 - f_2 \circ \gamma \| \neq \inf_{\gamma} \| f_2 - f_1 \circ \gamma \|$ Thanks to Xiaosun Lu

Metrics in Curve Space Note: $\inf_{\gamma} \| f_1 - f_2 \circ \gamma \| \neq \inf_{\gamma} \| f_2 - f_1 \circ \gamma \|$ Very Different L2 norms Thanks to Xiaosun Lu

Metrics in Curve Space Solution: Look for Warp Invariant Metric $d$ Where: $d(f_1, f_2) = d(f_1 \circ \gamma, f_2 \circ \gamma)$

Metrics in Curve Space $d(f_1, f_2) = d(f_1 \circ \gamma, f_2 \circ \gamma)$ I.e. Have “Parallel” Representatives Of Equivalence Classes

Metrics in Curve Space Warp Invariant Metric $d$ Developed in context of: Likelihood Geometry Fisher–Rao Metric: $d_{FR}(f_1, f_2) = d_{FR}(f_1 \circ \gamma, f_2 \circ \gamma)$

Metrics in Curve Space Fisher–Rao Metric: Computation Based on Square Root Velocity Function (SRVF) $q_f(t) = \dot{f}(t) / \sqrt{|\dot{f}(t)|}$ Signed Version Of Square Root Derivative Where $\dot{f}(t) = \frac{\partial}{\partial t} f(t)$

Metrics in Curve Space Square Root Velocity Function (SRVF) $q_f(t) = \dot{f}(t) / \sqrt{|\dot{f}(t)|}$ $f(t) = f(0) + \int_0^t q_f(s)\, |q_f(s)| \, ds$
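A minimal sketch of the SRVF map and its inverse on a sampled curve (the curve itself is hypothetical; the small epsilon guarding against $\dot{f}(t) = 0$ is a numerical convenience, not part of the definition). Since $q_f |q_f| = \dot{f}$, integrating recovers the original curve up to discretization error.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]
f = np.sin(2 * np.pi * t) + 2 * t         # hypothetical sampled curve

# SRVF: q_f(t) = f'(t) / sqrt(|f'(t)|)
fdot = np.gradient(f, t)
q = fdot / np.sqrt(np.abs(fdot) + 1e-12)   # epsilon avoids division by zero

# Inverse map: f(t) = f(0) + integral_0^t q(s) |q(s)| ds
f_rec = f[0] + np.cumsum(q * np.abs(q)) * dt

print(np.max(np.abs(f - f_rec)))           # small; discretization error only
```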

Metrics in Curve Space Fisher–Rao Metric: Computation Based on SRVF: $d_{FR}(f_1, f_2) = \| q_1 - q_2 \|_2$ So work with the SRVF, since it is much easier to compute.

Metrics in Curve Space Why square roots? Thanks to Xiaosun Lu

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots?

Metrics in Curve Space Why square roots? Dislikes Pinching Focusses Well On Peaks of Unequal Height

Metrics in Curve Space Note on SRVF representation: $d_{FR}(f_1, f_2) = \| q_1 - q_2 \|_2$ Can show: Warp Invariance $d_{FR}(f_1, f_2) = d_{FR}(f_1 \circ \gamma, f_2 \circ \gamma)$ Follows from Jacobian calculation

Metrics in Curve Quotient Space Above was Invariance for Individual Curves Now extend to: Equivalence Classes of Curves I.e. Orbits as Data Objects I.e. Quotient Space

Metrics in Curve Quotient Space Define Metric on Equivalence Classes: For $[f_1]$ & $[f_2]$, i.e. $q_1$ & $q_2$: $d([f_1], [f_2]) = \inf_{\gamma \in \Gamma} \| q_1 - q_{f_2 \circ \gamma} \|_2$ Independent of Choice of $f_1$ & $f_2$ By Warp Invariance
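A rough sketch of this quotient distance, assuming hypothetical bump curves and a second toy warp family $\gamma_b(x) = x^b$ chosen because it is closed under inversion ($b \leftrightarrow 1/b$); the SRVF of the warped curve $f_2 \circ \gamma_b$ is computed directly from its samples. A grid search stands in for the dynamic programming used in practice (mentioned on a later slide), so this is only illustrative.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 801)
dt = t[1] - t[0]

def srvf(f_vals):
    """SRVF of a sampled curve: q = f' / sqrt(|f'|), with a small numerical guard."""
    fdot = np.gradient(f_vals, t)
    return fdot / np.sqrt(np.abs(fdot) + 1e-12)

def quotient_dist(f, g):
    """Approximate inf over warps of || q_f - q_{g o gamma} ||_2 (toy family gamma_b(x) = x**b)."""
    qf = srvf(f(t))
    best = np.inf
    for b in np.logspace(-0.7, 0.7, 141):   # grid closed under b <-> 1/b
        qg_warp = srvf(g(t ** b))           # SRVF of the warped curve g o gamma_b
        best = min(best, np.sqrt(np.sum((qf - qg_warp) ** 2) * dt))
    return best

f1 = lambda x: np.exp(-((x - 0.4) / 0.1) ** 2)
f2 = lambda x: np.exp(-((x - 0.6) / 0.1) ** 2)

# Unlike the naive criterion, this distance is (approximately) symmetric
print(quotient_dist(f1, f2), quotient_dist(f2, f1))
```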

Mean in Curve Quotient Space Benefit of a Metric: Allows Definition of a “Mean” Fréchet Mean Geodesic Mean Barycenter Karcher Mean

Mean in Curve Quotient Space Given Equivalence Class Data Objects: $[f_1], [f_2], \cdots, [f_n]$ The Karcher Mean is: $[\mu] = \operatorname{argmin}_{[q]} \sum_{i=1}^{n} d([q], [q_i])^2$

Mean in Curve Quotient Space The Karcher Mean is: $[\mu] = \operatorname{argmin}_{[q]} \sum_{i=1}^{n} d([q], [q_i])^2$ Intuition: Recall, for Euclidean Data, Minimizer = Conventional $\bar{X}$
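A quick numerical check of the Euclidean intuition stated here, on hypothetical data: minimizing the sum of squared distances with a generic optimizer lands on the ordinary sample mean.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.standard_normal((30, 2)) + np.array([1.0, -2.0])   # hypothetical Euclidean data

def frechet_objective(mu):
    # sum of squared Euclidean distances from mu to the data points
    return np.sum(np.sum((X - mu) ** 2, axis=1))

res = minimize(frechet_objective, x0=np.zeros(2))
print(res.x)            # numerically equals ...
print(X.mean(axis=0))   # ... the conventional sample mean
```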

Mean in Curve Quotient Space Next Define “Most Representative” Choice of $\mu_n$ As Representer of $[\mu]$

Mean in Curve Quotient Space “Most Representative” $\mu_n$ in $[\mu]$ Given a candidate $\mu$, Consider warps to each $q_i$ Choose $\mu_n$ to make Karcher mean of warps = Identity (under Fisher–Rao metric)

Mean in Curve Quotient Space “Most Representative” $\mu_n$ in $[\mu]$ Thanks to Anuj Srivastava

Toy Example – (Details Later) Estimated Warps (Note: Represented With Karcher Mean At Identity)

Mean in Curve Quotient Space “Most Representative” $\mu_n$ in $[\mu]$ Terminology: The “Template Mean”

More Data Objects Final Curve Warps: Warp Each Data Curve, $f_1, \cdots, f_n$, To Template Mean, $\mu_n$ Denote Warp Functions $\gamma_1, \cdots, \gamma_n$ Gives (Roughly Speaking): Vertical Components $f_1 \circ \gamma_1, \cdots, f_n \circ \gamma_n$ (Aligned Curves) Horizontal Components $\gamma_1, \cdots, \gamma_n$ Data Objects I

More Data Objects Final Curve Warps: Warp Each Data Curve, $f_1, \cdots, f_n$, To Template Mean, $\mu_n$ Denote Warp Functions $\gamma_1, \cdots, \gamma_n$ Gives (Roughly Speaking): Vertical Components $f_1 \circ \gamma_1, \cdots, f_n \circ \gamma_n$ (Aligned Curves) Horizontal Components $\gamma_1, \cdots, \gamma_n$ Data Objects II ~ Kendall’s Shapes

More Data Objects Final Curve Warps: Warp Each Data Curve, $f_1, \cdots, f_n$, To Template Mean, $\mu_n$ Denote Warp Functions $\gamma_1, \cdots, \gamma_n$ Gives (Roughly Speaking): Vertical Components $f_1 \circ \gamma_1, \cdots, f_n \circ \gamma_n$ (Aligned Curves) Horizontal Components $\gamma_1, \cdots, \gamma_n$ Data Objects III ~ Chang’s Transfo’s

Computation Several Variations of Dynamic Programming Done by Eric Klassen, Wei Wu

Toy Example Raw Data

Toy Example Raw Data Both Horizontal And Vertical Variation

Toy Example Conventional PCA Projections

Toy Example Conventional PCA Projections Power Spread Across Spectrum

Toy Example Conventional PCA Projections Power Spread Across Spectrum

Toy Example Conventional PCA Scores

Toy Example Conventional PCA Scores Views of 1-d Curve Bending Through 4 Dim’ns

Toy Example Conventional PCA Scores Patterns Are “Harmonics” In Scores

Toy Example Scores Plot Shows Data Are “1” Dimensional So Need Improved PCA Decomp.

Visualization Vertical Variation: PCA on Aligned Curves, $f_1 \circ \gamma_1, \cdots, f_n \circ \gamma_n$ Projected Curves
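A minimal sketch of this visualization step, assuming the alignment has already been done (the "aligned" curves below are simulated directly, not taken from the slides' toy example): ordinary functional PCA via SVD of the centered curve matrix, with projections onto the leading component. The same recipe applied to the warp functions $\gamma_1, \cdots, \gamma_n$ gives the horizontal pieces on the following slides.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
rng = np.random.default_rng(3)

# Hypothetical "aligned" curves: one vertical mode (random heights, common peak location)
heights = 1.0 + 0.3 * rng.standard_normal(25)
F = np.array([h * np.exp(-((t - 0.5) / 0.1) ** 2) for h in heights])   # 25 x 101

mean_curve = F.mean(axis=0)
U, s, Vt = np.linalg.svd(F - mean_curve, full_matrices=False)

var_explained = s ** 2 / np.sum(s ** 2)
print(var_explained[:3])          # essentially all variation in the 1st component

# PC1 projections: mean curve plus score times the first eigen-curve
pc1_proj = mean_curve + np.outer(U[:, 0] * s[0], Vt[0])
```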

Toy Example Aligned Curves (Clear 1-d Vertical Var’n)

Toy Example Aligned Curve PCA Projections All Var’n In 1st Component

Visualization Horizontal Variation: PCA on Warps, $\gamma_1, \cdots, \gamma_n$ Projected Curves

Toy Example Estimated Warps

Toy Example Warps, PC Projections

Toy Example Warps, PC Projections Mostly 1st PC

Toy Example Warps, PC Projections Mostly 1st PC, But 2nd Helps Some

Toy Example Warps, PC Projections Rest is Not Important

Toy Example Horizontal Var’n Visualization Challenge: (Complicated) Warps Hard to Interpret Approach: Apply Warps to Template Mean (PCA components)

Toy Example Warp Compon’ts (+ Mean) Applied to Template Mean

Participant Presentations Xi Yang Multi-View Weighted Network Hang Yu Introduction to multiple kernel learning Zhipeng Ding Fast Predictive Simple Geodesic Regression