Elastic Maps for Data Analysis Alexander Gorban, Leicester with Andrei Zinovyev, Paris

Plan of the talk. INTRODUCTION: two paradigms for data analysis (statistics and modelling); clustering and K-means; Self Organizing Maps; PCA and local PCA.

Plan of the talk. 1. Principal manifolds and elastic maps: the notion of principal manifold (PM); constructing PMs: elastic maps; adaptation and grammars. 2. Application technique: projection and regression; maps and visualization of functions. 3. Implementation and examples.

Two basic paradigms for data analysis (diagram): data set, analysed either by Statistical Analysis or by Data Modelling.

Statistical Analysis: existence of a probability distribution; statistical hypothesis about data generation; verification/falsification of hypotheses about hidden properties of the data distribution.

Data Modelling: we should find the best model for data description; we know the universe of models; we know the fitting criteria; learning-error and generalization-error analysis is used for model verification. (Diagram: the universe of models.)

Example: Simplest Clustering

K-means algorithm. Centers y^(i), data points x^(j), classes K^(i). 1) Minimize U for given {K^(i)} (find centers); 2) minimize U for given {y^(i)} (find classes); 3) if {K^(i)} change, go to step 1.
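
A minimal sketch of this alternating scheme in Python/NumPy, assuming U is the usual sum of squared distances from the data points to the centres of their classes (the function name and its parameters are illustrative, not part of the talk):

```python
import numpy as np

def k_means(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)  # initial centers y^(i)
    labels = None
    for _ in range(n_iter):
        # minimize U for the current centers y^(i): assign each point to its class K^(i)
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                                   # classes K^(i) did not change: stop
        labels = new_labels
        # minimize U for the current classes K^(i): move each center to its class mean
        for i in range(k):
            members = X[labels == i]
            if len(members) > 0:
                centers[i] = members.mean(axis=0)
    return centers, labels
```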

Centers can be lines, manifolds, … with the same algorithm: 1st principal components + mean points for the classes instead of simple mean points.

SOM - Self Organizing Maps. The set of nodes is a finite metric space with distance d(N,M). 0) Map the set of nodes into dataspace, N -> f_0(N); 1) select a datapoint X at random; 2) find the nearest f_i(N) (N = N_X); 3) f_{i+1}(N) = f_i(N) + w_i(d(N, N_X))(X - f_i(N)), where w_i(d) (0 < w_i(d) < 1) is a decreasing cutting function. The node closest to X is moved the most in the direction of X, while other nodes are moved by smaller amounts depending on their distance from the closest node in the initial geometry.
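
A rough Python sketch of this update rule for a one-dimensional chain of nodes; the Gaussian neighbourhood and the exponential decay of the learning rate are assumed choices for the cutting function, not the slide's prescription:

```python
import numpy as np

def train_som(X, n_nodes=10, n_steps=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    nodes = X[rng.choice(len(X), size=n_nodes, replace=False)].astype(float)  # f_0(N)
    for t in range(n_steps):
        x = X[rng.integers(len(X))]                       # 1) random data point X
        bmu = np.linalg.norm(nodes - x, axis=1).argmin()  # 2) nearest node N_X
        lr = lr0 * np.exp(-t / n_steps)                   # learning rate decays with t
        sigma = sigma0 * np.exp(-t / n_steps)             # neighbourhood radius decays with t
        d = np.abs(np.arange(n_nodes) - bmu)              # grid distance d(N, N_X) on the chain
        w = lr * np.exp(-(d / sigma) ** 2)                # decreasing cutting function w_t(d)
        nodes += w[:, None] * (x - nodes)                 # 3) f_{t+1}(N) = f_t(N) + w (X - f_t(N))
    return nodes
```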

PCA and Local PCA. The covariance matrix (X_q are datapoints) is symmetric and positive semi-definite: C = (1/m) Σ_q (X_q − X̄)(X_q − X̄)^T. Principal components: eigenvectors of the covariance matrix. The local covariance matrix at a point y uses a positive cutting function w: C(y) = Σ_q w(|X_q − y|) (X_q − X̄_y)(X_q − X̄_y)^T / Σ_q w(|X_q − y|), where X̄_y is the corresponding weighted mean. The field of principal components: eigenvectors e_i(y) of the local covariance matrix. Trajectories of these vector fields present the geometry of the local data structure.
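
A sketch of local PCA in Python, assuming a Gaussian cutting function w with width h; ordinary PCA is recovered when the weight is constant:

```python
import numpy as np

def local_pca(X, y, h=1.0):
    w = np.exp(-np.sum((X - y) ** 2, axis=1) / (2 * h ** 2))  # positive cutting function w(|X_q - y|)
    w = w / w.sum()
    mean_w = (w[:, None] * X).sum(axis=0)                     # weighted local mean
    Xc = X - mean_w
    C = (w[:, None] * Xc).T @ Xc                              # local covariance matrix C(y)
    vals, vecs = np.linalg.eigh(C)                            # eigen-decomposition
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]                        # local principal components e_i(y)
```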

A top secret: the difference between the two basic paradigms is not crucial. (Almost) back to statistics. Quasi-statistics: 1) delete one point from the dataset, 2) refit, 3) analyse the error on the deleted point. The overfitting problem and smoothed data points (this is very close to non-parametric statistics).
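
A toy sketch of this quasi-statistical (leave-one-out) procedure, with a straight-line fit standing in for the model being verified:

```python
import numpy as np

def loo_errors(x, y, fit=lambda x, y: np.polyfit(x, y, 1)):
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i          # 1) delete point i from the dataset
        coeffs = fit(x[mask], y[mask])         # 2) fit the model on the remaining data
        pred = np.polyval(coeffs, x[i])        # 3) error for the deleted point
        errors.append(y[i] - pred)
    return np.array(errors)
```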

Non-linear data-mining methods (diagram): principal manifolds and the elastic maps framework, SVM, regression and approximation, supervised classification, K-means, SOM, clustering, multidimensional scaling, visualization, PCA, factor analysis, LLE, ISOMAP.

Finite set of objects in R^N: X_i, i = 1..m. Example: the IRIS database (table of petal height, petal width, sepal width, sepal height and SPECIES for Iris-setosa, Iris-versicolor and Iris-virginica samples).

Mean point. K-means clustering.

Principal Object

Principal Component Analysis: maximal dispersion (variance); 1st principal axis, 2nd principal axis.

Principal manifold

Statistical self-consistency: x = E(y | π(y) = x) for the principal manifold, where π is the projection map and π^(-1)(x) is the fibre over x (diagram).

What do we want? A non-linear surface (1D, 2D, 3D, …); smooth and not twisted; the data model is unknown; speed (time linear in Nm); uniqueness; a fast way to project datapoints.

Metaphor of elasticity (diagram): data points, graph nodes; energy terms U^(Y), U^(E), U^(R).

Constructing elastic nets (diagram): node y; edge endpoints E(0), E(1); rib nodes R(0), R(1), R(2).

Definition of elastic energy. For data points X_j, nodes y, edges with endpoints E(0), E(1) and ribs R(0), R(1), R(2) (R(1) is the central node), the energy is U = U^(Y) + U^(E) + U^(R): U^(Y) is the mean squared distance from the data points X_j to their closest nodes, U^(E) = λ Σ_edges |E(1) − E(0)|^2 penalizes stretching, and U^(R) = μ Σ_ribs |R(0) + R(2) − 2R(1)|^2 penalizes bending.
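
A sketch of this energy in Python, under the reconstruction above (a single stretching modulus lam and bending modulus mu; edges and ribs passed as tuples of node indices):

```python
import numpy as np

def elastic_energy(X, nodes, edges, ribs, lam, mu):
    # U^(Y): mean squared distance from data points to their closest nodes
    d2 = ((X[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    U_Y = d2.min(axis=1).mean()
    # U^(E): stretching energy of the edges
    U_E = lam * sum(((nodes[i] - nodes[j]) ** 2).sum() for i, j in edges)
    # U^(R): bending energy of the ribs (j is the central node R(1))
    U_R = mu * sum(((nodes[i] + nodes[k] - 2 * nodes[j]) ** 2).sum()
                   for i, j, k in ribs)
    return U_Y + U_E + U_R
```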

Elastic manifold

Global minimum and softening (figure: a sequence of map configurations obtained while softening the elasticity moduli).

Adaptive algorithms (figures): growing net; adaptive net; refining net; idea of scaling.

Scaling Rules. For a uniform d-dimensional net, the condition of constant energy density gives λ ~ s^((2−d)/d) and μ ~ r^((4−d)/d), where s is the number of edges and r is the number of ribs in a given volume.

Grammars of Construction: substitution rules. Examples: 1) for net refining: substitutions of columns and rows; 2) for growing nets: substitutions of elementary cells.

Substitutions in factors (diagram: graph factorization; a substitution rule acts as a transformation of one factor).

Substitutions in factors (diagram: the resulting graph transformation).

Transformation selection. A grammar is a list of elementary graph transformations. Energetic criterion: we select and apply the applicable elementary transformation that provides the maximal energy decrease (after a fitting step). The number of operations for this selection should be of order O(N) or less, where N is the number of vertices.
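
A sketch of the energetic criterion; apply, fit and energy are assumed callbacks supplied by the elastic-map implementation, not a fixed API:

```python
def best_transformation(graph, data, transformations, apply, fit, energy):
    """Try every applicable elementary transformation, refit, keep the best one."""
    best, best_energy = None, float("inf")
    for t in transformations:
        candidate = fit(apply(graph, t), data)   # apply the transformation, then a fitting step
        e = energy(candidate, data)
        if e < best_energy:                      # maximal energy decrease wins
            best, best_energy = candidate, e
    return best, best_energy
```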

Projection onto the manifold: either onto the closest node of the net, or onto the closest point of the manifold itself.
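
A sketch of both projection types for a net given by node positions and edges (the piecewise-linear case, where the manifold is approximated by the edge segments):

```python
import numpy as np

def project_to_closest_node(x, nodes):
    return nodes[np.linalg.norm(nodes - x, axis=1).argmin()]

def project_to_closest_point(x, nodes, edges):
    best, best_d = None, np.inf
    for i, j in edges:
        a, b = nodes[i], nodes[j]
        t = np.clip(np.dot(x - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        p = a + t * (b - a)                      # orthogonal projection onto the segment
        d = np.linalg.norm(x - p)
        if d < best_d:
            best, best_d = p, d
    return best
```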

Mapping distortions. Two basic types of distortion: 1) distant points are projected onto close ones (bad resolution); 2) close points are projected onto distant ones (bad topology compliance).

Instability of projection. The Best Matching Unit (BMU) for a data point is the closest node of the graph; BMU2 is the second-closest node. If BMU and BMU2 are not adjacent on the graph, then the data point is unstable. Gray polygons are the areas of instability. Numbers denote the degree of instability: how many nodes separate BMU from BMU2.
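
A sketch of this instability measure, assuming the graph is given as an adjacency list of node indices:

```python
import numpy as np
from collections import deque

def instability(x, nodes, adjacency):
    order = np.linalg.norm(nodes - x, axis=1).argsort()
    bmu, bmu2 = order[0], order[1]
    # breadth-first search for the graph distance between BMU and BMU2
    dist = {bmu: 0}
    queue = deque([bmu])
    while queue:
        n = queue.popleft()
        if n == bmu2:
            return dist[n] - 1        # number of nodes separating BMU from BMU2 (0 = adjacent)
        for m in adjacency[n]:
            if m not in dist:
                dist[m] = dist[n] + 1
                queue.append(m)
    return None                        # BMU and BMU2 lie in different components
```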

Colorings: visualize any function

Density visualization

Example: different topologies (diagram: data space R^N, map space R^2).

VIDAExpert tool and elmap C++ package

Regression and principal manifolds (diagram: regression function F(x) vs. principal component).

Projection and regression. Data with gaps are modelled as affine manifolds; the nearest point on the manifold provides the optimal filling of gaps.
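
A sketch of gap filling by projection, with a linear PCA manifold standing in for the principal manifold and simple alternating projection as an assumed solver (the missing coordinates are left free, then repeatedly replaced by their reconstruction):

```python
import numpy as np

def fill_gaps(X, n_components=2, n_iter=50):
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    X[missing] = np.nanmean(X, axis=0)[np.where(missing)[1]]   # initial guess: column means
    for _ in range(n_iter):
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        proj = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mean
        X[missing] = proj[missing]                              # update only the gaps
    return X
```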

Iterative error mapping. For a given elastic manifold and a datapoint x^(i), the error vector is Δx^(i) = x^(i) − P(x^(i)), where P(x) is the projection of the data point onto the manifold. The errors form a new dataset, and we can construct another map, obtaining a regular model of the errors. So we have a first map that models the data itself, a second map that models the errors of the first model, and so on. Every point x in the initial data space is modelled by the vector P_1(x) + P_2(x − P_1(x)) + …
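
A sketch of iterative error mapping, with rank-k linear PCA models standing in for the elastic maps: the first model approximates the data, the second models the residuals of the first, and so on.

```python
import numpy as np

def fit_pca_model(X, n_components):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    V = Vt[:n_components]
    def project(Y):                                  # P(x): projection onto this linear model
        return (Y - mean) @ V.T @ V + mean
    return project

def iterative_error_maps(X, n_components=1, n_maps=3):
    models, residual = [], X
    for _ in range(n_maps):
        P = fit_pca_model(residual, n_components)
        models.append(P)
        residual = residual - P(residual)            # the errors form a new dataset
    return models                                    # x is modelled by P1(x) + P2(x - P1(x)) + ...
```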

Image skeletonization or clustering around curves

Approximation of molecular surfaces

Application: economic data (map colourings: gross output, density, profit, growth rate).

Medical table: 1700 patients with myocardial infarction (maps: lethal cases; patient map, density).

Medical table: 1700 patients with myocardial infarction, 128 indicators (map colourings: age, number of infarctions in anamnesis, stenocardia functional class).

Codon usage in all genes of one genome (Escherichia coli, Bacillus subtilis). Clusters: majority of genes, highly expressed genes, foreign genes, hydrophobic genes.

Golub's leukemia dataset: 3051 genes, 38 samples (ALL/B-cell, ALL/T-cell, AML). Map of genes (labels: ALL sample, AML sample; colourings: vote for ALL, vote for AML, used by T. Golub, used by W. Lie).

Golub's leukemia dataset, map of samples (colourings: AML, ALL/B-cell, ALL/T-cell, density; genes: Cystatin C, Retinoblastoma binding protein P48, CA2 carbonic anhydrase II, X-linked Helicase II).

Useful links: principal components and factor analysis; principal curves and surfaces; Self Organizing Maps; elastic maps.

Several names. K-means clustering: MacQueen, 1967; SOM: T. Kohonen, 1981; Principal curves: T. Hastie and W. Stuetzle, 1989; Elastic maps: A. Gorban, A. Zinovyev, A. Rossiev, 1998; Polygonal models for principal curves: B. Kégl, 1999; Local PCA for principal curves construction: J. J. Verbeek, N. Vlassis, and B. Kröse, 2000.

Two of them are the authors.

Thank you for your attention! Questions?