1 Introduction

We present here a brief study of the paper “Laplacian Eigenmaps for Dimensionality Reduction and Data Representation” \cite{Belkin_2003}, in which the authors introduce a new machine learning algorithm for dimensionality reduction and data representation based on Laplacian Eigenmaps (LE). The paper was published in 2003 by Mikhail Belkin and Partha Niyogi, two professors at the University of Chicago, USA. It is well known internationally and has been cited about 5700 times so far.
The paper first reviews the algorithms for dimensionality reduction known at that time, then presents the new LE algorithm. After that, it demonstrates the algorithm's connection to spectral clustering, and finally it concludes with some examples.

2 Context of the work

In many disciplines of artificial intelligence, information retrieval, and data mining, one has to solve problems where the sample data sets live in a very high-dimensional space. One of the main challenges in machine learning is the development of an accurate representation for this type of data, whose treatment is usually complex, so that it can be represented in a way that reduces this complexity. The paper at hand considers the problem of constructing a representation for data lying on a low-dimensional manifold embedded in a high-dimensional space, which will not only improve the computational performance (in terms of time and memory) but will also help to better visualize the data.

3 Positioning

The dimensionality reduction problem has a long history. Among the classical algorithms, we should mention the most popular multivariate statistical algorithm, Principal Component Analysis (PCA) \cite{Abdi_2010}, which is used in almost all scientific disciplines and is also likely the oldest multivariate technique for dimensionality reduction. PCA finds the low-dimensional embedding of the data points that best preserves their variance as measured in the high-dimensional input space. Another famous algorithm is multidimensional scaling (MDS), which finds an embedding that preserves the inter-point distances; it is equivalent to PCA when those distances are Euclidean.
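As a brief reminder of these two classical methods (standard textbook formulations, not reproduced from the reviewed paper): for centered data points $x_1, \dots, x_n \in \mathbb{R}^D$, PCA diagonalizes the sample covariance matrix and projects each point onto the top $d$ eigenvectors,
\[
C = \frac{1}{n}\sum_{i=1}^{n} x_i x_i^{\top}, \qquad C v_k = \lambda_k v_k \;\; (\lambda_1 \ge \dots \ge \lambda_D), \qquad y_i = \big(v_1^{\top} x_i, \dots, v_d^{\top} x_i\big),
\]
while classical MDS double-centers the matrix of squared inter-point distances $D^{(2)}$, computing $B = -\frac{1}{2} H D^{(2)} H$ with $H = I - \frac{1}{n}\mathbf{1}\mathbf{1}^{\top}$, and embeds the points using the top eigenvectors of $B$ scaled by $\sqrt{\lambda_k}$.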
However, consider the case of a data set lying on a low-dimensional manifold embedded in a high-dimensional space, as in Fig. 1. Since PCA and MDS only capture linear (Euclidean) structure, while distances along such a manifold can differ greatly from the ambient Euclidean distances, the methods above will be inefficient in most cases.
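To make this limitation concrete, here is a minimal sketch (our own illustration, not code from the reviewed paper) applying PCA to a one-dimensional spiral embedded in three dimensions; being a linear projection, PCA cannot "unroll" the curve:

import numpy as np

def pca(X, d):
    """Project the rows of X onto the top-d principal components."""
    Xc = X - X.mean(axis=0)               # center the data
    C = np.cov(Xc, rowvar=False)          # D x D sample covariance
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :d]         # top-d eigenvectors
    return Xc @ top

# Toy manifold: a 1-D spiral embedded in 3-D (plus a noise axis).
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
X = np.column_stack([t * np.cos(t), t * np.sin(t), rng.standard_normal(500)])

Y = pca(X, 2)  # a linear projection: far-apart points on the spiral overlap

Any linear map collapses points that are far apart along the spiral onto nearby images, which is precisely the failure mode that motivates manifold-based methods such as LE.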