Understanding the Literature

  • Category: Logic & Programming
  • Document type: Assignment
  • Level: Undergraduate


Introduction

In the 21st century, as technological advances continue to emerge, several manifold learning algorithms have been proposed, including the Laplacian Eigenmap (LE), Locally Linear Embedding (LLE), ISOMAP and Locality Preserving Projection (LPP). The main aim of these algorithms is to discover the meaningful low-dimensional structure of the data space. The aim of this report is to explore and analyse the Locality Preserving Projection method proposed by He, X. and Niyogi, P. (2004) of the University of Chicago in their article, Locality Preserving Projections. The analysis will cover the content of the article, its novelty (innovation), technical quality, X-factor, quality of presentation and the application of the research work.

In terms of content and title, the title of the article provides a full overview of what the research is about. However, it is broad in the sense that it does not state the objective. Upon reading the article, a reader realises that Locality Preserving Projections is a solution to an existing problem in dimensionality reduction. A more straightforward title would have summarised the article in one phrase, such as "Locality Preserving Projections in Dimensionality Reduction". Such a title would suit this article because of the high quality of its work.

The authors introduce Locality Preserving Projections (LPP) and define them as linear projective maps that arise from solving a variational problem which optimally preserves the neighbourhood structure of the data set, giving a linear dimensionality reduction method for information processing. The algorithm is introduced as a solution to the dimensionality reduction problem, where the authors give the example of a collection of n-dimensional real data vectors drawn from an unknown probability distribution.

The problem statement

According to the authors, LPP first builds a graph incorporating the neighbourhood information of the data set; then, employing the notion of the graph Laplacian, one computes a transformation matrix that maps the data points to a subspace. This linear transformation optimally preserves local neighbourhood information in a certain sense. The representation map generated by the algorithm can be viewed as a linear discrete approximation to a continuous map that arises naturally from the geometry of the manifold. The authors present the problem statement of linear dimensionality reduction as follows: given a set x₁, x₂, …, xₘ in Rⁿ, find a transformation matrix A that maps these m points to points y₁, y₂, …, yₘ in Rˡ (l ≪ n), such that yᵢ "represents" xᵢ, where yᵢ = Aᵀxᵢ. The authors are particularly interested in the practically important special case where x₁, x₂, …, xₘ ∈ M, and M is a nonlinear manifold embedded in Rⁿ.
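
The setup just described, m points reduced from n to l dimensions by a linear map, can be sketched in a few lines of numpy. The sizes below (n = 10, l = 2, m = 100) are illustrative choices, not values from the paper, and the random A stands in for the transformation matrix that LPP will later compute.

```python
import numpy as np

# Toy illustration of the problem statement: y_i = A^T x_i maps each
# n-dimensional point x_i to an l-dimensional representative y_i.
rng = np.random.default_rng(0)
n, l, m = 10, 2, 100                 # assumed sizes, for illustration only
X = rng.normal(size=(n, m))          # columns are the data points x_1 .. x_m
A = rng.normal(size=(n, l))          # some transformation matrix (LPP chooses it)
Y = A.T @ X                          # low-dimensional representatives y_1 .. y_m
assert Y.shape == (l, m)
```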

Locality Preserving Projections Implementation

According to the authors, solving the problem statement with the LPP algorithm takes three steps, the first of which is the construction of the adjacency graph. In this step, G denotes a graph with m nodes, and an edge is put between nodes i and j if xᵢ and xⱼ are "close". There are two variations: ε-neighbourhoods [parameter ε ∈ R], where nodes i and j are linked by an edge if ║xᵢ − xⱼ║² < ε, the norm being the usual Euclidean norm in Rⁿ; and k nearest neighbours [parameter k ∈ N], where nodes i and j are linked by an edge if i is among the k nearest neighbours of j or j is among the k nearest neighbours of i.
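
The two adjacency-graph variants above can be sketched as follows. This is a reviewer's illustration, not the authors' code; the function name and interface are assumptions, and the k-nearest-neighbours variant is symmetrised with a logical OR, matching the "i is among the neighbours of j, or vice versa" rule.

```python
import numpy as np

def adjacency(X, eps=None, k=None):
    """X has shape (n, m); returns a boolean m-by-m adjacency matrix."""
    # pairwise squared Euclidean distances ||x_i - x_j||^2
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    m = X.shape[1]
    if eps is not None:
        # epsilon-neighbourhoods: link i and j if ||x_i - x_j||^2 < eps
        adj = d2 < eps
        np.fill_diagonal(adj, False)
    else:
        # k nearest neighbours, symmetrised so the graph is undirected
        adj = np.zeros((m, m), dtype=bool)
        for i in range(m):
            adj[i, np.argsort(d2[i])[1:k + 1]] = True   # skip the point itself
        adj = adj | adj.T
    return adj
```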

The second step involves choosing the weights, where W is a sparse symmetric m × m matrix in which Wᵢⱼ holds the weight of the edge joining vertices i and j, and 0 if no such edge exists. The two variations in this step are the heat kernel [parameter t ∈ R], where, if nodes i and j are linked, one puts Wᵢⱼ = e^(−║xᵢ − xⱼ║²/t); and the simple-minded choice [no parameter], where Wᵢⱼ = 1 if and only if an edge links vertices i and j. The third step involves Eigenmaps, where the eigenvalues and eigenvectors are computed for the generalized eigenvector problem XLXᵀa = λXDXᵀa, where D is the diagonal matrix whose entries are the column sums of W, Dᵢᵢ = Σⱼ Wⱼᵢ, and L = D − W is the Laplacian matrix.
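
Steps two and three can be sketched together, given a precomputed adjacency matrix. This is a hedged illustration rather than the authors' implementation: the small ridge term and the Cholesky reduction of the generalized eigenproblem to a standard symmetric one are implementation conveniences I have added, not part of the paper.

```python
import numpy as np

def lpp_directions(X, adj, t=1.0, dim=2):
    """X: (n, m) data matrix; adj: symmetric boolean (m, m) adjacency matrix.
    Returns the dim projection directions (columns) with smallest eigenvalues."""
    n, m = X.shape
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.where(adj, np.exp(-d2 / t), 0.0)     # heat-kernel weights on linked nodes
    D = np.diag(W.sum(axis=1))                  # diagonal degree matrix
    L = D - W                                   # graph Laplacian
    A_mat = X @ L @ X.T
    B_mat = X @ D @ X.T + 1e-9 * np.eye(n)      # tiny ridge keeps B positive definite
    # Reduce A a = lambda B a to a standard symmetric problem via Cholesky B = C C^T:
    # with a = C^{-T} v, the problem becomes (C^{-1} A C^{-T}) v = lambda v.
    C = np.linalg.cholesky(B_mat)
    Ci = np.linalg.inv(C)
    S = Ci @ A_mat @ Ci.T
    vals, vecs = np.linalg.eigh((S + S.T) / 2)  # eigenvalues in ascending order
    return Ci.T @ vecs[:, :dim]                 # smallest eigenvalues preserve locality
```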

Justification of the LPP algorithm

In order to justify their proposed LPP algorithm, the authors provide three arguments: Optimal Linear Embedding, Geometrical Justification and Kernel LPP. In Optimal Linear Embedding, the authors base their argument on standard spectral graph theory. Given a data set and a weighted graph G = (V, E), they consider the problem of mapping the weighted graph to a line so that linked points stay as close to each other as possible. They denote such a map by y = (y₁, y₂, …, yₘ)ᵀ. According to them, a reasonable criterion for choosing a good map is to minimize the objective function Σᵢⱼ (yᵢ − yⱼ)² Wᵢⱼ under suitable constraints. With the authors' choice of weights Wᵢⱼ, this objective function incurs a heavy penalty if the neighbouring points xᵢ and xⱼ are mapped far apart. Minimizing it is therefore an attempt to ensure that if xᵢ and xⱼ are close, then yᵢ and yⱼ are close as well. The authors then work through the objective function and show how, after imposing the constraints, it reduces to a minimization problem.
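
The key algebraic step behind this reduction is the standard identity Σᵢⱼ (yᵢ − yⱼ)² Wᵢⱼ = 2yᵀLy with L = D − W, which connects the locality-preserving objective to the graph Laplacian. A quick numerical check of the identity (with arbitrary symmetric weights, purely for illustration):

```python
import numpy as np

# Check sum_ij (y_i - y_j)^2 W_ij = 2 y^T L y for a small symmetric W.
rng = np.random.default_rng(1)
m = 6
W = rng.random((m, m))
W = (W + W.T) / 2                   # symmetric weights, as in LPP
y = rng.normal(size=m)
D = np.diag(W.sum(axis=1))          # degree matrix
L = D - W                           # graph Laplacian
lhs = sum((y[i] - y[j]) ** 2 * W[i, j] for i in range(m) for j in range(m))
rhs = 2 * (y @ L @ y)
assert abs(lhs - rhs) < 1e-9
```

Expanding (yᵢ − yⱼ)² and using the symmetry of W gives 2Σᵢ yᵢ²Dᵢᵢ − 2Σᵢⱼ yᵢyⱼWᵢⱼ = 2yᵀ(D − W)y, which is the step the check confirms.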

In the Geometrical Justification, the authors explain that the Laplacian matrix of a finite graph, L = D − W (Chung, 1997, Spectral Graph Theory), is analogous to the Laplace Beltrami operator ℒ on compact Riemannian manifolds. According to them, while the Laplace Beltrami operator of a manifold is produced by the Riemannian metric, for a graph it originates from the adjacency relation. The authors let M be a smooth, compact, d-dimensional Riemannian manifold; if the manifold is embedded in Rⁿ, the Riemannian structure on the manifold is induced by the standard Riemannian structure on Rⁿ. They then search for a map f from the manifold to the real line such that points near each other on the manifold get mapped near each other on the line, assuming that f is twice differentiable. The authors then quote a worked-out optimization problem from Belkin and Niyogi (2001, 585).

In the Kernel LPP, the authors assume that the Euclidean space Rⁿ is mapped into a Hilbert space H through a nonlinear mapping function Ф: Rⁿ → H. Using this mapping, the kernel function and simple algebra, they formulate an eigenvector problem in the Hilbert space. The authors then employ Kernel LPP to solve the problem and show that it gives the same results as the Laplacian Eigenmaps.
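
Following the standard kernel-trick pattern, the Hilbert-space eigenproblem can be written in terms of the Gram matrix K of the kernel function (a sketch from this reviewer's reading; the notation is assumed, with the coefficient vector α playing the role of the projection vector a):

```latex
K L K \boldsymbol{\alpha} = \lambda \, K D K \boldsymbol{\alpha},
\qquad K_{ij} = \Phi(x_i) \cdot \Phi(x_j)
```

Here L and D are the same Laplacian and degree matrices as in the linear case, which is why Kernel LPP recovers the Laplacian Eigenmaps solution.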

Data analytics methods

In the course of carrying out their research, the authors employed various data analytics techniques, such as association rule learning, in an experiment on 2-dimensional data visualization using the Multiple Features Database of Dutch utility maps to examine the relationships between dimensionality reduction algorithms (Laplacian Eigenmaps, LPP and PCA). The data set consists of handwritten digits ('0'–'9'), which are mapped into 2-dimensional space using the Laplacian Eigenmaps, LPP and PCA algorithms. In this experiment the authors also used clustering to determine which algorithm is most suitable for dimensionality reduction, here aiming to show that the LPP algorithm is the best; clustering was also used in mapping the data points into the 2-dimensional space.

The authors also used summarization to provide a more compact representation of the data set in the manifold of face images, applying LPP to a data set of 1965 face images for visualization. Clustering was again used so that representative face images could be displayed next to their data points in the space. The authors conducted a further experiment on face recognition, using association rule learning, anomaly detection and clustering, on the Yale face database, to which they applied the LPP algorithm. Association rule learning was used to identify relationships between variables, which include facial expressions, lighting conditions, and the presence or absence of glasses. Clustering was used for cropping the final images, aligning the eyes in a similar position and selecting the best algorithm. Anomaly detection was used to identify the error rates. To form the training set, six labelled images of each individual were captured. This training set, according to the authors, was used to learn the projections. The testing samples were then projected into the reduced space, and recognition was carried out using a nearest neighbour classifier.
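
The recognition pipeline described above, learn a projection from labelled training images, project both sets, then classify each test sample by its nearest training neighbour, can be sketched as follows. This is a hedged toy version: a random matrix stands in for the learned LPP projection, and the data are small synthetic vectors, not the Yale database.

```python
import numpy as np

# Toy recognition pipeline: project, then nearest-neighbour classify.
rng = np.random.default_rng(0)
n, l = 50, 5                                   # "image" dimension, reduced dimension
train = rng.normal(size=(n, 12))               # 12 synthetic training images
labels = np.repeat(np.arange(6), 2)            # 6 subjects, 2 images each (toy numbers)
A = rng.normal(size=(n, l))                    # placeholder for the learned projection
Y_train = A.T @ train                          # project the training set
test = train[:, ::2] + 0.01 * rng.normal(size=(n, 6))   # noisy copies of known faces
Y_test = A.T @ test                            # project the testing samples
# nearest-neighbour classifier in the reduced space
d2 = ((Y_test[:, :, None] - Y_train[:, None, :]) ** 2).sum(axis=0)
pred = labels[np.argmin(d2, axis=1)]
assert (pred == labels[::2]).all()             # tiny noise: every test face recognised
```

With real data the error rate is simply the fraction of test samples for which `pred` disagrees with the true label, which is the quantity the authors tabulate.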

Experimental results

In this section the authors discuss different application areas of the LPP algorithm, giving two synthetic examples to illustrate how LPP works. The first example is simply synthetic: the authors provide figure 1 with four plots, where the first and third plots display the PCA results while the second and fourth show the LPP results. In the authors' view, the example shows that LPP is insensitive to outliers and more powerful in discrimination than PCA.

The authors provide a second figure, figure 2, which represents the mapping of the handwritten digits ('0'–'9') into 2-dimensional space. The figure on the left represents the Laplacian Eigenmaps of the whole set of digit images, the centre figure displays the LPP results, and the figure on the right displays the PCA results. Each different colour in the figures corresponds to a digit. For the face images manifold experiment, the authors provide figure 3, which shows the face images mapped onto a 2-dimensional plane.

According to the authors, the mapping of the image space to a low-dimensional space using the LPP method is linear. Several representative faces are displayed in different parts of the space next to their data points. To some extent, the proposed LPP fails to fully detect the nonlinear manifold structure of the face images. The face images in the space are clearly segmented into two parts, the right part displaying open-mouth faces and the left part closed-mouth faces. According to the authors, this is because the LPP algorithm implicitly emphasises the natural clusters in the data when attempting to preserve the neighbourhood structure of the embedding.

In the face recognition experiment, the authors provide table 1, which displays the face recognition results on the Yale database as error rates. Overall, the performance of LDA, LPP and PCA varies with the number of dimensions. The authors concluded that LPP performs better than both LDA and PCA.

Further research implications

According to the authors, their proposed linear Locality Preserving Projections algorithm for dimensionality reduction is based on the same variational principle that gives rise to the Laplacian Eigenmap, and therefore has similar locality-preserving properties. Their algorithm's main benefit over other recent nonparametric methods for global nonlinear dimensionality reduction is that its eigenproblem scales with the dimensionality of the data points rather than with their number; for bulky data sets, the savings in memory and run time can be massive. Its performance advantage over Principal Component Analysis is illustrated through several experiments. Even though their method is a linear algorithm, it is able to discover the nonlinear structure of the data manifold.

Innovation

Innovation can be explained as the act of introducing something new, or simply the act of innovating. The He, X. and Niyogi, P. (2004) research paper can be identified as highly innovative because their work proposes a new linear algorithm for dimensionality reduction. They made use of existing theories and methods to formulate a new and improved method for solving the dimensionality reduction problem.

Another reason why the work of He, X. and Niyogi, P. (2004) is highly innovative is that the proposed algorithm is not just another common problem-solving algorithm but one that can be applied in state-of-the-art technology such as facial recognition, which has been, and is still being, implemented by many companies, governments and individuals for security and surveillance. Even though this technology is not new and has been implemented using other algorithms such as PCA and LDA, LPP provides better functionality, detects errors, shares many properties of nonlinear data representation techniques such as the Laplacian Eigenmap, is more computationally tractable and outperforms the existing algorithms used in dimensionality reduction.

The authors view the LPP algorithm as innovative from several viewpoints: the LPP algorithm is linear and thus fast and suitable for practical applications; it can be applied to any new data point because it is defined everywhere; it can be carried out either in the original space or in the reproducing kernel Hilbert space (RKHS) into which the data points are mapped; and lastly, the LPP maps are designed to minimize a different objective criterion from the classical linear techniques. Because of these features, the authors expect LPP-based techniques to be viewed as an alternative to Principal Component Analysis (PCA) in pattern classification, exploratory data analysis and information retrieval.

The paper is also highly innovative because it has been cited by other books and articles, even if not by many, such as: Zheng, Shao and Song (2010, 151); Chen (2012, 56); Assi, Labelle and Cheriet (2014, 85); Assi, Labelle and Cheriet (2014, 292); Foytik (2011, 5); Min (2013, 7); and Rui (2013, 3). The fact that all these authors chose the He, X. and Niyogi, P. (2004) paper as a reference for their work indicates that they considered it highly innovative and of high quality. This is reinforced by the fact that some of the articles are peer reviewed and accepted by international standards bodies, while others have been published as doctoral dissertations.

Technical Quality

The work done in this paper by He, X. and Niyogi, P. (2004) is of high quality. The authors used existing dimensionality reduction methods to create the new LPP algorithm. They proposed a problem statement and, through a step-by-step procedure, solved the stated dimensionality reduction problem using the proposed algorithm. Beyond that, the authors went on to justify that their algorithm is functional through Optimal Linear Embedding, Geometrical Justification and Kernel LPP, in each case providing an example of a dimensionality reduction problem and step-by-step guidance on how to solve it.

He, X. and Niyogi, P. (2004) explained their equations throughout their work, showing their workings. This is transparency, which is a main attribute of high quality work. An independent researcher working in the same field of information processing could replicate any section of this paper, provided they understand basic mathematical symbols, equations and operators and have the appropriate software.

When conducting their research, He, X. and Niyogi, P. (2004) obtained their data from primary and secondary sources of information. As primary sources, the authors used raw facts and figures, formulating their own objective functions and gathering their own data for the experiments. As secondary sources, the authors referenced eight peer-reviewed articles and publications, selected on the criterion of their usefulness and applicability to the research. Referencing the work from various sources of information increases the quality of the paper.

In any research paper, the quality of work can be judged by the number of experiments carried out before arriving at a conclusion. High quality research conducts more than one experiment for verification purposes in order to reach an acceptable conclusion. In their work, He, X. and Niyogi, P. (2004) conducted several experiments: face recognition, where they applied the LPP, LDA and PCA techniques; the face images manifold experiment, where they applied LPP to face images; and a 2-dimensional visualization experiment employing the Multiple Features Database. These experiments helped the authors arrive at an acceptable conclusion, namely that LPP outperforms the other techniques.

X-factor

The paper's X-factor can be explained in terms of its appealing results and equations, and the way the authors have solved the defined problem statement. In terms of the appeal of the results, any individual who views the report, whether an expert in the field or not, is drawn to the results section, where the results are presented in appealing figures 1 (simply synthetic example), 2 (2-D data visualization) and 3 (2-D data visualization).

Figure 1

Figure 2

Even though figures 1 and 2 are not as large as figure 3, their colourful appearance attracts an individual's attention and invites the question of what the figures are about. The most captivating figure is figure 3, with its human faces; it makes a reader eager to know why there are so many faces, apparently similar yet different, in one figure. The many equations dominating the first pages of the paper also appeal to individuals who have dealt with linear and nonlinear equations, filling one's curiosity about what the equations mean and which problem statement they solve. The paper's ability to appeal to any individual regardless of technical background, because of its visual presentation, indicates that it has an X-factor.

In terms of addressing the problem statement, the paper is interesting and could start a discussion in class because the authors explain the entire procedure followed in solving the problem, from the formulation of the objective function using the LPP algorithm through to its solution. The authors also justify the LPP algorithm using worked examples. This can engage students in a creative discussion in which they try to work through the problem statement using the paper's step-by-step equations and solutions as a guide. In the end, students can gain knowledge of how to formulate objective functions and how to minimize them. This qualifies as an X-factor.

Presentation

The paper's presentation is of a high standard in terms of depth of argument, clarity and style. In the depth of their arguments, the authors propose the new LPP algorithm and, to justify its applicability and functionality, present a problem statement which they solve using it. They provide a detailed description of how to apply the LPP algorithm to the problem statement and how to formulate an objective function, which they then solve. The authors also provide three ways of justifying their algorithm before conducting experiments to demonstrate its practical application.

As for clarity, the way the authors describe and show the procedures for formulating an objective function from the problem statement is clear enough that a reader or researcher with a technical background could replicate the work, owing to its transparency. The application of the LPP algorithm in the experiments also gives readers clarity regarding its practical applicability and validity in pattern classification applications, information retrieval and exploratory data analysis.

The verbal presentation style is also easy to understand. Where the authors used technical terms that required an explanation, they provided a reference for further reading; other technical terms concerning the LPP algorithm were explained directly. Where technical jargon could be avoided, the authors used general language, which made the paper concise and clear. In the visual representation of the figures, the authors provided detailed explanations, referenced the figures and annotated them appropriately.

Suggested improvements

No research work, however high the standard of its presentation, is 100 % perfect. In this case, the He, X. and Niyogi, P. (2004) paper was not easy to follow, and the suggested improvements include the following. When writing a research report, it is usually important to explain what every section entails by providing a brief description; in this paper, the authors did not provide such a description after some section titles. This makes the report hard to understand for an individual without prior knowledge of the proposed algorithm. For example, in the justification section, the authors did not briefly describe what the section entails.

Another suggestion concerns the arrangement of the overall report. There is a standard outline that research reports follow, in which the author guides the reader through tasks and activities in chronological order. In their report, Locality Preserving Projections, the authors should not have listed the different perspectives on why the new LPP algorithm is interesting in the introduction, because at that point the reader has not yet fully comprehended what the algorithm entails, how it functions and how it outperforms other dimensionality reduction techniques. This part should instead come after the experimental results and just before the conclusion. In addition, the authors did not separate the experiments from the experimental results or arrange them appropriately: results are explained before the experiments section, and in some places experiments and results are combined. The authors should have separated the two, with the experiments section appearing before the experimental results section.

Application

This paper's research work is not limited to future applications, because its main aim was to propose a new LPP algorithm that can be applied to dimensionality reduction. As technological advances continue, huge amounts of data will keep accumulating. Bearing in mind that information is power and that data is becoming a major trading commodity, with organizations dealing in the sale of data, new methods for data and information processing will be required. The LPP algorithm can therefore be used by data companies for information processing.

Another domain where the research can be applied is marketing, where the LPP algorithm can be used for pattern recognition. Pattern recognition here refers to consumer preferences: the algorithm identifies a consumption pattern from the products bought by a certain customer, and the pattern is then used to predict which products can be marketed to that consumer.

The LPP algorithm can also be applied in the design of facial recognition software for security purposes, where it uses pattern recognition to identify individuals, as in the case where a government puts the profiles of wanted criminals in an online database connected to CCTV surveillance. If the CCTV cameras capture a criminal's face, the LPP algorithm integrated into the police database automatically identifies the individual from his facial patterns and sounds an alarm.

The LPP algorithm can also be applied in the design of modern library and archive management systems. Once implemented, such a system would allow an individual to retrieve information (documents, video and audio) through an indexing scheme that enables quick retrieval, since the LPP design preserves local structure.

Conclusion

In general, the article by He, X. and Niyogi, P. (2004) qualifies as being of a high standard. It proposed an interesting new linear LPP algorithm for dimensionality reduction in information processing; for this reason, the paper was considered highly innovative, well presented and well proven. The information presented in the article is helpful, and individuals researching dimensionality reduction algorithms can be recommended to seek out this article for reference purposes.

References

Assi, K.C., Labelle, H. and Cheriet, F., 2014. Modified large margin nearest neighbor metric learning for regression. IEEE Signal Processing Letters, 21(3), pp.292-296.

Assi, K.C., Labelle, H. and Cheriet, F., 2014. Statistical model based 3D shape prediction of postoperative trunks for non-invasive scoliosis surgery planning. Computers in biology and medicine, 48, pp.85-93.

Belkin, M. and Niyogi, P., 2001, December. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS (Vol. 14, pp. 585-591).

Chen, Z., 2012. Discovery of informative and predictive patterns in dynamic networks of complex systems. North Carolina State University.

Foytik, J.D., 2011. Locally Tuned Nonlinear Manifold for Person Independent Head Pose Estimation (Doctoral dissertation, University of Dayton).

He, X. and Niyogi, P., 2004. Locality preserving projections. In Advances in Neural Information Processing Systems (Vol. 16). Cambridge, MA: MIT Press.

Min, R., 2013. Reconnaissance de visage robuste aux occultations (Doctoral dissertation, Paris, ENST).

Rui, M.I.N., 2013. Doctorat ParisTech (Doctoral dissertation, TELECOM ParisTech).

Zheng, F., Shao, L. and Song, Z., 2010, July. Eigen-space learning using semi-supervised diffusion maps for human action recognition. In Proceedings of the ACM International Conference on Image and Video Retrieval (pp. 151-157). ACM.