#### An Intro to the Laplacian

The Laplace operator was first applied to the study of celestial mechanics (the motion of objects in outer space) by Pierre-Simon de Laplace, and is named in his honour.

The Laplace operator has since been used to describe many different phenomena, from electric potentials, to the diffusion equation for heat and fluid flow, to quantum mechanics. It has also been recast into discrete space, where it has been used in applications such as image processing and spectral clustering.

In this tutorial, you will get a brief introduction to the Laplacian.

After completing this tutorial, you will know:

- The definition of the Laplace operator and how it relates to divergence
- How the Laplace operator relates to the Hessian
- How the continuous Laplace operator has been recast into discrete space and applied to image processing and spectral clustering

__Tutorial Overview__

This tutorial is divided into two parts; they are:

- The Laplacian
  - The concept of divergence
  - The continuous Laplacian
- The discrete Laplacian

__Prerequisites__

For this tutorial, we assume that you are already familiar with:

1. The gradient of a function
2. Higher-order derivatives
3. Multivariate functions
4. The Hessian matrix

__The Laplacian__

The Laplace operator (or Laplacian, as it is usually called) is the divergence of the gradient of a function.

To understand the previous statement better, it is best that we start by understanding the concept of divergence.

__The Concept of Divergence__

Divergence is a vector operator that acts on a vector field. The latter can be thought of as representing the flow of a liquid or gas, where each vector in the vector field represents the velocity vector of the moving fluid.

Roughly speaking, divergence quantifies the tendency of the fluid to collect or disperse at a point.

Using the nabla (or del) operator, ∇, the divergence is denoted by ∇ · and produces a scalar value when applied to a vector field, measuring the quantity of fluid at each point. In Cartesian coordinates, the divergence of a vector field, **F** = (f, g, h), is given by:

∇ · **F** = ∂f/∂x + ∂g/∂y + ∂h/∂z

Although computing the divergence involves applying the divergence operator (rather than a multiplication operation), the dot in its notation is reminiscent of the dot product, which involves multiplying the components of two equal-length sequences (in this case, ∇ and **F**) and summing the resulting terms.
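As a quick check of this intuition, we can approximate the divergence numerically. The sketch below (assuming NumPy is available; the field and grid are made-up choices for illustration) samples the outward-pointing field **F** = (x, y) and applies central finite differences; its divergence, ∂x/∂x + ∂y/∂y = 2, is recovered everywhere:

```python
import numpy as np

# Sample the vector field F = (x, y) on a 2D grid; this field flows
# outward from the origin, and its divergence is 1 + 1 = 2 everywhere.
n = 50
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
Fx, Fy = X, Y  # the two components of the vector field

# Approximate dFx/dx + dFy/dy with central finite differences.
div = np.gradient(Fx, xs, axis=0) + np.gradient(Fy, xs, axis=1)

print(np.allclose(div, 2.0))  # True: the fluid disperses at every point
```

A positive value at a point indicates a source (fluid spreading out), a negative value a sink.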

__The Continuous Laplacian__

Let’s return to the definition of the Laplacian.

Recall that the gradient of a 2D function, f, is given by:

∇f = (∂f/∂x, ∂f/∂y)

Then the Laplacian (that is, the divergence of the gradient) of f can be defined as the sum of the unmixed second partial derivatives:

∇²f = ∂²f/∂x² + ∂²f/∂y²

It can, equivalently, be considered as the trace (tr) of the function’s Hessian, H(f). The trace is defined as the sum of the elements on the main diagonal of a square n×n matrix, which in this case is the Hessian, and it is also equal to the sum of the matrix’s eigenvalues. Recall that the Hessian matrix contains the own (or unmixed) second partial derivatives on its diagonal:

∇²f = tr(H(f))
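To make the trace connection concrete, here is a small numerical sketch (NumPy assumed; the function f(x, y) = x² + 3y², the evaluation point, and the step size h are all hypothetical choices). The trace of a finite-difference Hessian matches the sum of the unmixed second partial derivatives, 2 + 6 = 8:

```python
import numpy as np

def hessian_fd(f, x, y, h=1e-4):
    # Central finite-difference approximation of the 2x2 Hessian of f.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

f = lambda x, y: x**2 + 3 * y**2  # unmixed second partials: 2 and 6

H = hessian_fd(f, 0.5, -0.2)
laplacian = np.trace(H)           # trace = sum of the diagonal elements

print(round(laplacian, 3))  # 8.0, at any point (x, y)
```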

An important property of the trace of a matrix is its invariance to a change of basis. We have already defined the Laplacian in Cartesian coordinates. In polar coordinates, (r, θ), we would define it as:

∇²f = ∂²f/∂r² + (1/r) ∂f/∂r + (1/r²) ∂²f/∂θ²

The invariance of the trace to a change of basis means that the Laplacian can be defined in different coordinate spaces, but it gives the same value at some point, (x, y), in the Cartesian coordinate space and at the same point, (r, θ), in the polar coordinate space.
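We can verify this invariance numerically. In the sketch below (NumPy assumed; the test function f(x, y) = x²y is a made-up example), the same function is written in Cartesian form and in polar form as f(r, θ) = r³ cos²θ sinθ, and both Laplacians are approximated by finite differences at the same physical point, (x, y) = (1, 1), i.e. (r, θ) = (√2, π/4):

```python
import numpy as np

h = 1e-4  # finite-difference step size (an arbitrary choice)

def lap_cartesian(f, x, y):
    # Sum of unmixed second partials in Cartesian coordinates.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx + fyy

def lap_polar(g, r, t):
    # Laplacian in polar coordinates: g_rr + g_r / r + g_tt / r^2.
    grr = (g(r + h, t) - 2 * g(r, t) + g(r - h, t)) / h**2
    gr = (g(r + h, t) - g(r - h, t)) / (2 * h)
    gtt = (g(r, t + h) - 2 * g(r, t) + g(r, t - h)) / h**2
    return grr + gr / r + gtt / r**2

f = lambda x, y: x**2 * y                        # Laplacian is 2y
g = lambda r, t: r**3 * np.cos(t)**2 * np.sin(t) # same function, polar form

print(round(lap_cartesian(f, 1.0, 1.0), 3))           # 2.0
print(round(lap_polar(g, np.sqrt(2.0), np.pi / 4), 3))  # 2.0
```

Both computations agree, even though the coordinate systems differ.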

Recall that we also mentioned that the second derivative can provide us with information about the curvature of a function. Intuitively, then, we can consider the Laplacian to also provide us with information about the local curvature of a function, through this summation of second derivatives.

The continuous Laplace operator has been used to describe many physical phenomena, such as electric potentials and the diffusion equation for heat flow.

__The Discrete Laplacian__

Analogous to the continuous Laplace operator is the discrete one, formulated so that it can be applied to a discrete grid of, say, pixel values in an image, or to a graph.

Let’s have a look at how the Laplace operator can be recast in both cases.

In image processing, the Laplace operator is realized in the form of a digital filter that, when applied to an image, can be used for edge detection. In a sense, the Laplacian operator used in image processing also provides us with information about how the function curves (or bends) at some particular point, (x, y).

In this case, the discrete Laplacian operator (or filter) is constructed by combining two 1D second-derivative filters into a single 2D one.
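As a sketch of how such a filter behaves (NumPy assumed; the tiny two-tone "image" is a made-up example), the 1D second-derivative stencil [1, -2, 1] applied along each axis combines into the familiar 3×3 kernel, which responds only where the intensity changes:

```python
import numpy as np

# The 3x3 Laplacian kernel: the 1D stencil [1, -2, 1] applied along x
# and along y, summed into a single 2D filter.
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]])

# A tiny "image": a dark left half and a bright right half.
img = np.zeros((5, 6))
img[:, 3:] = 10.0

# Convolve (correlate) the kernel over the interior (valid) pixels.
out = np.zeros((3, 4))
for i in range(3):
    for j in range(4):
        out[i, j] = np.sum(kernel * img[i:i + 3, j:j + 3])

print(out[1])  # nonzero only at the vertical edge between the two halves
```

In the flat regions the filter output is zero; it fires with opposite signs on either side of the edge, which is what makes it useful for edge detection.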

In machine learning, the information provided by the discrete Laplace operator as derived from a graph can be used for the purpose of data clustering.

Consider a graph, G = (V, E), having a finite number of V vertices and E edges. Its Laplacian matrix, L, can be defined in terms of the degree matrix, D, containing information about the connectivity of each vertex, and the adjacency matrix, A, which indicates pairs of vertices that are adjacent in the graph:

L = D – A

Spectral clustering can be carried out by applying some standard clustering method (such as k-means) to the eigenvectors of the Laplacian matrix, hence partitioning the graph nodes (or the data points) into subsets.
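The following sketch walks through both steps on a toy graph (NumPy only; the six-vertex graph of two triangles joined by a bridge is a made-up example, and thresholding the Fiedler vector, i.e. the eigenvector of the second-smallest eigenvalue, stands in for k-means with k = 2):

```python
import numpy as np

# Two triangles joined by a single bridge edge (2-3); spectral
# clustering should split the graph at that bridge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2),   # triangle 1
             (3, 4), (4, 5), (3, 5),   # triangle 2
             (2, 3)]:                  # bridge
    A[i, j] = A[j, i] = 1.0

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]               # second-smallest eigenvalue's vector
labels = (fiedler > 0).astype(int)    # threshold in place of k-means

print(labels)  # vertices 0-2 land in one cluster, 3-5 in the other
```

Note that the smallest eigenvalue of L is always 0 (with the all-ones eigenvector), one zero eigenvalue per connected component; the second-smallest eigenvector is what carries the cut information.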

One problem that can arise in doing so is scalability with large datasets, where the eigendecomposition (that is, the computation of the eigenvectors) of the Laplacian matrix may be prohibitive. Deep learning has been proposed to address this problem, where a deep neural network is trained so that its outputs approximate the eigenvectors of the graph Laplacian. The neural network, in this case, is trained using a constrained optimization approach to enforce the orthogonality of the network outputs.

__Further Reading__

This section provides more resources on the topic if you are looking to go deeper.

*Books*

*Single and Multivariable Calculus*, 2020

*Handbook of Image and Video Processing*, 2005

*Articles*

Laplace operator, Wikipedia

Divergence, Wikipedia

Discrete Laplace Operator, Wikipedia

Laplacian matrix, Wikipedia

Spectral clustering, Wikipedia


__Summary__

In this tutorial, you got a brief introduction to the Laplacian.

Specifically, you learned:

1. The definition of the Laplace operator, and how it relates to divergence
2. How the Laplace operator relates to the Hessian
3. How the continuous Laplace operator has been recast into discrete space, and applied to image processing and spectral clustering
