DBSCAN (Density-Based Spatial Clustering of Applications with Noise) captures the insight that clusters are dense groups of points. The idea is that if a particular point belongs to a cluster, it should be near lots of other points in that cluster.

It works like this: first we choose two parameters, a positive number epsilon and a natural number minPoints. We then begin by picking an arbitrary point in our dataset. If there are at least minPoints points within a distance of epsilon from that point (including the original point itself), we consider all of them to be part of a "cluster". We then expand that cluster by checking all of the new points and seeing if they too have at least minPoints points within a distance of epsilon, growing the cluster recursively if so. Eventually, we run out of points to add to the cluster. We then pick a new arbitrary point and repeat the process.

Now, it's entirely possible that a point we pick has fewer than minPoints points in its epsilon ball and is also not a part of any other cluster. If that is the case, it's considered a "noise point" not belonging to any cluster.

(There's a slight complication worth pointing out: say minPoints = 4, and you have a point with three points in its epsilon ball, including itself. Say the other two points belong to two different clusters, and each has 4 points in its own epsilon ball. Then both of these dense points will "fight over" the original point, and it's arbitrary which of the two clusters it ends up in. To see what I mean, try out "Example A" with minPoints = 4 and epsilon = 1.98. Since DBSCAN considers the points in an arbitrary order, the middle point can end up in either the left or the right cluster on different runs. This kind of point is known as a "border point".)

To illustrate the "epsilon ball rules", before the algorithm runs I superimpose a grid of epsilon balls over the dataset you choose, and color them in if they contain at least minPoints points. To get an additional feel for how this algorithm works, check out the "DBSCAN Rings" dataset, which consists of different numbers of points in different-sized circles.

Note that in the actual DBSCAN algorithm, epsilon and minPoints remain the same throughout. But I thought it'd be fun to play around with changing them while the algorithm is running, so I've left in the option to do so. Take a look at how the k-means algorithm performs on these same datasets. How does it compare? Where is it better and where is it worse?
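To make the steps above concrete, here is a minimal from-scratch sketch in Python. This is not the code behind the interactive demo; the function and variable names are my own, and it uses a brute-force neighbor search for clarity rather than speed.

```python
import numpy as np

def dbscan(points, eps, min_points):
    """Return a cluster label for every row of `points`:
    0, 1, 2, ... for clusters, -1 for noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    labels = [None] * n          # None means "not visited yet"
    cluster_id = -1

    def epsilon_ball(i):
        # Indices of every point within eps of point i, including i itself.
        dists = np.linalg.norm(points - points[i], axis=1)
        return list(np.flatnonzero(dists <= eps))

    for i in range(n):
        if labels[i] is not None:
            continue
        neighbors = epsilon_ball(i)
        if len(neighbors) < min_points:
            labels[i] = -1       # provisionally noise; may later become a border point
            continue
        cluster_id += 1          # i is a core point: start a new cluster and grow it
        labels[i] = cluster_id
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:  # previously noise -> border point of this cluster
                labels[j] = cluster_id
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_neighbors = epsilon_ball(j)
            if len(j_neighbors) >= min_points:   # j is also a core point, keep expanding
                queue.extend(j_neighbors)
    return labels

# Two tight groups and one far-away point, which comes out as noise (-1):
X = [[0, 0], [0, 1], [1, 0], [1, 1], [10, 10], [10, 11], [11, 10], [50, 50]]
print(dbscan(X, eps=1.5, min_points=3))   # e.g. [0, 0, 0, 0, 1, 1, 1, -1]
```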

Affinity matrix 

An affinity matrix, also called a similarity matrix, is a basic tool for organizing the mutual similarities between a set of data points. Similarity plays a role much like distance, but it does not satisfy the properties of a metric: two identical points have a similarity score of 1, whereas their distance is zero, and larger values mean the points are more alike rather than farther apart. Typical examples of similarity measures are cosine similarity and Jaccard similarity, and such scores can loosely be interpreted as the probability that two points are related. For example, two data points that lie close together will typically have a cosine similarity (or "affinity") score much closer to 1 than two data points with a lot of space between them.
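As a small concrete illustration, here is a sketch (my own NumPy code, with made-up data values) of building a cosine-similarity affinity matrix for a few 2-D points:

```python
import numpy as np

def cosine_affinity_matrix(X):
    """Return the n x n matrix A with A[i, j] = cosine similarity of rows i and j."""
    X = np.asarray(X, dtype=float)
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    unit = X / norms                  # normalize each row to unit length
    return unit @ unit.T              # dot products of unit vectors = cosines

X = np.array([[1.0, 0.1],   # points 0 and 1 point in nearly the same direction
              [2.0, 0.3],
              [0.1, 1.0]])  # point 2 points in a very different direction
A = cosine_affinity_matrix(X)
print(np.round(A, 2))
# Diagonal entries are 1 (each point is maximally similar to itself);
# A[0, 1] is close to 1, while A[0, 2] is much smaller.
```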
Why is this Useful?
By assigning a numerical value to the abstract concept of similarity, the affinity matrix lets machine learning programs make educated guesses about which pieces of information are related and how closely. Just as usefully, the similarity matrix allows machine learning systems to work with unlabeled or noisy datasets, which leads to many practical applications across fields.
Practical Uses of an Affinity Matrix
Smart Information Retrieval – The similarity matrix is the driving force behind search engines that pull in additional relevant results you didn't even know you needed.
Advanced Genetic Research – Discovering and studying affinities among ostensibly unrelated DNA sequences has driven major advances in medical and pharmaceutical research.
Efficient Data Mining – A similarity matrix allows fast, accurate recognition of hidden relationship patterns in databases of unlabeled information, which is particularly useful for business intelligence, law enforcement, and scientific research.
Intelligent Unsupervised Machine Learning – Creating machine learning algorithms capable of deriving structure and meaning from "raw", unorganized data would simply be impossible without a similarity matrix to ensure a minimum standard of accuracy in what the algorithm is teaching itself. Techniques in this area, such as k-means and k-nearest neighbors, rely heavily on the choice of either a distance function or an affinity measure; one common way to convert between the two is sketched below.
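Illustrating that last point: a standard way to move from distances to affinities is the Gaussian (RBF) kernel. A minimal sketch, assuming NumPy and a made-up bandwidth parameter sigma:

```python
import numpy as np

def gaussian_affinity_matrix(X, sigma=1.0):
    """Convert pairwise Euclidean distances into affinities in (0, 1]:
    A[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    X = np.asarray(X, dtype=float)
    diffs = X[:, None, :] - X[None, :, :]          # shape (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)         # squared Euclidean distances
    return np.exp(-sq_dists / (2.0 * sigma ** 2))  # distance 0 -> affinity 1

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(np.round(gaussian_affinity_matrix(X, sigma=1.0), 3))
# Nearby points get affinities near 1; distant points get affinities near 0.
```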

Spectral clustering

https://stats.stackexchange.com/questions/245455/why-does-kernel-k-means-work-better-than-spectral-clustering-in-this-case
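The linked discussion compares kernel k-means with spectral clustering. Spectral clustering is the main consumer of the affinity matrices described above: it builds a weighted graph from the affinities, computes eigenvectors of the graph Laplacian, and then runs k-means on the resulting embedding. Below is a minimal sketch, assuming scikit-learn is available; the two-rings dataset and the sigma value are made up for illustration, and the affinity matrix is passed in directly via affinity="precomputed".

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Two concentric rings: plain k-means tends to split these badly,
# while spectral clustering can separate them cleanly.
rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, size=200)
radii = np.repeat([1.0, 4.0], 100)               # 100 points per ring
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])

# Gaussian (RBF) affinity matrix, as in the earlier sketch.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
A = np.exp(-sq_dists / (2.0 * 0.5 ** 2))

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            assign_labels="kmeans",
                            random_state=0).fit_predict(A)

# If clustering succeeded, each ring comes out as a single label:
print("inner ring labels:", set(labels[:100]))
print("outer ring labels:", set(labels[100:]))
```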