Unsupervised Learning Series: Exploring Self-Organizing Maps | by Ivo Bernardo | Aug, 2023

Learn how Self-Organizing Maps work and why they are a useful unsupervised learning algorithm

Self-Organizing Maps (SOMs) are a type of unsupervised neural network utilized for clustering and visualization of high-dimensional data. SOMs are trained using a competitive learning algorithm, in which nodes (also known as neurons) in the network compete for the right to represent input data.

The SOM architecture consists of a 2D grid of nodes, where each node is associated with a weight vector that plays a role similar to a centroid mean in k-means. During training, the nodes arrange themselves so that neighboring nodes end up representing similar data points, producing a map that reflects the underlying structure of the data.
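To make this concrete, here is a minimal sketch of that architecture in `numpy`. The grid size (4×4) and input dimensionality (3) are arbitrary choices for illustration, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 4x4 SOM grid over 3-dimensional inputs: one weight
# (prototype) vector per node, playing the role a centroid plays in k-means.
grid_h, grid_w, input_dim = 4, 4, 3
weights = rng.random((grid_h, grid_w, input_dim))

# Mapping an input means finding the node whose weight vector is closest
# to it (the "best-matching unit", or BMU).
x = np.array([0.2, 0.8, 0.5])
distances = np.linalg.norm(weights - x, axis=-1)
bmu = np.unravel_index(np.argmin(distances), (grid_h, grid_w))
print(bmu)  # grid coordinates of the best-matching unit
```

Note that each node lives in two spaces at once: it has fixed coordinates on the 2D grid, and a movable weight vector in the input space.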

SOMs are commonly used for a wide array of tasks such as:

• data visualization
• anomaly detection
• feature extraction
• clustering

We can also think of SOMs as the simplest neural network formulation of unsupervised learning!

While they may seem confusing at first, Self-Organizing Maps (or Kohonen Maps, named after their inventor) are an interesting type of algorithm that is able to map the underlying structure of the data. They can be described as follows:

• a one-layer unsupervised neural network, without backpropagation.
• a restricted k-means solution, where nodes have the ability to influence the movement of other nodes (in the context of k-means, the nodes are known as centroids).
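The second point above, nodes influencing the movement of other nodes, is what distinguishes a SOM update from a k-means update. A single training step can be sketched as below; the grid size, learning rate, and neighborhood width are illustrative assumptions, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 5x5 SOM grid over 2-D inputs, randomly initialized.
grid_h, grid_w, dim = 5, 5, 2
weights = rng.random((grid_h, grid_w, dim))

def train_step(weights, x, lr=0.5, sigma=1.0):
    """One competitive-learning update: the best-matching unit (BMU)
    and its grid neighbors are all pulled toward the input x."""
    # Competition: the node closest to x in input space wins.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)

    # Cooperation: a Gaussian neighborhood on the 2-D grid around the BMU.
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist_sq = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    influence = np.exp(-grid_dist_sq / (2 * sigma ** 2))

    # Adaptation: every node moves toward x, scaled by its influence
    # (the BMU moves the most; distant nodes barely move).
    return weights + lr * influence[..., None] * (x - weights)

x = np.array([1.0, 2.0])
new_weights = train_step(weights, x)
```

If the `influence` term were 1 for the winning node and 0 everywhere else, this would reduce to an online k-means update; the neighborhood function is what makes nearby nodes organize around similar data points.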

In this blog post, we’ll do a couple of experiments on the SOM model. Later, we’ll apply a Self-Organizing Map to a real use case, where we will be able to see the main features and shortcomings of the algorithm.

To understand how SOMs learn, let’s start by plotting a toy dataset in 2 dimensions.

We’ll create a `numpy` array with the following dataset and plot it afterwards:

```python
import numpy as np

X = np.array([[1, 2], [2, 1], [1, 3], [1, 2.5], [3.1, 5], [4, 10], [3.6, 5.4], [2…
```