January 2019

Volume 34 Number 1

### [Test Run]

# Self-Organizing Maps Using C#

A self-organizing map (SOM) is a relatively simple machine learning (ML) technique. However, SOMs are a bit difficult to describe because there are so many variations, and also because SOMs have characteristics that resemble several other ML techniques, including unsupervised clustering and supervised classification. Briefly, I usually think of a SOM as a kind of exploratory clustering analysis where the goal is to assign data items that are similar to each other to the same map node.

The best way to get an idea of what a SOM is and to see where this article is headed is to take a look at the demo program in **Figure 1** and the diagram in **Figure 2**. The demo creates a 5x5 SOM for the UCI digits dataset. The dataset has 1,797 items and the SOM has 25 nodes. The SOM is created so that each data item is associated with exactly one map node.

**Figure 1 Self-Organizing Map (SOM) Demo**

**Figure 2 One Possible Visualization of the SOM**

After constructing the SOM, the demo computes the most common digit associated with each map node, and that information is displayed in **Figure 2**. The result of a SOM analysis is usually visual and must be analyzed somewhat subjectively. For example, you can see that most data items that represent a particular digit are assigned to SOM nodes that are close to each other. And the nodes for the digit 0 and the digit 6 are close to each other in the SOM, which suggests those digits are more similar to each other in some way than the digit 4 and the digit 8 are.

This article assumes you have intermediate or better programming skills, but doesn’t assume you know anything about SOMs. The demo program is coded using C#, but you shouldn’t have too much trouble refactoring the code to another language, such as Visual Basic or Python. The complete code for the demo program, with some minor edits to save space, is presented in **Figure 3**. The code and the data file are also available in the download that accompanies this article.

**Figure 3 Self-Organizing Map Demo Program**

```
using System;
using System.Collections.Generic;
using System.IO;

namespace SOmap
{
  class SOmapProgram
  {
    static void Main(string[] args)
    {
      Console.WriteLine("\nBegin self-organizing map demo \n");
      Random rnd = new Random(0);
      int Rows = 5, Cols = 5;
      int RangeMax = Rows + Cols;  // upper bound on Manhattan distance
      double LearnRateMax = 0.5;
      int StepsMax = 100000;

      // Initialize SOM nodes to random values
      double[][][] map = new double[Rows][][];  // [r][c][vec]
      for (int i = 0; i < Rows; ++i) {
        map[i] = new double[Cols][];
        for (int j = 0; j < Cols; ++j) {
          map[i][j] = new double[64];
          for (int k = 0; k < 64; ++k)
            map[i][j][k] = rnd.NextDouble();
        }
      }

      // Read data and labels into memory
      Console.WriteLine("Reading UCI digits data into memory");
      double[][] data = new double[1797][];
      for (int i = 0; i < 1797; ++i)
        data[i] = new double[64];
      int[] labels = new int[1797];
      FileStream ifs = new FileStream("..\\..\\digits_uci.txt",
        FileMode.Open);
      StreamReader sr = new StreamReader(ifs);
      string line = ""; string[] tokens = null;
      int row = 0;
      while ((line = sr.ReadLine()) != null) {
        tokens = line.Split(',');
        for (int j = 0; j < 64; ++j)
          data[row][j] = double.Parse(tokens[j]) / 16.0;
        labels[row] = int.Parse(tokens[64]);
        ++row;
      }
      sr.Close(); ifs.Close();

      // Construct the SOM
      Console.WriteLine("Constructing 5x5 SO Map");
      for (int s = 0; s < StepsMax; ++s)  // main loop
      {
        if (s % (int)(StepsMax / 5) == 0 && s > 0)
          Console.WriteLine("step = " + s);
        double pctLeft = 1.0 - ((s * 1.0) / StepsMax);
        int currRange = (int)(pctLeft * RangeMax);
        double currLearnRate = pctLeft * LearnRateMax;

        // Pick random data index
        int t = rnd.Next(0, 1797);
        // Get (row,col) of closest map node -- 'bmu'
        int[] bmuRC = ClosestNode(data, t, map);
        // Move each node in the BMU's neighborhood toward the data item
        for (int i = 0; i < Rows; ++i) {
          for (int j = 0; j < Cols; ++j) {
            if (ManDist(bmuRC[0], bmuRC[1], i, j) <= currRange)
              for (int k = 0; k < 64; ++k)
                map[i][j][k] = map[i][j][k] +
                  currLearnRate * (data[t][k] - map[i][j][k]);
          } // j
        } // i
      } // s(tep)
      Console.WriteLine("Map construction complete \n");

      // Show one map node
      Console.WriteLine("Value of map[1][1] vector is: ");
      Console.WriteLine(map[1][1][0].ToString("F4") + " " +
        map[1][1][1].ToString("F4") + " . . " +
        map[1][1][63].ToString("F4"));
      Console.WriteLine(" [0] [1] . . [63] \n");

      // Map has been created; assign data items to map
      Console.WriteLine("Assigning data indices to map \n");
      List<int>[][] mapping = new List<int>[Rows][];
      for (int i = 0; i < Rows; ++i)
        mapping[i] = new List<int>[Cols];
      for (int i = 0; i < Rows; ++i)
        for (int j = 0; j < Cols; ++j)
          mapping[i][j] = new List<int>();

      for (int t = 0; t < 1797; ++t)  // each data item
      {
        // Find node map coords where node is closest to D(t)
        int[] rc = ClosestNode(data, t, map);
        int r = rc[0]; int c = rc[1];
        mapping[r][c].Add(t);
      }

      Console.WriteLine("Data indices assigned to map[3][3]: ");
      foreach (int idx in mapping[3][3])
        Console.Write(idx.ToString().PadLeft(5) + " ");
      Console.WriteLine("\n");

      // Show one possible visualization
      Console.WriteLine("Most common labels for each map node: ");
      for (int i = 0; i < Rows; ++i) {
        for (int j = 0; j < Cols; ++j) {
          List<int> members = new List<int>();  // '0'-'9'
          foreach (int idx in mapping[i][j])
            members.Add(labels[idx]);
          int mcv = MostCommonVal(members);
          Console.Write(mcv + " ");
        }
        Console.WriteLine("");
      }

      Console.WriteLine("\nEnd self-organizing map demo");
      Console.ReadLine();
    } // Main

    static int ManDist(int x1, int y1, int x2, int y2)
    {
      // Manhattan (city-block) distance between two map cells
      return Math.Abs(x1 - x2) + Math.Abs(y1 - y2);
    }

    static double EucDist(double[] v1, double[] v2)
    {
      double sum = 0;
      for (int i = 0; i < v1.Length; ++i)
        sum += (v1[i] - v2[i]) * (v1[i] - v2[i]);
      return Math.Sqrt(sum);
    }

    static int[] ClosestNode(double[][] data, int t,
      double[][][] map)
    {
      // Coords in map of node closest to data[t]
      double smallDist = double.MaxValue;
      int[] result = new int[] { 0, 0 };  // (row, col)
      for (int i = 0; i < map.Length; ++i) {
        for (int j = 0; j < map[0].Length; ++j) {
          double dist = EucDist(data[t], map[i][j]);
          if (dist < smallDist) {
            smallDist = dist; result[0] = i; result[1] = j;
          }
        }
      }
      return result;
    }

    static int MostCommonVal(List<int> list)
    {
      if (list.Count == 0) return -1;
      int largestCount = 0; int mostCommon = 0;
      int[] counts = new int[10];
      foreach (int val in list) {
        ++counts[val];
        if (counts[val] > largestCount) {
          largestCount = counts[val]; mostCommon = val;
        }
      }
      return mostCommon;
    }
  } // Program
} // ns
```

## Understanding SOMs

The first step when creating a SOM is to make sure you understand the target data. The UCI Digits Dataset has 1,797 items. The data looks like:

```
0, 12, 0, 8, . . 16, 7
0, 9, 11, 5, . . 13, 4
```

Each item has 65 values. The first 64 values are grayscale pixel values, between 0 and 16, that represent a crude 8x8 handwritten digit. The last value is the digit label, 0 to 9, which corresponds to those pixel values. So, the two items shown above represent a 7 and a 4. You can create a SOM for data that has labels, or data without labels, but for basic SOMs the non-label data must be strictly numeric.
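To make the parsing concrete, here's a minimal sketch in Python (one of the languages I mentioned as a refactoring target) of how one line of the data file can be split and its pixel values normalized to [0, 1] by dividing by 16, just as the demo's loading code does. The helper name here is mine, not part of the demo:

```python
# Hypothetical helper: parse one comma-separated line of the UCI digits data.
# The first 64 tokens are pixel values (0-16); token 65 is the digit label.
def parse_digits_line(line):
    tokens = line.strip().split(',')
    pixels = [int(tok) / 16.0 for tok in tokens[:64]]  # normalize to [0, 1]
    label = int(tokens[64])
    return pixels, label

# A made-up item: 64 pixels all equal to 8, with label 3
line = ','.join(['8'] * 64 + ['3'])
pixels, label = parse_digits_line(line)
print(len(pixels), pixels[0], label)  # 64 0.5 3
```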

The demo program creates a 5x5 SOM. The dimensions of a SOM are arbitrary to a large extent and are mostly a matter of trial and error. Each cell of a SOM is called a node. Each node holds a vector of values with the same length as a data item, excluding the label. So, each node in the demo SOM has a vector with 64 values. For example, if you examine **Figure 1**, you'll see that after the SOM was created, the map node at location [1][1] is (0.000, 0.0237, . . 0.0086).

A SOM is created so that each node vector is representative of some of the data items but also so that map nodes that are close to each other geometrically represent data items that are similar. The algorithm used to create the demo SOM, in very high-level pseudo-code, is:

```
create map with random node vectors
loop for s = 0 to StepsMax-1
compute what a "close" node is, based on s
compute a learn rate, based on s
pick a random data item
determine map node closest to data item ("BMU")
for-each node close to the BMU
adjust node towards data item
end-loop
```

Creating a SOM is really more of a meta-heuristic than a rigidly prescribed algorithm. Each of the statements in the pseudo-code can be implemented in several ways.

## The Demo Program

To create the demo program, I launched Visual Studio and created a new C# console application project named SOmap. I used Visual Studio 2017 but the demo has no significant .NET dependencies so any version of Visual Studio will work fine. After the template code loaded, I renamed file Program.cs to SOmapProgram.cs and allowed Visual Studio to automatically rename class Program for me. In the Editor window, I removed all unneeded using statements, leaving just the reference to the System namespace. Then I added references to the Collections.Generic and IO namespaces because the demo uses a List<int> collection and reads data from a text file.

The demo program Main method begins with:

```
Random rnd = new Random(0);
int Rows = 5, Cols = 5;
int RangeMax = Rows + Cols;
double LearnRateMax = 0.5;
int StepsMax = 100000;
```

A key part of the SOM algorithm is determining what it means for two map nodes to be close together. The demo uses Manhattan distance; for example, map nodes at [1][1] and [3][4] have a Manhattan distance of 2 + 3 = 5. For a 5x5 map, the farthest apart any two nodes can be is corner to corner, [0][0] to [4][4], which is 4 + 4 = 8, so RangeMax = Rows + Cols = 10 is a safe upper bound.
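The demo's ManDist helper is tiny; a direct translation to Python looks like this, confirming the distances just described:

```python
def man_dist(r1, c1, r2, c2):
    # Manhattan (city-block) distance between two map cells
    return abs(r1 - r2) + abs(c1 - c2)

print(man_dist(1, 1, 3, 4))  # 5
print(man_dist(0, 0, 4, 4))  # 8, the true maximum for a 5x5 map
```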

Inside the main processing loop, the current maximum distance that defines “close” nodes is computed as:

```
double pctLeft = 1.0 - ((s * 1.0) / StepsMax);
int currRange = (int)(pctLeft * RangeMax);
double currLearnRate = pctLeft * LearnRateMax;
```

The pctLeft variable holds the percentage of steps remaining. For example, if StepsMax were set to 100 and the current loop counter variable is s = 33, then the percentage of steps remaining is 0.67. The maximum neighbor distance for step s is computed from that percentage. Similarly, when neighbor nodes are updated, the percentage of steps remaining is used to gradually decrease the learning rate.
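Refactored to Python, the decay schedule can be sketched as a small helper. The default values mirror the demo's constants, but the helper itself is mine, not part of the demo:

```python
def curr_range_and_rate(s, steps_max=100000, range_max=10, learn_rate_max=0.5):
    # Linearly decay both the neighborhood range and the learning rate
    # as step counter s runs from 0 to steps_max
    pct_left = 1.0 - (s / steps_max)
    return int(pct_left * range_max), pct_left * learn_rate_max

# Early in training, far-away nodes are updated with a large learning rate;
# late in training, only nodes very near the BMU move, and only slightly.
print(curr_range_and_rate(0))       # (10, 0.5)
print(curr_range_and_rate(25000))   # (7, 0.375)
print(curr_range_and_rate(50000))   # (5, 0.25)
```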

In each training iteration step, the one node in the map that’s closest to the randomly selected data item, in terms of Euclidean distance, is called the best matching unit (BMU). The code that updates each map node that’s close to the current BMU is:

```
for (int i = 0; i < Rows; ++i) {
for (int j = 0; j < Cols; ++j) {
if (ManDist(bmuRC[0], bmuRC[1], i, j) <= currRange)
for (int k = 0; k < 64; ++k)
map[i][j][k] = map[i][j][k] +
currLearnRate * (data[t][k] - map[i][j][k]);
```

In words: for each map node, if its Manhattan distance to the BMU is at most the current range that defines closeness, then the node's vector is moved a small fraction of the way from its current value toward the current data item.
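That update is just a linear interpolation toward the data item. A minimal Python sketch of one such update (the nudge helper is hypothetical, not part of the demo):

```python
def nudge(node_vec, data_vec, learn_rate):
    # Move each component of node_vec a fraction learn_rate
    # of the way toward the corresponding component of data_vec
    return [v + learn_rate * (d - v) for v, d in zip(node_vec, data_vec)]

node = [0.0, 1.0, 0.5]
item = [1.0, 0.0, 0.5]
print(nudge(node, item, 0.5))  # [0.5, 0.5, 0.5]
```

With learn_rate = 1.0 the node would jump exactly onto the data item; with learn_rate near 0.0 it barely moves, which is why decaying the rate stabilizes the map late in training.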

## Using a SOM

Suppose you’ve created a SOM—an n-by-m grid of nodes where each node is a numeric vector that has the same length as the data being analyzed (excluding any labels). Now what? Using a SOM is problem-dependent. If your data doesn’t have labels, one common technique is to create a U-Matrix, which is an n-by-m grid where the value of each node is the average distance between the corresponding SOM node vector and neighbor node vectors. A U-Matrix is usually displayed so that each value is interpreted as a grayscale pixel value.
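Here's a minimal Python sketch of the U-Matrix idea, assuming the map is stored as a nested list of node vectors and using only up/down/left/right neighbors (the helper names are mine, and other neighborhood definitions are possible):

```python
import math

def euc_dist(v1, v2):
    # Euclidean distance between two equal-length vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def u_matrix(som):
    # For each node, average the distance to its up/down/left/right neighbors
    rows, cols = len(som), len(som[0])
    result = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            dists = []
            for (ni, nj) in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]:
                if 0 <= ni < rows and 0 <= nj < cols:  # skip off-map neighbors
                    dists.append(euc_dist(som[i][j], som[ni][nj]))
            result[i][j] = sum(dists) / len(dists)
    return result

# Toy 2x2 map with one-element node vectors
toy = [[[0.0], [1.0]], [[0.0], [1.0]]]
print(u_matrix(toy))  # [[0.5, 0.5], [0.5, 0.5]]
```

Small U-Matrix values mean a node is similar to its neighbors (the interior of a cluster); large values suggest a boundary between clusters.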

Another possibility for displaying a SOM created from data without labels is to map each node vector to a color. For example, if a SOM node vector has size six, you can color each SOM cell using the RGB color model by assigning the average of the first two vector values to R, the next two values to G, and the last two values to B. The demo SOM node vectors have size 64, so there's no obvious way to associate each vector to a color.
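For a hypothetical six-value node vector with components in [0, 1], that RGB idea can be sketched in Python as follows (the helper name and the 0-255 scaling convention are mine):

```python
def vec_to_rgb(vec6):
    # Average each consecutive pair of the six values (assumed in [0, 1])
    # and scale to a 0-255 color channel
    assert len(vec6) == 6
    r = int(255 * (vec6[0] + vec6[1]) / 2)
    g = int(255 * (vec6[2] + vec6[3]) / 2)
    b = int(255 * (vec6[4] + vec6[5]) / 2)
    return (r, g, b)

print(vec_to_rgb([1.0, 1.0, 0.5, 0.5, 0.0, 0.0]))  # (255, 127, 0)
```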

If your data has labels, you can map each SOM node to a label. The demo program collects the labels of the data items associated with each SOM node, computes the most frequent label for each node, and then colors the nodes according to an arbitrary label-to-color assignment.

## Wrapping Up

There are dozens of variations of the basic self-organizing map structure used in the demo program. Instead of using an n-by-m rectangular grid, you can use a layout where each cell in the SOM is a hexagon. You can use a toroidal geometry where edges of the SOM connect. You can use three dimensions instead of two. And there are many ways to define a close neighborhood for nodes. And so on.

Although SOMs are widely known in the field of ML, they aren't used very often, at least among my colleagues. I suspect there are several reasons for this, but perhaps the main cause is that SOMs have so many variations it's rather confusing to select one particular design. Additionally, based on my experience, in order for a SOM to give useful insights into a dataset, a custom SOM is needed (as opposed to using an off-the-shelf SOM from an ML library). All that said, however, for certain problem scenarios, SOMs can provide useful insights.

**Dr. James McCaffrey** *works for Microsoft Research in Redmond, Wash. He has worked on several key Microsoft products, including Internet Explorer and Bing. Dr. McCaffrey can be reached at jamccaff@microsoft.com.*

Thanks to the following Microsoft technical experts who reviewed this article: Chris Lee, Ricky Loynd, Ken Tran