Memory Map Implementation

A "memory map" is a collection of vectors with normalized factors. It may go by different names, such as "Self-organizing Maps (SOMS)", and variations on this theme are common in AI and neural research. The question at hand is about making such algorithms more efficient.

Consider a map with a zero-to-one normalization range that a space probe might use as it samples planets.
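For illustration, a single entry might look like this (the values are borrowed from the sample column of the ranking table further down):

  (0.25, 0.52, 0.98, ...)    six factors, each normalized between 0.0 and 1.0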

So we get a list of six measurements. Now suppose we have a database of the planets in the galaxy. We could use a probe's sample to guess which planet it landed on. The basic algorithm to find the closest match is:

  import math

  s = getSampleVector()              # the probe's six-factor sample vector
  best_match = None
  min_diff = math.inf                # start above any possible score
  for p in each_planet_in_galaxy():
      # score = sum of absolute differences across the six factors
      diff = sum(abs(pf - sf) for pf, sf in zip(p, s))
      if diff < min_diff:
          best_match = p
          min_diff = diff
  print("Best matching vector:", best_match)

A fancier version would show multiple top-ranking matches, something like this:

  Rank Diff         Sample              Planet
  -----------------------------------------------------
  1    0.25  (0.25,0.52,0.98,...) (0.26,0.50,0.95,...)
  2    0.30  (0.25,0.52,0.98,...) (0.24,0.58,0.93,...)
  3    0.32  (0.25,0.52,0.98,...) (0.15,0.61,0.91,...)
  4    0.40  etc...
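
A sketch of that fancier version, assuming the same getSampleVector() and each_planet_in_galaxy() helpers as above; heapq.nsmallest keeps only the k best-scoring planets rather than sorting the whole list:

  import heapq

  def manhattan(p, s):
      # same score as above: sum of absolute factor differences
      return sum(abs(pf - sf) for pf, sf in zip(p, s))

  s = getSampleVector()
  top4 = heapq.nsmallest(4, each_planet_in_galaxy(),
                         key=lambda p: manhattan(p, s))
  for rank, p in enumerate(top4, start=1):
      print(rank, round(manhattan(p, s), 2), s, p)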

The question I have is: what kinds of algorithms can speed up the matching/scoring process so that one does not have to do a sequential search? Some of the simplistic approaches I have thought of (e.g., index strings) tend to weigh some factors more heavily than others, which is not what we want.
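
One standard answer is a spatial index. As a rough sketch (not from the original page), SciPy's cKDTree can index the planet vectors once and then answer nearest-neighbor queries in roughly logarithmic time; passing p=1 to query() uses the same sum-of-absolute-differences score as above, and because all six factors share the zero-to-one normalization, no factor is weighed more heavily than another:

  import numpy as np
  from scipy.spatial import cKDTree

  planets = np.array(list(each_planet_in_galaxy()))   # N x 6 array of vectors
  tree = cKDTree(planets)                             # build once, query often

  s = getSampleVector()
  diffs, idx = tree.query(s, k=4, p=1)   # 4 nearest planets, Manhattan metric
  for rank, (d, i) in enumerate(zip(diffs, idx), start=1):
      print(rank, round(d, 2), planets[i])

k-d trees work well at six dimensions; for much higher-dimensional vectors, ball trees or locality-sensitive hashing tend to hold up better.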

Notes

A weighted variant of the difference score, where the w factors let some measurements count more heavily than others:

      diff = w1*abs(p1-s1) + w2*abs(p2-s2) + ... + w6*abs(p6-s6)
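
A minimal sketch of that weighted score; the equal-weight tuple shown in the comment is just a placeholder:

  def weighted_diff(p, s, w):
      # weighted sum of absolute factor differences
      return sum(wf * abs(pf - sf) for wf, pf, sf in zip(w, p, s))

  # equal weights reproduce the unweighted score:
  # weighted_diff(p, s, (1, 1, 1, 1, 1, 1))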


CategoryArtificialIntelligence, CategoryPerformance

