A "memory map" is a collection of vectors with normalized factors. It may go by different names, such as "Self-organizing Maps (SOMS)", and variations on this theme are common in AI and neural research. The question at hand is about making such algorithms more efficient.
Here is an example, using a zero-to-one normalization range, of how a space probe might match a sample against such a map as it surveys planets:
 s = getSampleVector();
 best_sample = blank;    // no match found yet
 min_diff = 999;         // sentinel: larger than any possible diff
 for p = each_planet_in_galaxy() {
   diff = abs(p1-s1) + abs(p2-s2) + ... + abs(p6-s6);
   if (diff < min_diff) {
     best_sample = p;
     min_diff = diff;
   }
 }
 print("Best matching vector: " . showV(best_sample));
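For concreteness, here is the same linear scan as a runnable Python sketch. The planet vectors and sample values are invented stand-ins; any collection of six-factor tuples normalized to 0..1 would do.

 def manhattan(p, s):
     # The "diff" above: sum of absolute factor differences.
     return sum(abs(pi - si) for pi, si in zip(p, s))

 def best_match(sample, planets):
     # Linear scan: score every planet, keep the closest.
     return min(planets, key=lambda p: manhattan(p, sample))

 planets = [
     (0.26, 0.50, 0.95, 0.10, 0.70, 0.33),
     (0.24, 0.58, 0.93, 0.40, 0.20, 0.90),
     (0.15, 0.61, 0.91, 0.80, 0.55, 0.05),
 ]
 sample = (0.25, 0.52, 0.98, 0.12, 0.68, 0.30)
 print("Best matching vector:", best_match(sample, planets))

Swapping min for heapq.nsmallest(k, planets, key=lambda p: manhattan(p, sample)) from the standard library would return the k closest planets instead of just one.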
A fancier version would show multiple top-ranking matches, something like this:

 Rank  Diff  Sample                Planet
 ----------------------------------------------------
 1     0.25  (0.25,0.52,0.98,...)  (0.26,0.50,0.95,...)
 2     0.30  (0.25,0.52,0.98,...)  (0.24,0.58,0.93,...)
 3     0.32  (0.25,0.52,0.98,...)  (0.15,0.61,0.91,...)
 4     0.40  etc...

The question I have is: what kind of algorithms can be used to speed up the matching/scoring process so that one does not have to do a sequential search? The simplistic approaches I have thought of (e.g., index strings) tend to weigh some factors more heavily than others, which is not what we want.
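One family of structures often suggested for this is the k-d tree, which partitions the vector space along one factor at a time and treats every factor symmetrically, so no factor is weighted more heavily than another. A minimal sketch using SciPy's implementation, assuming SciPy is available and the map fits in memory (the data is randomly generated for illustration):

 import numpy as np
 from scipy.spatial import cKDTree

 # Invented data: one row per planet, six normalized factors each.
 planets = np.random.rand(100_000, 6)
 sample = np.random.rand(6)

 tree = cKDTree(planets)   # built once, reused for every sample
 # p=1 selects the Manhattan metric (the sum-of-abs-diffs used
 # above); k=3 asks for the three closest planets.
 diffs, indices = tree.query(sample, k=3, p=1)
 for rank, (d, i) in enumerate(zip(diffs, indices), start=1):
     print(rank, round(float(d), 2), planets[i])

k-d trees give exact answers and are fast for a handful of factors (six is fine); in much higher dimensions they degrade toward a sequential scan, and approximate schemes such as locality-sensitive hashing are the usual fallback.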
Notes
A weighted variant of the diff, if some factors should deliberately count more than others:

 diff = w1*abs(p1-s1) + w2*abs(p2-s2) + ... + w6*abs(p6-s6)
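Since w*abs(p-s) equals abs(w*p - w*s) for non-negative weights, the weights can be folded into the stored vectors themselves: pre-scale each factor of every planet (and of the sample) by its weight, and the same unweighted search structure applies. A sketch continuing the k-d tree example, with invented weights:

 import numpy as np
 from scipy.spatial import cKDTree

 planets = np.random.rand(100_000, 6)                 # invented data
 sample = np.random.rand(6)
 weights = np.array([1.0, 2.0, 0.5, 1.0, 1.0, 3.0])   # invented weights

 # w*abs(p-s) == abs(w*p - w*s) for w >= 0, so scaling each factor
 # by its weight lets the unweighted Manhattan query compute the
 # weighted diff exactly.
 tree = cKDTree(planets * weights)
 diffs, indices = tree.query(sample * weights, k=3, p=1)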
CategoryArtificialIntelligence, CategoryPerformance