We all know what a hash is: you have some data, like a string, and you calculate a number from it (say, by adding up the ASCII values of its characters), then use that number as an index to store lookup info about the string. But multiple strings might produce the same hash index, so you have to do a linear search through the entries at that index to find the exact string you are interested in. Hashing can get you very close to constant-time lookup, but only if the hash function is very good.
A perfect hash guarantees that every source value maps to a unique index. These are generally impossible to achieve, but they can be built for special circumstances. One of those special circumstances is a finite list of strings. If you know that only these 1000 strings will ever be part of your input, you can generate a perfect hash function. The function might pick, say, the first bit of the first character, add the third bit of the second character, xor that with the fourth character, and multiply by the sixth character, with a guarantee that the result is always a value from 0 to 999, which you can then use to index an array.
The trick is knowing which bits distinguish the 1000 strings from each other and finding a high-speed way to compute the values.
gperf is a tool that does this analysis and generates a perfect hash function for a set of strings, giving extremely fast lookup when the set of strings is fixed.
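For a rough sense of what using gperf looks like: its input file lists the keywords after a `%%` separator (an optional declarations section goes above it), and by default the generated C code exposes a lookup function named `in_word_set`. A minimal input file might look like this (filename is my own choice):

```
%%
red
green
blue
```

Running something like `gperf colors.gperf > colors.c` then writes the generated hash and lookup code to stdout; see the gperf manual for the many options controlling the generated code.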
Such a tool could be useful for some G programmers doing parsing or translation. I figured I would mention it in case someone is looking for a project. Even if no one picks up this idea, just knowing about the possibility of a perfect hash function is a useful tool in any programmer's mental toolbox.
The article goes into deep detail about the algorithms used in C++ to generate perfect hash functions for data structures, which should be enough to actually translate the idea into LabVIEW code and scripting.