When Better Isn’t Necessarily Better

After writing about “luck manipulation” in tool-assisted speedruns, I got to thinking about the subject of pseudo-random number generation. This is a topic that comes up, briefly yet inevitably, on every game engine project in existence, and I think it’s a good example of a situation where implementation choices are not as cut and dried as conventional wisdom would have you believe.

The contemporary random number generator of choice is the snappily-named Mersenne Twister. It has an enormous period (2^19937 − 1), good equidistribution characteristics, and has passed many different randomness tests. It is also reputed to be faster than many runtime library implementations of rand(). I know that it has been used in game projects, as well as more widely in the scientific research community. So why isn’t using the Mersenne Twister an automatic slam-dunk decision that nobody should ever think about?

First off, it is not suitable for cryptography, because observing enough outputs (624 consecutive values suffice) allows an attacker to reconstruct the generator’s internal state and predict every future number. This isn’t normally a concern for me in the projects that I write, and it usually is not a concern in game engines in general. So I’ll put this concern aside.
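To make the predictability concrete, here is a minimal sketch of the attack’s key ingredient. The “tempering” transform MT19937 applies to each output word is a bijection, so each output maps straight back to one state word; untemper 624 consecutive outputs and you hold a complete copy of the generator’s state. The constants in `temper` are the published MT19937 parameters; `untemper` is my sketch of the inverse, undoing each step by fixed-point iteration.

```c
#include <stdint.h>

/* MT19937's output tempering: a bijection applied to each state word. */
uint32_t temper(uint32_t y) {
    y ^= y >> 11;
    y ^= (y << 7)  & 0x9d2c5680u;
    y ^= (y << 15) & 0xefc60000u;
    y ^= y >> 18;
    return y;
}

/* Invert one y = x ^ ((x << shift) & mask) step. Each pass of the
   fixed-point iteration recovers `shift` more low-order bits of x. */
static uint32_t invert_left(uint32_t y, int shift, uint32_t mask) {
    uint32_t x = y;
    for (int i = 0; i < 32 / shift + 1; i++)
        x = y ^ ((x << shift) & mask);
    return x;
}

/* Invert one y = x ^ (x >> shift) step, same idea from the top bits down. */
static uint32_t invert_right(uint32_t y, int shift) {
    uint32_t x = y;
    for (int i = 0; i < 32 / shift + 1; i++)
        x = y ^ (x >> shift);
    return x;
}

/* Undo the tempering steps in reverse order. */
uint32_t untemper(uint32_t y) {
    y = invert_right(y, 18);
    y = invert_left(y, 15, 0xefc60000u);
    y = invert_left(y, 7,  0x9d2c5680u);
    y = invert_right(y, 11);
    return y;
}
```

From the recovered state, an attacker simply runs the public algorithm forward to predict every subsequent output.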

A somewhat more practical concern involves the amount of storage used for the state of the random number generator: 624 32-bit words, or roughly 2.5K per generator instance. This doesn’t sound like a lot, but when you consider that one might want to use this algorithm in a tightly resource-constrained environment (say, on an embedded processor), even a couple of K can be a concern. An application may have multiple random number generators, which exacerbates the problem. (For example, a game engine might dedicate one random number generator to gameplay events and things that need to be reproducible in a replay system, and another to things like special effects which do not impact gameplay.) The Gamasutra article linked above notes that the buffer size can be reduced in exchange for a shorter period — I didn’t rigorously check this idea against the original paper, but it sounds like a reasonable compromise.
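A back-of-the-envelope sketch of that footprint; the `game_rng` layout and its field names are purely illustrative, not taken from any particular engine:

```c
#include <stdint.h>

/* MT19937 state: 624 32-bit words plus a cursor into them.
   624 * 4 = 2496 bytes of state (about 2.5K) per instance,
   before the cursor and any padding. */
typedef struct {
    uint32_t mt[624];
    int index;
} mt_state;

/* Hypothetical engine layout: one generator for replay-critical
   gameplay rolls, a second for purely cosmetic effects. Two
   instances already cost about 5K. */
typedef struct {
    mt_state gameplay;
    mt_state vfx;
} game_rng;
```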

A final concern involves execution speed. The algorithm generates 624 numbers at a time, and stores them in the state array. The process of extracting numbers out of the state array is pretty straightforward — it’s just a bunch of logical operations and shifts. When the supply of generated numbers is exhausted, the generation process is run again. However, the generation process is considerably more expensive than the extraction process. The result is a situation where the execution time of generating a random number may vary considerably from call to call. This is often undesirable, because it makes the performance characteristics of code using the Mersenne Twister harder to understand. (For example, when looking at profiling statistics, hotspots might “migrate” based on what section of the code had to stop and generate a new set of 624 numbers.)
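That two-phase cost structure is easy to see in a straightforward transcription of the published algorithm (a sketch of reference MT19937, not production code): `mt_next` is a few shifts and XORs, except on every 624th call, when it stops to refill the entire state array.

```c
#include <stdint.h>

#define MT_N 624
#define MT_M 397

typedef struct {
    uint32_t mt[MT_N];
    int index;   /* MT_N means "cache exhausted, regenerate" */
} mt_state;

void mt_seed(mt_state *s, uint32_t seed) {
    s->mt[0] = seed;
    for (int i = 1; i < MT_N; i++)
        s->mt[i] = 1812433253u * (s->mt[i-1] ^ (s->mt[i-1] >> 30)) + (uint32_t)i;
    s->index = MT_N;
}

/* The expensive phase: recompute all 624 state words in one pass. */
static void mt_generate(mt_state *s) {
    for (int i = 0; i < MT_N; i++) {
        uint32_t y = (s->mt[i] & 0x80000000u)
                   | (s->mt[(i + 1) % MT_N] & 0x7fffffffu);
        s->mt[i] = s->mt[(i + MT_M) % MT_N] ^ (y >> 1);
        if (y & 1u)
            s->mt[i] ^= 0x9908b0dfu;
    }
    s->index = 0;
}

/* The cheap phase: temper and hand out one cached word.
   Only 1 call in 624 pays for mt_generate. */
uint32_t mt_next(mt_state *s) {
    if (s->index >= MT_N)
        mt_generate(s);
    uint32_t y = s->mt[s->index++];
    y ^= y >> 11;
    y ^= (y << 7)  & 0x9d2c5680u;
    y ^= (y << 15) & 0xefc60000u;
    y ^= y >> 18;
    return y;
}
```

Which call site happens to trigger `mt_generate` is an accident of global call ordering, which is exactly why hotspots can appear to migrate between profiling runs.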

So the moral of this story is that, as always, a tool should be selected based on a project’s particular needs and limitations, and not necessarily based on some objective notion of quality. Oh, and another take-away would be that sometimes the best tool for a job is actually a set of techniques. In this case, the humble linear congruential generator may act as a worthy complement to the otherwise stellar Twister.
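As a sketch of that complement: a 32-bit linear congruential generator carries four bytes of state and costs the same handful of cycles on every call. The multiplier and increment below are the widely published “Numerical Recipes” constants; any proven LCG parameter set would do, and `lcg_range` is an illustrative helper rather than part of any standard API.

```c
#include <stdint.h>

/* LCG: state' = a * state + c (mod 2^32); the modulus is implicit
   in the unsigned 32-bit arithmetic. Four bytes of state total. */
typedef struct { uint32_t state; } lcg;

uint32_t lcg_next(lcg *g) {
    g->state = 1664525u * g->state + 1013904223u;
    return g->state;
}

/* The low-order bits of an LCG are its weakest; when a value in
   [0, n) is needed, derive it from the high bits via a widening
   multiply rather than taking state % n. */
uint32_t lcg_range(lcg *g, uint32_t n) {
    return (uint32_t)(((uint64_t)lcg_next(g) * n) >> 32);
}
```

Seeding two instances identically yields identical streams, which is the property a replay system actually needs; the Twister can then be reserved for the places where its statistical quality pays for its footprint.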
