Fake Plastic Guitars

I rented Guitar Hero III the other day, figuring that I should play it, form an opinion of it, and get it out of the way before Rock Band shows up. After playing for a bit, I have to agree with a lot of the criticisms that I have seen leveled at the game:

  • The difficulty curve is very uneven. There are some songs in later tiers that seem quite easy in comparison to earlier songs (including Slow Ride, which seems to be a very strange choice for a first song).
  • The boss battles, while sounding neat on paper, don’t work very well in practice. Part of it may be that dueling stringed instruments now have unsavory associations, but they are just not very fun.
  • The art direction has gone more towards “x-treme,” and away from the friendlier, more whimsical styling in the Harmonix versions of the game. It’s difficult to find a character to use that I actually like, and I think this is a fairly big problem in a game that is, when you get right down to it, all about wish fulfillment.
  • The Guitar Hero series still has terrible tutorials. I can’t understand why this is still happening. It’s understandable that for the first game, the developers were under tremendous time pressure, and it’s hard to produce a quality tutorial on a compressed timescale. But now, Guitar Hero has a huge amount of resources behind it — there’s no excuse for them to still be copying the terrible tutorial structure from the first game, and there’s absolutely no excuse for not being able to fully explain game elements like hammer-ons and pull-offs in the tutorial. (The tutorial for GH3, if I am not mistaken, completely omits the fact that you can only perform those moves on notes that have their outline removed.)

It’s still Guitar Hero, which means it’s still fun, but this third installment (first for the new developer Neversoft) is personally disappointing. The song selection this time around is decent, but I would say that there are more songs in the career mode that I think are stinkers than in either of the first two games.

As I played through the career mode, I became more and more frustrated when the controller would not register a fret press when I was clearly holding down the button. Acknowledging that I must be a poor craftsman since I was blaming my tools, I read this short post about some simple mods that could be done to the 360 Xplorer Controller.

For starters, I decided to put cardboard under the fret buttons to help with responsiveness when the buttons were pressed in certain ways. I followed the (fairly complete) directions in that linked post, but hit a couple of snags that I figured I would note in case it saves anybody else some time.

  • In a moment of mental weakness, I traced out the wrong side of the button on the cardboard. This resulted in cardboard backing that barely fit into the bracket for the button, and produced a lot of friction and resistance — basically, not usable.
  • After fixing that oversight, I reassembled the guitar, only to discover that the cardboard backing I used was too thick, and the buttons were perpetually being pressed! I had used cardboard cut from a shoebox, but I wound up slicing it thinner (in half, roughly) to get something that would work. My recommendation would be to test the action of the fret buttons during each stage of the reassembly process, because it gets progressively more snug as the various pieces are reattached, and it’s a pain to have to undo 16 screws again because something got messed up.

This mod resulted in a decent improvement in the response of the fret buttons — I would endorse making these changes for anyone using the Xplorer controller, as it definitely helps reduce the frustration factor of the Guitar Hero games. I don’t have any issues with the responsiveness of the strum bar, so I probably won’t be changing that, but later on I might try my hand at making the whammy bar looser.

The Amalgamated Mathematician

I recently got done reading The Artist and the Mathematician, a book about the fictitious mathematician Nicolas Bourbaki and his influence on 20th century math. Bourbaki was a pseudonym for a group of French mathematicians who were intent on not only revamping the state of math education in France (it having suffered greatly from the two World Wars), but also rewriting the foundations of math in a more rigorous, axiomatic fashion, based in set theory. The book also posits Bourbaki as part of the genesis of the structuralist movement, with wide-ranging impact in fields such as anthropology, economics, and the like.

I found the parts of the book dealing with Bourbaki and the persons behind the pseudonym to be mostly acceptable. The author, Amir Aczel, delivers the story in a somewhat stilted, somewhat meandering fashion, but on the whole the narrative is readable, interesting, and enjoyable. However, I find the thesis of Bourbaki’s influence on the structuralist movement to be poorly supported (with the sources admitting only an ephemeral connection between the parties involved). If there is a connection, I personally believe it to be more indirect than what is implied.

This may be somewhat mean-spirited, but I must say that I find the author’s amazement at the birth of structuralism in anthropology akin to awe at watching a caveman bang rocks together. The way the story is told, an encounter with André Weil and a brief introduction to set theory saves the dissertation of Claude Lévi-Strauss (which would be published later as The Elementary Structures of Kinship), who then pioneers a new movement in anthropology. The apparently insurmountable problem that Lévi-Strauss faced, formalizing notions of marriage restrictions in aboriginal society, can be reduced to a set membership problem that I think any intelligent sixth grader could solve. Lévi-Strauss turned this into his meal ticket, trying to find any way he could to deduce “structure” from anthropological data, constructing theories that had no visible means of support. While doing a bit more reading on the topic, I was amused to find this summary of structural anthropology provided by Wikipedia: “…a great weakness of structuralism is that its main propositions were not formulated in a way so that they could be subject to verification or falsification. Lévi-Strauss did not develop a framework that could prove the existence of his concept of the fundamental structures of human thought but simply assumed them to be there, an unfortunate mistake considering that this concept underpinned all of his work.”

The book contains additional howlers on the subjects of economics, psychology, and literature. The work of Jacques Lacan on the “mirror experiment” is “analyzed…using the assumption of hidden structure. This led him to results that confirmed the structuralist approach.” (It should surprise no one that assuming a hidden bogeyman exists would lead you to interpret data in such a fashion as to confirm the existence of the bogeyman.) Aczel also somehow seeks to appropriate the supply and demand curve (an idea which was formulated much earlier, and which has remained largely unchanged since the end of the 19th century) as validation for the ideas of structuralism in economics. After this bizarre feat, the absence of an attempt to concoct some connection between fractals and micro/macroeconomic behavior seems like uncharacteristic restraint. The coup de grâce is the description of the Oulipo group, a group of “literary” madmen whose work can essentially be described as a combination of Mad Libs and Eliza, wholly worthless.

Overall, I would say that the book was a disappointment. While the topics of Nicolas Bourbaki and the history of French mathematics in the 20th century were interesting, the book’s detours into other realms of study resulted in a pretty spectacular decline in quality. These sections provoked a palpable sense of outrage — while I felt compelled to finish the book, I also felt compelled to point out these problems. The book’s lack of actual mathematical content was just the cherry on top…

When Better Isn’t Necessarily Better

After writing about “luck manipulation” in tool-assisted speed runs, I got to thinking about the subject of pseudo-random number generation. This is a topic that comes up, briefly yet inevitably, on every game engine project in existence, and I think it’s a good example of a situation where implementation choices are not always as cut and dried as conventional wisdom would have you believe.

The contemporary random number generator of choice is the snappily-named Mersenne Twister. It has an unimaginably large period (2^19937 − 1), good equidistribution characteristics, and has passed many different randomness tests. It is also reputed to be faster than many runtime library implementations of rand(). I know that it has been used in game projects, as well as more widely in the scientific research community. So why isn’t using the Mersenne Twister an automatic slam dunk decision that nobody should ever think about?

First off, it is not suitable for cryptography, because observation of a number of results allows an attacker to predict future random numbers. This isn’t normally a concern for me in the projects that I write, and it usually is not a concern in game engines in general. So I’ll put this concern aside.

A somewhat more practical concern involves the amount of storage used for the state of the random number generator. It needs to store 624 32-bit words of state, which is about 2.5 KB per generator instance. This doesn’t sound like a lot, but when you consider that one might want to use this algorithm in a tightly resource-constrained environment (say, on an embedded processor), even a couple of K can be a concern. An application may have multiple random number generators, which would exacerbate this problem. (For example, in a game engine, you might have one random number generator for gameplay events and things that would need to be reproducible in a replay system, and another for things like special effects which do not impact gameplay.) The Gamasutra article linked above notes that the buffer size can be reduced for a tradeoff in the period of the random number generator — I didn’t rigorously check this idea against the original paper, but it sounds like a reasonable compromise.
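The multiple-generator setup is easy to demonstrate concretely: CPython’s random.Random is itself an MT19937 instance carrying the 624-word state described above, so one generator per purpose costs roughly 2.5 KB each. The stream names and seed below are invented for illustration:

```python
import random

# Each random.Random instance is an independent Mersenne Twister (MT19937)
# with its own 624-word state, so these two streams never interact.
gameplay_rng = random.Random(12345)  # replay-critical events; seed is recorded
effects_rng = random.Random()        # cosmetic effects; free-running

# Interleaving draws from the effects stream does not disturb the gameplay
# stream, so re-seeding gameplay_rng reproduces its sequence exactly.
a = []
for _ in range(5):
    effects_rng.random()             # cosmetic noise, ignored by replays
    a.append(gameplay_rng.randrange(100))

gameplay_rng.seed(12345)
b = [gameplay_rng.randrange(100) for _ in range(5)]
assert a == b  # the replay sees identical gameplay randomness
```

This is exactly why a replay system records only the gameplay seed: the cosmetic stream can drift freely without desynchronizing the replay.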

A final concern involves execution speed. The algorithm generates 624 numbers at a time, and stores them in the state array. The process of extracting numbers out of the state array is pretty straightforward — it’s just a bunch of logical operations and shifts. When the supply of generated numbers is exhausted, the generation process is run again. However, the generation process is considerably more expensive than the extraction process. The result is a situation where the execution time of generating a random number may vary considerably from call to call. This is often undesirable, because it makes the performance characteristics of code using the Mersenne Twister harder to understand. (For example, when looking at profiling statistics, hotspots might “migrate” based on what section of the code had to stop and generate a new set of 624 numbers.)
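The refill-then-extract pattern described above can be sketched as follows. This is an illustration of the cost structure only: the buffer here is filled by a simple LCG rather than the actual MT19937 recurrence, and the class and method names are my own invention.

```python
class BatchedRNG:
    """Sketch of the refill-then-extract pattern: cheap per-call extraction,
    with an expensive whole-buffer regeneration every BUFFER_SIZE calls."""

    BUFFER_SIZE = 624  # MT19937 regenerates 624 words at a time

    def __init__(self, seed=1):
        self._state = seed & 0xFFFFFFFF
        self._buffer = []
        self._index = self.BUFFER_SIZE  # force a refill on first use

    def _refill(self):
        # The expensive step: regenerate the entire buffer in one pass.
        buf = []
        s = self._state
        for _ in range(self.BUFFER_SIZE):
            s = (1664525 * s + 1013904223) & 0xFFFFFFFF  # stand-in recurrence
            buf.append(s)
        self._state = s
        self._buffer = buf
        self._index = 0

    def next(self):
        # The cheap step: pull one cached value. Every 624th call silently
        # pays the full refill cost, which is where profiling hotspots
        # appear to "migrate" between call sites.
        if self._index >= self.BUFFER_SIZE:
            self._refill()
        value = self._buffer[self._index]
        self._index += 1
        return value
```

In this toy version, 623 out of every 624 calls to next() are a couple of array operations, while the 624th runs a 624-iteration loop, mirroring the variance described above.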

So the moral of this story is that, like always, a tool should be selected based on a project’s particular needs and limitations, and not necessarily based on some objective notion of quality. Oh, and another take-away would be that sometimes the best tool for a job is actually a set of techniques. In this case, the humble linear congruential generator may act as a worthy complement to the otherwise stellar Twister.
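For comparison, here is a minimal sketch of such a linear congruential generator, using the well-known Numerical Recipes constants (the class name and interface are my own). Four bytes of state and constant per-call cost make it a plausible complement for non-critical randomness, though it is not a statistically strong generator:

```python
class LCG32:
    """Minimal 32-bit linear congruential generator (Numerical Recipes
    constants): tiny state, constant per-call cost, weak statistics."""

    def __init__(self, seed=0):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        # x_{n+1} = (1664525 * x_n + 1013904223) mod 2^32
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

    def randrange(self, n):
        # Scale by the high bits; the low-order bits of a power-of-two
        # modulus LCG cycle with short periods and should be avoided.
        return (self.next() * n) >> 32
```

A reasonable division of labor, under the reasoning above, would be the Twister for anything needing statistical quality and something like this for high-frequency throwaway draws.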

A Game So Nice, I Played It Twice

Puzzle Quest recently joined the elite ranks of “games that I have played to completion more than once.” Months ago, I had played and finished the Nintendo DS version of the game — I enjoyed it so much that when the Xbox Live Arcade version was released in October, I bought it again and replayed it. I recently polished off the campaign mode and got all of the achievements (with the help of Sandy, who earned probably the hardest one, Master Craftsman), so now I think I can safely say that I’m done with Puzzle Quest.

I briefly thought about doing some analysis on optimal play at the top of the game board, since I think it’s the one aspect of the game that’s dominated by chance more than any other. I also haven’t really found any good existing analysis out there — I only found analyses of character builds, along with speculation that Puzzle Quest is a game that’s “vulnerable” to optimal AI play in multiplayer. (I disagree, because the many different character builds and spells make it difficult to truly play optimally. I frankly find it hard to conceive of a Puzzle Quest AI that can play the match-3 game optimally and utilize all spells correctly and with maximum effectiveness.) I think that I’m lacking the motivation to follow through with a project like that, but we’ll see. Apparently the PC version is moddable, and the game is driven by XML files and Lua scripts, which should make it easier to find information about piece drop probabilities.

There aren’t that many games that I play more than once — I have such a huge backlog that I naturally gravitate towards trying to clear some more of that, rather than playing an old game again. I know that I’ve played Jagged Alliance 2 and the Fallout games multiple times, but beyond that, I’m having a hard time coming up with additional substantive (>1 hour) games that I’ve replayed.

Poincaré’s Prize

I recently finished reading the book Poincaré’s Prize, by George Szpiro. The book covers the history of topology and of Henri Poincaré, the attempts to prove his famous conjecture, and the techniques that were developed along the way. Proving the conjecture was one of the seven Millennium Prize problems that were published by the Clay Mathematics Institute, along with million-dollar bounties for each. A proof of the conjecture was finally completed in papers published by the enigmatic Grigori Perelman in 2002 and 2003. This achievement, though, was surrounded by several strange occurrences:

  • Perelman refused to accept a Fields Medal for his work, becoming the first person to ever refuse the honor. In an article in the New Yorker, he suggests that his refusal was motivated by a perceived lack of ethical standards in the mathematics community.
  • Perelman also refused to submit his work in accordance with the Millennium Prize judging criteria. As of now, the prize remains unclaimed.
  • A pair of Chinese mathematicians released a controversial paper around the time that other mathematicians published papers on Perelman’s work (which were intended to clarify his work and present a full proof of the Poincaré conjecture). The Cao-Zhu paper initially contained language claiming that it was, essentially, the first proof of the Poincaré conjecture. This is a position that has not been accepted by the mathematics community at large. Additionally, a section of the Cao-Zhu paper was determined to be plagiarized from the work of the other aforementioned mathematicians.

Even though I haven’t ever really been seriously interested in topology (nor displayed much aptitude for the frequently mind-bending concepts involved), I thoroughly enjoyed the book. In reading it, I began to feel that perhaps math educators (at all levels) were wrong in focusing their attempts to interest students in math solely on applied examples. I honestly can’t remember any math teacher or professor with whom I’ve interacted making much of an effort to interest students in pure math. Granted, careers in pure math are few and far between, but it seems wrong to practically steer students away from the field.

Through the book I also learned of the existence of arxiv.org, an e-print archive maintained by Cornell. I guess my prior searches for academic papers always wound up leading me to individual school sites, and not this archive. (They also appear to have a somewhat hostile attitude towards indexers!) I may have to spend some quality time plowing through the computer science sections to find articles to read — I’m sure I could find lots of things to interest me in there.

Apart from that, having been bitten by the math history bug, I now have The Artist and the Mathematician sitting on my book queue. I had read the short book Chance, by the same author, recently as well, and thought enough of it that I figured I’d give this other book a shot too.

In Pursuit of Perfection

A few weeks ago, Sandy and I went to see The King of Kong, which I can wholeheartedly recommend to just about anyone. As reviews have noted, it’s really more a movie about the people involved rather than the focus of their competition. In the movie, there is a segment that is an interview with one of the judges at Twin Galaxies, in which he discusses the workload of the job, and the backlog of submissions that he had to work through. I thought one of his comments was interesting — something along the lines of, “Most people never get a chance to see a world record being broken — I’m lucky, because I see it happen all the time.” While most people would not place the same importance on a video game world record as they would on, say, a track-and-field event, the sentiment still holds — it is exciting to see something, anything, that’s better than anyone else has done.

I had seen sites like the Speed Demos Archive before, and had watched some of the shorter and/or more amusing movies. (Seeing Morrowind being completed in 7 minutes, 30 seconds, via exploitation of spells and items, is a hoot.) The amount of effort that goes into some of these videos is staggering — for short games, massive amounts of iteration on a game quickly add up to a large amount of time spent recording videos. For longer games, just getting through them at all, without costly mistakes, is an achievement — never mind the meticulous planning process that precedes any world-record attempt.

A little bit of reading revealed an important bifurcation in the “speed run” scene — the demos shown on the Speed Demos Archive are done without any software assistance, and are played in real time. They are demonstrations of how quickly a human can play games. A second class of speed runs, known as “tool-assisted speed runs,” uses a variety of techniques to try to complete a game in the absolute minimum possible time. Some of these techniques include:

  • Slow motion and replay in emulators, to achieve “perfect play.” Damage is only taken in cases where it shortens the length of the run, or allows for shortcuts to be taken due to “damage bumping.”
  • Bug abuse. Almost any bug is fair game for exploitation, it seems, and speed runs frequently abuse these.
  • “Luck manipulation.” This is, to me, the most impressive technique, and one that shows the lengths to which speed runners will go in order to shorten a demo. Many older games have “flaws” in their use of random number generators, such that the game’s randomness can be exploited by someone determined enough. In short, luck manipulation aims to exploit the behavior of a game’s random number generator for player benefit. For example, this can be used to avoid random encounters in an RPG, ensure that enemies drop certain needed items when killed, or provoke enemies to appear from certain locations onscreen or at certain times. An example of this technique from Darkwing Duck is shown in the image at the right (rehosted from the TASVideos site): depending on the frame at which the enemy is killed, different items are dropped. Sometimes people go so far as to disassemble the ROM of a game to determine how the random number generator works, in order to construct a faster run. Another example of this technique is an amazing run that produces a Monopoly win in 30 seconds.
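To make the frame-dependence idea concrete, here is a deliberately toy model, not taken from any actual game: a per-frame RNG whose state at the kill frame selects the drop. Every name, constant, and item in it is invented.

```python
# Toy model: many 8-bit-era games step a global RNG once per frame, so the
# frame on which an enemy dies determines which item it drops.

def toy_rng(frame, seed=0x55):
    """Hypothetical per-frame RNG: advance a byte-wide LCG once per frame."""
    state = seed
    for _ in range(frame):
        state = (state * 5 + 1) & 0xFF
    return state

DROP_TABLE = ["nothing", "small heart", "big heart", "1-up"]  # invented items

def drop_for_kill(frame):
    # The drop is the RNG state at the kill frame, reduced mod the table size.
    return DROP_TABLE[toy_rng(frame) % len(DROP_TABLE)]

# A runner "manipulates luck" by delaying or hastening the kill a few frames
# until the desired item appears:
drops = {frame: drop_for_kill(frame) for frame in range(4)}
```

Under this model, shifting the kill by a single frame changes the drop, which is exactly the lever a frame-by-frame emulator gives a tool-assisted runner.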

A site known as TASVideos hosts a huge collection of these videos. I’ve spent some time watching a number of these videos, and I enjoyed them quite a bit, particularly for the games that I had played in my youth, where I am familiar with how the game is normally supposed to proceed. Here are some of my recommendations for tool-assisted speed runs worth watching:

Your Arsenal

For my birthday (tomorrow), Sandy got us a pair of season tickets for the Anaheim Arsenal of the NBDL (at a very reasonable price, I might add). I’m pretty excited about this, as I’ve never been a season ticket holder for any sport. Ticket prices for the Clippers and the hated Lakers are quite high — when you look at the number of games in a package, and the fact that you’re buying for two, it can get expensive real quick. (I have to admit that when I was living in the Bay Area, I got spoiled by the Warriors’ ridiculously low ticket prices. You used to be able to get lower-bowl end tickets for $18, of which, let me tell you, I took full advantage.) I also can’t imagine being a season ticket holder for an MLB team — there are just way too many games to attend! The 24-game home schedule for the Arsenal seems quite reasonable in comparison. I don’t know what the playoff structure for the NBDL is, though — I just know that apparently there is one, because last year I flipped over to the NBA channel on Sirius one day, and they were playing the championship game.

Looks like the first home game is on November 24th, versus the Utah Flash. Now I need to get up to speed on the teams and players down low.

Windows Live Writer

I’ve been trying out Windows Live Writer for a little bit here, because I’ve heard that it’s quite good. While I think the WordPress interface is just fine for writing and editing posts, it is kind of nice to have a better client user interface when composing posts.

So far my experience has been pretty positive, although I haven’t used any of the media-related features — I’ve really just been using it as a better editing environment for posts. I’m impressed that it interoperates pretty much perfectly with WordPress, as far as I can tell. It’s actually a little strange to see Microsoft striving to integrate with something that’s not governed by an international standards body.

Windows Mobile 6 Upgrade available for AT&T 8525

Looks like it was just posted this evening. I had coincidentally decided to check to see if it had finally been released, and came across this lengthy forum thread of frustrated phone users where the release of the update was eventually discovered.

I haven’t installed it yet, but I’m curious about the changes and improvements that have been made. I’m hoping that Internet Explorer will be a bit more stable and capable — I’ve noticed that the version included in Windows Mobile 5 has issues with pages that use JavaScript and such, and my experience with Opera Mobile wasn’t much better. (And don’t even get me started on Minimo…)