Home Wireless Networking…

Our new apartment has a nice layout, but with regard to home wi-fi, there are a few key differences from our old place:

  • There are many more neighbors here using wireless networks, and they’re closer to us than in our old place.
  • The construction of the building itself may be contributing to the reception problem.
  • The signal from our main access point is passing through a few walls. In our old place, it just had to go through a ceiling to get to our PCs.

The end result of all of this is that, in our home office, the wi-fi reception has been a bit dicey ever since we moved in. It would work, but the signal would occasionally drop out, or the response time would not be as good as I would like. With Battlefield 3’s beta starting soon, I didn’t want to take any chances with a problematic network connection. I finally had a chance to do some tinkering and try to find a good solution to this problem, using components and parts that I already had lying around.
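Before trying hardware changes, it helps to put a number on “dicey.” A little probe like the sketch below, which just times TCP connects to the access point’s web interface once a second, makes dropouts and latency spikes obvious in the log. (The 192.168.1.1 gateway address and port 80 are assumptions for the sketch, not necessarily what your router uses.)

    // Minimal Windows/Winsock sketch: log TCP connect times to the router's admin page.
    // The gateway address (192.168.1.1) and port (80) are assumptions; adjust as needed.
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    #pragma comment(lib, "ws2_32.lib")

    int main()
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return 1;

        sockaddr_in addr = {};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);                          // assumed router web UI port
        inet_pton(AF_INET, "192.168.1.1", &addr.sin_addr);  // assumed gateway address

        for (int i = 0; i < 60; ++i)   // one probe per second for a minute
        {
            SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
            auto start = std::chrono::steady_clock::now();
            bool ok = connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0;
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                std::chrono::steady_clock::now() - start).count();

            std::printf("%s in %lld ms\n", ok ? "connected" : "FAILED (dropout?)",
                        static_cast<long long>(ms));

            closesocket(s);
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }

        WSACleanup();
        return 0;
    }

A TCP connect is used here instead of ping only to avoid ICMP/raw-socket permission issues; anything that exercises the wireless link on a timer would work just as well.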

The first thing I tried was to set up a WDS (Wireless Distribution System), with a wireless router (my old Buffalo WBR2-G54) connecting wirelessly to my Asus RT-N16, which was situated in a hallway. The RT-N16 had better, unobstructed line of sight to the computers in the home office. The WBR2-G54 was running Tomato, and the RT-N16 was running Tomato USB. The conventional wisdom is basically that for WDS to work reliably/at all, the same hardware (or same wireless chipsets) must be used on all nodes. I can now report that the conventional wisdom seems to be true — I was able to connect using WDS, but not reliably. One minute, the network would be working very well, with strong reception between my office PC and the access point in the hallway, and good transfer rates. The next, it would be completely kaput, with a reboot of the router seemingly necessary to get it to respond at all.

The next thing I tried was to flash both of my routers with DD-WRT, and then try out its repeater bridge mode. This would purportedly allow me to have two separate access points, with the one in the hallway set to use the other one as its gateway, and with all machines on both sides of the network on the same subnet. This sounds nice in theory — however, I wasn’t able to get it to work, and the tools and documentation available for troubleshooting in DD-WRT are somewhat minimal. I double-checked all of the setup instructions on the DD-WRT Wiki, but didn’t have much success — I could connect to each access point separately, but the bridging didn’t seem like it was working reliably.

At this point, I was seriously considering just running some cable from the main access point in the living room to the hallway, and hooking the RT-N16 up there. It might be a bit ugly, but it would definitely work, and the interference problems would go away, since the line of sight from the PC to the access point would be much more direct and unobstructed. Some new Cat-6 and some cable covers, and everything would be golden…

Finally, I decided to try the basic repeater mode in DD-WRT. I also shelved the idea of using both the Buffalo and Asus wireless routers — I just set up the RT-N16 to repeat the signal of the main access point in my place. I moved the Asus from the hallway into the office, to a spot with a clearer line of sight to the main access point in the living room and less interference. (The Asus is sitting near a window, across from a single exterior wall, behind which lies the main access point.) Once I straightened out all the little differences in setup (ensuring that the Asus was set to mixed B/G mode instead of B/G/N, due to the limitations of the main access point, that the wireless security settings matched, etc.), it all just started working. Devices in my living room can talk with those in my office, and the connection seems reliable and steady.

I could probably go back to using Tomato USB instead of DD-WRT, but at this point, now that it’s working well enough, I don’t want to mess with it for a while. Maybe later down the line I will add another wireless-N router near my main (802.11g) access point and see if repeating that signal will improve performance, but for right now I’m just happy to have nice reliable wireless networking going for my main PCs once again.

Memory Card Bugs (and a note about static analysis)

I was watching John Carmack’s QuakeCon 2011 keynote, and he mentioned that Rage was at the stage where they were creating cert builds and just fixing bugs like (paraphrased) “getting a multiplayer invite and pulling your memory card out.” Memory card bugs are one of those things that tend to be a big annoyance for game programmers, because of the number of asynchronous use cases that need to be handled, and because what are essentially supposed to be serial operations have to be tied to a game that may be doing many other things in parallel. (Memory card support was optional on the original Xbox, due to the guaranteed presence of the internal hard drive. Accordingly, hardly any games actually supported managing memory cards directly in-game.)
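To make the “serial operations tied to a parallel game” point a bit more concrete, here is a hypothetical sketch of the shape this code tends to take: the save is driven as a per-frame state machine, and the card can vanish between any two steps. All of the Card* calls and ShowDialog are invented stand-ins for whatever a platform SDK actually provides, not a real API.

    // Hypothetical save-to-card state machine, ticked once per frame alongside everything
    // else the game is doing. The Card* functions and ShowDialog are invented stand-ins.
    #include <cstddef>

    // Stand-in platform calls, assumed to exist for this sketch.
    bool CardIsPresent();
    void CardMountAsync();
    bool CardMountComplete();
    void CardWriteAsync(const void* data, std::size_t size);
    bool CardWriteComplete();
    void CardFlushAsync();
    bool CardFlushComplete();
    void ShowDialog(const char* message);

    enum class SaveState { Idle, Mounting, Writing, Flushing, Done, Failed };

    struct SaveOp
    {
        SaveState   state    = SaveState::Idle;
        const void* saveData = nullptr;
        std::size_t saveSize = 0;

        void Start() { state = SaveState::Mounting; CardMountAsync(); }

        // Called every frame from the main loop; never blocks.
        void Tick()
        {
            // The card can be yanked at any moment, so every state re-checks for it.
            if (state != SaveState::Idle && state != SaveState::Done && !CardIsPresent())
            {
                state = SaveState::Failed;
                ShowDialog("The memory card was removed. Your game was not saved.");
                return;
            }

            switch (state)
            {
            case SaveState::Mounting:
                if (CardMountComplete()) { CardWriteAsync(saveData, saveSize); state = SaveState::Writing; }
                break;
            case SaveState::Writing:
                if (CardWriteComplete()) { CardFlushAsync(); state = SaveState::Flushing; }
                break;
            case SaveState::Flushing:
                if (CardFlushComplete()) { state = SaveState::Done; }
                break;
            default:
                break;
            }
        }
    };

Every additional asynchronous event (a multiplayer invite arriving, the user popping open the system UI mid-write) multiplies the transitions that have to be handled, which is why these bugs tend to pile up right around cert time.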

Carmack’s mention of memory card bugs reminded me of a funny story from Obsidian. For the Onyx Engine, one of my coworkers was working on writing the save/load code and then fixing bugs in the system, including memory card bugs on Xbox 360. Many of these bugs were timing-specific, so he would remove and reinsert his test memory card to try to reproduce them. Eventually, though, the first memory card slot on his development kit broke from the repeated (and potentially forceful, given the need to hit specific timings) insertions and removals of the memory card. He had to switch over to the second slot on the kit — which, thankfully, survived until the project was over.

Another thing that came up in Carmack’s keynote was the use of static code analysis. He mentioned that id has drunk the proverbial Kool-Aid on static analysis, and that turning on the “/analyze” switch for Xbox 360 builds (a flag that the XDK compiler supports — normally I think you need the Ultimate version of Visual Studio) brought to light many issues with their codebase. I can also vouch for this — I used to run it semi-regularly at Obsidian, and every time I did, it sniffed out several subtle bugs. It’s really worth using if you have it available.
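For a flavor of what it catches, here is a contrived snippet of my own (not from any codebase I have worked on) with the kinds of locally plausible mistakes /analyze is good at flagging; the warning numbers in the comments are the usual ones, but treat them as approximate.

    // Deliberately buggy examples of the sort of defect /analyze tends to report.
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    void Example()
    {
        // Off-by-one: valid indices are 0..15, but the loop writes index 16.
        // (Reported as a buffer overrun, along the lines of C6386.)
        int scores[16];
        for (int i = 0; i <= 16; ++i)
            scores[i] = 0;

        // malloc can return NULL, and the result is used unchecked.
        // (Reported as dereferencing a potentially NULL pointer, C6011.)
        char* name = static_cast<char*>(std::malloc(64));
        std::strcpy(name, "player");

        // Format string mismatch: %s paired with an int argument.
        // (Reported as a format-string warning, e.g. C6067.)
        std::printf("high score: %s\n", scores[0]);

        std::free(name);
    }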

A Musing on Sirius Radio

On Sirius, why is Lithium (the ’90s rock channel) so terrible in comparison to the excellent First Wave channel? I listen to both regularly, and I’m still hearing unusual or rarely-heard tracks on First Wave. It’s great. On the other hand, on Lithium, I’m guaranteed to hear a steady diet of the same Soundgarden, Alice in Chains, and RHCP songs, over and over. I’m sick and tired of the lack of variety and lack of actual DJs on that channel — it’s amazing how much they add to First Wave and other channels (namely Sirius XMU).

The Kindle is really picky about USB cables…

I decided to try setting up Calibre on my machine to manage my e-books, and rather than use the e-mail sync functionality with my Kindle, I figured I would just sync it via USB. I connected it with another micro-USB cable that I happened to have plugged into my computer, but I had weird issues where any file I/O would cause the Kindle’s USB drive to unmount and then remount. This was extremely strange, so I tried removing all software from my computer that might be related to the problem: Virtual CloneDrive, VirtualBox, etc. This didn’t fix it. I updated the USB drivers for my motherboard, to no avail.

Finally, after a bit more Googling, I saw that some people had fixed similar problems by plugging the Kindle into a different USB port on their machine. That didn’t work for me, but swapping my cable for the official Kindle cable made it magically work. Very strange!

So for anyone who’s having problems syncing their Kindle via USB, make sure you’re using the official cable (or a high-quality cable, at least). I initially thought it might be a 64-bit Windows 7/Vista problem, because I had seen some other reports of problems there, but at least in my case it turned out to be the USB cable I was using.

Windows 7 Mobile Device Center not recognizing my phone over USB

I had some issues setting up my increasingly old and decrepit phone with my new machine – I installed the Windows Mobile Device Center, but it refused to recognize my phone when connected by USB. I was also having some other issues at the time, including an aborted attempt to install the proper USB drivers for my motherboard’s controller. I went through the following steps:

  • Uninstall Windows Mobile Device Center.
  • Uninstall the unidentified devices in Device Manager.
  • Reboot.
  • Install my motherboard’s “proper” USB driver (NEC).
  • Reinstall Windows Mobile Device Center.
  • Try connecting via USB again – it still failed, as before.
  • Slap in my Bluetooth adapter and connect via Bluetooth. When I did this, it seemed to install the mobile device.
  • Delete my two existing PC partnerships before setting up the new one. Now it all syncs correctly.

Annoyingly, I still cannot sync via USB, but at least the Bluetooth works. I probably could have skipped straight to that, but I’m so used to syncing via USB that I wanted to get it working. I guess I could start sifting through the RNDIS driver error logs and see if I can find anything there, but considering that this whole epic saga started because I needed to sync someone’s address to my phone so I could mail them a package, I don’t want to get sidetracked much further…

A Gross Overgeneralization

From David Chisnall:

If you find yourself optimizing your code, then it means that the author of your compiler has failed.

This is just very, very untrue. Even if you strike algorithmic optimization from the picture, code optimization is still a very important and useful skill to have no matter what sort of programming you’re doing. Knowing your target platform, knowing the behavior of your compiler or interpreter, and knowing your data (hat tip again to Mike Acton for driving this point home to me) can provide you with ideas on how to transform your code and realize massive performance gains.
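As one invented example of what “knowing your data” buys you: if a hot loop only touches positions and velocities, storing those contiguously instead of interleaved with everything else is exactly the kind of transformation a compiler will never make on your behalf.

    // Toy illustration (not from any real codebase) of a data-layout change the compiler
    // cannot make for you: the array-of-structs update drags every particle's unused
    // fields through the cache, while the struct-of-arrays version streams only the data
    // the loop actually needs.
    #include <cstddef>
    #include <vector>

    // Array-of-structs: the update below reads and writes only position and velocity,
    // but every other field comes along for the ride in each cache line.
    struct ParticleAoS
    {
        float px, py, pz;      // position (used)
        float vx, vy, vz;      // velocity (used)
        float r, g, b, a;      // color        (unused here)
        float size, rotation;  // render state (unused here)
        int   materialId;      //              (unused here)
    };

    void UpdateAoS(std::vector<ParticleAoS>& particles, float dt)
    {
        for (ParticleAoS& p : particles)
        {
            p.px += p.vx * dt;
            p.py += p.vy * dt;
            p.pz += p.vz * dt;
        }
    }

    // Struct-of-arrays: positions and velocities are packed, so the same update touches
    // far less memory and vectorizes much more readily.
    struct ParticlesSoA
    {
        std::vector<float> px, py, pz;
        std::vector<float> vx, vy, vz;
        // color, size, rotation, and material live in separate arrays this loop never touches
    };

    void UpdateSoA(ParticlesSoA& p, float dt)
    {
        const std::size_t count = p.px.size();
        for (std::size_t i = 0; i < count; ++i)
        {
            p.px[i] += p.vx[i] * dt;
            p.py[i] += p.vy[i] * dt;
            p.pz[i] += p.vz[i] * dt;
        }
    }

The compiler can unroll or vectorize either loop, but it cannot change the data layout; that decision, and the cache behavior that follows from it, belongs to the programmer.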

If you’re writing performance-critical code, no compiler is going to do everything for you – the best optimizer is still between your ears. The idea that poorly-performing code can be blamed on the compiler is simply naïve and defeatist.

Our Magnificent Bastard Tongue

Our Magnificent Bastard Tongue (Kindle version) is a book that my wife gave me a while back and that I finally got around to reading. It’s a bit of a strange book – it purports to challenge existing dogma about the origins of modern English, but it does so in a manner that seems too casual for academia proper, yet still too involved for most laypersons. The capsule summary of the author’s view is that Celts and Vikings are mostly responsible for some of the oddities of the English language, rather than the “punctuated equilibrium” that mainstream linguistic thought champions. The erosion of verb conjugations and the presence of the meaningless “do” are cited as examples of these oddities, which are found in precious few other languages.

As someone who knows very little about linguistics, I feel that I was able to understand the book’s arguments but not critique them – they seemed pretty reasonable, but without a more thorough background in the subject I’m hesitant to embrace them as truth. I did find it amusing and interesting that the author, John McWhorter, went on a bit of a tangent to attack one of the more intriguing bits of language-related theory that I had read about in college: the Sapir-Whorf Hypothesis. Back in college, I remember having some doubts about its veracity (namely, that it seemed unbelievable that a people could be limited in their ability to participate in the “modern” world just by virtue of quirks of their native tongue), but McWhorter brings up several other reasonable objections to the theory and its formulation.

All in all, I can recommend it as a thoughtful, dense, and short read. It’s unlikely to spark any epiphanies for the average person, but it’s an interesting book nonetheless.

The Xbox (1) Live Shutdown and Secret Weapons Over Normandy

A few months ago, Microsoft announced that it would be shutting down the Xbox Live service for the original Xbox on April 15, 2010. I haven’t played an Xbox 1 Live game in years, but this news still makes me a little wistful, because for the first time something that I’ve worked on will essentially no longer be available anywhere. I’m referring to the downloadable content for Secret Weapons Over Normandy – there were three packs, each containing a challenge mission and a new plane. According to this list, there were only 53 games on the original Xbox that even had DLC, and ours was among the earliest 20% or so of those. (Wikipedia doesn’t have a dated list for downloadable content, so I’m merely going by the release date of the original game, which may not be accurate – I seem to recall that the Yavin Station DLC for KOTOR only came out after the PC version shipped, which was many months after the Xbox version.) I remember that, at the time, it was still a novelty for a game to support DLC – I think the only game up until that point for which I personally had downloaded DLC was MechAssault.

At that point, publishers and developers hadn’t really figured out their DLC strategies yet – releases tended to come out at odd, uncoordinated times, and since you couldn’t bill users for DLC, there wasn’t a direct financial motivation to produce it. (I’m guessing Microsoft probably paid LucasArts for the DLC, but I have no real knowledge of this.) In hindsight, it seems a little quaint to produce DLC for a game like SWON, which had no Xbox Live multiplayer, when there was no way to monetize it. Then again, the time investment in producing the DLC was pretty modest – creating and setting up a new plane was a fairly simple process, and we had an awesome mission editor, SLED, that made it fast and efficient to create new missions. Comparing that process to content generation on current-gen games makes my head spin, to be honest.

The DLC was pretty much ready to go from the launch date – we had spent less time in certification and resubmission than we had planned, and so the DLC content was started and finished sooner than expected. I also seem to recall some feedback from Microsoft that ours was the first DLC to pass cert on the first try – I’m not 100% sure I’m remembering that right, but passing on the first submission was definitely something that they highlighted, and it was a nice feather in our cap. Our engine design and the relatively self-contained nature of the DLC definitely helped with that. (Certification in general was actually a breeze on that game – we passed SCEA cert on our first submission, and I think we only had minor fixes for SCEE and for MS. The project schedule was built around the assumption of two resubmissions for each platform/region.) I think the DLC came out one pack at a time, a couple of weeks apart, and all of it was out by early-to-mid December 2003.

My involvement in the DLC was pretty peripheral – I just played it a few times and gave a little feedback. At the time, I think I was working on the Japanese localization of the game (for PC and PS2), which needed some additional code and tool work. That version, incidentally, is the “final” version of the game – there were a couple of tiny bug fixes I made that missed the US and European releases, and of course none of the Japanese text and font rendering stuff was in the earlier releases.