So I recently got done burning a huge chunk of time finishing Persona 4 Golden on the Vita (with the “true ending”). I never played Persona 4 when it was originally released, so I figured that it would be a good game to pick up for the Vita. Overall, it’s a stellar example of “better than the sum of its parts.” There’s a lot to like, but I can also level a lot of critiques at the game.
- The story is fairly compelling. That is, for a game, it’s pretty good. It’s full of well-worn character stereotypes, and the two bonus dungeons really feel tacked on to this re-release, but the main story arc is satisfying enough. (Even if the resolution of the main story is telegraphed well in advance.) I guess the best way to describe it would be that it’s like a really, really long after-school special.
- There is a pretty solid set of interactions between the two parts of the game. The “high school time management” stuff has consequences for your equipment and powers in the dungeon crawling part of the game, and you’re motivated to do well in both parts of the game.
- The translation and voice acting are really quite good. Humor and nuance are carried through into English, and given the amount of text in the game, this is no small feat. The dialogue fits the characters, and fits the mood of the game very well.
- There is a ton of stuff to unlock and/or complete. You don’t need to do much of it to complete the game, but it’s enjoyable enough that you’ll be motivated to do a lot.
- The dungeon crawling becomes incredibly tedious as the game goes on. The dungeons are nearly all just tile swaps of each other, with few gimmicks or notable differences between them except the level and type of monsters you fight. Boring…
- The bestiary of enemies you fight is truly insane, and really feels like a huge disconnect from the story and theme. And, what’s worse is that nobody ever really comments on it! When a reanimated table is trying to kill you, you’d think somebody might find it at least a little funny. I realize that these enemies get carried over between games in the series, but it just seems completely out of whack. And none of the “themed” areas of the game have themed enemies, which seems like a missed opportunity.
- The combat system gets quite boring after a while — there aren’t enough twists and sub-systems to sustain 50+ hours of gameplay. The combat basically boils down to: 1) determine elemental weaknesses, 2) spam elemental attacks of that type, 3) perform all-out-attacks, 4) rinse and repeat. It feels like they even removed a little bit of complexity from P3 since I don’t think any creatures in P4 are resilient to all-out-attacks. None of the boss fights really change things up, either — the only variation is that you might need to heal or remove ailments at some point. There are no cases where the standard battle rules are subverted, or you are forced to use unusual aspects of the battle system — there are only a couple of “trick” encounters, and you don’t even need to recognize the trick to prevail.
- Along similar lines, there’s no real incentive or reason to mix up your party — the main character can fill in pretty much any missing powers via judicious use of Personas. I used Yosuke, Chie, and Yukiko for basically the entire game, because they were the highest level characters I had.
- There’s a decent amount of creepitude (Teddie is the #1 offender) and/or blatant fan-service, which just makes me roll my eyes. A lot of the movies in the game fit this description, actually.
- The social link system of the game is broad, but shallow. There aren’t really any meaningful decisions to be made when advancing someone’s social link, and once you complete it, there’s no meaningful interactivity or payoff beyond the dungeon crawling benefits (persona unlocks, battle abilities, etc.). You can’t pick between character development trees, or unlock mutually exclusive abilities, or anything like that. The re-release does make some attempt at flexibility by letting you respec yourself and your allies, but it’s not enough (and it takes too long to do in-game — you have to burn up a chunk of time every time you want to either get the card for a power, or respec one of your allies’ powers).
- You are also really incentivized to be a giant man-whore, in order to unlock all of the battle benefits for each party member. (There are, I think, two points in the game where man-whoring behavior is pointed out, but there are absolutely no consequences.) All of the benefits, like follow-up attacks, ability to withstand mortal wounds, ability to take fatal damage for the main character, and especially all of Rise’s party-wide boosts are ridiculously powerful, and it would be foolish not to unlock everything that you can.
- There are some story bits that are just kind of dropped on the floor, and left unexplored. The presence of Junes in Inaba, the fate of certain characters, Dojima’s story arc, and some of the “school life” stuff is left unresolved or ignored in the last third of the game. This is kind of disappointing, because I feel that the plot or story could be even more engaging with just a bit more effort.
In spite of all of the negativity above, I really enjoyed playing the game. I just think that with some extra polish it could move from “pretty great niche JRPG” to “amazing game that could be recommended to any gamer.” I suppose there’s always next time.
I’m totally done with P4G now, though. The thought of “New Game+” after playing for dozens of hours already is pretty scary…
A few months ago I decided to update my desktop PC to Windows 8. I am somewhat ambivalent about the new Metro/Modern UI, but I figured that I would update anyway just to be on the latest and greatest.
Unfortunately, after I updated, I started experiencing a lot of bizarre blue-screen crashes. At the same time, sometimes my machine would refuse to get through the BIOS startup at all, which seemed to indicate an issue with hardware rather than with the OS. I suspected it might be an issue with the SSD I had installed a while back (a Crucial M4), because the BIOS startup code would hang on drive detection. I tried updating the firmware for it, but that didn’t seem to do anything. Finally, after a more serious bout of being unable to boot for an extended period of time, I got frustrated and started trying to isolate things further.
I turned off my external Blu-ray writer (an ASUS BW-12D1S-U), and then all of a sudden I was able to boot consistently. I thought that was very strange, and then got to thinking that maybe it was because I had plugged it into one of the front USB 2.0 ports on my case, and maybe that was drawing too much power. The reason I had plugged it in there was that the USB 3.0 hub I had bought (a SIIG JU-H60012-S1) had always been kind of flaky, and didn’t seem to work well with Windows 8 — I checked the Device Manager, and it was listed as “Superspeed USB Hub (Non Functional)”, which was kind of off-putting.
It turns out that there is a firmware update for that USB hub that gets it to work correctly with Windows 8 (I’m guessing that may be more of a “gets it to work correctly at all” update). Applying that update allowed the hub to work in Windows 8, and, in turn, allowed me to plug my Blu-ray writer into it (a separately-powered hub) rather than directly to my computer. And (crossing my fingers) that seems to have solved my stability problems so far.
My layman’s guesses as to why the problems initially seemed to be related to Windows 8:
- There might be power-management changes in the USB drivers for Windows 8 that change behavior slightly and trigger problems with my particular setup.
- I might have switched where I plugged the drive in after I upgraded, not realizing that was what was causing the problem to begin with.
- I didn’t use the hub that often before I updated to Windows 8, and didn’t realize that the hub itself could potentially be screwing with the rest of my system.
I’m just glad that everything seems to be working now, and I can soldier on a bit further with my system (which is almost 3 years old now!)…
I recently picked up a Nokia Lumia 920, which I like quite a bit. I just figured I should try out the WordPress app for Windows Phone 8 — I might post more often if I can punch in a quick post or two on my phone!
I have been going through the process of obtaining an apostille certifying the recent birth of my daughter. An apostille is an international means of certifying the origin of a public document so that it can be used in another country – in this case, her birth certificate. We need it in order for her to obtain her dual citizenship from Finland. There is a good primer on apostilles, “The ABCs of Apostilles,” available from the Hague Conference on Private International Law. (Note that not every country is a participant in the “Apostille Convention,” so check to make sure that the country in question accepts this means of document authentication.)
The process is a bit involved, so I figured that it would be useful to put together a short article describing the steps involved. Note that the steps may vary depending on where you are – in my case, the instructions are tailored for people living in California (and more specifically, Santa Clara County).
- Obtain a certified copy of the birth certificate. Be warned that Google is infested with dozens of companies that try to obscure the “normal” government channels for doing this, and rip you off by charging you for things that you can take care of yourself. (As an example, one link that I clicked on wanted to charge me $39 as a “retrieval fee,” on top of the normal costs charged by the county. This is an outrageous skimming fee.)
For Santa Clara County, certified copies of the birth certificate (from birth through 1 year of age) are available from the Public Health Department, through the Department of Vital Records. Information can be found here. At the time of writing, the cost was $21.00 per copy.
Birth certificates older than 1 year must be obtained from the Santa Clara County Clerk Recorder’s office. The cost is still $21.00, although if you order them online or with a credit card, additional fees will apply.
- You will then need to have the signatures of the county health officials certified by the clerk recorder’s office. The reason for this is that the California Secretary of State’s notary section cannot certify these signatures – they can only certify a smaller set of signatures, such as those of the various county clerks. This will cost an additional $13.
As an aside, I was missing this crucial step, as it is not called out on the California Secretary of State’s Authentications information page, and none of the other government sites I read while researching this really mentioned it. While the SOS page does mention the limitations on what signatures can be authenticated by the Secretary of State, it does not mention that, even though the birth certificate is a certified copy, its signature cannot be authenticated by their office.
- Finally, you can send the certified document to the California Secretary of State’s notary section, along with a $20 check or money order, a self-addressed stamped envelope, and a cover letter indicating the country in which the document will be used. (In our case, that would be Finland.) Information can be found on the Secretary of State’s authentications page. The processing time, as of this writing, is 3-5 business days.
At the end of this process, you should have an apostille indicating that the birth certificate is an authentic document, valid for use in the country you requested.
I was having some issues with spam on my mail server (hMailServer), so I decided to set up SpamAssassin to filter things before they hit my inbox. While hMailServer has some native support for SpamAssassin, I figured I would write up the steps I used to get things running smoothly as a Windows service.
Note that this is on a Windows Server 2003 box – steps may vary for other versions.
- Download and install the Win32 binaries for SpamAssassin from JAM Software. This includes the spamd daemon, which normally runs as a command-line application. It is better to run it as a Windows service, so it can be launched automatically when the machine reboots, without having to log in and run it manually.
- Install the Windows Server 2003 Resource Kit Tools, if you haven’t done so already. This contains the srvany tool, which can be used to run an application as a service.
- Open a command prompt for the Resource Kit Tools. There’s a shortcut installed with the tools.
- Run instsrv SpamAssassin <path to Resource Kit Tools>\srvany.exe. This will install a new “SpamAssassin” service, linked to the srvany tool.
- Now, open up the registry editor (regedit). Create a new key under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SpamAssassin, called Parameters. (This step and the next two registry edits can also be done from a command prompt; see the reg commands summarized after this list.)
- Under the Parameters key, create a string value called Application. Set the value of this to <SpamAssassin path>\spamd.exe.
- Under the Parameters key, create another string value called AppDirectory. Set the value of this to <SpamAssassin path>.
- Go to the Services administrative tool, and start the new SpamAssassin service you created.
- Now, open up the hMailServer administrator tool, and go to Settings\Anti-spam. Click on the SpamAssassin tab, and check the Use SpamAssassin checkbox. Fill in the appropriate hostname (localhost should be fine, although you can use 127.0.0.1 if that doesn’t work), and check the Use score from SpamAssassin box. Finally, click on the Test… button to test your configuration. If a dialog box shows up with a response from the SpamAssassin service, you’re good to go.
- If not, check to make sure that the service is running, and that the port used (783, by default) is not blocked. Check that spamd is actually listening by running netstat -an and verifying that something (the spamd daemon) is using TCP port 783. You can also try enabling logging with spamd by creating an AppParameters string value under the registry key mentioned above, and setting it to -s file. This should result in a spamd.log file being created in the same directory as spamd.exe, and the information within might help you debug the issue.
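For reference, here is roughly what the service installation and registry steps above look like when done entirely from a command prompt. The install paths below are just the defaults I would expect, not something I have verified; adjust them to match wherever the Resource Kit Tools and SpamAssassin actually ended up on your machine:

instsrv SpamAssassin "C:\Program Files\Windows Resource Kits\Tools\srvany.exe"
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SpamAssassin\Parameters" /v Application /t REG_SZ /d "C:\Program Files\SpamAssassin\spamd.exe"
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SpamAssassin\Parameters" /v AppDirectory /t REG_SZ /d "C:\Program Files\SpamAssassin"
rem Optional: turn on spamd logging for troubleshooting, per the last item above
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SpamAssassin\Parameters" /v AppParameters /t REG_SZ /d "-s file"
net start SpamAssassin

The reg add commands will create the Parameters key automatically if it does not already exist, and net start is just the command-line equivalent of starting the service from the Services tool.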
I just spent the last couple of hours discovering that my U-Verse gateway’s DNS server (serving the internal network) decided, for fun, to persist old IP addresses for some of my computers. The end result was that, while their external/NATted access was fine, local network services that relied on DNS would fail (since those machines had been allocated new IP addresses). So, for example, Windows file sharing would still work, but pinging or trying to use P4 (Perforce) would fail.
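For what it’s worth, the check that exposed the problem looks something like this when run from another machine on the LAN (the hostname and addresses here are made up for illustration, but the pattern was the same):

nslookup officepc
rem the gateway answers with the stale address, e.g. 192.168.1.73
ping officepc
rem times out (or hits the wrong box), since the name resolves to the old address
ping 192.168.1.80
rem the machine's actual, newly leased address responds just fine

In other words, the gateway was handing out fresh DHCP leases while its DNS kept answering with the addresses from the old ones.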
I did find the awesomely named post, “The ATT U-verse 2Wire 3800 HGV-B. I am not a fan…” detailing other problems with it. Needless to say, I am not a fan of it either. I got this resolved (by simply locking those machines to the “incorrect” IP reported by DNS — my will to live was figuratively destroyed by this point), but the next time I run into a problem with it, I’m just going to shove another TomatoUSB-powered router behind it (to replace it as a wireless access point, basically), stick it in DMZPlus mode (more info here), and be done with it.
I’m pretty sure this will be one of a million “stories about Apple” that will be going up tonight, prompted by the death today of one of its founders, Steve Jobs. As with many of my peers, the Apple IIe was one of the first computers I ever used, and one that I spent a huge amount of time with in my childhood — mostly playing games, of course, but also doing some programming and other practical things. The Macintosh that my dad later bought, in turn, also saw a lot of use by me, although curiously my use of it tilted more towards the practical and less towards games and programming. (Games because they simply didn’t exist, for the most part, on the Mac. And programming environments were pretty rudimentary for a while — the development situation on the Mac at the time was definitely not friendly towards, or accessible by, 9-year-old kids.)
Of those two platforms, I would say that their influences on me were quite different. The incredible breadth and depth of games available on the Apple II platform really fanned the flames of my interest in gaming, which I would later turn into a professional career. The Mac, apart from the obvious innovations in user interfaces, introduced me to concepts like hard drives, laser printing (via PostScript), local area networking, WYSIWYG, and desktop publishing (an innovation that has become so ubiquitous that the term isn’t even used any more).
As a game developer, and someone who did some development on pre-OS X Macs, I wound up bearing a bit of a grudge against Apple. Their development environment, lacking protected memory, was incredibly unforgiving in many ways, and killed productivity. And Apple’s haphazard, ramshackle attempts at courting game developers were for the most part insulting, incomplete, and poorly supported. I generally stayed away from purchasing Apple products and MacOS for a long time, a streak that ended only recently with the iPhone (which is a pretty decent phone).
So, in total, Apple is a company whose products have been incredibly influential not only in the world at large, but to me personally. And when one of its founders kicks the bucket, I feel obligated to eulogize just a little bit. In parting, I’ll relate an Apple II gaming anecdote that I haven’t written about before:
When we were kids, my brother and I used to love playing a game called Micro League Baseball, on our Apple IIe. It was a baseball simulation game, with somewhat rudimentary graphics, but a wide roster of teams, and the ability to play head-to-head. The multi-player mode was hot-seat — for each pitch, the player whose team was batting would select an option, and then the player whose team was in the field would select an option. Since the options could include baseball trickery like stealing a base, or pitching out, the person who was not entering their option would look away from the screen and keyboard and cover their eyes. This was to prevent the gaming equivalent of stealing signs.
Our rivalry was quite intense, and it was quite a big deal to us to triumph in these games. (We didn’t really consider the relative strengths of teams we were using, apart from the ’27 Yankees being mega-powerful.) So I wound up doing something that was both smart and dumb. The smart part: I realized that the hollow case and keyboard design of the Apple IIe was such that key presses had distinct timbres to them — ones that could be distinguished quite easily, with a little practice. I quickly learned that I could steal signs and know exactly what my brother was doing, even though my hands were covering my eyes in adherence to the “rules.” If he pressed ‘3’ to steal a base, or ‘4’ to hit-and-run, I would know about it in advance. The key noises were so distinct that there was basically zero chance of making a mistake.
The dumb part: I took advantage of this too frequently, and he got suspicious after about the sixth time that I called for a pitchout and happened to catch him stealing bases. I had to ’fess up to my little trick.
I can’t remember whether covering our ears also became part of playing the game, or whether the other person had to leave the room, to ensure fairness and a level playing field.
Our new apartment has a nice layout, but with regards to home wi-fi, there are a few key differences from our old place:
- There are many more neighbors here who are also using wireless networks. They’re also closer to us than in our old place.
- The construction of the building itself may be contributing to the reception problem.
- The signal from our main access point is passing through a few walls. In our old place, it just had to go through a ceiling to get to our PCs.
The end result of all of this is that, in our home office, the wi-fi reception has been a bit dicey ever since we moved in. It would work, but the signal would occasionally drop out, or the response time would not be as good as I would like. With Battlefield 3’s beta starting soon, I didn’t want to take any chances with a problematic network connection. I finally had a chance to do some tinkering and try to find a good solution to this problem, using components and parts that I already had lying around.
The first thing I tried was to set up a WDS (Wireless Distribution System), with a wireless router (my old Buffalo WBR2-G54) connecting wirelessly to my Asus RT-N16, which was situated in a hallway. The RT-N16 had better, unobstructed line of sight to the computers in the home office. The WBR2-G54 was running Tomato, and the RT-N16 was running Tomato USB. The conventional wisdom is basically that for WDS to work reliably/at all, the same hardware (or same wireless chipsets) must be used on all nodes. I can now report that the conventional wisdom seems to be true — I was able to connect using WDS, but not reliably. One minute, the network would be working very well, with strong reception between my office PC and the access point in the hallway, and good transfer rates. The next, it would be completely kaput, with a reboot of the router seemingly necessary to get it to respond at all.
The next thing I tried was to flash both of my routers with DD-WRT, and then try out its repeater bridge mode. This would purportedly allow me to have two separate access points, with the one in the hallway set to use the other one as its gateway, and with all machines on both sides of the network on the same subnet. This sounds nice in theory — however, I wasn’t able to get it to work, and the tools and documentation available for troubleshooting in DD-WRT are somewhat minimal. I double-checked all of the setup instructions on the DD-WRT Wiki, but didn’t have much success — I could connect to each access point separately, but the bridging didn’t seem like it was working reliably.
At this point, I was seriously considering just running some cable from the main access point in the living room to the hallway, and hooking the RT-N16 up there. It might be a bit ugly, but it would definitely work, and the interference problems would go away since the line of sight from PC to the access point would be much more direct and unobstructed. Some new Cat-6 and some cable covers, and everything would be golden…
Finally, I decided to try the basic repeater mode in DD-WRT. I also shelved the idea of using both the Buffalo and Asus wireless routers in this — I just set up the RT-N16 to repeat the signal of the main access point in my place. I also moved the Asus from the hallway to inside the office, in a place that may have clearer line of sight and less interference to the main access point in the living room. (The Asus is sitting near a window, which is across from a single exterior wall, behind which lies the main access point.) Once I straightened out all the little differences in setup (ensuring that the Asus was set to mixed B/G mode instead of B/G/N, due to the limitations of the main access point, ensuring that the wireless security settings matched, etc.), it all just started working. Devices in my living room can talk with those in my office, and the connection seems reliable and steady.
I could probably go back to using Tomato USB instead of DD-WRT, but at this point, now that it’s working well enough, I don’t want to mess with it for a while. Maybe later down the line I will add another wireless-N router near my main (802.11g) access point, and see if repeating that signal will improve performance, but for right now I’m just happy to have nice reliable wireless networking going for my main PCs once again.
I was watching John Carmack’s QuakeCon 2011 keynote, and he mentioned that Rage was currently in the stage where they are creating cert builds, and just fixing bugs like (paraphrased) “getting a multiplayer invite and pulling your memory card out.” Memory card bugs are one of those things that tend to be a big annoyance for game programmers, because of the number of asynchronous use cases that need to be handled and the need to tie what are essentially supposed to be serial operations to a game that may be doing many other things in parallel. (Memory card support was optional on the original Xbox, due to the guaranteed presence of the internal hard drive. Accordingly, hardly any games actually support managing memory cards directly in-game.)
Carmack’s mention of memory card bugs reminded me of a funny story from Obsidian. For the Onyx Engine, one of my coworkers was working on writing the save/load code and then fixing bugs in the system, including memory card bugs on Xbox 360. Many of these bugs were timing-specific, so he would remove and reinsert his test memory card to try and reproduce the bug. Eventually, though, the first memory card slot on his development kit broke from the repeated (and potentially forceful, because of the need to try and reproduce specific timings) insertions and removals of the memory card. He had to switch over to the second slot on the kit — which, thankfully, survived until the project was over.
Another thing that came up in Carmack’s keynote is the use of static code analysis. He mentioned that id has drunk the proverbial Kool-Aid as far as static code analysis goes, and that turning on the /analyze switch for Xbox 360 builds (a flag that the XDK compiler supports — normally I think you need the Ultimate version of Visual Studio) brought to light many issues with their codebase. I can also vouch for this — I used to do this semi-regularly at Obsidian, and every time I ran it, several subtle bugs were sniffed out. It’s really worth using if you have it available.
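If you want to give it a try on a regular Windows build, it’s just an extra switch on the compiler command line (the file name here is only a placeholder):

cl /analyze /c MyFile.cpp

The analysis results show up as C6xxx warnings mixed in with the normal compiler output; in editions of Visual Studio that support it, there is an equivalent Code Analysis setting in the project properties.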
On Sirius, why is Lithium (the ’90s rock channel) so terrible in comparison to the excellent First Wave channel? I listen to both regularly, and I’m still hearing unusual or rarely-heard tracks on First Wave. It’s great. On the other hand, on Lithium, I’m guaranteed to hear a steady diet of the same Soundgarden, Alice in Chains, and RHCP songs, over and over. I’m sick and tired of the lack of variety and lack of actual DJs on that channel — it’s amazing how much they add to First Wave and other channels (namely Sirius XMU).