Erik Novales

Game and Software Development, plus other stuff


Ludum Linguarum: Aurora

June 3rd, 2016 · No Comments

(Ludum Linguarum is an open source project that I recently started, and whose creation I’ve been documenting in a series of posts. Its purpose is to let you pull localized content from games, and make flash cards for learning another language. It can be found on GitHub.)

In this post, I’ll talk a little bit about Ludum Linguarum’s support for some of the Aurora-engine-based games that are out there. The Aurora engine was BioWare’s evolution of its earlier Infinity Engine, and was used in quite a few games overall.


There are quite a few games with large amounts of text that were produced with the Aurora engine (including one that I worked on), so it seems quite natural to try and target it for extraction. The text in these games can also be categorized in some ways that I think are interesting, in the context of language learning – there are really short snippets or words (item names, spell names, skill names, etc.), as well as really lengthy bits of dialogue that might be good translation exercises. Additionally, there’s quite a bit of official and unofficial documentation out there around its file formats.

Goals for Extraction

The raw strings for the game are (mostly) located inside the talk table files. However, just extracting the talk tables would lose all context around how the strings are actually used in the game. For example, the spell names, feat names, creature names, dialogues, and so on, are all jumbled together in the talk table. It sounds like a small thing, but I feel that creating a taxonomy (in the form of “lessons”) would make a big difference in the usefulness of the end product. Unfortunately, it also makes a huge difference in the amount of effort needed to extract all of this data!

How it all went

I spent quite a bit of time writing file-format-specific code, for things like the TLK talk table format, the BIF and KEY packed resource formats, the ERF archive format, and the generic GFF format. On top of that, there was code to deal with the dialogue format that gets serialized into a GFF.
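To give a flavor of what that file-format code involves, here is a minimal sketch of reading the talk table (TLK V3.0) header in F#. The field layout follows the publicly available format documentation, but the record type and function are simplified stand-ins rather than the project's actual code.

    open System.IO
    open System.Text

    // Simplified stand-in for the TLK V3.0 header (the 20 bytes at the start of the file).
    type TalkTableHeader = {
        FileType: string              // "TLK "
        Version: string               // "V3.0"
        LanguageId: uint32
        StringCount: uint32
        StringEntriesOffset: uint32
    }

    let readTalkTableHeader (reader: BinaryReader) =
        let fileType = Encoding.ASCII.GetString(reader.ReadBytes(4))
        let version = Encoding.ASCII.GetString(reader.ReadBytes(4))
        let languageId = reader.ReadUInt32()
        let stringCount = reader.ReadUInt32()
        let entriesOffset = reader.ReadUInt32()
        { FileType = fileType
          Version = version
          LanguageId = languageId
          StringCount = stringCount
          StringEntriesOffset = entriesOffset }

    // Usage (the path here is just an example):
    // use reader = new BinaryReader(File.OpenRead(@"C:\NeverwinterNights\dialog.tlk"))
    // let header = readTalkTableHeader reader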

I started with the original Neverwinter Nights, and then moved on to Jade Empire. The console-based Aurora engine games used some variant file formats (binary 2DAs, RIM files, etc.) that needed a little extra work to deal with, but there was enough information about these available on the Internet that I was able to support them without too much hassle.

Once I had the basic file parsing code in place, it was just a matter of constructing the “recipe” of how to extract the game strings. This mostly involved sifting through all of the 2DA files for each game, looking for columns that represented “string refs” (i.e. keys into the talk table database) – extracting dialogues was much simpler since they were already in their own files, and their contents were unambiguous.
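To make that concrete, here is a rough sketch of what one of those recipes could look like. The record type, the helper function signatures, and the 2DA/column names are illustrative stand-ins, not the project's actual API.

    // Each entry names a 2DA, the column holding string refs, and the lesson
    // that the resulting cards should land in. (All names are illustrative.)
    type StringRefSource = { TwoDAName: string; Column: string; Lesson: string }

    let recipe = [
        { TwoDAName = "spells";  Column = "Name"; Lesson = "Spells" }
        { TwoDAName = "feat";    Column = "FEAT"; Lesson = "Feats" }
        { TwoDAName = "classes"; Column = "Name"; Lesson = "Classes" }
    ]

    // getCell:  2DA name -> row index -> column name -> cell contents, if any
    // rowCount: 2DA name -> number of rows
    // lookup:   string ref -> localized text, if present in the talk table
    let extractStrings (getCell: string -> int -> string -> string option)
                       (rowCount: string -> int)
                       (lookup: uint32 -> string option) =
        [ for source in recipe do
            for row in 0 .. rowCount source.TwoDAName - 1 do
                match getCell source.TwoDAName row source.Column with
                | Some cell ->
                    match System.UInt32.TryParse(cell) with
                    | true, stringRef ->
                        match lookup stringRef with
                        | Some text -> yield (source.Lesson, text)
                        | None -> ()
                    | _ -> ()
                | None -> () ]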

Comparison between C# and F# implementations

I had basically written all of this file parsing code before (in the C# 2.0 era, so without LINQ), but this time around I was writing it with F#. I found it very interesting to compare the process of writing the new implementation, with what I remember from working on Neverwinter Nights 2 more than 10 years ago.

The F# code is a lot more concise – I would estimate on the order of 5-7x. It isn’t quite an apples-to-apples comparison with what I did earlier (for example, serialization is not supported, only deserialization), but it’s still much, much smaller. I suspect that adding serialization support wouldn’t be a huge amount of additional code, for what it’s worth.

Record types and list comprehensions really help condense a lot of the boilerplate code involved in supporting a new file format, and match expressions are both more compact and safer when dealing with enumerated types and other sets of conditional expressions. I also got lots of good usage out of Option types, particularly within the 2DA handling, where they very neatly encapsulated default cell functionality.
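As a small illustration of that last point: a 2DA cell lookup can return an Option and fall back to the file-level default (in the 2DA format, "****" marks an empty cell, and a file can declare a default value). The TwoDA record below is a simplified stand-in for the real type, not the project's actual code.

    // Simplified stand-in for a parsed 2DA; None models an empty ("****") cell.
    // Assumes every row has one cell per column.
    type TwoDA = {
        Default: string option
        Columns: string list
        Rows: string option [] []
    }

    // Look up a cell by row index and column name, falling back to the
    // file-level default when the cell is empty or the row is out of range.
    let tryGetCell (twoDA: TwoDA) (row: int) (column: string) : string option =
        match List.tryFindIndex ((=) column) twoDA.Columns with
        | None -> None
        | Some columnIndex ->
            if row >= 0 && row < twoDA.Rows.Length then
                match twoDA.Rows.[row].[columnIndex] with
                | Some value -> Some value
                | None -> twoDA.Default
            else
                twoDA.Default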

But I think the thing that accounts for the biggest difference between my old C# implementation and the new F# implementation is the range of functional operations and data types available – or, to put it another way, the lack of LINQ in my C# implementation. If LINQ had been available at the time I was working on Neverwinter Nights 2, I think my code would have looked a lot more like the F# version, with liberal use of Select()/map() and Where()/filter(). These operations replace very verbose blocks of object construction and selective copying, often with a single line, which is an enormous savings in code size and an improvement in clarity.

I feel like there is still a lot of bespoke logic involved, for extracting the individual bits and pieces of each format, but that doesn’t seem to be avoidable – the formats are not self-describing, and it seemed like it would be overkill to try and construct a meta-definition of the GFF-based formats.


Overall, I was pretty pleased with how this went. While it was a decent amount of work to support each file format, once that code was all written, the process of creating the game-specific recipe to extract strings was pretty straightforward. There weren’t really any surprises in the implementation process, which was definitely not the case for the game that I’ll talk about in my next set of posts.

Tags: Development · Games · Ludum Linguarum

Ludum Linguarum: The Simple Stuff

June 2nd, 2016 · No Comments

(Ludum Linguarum is an open source project that I recently started, and whose creation I’ve been documenting in a series of posts. Its purpose is to let you pull localized content from games, and make flash cards for learning another language. It can be found on GitHub.)

When I started this project, I figured that support for individual games would fall into one of a small set of categories:

  • Low effort, where the strings are either in a simple text file or some sort of well-structured file format like XML, where many good tools already exist to pull it apart.
  • Cases where the file formats, while bespoke, are well documented, and where there may be tools and code that already exist to parse the file formats.
  • The really hard cases – ones where there isn’t a lot of (or any) extant information about how the game stores its resources, and extracting strings and metadata about them is more of a reverse-engineering exercise than anything else.

In this post, I’ll talk very quickly about a few really simple examples of games that I was able to knock out very quickly: King of Fighters ‘98, King of Fighters 2002, Magical Drop V, and Skulls of the Shogun.

King of Fighters ‘98 and King of Fighters 2002

I started on some of the other supported games first, but then I decided to take a little break and see whether there were any games out there that would be really trivial to support. I just started browsing through my Steam library, and realized that fighting games were probably a good candidate – they contain limited amounts of text, but are definitely globalized.

Both of these games use the Xbox 360 XDK’s XUI library formats to present their UI. (I determined this by the presence of some files and directories with “xui” in their name.) All of the strings in the game are inside a file conveniently named strings.txt inside the data directory.

This is a tab-delimited format with just four columns – a key for the string, a “category” comment field, and then one column for each supported language: “en” for English, and “jp” for Japanese. (It’s interesting that the country code for Japan was used rather than the language code “ja” – I’m not sure whether that was intentional.)

In this case, it’s super simple to extract all of the strings, because the format is simple and there’s only one place I need to look to find them all. I simply read in the file, and directly map the key column to the per-language text for each card.
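A rough sketch of that extraction in F# follows; this is not the actual plugin code, and the card shape is simplified to a key/language/text tuple. The column order is as described above.

    open System.IO

    // strings.txt is tab-delimited: key, category comment, English text, Japanese text.
    let extractCards (path: string) = [
        for line in File.ReadAllLines(path) do
            let columns = line.Split('\t')
            if columns.Length >= 4 then
                yield (columns.[0], "en", columns.[2])
                yield (columns.[0], "ja", columns.[3])
    ]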

(It’s worth noting that King of Fighters XIII doesn’t use the same format or engine, so I wasn’t able to just add support for it using the same code.)

Magical Drop V

Adding support for Magical Drop V just involved reading some XML files within its localization subdirectory, and massaging them slightly to remove invalid and undesirable text. For example, ampersands were not escaped in the XML files, which caused the .NET framework’s XML parser to complain. I also stripped out some obvious placeholder values (“<string placeholder>”).
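A sketch of that cleanup step might look like the following; the element name and the exact placeholder check here are assumptions for the sake of illustration rather than the game's real schema.

    open System.Text.RegularExpressions
    open System.Xml.Linq

    let loadLocalizedStrings (path: string) =
        let raw = System.IO.File.ReadAllText(path)
        // Escape bare ampersands that aren't already part of an entity,
        // so that the .NET XML parser stops complaining.
        let sanitized = Regex.Replace(raw, @"&(?![A-Za-z]+;|#\d+;)", "&amp;")
        XDocument.Parse(sanitized).Descendants(XName.Get("string"))
        |> Seq.filter (fun element -> element.Value <> "<string placeholder>")
        |> (fun element -> element.Value)
        |> Seq.toList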

Overall, it was really quite simple to add support for this game, with the game-specific code only running to about 50 lines.

Skulls of the Shogun

Skulls of the Shogun is a game built on XNA and MonoGame, and actually uses the .NET framework’s globalization support to localize its strings. Thus, I was able to use the framework’s support for loading satellite assemblies to pull out both the string keys used to refer to the strings, as well as the content itself, quite easily.

I actually spent more time determining that I had to load the assemblies using the reflection-only context, in order to keep my library and console application bitness-independent, than writing the rest of the code to support this game!
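As a rough illustration (not the project's actual code), pulling the key/value pairs out of a satellite assembly in the reflection-only context looks something like this on the .NET Framework:

    open System.Collections
    open System.Reflection
    open System.Resources

    // Load a satellite assembly without executing any of its code, then read
    // the string key/value pairs out of its embedded .resources blobs.
    let readSatelliteStrings (path: string) =
        let assembly = Assembly.ReflectionOnlyLoadFrom(path)
        assembly.GetManifestResourceNames()
        |> Array.filter (fun name -> name.EndsWith(".resources"))
        |> Array.collect (fun name ->
            use stream = assembly.GetManifestResourceStream(name)
            use reader = new ResourceReader(stream)
            reader
            |> Seq.cast<DictionaryEntry>
            |> (fun entry -> string entry.Key, string entry.Value)
            |> Seq.toArray)

    // Usage (hypothetical path to a German satellite assembly):
    // readSatelliteStrings @"SkullsOfTheShogun\de\SkullsOfTheShogun.resources.dll"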

Tags: Development · Games · Ludum Linguarum

Ludum Linguarum: The Tools

June 1st, 2016 · No Comments

(Ludum Linguarum is an open source project that I recently started, and whose creation I’ve been documenting in a series of posts. Its purpose is to let you pull localized content from games, and make flash cards for learning another language. It can be found on GitHub.)

When I started working on Ludum Linguarum, I decided to use it as an opportunity to exercise what I had been learning about the F# language on the side. This might seem like kind of a strange decision out of context, but there were a few reasons why I felt that this made sense:

  • I already had a good bit of familiarity with the .NET stack, having spent a good chunk of my years in the gaming industry writing tools in C#.
  • Because of the frequent use of C# in games and editors, I felt that there would be a greater likelihood of me finding useful, easy-to-integrate libraries and documentation for reverse engineering games than on other stacks.
  • I write Scala at my day job, so I figured that I would be reasonably well-equipped to deal with the functional programming aspects of the language, even if I had never really touched OCaml before.

At the very beginning of the project, I was working on learning F# on my commute, using Mono and MonoDevelop on an old netbook that I threw Ubuntu on. This worked (in that it is totally possible and viable to write F# and .NET code on non-Windows platforms), but later on I got a proper new laptop, threw Visual Studio 2015 on it, and never looked back. The added benefit of doing this, of course, was that, running under Windows, I could easily install and run the games that I was reverse engineering.

The benefits

All in all, I have been very pleased with my decision to use F#. Using a functional-first language let me construct composable, easily-testable pipelines, and I feel this really saved me a bunch of time as the project grew. The language is very similar in capabilities to Scala for application code, albeit with significantly different syntax and a slight verbosity tax.

When I think back to similar code I’ve written in the past, I feel that my F# code is more concise, easier to understand, and with less room for bugs to creep in, compared to C++ and C#. This applies both for simple parts of the code, as well as much more complex parts. In a future post, I’ll go into this in some more detail.

I would go so far as to say that the thing that slowed me down the most was when I strayed furthest from the functional style, and just used the full console application and the full game data as my testbed. (The reason I did this is that it can be a bit of a pain to construct test data that is compact, concise, and doesn’t include any actual copyrighted material.) As long as I moved at a reasonable pace and built up a decent test corpus, things worked out well.

Project setup

Initially, I used the standard .fsproj and solution setup in VS. The project was set up as a plugin-based system, where all main build outputs were copied into a single output directory, and NUnit test projects were simply run in-place. This worked OK, but as I got closer to actually releasing the first version of the project, I decided that it would be better to migrate the project to the FAKE build system and Paket dependency manager. (Using those makes it simpler to keep dependencies up-to-date, and hopefully easier for the curious or motivated to build and run the project.)

I used the open source F# Project Scaffold, and reconstructed my old project setup. It took a little bit of experimentation, but I was able to get up and running pretty quickly. I did run into an issue where the recently-released NUnit 3 wasn’t yet supported by FAKE, and I did have to do some legwork to get everything building with F# 4 and .NET Framework 4.6.1, but it wasn’t too bad. Now I have a very simple system to build, test, and package the project for release.
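For reference, a stripped-down FAKE build script in the style of what the Project Scaffold generates looks something like the sketch below. This is against the FAKE 4.x API that was current at the time, and the target names, globs, and output paths are illustrative rather than the project's actual build script.

    #r "packages/FAKE/tools/FakeLib.dll"
    open Fake

    let buildDir = "bin"

    Target "Clean" (fun _ -> CleanDir buildDir)

    Target "Build" (fun _ ->
        !! "src/**/*.fsproj"
        |> MSBuildRelease buildDir "Build"
        |> Log "Build output: ")

    Target "Package" (fun _ ->
        !! (buildDir + "/**/*")
        |> Zip buildDir "release.zip")

    "Clean" ==> "Build" ==> "Package"

    RunTargetOrDefault "Build"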

The packaging step is particularly important – I don’t have a lot of extra time to spend on overhead like manually making builds and uploading them, so it’s much easier for me to change the way I work, and change my project to conform to some existing conventions. One example of this is that the console program used to have no project dependencies on its plugins – they were copied as a post-build step into a separate plugins directory in the build output. This was done out of a bit of a purist mindset (and was what I had done on some other projects in the past) – but when I migrated to FAKE, this presented some problems, as it was difficult to duplicate that exact behavior. The solution was to simply abandon purity, adjust the way that I did things, and just add project dependencies from the console application to the plugins. Realistically speaking, anyone developing a plugin is probably going to have the full source handy anyway, so why get hung up on this?

Other libraries

So far, I’ve pulled in just a few other libraries. One is sqlite-net, a SQLite wrapper, and another is CommandLineParser, to allow me to construct verb-driven command line handling in the console application. I spent a little while wrestling with both, but now I have a couple of wrappers and things generally set up in a way that works well. (I actually switched back and forth between the old version of CommandLineParser and the new beta one, and wound up sticking with the new beta as it fixed at least one annoying crash relating to help text rendering when using verbs.) I also wound up adding the venerable SharpZipLib library for zip archive support.


In summary, I’m glad that I now have a setup, using FAKE and Paket via the F# Project Scaffold, that is good for rapid development in Visual Studio, has good testing support, and offers one-line release packaging and deployment. There were a few bumps along the way in arriving at this setup, but I can wholeheartedly recommend it to anyone working in this ecosystem.

Tags: Development · Games · Ludum Linguarum

Introducing Ludum Linguarum

May 31st, 2016 · No Comments

I’ve been working on a side project for some time now, and it’s gotten far enough along that it’s worth releasing it, and discussing it. It’s called Ludum Linguarum (LL) – a little awkward, yeah, but I figured that a unique name would be better in this case than spending a lot of time trying to find an available-yet-expressive one.

What does it do?

Well, it’s intended to be a tool for extracting localized resources from games, and then converting them into language learning resources. In other words, the end goal is that you can turn your Steam library (full of games that you bought for $0.99 in some random bundle) into material to help you learn another language.

The current version pulls strings from games, and turns them into flash cards for use with Anki (and compatible apps). LL supports 21 games right now, and the goal is to expand that over time.

Why write something like this?

Well, it involves two things that have always interested me (games and languages), and as far as I know, nothing else like this exists! (subs2srs is a tool in a similar vein, but it generates flash cards from subtitled videos instead.) I figure you might be able to get a little extra motivation and drive by learning another language in the context of gaming.

Another reason is that the vocabulary of games is often well off the beaten path of most language courses – I don’t think that Rosetta Stone or even Duolingo is going to tell you that “magic missile” is Zauberfaust in German. There aren’t that many opportunities to learn this stuff otherwise – think of it like professional vocabulary, but for a really weird job.

I also find cultural differences interesting, and that includes the way that game content gets translated. Seeing how colloquialisms and “realistic” conversation get translated is really interesting to me – I get a huge kick out of learning that platt wie Flundern is how someone chose to translate “flat as a pancake.”

Finally, game content in itself is an interesting treasure trove where you can often see the remnants of things that were tried and abandoned, or cut in order to get the product to the finish line. And naturally, some of the most common types of remnants are text and audio.

Next Posts

I’m going to spend the next few posts talking about the development of Ludum Linguarum, and writing the code to extract strings out of the first few games it supports. There were quite a few interesting problems that came up while getting to this point, and a few interesting tidbits and trivia that I can share about some of the supported games.

Tags: Development · Games · Ludum Linguarum

Open Live Writer

January 21st, 2016 · No Comments

This is just a test post to try out Open Live Writer on my blog. I used to use the old Live Writer a bit, and was glad to hear that it had recently been open sourced.

So why am I all of a sudden interested in blogging again? Well, I have a few articles that I’d like to write, relating to a little side project that I’ve been working on, and I really like the WYSIWYG and native-client feel of Live Writer versus the WordPress admin UI.

Stay tuned! 😀

Tags: Uncategorized

Compacting a VHD

April 12th, 2015 · No Comments

I was looking to back up some VHD containers that I use to store files in Windows, and needed to trim one of them down before it would fit under the OneDrive 10 GB upload limit. Since it was a dynamically expanding VHD, just removing files from the container wasn’t sufficient to reduce the actual size of the VHD file. Once I was done, I needed to unmount the drive, and then compact it using the diskpart utility. Here are the steps I followed:

  1. Run the diskpart command from a command prompt.
  2. Enter select vdisk file="path to VHD file".
  3. Enter attach vdisk readonly.
  4. Enter compact vdisk. This will compact the VHD file, and might take a little while.
  5. Finally, enter detach vdisk and then exit. This will detach the VHD file and exit diskpart.

Once this is done, your VHD size should be reduced to the minimum necessary to store the files within!
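If you need to do this regularly, the same steps can be dropped into a script file and run non-interactively with diskpart’s /s switch. Adjust the path to point at your own VHD file:

    rem compact-vhd.txt
    select vdisk file="C:\path\to\container.vhd"
    attach vdisk readonly
    compact vdisk
    detach vdisk
    exit

Then run diskpart /s compact-vhd.txt from an elevated command prompt.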

Tags: Computing

Analysis of Yakuza 5 Hack Videos

March 12th, 2015 · No Comments

I happen to be pretty hyped over the upcoming US release of Yakuza 5 — I’m a big fan of the series’ odd mix of ridiculous melodrama, wide variety of activities and minigames, and really satisfying combat. So naturally, after the localization was announced, I went around looking for videos of the game to watch. First off, I found this amazingly comprehensive and lovingly-assembled survey of the whole series — it’s not really related to the rest of this post, but if you’ve never seen these games it’s worth watching to get a glimpse of how unique they are, and what’s so appealing about them to me.

Then, I found some videos of some hacks that someone apparently made to the game, to allow the player to play as Haruka and another female character (Mai — no idea what her place in the story is). During the normal story arc, there is a chapter where you play as Haruka — however, her fights are rhythm games and dance battles, not the sort of bare-knuckle brawls for which the series is famous. This hack instead allows you to play as these characters during other chapters of the game, where you engage in tons and tons of fistfights. And, somewhat surprisingly, if you watch the videos, it looks pretty good!

So, putting on my ex-game developer hat, what do these videos tell us about the way the game is built, and why this was possible? And is adding a new playable character to the game as simple as these videos make it seem? Here are some of my observations and speculation on how this works, and its limitations.

  • The female and male characters must be animated using the same skeleton. Basically, because all of the combat animations that these characters are using are the same ones that the standard playable characters use, Haruka and Mai must be built and animated on the same basic skeletons as Kiryu, Saejima, Akiyama, and Shinada. This is a little surprising to me, but it goes a long way toward explaining why the female characters in this series always seem to…uh, have a somewhat mannish feel to them. I’m guessing that, for the original PS2 games, this was done to save memory, and then brought forward because it worked well enough and making unique skeletons would have required duplicating an already-large animation set.
  • Haruka and Mai are missing a lot of animation metadata. The most obvious case is when Haruka goes to light up a relaxing cigarette after beating the tar out of countless schlubs, just like uncle Kiryu.

    The smoke from the cigarette comes out of Haruka’s chest — or, more accurately, from the origin (0, 0, 0) of the character. There’s a missing animation attachment point in Haruka’s metadata, and the game engine falls back to the origin. Interestingly, Mai seems to have this attachment point — the smoke appears in the correct place for her.

    Another example of missing metadata is this HEAT attack with a bowling ball — it pops away from Haruka’s hand and looks like it’s stuck on her nose. And if you freeze-frame a similar HEAT attack with a beer bottle, you can see that the bottle looks like it’s stuck on her lips.

    I believe there are also missing camera focus points — for example, at the end of this HEAT move, the camera seems to be focused on the origin point of Haruka, and her face is off camera. If I remember correctly, this move looks different when performed by one of the other player characters — the camera tracks the head and it’s in the frame.

  • They’re also missing a lot of animations. The easiest case to spot is that Haruka and Mai’s faces remain completely expressionless, and possibly unblinking, during fights — they don’t have any combat “barks” (voice + facial animation), and they don’t play any reaction or pain facial animations as they lay waste to their foes. While it kind of lends a comic tone to the video, this would definitely not be acceptable for an officially supported character. It just looks strange.
  • The game’s IK seems to work OK with these characters. I had kind of assumed that the engine supported IK, given that a lot of the close combat grabs in the game look pretty good. When Haruka grabs a thug by the hand, that’s a pretty strong signal to me that they’re doing some limited IK, because if they weren’t, you would probably see a gap in the throw animation as Haruka’s character is physically smaller than, say, Kiryu’s. Another example of this is Mai kicking the sign stuck on a thug’s head — it’s just too unlikely that it would look good without IK support.

    Note that there are still some cases where it looks like they don’t normally use IK, and just rely on the animations fitting the sizes of the characters — Haruka lifting up a thug looks pretty bad, as her hand is nowhere near the thug’s chest.

  • Haruka and Mai’s hair is not built to be animated during combat. In the case of Mai, her hair basically doesn’t move at all. And Haruka’s hair physics object was clearly conditioned to look good during movement animations, but not tuned at all for anything that would look like combat, with its frequent flips, tumbles, falls, and dashes. It’s all over the place constantly.
  • Both characters seem to be using Akiyama’s move set. But I can’t tell whether this is just a convenience, a deliberate stylistic choice on the part of the author, or a sign that none of the others would work. I think it would be kind of funny to see them using Saejima’s brawling moves, though.
  • Surprisingly, there was no content protection on the game assets. Presumably the author of these videos was able to simply pull out the PS3 HDD, and modify the files directly on the hard drive to point a character definition to Haruka or Mai’s models. It’s a little surprising to me that these were left unprotected, but perhaps Japan has less societal anxiety about hot coffee than the US. Maybe I should have a look at the installed data, to see if I can verify any of my conjecture here.

In closing, I think these are really neat, fun videos to watch, and that it would be very cool if female characters in future Yakuza installments were able to fight and brawl. But there are enough rough edges and missing content in this hack that it should be clear that making them fully playable is not just a matter of flipping a switch (or deciding to change the story) and suddenly having Haruka powerbombing fools alongside her uncle Kiryu. There’s a lot of missing content and additional polish that would need to go into making Haruka and Mai fully first-class fighting characters in the game.

Questions or comments on my analysis are welcome!

Tags: Development · Games

Operation Stop Junk Mail

June 1st, 2014 · No Comments

I am sick and tired of receiving junk mail. It wastes my time, it wastes resources, and it generally has no redeeming value whatsoever. Even worse, I feel that for some classes of junk mail (stupid stuff like balance transfer checks, which I will never ever use), I need to take special care to shred it to avoid identity theft or scammery. So, I’m going to try to do everything I can to stop junk mail from being sent to me, and document everything that I’ve done in the hopes that it gives other people some ideas on how to stem the tide of garbage hitting their mailbox.

The first step on this journey is the FTC’s “Stopping Unsolicited Mail, Phone Calls, and Email” page. Here you will find:

  •, which is a site created by four major U.S. credit reporting companies to allow you to opt out of pre-screened credit card offers. You can opt out for a period of five years electronically — to opt out permanently, you need to mail in a signed form (which is a ridiculously weaselly requirement that is just trying to raise the pain threshold for truly opting out). Considering how much junk mail I get that consists of credit card offers, this seems like a great place to start. Note that unlike USPS mail forwarding, every individual in the household will need to opt out.
  • The government’s “do not call” registry, at While this doesn’t actually address junk mail, it’s such a basic quality-of-life improvement that it’s worth including anyway.
  • The Direct Marketing Association’s “Mail Preference Service” site, DMAchoice ( This lets you opt out of several categories of junk mail. They also have an “e-mail preference service” which alleges to reduce unsolicited commercial e-mail.

Beyond that, you’ll need to start on some of the other companies with which you probably do business, and which sell your name and address to “marketing partners.” The primary ones that I’m focusing on are banks, credit card companies, and airlines, since those seem to constitute most of the garbage offers I get in the mail. The general rule of thumb is that the “opt-out” switches tend to be hidden in each company’s “Privacy Policy” section of their site — if you can’t find any way to opt out of slimy, sleazy marketing in the normal account settings, check their privacy policy. (I’m guessing that there’s a legal reason for this, but I haven’t dug into the specifics.)

I’m going to start with these and see how it goes. Hopefully this will eradicate a significant amount of hassle and wasted time and resources!

Tags: Operation Stop Junk Mail

Tinfoil hat time

January 8th, 2014 · No Comments

I just thought I would share an amusing anecdote from my time at Netflix, related to wi-fi networking in our office. Because many devices that support Netflix have built-in wi-fi, we would test streaming with wi-fi connections to ensure that the experience was still satisfactory. Pretty standard stuff.

At one point, though, we ran into a mysterious problem with wi-fi networking on one of our supported device types. Thinking that perhaps there was an issue with the wi-fi adapter on the device, or the signal from the router, someone did a WLAN survey to see how many wi-fi networks were visible from our corner of the office.

It turns out that there were no fewer than 150 access points visible from there. Yes, 150 access points. That’s what happens when you have a huge number of people, working on a huge array of different devices, with different network configurations, and in different working environments.

Of course, this discovery led to more than a few jokes about sterilization, CIA mind control, and space madness.

(It turns out that the root cause of the problem didn’t have anything to do with wi-fi interference — it was actually a firmware bug on the device, which was triggered by a separate firmware update to the wireless access points in our building. Pretty crazy!)

Tags: Computing · Development

23andme Thoughts

July 8th, 2013 · No Comments

Sandy and I recently decided to get our DNA analyzed through 23andme. The service genotypes DNA from your saliva, which you send to the lab after collecting it in their “spit kit.” After a few weeks, you get access to a set of results containing health and ancestry information for yourself. The presentation is through a fairly slick web app, with what seems to be pretty good documentation and bibliography for the claims that are made, and an easy-to-navigate interface.

The information available is kind of a mix between useful statistical data (risk factors for certain diseases, whether or not you are a carrier for certain diseases, etc.), and what I think of as “science-flavored astrology” (seeing which celebrities share your maternal or paternal haplogroups, or seeing the percentage of neanderthal DNA you possess).

There are a range of health risks for which their data suggests I am statistically at an elevated risk — similarly, there is another set of risks for which I am at decreased risk. These range from type 2 diabetes, to certain types of cancers, to Alzheimer’s. Some of the probabilities allow you to select a particular ethnicity (presumably selecting for a specific experiment or set of experiments whose results back the calculation), which is somewhat problematic or tough to interpret for someone like me from a mixed background. (23andme itself lets you report multiple ethnicities, but source data for certain health risks may only have involved cohorts of a single ethnicity.) Interestingly, the elevated risks for Sandy and me are mostly disjoint, which gives me hope that our daughter will inherit our advantageous traits while skipping our vulnerabilities. 🙂

You can also browse a set of “traits”, which are tests on non-disease characteristics. For example, I am apparently only 0.13–0.29 times as likely as the average European to develop male pattern baldness — fingers crossed! I also apparently have a genotype that frequently results in not being able to taste certain bitter flavors, which perhaps explains some of my tastes in food and drink.

There’s also an ancestry aspect to the service, which does a bit of analysis to show where your distant ancestors (500+ years ago) came from, and also provides a “relative finder” and family tree builder. You need to specifically allow the “relative finder” to find close relatives (or to be found as a close relative), apparently to reduce the likelihood of unpleasant surprises. This is interesting but I have no particularly close matches currently registered on the service — the closest are estimated as 3rd through 5th cousins.

The cost of the service is fairly modest — it’s now $99. On the one hand, it’s not at the level where I could call it “cheap” — however, given the breadth of information that is obtained from the test, I think nearly everyone would find something interesting or perhaps helpful from getting tested. To look at it in a very simplistic way, if you undertake any sort of successful lifestyle change (prompted by your genotype analysis results), and you wind up living just a few hours longer than you would have otherwise, it’s “worth it.” I also find the idea that this sort of information is now more readily available to the average person really fascinating — I guess I’m more focused on the potentially positive aspects of it, rather than potential privacy and/or insurability problems.

Tags: Uncategorized