So, like most (all?) things that I work on, one of my biggest goals from the outset of designing dhQuest was to have as clear of a visual communication with the game’s many features and forms, etc., as possible.  (There’s lots more to say about this kind of work regarding my other recent game project, The Reunion, as well, but I’ll have to come back to that in a future post.)

dhQuest checks a lot of design boxes at once, stuck as it is between the worlds of pen-and-paper RPGs and digital gaming.  The obvious inspirations to some of the design work are the great pen-and-paper RPGs (I probably don’t even need to mention Dungeons and Dragons by name, but I will anyway), which set the visual standards for a large portion of the tabletop graphic design work, and many of the game’s mechanics as well.


The Player Character Sheet (for the old-school pen-and-paper gamers!) that lets players record their character and their progress as they play.


But the larger design effort, at least for now, was the digital component.  The game leans pretty heavily on this half at the moment, since the tabletop version remains to be turned into the multiplayer experience it was designed to be.

So for the moment, what we have is essentially a digital single-player experience, and one that straddles the line between game and web application.  As a designer, my goal was to make sure it looked just enough like both of those things that it didn’t accidentally fall fully into the category of being just one or the other.

So, here’s a quick look at some of the design and development stages that went into making the current version of dhQuest’s digital companion piece (and single-player game).


Quest icons, to give players an at-a-glance view of the availability or completion status of quests.

Since most of the interface for dhQuest was essentially a quest logbook (think World of Warcraft, etc.), it was important to make sure that each quest had a distinct graphic presence depending on its availability, or completion status, to the player.  The goal was to let the player know at even a quick glance which obstacles stand between them and any of their goals, and to satisfy them that they had either met or completed those requirements, in order to progress.  (And, breaking with the subdued color scheme, the “help” icon on the lower right is the one instance of white typography in the mix, to make sure it stands out to the player that needs more guidance.)


The resources (Time, Funding, Staff, and Support), and ending goals (Credibility, Network) that players record during their game.

As with any project, it was important to create a library of simple icons to communicate the stats that the player has available to them.  I’ve been happier with other icon sets of mine, but for the moment this seemed to do the job well enough.  And, though it’s probably lost on exactly no one, the shield icons were a nod to the sorts of heraldic visual flourishes that go hand-in-hand with the medieval fantasy aesthetic of older tabletop RPGs.


Icons representing the Team Members players recruit throughout the game. (IT, Library, Faculty, Student Body)

Finally, the player’s other goal is to assemble a local team of experts to their research group, and I wanted to design a simple visual icon for each of these.

The result–for the moment, anyway–is an interface that combines to look like this:


More soon, about the game itself, and the integration of these elements into the user interface.


Retro Projects Retrospective

So this will be the first of probably many retrospective looks at (somewhat, but never quite) completed projects of mine, from the not-so-distant past.  There’s a lot of history to cover, and I didn’t take a lot of notes about the process of the work that went into most of it.  Let’s get started on fixing that.

dhQuest: Planning for the Digital Humanities

So today I’m talking about a game that I helped design (design as in, “how it works”) a month or so back, in consultation with my two co-directors at Hamilton’s DHi, Angel Nieves and Janet Simons.  Long story short, Angel and Janet were running a course at DHSI 2015 on implementing new digital humanities programs, and wanted to try to “game-ify” their content, hoping that it might help people take on a more vivid perspective of what that experience entails.  That might be their own experience (when people choose starting scenarios similar to their own real-life circumstances), or entirely fictional (when you randomly generate a character to guide through this experience).

To skip right to the outcome for a moment, the game was ready (just barely) in time for this same year’s DHSI — as in, made entirely in about 2-3 weeks, from first concept to playable prototype — and we were lucky enough to have an extremely helpful bunch of guinea pig game testers in their class.  (And, to any of you who might read this, thanks again, hugely, for all of the time, feedback, and extremely helpful bug-hunting and feature suggestions.)

This was a bit of a new one for me, too.  Sometimes (well, okay, most of the time), I like to work in my own little vacuum, sitting here at the corner desk of a dark apartment, somewhere in the midnight hours of my own little insomnia vortex. This project, however, was one of the living examples, for me, of how gratifying it can be to work in concert with people as you hastily develop a project practically in front of its audience’s eyes.  “Release early, release often,” they sometimes say in software.  Well, this was definitely that.

The Concept

So I’ll dive into the proper “how it was made” in my next post(s), but to set the stage, the core concept/goals of this particular game/project were:

  1. We wanted to make a game that made the experience of brainstorming–and ultimately building–a digital humanities program feel “real” and, more importantly, immediate.  So, while some elaborate fantasy metaphor was tempting, and might have been fun, we wanted this to feel direct and as honest as a game might allow.
  2. But on that topic of fantasy, this is a game we’re talking about, and therefore thaumaturgy.  So, we needed to have at least a bit of fun with it, if only to spare our audience’s attention spans as they navigate through a game that is essentially just bureaucracy at its core.
    To that end, we went right back to our nerdy roots and chose Dungeons & Dragons as our inspiration.  In the earliest concepts for dhQuest, the idea was to have small groups (3-5) of players form what are essentially questing parties, deciding as a group how best to approach their tasks.  In the final tyranny of time and deadlines, this part fell away for the first year, but the idea of being the adventurer (complete with stats-crowded Character Sheets) remained.
  3. The game needed to be accessible to people of any level of familiarity with games, or the sorts of game tropes I ended up choosing for this game’s mechanics.  So, less about “winning” and more about either “experimenting,” or, for the hardcore gamers, “maximizing.”  Anyone could theoretically “win” this game with enough time invested, but the experience of either improving in a second session, or pulling off an impressive outcome on your first try — this would be the temptation and challenge of the game.
  4. Lastly (and this would be the big one), this game needed to travel.  So, while the idea of a purely pen-and-paper tabletop experience was grand, we’d have needed to have this game done at least a week earlier to print and travel with it, and then we’d be carrying an extra backpack full of heavy paper and cardstock, which probably wasn’t happening.
    Thus, the “digital version” was born, and eventually ended up becoming almost exclusively the way to play this game, for now.
    This required a great many additional things — things like a database that would house all of the game’s quests and rewards, along with a web application that knew how to read and enforce rules, and how to reward players at each step.  And, unsurprisingly, this project also ended up requiring a lot of graphic and web design work.  (Like, a lot, a lot.)

So, with all of the goals decided, I had my marching orders, and it became time to sit down and start designing and developing the thing.  (More starting in the next post.)

A quick look at a piece I’ve been working on in the background of real-life, as I get the time.  Far from finished.

My favorite boots — classic Doc Martens 1460s.  I’ve had a sturdy pair or 3 of these over the years, and have come to identify with them at a few different little phases of life–so I figured I’d give them a tribute in 3d model form.  (It might also be that I can see them all day/every day near my desk.)  I was looking for something a bit tougher to model, and this gave me an excuse to practice with tools I feel like I don’t use often enough — extrusions along curves/paths, for one (for the laces), and particle/”hair” rendering (the carpeting on the rug), and more composition/rendering (Blender/Cycles).

Some definite things to improve for sure — for one, I’m not sure why they’re still tied when they’re off — and this room is in dire need of some furniture!  More as this develops–and when it inevitably gets a low-poly treatment, to become the footwear of a game character (or two) of mine.  Would love any feedback/suggestions.

(And, as always, my love and full respect to Blender, for being both amazing and open source.)



It’s very strange and sad to lose Satoru Iwata.  Whether or not Nintendo has looked much like the rest of the crowd since the early 2000s (when his watch began), the one thing Nintendo has always looked like is Nintendo.  (And imagine how truly tough that is, both as an act to follow, and as a ship to steer in some extremely strange new oceans.)

I know a lot of people don’t always realize what an instrumental figure he was even before he was the head of the company. A genius programmer, an extremely creative developer, and (most importantly) a true lover of games that led even a titan of a company like Nintendo in a way that always tried to be true to the fans and the unique magic of their brand. So much of the gaming industry has gone another way, while Nintendo has always tried to remain itself — and I think fans and developers alike owe a lot to the risks and innovations Nintendo took on over the last decade+ of his leadership.

I’m not sure I can really say how much it’s meant to me that Nintendo has always been exactly what it is — and that that hasn’t meant that it’s ever become some stale relic, or gotten lost in the times.

So here’s to the guy that always took great care of the company (and its many franchises) that I’ve loved since I was a kid.

Video (by IGN), below:

A Farewell Tribute to Nintendo’s Satoru Iwata: http://go.ign.com/thankyouiwata

Posted by IGN Video on Monday, July 13, 2015

We left off last time with writing an algorithm (well, that’s a fancy word for it) for deriving a best-guess syllable count for “any” word.  Which involved a lot of rules…

And, obviously, these many (many) rules need testing.

While there are certainly better ways to test this (as in, automatically), I actually chose a very hands-on, “manual” way to test this as I was working.  And I’d actually recommend this approach, for a few reasons.

My quite-simple process was:

  1. Use an online “random word generator” (there are many available online, and I imagine most are about the same) to generate a list of 20 or so words at a time.
  2. Copy/paste those words into a text editor, and find/replace the “newline” breaks with quotes and commas, to make it into an array notation that we can feed into PHP.


    This adds a comma and close-quotes to the end of each line, and open-quotes to the next.

    (This requires adding the open quotes to line 1, and close quotes to line 20, by hand.  You might have fancier ways to do that last step automagically, which I’d be happy to learn.)


  3. Copy/paste that list of words into a PHP array:

And then iterate through them, running my syllable-counting function on each individual word, and echoing out each letter, and any rules-based decisions that accompany that letter (or string of letters).
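That test loop can be sketched like so — in Python here, for brevity, though the project itself is PHP.  The syllable counter below is a deliberately naive stand-in for my actual rules-based function (just counting vowels, skipping one that directly follows another), not the real thing:

```python
# Naive stand-in for the real syllable-counting function, for illustration:
# count vowels, but don't count a vowel that directly follows another vowel.
def count_syllables(word):
    vowels = "aeiou"
    count = 0
    prev_was_vowel = False
    for letter in word.lower():
        is_vowel = letter in vowels
        if is_vowel and not prev_was_vowel:
            count += 1
        prev_was_vowel = is_vowel
    return count

# The array of words pasted in from the random-word generator (step 3 above):
test_words = ["alphabet", "about", "social"]

for word in test_words:
    print(word, count_syllables(word))
```

The real version also echoes each letter and every rule decision along the way, which is where the verbose output below comes from.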

The output is very verbose, but hugely helpful to see why it made the decisions it did (and, of course, whether any of those needed adjusting):


I kept commented lines as a tally of how many words, out of 20 per test (and 100 on the last big test of that day) had needed fixing, and what rules I was adding.  (I should have been more careful to add more of those there.)

I also kept a far-more-important array of failed words, to come back to as another test pass after adding/changing rules.  (And once those issues were resolved, those words graduated up into the “fixed words” array, as a kind of log of my progress.)

These failed and fixed words didn’t have to be PHP arrays, of course, but it helped to be able to quickly feed those arrays into the algorithm, in place of the test words array, for review.  (That PHP loop up above doesn’t care how many words are in the array you give it, so it’s happy to go through your entire list of test words even if it gets quite long.)


The “failed words” array, which graduate up to the “fixed words” once the necessary rules are added. Useful for keeping a list of things to come back to.

There were a lot of other fixes and notes, too, but that’s a decent glimpse into the general approach.

Some of those are still on the to-do list, now a day or two later, but it’s great to know which ones represent which rules to add/change.

A couple of the successes (yay!):


Sample verbose output from my testing script — with the “-sm” ending rule at work in this one.



Properly ignoring two “e”s that might have otherwise made this incorrectly read as a 4-syllable word.

The above couple of examples show a few places where I was pleased (relieved?) to see the algorithm correctly making adjustments for the trickier couple of rules in English.  And while it’s definitely not the most scientific testing method, of the thousand-or-so words I fed this, 93.3% were guessed correctly, which is honestly better than the bar I’d set for calling this experiment a success (for now).  (Obviously luckier or unluckier choices for those random words would have produced rather different percentages, but hey, that’s the joy of sample sizes.)

And, with all of the inconsistencies in English, it’s worth bearing in mind that nothing is going to ever hit 100%.

Which brings me to:

A couple of the failures (boo!) to be fixed:


“de-” as a prefix breaks what it otherwise assumes is the single-syllable diphthong “ea”.  Prefixes (and suffixes) in general represent what could be a whole extra pass through a word, and possibly even a separate dictionary lookup (for, say, anything after the “de-” in a word that begins with it).


Come on, a prefix and a suffix?  And how often does “-ing” follow a vowel?! Now you’re just making stuff up, English! (Clearly I need to be splitting words on common suffixes, but in this case it wouldn’t be enough to evaluate this word even three times, removing its prefix and suffix both — “valu”, after all, is not a word!)

There definitely need to be prefix and suffix checks, for “de-” and “re-” and “-ness”, etc., and then checks against the word that’s left after splitting those off.  (So that, say, “preamble” checks for the word “amble” after removing the “pre”, and thereby learns that that “ea” vowel pair is not actually a single long-e sound, like “seam”, but instead a prefixed word — and therefore another syllable.  And, conversely, so that it either skips the check, or chooses not to add another syllable, when the prefix precedes a consonant, as in “premium,” since a prefix or suffix only matters to us if it affects the vowel/syllable rules.)
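The prefix-check idea might be sketched like this (Python for brevity; the project itself is PHP).  The `KNOWN_WORDS` set stands in for a real dictionary lookup, and the prefix list here is illustrative only:

```python
# Hypothetical sketch: split off a known prefix only when the remainder is
# a real word AND begins with a vowel -- i.e., when the prefix boundary
# would otherwise be misread as a single vowel pair, as in "preamble".
KNOWN_WORDS = {"amble", "animated"}   # stand-in for the real dictionary
PREFIXES = ("pre", "de", "re")
VOWELS = "aeiou"

def split_prefix(word):
    for prefix in PREFIXES:
        if word.startswith(prefix):
            rest = word[len(prefix):]
            if rest in KNOWN_WORDS and rest[0] in VOWELS:
                return prefix, rest
    return None, word

print(split_prefix("preamble"))  # ('pre', 'amble'): that "ea" is two syllables
print(split_prefix("premium"))   # (None, 'premium'): "mium" isn't a word
```

The vowel condition is what keeps “premium” from being split: the prefix only matters when it sits next to a vowel and changes the syllable math.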

And so on, and so on…

So, that’s the script so far, and with plenty more rules in that “failed words” array ready to write new rules for and try to smooth over.

Some of the last big remaining steps, as I can imagine them, will be to try to find a way to split up prefixes and suffixes (not always obvious, considering “reanimated” vs “rear”), compound words (“elsewhere”), and better rules about tense.  To some extent these might also need lookups from the dictionary (where “elsewhere” was in it, but the compound word “salesgirl” was not).

The downside there is that it starts to require more queries than might be reasonable for the task, given that it already queries our dictionary database once per word (either by a button click on the user’s part or, more taxing still, automatically each time the user stops typing for more than a second or so within a line).

In any case, since this entire algorithm component is purely a supplement to a professional database (which will ideally serve the huge majority of a user’s words), the accuracy of this humble tool already seems reasonable enough that I don’t mind incorporating it into the project as is, albeit with a couple of warnings and the promise of better results in days ahead.

These scripts all need a bit of cleanup (and security measures, since a database that isn’t just on my own hard drive will be involved), but I’ll make sure that the revised script (along with some basic PHP functions to call it and retrieve results from it), will be available in the source code along with the others, shortly — and in a Github repo, for anyone who wants to join the fun.

More soon.

So, last time we got some rhymes working, which was lovely.  While rhyming is a huge part of enforcing the structure of the poems we’re going to be creating, an equally important part is meter.

Which means we’ve got some syllables to count.

Now, fair warning that for this version of this (eventual) tool, we’re setting the bar fairly low and only counting syllables.  This is partially because (mercifully) the structure of the poems we’re analyzing doesn’t actually enforce stress patterns (like, say, the alternating stress “da DA da DA…” of an iambic pentameter sonnet, etc.).  But it’s also partially because we have to start somewhere, and syllables are a big enough question in themselves for now.

For the most part, CMUdict saves our day again, in that we can rely on that (great) work to derive syllable counts for every word in its dictionary.  It isn’t quite as simple as that (as I’ll cover below), but it’s a straightforward enough process to get what we need from it.

As I mentioned in the last section, we’re able to derive the number of syllables in a CMUdict word easily enough, simply by counting the vowel sounds.  (Briefly: a simple regular expression of the phonetic column lets us count how many times we find numerals, which only occur as markers for the lexical stress of a vowel sound.  Find a number, you’ve found a vowel — and, thus, a syllable.)
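In code, that trick is a one-liner.  Here’s a Python sketch (the project’s own version is PHP, using `preg_match`):

```python
# In CMUdict's ARPAbet notation, numerals appear only as lexical-stress
# markers on vowel phones -- so counting numerals counts syllables.
import re

def cmudict_syllables(phones):
    return len(re.findall(r"\d", phones))

print(cmudict_syllables("F AH0 N EH1 T IH0 K"))  # "phonetic" -> 3
```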

But the problem is that you don’t exactly want to be splitting and reg-ex-ing every word every time you want to count its syllables — and, worse, you can’t do that to every entry in the database every time you want to find a word of a particular syllable count.  In other words, it’s not easily query-able information.  And we want it to be.

My solution, simply enough, was to just derive that information once, for every word in the database, and store it there permanently as another column.

Time for some brute force.


This is far from glamorous code, I know, but this actually gets the job done quite nicely.  Simply put, we select every line from that dictionary (as in, all 133,803 of them!), split apart each result’s pronunciation field into an array, split by spaces, and then pattern match each of the word’s phonetic segments for numerals.  Every time we find a numeral, we increment a “syllable count”, per word, and then echo the whole database result again, along with that new syllable count number, setting it up as a new fourth column in what will become a new CSV file we can import, wiping out our old table for this new and improved one.

This is obviously as “brute force” as it gets, and probably something that people more familiar with SQL could do with a query.  I am not those people.  So, for now, for me, this works!
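For anyone curious, here’s a rough Python sketch of what that brute-force pass does (the original is a PHP script; the column layout here is my assumption, with the pronunciation as the last existing field and the syllable count appended as a new final column):

```python
# Sketch of the batch job: for each dictionary row, count the stress
# numerals in the pronunciation field and append that as a new column,
# producing rows ready to dump back out as a new CSV for re-import.
import re

def add_syllable_column(rows):
    out = []
    for row in rows:
        syllables = len(re.findall(r"\d", row[-1]))  # numerals mark vowels
        out.append(row + [str(syllables)])
    return out

rows = [["abrasive", "AH0 B R EY1 S IH0 V"]]
print(add_syllable_column(rows))
# [['abrasive', 'AH0 B R EY1 S IH0 V', '3']]
```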

The result is a big ugly screen of output whose source code, thankfully, is at least less ugly:


The fourth “column” (the number after the third comma) is our new syllable count, ready to SELECT on in MySQL. (And, fitting that this screenshot contains both “abracadabra” and “abrasive,” right?)

Ugly and brute force or not, in about a half-hour’s worth of work, we now have an effortlessly query-able syllable count for every word in this dictionary.  So, that’s our “syllable counting” needs settled, then, right?


Well, no.  Not exactly.

What if our word’s not in the database?

So, in the case of rhyming, it’s probably understandable enough that not every word you look for is going to be able to give you a good rhyme.


(“Orange,” for instance. Heh.)

But while failing to find a rhyme doesn’t necessarily “break” the application, not being able to get a syllable count for a line theoretically would.  The user could count their lines’ syllables by hand, sure, but they could have done that with pen and paper, too.  This is an application meant to help them structure their own poems!

(And, off the top of my head, I’d say rhyming is at least more subjective than meter, even if syllable counts are a bit slippery in and of themselves.  Maybe?)

So, we need a plan B on syllables, if we’re going to give the user at least a good “best guess” at the number of syllables in their lines.  My not-nearly-as-easy-as-it-seemed-when-I-thought-of-it answer: write an algorithm that parses “any” word for its syllable count.

This, it turns out, is a big job.  (Who’d have guessed?!)

What I’ll describe below is what I came up with after about 3-4 hours of revising, finally calling it “good enough for now,” with the absolute certainty of needing to come back and finesse it many times over in the future.

One thing I’m trying to keep in mind, as well: there’s also a definite “diminishing returns” problem past a certain point, given that even the duct-taped madness I’ve built so far is actually doing a pretty admirable job right now (about 90+-ish% accuracy, according to some also-questionably-accurate testing).

Probably the easiest way to present this is to document the process that I went through, thinking and adding rules to this.  So, to start with, the easy parts:

  1. Borrowing from our approach to parsing CMUdict’s entries for syllables, the biggest and easiest single step toward getting a reasonable syllable guess would be: “count the vowels.”
    From Wikipedia, “a vowel is a speech sound made by the vocal cords,” which, of course, lines up pretty well with the definition of syllable, where “a syllable is typically made up of a syllable nucleus (most often a vowel) with optional initial and final margins (typically, consonants).”
    So, if we were to take, say, “alphabet” — well, there we go.  a • lph • a • b • e • t.  Three vowels makes three syllables.  Cool!
    Well, that was easy.  We done here?
    What do you mean, “no?”
  2. So, “a • b • o • u • t” that rule…
    It’s easy to forget those pesky diphthongs (even for as fun of a name as they have).  And, of course, they’re everywhere — and, worse, they’re crazy inconsistent.
    Disclaimer: The people who actually study these things are probably trying to reach through the screen and strangle me right now for approaching this so amateurishly.  But the truth is, my entire English program (probably unsurprisingly) didn’t involve more than the occasional toe dipped into the waters of actual linguistics.  So, for as much as I love language (and syntax, and morphology, and etymology, and I’m going to stop listing things now) and all of that fun stuff (and I actually really do), mine is unfortunately the starry-eyed fascination of the complete amateur.  But we’re going to make it work anyway!
    I tend to be this sort of unreasonable “renaissance” person, notoriously bad at consulting the subject experts first.  So, consider this whole info-dump to be my “baited web” here, bringing more knowledgeable people to me by frustrating them on the internet until they speak up to correct me.
    Back to the point, we need rules that cover these little pairs.  The easiest rule, to start with (and again, a thoroughly broken rule to use by itself), is: “ignore (as in, don’t count as a syllable) a vowel that follows a vowel.”
    There.  Finished!


  3. Except that, applied too broadly, that last rule becomes kind of an “i • d • io • t • i • c” rule, of course.  (And I’m picking on “i” there as the prime offender of this rule.)
    With that in mind, my next step was to add on to the previous rule, as if in apology, simply: “…unless that last vowel was an i.”
    At this point, it was already doing a much better job.  (I hadn’t thought to record my tests at each step yet, and started only MUCH later into this process — I might still retroactively do this, actually, by taking rules apart from my algorithm — I think the test data would be interesting.)
  4. What’s that thing about “except after C?”  Well, it works here too.
    Take “s • o • c • i • a • l” for instance.  That “c” just cost us a syllable — and opened this algorithm up to something it inevitably needed all along, of course: awareness of a couple-or-few letters in each direction of the letter we’re looking at.
    Now we can say something like: “two vowels in a row is one syllable, unless the first vowel is an i… unless this vowel is an a, following an i, following a c.”
    This is obviously starting to get harder — and while I’d love to add something reassuring here about eventually finding some elegant underlying simplicity, the truth is, I didn’t.  (And I don’t think there is one.)
    English is, of course, a language derived from and constantly incorporating a great many other languages, with a great many rules, and obviously many of these differ wildly on a case-by-case basis.
    So, you might be reading that last rule and (quite correctly) shaking your head, saying “what about words like ‘associate’ vs ‘associative’” — and even between those words, the different acceptable syllable counts for them?

    Well, to that, I’d say there are really three ways to look at this:

    The first, and hardest: we could start flagging certain tricky vowels/strings as “multiple possibility” syllables, and recording multiple values for these, giving the end user some way to choose between them, to appease some strict syllabic enforcement in our final application… or…

    The second, and most lenient: to just not enforce these things.  We could far more easily just let this algorithm be what it really is: our best reasonable guess.  If it’s wrong, it’s not going to be crazy wrong, it’s just going to… I dunno, wiggle a bit?, on little rules and oddities like “is ‘comfortable’ three syllables or four?”

    The third, and most reassuring: to remember that this tool is actually just a last-resort supplement to a dictionary that has already solved this question the proper way (at great time and human effort) — by cataloguing these words by hand, accounting for the inconsistencies and multiple possibilities.  We can and will still look up each word a user types in that dictionary, and will get a far more definitive answer for any and all of the 133,000-odd words.

    (And, we will simply ask for the user’s patience when they want to write a poem that includes the allegedly single-syllable word “pwnage,” and live with a “syllable warning,” or something, for that line.)

    (Hey, I’m satisfied if you are.)

Covering all bases

Those rules above are only the first few rules that I added — I think I’ve gotten up to about 30? such rules by now, with many more left to add.  But those show a good overview of how that thinking process worked.

And after about 10 or 20 of those little refinements, I was actually starting to get quite happy with how well (generally speaking) it was doing.

For context, a few of the other rules that ended up coming in were things like:

    • “an e at the end of the word, following an L, following a consonant, is probably a syllable” (like “stable” or “bottle”) … but if that L follows a vowel, it probably isn’t (like “tale” or “joule”)
    • “if a word ends in -sm, it requires an extra syllable even without an extra vowel” (altruism, chasm)
    • “if an a follows an i, which follows a t, and the three are followed by an n, (-tian-), ignore that second syllable” (martian, venetian, although with snags like faustian, etc.)
    • “an i before an e is one syllable, except at the end of the word” (die, sortie)

And many more…  (punctuation, pluralization, tense, etc.).  The source code will let you stumble through the gnarled branches of its many if/else trees.
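To show the general shape of those if/else trees, here’s a much-reduced Python sketch (the real version is PHP, with ~30 rules; this one implements only the handful of rules described in these posts, and will happily get other words wrong):

```python
# Much-reduced sketch of the rule chain: each vowel counts as a syllable
# unless a context rule (looking a letter or two in each direction) vetoes it.
VOWELS = set("aeiou")

def guess_syllables(word):
    word = word.lower()
    count = 0
    for i, ch in enumerate(word):
        if ch not in VOWELS:
            continue
        prev = word[i - 1] if i > 0 else ""
        prev2 = word[i - 2] if i > 1 else ""
        if prev in VOWELS:
            # A vowel following a vowel is usually the same syllable...
            if prev == "i" and not (ch == "a" and prev2 == "c"):
                count += 1   # ...unless the first vowel is an i ("idiotic")
            continue         # (the c-i-a exception keeps "social" at 2)
        count += 1
    # Final e: a syllable after consonant + l ("bottle", "stable"),
    # otherwise silent ("tale", "joule").
    if word.endswith("e"):
        if word.endswith("le") and len(word) > 2 and word[-3] not in VOWELS:
            pass
        else:
            count -= 1
    return max(count, 1)

print(guess_syllables("alphabet"))  # 3
print(guess_syllables("social"))    # 2 (the c-i-a rule)
print(guess_syllables("stable"))    # 2 (the -le rule)
```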

In a way, these generalizations started to seem so problematic that I almost wanted to cancel the whole effort, wondering if it was borderline irresponsible to let so many rules go in when each of them had obvious exceptions and contradictions.  But, again, this project and its tools are meant to assist with poem creation, not to enforce it.

And, to my surprise again, keeping on with this for a few hours led to an extremely gratifying series of tests that got these words right far more often than not.

I’ll cover that testing process in the next post.

So the first step gave us everything we needed in terms of our dictionary database, word searching, and phonetics.  That’s the better part of the foundation laid, but those by themselves are not quite all we need.  (Luckily, they should give us everything we need to figure the rest out for ourselves.)

So the question remaining is, of course, “how?”  Well, the first answer to that is to start figuring out what exactly “rhyming” means.

Disclaimer: I like to think of myself as a good student — back when I was one, and even now — but for some reason, with this project at least, I didn’t start by doing my homework.  There are a lot of reasons for that (where impatience is probably the main one), but for whatever reason I started this project wanting to figure out things like “what is rhyme” or “what constitutes a syllable” for myself.  That’s the “puzzle” aspect I was talking about — the fun of which is probably the second reason for skipping the required reading.

The point of this is, nothing about this part of the project is even kind of groundbreaking research, I know.  But part of the fun is diving right in, and using tools like that to check (and revise) your own work later.  Please, by all means, work smarter than I sometimes do.

So, what is a rhyme?

Well, according to the first stab I took at this (spoiler alert: this isn’t the right answer), a rhyme is when you can match up the tail ends of two words, counting their phonetic segments backwards from the end of the word, until you come to a vowel.


From the last section, the phonetic segments (according to ARPAbet and CMUdict) of the word “phonetic.”

(And how do we know when we’ve found our vowel?  That’s easy enough, we just need a regular old regular expression.)


preg_match() in this case will find any numerals in the $lastletter (a substring) of our word.

The lexical stress numbers, by ARPAbet/CMUdict’s notations, are the only numerals in the phonetics field.  So, we pattern match against the last character in that field and, if it’s a number, we’ve found our vowel.
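In Python terms (the project itself does this in PHP with `preg_match`), the vowel test looks like this:

```python
# Is this ARPAbet segment a vowel?  In CMUdict, vowel phones (and only
# vowel phones) carry a stress digit, so "contains a numeral" is the test.
import re

def is_vowel_phone(phone):
    return bool(re.search(r"\d", phone))

print(is_vowel_phone("IH0"))  # True
print(is_vowel_phone("T"))    # False
```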


(So, counting backwards to the first vowel, it would give us these two bits.)

It’s easy enough to take that result (a string) and just feed that into a query against our CMUdict database:


On the ugliest, cringe-inducing rhyming level (think bad pop music), this is at least the start of a usable rhyme finder.  Granted, if we used this as-is, it would tell you that our word “phonetic” is a rhyming match with, say, “academic,” but, you know, close enough.  (Yeah, not quite.)
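To make that lookup concrete, here’s a sketch using an in-memory SQLite table (the real table and column names are my assumptions, not the project’s), which reproduces that “academic” near-miss:

```python
# Sketch of the rhyme lookup: match any word whose pronunciation ends
# with the rhyme tail.  Table/column names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cmudict (word TEXT, phonetics TEXT)")
conn.executemany("INSERT INTO cmudict VALUES (?, ?)", [
    ("phonetic", "F AH0 N EH1 T IH0 K"),
    ("academic", "AE2 K AH0 D EH1 M IH0 K"),
    ("dog",      "D AO1 G"),
])

tail = "IH0 K"  # everything from the last vowel of "phonetic" onward
rows = conn.execute(
    "SELECT word FROM cmudict WHERE phonetics LIKE ?", ("%" + tail,)
).fetchall()
print(rows)  # [('phonetic',), ('academic',)]
```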

So, to make that into a slightly-less-horrible rhyme match, we can continue counting (backwards) through more of the word, gathering up any consonants that occur immediately before that vowel, and feeding those into our query string as well.  This will at least get the entire last syllable sound to match.

I did that next, continuing to match backward until I found another vowel, and then recording everything after (and not including) that vowel.


This turned what was a search for the “-ic” of “phonetic” into a search for “-tic.”



This does kill off those cringeworthiest matches (like “algebraic”), but it still leaves us with pretty weak matches like “aristocratic,” since it’s really only matching on that “-tic” ending.

So, at that point, it seems like we ought to include the next vowel sound before this last syllable after all.  (Of course, this is assuming that there is a second vowel — but, hey, maybe that’s a good thing if this excludes words with just one vowel/syllable… after all, does “phonetic” actually rhyme with, say, “click”?)

After trying that, we get:


Now we’re getting somewhere.

It’s easy to see a huge quality jump in our rhyming here, with just this one more vowel.

We could go a step further, of course, and add the preceding consonants to that vowel as well, at which point our list would drop to words like “kinetic,” etc.  That’s certainly a closer match than words like “athletic,” but I’m not sure if it’s a significantly better rhyme.  (Again, I’m following no formal definitions here, but doing this first round just on “my gut.”)

So my question (to any/everyone) is: is “phonetic / kinetic” a significant (or necessary) step up in rhyme quality from “phonetic / athletic”?  (I should probably keep my vote out of this, but whatever, my own answer would be “meh, not really.”)

So, at least temporarily satisfied to leave that there, I’d say that at this point there are already a few conclusions that I can come to:

  1. This level of matching is probably a “good enough” rule for rhyming, or at least good enough to leave as it is for now.  Obviously it’s not perfect, but it already starts to answer the research question, providing passable rhyme suggestions to users who might not think of them on their own.
  2. This type of syllable-matching works great for this word, but has scaling issues with shorter or longer words.  Consider single-syllable words — take “quirk”, for instance.  For this, we’d need to omit any consonants before the (only) vowel, so that “quirk” can find rhymes like “perk” or “lurk.”

    That would mean the rule for single-syllable words would be:

    “match the vowel and anything after”

    where, for two-syllable words, it might be:

    “match the first vowel, and anything after”

    which we can obviously combine, as a rule, into:

    “find the first (even if it’s the only) vowel, and match that and anything after”

    I’m satisfied with that.  But consider three syllable words again.  “phonetic / kinetic” is great (and, at least by the CMUdict’s phonetics, an exact vowel match in both syllables).  But “athletic / kinetic” — that’s a reasonable rhyme, isn’t it?  (I’m actually asking.)

    So, we could say, then, that:

    “the penultimate vowel and everything after it are the only syllables that matter for rhyming in a 2+ syllable word”

    (Also, I just like the word “penultimate.”  …let’s see, “intimate,” “proximate,” “legitimate” — okay, I’ve been doing this too long now.)

    But, seriously, is this applicable to any 2+ syllable words, even if they’re a lot longer or shorter than one another?  Is “parthenogenetic” a reasonable rhyme for “phonetic”?  Or “hettick”?

    No conclusions here, sorry to say — just some questions to leave open in future steps.  Back to my last conclusion, so far:

  3. The way we’re matching (so far) accepts only the exact same emphasis per syllable.  While that definitely sounds the most graceful in terms of proper rhyming, should words that have different syllabic emphasis still be “good enough?”  And, if so, how do we revise that query?  (One solution is to remove the exact numerals and query instead with pattern matches that find those same phonetics with any numerals at the end of the vowels.  Regular expressions could handle that gracefully enough, I’d imagine.)


    Something like this, only with more flexibility and hopefully less manual “OR” clause building?
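One way that flexibility could look, as a JavaScript sketch of the idea (not working code from the project): turn the rhyme suffix into a pattern that accepts any stress numeral on each vowel.

```javascript
// Turn a suffix like "EH1 T IH0 K" into a regular expression matching
// the same phonemes with any stress numerals (0, 1, or 2).
function stressInsensitivePattern(suffix) {
  return new RegExp(suffix.replace(/[0-2]/g, "[0-2]") + "$");
}

// A word with different stress on those vowels still matches:
stressInsensitivePattern("EH1 T IH0 K").test("K AH0 N EH2 T IH0 K"); // true
```

On the MySQL side, that would mean switching those queries from `LIKE` to `REGEXP`, which supports the same kind of character classes.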

Review of Goals 

I think at a moment like this, it’s important to review the actual goals of the project, since there are a lot of potential paths into some pretty scary forests, in terms of the time and effort that could go into making “good” into “perfect.”  (Especially considering that “perfect” might very well be impossible anyway.)

Since this project is meant to engage people’s interest in poetry (more than to create some ultimate rhyming authority application), I decided that, for now, our earlier solution was “good enough,” especially in that it gives fewer/higher-quality results, which I think is a nice way to lean when given the choice.  To a certain extent, opening our pattern up to weaker emphasis matching would only bloat the results list — and with weaker results, on top of that.

So our final rhyme rule, for now, is:

“Find the second-to-last vowel in any word, unless there’s only one vowel.  Take that vowel, and everything after it, and match it against other words that end with the same phoneme sequence, and the same emphasis per vowel.”
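As a sketch of that rule (written in JavaScript for illustration, since the project’s actual code is PHP), assuming vowels are exactly the CMUdict segments carrying a stress numeral:

```javascript
// Build the "rhyme key" for a CMUdict phoneme string: the
// second-to-last vowel (or the only vowel) plus everything after it.
function rhymeKey(phonemes) {
  const segs = phonemes.trim().split(/\s+/);
  // In CMUdict, vowels are the segments ending in a stress numeral 0-2.
  const vowelIdxs = [];
  segs.forEach((s, i) => { if (/[0-2]$/.test(s)) vowelIdxs.push(i); });
  if (vowelIdxs.length === 0) return null; // no vowel at all
  const start =
    vowelIdxs.length > 1 ? vowelIdxs[vowelIdxs.length - 2] : vowelIdxs[0];
  return segs.slice(start).join(" ");
}

// Two words rhyme (by this rule) when their keys match exactly,
// stress numerals included.
function isRhyme(a, b) {
  const ka = rhymeKey(a);
  return ka !== null && ka === rhymeKey(b);
}

// "phonetic"  F AH0 N EH1 T IH0 K  -> key "EH1 T IH0 K"
// "kinetic"   K AH0 N EH1 T IH0 K  -> same key, so they rhyme
// "quirk"     K W ER1 K            -> only one vowel, key "ER1 K"
```

Matching against the database then just means finding other words whose phonetics field ends with the same key (the `LIKE` wildcard trick from earlier).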

So… yeah.  That’s my rhyming solution.  (For now.)

One nice thing about applications like these is that it’s easy enough to call this the first version, working as intended (if not perfect), and revise/improve that bit later down the line.


And, from my World’s-Most-Glamorous-Website test script, this is what my test code/page looks like.


A few of the (50) matched words.

In terms of putting this into our application, we’ll Ajax-ify this (soon) to be something that we can call when the user clicks on a particular word.  That function (in Javascript) can ask this script for any rhyming matches on that word, and get back something we can then present to the user as a list of words.
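That JavaScript call might be shaped something like this (the `rhymes.php` endpoint name, parameter name, and JSON response format are all my assumptions, not the project’s actual script):

```javascript
// Build the request URL for a word (script and parameter names are
// placeholder assumptions).
function rhymeUrl(word) {
  return "rhymes.php?word=" + encodeURIComponent(word);
}

// Called when the user clicks a word; expects a JSON array of
// rhyming words back from the PHP script.
async function fetchRhymes(word) {
  const response = await fetch(rhymeUrl(word));
  return response.json();
}
```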

Next, we’ll need to start thinking about the meter.  Which means getting the syllable counts for all of these words…

(And, in the next episode, if you haven’t been anticipating it already, there’s a huge lurking question here of “so what about words that aren’t in this dictionary?” <cue dramatic musical swell>)



So apparently I really like working on lexical and phonetic analysis.  Who knew?  And apparently I like it to the point that when I finally had a weekend with some spare time to (at long last) play The Witcher 3, I instead found myself sitting at my desk working on an algorithm to split words up by their syllables and vowel sounds.  For hours.  Having fun.

And I guess there’s no reason it shouldn’t be fun.  By the time I was fully into the swing of things (which was surprisingly quickly), it felt like a puzzle.  And, since this was about words in the English language, it was even a puzzle where I was already pretty familiar with all of the pieces.

So, the background:

Without saying too much about the project itself (I’m leaving that to be the researcher’s privilege to announce and document as she likes), we’re brainstorming the early stages of a project at Hamilton that would both analyze a particular type of poetry, and give its readers the chance to create some of their own.

Like most poetry, this means there’s a particular set of rules (which are also a fun puzzle to sort out, programmatically) regarding the form, rhyme, meter, etc., of these works.

My job, to get things started, was to think of ways that we could essentially ask a web application, in real time, to look at either a word or a whole string of words (a line, couplet, etc.) and get some of these bits of information back.

Well, lucky for me, these lexical features are available at least in part through the excellent CMU Pronouncing Dictionary, which can tell you (almost) any English word’s phonetic sounds and emphasis.  And while that doesn’t tell us the number of syllables in the word, or provide rhymes, having the rest of that information actually gets us a lot closer than it might seem.

Setting Up

The first hurdle was making this available to a webpage as something that I could query with reckless abandon.  So, while their page shows a searchable input box (which returns the sort of thing you’d hope for), there was no obvious way to set that kind of searchable system up for yourself.  (And, me being my impatient self, I didn’t ask them for their solution.)

Before I go on, I should also give another positive mention here to Steve Hanov and his “A Rhyming Engine” (now turned into the mightier RhymeBrain, and its API), which were also strong contenders for the tool of choice, regarding the rhyming portion.  (I did reach out to Steve, who kindly responded with the suggestion of trying out that API for my purposes.  I didn’t end up going that route, but that’s just the control freak in me — part of me wanted to figure some of this stuff out for myself, and part of me wanted a tool that I could hammer away at, without API call limitations.)

The CMU Dictionary

The CMU Pronouncing Dictionary (“CMUdict”) is essentially just a gigantic (tab-separated) text list of dictionary words followed by their ARPAbet phonemes and lexical stress markers (represented as numerals at the end of the vowel sounds).  So, while that right there is the bulk of the content I think this task needs, it’s not exactly as accessible as we will need it to be.

So, for my next trick, I simply converted this whole dictionary into the world’s simplest MySQL table, so that I could just query it the old-fashioned way.  (I’d love suggestions of a better way to do this.  I did burn a couple of unsatisfying hours trying other tools I found around the web, to equally unsatisfying ends.)

Disclaimer: I am the furthest thing from a database admin, and am usually quite far behind the times on the easiest or sexiest tools for jobs like these.  I used to be pretty intimidated by that, but at this point I’m finding the value in it: using approaches like these, describing them to people such as yourselves, and hearing what tool would make this a thousand times easier, or more powerful, the next time around.  (So, let’s hear them, this time!)  In the meantime, it’s nice to know that at least I can accomplish the task, and I’ll probably appreciate the power of better tools all the more for knowing how clunky approaches like these really are.

My process: load this entire dictionary text into a text editor (I’ve been using the surprisingly excellent Visual Studio Code for this project — and all projects on Mac recently), and literally just search/replace the tabs between each word and its phonetics with commas, creating a sort of quick-and-easy CSV (comma-separated values) file.

(Fun fact, since the word “NULL” is one of the dictionary words, MySQL hates this on its import, and quits out of the import with an error.  I thought it was funny.  Thus, the manual substitution of “NULL” with “fixme”, which I later, of course, fix.)

On the database side of things, I set up a dead-simple two-column table called `words` that had a column for the word itself, and another for the phonetic/lexical stress value.  That gives us our basic structure that maps to this simple CSV, and from there it’s happy enough (after that “fixme” substitution) to let you load it in via phpMyAdmin’s “import” tool.

This isn’t quite enough by itself.  To make it properly editable, the database still needs a unique-key ID column, which is easy enough to add on after the fact.  (I do this after importing the CSV, so that I don’t have to dream up some annoying solution to manually adding IDs to each field in my text file.)  MySQL is happy enough to add that in one query.

That query being:
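A standard MySQL way to do it would be something along these lines (the `w_id` column name is my guess; any unused name works):

```sql
-- Add an auto-incrementing primary key to the existing table.
ALTER TABLE `words`
  ADD COLUMN `w_id` INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
```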

So, with that finished, we now have a nice little searchable database that’s happy to let you find either exact matches, or partial matches, with queries such as:

WHERE `w_word` = 'searchterm'

or, for partial matches (with wildcard syntax):

WHERE `w_word` LIKE '%searchterm%'

(And so forth.)  This also lets us use those ‘%’ wildcards at either only the beginning or only the end, to find words that just begin or end with our search terms.  (That becomes big on searches for rhyme.  More on that later.)

Mercifully, this is probably the biggest single line on the project’s to-do list, sorted out (well enough) in a few steps.  (And, in my mind, I had made that part into quite the dragon to slay, so I was smiling at this point already — which is always nice after only an hour or two.)

From here, it’s easy enough to jot down a few generic queries that will get us most of the search/retrieval functionality we’ll need, and then start stuffing those into a PHP script or three, which we’ll feed words into via $_GET or $_POST variables:

Quick and ugly, but it’s already enough functionality to let us access this from a webpage and see the results.  (And almost enough to soon turn into an Ajax version that we can query in real-time, as often as we need, to look up words as the user types them.)

w00t.  (Which, by the way, is a word that is strangely not in the dictionary.  Weird.)


So, my “official” web presence has been either cluttered or sometimes wildly neglected over these many years.  And while I’d love to pull the various little districts of my scattered portfolios, blogs, projects, and general digital scraps into something coherent, I’m fearing what the size or workload of such a thing would be.

So, instead, for now, I’m going the very simple route, throwing all of the old work into a “to-do” box, and starting a fresh new (simple) blog here, with the (hopeful) goal of doing a better job of documenting my various projects as they roll in.  There will definitely be some backfilling here, which I may or may not retroactively date to reflect their actual chronology… but for the moment, this will be a “current moment” kind of blog, and ideally something between a work log, portfolio, and general echo chamber of ramblings.

Let me know if there are any projects you’d like to learn more about, or to pull up from the archives a bit sooner than I might get to them on my own!  I always try to be snappy on email (gplord@gmail.com) or Twitter (@gplord) if you catch me there.  Thanks!