Game characters were better in the ’90s

Back in the ’90s, I played a heap of games which were incredibly immersive. I was sucked right in, and genuinely felt for the characters on screen.

Perhaps it’s because I was young, or perhaps it’s the rose-tinted glasses I now undoubtedly wear, but I cared much more about those characters than I do about today’s.

I think the reason is best understood the same way one understands the difference between characters in books and characters in movies.

One thing many ’90s games had in common was a lack of proper voice acting. In its stead was text, and lots of it.

This allowed for several things:

- A relationship with a narrator
- A relationship with your character’s thoughts
- Voices and acting as good as you could imagine them in your head

The silent protagonist was largely a technological device. The protagonist was essentially you, a disembodied, soulless being into which you could pour yourself.

There was no horrifically poor voice acting to make you wince and distance you from the character – you simply reacted as yourself.

As technology allowed, 3D games moved predominantly to third person. While some modern games like Metro still use pure first person, a relationship with a character as an extrinsic entity held, and still holds, far more appeal for game designers and storytellers who revel in the ability to create characters and have people fall in love with them.

A first-person shooter developer in the ’90s had no such ability.

With great technology, however, comes great responsibility.

The initial leaps into 3D with voice acting were paltry attempts, and more often than not they made us wince. Think of the opening of ‘Sin’, with those ridiculous lines. Compare it to ‘Unreal’, which came out only a short while earlier, where the sense of real danger was palpable. You’d explore a vast and empty world and never say a word, reading the diary entries of fallen soldiers and devastated natives, picturing the events entirely in your imagination and coming up with a very dark image indeed.

In a sense, the world’s emptiness was itself a product of technological limitations: PCs of the day were simply too underpowered to put many enemies on screen at once. The result was that developers had to ramp up the difficulty of individual enemies to compensate, so encountering one was truly terrifying. A single one-on-one battle could kill you at any point.

The old saying that constraints breed creativity holds very true in gaming.

When adventure game designers in the Sierra days had to convey that something was happening with barely any ability to animate, they wrote a message telling you what was going on.

Left to your own devices, you could work yourself into an eerie chill in titles whose characters were made of pixels you could count on two hands.

The moral of the story is this: in game design, more is not necessarily better.

At the moment there are very few developers who know how to appreciate silence, empty space and the lingering camera. All too often a score removes tension just by being there (in games the score usually signals when action is about to happen, and can ruin the mood by reassuring the player that there aren’t any enemies around).

We’ve been proving to ourselves for a long time that our medium can be as cinematic as cinema, but we’ve also been overcompensating: making everything bombastic and larger than life while failing to appreciate the art of moderation.

If FTL can convince me to care about a few blobs of pixels on a screen even today, so can anyone else.

So again, the responsibility that comes with voice acting, 3D graphics, scores and other peripheral cinematic devices lies not in using them as frequently as you can, but in knowing what effect each will have on a game’s tone and deploying them sparingly and accordingly.