This is a belated response to Brian’s post (which is a response to Adam’s post). I’m going to speak as a designer rather than a gamer, and list some titles which I think teach us something about game design rather than necessarily being the best gaming experiences available. That said, I accept the list is biased towards my favourite games, as you might expect.
In no particular order:
1. Football Manager (2005, 2006, etc). This is an almost entirely abstract sports game. Unlike 99.9% of games, it’s mostly text based, so you’re immersed in the same way as when you read a book. The graphics don’t try to depict reality, so there is never an opportunity for visual failings to jolt you out of the experience. Nor is there an explicit narrative – instead, the game has you tell your own story through your experience of the simulation. Apart from the obvious stark reminder that immersion does not necessarily flow from graphical fidelity, it also shows that not all strategy games need to revolve around geography, whether abstract (Chess) or simulated (eg. Command and Conquer). Most strategy games rely heavily on some sort of positional play or topology, so this is an interesting example of breaking free of that pattern.
2. Ultima VII. This is my favourite RPG of all time, and perhaps my favourite game of all time. There’s a large world, rich with detail, exuding a sense of believability that eludes most games today despite almost 20 years of technological advancement. The plot is interesting, the writing good, the lore engrossing. Oblivion is shallow by comparison. But Ultima VII also teaches us things through its mistakes and idiosyncrasies: for example, the unusual viewpoint and control system frustrated many from an early stage, as did the click-and-hope combat system which resulted in the unnecessary death of many a companion – yet the latter was refined (to some success) half a decade later in Baldur’s Gate.
3. Thief: The Dark Project. I think a lot about how games do not ‘need’ story, or simulation: but for me this game has a perfect blend of story and simulation on top of the core gameplay. There are several valuable game design lessons that Thief demonstrates: how fuzziness and uncertainty in the AI changes the way a player has to act, how a game can encourage you to attempt to recover from failure instead of reloading, how audio can be an integral part of a game rather than just polish, etc. I think a lot of players don’t like it because it’s hard to get used to the very different type of influence you have on the world compared to other first person games – treating it like a shooter will get you killed. Similarly, the story aspect in Thief is quite hit or miss for a lot of people, anecdotally because they just want to sneak around, not fight zombies. These issues could explain why the sales figures were disappointing for such a critically-acclaimed title, and they demonstrate the importance of setting your audience’s expectations when pitching a game: being the best stealth game on the market is no good if your players are disappointed by it being heavily story-based, and vice versa.
4. Deus Ex. Again, I love the story in this game – and it wouldn’t work without it – but this is essentially a game about making decisions. Most games with a strong narrative don’t let you make any meaningful choices via the gameplay, but this one does. One mission in particular has quite a different purpose depending on a dilemma you face in a previous mission. And many encounters later in the game depend on actions you took earlier on – not just dialogue choices, but actual game actions. Thus the story vs. gameplay issue no longer seems like a zero-sum problem, as it is often portrayed. Sure, Deus Ex doesn’t deliver on the ability to choose as much as it could, especially later in the game, but that’s a churlish complaint given how much more it does than most games. Perhaps then it’s notable more for showing what could be possible than for what it actually did. Who will take this to the next level?
5. Doom (2). Our home PC at the time wasn’t up to playing Doom but when we finally got a 486 (SX, 33MHz, 4MB of RAM I think?) I got hold of this sequel to the seminal first person shooter. Back then, it was as if the sort of game we used to dream would be made was finally possible – a world in 3D that you could move through in real time, and plenty of demon-based action to boot. It’s a shame that I don’t think I’ll ever have such a horizon-expanding moment at a computer again. But looking back we can still see that the game was more than just its revolutionary graphics: the maps in particular were amazingly crafted, complex despite never using more than 3 door keys, appearing fully 3D despite being essentially limited to 2D with height values, and making the FPS maps of today seem trivial. Sadly, that simplicity seems to be what players want, given how commonly they recoil at the idea of getting lost in what should be an exploration game. Doom was also one of the last games to have entirely 2D opponents, which allowed you to face a wide range of different-sized encounters. Future games, needing many polygons for each character, had to cut back on the number of foes that could attack you at once, compromising the experience in the process.
6. Diku MUD. MUDs were the first games I played where the world kept going when I stopped playing, with people from around the world continuing to adventure in my absence. As with Football Manager the immersion comes from the text rather than the graphics (and there was no sound to speak of anyway), and these games saw the first major use of user generated content, in conjunction with the internet as a way to distribute it efficiently. All this meant that the pipe-dream of virtual worlds might come true. And yet, while many of the systems still live on – in particular, games like World of Warcraft are still talked about as having DIKU mechanics – much of the potential of early MUDs has gone sadly untapped, as the potential for meeting new people and adventuring with them got shunted aside in favour of providing something akin to a single-player game with the piracy-averting benefits of software as a service. Most MMOs today have a distinctly different feel to early MUDs, with few people interacting in more than a cursory manner with people they didn’t already know before joining the game. There are parallels here with the way that the more open Facepartys and MySpaces of the world gave way to the essentially closed system of Facebook, where the mass market, rather than enjoying the new freedoms and opportunities available, sought only to interact with existing social circles and fought against any attempt to expand them.
7. Elite. This is another poster-child for procedural generation – how else could you fit 8 galaxies in under 64 kilobytes of memory? On the surface the actual gameplay is quite simplistic – travel from one planet to another, trading goods, and buying new equipment – but the beauty came from the infinite combinations that the content provided so effortlessly. You could always be on the lookout for another planet with more favourable prices either for buying or selling, or perhaps somewhere a little bit less lucrative but with a stronger government and thus less risk of being attacked by space pirates. But while this might have got boring on its own, the well-paced goals such as being able to upgrade your ship in various ways kept you going back. This is evidence that, with the right systems in place, procedurally generated content can be enough to keep a player entertained.
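The core trick behind that 64-kilobyte universe is easy to sketch: derive every planet deterministically from a seed, so that nothing about the world needs to be stored – only the formula that generates it. The following is not Elite’s actual algorithm (which worked on a handful of seed bytes); it’s a minimal illustration in Python with made-up economy and government tables:

```python
import random

ECONOMIES = ["Agricultural", "Industrial", "Rich Industrial", "Poor Agricultural"]
GOVERNMENTS = ["Anarchy", "Feudal", "Dictatorship", "Corporate", "Democracy"]

def generate_planet(galaxy: int, index: int) -> dict:
    # Everything about the planet is derived from a deterministic seed
    # built from its coordinates, so no planet data is ever stored.
    rng = random.Random(galaxy * 1_000_000 + index)
    return {
        "economy": rng.choice(ECONOMIES),
        "government": rng.choice(GOVERNMENTS),
        "tech_level": rng.randint(1, 15),
        "population_billions": round(rng.uniform(0.1, 9.9), 1),
    }

# Revisiting the same coordinates always reproduces the same planet:
assert generate_planet(0, 42) == generate_planet(0, 42)
```

Because the generator is pure, the galaxy is effectively infinite but costs nothing to keep around – exactly the trade that made Elite possible.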
8. Magic: The Gathering. I only have a small amount of experience with the computer game but I played the card game for several years and so will speak about that. MtG demonstrates the way in which a game can be played on several levels: the game itself, where you pit your cards against an opponent’s cards; the meta game of constructing a deck of cards with which to play your next match; or even the level above that, of trading for cards in order to make one or many decks. Few games can boast such depth for their players before they even start the first turn. MtG is also an interesting example because it was one of the first ongoing games where players continued to get stronger as time went on. Although the power curve levels off due to the nature of the game (eg. larger decks are less predictable, so adding a new card usually means giving up an old one), there was still a significant amount of power creep, with various cards making older ones completely obsolete, some expansions being deemed overpowered (which is natural, given that underpowered expansions would fail commercially), and so on. The various game types that limit a player to certain expansions perhaps hint at how MMOs could address the elder game aspect, deprecating parts of the game to keep things fresh for those who choose to keep up.
9. Civilization. Sid Meier’s strategy classic perhaps demonstrates best how a game can thrive when there are many well-balanced but ultimately minor mechanics all contributing towards a whole. Most games fixate on a couple of core features, such as choosing a weapon and pointing it, or optimising a few certain stats, but Civilization gave you many different levers to pull and no right answers. Do I build a Granary or a Temple to make the most of this city? Should we study Mathematics to build catapults or Map-Making to build Triremes? Build on the coast to get a port or inland to improve farming prospects? Attack my militaristic neighbour to avoid a future invasion, or save those resources for now in the hope that war never comes? Sid Meier is claimed to have said that “a good game is a series of interesting choices” and Civilization bombards you with small yet interesting choices at every step, making each playthrough different to the last. As with Football Manager, you’re given many variables to tweak, each with consequences, and as a result you form your own narrative through the unique dilemmas that you face.
10. Pen-and-paper tabletop roleplaying. The typical system here is Dungeons and Dragons, although I’d argue that this is perhaps not the best choice. However, the actual system you use is generally irrelevant. The key here is that the game is partly improvised by a human controller in a way that computers are still unable to adequately emulate. Starting with a small amount of source material – eg. a map, a list of non-player characters, and a rough plot outline – the players can choose how to approach the tasks ahead, and the gamesmaster can extrapolate from his or her written content to accommodate the wishes of the players. From there, the players may present even more creative approaches, and the two sides work half collaboratively, half competitively, to create a shared experience for all, without the need for expensive content. As with some of the text based games above, the players are immersed by their imagination rather than by visuals and this is compelling in its own way. This is in stark contrast to how computer games have evolved, generally relying on very detailed content created before the game, with the result being that the game duration is reduced and the player’s scope for choice is also significantly reduced. Arguably this direction is unsustainable as the costs of content creation grow. Either way, it’s surely not an optimal development because we already have great experiences that are short, linear, and involve few choices, known as movies – games should ideally tap into a different type of fun. Can we ever replicate the benefits of pen-and-paper roleplaying on the computer? I think we can, and the project I’m currently working on is one of several steps in that direction, which I hope to talk about in the future.
(Long time, no post. I could start by explaining and making excuses, but no – let’s get down to business.)
From the late 90s to the present day, many commercial games have been focused on some sort of realism – graphical photorealism, real-world physics, etc. Certainly when you look at the man-hours spent on most game software you’ll see far more of it invested in features that further the simulation aspects: eg. the creation of life-like environments and characters, and life-like movement of the characters through that environment. The game is almost a token gesture on top, usually just a set of simple goals with a basic scoring system to provide some sort of obvious incentive to chase those goals.
Gamification is a common buzzword these days, typically referring to how businesses can benefit from making interactions with their products more game-like, and so they have looked to the games industry for pointers on this. The interesting thing is that the assumption has been that this would be about seeing how games are made and then extracting game-like properties from them, but in fact it might be more accurate to say that what we call games developers have really just been ‘gamifying’ themselves in the first place: rather than making games with simulation aspects, they are often in the business of writing simulations with a healthy dose of gamification.
We know that a ‘game’ does not have to involve simulation at all – there are plenty of definitions on Wikipedia, each differing, but none implying that a game intrinsically has to model the real or a fictitious world in any way. So why has the games industry adopted this position of writing playable simulations? I think there are a mixture of reasons for this, some good, some bad. I’ll return to the good reasons in a future blog entry, but for now I’m more interested in the bad reasons, which I feel have taken precedence and given us an entire industry of games that aren’t really games any more.
Firstly, many players – and, it would seem, game publishers and some developers – seem to feel that a game can’t be enjoyed unless it reaches some sort of contemporary presentational standard. To many people, good graphics make a good game. To be fair, a lot of players do have trouble enjoying older games and some modern independent games because they find the graphics primitive and distracting. But I would argue this is mostly cultural: we’ve been sold the mantra of “better looking games are better games” because it helps sell new hardware and software, and I think in part we’ve bought that line because there is a truth to it – newer games do play better, for most people. But this is arguably because improvements in game design have run in parallel with improvements in game technology, and we mistake correlation for causation. The design improvements we’ve seen over the years can apply equally or more so to games that don’t look as realistic as many modern ones do, and thankfully some indie developers are showing us just that.
There’s also an argument regarding the interface, that worse graphics make a game’s visuals harder to understand, and to a degree that does hold true, but it’s hard to argue that even games 15 years old looked so bad that you couldn’t adequately work out what was going on. Children can enjoy blocky graphics and unrealistic iconic representations so I don’t find the interface argument compelling.
Secondly, I think many developers actively want to make things more real. Partly this is because game development is often driven by programmers who are interested primarily in technology and who like to push that technology in new ways. Perhaps the majority of coders I’ve known fall at one extreme or the other of an interesting dichotomy – they either want to write interesting features themselves from the bottom up, or they want to play with 3rd party software and libraries to implement those features quickly. But either way they are playing their own ‘development game’ which is more about the technology’s intrinsic properties and not so much about what the technology is to be used for. Programmers get bogged down in optimising code that already runs fast enough, or in switching to a new and shinier 3D engine, because those challenges are often more interesting to them than shipping a finished game. Perhaps that’s why they’re programmers and not managers!
However I think the other side of the coin is that most programmers – and, I would sadly argue, designers too – don’t really know how to improve the abstract thing that is ‘game play’. There are many who’d love to create better stories, emotions, AI, and so on, but don’t have the knowledge or skills to do so, which means they resort to making the improvements in areas they do understand. You can throw more polygons at a 3D mesh to make it look better, but you can’t just throw more materials at a rock/paper/scissors conflict model and expect to magically have a better game. You can make a game run more smoothly or make it more colourful-looking or write one of those amazing everything-is-brown-or-grey-so-it-must-be-a-gritty-game shaders because these are techniques we know about (and saw at SIGGRAPH 15 years ago), but there isn’t much resembling a science for designing the abstract game features, or at least not one that is well-known and accepted. Even some of the better-known designers such as Daniel Cook and Raph Koster seem to consider their work to be more about casting an enlightened eye over trial-and-error, relying on play-testers to tell them what is fun. While nobody would seriously argue that you don’t need some sort of play-testing – just like graphics programming requires the programmer to actually look at what is being rendered – it seems a bit defeatist to assume that it’s not theoretically possible for a knowledgeable enough designer to be able to create a compelling game experience without needing to have others try it first. In particular I can’t agree with the suggestion that emotions, experiences, and personality in games “cannot be systematically engineered no matter how many design articles anyone reads”. I can’t imagine making such a claim about film, or books. It seems even more invalid for games, where the player is a participant: so if we’re not there yet, we just have more work to do, more knowledge to acquire.
I think we can get back on the right path to that by returning to those older and purer games, the ones from decades ago that delivered interesting gameplay to us long before they could attempt to deliver a world that looked like our own, when all the graphics and sounds were necessarily iconic and symbolic. Rather than trying to look and act like real life, they attempted to capture the essence of what games had previously been – sets of abstract rules, represented somewhat arbitrarily, but in such a way that they could be played with. Chess, poker, soccer, Scrabble, all involve real humans in the real world but who are acting on artificial tokens and according to artificial rules. The contests may be played out physically or mentally, numerically, linguistically, or spatially, but essentially they’re all abstract.
This should immediately show us that moving away from simulation is not just about picking a different aesthetic for your game’s visuals as a replacement for photorealism, but about realising that a game does not have to attempt to directly model or simulate any aspect of real or imagined life to be an enjoyable activity. It should instead be sufficient to create some representation of it that lends itself readily to interesting play. Minecraft’s world of blocks is not just a graphical simplification, nor even just an aesthetic choice, but is an abstract representation of the world, simplified to make it easier to reason about and to build with.
We can go further, and say that this creative use of abstraction isn’t limited to symbolising the physics of the world (where physics in this context includes the visible and audible aspects), but can symbolise the interactions within it – the narratives, the emotions, the events. For example, look at combat in a game like Oblivion, where the game simulates a continuous 3D space in which fighting and exploring are seamlessly interwoven, just as in real life. Unfortunately, the limitations of the artificial intelligence mean that most fights can be won simply by jumping onto a ledge and shooting your hapless assailant from above. Compare this with the approach taken in the Final Fantasy series (or, of course, any number of traditional CRPGs and JRPGs), which switches to an explicit combat mode, mostly isolated from the main world, with completely different actions and constraints to create a more compelling tactical experience. The part of me that loves exploring an open and consistent world much prefers the former, but there’s no denying that the gameplay is simply better in the latter. Oblivion prioritised the simulation over the game, meaning the simulation flaws become game flaws, revealing the ‘uncanny valley’ in interactive form. Final Fantasy tells you that combat in this world is resolved in a separate space, and once you accept that, you can enjoy the rest of the game undistracted. In such a way, the more abstract form can paradoxically be the more immersive one, because it immediately tells you to engage in suspension of disbelief. You enter the experience already accepting the unrealistic elements – a realism ‘sunk cost’ of sorts – and thus they don’t detract from the game.
Similarly, when a similar phenomenon to the Oblivion problem occurs in Minecraft, eg. a monster being stuck below you while you shoot it, the experience seems less jarring. Minecraft doesn’t try as hard to pretend that it’s a real world and so your immersion isn’t as readily broken by such a problem. Embracing abstraction buys you that extra suspension of disbelief. No-one minds that Chess queens are more powerful than history would suggest. And nobody complains that Monopoly is unrealistic because of the lack of cities built in a square ring.
Combat encounters are just one example. While Oblivion’s fighting attempts to be simulatory, its conversation mini-game is purely abstract and could have worked well had more effort been put into it. Research in RTS and 4X games is often handled with a very abstract interface, for reasons of necessity, but there is surely scope for interesting choices to be made at that level. And obviously some games are almost entirely based on abstract models, such as turn-based strategy games like Civilization or management games like Transport Tycoon or Football Manager. On the surface they are still simulating something, but in simplified and discrete terms that can be easily reasoned about, both by the designer and by the player.
Other art forms, possibly because they have intrinsic limitations that can’t be solved with better hardware, have long since stopped worrying about trying to make the media more realistic. Painters and sculptors happily create works that are symbolic representations or even caricatures of what they are depicting, rather than just trying to be scale models. Writers commit entire stories to ink printed upon thin slices of wood without worrying that the reader can only possibly enjoy this story if they see it visually and audibly, because we know readers can see beyond the fact that they’re staring at text and allow their imaginations to create the world for them. Are game players not capable of that? Or perhaps we as game developers just don’t have as much respect for our players as other artists have for their audience?
For too long computer games have tried to be interactive films, acting as if we have to simulate some sort of realistic space in order for the game to be fun. I’d argue it’s time to get back in touch with the origins of games and embrace the make-believe and abstract aspects that embody what is unique to games, the ability to play with a set of rules and explore the interactions between them. By weaning players off the ‘playable Hollywood’ model and back onto a purer sense of ‘computerised games’ we can both broaden the appeal of games and garner more respect for the medium.
(Reposted from my other journal, with minor edits.)
I play a lot of Football Manager 2005. In this game I bought a lesser-known player from Norway whose position is listed as “Defender/Forward Left/Centre/Right”. So he seemed adaptable, but I never got good results no matter where I played him on the pitch, so I turned to Google for more information – maybe other FM2005 players had an idea, or maybe I could find news reports saying what position he plays in real life.
Instead, the first search result I get back is his Facebook profile. He has 679 friends, is single, and is a fan of Eric Cantona, the English Premier League, and, er, Tattoos and Piercings. What is the world coming to? Maybe Facebook is too important these days, with things such as these new ‘Like’ buttons popping up on completely unrelated sites, gathering your personal information and selling it to advertisers and so on. I’m not usually the kind of person to worry a lot about privacy concerns, but this could get a bit much.
This article by Raph Koster says it all really: ‘Facebook will become your ID card for reality’. “You will swipe your Credits card to buy your movie ticket using some credits you earned with the loyalty program in Farmville, and swipe it again to get into the theater. You watch the movie, which helpfully tells all your friends where you are and what you are doing.”
Whether all that comes to pass or not, with Facebook becoming so ubiquitous that everybody is on there (including lower league Norwegian footballers), could we be entering an age where Facebook games are what people think of when you mention computer games to them? Could we already be at that point? There are probably more people playing Facebook games than other computer games, but I don’t know if they would equate the two. Perhaps the age of specialised hardware for games is going to come to a gradual end, as web technologies start to close the gap on the standalone graphics APIs, with the benefit of a much larger audience waiting to play games in the browser.
Often we invent games by taking an existing game that works and adding a twist to it, or changing some aspect of its behaviour or theme to something different. But how would we create a game from nothing, without using an existing game as a framework? In particular, how do we make the game complex enough that it’s not trivially ‘solved’? (Set aside any game where the physical skill aspect is a significant component, or games with few or no choices where strategy is typically not intended to be a factor such as Candyland or Snakes and Ladders.) I find this a very difficult problem, but to make a start on it requires some appreciation of what makes a game interestingly complex in the first place, and the tools we typically use to create this complexity.
Generally it would seem that most games we play can be described using the extensive form notation from game theory. It’s not as intuitive a representation for individual or simultaneous decisions as normal form notation is, but extensive form captures the important notion of each move opening up a new game state and new possibilities. For example, in Chess there is a single starting state, but that branches out to 20 possible successor states, and from there the tree branches further to 400 states, and so on. I would suggest that the complexity of the game, in terms of how easy it is to ‘solve’ it or come up with the best strategy, is closely correlated with a player’s ability to understand this abstract tree of game states. If “a game is a series of interesting choices”, as Sid Meier has said, it’s the player’s ability to understand the ramifications of those choices that matters, which is to say that they need to be able to predict to a certain degree the effect those choices will have on the tree of game states. If the choices are too easy to make, they’re not interesting; and if you can’t see that some choices are clearly better than others, the game is equally uninteresting.
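This tree-of-states view maps directly onto code. Below is a toy minimax search with alpha-beta pruning – the standard technique game programs use to avoid examining every branch – run over a hand-written tree where leaves are outcome scores for the first player. The tree itself is invented purely for illustration:

```python
def alphabeta(node, alpha, beta, maximizing):
    # A node is either a leaf score (a number) or a list of child nodes.
    if not isinstance(node, list):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimising player would never allow this branch
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune: the maximising player would never allow this branch
    return value

# Maximiser to move; each sub-list is the opponent's set of replies.
tree = [[3, 5], [6, [9, 1]], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 6
```

The pruning never changes the answer – it only skips branches that a perfect opponent has already ruled out, which is what makes high-branching-factor games like Chess searchable at all.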
With that in mind, it seems that we embody this essential complexity in our games in a small number of ways:
- A combinatorial explosion of discrete state spaces. As noted above, the game state tree for Chess has 400 possible positions on turn 2, and a branching factor of about 35 on average for successive moves, making an exhaustive study of the possibilities impractical. Therefore you have to plan more generally, estimating which moves are good and which are bad, and deciding which parts of the tree to examine and which to ignore. The high branching factor means that even Chess computers need to do this, via something like alpha-beta pruning. A common way to create this large number of possible states while keeping the game easy to understand is to use an abstract game board, as in Chess. A 2D grid gives you an easier way to describe and reason about the state changes while keeping the quantity of options high. Compare the rule “Rooks can move vertically or horizontally as many squares as you want without jumping over another piece” with “A rook at position 1 can move to positions 2, 3, 4, 5, 6, 7, 8, 9, 17, 25, 33, 41, 49, 57. However it cannot move to position 3 if position 2 is occupied, nor position 4 if positions 2 or 3 are occupied, nor… (etc)” Being able to bring prior knowledge of Cartesian coordinates and the notions of adjacency and straight lines helps compress many possible state transitions into a much simpler rule.
- Hidden information. Typically this takes the form of a player’s hand of cards, or the fog-of-war in a real time strategy game, or even the act of forcing players to move simultaneously and thus having to estimate what the other person will do. This doesn’t make the state tree branch out any further, but it does make it much harder for a player to know which branches can be safely ignored. Compare the choice to leave your Queen under attack by a Pawn in Chess against advancing one of your middle-ranked pieces into enemy territory in Stratego. In Chess you will almost always benefit from taking a Queen with a Pawn unless moving the Pawn puts you at risk, which is quite easy to check for, so it’s reasonable to assume that is the move you should examine first and in most detail, making the decision quite easy. In Stratego however you simply don’t know whether it’s worth attacking that piece unless you have managed to build up some information about it, so you will need to consider all your other potential moves in roughly equal detail.
- Random factors. Randomness is like adding a 3rd player to the mix – whereas you can predict the actions of the 2nd player (your opponent), as he will always make the best move that he can given the limits of his capabilities, the 3rd player (luck) will play unpredictably, increasing the number of possibilities you must consider, in a similar way to the hidden information above. Indeed, randomness is really just a subset of hidden information, but it’s worth drawing a distinction between the two because hidden information can be potentially knowable whereas random factors cannot. (Though keep in mind the middle ground, eg. a shuffled deck of cards – towards the end of the game it becomes more deterministic.) Some people disparage random factors as bringing ‘luck’ into the game, but really it’s just about risk management. For example, Civilization would be a much less interesting game if you knew that your Phalanx unit would always defeat an attacking Militia unit. Instead you have to balance the probabilities: you have a 66% chance of beating one, a 44% chance of beating two in a row, a 29% chance of beating three, so how many Phalanxes do you need to defend against 3 or 4 Barbarians, given that it’s mathematically impossible to mount a perfect defence? This poses an interesting dilemma for players, again forcing them to consider a wider range of options as there is no obvious solution. Greg Costikyan has a great presentation which shows the different benefits a random factor can bring to a game and it’s well worth a read.
- Continuous state space. Games that utilise physics engines and real-world simulations exploit this to create a wide range of possible situations from a simple set of mechanics. Arguably this is the most popular method used by games today to introduce elements of strategy and tactics to what are otherwise computer-based sports relying on reflexes. It is similar to the Chess rule system in that people have an inherent basic knowledge of geographical space and direction and indeed the laws of physics which they can bring to the game to help them reason about it. Yet the continuous and almost infinite domain in which entities can move makes predicting the exact results a difficult endeavour requiring skill and expertise. You can point a gun in an FPS in an infinite number of different directions, and can stand in an almost infinite number of positions when you do it, and the best combination of these two factors at any one time depends on your opponents (and maybe your team mates) who are constantly moving within that continuous space, as well as your understanding of the implications of the relative positions and orientations. These considerations may not immediately seem relevant to the typical turn-based strategy game, but they are – the same positioning and aiming mechanics, albeit on a simpler scale, exist in games like Scorched Earth or Worms, for example. You obviously no longer have a simple tree of discrete game states here, but from the near-infinite set of possibilities you have to mentally build your own game tree, and select your tactics based on that. When you examine a continuous domain you can pick out discrete areas within it which you can reason about – for example, in FPS games you might mentally divide the area up into rooms, areas in cover vs. areas in the open, paths between weapons or power-ups, and so on. 
Then you plot a route from one point of interest to another and also use such points of interest when deciding how to maximise your ability to attack someone and minimise their ability to attack you.
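The Phalanx-vs-Barbarians arithmetic from the random-factors point boils down to a few lines. This is only a sketch of that risk calculation: the 66% figure comes from the text, but the independence assumption and the function name are mine for illustration, not Civilization's actual combat model.

```python
# A sketch of the Phalanx-vs-Barbarians arithmetic, assuming each combat
# is an independent event with a fixed 66% chance that the defender wins.
# The independence assumption and function name are illustrative only;
# this is not Civilization's real combat model.

def survival_chance(p_win: float, attackers: int) -> float:
    """Probability that one defending unit beats `attackers` units in a row."""
    return p_win ** attackers

for n in range(1, 5):
    print(f"vs {n} attacker(s): {survival_chance(0.66, n):.0%}")
```

Even this toy version shows why there is no obvious answer: each extra attacker multiplies the uncertainty, so the player is forced to weigh up how much defence is "enough" rather than look up a guaranteed result.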
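The points-of-interest idea can be made concrete: once you have carved the continuous map into discrete nodes, plotting a route between them is ordinary graph search. Everything below – the node names, the distances, the map itself – is invented for illustration.

```python
# A minimal sketch of discretising a continuous FPS map into points of
# interest and planning a route between them with Dijkstra's algorithm.
# The map, node names, and distances are entirely invented.
import heapq

# Hand-picked points of interest, with rough distances between them.
graph = {
    "spawn":        {"health_pack": 4, "open_yard": 2},
    "health_pack":  {"spawn": 4, "sniper_ledge": 3},
    "open_yard":    {"spawn": 2, "sniper_ledge": 6},
    "sniper_ledge": {"health_pack": 3, "open_yard": 6},
}

def shortest_route(start: str, goal: str) -> list[str]:
    """Cheapest path between two points of interest."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, dist in graph[node].items():
            if neighbour not in seen:
                heapq.heappush(queue, (cost + dist, neighbour, path + [neighbour]))
    return []

print(shortest_route("spawn", "sniper_ledge"))
```

The interesting part is not the search itself but the modelling step above it: the player (or an AI) decides what counts as a node and what the edge costs mean – distance, exposure, ammo – which is precisely the "mentally build your own game tree" process described above.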
The end result of all these approaches is that instead of holding the whole game tree in your head and calculating the correct path through it, you have to look for patterns and use those to guide your judgement. Your basic skill at the game is essentially your knowledge of the game’s patterns, and the depth of the game is proportional to how hard it makes it for you to understand the full game state tree.
But there’s an interesting meta-game here too: since you know it’s impossible to know the whole game tree, you know that your opponent is in the same position, and must therefore also be relying on his own patterns to judge the game and choose his move. This means it’s worth trying to guess your opponent’s patterns as well, and often that is of more use than attempting (usually in vain) to find the optimal approach – after all, in any one situation you only have to do enough to beat your opponent, not to beat all possible opponents that could exist. (“Remember, when you and a halfling are being chased by a hungry dragon, you don’t need to outrun the dragon…”)
This post is long overdue, for which I apologise. I hope to resume a more regular service before the year is out!
There’s a post on Penny Arcade from a few weeks back complaining that a certain game, Brütal Legend, seems to masquerade as one type of game (ie. a Real Time Strategy game) while actually being something else (a third-person action-RPG). The tone of the article is that games shouldn’t be mixing one type of gameplay with another, and the main complaint is that players expect a game to play a certain way.
Personally, I feel this is a false problem caused by narrow-minded reviewers and fans. Games won’t get taken seriously until it’s accepted by both groups that they don’t exist merely to fit into pre-existing categories – although they certainly can choose to do so – but are individual creations and should be judged as such. There’s nothing wrong with genres as descriptive labels, as they carry a lot of useful semantics. What’s wrong is when they move from being descriptive to prescriptive.
Nobody complains that a film’s exact genre wasn’t explicitly specified before they watched it. Crossovers and variations are par for the course and often encouraged. Film reviewing and criticism moved past single-genre categorisation long ago, and a film’s entry on IMDB typically lists several genres, reflecting the way in which different aspects are combined into each whole. Games should surely be seen in the same way.
Tycho at Penny Arcade should really know better and I am disappointed at his attitude, wanting to classify games as Either This Or That and not wanting people to experiment with crossovers. Imagine if we’d thought that way 15 years ago. To paraphrase his quote towards the end of the article, “I love action games, but I don’t want to play an action game while I’m playing a strategy game”. Bye bye to the entire RTS genre then! Luckily, some people at the time noticed that although the strategy aspect did suffer a little, you gained something from the urgency of the action. And thus a new approach was born. Mixing a bit of this with a bit of that is how biology has improved species for millennia.
It’s arguable that in this case the tutorial doesn’t educate the player adequately and therefore gamers fall back to genre stereotypes, coming undone when they hit the boundaries of the metaphor. But it’s nobody’s job to explain a game in terms of other games or existing genres. If all games are to be presented and judged that way then we’re in for a boring future bereft of creativity where games can only borrow from within their genre to ‘innovate’ (if such a word could be said to apply). Cross-pollination between genres would seem to be frowned upon and God help those who might actually come up with a completely new game type. I don’t think that’s the future we want to see.
Something interesting happened today. For a couple of years, my colleagues and I had mulled over a certain ‘killer feature’ that we wished our networking layer had, as it would have made our time implementing gameplay features much easier. Today, while doing some long-awaited R+D work on said layer, I found that the feature already existed. The sheer size of the codebase meant that what we’d wanted for years had lain dormant and undiscovered all that time.
In actual fact, its only call site was commented out with a note saying that it was too slow to use in practice, but I have a feeling that it could have been easily optimised and saved us man-months of work.
Why hadn’t we found this before? Well, it’s a very large code-base and most of the people who originally developed it are long gone. Unfortunately there was obviously not much time for or interest in documentation back when it was written, and we’re too busy making actual games to be able to invest time in checking all these low level components which already work.
I see a couple of lessons in this:
- Enforce documentation as company/team policy. When the brains behind the code are no longer at the company, that’s when you need documentation to replace that lost knowledge. Unfortunately it’s too late to write it then. Designate a proportion of a coder’s time to documenting their code, so that an equal or greater proportion of some future coder’s time can be saved. (Failure to do this also results in scheduling problems, where some future coder is expected to perform a task as quickly as the original coder did, despite lacking the insight into the code that the first person benefited from.) I know it’s hard to sell documentation time to a publisher or whoever, but I’m sure that many projects lose more time from problems that arise when coders don’t fully understand the code that they’re working with.
- Write less code. Sometimes, documentation just isn’t going to happen, or is inadequate, or whatever. One way to mitigate this is to write fewer lines of code. The fewer lines of code you have, the less likely it is that there are large areas of the codebase that nobody looks at or understands. It also brings reliability benefits – if you halve the number of lines in your codebase, it stands to reason that each file, function, and line gets read twice as often on average, and twice the eyes means more of the bugs get spotted. Cutting the lines of code may not be easy to do, but most codebases have some duplication they can lose. In C++, templates and polymorphism can do a lot for you here if you think ahead rather than resorting to copy and paste. If you’re lucky enough to move to a higher-level language then you get this benefit for free, since most are far more concise than the equivalent C or C++.
There are interesting implications for team size, too. To reduce the risk from lost code knowledge, you ideally want a lot of coders, and to rotate them round several subject areas. This way, knowledge is spread out redundantly and the loss of any one coder should not impact the general understanding of the system. On the other hand, large teams start to incur prohibitive communication costs, where documenting and explaining your modules can end up taking longer than creating them. And rotating people around areas of the code tends to reduce code quality, both through making people less concerned about defects in their code (since they may never have to touch it again), and through not being able to become an expert in a certain area.
You need enough programmers to be able to develop the required system in the timescale available, but with too many you’re going to have dark corners of your codebase that few understand, and those corners can actually cost you time. It may be that there is an optimal number of programmers for any given software project. Does it vary according to the programming language in use? Or maybe according to the type of software being developed?
I recently made a post on this issue over at Gamedev.net, and thought I’d clean it up a bit to post over here, as I think it’s worth talking about. Designers are often looking for new and better ways both to create and to present their design documents, and rightfully so. Just as programmers may try different languages, techniques, tools, and IDEs, designers should be looking at better tools and methods to allow them to capture their ideas, organise and prioritise them in a sensible way, and present them to a variety of audiences – perhaps the most important of which, during the implementation phase of the project, is the programming team.
Our team used a Wiki for the current project, and while this choice carried several benefits, it also had several shortcomings. The positives barely need expressing to anybody familiar with Wikis – they allow collaborative work, ad-hoc reorganisation, easy cross-referencing, automatic outlining and table-of-content generation, built-in version-control, etc. Therefore I will expand upon the negatives. None of these problems are fatal, but it’s easy to fall foul of them without realising it if you’ve not used one before. So here’s what I learned from my personal experience.
Every page should be categorised. Use sub-categories if they make sense. In this modern world of tagging things rather than grouping them into hierarchies, or following Google’s “search not sort” mentality, it’s easy to overlook the categorisation. Don’t do this! Otherwise there can be parts of the design that nobody sees. Nobody is going to recursively click every single link in the document, nor is anybody going to exhaustively read every page that is returned for any given search term.
Instead there should be discrete categories and anybody who needs to understand a certain aspect or feature of the game should quickly be able to see a clearly marked subset of the document that is relevant to them. Following from this, if you have a design team who are collaborating on the document it is useful if individual designers take sole responsibility for certain categories, ideally the features they designed or primarily work with. Each category should stand alone as a fairly complete sub-document, with external links existing just for reference. Otherwise you end up needing to be familiar with the whole doc to get anything done, which means you’ve lost most of your benefits over a normal linear document.
Be prepared to produce hard copies of the categories. Obviously this is not really an issue if you work remotely. But if you share an office, being able to print off parts of the document conveniently is vital. A hard copy is much more convenient to use in meetings, for writing notes on, etc. Nobody wants to be pulled into a meeting about Feature X with no way of bringing the specifications for Feature X along to refer to. And if you can’t print off the relevant part of the document, people start designing features outside of the wiki for convenience, and you don’t want that, because suddenly your design becomes ephemeral and ethereal – an email here, a JIRA/Bugzilla comment there, a note on a pad somewhere else – because everybody was writing without direct reference to the original document.
Ensure that pages can be locked against edits, and are when appropriate. Programmers will follow the specification as closely as they can and will clear up ambiguities with the designers as needed. But the very nature of expressing software in natural language means that occasionally a designer will write something and the programmer will implement it in a way that fails to catch the spirit of what was written while nailing the syntax 100%. This is unfortunate but is essentially a problem that designers and programmers can work through, the designers learning to be more explicit and less ambiguous with their descriptions and the programmers learning to spot situations where this might be the case. A worse problem is when designers are free to work and rework their design during implementation. A collaborative document like a wiki makes this far too easy and tempting to do. Designers, wanting the best for the game and to provide the best service to the programmers, have a tendency to go back and ‘improve’ and ‘clarify’ their designs after you’ve already started or even finished implementing them. They think they’re being helpful or diligent, but actually they can cause a major discrepancy between design and implementation. Of course, in their head the discrepancy was already there, but in the programmer’s head the design has appeared to change significantly. When this happens during implementation, it can be frustrating for the programmer and cause some work to be scrapped and rewritten. Worse is when it happens after implementation, and the first the programmer hears of it is when the QA department finds the discrepancy and files a bug against it, implicating the programmer for not following the design.
Once designs are given to the programmers to work on, lock the page, and force future amendments to go through an approval process. The history and revision control feature is not enough for this. QA are not going to rigorously compare the coder’s source control commit times to the designer’s wiki revision history when noting a discrepancy between ‘The Design’ and ‘The Implementation’. More than the others, this is a management issue rather than a technology issue. The same problem could conceivably happen with any design doc. However the wiki technology makes it much easier to make little adjustments here and there without anybody noticing until the damage is done.
Ensure that all pages have one owner. A good design is often a big design. In Wiki terms this can mean tens or hundreds of separate pages. As the project moves forward, some features grow, some shrink, some are removed or replaced entirely. Obviously all these changes should be signed off with the implementors as mentioned already, but then the document must remain up to date. On a regular basis coders and QA staff will use the wiki as a reference, sometimes going back to a page they’ve not used for weeks, or using the search functionality to uncover the relevant page. What is critical therefore is that what they find must be relevant and pertinent to what they’re working on. Otherwise their time is wasted: at best, they have to search again (and again), and at worst, they take some action based on this outdated design which wastes their time and potentially that of other people too.
For this reason, every wiki page should be owned by a person on the design team, and that someone should keep the page updated in sync with any changes made to the design so that the information is always correct. If you have the information categorised adequately then each category can have an owner and this becomes quite simple. But without someone being accountable, it’s assumed that someone else will keep it up to date, and this won’t happen because nobody will exhaustively check every page to see if any are out of date. If it’s not practical to have a single owner for a feature or category (as might be the case if it is being co-designed by 2 people, for example), designate one of them as having responsibility for the wiki pages.
Be disciplined regarding scratch-pad, discussion, or brainstorming pages. Wikis are a good middle ground for discussing and evolving a design. They are more convenient than email in terms of having all the information in front of you, and less time-consuming than meetings where all but one person is doing nothing but listening. Unfortunately this means that the game design document can end up with annotations, discussions, vague ideas, and other noise scattered in among the signal. This can make it hard for programmers to see what is the agreed design and what was just discussion and ideas relating to the topic. Sometimes the problem is in the other direction and key design decisions are left stated in these discussions and not migrated to any clear specification area.
Use a separate namespace if your wiki provides such a thing to keep the normal searches free of this noise. The MediaWiki ‘Talk’ or ‘Discussion’ pages are a good way to handle this, leaving the main pages with the official docs. At the very least flag the page with a big disclaimer message or template that quickly tells programmers or QA that the content of this page is not gospel and is subject to change without notice. Designers should migrate authoritative information from these pages to the proper specification pages as appropriate.
Hopefully that is all useful to someone!
Sorry for not updating recently; real-life has been a bit busier than usual and I’ve had little time to ruminate on game development of late. But then this was always going to be a sporadic endeavour rather than a regular column so have patience! I’ve plenty of posts drafted up, they just need editing and finishing off.
Until then, a note on the length of time required to enjoy or complete games today. Personally I love RPGs, and especially those permitting a large degree of freedom for exploration such as Oblivion or the Might and Magic series. Even with more linear games I tend to have a cautious play style, spending 30 minutes on Doom 2 levels that had a ‘par’ time of 2 minutes, spending 8 hours on Thief 2 levels that others finished in 1, and so on. Recently I clocked up my 200th hour on Oblivion, and although I am tiring of it somewhat, it still holds a lot of interest and novelty to me at this point. Strangely, the time I spend on these games far exceeds that which I spend on explicitly ‘replayable’ games, such as Sid Meier’s Civilization, TrackMania, or any number of RTS games.
Anecdotally, many people, perhaps older gamers especially, seem to want shorter and more focused entertainment these days. Is this just because they have played so many games and have less tolerance for obvious filler content? I’ve watched young children play certain platformers where they’ll happily try the same jump 20 times before they get it right, while I would probably have lost patience before half that many attempts. Or is it simply because older gamers have more free time to spare and would rather see more variety and experience more victories in that time than playing one very long game would allow? It certainly seems the case that the industry is providing us with such games – and even when playing time isn’t decreased, the play area has shrunk and traded wide expanses for fine detail: compare Oblivion’s world of 16 square miles to Daggerfall’s 62,000, or look at the tiny compartmentalised levels in Thief 3 when compared to the sprawling cityscapes of Thief 2, such as the ‘Life Of The Party’ mission (the size of which even this speed-run manages to portray effectively).
How does this trend for shorter games compare to the hours sunk into MMOs like World of Warcraft? Those of us who grew up with MUDs will recognise that this phenomenon doesn’t necessarily come from the presentation values, which in MUDs were almost non-existent, or even from the quality of the game design, since most MUDs gave you the equivalent of a text adventure with the worst parser since 1982 and an unbalanced version of the Dungeons and Dragons combat rules. Was it almost entirely from the socialising and the exploration factor? Maybe the combination of the two? (Brian Green has some thoughts on the relevance of different player types in a post made a couple of years ago.)
As much as I love long games, I know I could enjoy so many more if they were quicker to complete. I currently have over 30 games installed which I play to a greater or lesser degree, but the two I spend the most time on are Deus Ex and Oblivion. These preclude me from making much progress on some other games (Bard’s Tale 2, Fallout, and Football Manager 2005, to name three that I’ve put on the back-burner for now), since the longer narrative-driven games require a certain degree of attention and continuity. It’s easy to forget where you stashed some equipment or which NPC you meant to talk to next if you only play the game once a month, for example. So I suppose that is the downside of games that are more demanding of time. Modern games are getting better at maintaining this state for you – both Oblivion and Deus Ex track your objectives, whereas I have reams of hand-drawn maps for the Bard’s Tale games – but there is only so much they can do without giving you an in-game notepad to scrawl notes into when you save and quit for the night.
What’s your preference – short play times or longer? Do both essentially give you the same value for money and the same payoff for your hours invested? Do you feel cheated by a game that ended all too soon, or by apparent filler put in to space out the content?
Apologies in advance for the length of this entry; there’s a short summary in the last paragraph if you’re pressed for time!
A few weeks ago, in a moment of extreme boredom, I stopped to watch the snooker player Stephen Hendry score the maximum possible ‘break’ of 147 points. (13 minute video.) Snooker is not the world’s most exciting sport unless you are a devotee, and I am certainly not a devotee. Yet with my game designer hat on, the game of snooker and the way Stephen Hendry had to play to achieve this score both become quite interesting on an abstract level.
Compared to most computer games, it’s very simple in terms of mechanics. To borrow one of Chris Crawford‘s elements of terminology, it only has one ‘verb’ – use the cue to hit the white ball at coloured balls, with the intent of ‘potting’ the coloured balls in one of the six pockets. But then several rules come into play based on which balls you hit and where they end up, dictating the score you get and whether you get another shot or not. Strategies naturally emerge from a player’s awareness of the rules, balancing the need to score points with the desire to deny the opponent points, eg. by playing safety shots. Yet this all comes about when there is only one significant way to interact with the game. Compare that to first person shooters, for example, where you often have several different types of weapon (typically allowing for both direct and indirect fire), consumable items for health and defence, different stances/poses/actions to cross obstacles or use cover, and so on. RPGs are perhaps even more full of verbs: you might have melee and ranged combat, spells to cast, items to pick up, potions to drink, people to interact with and talk to, horses to ride, maps to view, etc. These games are all well and good, but it seems apparent that we can stand to learn something from examining how just one action can produce interesting gameplay.
Probably the main thing to take away from games like snooker, pool, billiards, etc., is that the main action can be applied in slightly different ways to vary the outcome – adverbs, if you will. Choosing the direction to hit the white ball in is like choosing who to shoot at in an FPS, but you also get other choices here. You can hit the ball harder so that it travels further after striking the second ball. You can also apply spin, making the white ball change direction after contact, or even sending it on a somewhat curved path in the first place. Such choices are too often absent in video games – you choose who to punch or to heal or to send a magic missile towards, but often the choice begins and ends there with the ‘what’, giving you no say over the ‘how’.
It’s apparent that once a player gets good enough to pot the balls quite effectively, they have to start planning ahead and looking to how they can not only pot a ball, but make it easier to pot a subsequent ball. Again, this is where the adverbs come in and the player will apply force and/or spin to attempt to not only pot the coloured ball they’re aiming at but to position the white ball effectively in order to pot the next. But there are trade-offs here, since hitting the ball too hard or with spin affects accuracy, and hitting it too softly might mean the coloured ball you’re attempting to pot doesn’t reach the pocket at all. You have to compromise your current shot a little in order to hopefully benefit your second shot a lot; but if you compromise your current shot too much, you don’t get that second shot.
A classic game from the 80s that had an aspect of this was Laser Squad (play it online here), forerunner of the X-Com games. In this simple turn-based tactical squad combat game, you move soldiers around and shoot at each other. But when you shoot at somebody, you get a choice of either an aimed shot or a snap shot. The aimed shot is more likely to hit, but the snap shot takes less time and therefore gives you the chance to shoot again or find cover. You have to weigh up which is most worthwhile, taking into account how many successful shots you think you’d need to take down the opponent, how their distance from you affects the chance of hitting them, how much ammo you have left, and so on. You know you want to shoot the enemy, but there is an interesting choice in how you go about it.
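That trade-off can be framed as a quick expected-value calculation. The hit chances and time costs below are made up for illustration – Laser Squad's real numbers differ – but the shape of the decision is the same.

```python
# A sketch of the aimed-vs-snap-shot trade-off. All numbers (hit chances,
# time costs, time units) are invented; Laser Squad's real values differ.

def expected_hits(hit_chance: float, time_cost: int, time_available: int) -> float:
    """Expected number of hits if all available time goes on one shot type."""
    shots = time_available // time_cost
    return shots * hit_chance

time_units = 12
aimed = expected_hits(hit_chance=0.7, time_cost=6, time_available=time_units)
snap = expected_hits(hit_chance=0.3, time_cost=2, time_available=time_units)
print(f"aimed: {aimed:.1f} expected hits, snap: {snap:.1f} expected hits")
```

With these invented numbers the snap shots win on expected hits, yet the aimed shot can still be the right call when one reliable hit matters more than average damage – which is exactly the judgement the game leaves to the player.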
Like many other abstract games such as chess, you can plan several moves ahead in snooker. But the combinatorial explosion in snooker is continuous rather than discrete and you never truly know exactly where the white ball is going to be for your next turn until it stops. This is an interesting property of any game that models physics, a system that is understood by players well enough to make broad predictions easy but exact predictions almost impossible. This allows for a sliding scale of competence as players get better at judging the outcome of their actions without there ever being a point of total mastery. (I hope to elaborate on this sort of gameplay aspect in a future post.) Games like Armadillo Run and Crayon Physics are very literal examples of this sort of mechanic, but there can probably be more subtle examples too.
Snooker allows for the possibility of combinations (4:38 to 5:14 in the original video) where you involve more than just two balls in the action, typically bouncing one coloured ball off another to pot one of the two. This is generally a more risky shot but sometimes is the only one on offer, or perhaps offers a potentially greater reward if successful. This sort of mechanic is a bit more common in video games, especially in fighting games where explicit combinations are very common, often giving you the opportunity to score more damage but taking longer and exposing yourself to more risk in the process. But it’s rare to be in a situation where you absolutely need to pull off the special move, and more often it’s better to keep things simple when losing. Would games be more interesting if you were occasionally compelled to use the high-risk, high-reward strategy? The infamous decapitation attack of the classic 8-bit fighting game Barbarian might not go down well with modern players, who have little tolerance for such a severe penalty for one small lapse of concentration, but in a player-vs-environment game where the NPCs lack such fragile egos it could be an interesting and rewarding way for a player to snatch victory from the jaws of defeat.
Another interesting aspect I noticed in the snooker video was that the pink ball not being in its prescribed place on the table was considered quite important. Due to the standard initial layout and the routine of returning coloured balls to their spots, snooker players get used to balls being on their spots most of the time. A similar phenomenon can be observed in some first person shooters, where the spawn locations of weapons and health form a crucial part of a player’s route around the map. But just as a good player must know how to play the pattern and get from one such point to another, they must also avoid being a sitting duck if the powerup has already been taken or the position is occupied. Expressed as an explicit yet abstract design mechanism, this is setting up a distinct pattern that is visible to players, while allowing it to vary. This gives players several levels of mastery: first, they learn where the items are; second, they learn the optimal routes between them; then they learn how to improvise if something isn’t where they expect, or is too heavily guarded, and so on.
Finally, snooker’s concept of a maximum break relates to the metagame. Once Stephen Hendry had reached 80 points, he had effectively won the game due to there not being enough points left for the opponent to catch up. Technically, it would be possible for him to lose, should he commit enough fouls that award points to the opponent, but in practice such an occurrence is rare enough that the current frame is usually conceded by the opponent once this stage is reached. However, in snooker it is common to play on until the current break finishes, even though the eventual score has no effect on the game in hand. Prize money was at stake here for getting a maximum break, and further cash is available to the player who scored the highest break of the tournament. Yet in actual terms of the rules the score in each break is essentially irrelevant. Something similar can be seen in things like secret areas in 3D explorer-shooter games, tracking how many kills in a row you have in Unreal Tournament, or seeing how far through Thief you can get without being seen at all. All these aspects sit somewhat on top of the game proper, but provide for an extra view on the same gameplay.
One interesting aspect of the break in snooker is that the score is largely independent of the quality of your opponent. This makes it relatively meaningful to compare them no matter who you play against. Computer games with a similar mechanic could have an online high score table, providing the score is accumulated against a standardised opponent or in a system where the opponent’s strength is less important. This takes the player beyond the individual game they played in and adds them into the set of all gamers who play that same game. This adds the asynchronous multiplay aspect that people like Ian Bogost talk about, and which seems likely to grow in popularity via web-based systems such as Facebook and Twitter. (This was done quite well in the form of play-by-mail games in the past, so it will be interesting to see if any of the techniques used there make a comeback.) XBox achievements work along much the same lines. And TrackMania is an example of a game that provides the typical synchronous multiplay when you race other players, combined with an asynchronous aspect in terms of the local and global rankings that persist from race to race.
All this suggests to me that a lot can be done with just one game mechanic. Although adding extra mechanics and observing the varied dynamics that emerge from their combination is a perfectly valid route to adding interest to a game, more can be done with individual mechanics. Find some ‘adverbs’ to allow the player to make risk/reward trade-offs, possibly buying future success with increased risk now, or vice versa. Make outcomes understandable but not trivially predictable, so that players can make plans and learn how to anticipate the outcomes. Provide ways in which the mechanic can be re-used or combined to offer high rewards or a difficult escape route out of a dead-end situation. Set up patterns and predictable situations that players have to learn, but vary them on occasion so that adaptability and resilience to change become useful skills that can set the experts apart. And see if it’s possible to add some sort of scoring system that sits alongside the game, not necessarily influencing the gameplay directly, but providing metrics that players can choose to compare themselves on, adding interest and replay value.