Do software development *best practices* fit game development?

There are lots of things that software programming and management books recommend, and I believe many of them are at best unhelpful for game development and at worst flat out wrong.

I mentioned this to Joel at Joelonsoftware.com, and a few weeks later he posted this article. Whether my e-mail had any influence, I’m not sure.

For example, several books on programming, Peopleware among them, talk about the need for offices for every 1 to 3 programmers and claim that a programmer’s productivity is inversely proportional to the number of distractions. That might be true, and it’s fine when you are writing some well-specced database app or backend web app. In games, though, the product is entertainment, something that is fun, not just something that is functional. Can you imagine making a movie where the cameraman sat in some other room with no interaction with the rest of the crew?

I used to believe in the office myth. But in my experience, the best games require extreme interaction between the designers, artists and programmers, and putting people in separate offices severely hinders that interaction. I’ve seen the best collaboration when artists, programmers and designers sit within viewing distance of each other’s monitors. Then a programmer can look over his shoulder, see the frustrations of the artist and solve them, or see them struggling with something and suggest a better way. The artists can see the latest behavior or effect and instantly give feedback, or whip out some particle textures, small adjustments to a model, a spawn point, or whatever is needed to make the game the best it can be.

Another myth is the myth of “code first, optimize later”. That’s all fine when your goal is some business app or backend app. Unfortunately, in games the performance of your code dictates what the artists and designers are allowed to make. Can trees be 1 poly each? 6 polys? 100 polys? Can a view show 80K polys? 200K? 300K? Without that knowledge the team can’t make the best game. Either they are making art with less, not taking full advantage of the engine, or they are making art that will tax the engine, resulting in a slow frame rate and complaints from players.

Even if you give them a goal, say 100K polys a second, but your code is currently only doing 60K and you just keep telling them you’ll get it to 100K before you ship, how are the designers supposed to know if the game is fun? It would be seriously hard to tune an action game for that *fun* factor with 100K polys running on an engine that can only handle 60K unoptimized.
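To make the budget idea concrete, here is a minimal sketch (my own invented numbers and names, not anything from a real engine) of the kind of per-view polygon budget check that lets artists find out immediately, not at ship time, when a scene exceeds what the engine can actually render:

```python
# Hypothetical per-view polygon budget check. The budget figure and
# the mesh names below are made up for illustration.

POLY_BUDGET_PER_VIEW = 100_000  # what we promise the engine will handle

def check_view_budget(meshes):
    """meshes: list of (name, poly_count) tuples placed in one view.

    Prints a warning when the view exceeds the budget and returns
    the total polygon count either way.
    """
    total = sum(polys for _, polys in meshes)
    over = total - POLY_BUDGET_PER_VIEW
    if over > 0:
        print(f"OVER BUDGET by {over} polys in this view")
    return total

view = [("terrain", 40_000), ("trees", 30_000), ("characters", 25_000)]
total = check_view_budget(view)  # 95,000 polys: within budget
```

A check like this only helps, of course, if the budget it enforces is one the engine can really hit today, which is the whole point of the paragraph above.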

There are all kinds of other coding practices that don’t fit games in my opinion. One would be generic programming: lots of classes, lots of indirection, lots of flexibility. Of course, maybe like Joel’s “Five Words”, the world of games is subdivided. For example, PC game programmers seem to love to have their code be as generic as possible. That their games would never run on a console doesn’t bother them, nor do load times of 2 or 3 minutes off a DVD. On a console, where you can optimize to the max since you know the hardware exactly and can get every last poly out of the machine to give your game that *edge*, that advantage is lost if 20% to 50% of your frame is spent indirecting, or if half a meg of memory goes to lookup tables and names instead of models, textures and animations. I’m not saying there should be no indirection, only that there is a vast difference between how generic you can be on today’s consoles vs today’s PCs.
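As a toy illustration of the lookup-table cost (everything here is invented for the example; a real engine would do this in its build pipeline, not at runtime): the generic approach looks assets up by string name, paying for hashing on every access and for the name table itself, while the console-style approach resolves names to plain integer indices once and ships no names at all.

```python
# Hypothetical comparison: string-keyed asset lookup vs. direct
# integer indices. Names and counts are made up.

import sys

# Generic/flexible: every lookup hashes a string, and the name table
# itself occupies memory that could have held textures or animations.
assets_by_name = {f"asset_{i:04d}": i for i in range(1000)}

def lookup_generic(name):
    return assets_by_name[name]

# Console-style: names were resolved to integer indices offline, so
# at runtime we just index a flat list -- no hashing, no shipped names.
asset_list = list(range(1000))

def lookup_direct(index):
    return asset_list[index]

# The name strings alone cost real memory:
name_bytes = sum(sys.getsizeof(k) for k in assets_by_name)
print(f"name strings alone: ~{name_bytes // 1024} KB for 1000 assets")
```

On a PC that overhead usually disappears into the noise; on a machine with 32 meg total, it is models and animations you didn’t ship.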

One reason, as I mentioned, is probably that consoles are fixed. You know every Xbox, PS2 or GC will be exactly the same as every other one, so you can push them to the absolute limit of their performance. Too much genericness will take that performance away. For PC games, on the other hand, you have no idea what the user will have: a 500MHz P3 with a TNT2 card or a 3GHz P4 with an ATI 9800. You can optimize, but you have no way of knowing what’s enough, and you can always just tell the user to get a faster machine or video card.

Same with memory. You know a PS2 has exactly 32 meg of RAM, 4 meg of VRAM, etc., so you can fill every last byte with valuable data. On a PC you don’t know if the user has 32 meg or 1 gig of RAM, but you do know the OS will page if it needs to, and you can always just make the game and then write on the box “you need 256 meg of RAM to run this”. You don’t have that option on a console; you need to be memory conscious right from the start. So, whereas on a PC you might not consider RAM usage at all, just start making the game and decide later how much RAM to require, on a console you want to start right at the beginning asking how you can fit as much data as possible into a fixed amount of memory, and your artists need to know what their polygon and texture budgets will be.
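In practice that kind of up-front planning can be as simple as a budget table agreed on day one. A minimal sketch, with invented round numbers that happen to sum to a PS2-sized 32 meg (a real breakdown would depend entirely on the game):

```python
# Hypothetical fixed-memory budget plan for a console title.
# All category sizes below are made-up round numbers.

TOTAL_RAM = 32 * 1024 * 1024  # e.g. a PS2's 32 meg of main RAM

budget = {
    "code":        4 * 1024 * 1024,
    "models":      8 * 1024 * 1024,
    "textures":   10 * 1024 * 1024,
    "animations":  4 * 1024 * 1024,
    "audio":       4 * 1024 * 1024,
    "scratch":     2 * 1024 * 1024,
}

def remaining(budget, total=TOTAL_RAM):
    """Bytes left unassigned; negative means the plan already overflows."""
    return total - sum(budget.values())

print(f"unassigned: {remaining(budget)} bytes")  # 0 -- fully accounted for
```

The texture and model lines in a table like this are exactly the budgets the artists need on day one, before a single asset is made.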

My point is, when you read books on best practices, take them with a grain of salt. They were not written for game development, and their authors did not face the same issues. Not all of the ideas in them fit.