Code Re-use and Consequences of the Hidden Costs of Development

There is an often ignored, but still apparent, contradiction in the game development process: attempts at code reuse often seem to end up costing more time (and thus money) than writing new code. Is this really the case? How widespread is it? Most importantly, if it is true, why?

Anecdotal Example - Game Development

First, an example of the pattern that drew my attention to the issues discussed here: problems that arose repeatedly during game engine development.
It seemed that developing a game engine rarely ended well. While there are reasons for that with no relation to code reuse, game engines strive to be well architected and to avoid code duplication, so they ought to be a poster child for code reuse.

It is common to find that developers who have invested a great deal of time and money in engine development do one of these things:

  • Abandon their internal engine and turn to middleware
  • Go bankrupt
  • In search of financial security, sell themselves to a bigger player (who promptly kills their engine development)
  • Sell their engine to other companies as a middleware product

Only the last outcome seems to indicate that developing 'game engine' software is a wise decision. Currently, it's much more the vogue for companies to employ middleware rather than develop their own engine internally. Many arguments have been made in favour of middleware, so there's no point in reiterating them here. The case of 'game engine development' is just one of a number of similar cases or variations. The rest of the discussion is not limited to game engines at all, and should be generally applicable to any situation where the authoring of code intended for reuse is planned.

In conclusion, game engine development is one anecdotal example of problems with code reuse that are mirrored, to varying extents, at both the small and the large scale.

Reused Code Is Still Not Free Code

The benefits of code reuse have been lauded since the early days of programming, and there's no doubt that progress in the software industry would be all but impossible without reuse and refinement of code. However, it has been noted that in practice it's often difficult to realize the benefits of code reuse.

It is often implied that the only cost of code reuse is the time taken for the user to become familiar with the API. The implication is that reusable code is as cheap (or almost as cheap) to produce as single-purpose code.

I hope to outline below some reasons why code reuse presents other difficulties that must be overcome to exploit its benefits.

It's well understood that there is a cost in taking a piece of software from working program to 'product'. This issue was documented and explained in detail in Fred Brooks' famous (and now ancient) book The Mythical Man-Month. What most people remember from the book is that adding more people to a project can often slow it down rather than speed it up. However, one of its key assertions is that turning a software routine into something fit for use by others is an act of creating a product, and that this act of productization is many times more expensive than the simple act of creation.

A genuine product must be developed more carefully: it must attempt to deal with usage cases that are distinctly non-obvious, it must be tested, and it must be documented. Productized software must typically cope with a wider range of inputs and applications than software written for one special and specific purpose. This applies from the smallest, most basic routine upwards. This (necessary) generalization affects not only the cost but also the performance of the final product.

When we look at well-known software products in the marketplace (Microsoft Windows, Photoshop, Microsoft Word, the QuickBooks accounting package), we can see that they have been expensive to develop, and that they do not perform as well as more specialized products, either in terms of usability or speed. Windows in particular is burdened both by its immense and ever-expanding feature set and by its immense and ever-expanding API set, which must be made available to external developers if Windows is to be of any use. It is the most extreme example of a general-purpose software product: it cost an immense amount to develop, and in return it has generated considerable rewards for its developer and users.

When we look at the micro scale, we can see, even in the case of a simple (geometric) vector class, that making it product-worthy is a non-trivial exercise, loaded with negative consequences that we must endure to reap the eventual benefit of code reuse.

Considering just one simple operation, vector normalization, we can see at once that there are several issues:

  • the designer must decide by what means the vector is passed to the routine; some means are faster, others safer, and in some cases the best approach is situational
  • the designer must decide how to return the normalized result, or whether it will be written over the input; again, questions of safety and performance arise
  • the designer must decide how to inform the user of a failure to normalize the vector (if at all)
  • the designer must decide what to do with vectors that cannot be normalized
  • the designer must decide at what point the accuracy of normalization becomes inadequate
  • the designer must decide what costs to bear in handling the above issues, which will impact the performance of the operation
  • all this must be implemented with diligent care and attention
  • rigorous tests are required to ensure that the routine functions under a wide range of circumstances, most of which will occur rarely, if ever, in normal use

In this sort of scenario, the designer will most likely choose an approach that is safer rather than fastest, though several compromises are likely; they are not called compromises for nothing. Ultimately, a compromised design is employed for each component, with consequences and complexities for the final assembly that require additional documentation and understanding on the part of the user.
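The trade-offs in the list above can be made concrete with a small sketch. This is purely illustrative (the Vec3 type, function names, and epsilon threshold are my own hypothetical choices, not taken from any particular library); it contrasts a 'product' contract, which checks for failure and protects its input, with a 'single-purpose' contract, which assumes the caller has already guaranteed a valid vector:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal vector type, for illustration only.
struct Vec3 { float x, y, z; };

// Product-style contract: reports failure via the return value and leaves
// the input untouched. Degenerate (near-zero) vectors are rejected rather
// than producing garbage; the epsilon threshold is itself a design decision.
bool normalize_checked(const Vec3& in, Vec3& out, float epsilon = 1e-6f) {
    float len = std::sqrt(in.x * in.x + in.y * in.y + in.z * in.z);
    if (len < epsilon) return false;   // cannot normalize: report, don't guess
    out = { in.x / len, in.y / len, in.z / len };
    return true;
}

// Single-purpose contract: normalizes in place and assumes the caller
// guarantees a non-degenerate vector. No branch, no error path, no copy.
void normalize_unchecked(Vec3& v) {
    float inv = 1.0f / std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    v.x *= inv; v.y *= inv; v.z *= inv;
}
```

Neither contract is simply 'right': the checked version pays a branch, an output copy and a documented failure mode on every call; the unchecked version shifts the burden of proof onto every caller. Multiply that choice across every routine in a library and the compromises compound.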

There is no decision that can be made without a cost: every element of safety or flexibility that looks good in a product has a price in performance and development time. Of course, safety may return rewards in time saved elsewhere, but it is not achieved for free. The point is that the developer of the product must put in the effort so that users may reap the reward: a one-to-many relationship that ultimately yields net cost benefits. The performance, however, can rarely be recovered at all: the safe, general-purpose routine is almost inevitably slower than the specialized one, and as product services are employed by developers these penalties are multiplied again and again as they ripple up through the entire system.

In the end, either faster hardware is required (not an option for consoles or embedded systems), or more time must be invested in various optimizations at both the high and low level, to recover some of the lost performance.

Critically, for a product to have an extended lifetime, it must meet a higher standard: the longer you want to use a product, the more effort must go into it. This effort is typically overlooked because it's regarded as a routine part of maintenance. Ironically (or perhaps obviously), designing and building for low maintenance is a time-consuming task that increases the cost of the product. Once you understand this you can appreciate that there's no such thing as free low-maintenance code: maintenance is not only, in many cases, an ongoing consequence of incomplete productization - it is ongoing productization.

Put another way, low-maintenance code is effectively closer to being a product than high maintenance code.

It would be something like a breach of the first law of thermodynamics to magically produce productized code for no additional cost. In programming, as in physics, nothing is produced without effort.

If we accept that making product-quality code is never free, we can see that making low-maintenance code can never be free either, though this is rarely appreciated. Far too often the dogma is repeated that low-maintenance code is cheaper to write, and yet this is clearly false. The undeniable speed and productivity of hacky cowboy coding should confirm this: high-maintenance, single-purpose, discardable code is cheaper to produce ... though it may not be cheaper to own.

It should be obvious that even a little wisely placed productization can realize some major benefits, but this doesn't mean that all code needs to be product quality.

While for the Windows developer this is a minor issue (new hardware will always appear), for the game or embedded-systems developer it is a serious matter. Competition on a console platform takes place on a level playing field, and specialized, single-platform (perhaps even single-game) engines typically deliver better user experiences. Similarly, embedded systems software targets fixed hardware, and engineering revisions are unlikely to be made to resolve performance issues: new hardware would incur many (and possibly ongoing) costs, so invariably a software solution is sought. Hardware specifications are typically driven by high-level management objectives for the target price of a system, and that hardware will likely be locked in long before a firmware programmer gets anywhere near a prototype.

In games, we see this reflected in multi-platform products that chase sales across a range of platforms and usually perform poorly, competing with high-performance products intended to run on only a single platform, usually with a custom-built or highly customized supporting engine. In embedded systems, we see sluggish menus and equipment with long start-up times that does not run as quickly as it could (or should). Whether the product is a camera, a mass spectrometer or a machine tool, the end user may get less value from their purchase.

A comparison of Unreal Tournament with Final Fantasy X-2 serves here: UT targets multiple platforms, massive time and effort has been invested in its engine development, and yet it performs underwhelmingly on some platforms (or performed, as those platforms are effectively defunct) and doesn't support some other platforms at all. The underlying Final Fantasy X-2 engine runs only on the PS2, but is a highly evolved single-platform engine customized further for a single game. Both products have been used as the basis for games that made money, but they employ very different strategies.

The Cost of Reuse is Much Higher than Expected

Ignoring issues of performance for the moment, the main cost of developing reusable code is financial, yet code reuse is normally perceived as a saving; this is the crux of the issue. Both developers and managers are inclined to underestimate the true cost of making product-quality code.

In The Mythical Man-Month, Fred Brooks gave some approximate numbers (alas, I don't have the book to hand, so I can't check); as I recall, he suggested that the cost of a programming systems product was (and presumably still is) around nine times that of the naive prototype code: a factor of three for productization, and another factor of three for systems integration.

To some extent, this 'cost of reuse' is probably one of the factors behind schedule slippage and programmers' underestimation of time for tasks: the programmer is inclined to estimate the time to write something more like a working prototype, not a fully designed, documented, tested and refined product.

While the cost of making a product-quality, multi-platform version of a trivial component such as vector normalize may well be less than thirty times the cost of the naive normalize, once we add in all the meetings, arguments, usage issues, bugs, platform variations, workarounds, refactors (to conform with other design changes and optimizations) and other shenanigans, it isn't hard to reach a thirty-fold cost multiplier at all (regardless of what Fred Brooks had to say).

And we still haven't factored in performance costs...

If We Believe The Theory

If we are prepared to believe that creating genuinely reusable code is extremely expensive, then we should be able to draw some conclusions and see how they match up to reality, or at least our experiences.

The first easy conclusion is that game engines should suffer badly from the cost of productization: producing a product-quality, multi-platform game engine is far more expensive than estimated, and its performance can be expected to fall below that of more specialized engines.

That the program to develop a grand and encompassing multi-purpose, multi-platform engine is typically the death knell of any developer that undertakes it seems to confirm this theory, at least anecdotally. The exceptions are all companies that actually managed to sell their engine to others to recover development costs.

We should also expect to see that it's possible to produce a single-purpose, single-game engine, largely from scratch, and get a product to market that performs adequately on a single platform. I observe that this has been true in the past, and will probably be true in the future. While the features required of an engine have increased, the number and quality of low-level products available to support an engine, particularly on a single platform, continue to increase.

We should expect to see that 'code from the internet' rarely survives intact unless it is of product standard. That is to say, much code that is downloaded and incorporated will be extensively modified: it's a free prototype, but it's not a free product. While some libraries are highly focussed, and are effectively good products, many others are not - despite the massive programming effort that goes into many open-source developments.

The open-source process helps to make product quality code when the process is genuinely diversified amongst many developers, but when the open source development is largely the work of very few, the cost of absorbing the code is likely to be much higher. In the case of pseudo-code in academic papers, or nVidia slides, etc, you can expect the cost to be as high as writing from scratch.


It's largely in the hands of the reader to determine for themselves whether product-quality code really is an order of magnitude (perhaps two) more expensive than use-once-and-discard code.

However, if this 'extent of cost' is really as large as suggested, then how does this relate to various commonly accepted 'truths' of game development? (And, by corollary, many other kinds of software development.) Make no mistake, these are things that most developers accept to be self-evident or simply obvious. The more subtle implications are rarely considered.

Alleged 'Truths'

  • A multi-platform engine is cheaper than three or four unique engines with fairly common application interfaces.
  • It's better to produce one good, ongoing engine, that is continually refined and improved for all platforms than to produce code per product that is largely discarded on completion.
  • Middleware is the answer, and you should develop as little engine technology as possible.
  • The above 'truths' form a theoretical argument in support of prototyping as a general methodology.

Yes, I know that last one is an obvious logical fallacy when you see it laid out like this, but that doesn't stop people arguing that there is some kind of linkage between middleware and prototyping, simply because you can use middleware for prototyping. Which is not to say I'm against prototyping ... see below.

Let's Examine the 'Truths'

In the first case, the multi-platform engine has to cost less than three or four times as much as each cowboy engine, and yet deliver the same level of performance across all platforms. Actually, if you intend to reuse the engine, then you might write off some of this cost, but that takes us to point two.

If you can spread the cost of engine development over several products, there appears to be an immediate win, but this has to take into account the significant increase in cost as the quality of the product is required to increase. Remember, for a product to have an extended lifetime it must reach higher standards. Some engines have achieved that sort of lifetime, but they are few in number. Even the more successful and long lived engines have undergone considerable and expensive revisions. We really can't pretend that the original Quake engine is the same code as found in the Doom III engine, though they may have a few low-level product quality components in common. By the time your first product hits the market there are bound to be parts of your engine that are already obsolete in some respect.

It should be clear that the cost benefits of a multi-platform, high-quality engine product are not as obvious as might first appear, and perhaps the benefits are quite closely balanced against the alternatives unless you can sell your product to recover development costs.

As far as middleware goes, we can see that it's largely a good idea, as long as the product is of sufficient quality and we can accept the performance limitations.

Unfortunately, the amount of high-quality middleware is low, and there's no obvious sign of forthcoming new products that might change this evaluation. Rather, for 'political' reasons RenderWare has diminished substantially as an option since the previous console generation, and arguably nothing has appeared to replace it (unless you count Unity). Some developers find themselves forced into using Unreal when it isn't really a good fit for their product.

It can be seen that prototype code typifies the non-product, no-reuse approach. Historically in games prototypes have often evolved into finished products. These products have often succeeded in their first incarnation. The problems with them have only emerged when somebody (incorrectly) assumed that 'finished prototype' code could be reused to make a sequel game, etc.

If you write disposable code you need to be sure you dispose of it in a timely manner. If you cling on to it after its best-before date expires, you can expect stale and nasty-tasting products.

In my personal experience, it is tempting for management to assume that all existing code is reusable code, despite protests from programmers, or warnings that certain code is not particularly suitable or helpful. When this is used as an excuse to deny necessary development resources, or to massage schedules, disaster inevitably ensues.

Proper analysis and recognition of when existing code is appropriate for reuse is essential; even when time is allowed for further development and productization of existing code, it can push development in inappropriate and time-consuming directions and do untold damage to morale.

The Tomb Raider series springs to mind as an example; it cannot have been easy to add new features to the existing PSone-era code base, so a decision was apparently taken to attempt a product-quality rewrite (Angel of Darkness) with lots of new features. It foundered in time and cost overruns, presumably because the engine was engineered for reuse (longevity) rather than rapid development, leaving too little time to develop the gameplay aspects properly.

We could also describe this as a case of sequelitis, but such speculations are probably worthy of their own article, and when considered too deeply they lose their power as simple examples.


  • I don't mean to say that creating productized code is never cost effective, but rather that it is less frequently cost effective than widely imagined.
  • Clearly, there are cases where reusing code has obvious benefits; however, those cases may not be obvious in themselves.
  • If prototyping typifies non-product, non-reusable code, then prototyping is as relevant to engine and tool development as it is to other parts of the process. Put another way, prototyping is not just for gameplay.
  • You should resist the temptation to reuse code that is not really fit for reuse.
  • Proper analysis and recognition of when existing code is appropriate for reuse is essential.
  • A small amount of well-placed productization can deliver substantial benefits, but once the low-hanging fruit are plucked, productization becomes increasingly likely to cost time and money - rather than creating savings - unless you can sell the immediate products themselves to recoup costs.
  • The cheapest productized code is code you didn't have to write internally; as long as it is of sufficiently high quality and fits your needs well, you are very likely to 'win' by using it.