tl;dr Cognitive science can give us insight into our own limitations when judging why others write bad code.
“This code is awful!” We've all been there: you start work somewhere and find substandard code written by someone no longer on the project. Run a search on Twitter for that phrase and you'll see so many angry software developers complaining about the quality of the code they've inherited that you'll wonder how anything ever works. It's a fact of life that on most projects software suppliers will change. Projects are outsourced, brought in-house, staff members change, organisations are restructured, business plans are refocussed. Suppliers change, and when they do you'll invariably hear the refrain “Those guys were terrible, look how they've implemented x, y, z” from the new incumbents.
Now, there is undoubtedly code out there that is ‘bad’ in the sense that it is defective and doesn't work as intended. Equally, there are people who trash-talk the work of previous delivery teams in order to make themselves look heroic as they fix the ‘problems’. But outside of these extremes, what if many of the criticisms we hear come from the perfectly normal, if irrational, tendency that behavioural psychologists call the “Fundamental Attribution Error”? Sounds crazy, right?
It turns out that the Fundamental Attribution Error is a well-established cognitive bias, and it is inherent in us all. Put simply, we all have a tendency to attribute shortcomings in others' behaviour to their personality and limitations (their disposition), and shortcomings in our own behaviour to circumstance (our situation). For instance, you might conclude that the previous delivery team was poor or inexperienced because they failed to refactor some duplicated code, and in a perfect world you'd probably be right. But far from the team being poor, the circumstances of the delivery may have been compromised in some way: last-minute requirement changes, illness, or technical difficulties combined with a very tight deadline may have meant that refactoring code that added no additional business value was not an efficient use of the delivery team's time. Under circumstances that you will never have access to or be able to relive, that code may well have been the very best that could be delivered.
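As a purely hypothetical illustration (the function names and scenario here are invented, not drawn from any real project), the kind of duplication that triggers this judgement often looks something like this: two copy-pasted functions that are trivially mergeable, yet both correct and both quietly earning their keep in production.

```python
# Hypothetical example: two near-identical functions a team under a
# tight deadline might ship rather than refactor. Both are correct
# and deliver business value despite the duplication.

def invoice_total(items):
    """Sum (price, quantity) line items for an invoice."""
    total = 0.0
    for price, quantity in items:
        total += price * quantity
    return total

def quote_total(items):
    """Sum (price, quantity) line items for a quote.

    A copy-paste of invoice_total -- the obvious refactoring target
    a new team spots on day one.
    """
    total = 0.0
    for price, quantity in items:
        total += price * quantity
    return total
```

Extracting a shared helper here would take minutes, which is exactly why it reads as laziness to the new team. But if the deadline pressure was real and the duplication was harmless, leaving it may have been a rational trade-off rather than evidence of incompetence.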
The problem is often compounded by naive development teams who take a Golden Hammer approach to problems. Seeing this ‘bad’ or ‘poor’ code, they focus on fixing it, often blind to the fact that the ‘bad’ code is sitting in production delivering business value. By preferring to spend thousands of dollars of the client's money fixing a ‘problem’ that will produce no corresponding increase in revenue, they have the dual effect of lowering the client's confidence in their chosen technology while actually causing the client to lose money.
Like an optical illusion, we can't prevent ourselves from falling for the cognitive illusion of the Fundamental Attribution Error: after millions of years of evolution our subconscious is simply hard-wired to think this way, and we do it hundreds of times each week, whether we're watching the news, travelling on the train, or going out for a drink with friends. Thankfully, through developments in cognitive science over the past 40 years, we are now aware of the Fundamental Attribution Error and can at least attempt to spot when we have fallen into this cognitive trap. Knowing this, when you next look at some ‘bad’ code you've inherited, perhaps it says more about the context in which it was written, and less about the quality of the previous delivery team, than we'd like to admit? Maybe acknowledging this is a sign of a good developer?