Broken Windows

Earlier this year, I started a new job. One of the reasons I picked it was that I would get to evaluate whether the existing PHP codebase was up to snuff, and decide what we were going to do about it. The project owners were developers, but they weren’t PHP developers, and the codebase had been worked on by contractors for many years. It was a situation of “It works, but we can’t tell you how good it is.” There was a feeling that it would not be a good codebase to continue with (framework-wise, not language-wise).

The last major overhaul had seen the project move over to what I will term a “home-grown framework,” one that is not in wide use. It was old and the original developer had moved on, so the framework was effectively unsupported. Internally the framework just kind of stood there like an old factory that, while not falling over, had not really been cared for.

There were no unit tests. Nothing was really automated. API output was inconsistent. Coding standards were expected but not documented or enforced. Documentation was nearly non-existent or out of date. There were a lot of broken windows.

If you aren’t familiar with the Broken Window Theory, here it is:

Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it’s unoccupied, perhaps become squatters or light fires inside.

Or consider a pavement. Some litter accumulates. Soon, more litter accumulates. Eventually, people start leaving bags of refuse from take-out restaurants there or even break into cars.

- James Q. Wilson and George L. Kelling, The Atlantic Monthly, March 1982

The longer these “broken windows” stay around, the harder they are going to be to fix, and the more likely developers are to let them stay broken. Have you ever tried to add unit tests to a legacy application? The older the app, the harder it is.

Now, I have full authority to just say, “Screw it, we’re starting over fresh” and greenfield the project. I’m not going to, though. There’s a lot of history in this not-so-great code, a lot of bugs in the business logic that have already been fixed. I’m sure some of the bad decisions are there because they have to be, because not all code works perfectly when integrating with external services. If I rebuild the app, we’ll just make those same mistakes again.

So I’m taking the initiative to fix those broken windows. It’s a bit more work, but in the end it will be better. What am I doing, and what can you do to fix the broken windows?

Establish Standards and Expectations

The first thing to do is establish some ground rules around standards. I’m laying out what our code workflow will be, so there’s no ambiguity about how an issue goes from being reported to landing in our master branch.

I’m adopting standards that are already well established, like PSR-2, and starting to enforce them on new code. PHP CodeSniffer ships with a PSR-2 standard, so I’ve written pre-commit hooks to alert developers when their code might conflict with PSR-2. We mostly follow it now anyway, but it’s good to come right out and say, “Hey, this is our coding standard, and here are tools that tell you when your code needs some love.”
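As a rough sketch, the pre-commit hook can be a small PHP script dropped into .git/hooks/pre-commit. The paths and messages below are assumptions (a Composer-installed phpcs under vendor/bin), not a description of our exact hook:

    #!/usr/bin/env php
    <?php
    // .git/hooks/pre-commit: run PHP_CodeSniffer's PSR2 standard against
    // whatever PHP files are staged for this commit.

    exec('git diff --cached --name-only --diff-filter=ACM', $staged);

    $phpFiles = array_filter($staged, function ($file) {
        return substr($file, -4) === '.php' && file_exists($file);
    });

    if (empty($phpFiles)) {
        exit(0); // Nothing to check.
    }

    $command = 'vendor/bin/phpcs --standard=PSR2 '
        . implode(' ', array_map('escapeshellarg', $phpFiles));

    passthru($command, $exitCode);

    if ($exitCode !== 0) {
        echo "Commit blocked: the files above need some PSR-2 love.\n";
        exit(1);
    }

    exit(0);

Make the hook executable (chmod +x .git/hooks/pre-commit) and git will run it before every commit; a non-zero exit stops the commit.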

I’m also encouraging the idea of Object Calisthenics. We’re going to be growing our team, so we can’t work like lone cowboy coders anymore. Cleaner, easier-to-read code helps everyone.
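To give one concrete (and hypothetical) example of what that looks like in practice, here is the “wrap all primitives” rule: instead of passing a bare email string around, a tiny value object names the concept and validates it once. The class below is mine, not something from our codebase:

    <?php

    final class EmailAddress
    {
        private $address;

        public function __construct($address)
        {
            // Validate once, at the boundary, instead of everywhere the string travels.
            if (filter_var($address, FILTER_VALIDATE_EMAIL) === false) {
                throw new InvalidArgumentException("Not a valid email address: $address");
            }
            $this->address = $address;
        }

        public function __toString()
        {
            return $this->address;
        }
    }

    // Anything type-hinted against EmailAddress can trust the value it receives.
    $recipient = new EmailAddress('user@example.com');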

Documentation

I’m a writer, yet I hate writing documentation. It’s now simply a rule that new features need corresponding documentation. If we change how an endpoint behaves, we need to make sure the PR includes the documentation update.

If a new feature is added, it gets documented. Inputs, outputs, and expectations will be laid out.

PSR-5, while not accepted yet, is a good starting point for our code documentation. Again, this is something we do already, but it makes sense to write down what we follow.
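As a sketch of what that looks like, here is a made-up endpoint handler with a PHPDoc block along the lines of the PSR-5 draft. The class, parameter, and exception choices are illustrative, not lifted from our API; the point is that inputs, outputs, and failure modes live right next to the code:

    <?php

    class WidgetController
    {
        /**
         * Returns a single widget as an array for the framework to JSON-encode.
         *
         * @param int $id The widget ID taken from the request path.
         *
         * @return array The widget data.
         *
         * @throws RuntimeException When no widget exists for the given ID.
         */
        public function show($id)
        {
            if ((int) $id <= 0) {
                throw new RuntimeException("No widget found for ID $id");
            }

            // Placeholder body; a real handler would load the widget from storage.
            return ['id' => (int) $id];
        }
    }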

Testing

This one is going to suck, but new code will have to come with unit tests. I fully understand that the legacy code will be hard to unit test, and I’m not expecting us to get to 100% code coverage any time soon. I also know that some bug fixes will be very, very hard to unit test. New code, though, is expected to have unit tests. If something can’t be tested, we document in the code why it isn’t unit tested so that we can find it later, then write a unit test anyway and mark it skipped. The idea is that when we can better unit test the surrounding code, we’ll revisit those skipped tests and make them work.
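Here is a minimal sketch of the “document it, then skip it” idea with PHPUnit. The class under test is hypothetical, and this assumes a PHPUnit version with the namespaced TestCase:

    <?php

    use PHPUnit\Framework\TestCase;

    class LegacyInvoiceMailerTest extends TestCase
    {
        public function testSendsInvoiceEmail()
        {
            // This code path opens its own SMTP connection deep inside the legacy
            // framework, so it can't be isolated yet. Skipping keeps the test on
            // the books so we revisit it once the mailer can be injected.
            $this->markTestSkipped(
                'LegacyInvoiceMailer creates its own SMTP connection; revisit once it is injectable.'
            );
        }
    }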

As with CodeSniffer, I’ll be supplying pre-commit and pre-push hooks to make sure that, as we build up tests, they are run automatically.
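The pre-push hook can be a sketch in the same spirit as the pre-commit one; again, the vendor/bin path is an assumption based on a Composer-installed PHPUnit:

    #!/usr/bin/env php
    <?php
    // .git/hooks/pre-push: run the whole test suite before anything leaves
    // the machine. A non-zero exit code from PHPUnit blocks the push.

    passthru('vendor/bin/phpunit', $exitCode);

    if ($exitCode !== 0) {
        echo "Push blocked: the test suite is failing.\n";
        exit(1);
    }

    exit(0);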

Code Reviews

This is another hard one. No one likes to have their code critiqued, but on a team, code reviews are a healthy way to find potential problems before they make it to production. All of the automated tools in the world can’t find logic or workflow issues, and no matter how well you unit test something, you’ll never find all of the weird things that users will try to do. GitHub and GitLab both have excellent peer review tools baked into their pull request systems.

Follow Through

I proposed many of these ideas to my coworker, who was receptive but met me with, “Yeah, I’ve heard that before.” It’s one thing to say you are going to start doing these things, and a whole different thing to actually do it. And commit to it. Push back when things aren’t followed.

In the immortal words of Shia LaBeouf, “DO IT.”


There are plenty of things that we as developers can do to help fix our broken windows. It might not be as glamorous as burning down the house and rebuilding it from scratch, but at the end of the day, you’ll have a better codebase.

And that’s all I really want for Christmas.