I started my professional career with PHP in the year 2000. Professional in the sense that I got paid to program, not so much because of my knowledge of PHP. I wasn’t a newbie, though, or at least I didn’t feel like one. At this point, I had accumulated around ten years of experience programming Pascal, C, x86 assembler, and Perl. So when I showed up for this interview for what I thought was a Perl gig and was asked if I could write PHP as well, the cocky young version of me said, “Sure.”
I jumped in at the deep end, facing PHP code that was written the way some people assume it still is written nowadays: big files of markup mixed with logic and potentially exploitable SQL queries. No vendors – the team wrote every bit of code in plain PHP. At this job, there was no VCS or automated tests, and releases were deployed via FTP. Looking back at these cowboy coding days, I wonder how we got away with this. Everything worked out, but I still can’t believe how poor practices and standards were back then. The bar wasn’t just low; it lay flat on the ground.
Of course, it’s easy to frown upon all this in retrospect, but even considering it was a different time, much of what we were doing was subpar. I certainly didn’t bring much to the table back then. My years of experience weren’t exactly worthless, but being able to write code is just a small part of what is required to deliver software professionally. So even at that job, I learned a lot. With every mistake, every project, and every new job, the picture of what makes a good project became clearer, slowly raising the bar higher and higher.
I appreciate all the lessons people taught me that led to the practices I now consider crucial and non-negotiable for creating and maintaining projects. One lesson I learned way too late is that it’s worth sharing your knowledge, even if you think it’s nothing unique or original. In the worst case, you tell people something they already know. But there’s a chance you share something that makes a difference to someone. For this reason, I’m sharing an overview of the things I wouldn’t want to miss: they make my work life easier and help me keep my sanity.
Version Control

I doubt there’s anyone out there not using a version control system, but I’ll name it anyway. There is, however, a difference between using a VCS and using it properly. That’s a whole topic of its own, so to keep it short, I’ll only bring up the two things I find most relevant:
- Make atomic commits. They’re much easier to work with, and they make it clear why a particular change was made. Resist the urge to stuff unrelated changes into a commit just because you discovered something along the way. Make those changes separately, maybe even on the main line, and rebase your work on top of it. Keep it clean.
- Learn how your VCS works and how to operate it. You use it every day, so you should have a firm grasp of how it works. If you feel like it is this impenetrable thing that works in mysterious ways, you’re in a bad position. I’m not suggesting you have to be an expert who knows every little detail and niche functionality. The basic concept, the commands you use every day, and understanding what to do when things go wrong will do. Reading Pro Git was an eye-opener for me, and I regret not having read it sooner.
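The atomic-commit workflow from the first point can be sketched with plain git commands; the repository, file names, and branch names are all made up for illustration:

```shell
# Create a throwaway repository to demonstrate the workflow.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name Dev

echo "core logic" > app.php
git add app.php
git commit -qm "Add core logic"

# Start a feature branch for one self-contained change.
git checkout -qb feature/report
echo "report" > report.php
git add report.php
git commit -qm "Add report generation"

# Mid-feature we spot an unrelated typo: commit it on main, not here.
git checkout -q main
echo "core logic, fixed" > app.php
git commit -qam "Fix typo in core logic"

# Rebase the feature on top of main so every commit stays atomic.
git checkout -q feature/report
git rebase -q main
git log --oneline
```

Each commit now carries exactly one change, and `git log` reads like a changelog rather than a diary.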
Uniform Code Style
Some things are better left to the computer. For example, I prefer not to deal with code formatting. What’s even more beautiful than a uniform-looking codebase is the absence of unrelated formatting changes in commits, which distract from the actual change. Don’t waste time resolving conflicts that exist only because two changes used different code styles. Instead, agree with your team on one style, configure PHP Coding Standards Fixer accordingly, and live happily ever after.
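A starting point for such a configuration, assuming PHP CS Fixer v3; the rule selection and paths are illustrative, not a recommendation from this article:

```php
<?php
// .php-cs-fixer.dist.php — checked into the VCS so the whole team
// shares one style. Rule names follow PHP CS Fixer's documented sets.

$finder = PhpCsFixer\Finder::create()
    ->in(__DIR__ . '/src')
    ->in(__DIR__ . '/tests');

return (new PhpCsFixer\Config())
    ->setRules([
        '@PSR12' => true,                        // baseline style
        'array_syntax' => ['syntax' => 'short'], // [] instead of array()
        'ordered_imports' => true,               // sorted use statements
    ])
    ->setFinder($finder);
```

Running `vendor/bin/php-cs-fixer fix` once then reformats everything in a single dedicated commit, keeping later diffs free of style noise.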
Static Analysis

Another thing computers are better at than humans is catching stupid or sloppy mistakes. Static analysis tools like Psalm and PHPStan have become staples in my setup since I first discovered them. I love how picky and precise I’ve become about typing my code, just from being slapped in the face by these tools so many times. I don’t even trust my own tests, so I let Infection mutate the code to figure out what they’ve missed.
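For PHPStan, a minimal configuration might look like this; the strictness level and paths are illustrative:

```neon
# phpstan.neon — keys follow PHPStan's documented configuration schema.
parameters:
    level: 8        # from 0 (loose) up to the strictest documented level
    paths:
        - src
        - tests
```

Starting strict on a fresh project is far cheaper than ratcheting the level up later.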
Automated Deployment

Unless you’re dealing with the rare project that goes straight from development to the bin, you will need to deploy it at some point. Since you have to do it anyway, why not do it first? I got this idea from Growing Object-Oriented Software, Guided by Tests, and it has become the first thing I do when setting up a new project. It is far less stressful to start with a tiny deployment pipeline for a blank project and extend it as I go than to build the whole thing in one big step later in the project. Although automated deployment via the pipeline should be the default, I also plan a way for team members to deploy manually from their workstations.
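As a sketch, the tiny pipeline for a blank project could start out like this. GitHub Actions is an assumption on my part, and the rsync target and host are placeholders:

```yaml
# .github/workflows/deploy.yml — deploy on every push to main.
# (SSH key setup for the target host is omitted for brevity.)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: composer install --no-dev --optimize-autoloader
      - run: rsync -az --delete ./ deploy@example.com:/var/www/app/
```

Wrapping that same rsync call in a script or make target gives team members the manual fallback from their workstations.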
Easy Setup

When I want to work on a project, I should be able to get it into a usable state in no time, ideally by running a single script or make target. I’m not only talking about a usable state from a technical perspective; the system should also contain sensible data, such as user accounts, to illustrate the existing use cases. I’ve found it best for this data to be stable and to evolve only with new features or when previously uncovered cases come to light. That way, everyone knows what to expect when they log into the system.
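One possible shape for such an entry point is a make target; the console commands below assume a Symfony-style application with Doctrine fixtures and are purely illustrative:

```make
# make setup: from a fresh checkout to a usable system with fixture data.
.PHONY: setup
setup:
	composer install
	bin/console doctrine:database:create --if-not-exists
	bin/console doctrine:migrations:migrate --no-interaction
	bin/console doctrine:fixtures:load --no-interaction
```

The fixtures loaded in the last step are the stable, illustrative data mentioned above, versioned right next to the code.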
Executable Specifications

Having a system you can immediately work with is nice, but it’s no fun if you first have to figure out how it’s supposed to work. Reading the source code and unit tests is one way to get there, but it’s tedious. Also, when talking to your client, you must constantly translate between two worlds. Documentation written in tickets, wikis, or other documents, on the other hand, is easier for non-technical people to digest and discuss, but in my experience it’s more prone to error: people forget to transfer changes, or the implementation diverges from the specs for unknown reasons.
To bring these two worlds together, it’s best to document a system along with the code in the VCS. I like Gherkin for its simple structure that makes sense to both technical and non-technical people. Illustrating features with examples is a great way to ensure everyone is on the same page. In conjunction with Behat, you get executable specifications that prove the system behaves as described. Writing the specification for a new feature first gives the team a definition of done, preventing gold plating.
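A feature file in that style might look like this sketch; the registration domain is invented for illustration:

```gherkin
Feature: User registration
  In order to use the system
  As a visitor
  I want to create an account

  Scenario: Registering with an unused email address
    Given no account exists for "anna@example.com"
    When I register with the email address "anna@example.com"
    Then an account for "anna@example.com" should exist
    And a confirmation email should be sent to "anna@example.com"
```

Behat maps each step to a PHP method, so the same file a client can read doubles as an automated acceptance test.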
Fast Test Feedback

Since we’re on the topic of testing, I want to emphasize the importance of rapid test feedback. Slow-running tests interrupt your development flow and slow down the deployment process. While unit tests should be blazing fast anyway, I expect a similar pace from acceptance tests. Depending on the size of the project, this can be a challenge, and it usually only works if the slow parts can easily be substituted. If an application is coupled too tightly to concrete, slow technology, testing time will grow significantly with each new feature, making the feedback loop slower and slower. Therefore, it’s better to keep an eye on this, or to have a different strategy in place, right from the start.
Decouple All The Things
Have you ever used a piece of technology, a library, or a framework that got you into trouble? Maybe a library that was abandoned, a nasty BC break, or simply the need to replace one thing with another? I’ve been there more than once. Instead of introducing proper abstractions, my former self was lazy, thought there would never be a need to replace anything, or didn’t want to “over-engineer.” It took weeks of refactoring sessions to pay off that technical debt. The ecosystem has caught up, and nowadays we have fantastic tools like Rector to automate such migrations. That helps when the damage is already done; the real lesson is to decouple dependencies from the core application by using abstractions and pushing the concrete implementations down into a small infrastructure layer. If this is Greek to you, I highly recommend Alistair Cockburn’s paper on Hexagonal Architecture. If you want something more PHP-related, there are Matthias Noback’s books Advanced Web Application Architecture and Recipes for Decoupling.
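A minimal sketch of that split; every class name here is invented for illustration, and the vendor-facing adapter is left empty:

```php
<?php
// Port: the core application only ever sees this abstraction.
interface Mailer
{
    public function send(string $to, string $subject, string $body): void;
}

// Core service: depends on the port, never on a vendor library.
final class WelcomeService
{
    public function __construct(private Mailer $mailer)
    {
    }

    public function welcome(string $email): void
    {
        $this->mailer->send($email, 'Welcome!', 'Thanks for signing up.');
    }
}

// Infrastructure layer: the one place a concrete library would appear.
// Replacing the vendor means rewriting only this adapter.
final class VendorMailer implements Mailer
{
    public function send(string $to, string $subject, string $body): void
    {
        // wrap the vendor's API here
    }
}

// Test double: keeps acceptance tests fast, as the previous section urges.
final class InMemoryMailer implements Mailer
{
    /** @var list<array{to: string, subject: string}> */
    public array $sent = [];

    public function send(string $to, string $subject, string $body): void
    {
        $this->sent[] = ['to' => $to, 'subject' => $subject];
    }
}
```

Swapping `VendorMailer` for `InMemoryMailer` in tests is a one-line change, which is exactly what keeps the feedback loop fast.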