What We All Do

As developers, most of what we do is probably pretty similar. We work on tickets or stories or some other tracked and recorded issue. This may be in a tool like Jira or Rally or Pivotal, or it could be tracked in an Excel spreadsheet or even via email. In some cases, it's just a Post-it note or a couple of lines in a notebook that we keep so we don't lose track of what we're asked to do. Either way, someone is asking us to make changes to the software (it could even be us).

After that comes the process of actually writing or modifying the code. For me, and for many developers I know, it's roughly this: create a branch, make the changes, commit the code (one or more times), and push up the branch as a pull request. Then there's the waiting for feedback, making changes based on that feedback, and finally the code is merged back into the main branch.

There are more details, I'm sure, but it's probably not too dissimilar from what you might be used to. For us, the feedback or code review is a critical part of the process. If the code doesn't get approvals from the right people, or from enough people, it never moves on to the merge. We have a server that knows about all the different projects and repositories as well as the team's rules for how many approvals each needs and from whom. This server also does a lot more for us that I'll get into later, but it didn't start with all of that.

How It Started

In the long long ago, but at my current company (I've been here eight years, which is close to an eternity for many developers I know), we worked with code in SVN. Branching was easy, but merging was a nightmare, so mostly people just worked in the main branch, and when a release happened, someone would copy the files into a release branch and away we'd go. Shortly after I started, I converted us to Mercurial. Mercurial, if you're not familiar with it, is quite Git-like, but at the time its tooling was much better for Windows developers than Git's. Funnily enough, Git and Mercurial both began life within a week or two of each other.

Our process then was that each developer would "fork" the repository and do their work. When the code was ready, they'd let the QA team know; when QA was ready, they'd let me know, and I'd pull together and merge all the code they'd asked for. If a merge failed due to a conflict, I'd let the developer know, they would merge in "default" (the "master" branch of a Mercurial repository), and we'd try again. I was the human integrator. I got to be pretty good at it, and out of the hundreds or thousands of merges I did manually that way, I only made a small number of mistakes. But even those were too many. So I set about creating a set of PHP shell scripts to speed up the commands I was typing multiple times every day. Think of them as fancy macros: they would take the list of repositories and branches that QA wanted integrated and spit out the commands I needed to copy, paste, and run.
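That "fancy macro" stage can be sketched roughly like this. This is a hypothetical reconstruction, not the real tool; the function name and commit message are my own invention, and the Mercurial invocations are just the obvious pull/update/merge/commit sequence:

```php
<?php
// Hypothetical sketch: given the list of repositories and branches QA wants
// integrated, emit the Mercurial commands to copy, paste, and run by hand.
function buildMergeCommands(array $requests): array
{
    $commands = [];
    foreach ($requests as [$repo, $branch]) {
        $commands[] = sprintf('hg -R %s pull', escapeshellarg($repo));
        $commands[] = sprintf('hg -R %s update default', escapeshellarg($repo));
        $commands[] = sprintf('hg -R %s merge %s', escapeshellarg($repo), escapeshellarg($branch));
        $commands[] = sprintf(
            'hg -R %s commit -m %s',
            escapeshellarg($repo),
            escapeshellarg("Merged $branch into default")
        );
    }
    return $commands;
}

// Example: QA asks for feature-123 to be integrated into two repositories.
foreach (buildMergeCommands([['web', 'feature-123'], ['db', 'feature-123']]) as $cmd) {
    echo $cmd, PHP_EOL;
}
```

The point is only that the tool produced commands for a human to run; nothing executed automatically yet.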

After doing this a bit and smoothing out the rough edges, I updated the code so that, given the same input, it would run the commands itself. Of course, error handling was still manual. Sometimes a merge simply failed; sometimes we needed multi-repo merges to happen as a transaction (think web code that relies on database changes living in another repository), and if the merge on a later repository failed, I needed to undo the merges I'd already done. After doing this enough times, I found the patterns and wrote code to automate the various cleanup and undo activities. Of course, I was still a human conditional statement: I had to figure out which sort of failure I had run into and which fix to run.
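The transactional shape of those multi-repo merges can be sketched as follows. This is an illustration under assumed names, not the actual script: the callables stand in for shelling out to Mercurial, and on the first failure the merges already completed are undone in reverse order, like unwinding a transaction:

```php
<?php
// Sketch of a transactional multi-repo merge: attempt each repository's merge
// in order; on the first failure, undo the merges already done, newest first.
function mergeAll(array $repos, callable $merge, callable $undo): bool
{
    $done = [];
    foreach ($repos as $repo) {
        if (!$merge($repo)) {
            // Unwind the "transaction" before reporting failure.
            foreach (array_reverse($done) as $merged) {
                $undo($merged);
            }
            return false;
        }
        $done[] = $repo;
    }
    return true;
}

// Example: the "db" merge fails, so the already-merged "web" repo is undone.
$undone = [];
$ok = mergeAll(
    ['web', 'db'],
    fn (string $repo): bool => $repo !== 'db',                     // merge result
    function (string $repo) use (&$undone): void { $undone[] = $repo; } // cleanup
);
```

The real cleanup would be something like reverting to the pre-merge revision; the structure, attempt-then-unwind, is what matters.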

As I'm sure you can guess, these scripts eventually got combined, and the code itself could determine which sort of failure it had run into and how to clean it up; still, all of this was executed manually based on input from the QA team. After running this way for a couple of weeks without needing to make changes, I built a UI for it. Instead of asking me to do the merges, the QA team could use the UI to make their own requests, and the code would execute them and either report that everything had completed successfully or explain what had gone awry and what needed to be done to resolve it.

Going Further

At one of the many conferences I attended, I saw Jeff Carouth speak about some of the automation he and his team had. They used emojis in the review comments on GitHub to indicate a pull request's readiness to be merged. I forget whether it merged automatically or not, but I remember learning that the developers on that team could request an environment for testing a pull request and it would be created automatically. I wanted that.

Sometime around this point, we moved from an internally hosted repository to Bitbucket. There are certain things you give up when you move from hosting your own code to using a service. One of them is that the repositories are no longer part of your network, which means events that happen on a repository cannot send requests, or "webhooks," to servers inside your internal network.

What I wanted, what I needed, was a server that would straddle the edge: publicly accessible, but also able to make API calls to servers on our internal network. This would allow an external service like Jira, Bitbucket, or Slack to make a request and have something on our network respond to it. That could be Jenkins kicking off a build, a message going to a Slack channel, a text message being sent, or any number of things.
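At its core, the edge server's job is a translation: an event from an external service comes in, and a call goes out to something internal. A minimal sketch, with invented event names and internal hostnames (not our real configuration):

```php
<?php
// Sketch of the edge server's routing: map an (external service, event) pair
// to an internal endpoint to call. All names here are illustrative.
function routeWebhook(string $source, string $event): ?string
{
    $routes = [
        'bitbucket' => [
            'pullrequest:created' => 'http://jenkins.internal/job/pr-build/build',
            'pullrequest:comment_created' => 'http://slackbot.internal/notify',
        ],
        'jira' => [
            'issue:updated' => 'http://slackbot.internal/notify',
        ],
    ];
    // Unknown sources or events are simply ignored.
    return $routes[$source][$event] ?? null;
}
```

In practice there would be signature verification and real dispatching, but the straddling is just this: a public listener that knows how to reach the private side.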

Webhooks

Eventually, I got the server, but it was a number of years later, and my responsibilities had grown to the point that I didn't have a lot of time to implement any of these things myself. What did happen has ultimately been great for the project and amazing for what it provides to the developers on the team: we hired my brother as a developer. I was able to tell him about my goals and ideas for the project, and fortunately, he had the time to start implementing. So in the paragraphs that follow, please understand that "we" almost always means my brother implementing and coding the entire project, with both of us collaborating on the ideas.

Using Zend Expressive's middleware-based framework as a starting point, we've found over the past few years that the possibilities are endless. Not only were we able to integrate my integrator tool, we were able to enhance it greatly. Instead of polling the repository for changes, the moment a pull request comes in, it triggers a build on the Jenkins servers on the internal network. When the builds complete, or comments are made on the pull request, the Webhooks server can post a status to Slack. It also checks a configurable set of criteria: that the builds have passed, that there are the right number of approvals, that the right number of people in particular groups have approved, and that no comments change any of this. A comment can require another individual's approval, require the author's own approval, and more. Assuming all the criteria have passed, the Webhooks server can either automatically rebase and squash the code (again, all configurable) or let the author know that they need to rebase.
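The criteria check described above might look something like this. The field and rule names are assumptions for illustration, not the real server's configuration schema:

```php
<?php
// Sketch of a mergeability check: builds must pass, there must be enough
// approvals overall and per group, and no comment-imposed blocks may remain.
function isMergeable(array $pr, array $rules): bool
{
    if (!$pr['buildsPassed']) {
        return false;
    }
    if (count($pr['approvals']) < $rules['minApprovals']) {
        return false;
    }
    // Each named group must supply its own minimum number of approvals.
    foreach ($rules['requiredGroups'] as $group => $needed) {
        if (count(array_intersect($pr['approvals'], $rules['groups'][$group])) < $needed) {
            return false;
        }
    }
    // A comment can demand a specific person's approval before merging.
    foreach ($pr['blockedOn'] as $person) {
        if (!in_array($person, $pr['approvals'], true)) {
            return false;
        }
    }
    return true;
}
```

Because each criterion is just data, adding a new rule is a matter of configuration rather than new plumbing, which is much of why the approach scaled.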

We follow a rebase workflow for most projects, mainly because QA can test a pull request on a branch and know that the code they've tested will be identical to what lands after the merge. This leads to another feature that was added more recently: by posting a command in Slack, developers or QA can bring up a sandbox set of servers for our applications on whatever branches or code versions they desire.

The webhooks project started simply and has grown to become an even more important part of our workflow than we initially thought it would be. Much of this functionality is configurable or initiated through Slack commands, and the commands are all controllable via an ACL (access control list), which is itself configurable through Slack. We can set up automatic merges, rebases, and Jenkins runs. We can manage email configuration through Mailgun for one of our products. We can look up Twilio error logs, and anyone can ask the server why a particular pull request hasn't merged yet.
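The ACL itself can be very small. A sketch under assumed names, where each command maps to either '*' (anyone may run it) or a list of allowed users:

```php
<?php
// Sketch of a Slack-command ACL check; command names and users are invented.
function canRun(string $user, string $command, array $acl): bool
{
    $allowed = $acl[$command] ?? [];          // unknown commands allow no one
    return $allowed === '*' || in_array($user, (array) $allowed, true);
}

// Example: only alice may trigger merges, but anyone may order a burrito.
$acl = ['merge' => ['alice'], 'burrito' => '*'];
```

Storing the ACL as plain data is what makes it editable from Slack itself.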

Because of the ease of development using middleware with Expressive, some of the commands we've built are just for fun. There's a feature to send an inspirational message to someone, and even one that lets you order a burrito.

Let Computers Do the Boring Things

The point of all of this is that you don't have to start off with a server that can handle everything; that isn't feasible. Start by automating one small thing that makes your life a little easier, then iterate. For us, it started with automating code merges; later we added environment creation, rebasing, and much more. It means the human developers can focus on the parts of the process only they can do, writing and reviewing code, while computers handle the boring, repetitive, and error-prone stuff: rebasing, merging, bringing environments up and down, and deployment. That will let you focus on the more interesting things, like coding, solving problems, and going home on time.
