On Reddit a few days ago I read through this discussion thread regarding the article by Matt Rickard titled “Developers Should Deploy Their Own Code”. I was just the guinea pig for starting this process at the company I work for, so while the painful experience is still fresh in my mind, I figured I’d jot down some thoughts.
The Argument(s)
The discussion of the article really made one larger point – it’s not as simple as “developers should deploy their own code”. Within that, there were two common threads, which are definitely related but not quite the same thing.
- Developers should be responsible for their production code.
- Developers should be responsible for deploying their production code.
So what’s the difference?
Responsible for Code
This argument essentially states that once code is in a production environment, developers should be held accountable for any failures in the application itself. A common sentiment behind this argument is that site reliability engineers, infrastructure/platform engineers, and other product-development-adjacent technical types are usually on the hook for being on-call for downtime, but application developers tend to shy away from it. It’s common for them to want to toss a bundle of code over the fence (so to speak), and then ignore whatever happens after that.
I think this argument makes a lot of sense, honestly. I personally feel a load of responsibility for bad code and don’t mind taking my turn at being on-call every now and again. I’d say it makes for a more resilient application, too, since no one wants to get woken up in the middle of the night to support a thrashing app. That pain leads development teams to build stronger software.
Responsible for Deployment of Code
This argument makes the point that with the rise of things like infrastructure-as-code (namely, Terraform), application developers should be empowered to own the SDLC from design all the way through deployment. In this model the infrastructure or platform team would build abstracted modules or building blocks of infrastructure that development teams could self-serve with.
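To make that concrete, here’s a rough sketch of what that self-serve pattern might look like with a Terraform module. The module name, source URL, and inputs are all hypothetical – this is just to illustrate the kind of abstraction a platform team might hand to an application team, not what my team actually uses:

```hcl
# Hypothetical platform-team module: hides the messy infrastructure details
# (load balancer, autoscaling, DNS, alarms) behind a few app-level inputs.
module "orders_service" {
  # Placeholder source – wherever the platform team publishes its modules.
  source = "git::https://example.com/platform/terraform-modules.git//web-service"

  service_name    = "orders"
  container_image = "registry.example.com/orders:1.4.2"
  instance_count  = 3

  # Networking, IAM, and logging defaults are baked into the module,
  # so the application team only touches these app-level knobs.
}
```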
I think this really falls apart, though. Those abstractions have to be truly rock-solid for non-platform-focused developers to be able to consume them. Even then, an infrastructure team is going to build for the lowest common denominator. The application I was recently developing was structured completely differently from our typical applications, so some of the abstractions didn’t really apply. That meant the people who wanted to be hands-off for the deployment really couldn’t afford to be, since I was learning on the fly.
Takeaways
I don’t know that I completely agree with either approach.
I’m not sure it’s reasonable to expect application developers to generalize so heavily. This is a complicated subject, but I feel like the solution is to involve the platform engineers early and often (probably as early as the design phase), maybe to the point of pair programming on the topics that are most murky. Otherwise, the insistence on extreme generalization can tank project deadlines, application design, and even developer morale if things go badly enough.
I also understand the other side of the coin, where app developers don’t take quite enough responsibility when their code takes things down. In that case, I definitely agree that there’s a reasonable expectation of accountability. There’s certainly benefit to having cross-trained developers as well (with the obvious caveat that they’ll generally be less effective than the specialists).
As with all things, there is a balance, and I think meeting somewhere in the middle is the most reasonable approach.
That’s all for now. Thanks for reading!