For the vast majority of my professional career, I’ve worked with mostly just PHP, and felt pretty confident with it. However, the longer I worked with just PHP, the more afraid I became of branching out into new things. I watched as the backend ecosystem started changing – it seemed like everyone was picking up Node or Go or Ruby – but what if I couldn’t learn those? What if my skills didn’t transfer? What if I wasn’t as good at them as I am at PHP – or even worse, not good enough to get anything done at all?
Last fall, my team was tasked with building a new service, and a coworker suggested it was a great opportunity to create a serverless microservice using Node.js. Over the course of a few months I threw myself into learning the AWS ecosystem, functional programming, and the Serverless framework, and started writing JS – and I realized I’d been holding myself back for no reason.
That project started me down a path of learning TONS of new technologies – and after 20 years of writing software, I finally got over my reluctance to learn how any of the hardware side worked, a reluctance rooted in the same fear of not being good enough. I bought a kit and started learning about microcontrollers and circuits. Powering an LED for the first time, and controlling which LED lit up with a simple toggle switch, was as rewarding as those first “Hello World” scripts. I jumped into Python and C++, built IoT devices for my home that send signals over WiFi and long-range radio, learned how to solder – and then taught my 8-year-old daughter how to!
I asked my daughter to come up with an idea for a project for us to build together, and with her (admittedly vague) product requirements, we started working on her “cube shaped light controlled by drawing on an iPad app”. I created a React app for the frontend, which not only needed to support touch events so it would work on the iPad, but also multi-touch for better interactions. The backend is functional JS running on a Lambda, which communicates with a third-party service to push events to the microcontroller – which runs Python to control a bunch of individually addressable LEDs. The final step was designing a case for the whole thing, which went through a bunch of iterations where we tried to use acrylic cut on the CNC machine, before I finally caved and bought a 3D printer.
I wish I hadn’t held myself back for so long. Don’t be afraid to step outside of your comfort zone; the discomfort is where we grow. There is an entire world of communities just like our PHP community out there, waiting for more newbies.
Knitting is computing. Bear with me, I’ll explain.
Computing is using certain hardware, following a set of instructions to manipulate input and produce a desired result.
Knitting is using certain hardware, following a set of instructions to manipulate input and produce a desired result.
See, it’s the same! Knitting ‘hardware’ is needles, our ‘input’ is yarn, and our result is… a scarf, perhaps.
Knitting patterns are exactly like computer programs – they’re a set of instructions. Yes, that would make knitting pattern writers programmers.
As a woman in tech, I’ve witnessed many surprised faces belonging to men who have just learned that I am a programmer. None have ever expressed surprise that I am able to knit!
Unravelling the Mystery
Let’s take a look at a simple knitting pattern that will create a slightly textured fabric.
worked over even number of sts
row 1: [yo, k2tog] rep
row 2 & 4: k all sts
row 3: [ssk, yo] rep
rep rows 1-4 for pattern
This pattern consists entirely of familiar concepts: variables, conditionals, iterators, and functions.
Let’s look at the variables. These are sts (stitches) and row (row). Our knitting needles hold an array of stitches which are knit over a number of rows. So, after we knit (process) each of our sts in turn, our row increments.
Our pattern contains conditionals. Which row we are on determines which instructions we follow. If row == 1, we [yo, k2tog] rep, if row in (2, 4), we k all sts, and if row == 3 we [ssk, yo] rep.
Each of the instructions for the rows ends with rep, our iterator. rep means to repeat the previous instruction. In this case, we continue to repeat the instruction, processing stitches, until we complete the row.
We also have functions. These are the actual yarn manipulations – we see them in the pattern as yo, k2tog, k, and ssk. Each of these is a different way to process a stitch in a row. As a knitter, you must learn how to perform each manipulation.
yo is a manipulation that increases the number of sts in a row by 1. k2tog and ssk manipulate two stitches at the same time, decreasing the number of sts in a row by 1, and k means to knit – leaving the number of sts as it is.
Examining the pattern using this knowledge, we can determine that given any even value for sts, sts.length will remain unchanged at the end of each row. If we were to keep repeating rows 1-4, we would end up with a rectangular piece of fabric which would get longer with each iteration. That’s our scarf!
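That row-by-row reasoning can be sketched as an actual program. Here is a minimal Python simulation (names and net effects taken from the pattern above; the function names are mine) that confirms the stitch count is invariant across each four-row repeat:

```python
# A sketch of the pattern as a program: each row is a list of stitch
# "functions", and each function has a net effect on the stitch count.
# yo = +1, k2tog = -1, ssk = -1, k = 0, as described above.

NET_EFFECT = {"yo": +1, "k2tog": -1, "ssk": -1, "k": 0}

def row_instructions(row, sts):
    """Return the stitch manipulations for one row, as in the pattern."""
    if row == 1:
        return ["yo", "k2tog"] * (sts // 2)   # [yo, k2tog] rep
    if row in (2, 4):
        return ["k"] * sts                    # k all sts
    if row == 3:
        return ["ssk", "yo"] * (sts // 2)     # [ssk, yo] rep

def work_rows(sts, repeats=1):
    """Work rows 1-4 `repeats` times and return the final stitch count."""
    assert sts % 2 == 0, "worked over an even number of sts"
    for _ in range(repeats):
        for row in (1, 2, 3, 4):
            sts += sum(NET_EFFECT[s] for s in row_instructions(row, sts))
    return sts

print(work_rows(20, repeats=10))  # 20 – the width never changes, only the length
```

Every yo in a row is paired with exactly one decrease, so each repeat is count-neutral – which is the invariant that makes the fabric a rectangle.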
A History Intertwined
Computing and textile manufacture have a shared history. Back in the 18th century, the textile industry was undergoing a revolution. A silk fabric inspector, Jacques de Vaucanson, turned his efforts to mechanising the cloth-making processes. He succeeded in creating a water-powered automatic loom. Unfortunately, this invention was largely ignored for several years – until it was picked up by Joseph-Marie Jacquard. Jacquard improved upon this automatic loom and patented the invention in 1804.
This Jacquard machine could be attached to a loom and, given some input, could create woven fabric according to any given pattern. This was a breakthrough in textile manufacture. Due to the way in which it employed punched cards, exceedingly complex patterns could be fed to the machine and easily re-used.
You may know that early computers also used punched cards for reusable input. This came about as a direct result of the mechanisation of textile manufacture. Charles Babbage was heavily inspired by Jacquard, and used much of the same technology when he designed his Analytical Engine – the world’s first design for a general-purpose computer.
Punched cards are very familiar in computing as they represent binary data – either the hole is open, or it is closed. 1 or 0. This fact defined human-machine interaction. Ada Lovelace–the world’s first programmer and accomplished mathematician, who just so happened to be a woman–surmised that these punched cards could represent not just numbers and mathematical operations, but arbitrary data.
We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves.
– Ada Lovelace
Weaving Things Together
There is currently no single ‘universal’ notation for knitting patterns. Different cultures even have different names for stitches; for example, an English double crochet stitch is an American single crochet stitch. There are also different ways to signify the size of needle required for a pattern, and different names for the thickness of yarn.
When we write our code, we usually use a specific language which is formally defined. This ensures that our code can be executed, and understood by other engineers. Could a ‘universal language’ be developed for knitting, and could we write tools to parse and validate our written patterns, and identify mistakes?
Each knitting pattern is generally written to be performed on specified hardware. Could we develop tools that allow knitting patterns to be transcompiled for use with different hardware? For example, increasing or decreasing the number of rows or stitches to achieve a desired size, given any combination of needles and yarn.
Furthermore, is it possible that our knitting language could actually be used as a general-purpose language for performing computations? For this to be true, our knitting language must be Turing complete. In her master’s thesis “Algorithmic Complexity in Textile Patterns”, Heidi Metzler investigates whether or not knitting can be used to simulate a universal Turing machine. If it can, then knitting patterns are Turing complete!
In conclusion, knitting and computing are more similar than you might think! I hope that this article has given you food for thought, and perhaps some new perspective. If it weren’t for the textile industry and the contributions of women, perhaps computational theory wouldn’t be where it is today.
Lately, I’ve been working on migrating Doctrine ORM to PHP 8 syntax. To that end, I’ve been using Rector, an automated refactoring tool. It comes with a set of rules called LevelSetList::UP_TO_PHP_81 which makes sure you use the most modern syntax to do something as long as it is supported on PHP 8.1. UP_TO_PHP_81 is equivalent to UP_TO_PHP_80 + PHP-8.1-specific rules. UP_TO_PHP_80 is equivalent to UP_TO_PHP_74 + PHP-8.0-specific rules. UP_TO_PHP_74 is equivalent to… you get the idea, and maybe also marvel at how satisfying this looks. 🤓
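As a sketch, a minimal rector.php using that set might look something like this (the paths are placeholders, and the exact configuration API depends on your Rector version):

```php
<?php

declare(strict_types=1);

use Rector\Config\RectorConfig;
use Rector\Set\ValueObject\LevelSetList;

return static function (RectorConfig $rectorConfig): void {
    // Directories to refactor – adjust to your project layout.
    $rectorConfig->paths([__DIR__ . '/src', __DIR__ . '/tests']);

    // Apply every rule up to and including the PHP 8.1-specific ones.
    $rectorConfig->sets([LevelSetList::UP_TO_PHP_81]);
};
```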
I’m doing that work on the 3.0.x branch because we don’t currently plan to drop support for PHP 7.1 on the 2.* branches. While I’m at it, I’m taking this as an opportunity to add type declarations where they are not already present. Adding type declarations is very often a breaking change, especially when working on methods that are not private and classes that are not final, which explains why that was not already done everywhere on lower branches.
Thankfully, we have plenty of phpdoc comments that Rector can use to infer the correct type declarations to add. Here is how such changes might look:
-/**
- * @param string $foo
- * @return int
- */
-public function doStuff($foo);
+public function doStuff(string $foo): int;
doctrine/orm is a lot of code though, so I’m trying to work in bite-sized pull requests. First, because it would be awful to review all these changes at once, but also because the phpdoc we have can be imprecise, or plain wrong (we still have to work through our PHPStan and Psalm baselines). Having inaccurate phpdoc might be fine from PHPUnit‘s point of view, but having inaccurate type declarations isn’t, so I need to fix these by hand afterwards.
I like to repeat that we’ve never been closer to releasing doctrine/orm 3.0 than we are today – a piece of information you can share widely, because it’s always true. While that’s a fact you can hardly deny, it is still good to backport fixes and improvements to the 2.x series: it makes the difference between branches smaller, which in turn makes merging up from 2.x to 3.x easier, but it also lets users benefit from those fixes and improvements earlier.
Usually, what I do is pick a class or namespace, apply Rector on it, then review the changes. If I spot phpdoc that is wrong, I fix that on the patch branch (currently 2.13.x). If I spot phpdoc that is correct but a bit vague, I make it more precise on the minor branch (currently 2.14.x). After the PR is merged, I merge 2.13.x up into 2.14.x and 2.14.x up into 3.0.x, and I try running Rector again, this time with correct phpdoc.
That migration is a good use case for the git subcommand I want to introduce to you today, because I need to change branches often. That’s already not unusual when maintaining a library, but it’s exacerbated here. Sorry for the Tom Jones earworm.
In this particular case, here is the problem I am facing: imagine you have 10 fixes or improvements to backport from one branch to the others, and that you discover them progressively. How would you proceed? Would you stash changes, switch branches, run composer update for good measure, make your change, commit, then switch back every time? Or would you maybe try to remember several things you need to do, and try to do them all at once? Either solution sounds pretty bad.
The subcommand that can save you from this is git worktree. It allows you to have ✨several✨ worktrees at once for a single repository.
Creating a throwaway worktree with branch 2.14.x can be done like so:
$ git worktree add /tmp/throwaway 2.14.x
The operation is instant, and in the case of doctrine/orm, there are only 2 steps to be ready to work on that new branch:
$ cd /tmp/throwaway
$ composer update
But I do not want to create throwaway worktrees… I find the idea of having permanent worktrees very appealing: they are like starting points for new branches I want to create. Each one has its own vendor directory, with the right dependencies.
Also, I prefer to have them neatly grouped in a single directory. I could have a normal repository, and then add worktrees inside, but then git would consider the worktrees themselves as new directories that need to be put under version control. To avoid having that main worktree, you can use a bare repository:
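Cloning the repository as a bare one might look like this (using the public doctrine/orm clone URL):

```shell
$ git clone --bare https://github.com/doctrine/orm.git doctrine-orm.git
```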
You will end up with a directory called doctrine-orm.git, and the contents of that directory will be what you usually find in the .git directory. If you use git log, you will see the history of the default branch, which the current HEAD points to (2.13.x in our case).
Doctrine uses a consistent branching model on all of its repositories:
🐛 bugfixes go to the patch branch;
💡 new features, deprecations, and improvements go to the minor branch;
💥 breaking changes go to the major branch.
At first, I named directories after branches, but when 2.12.x went unmaintained, I no longer had a use for the corresponding directory, and realized I should have one directory per branch type instead. Here is how to create that workforest 🌳🌳🌳:
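Assuming the bare repository from before lives in doctrine-orm.git and the worktrees should go in a sibling doctrine-orm directory, the commands might look like this:

```shell
$ cd doctrine-orm.git
$ git worktree add ../doctrine-orm/patch 2.13.x
$ git worktree add ../doctrine-orm/minor 2.14.x
$ git worktree add ../doctrine-orm/major 3.0.x
```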
After that, you should end up with something like this:
doctrine-orm
├── major # a full 3.0.x doctrine/orm is inside 💥
├── minor # a full 2.14.x doctrine/orm is inside 💡
└── patch # a full 2.13.x doctrine/orm is inside 🐛
doctrine-orm.git # Looks just like a regular .git directory
└── worktrees # contains administrative files for your worktrees
Note that I could have created the worktrees directly inside doctrine-orm.git, but I don’t want to – I find that messy.
When inside doctrine-orm/*, git still knows where the repository is stored thanks to a .git file in each worktree. Yes, in this case it’s just a one-line file with a pointer to a directory that holds administrative data for that worktree.
When trying this at first, it broke custom git hooks I had. That’s because when using worktrees, Git will store the administrative data that is common to all three worktrees in the usual directory, but will put worktree-specific administrative data in another directory (here: /path/to/doctrine-orm.git/worktrees/minor). Making the distinction between the git directory specific to a worktree and the git directory shared by all worktrees helped me fix my hooks. It is possible to figure them out from inside a worktree:
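git rev-parse can report both directories; run from inside a worktree, the two will differ:

```shell
$ git rev-parse --git-dir         # administrative data specific to this worktree
$ git rev-parse --git-common-dir  # administrative data shared by all worktrees
```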
In 2016 I decided that I didn’t want to be an IT coordinator anymore. At the time I couldn’t deal with the politics involved in that role – you know, co-workers protesting the changes that you have to make happen, and stuff like that. After some talks with job and career advisors I decided to become a developer. Probably a front-end one.
Thanks to one of the advisors I was offered a position at a non-profit that enables people with physical and mental disabilities to find and keep a job. That company wanted to make use of my front-end and webmaster experience and offered me a traineeship for back-end development. The problem was, they wanted me to learn it the autodidactic way – the way that my direct co-worker and so many others in the industry followed. It didn’t seem like a problem at first, mind you. I already considered myself the prototype of a self-educated employee. And besides: how difficult could PHP be? You know, being a scripting language invented to cough up HTML? That much I already knew. So – believe it or not – they gave me the book Head First PHP & MySQL (O’Reilly, 2008) and, after I finished it, asked me to help maintain the Document Management System and build an intranet site based on WordPress. They must have thought I was ready.
How little we all knew! It took me three years to feel comfortable in my role. If I hadn’t been so determined to become a developer, I would have quit already. Meanwhile, the patience of the product owner fluctuated, which didn’t help either. As a matter of fact, we had a good talk only yesterday (2022), and he still took the opportunity to express his concerns about my productivity – totally overlooking the fact that I not only introduced and implemented such an essential process as lifecycle management through Git, but also helped secure the ISO-27001 audit by drawing up a tree of security policies for the DMS. Some days I spend more time on processes than on coding.
It’s been six years since I started to become a PHP developer. I still have a lot to learn. If you find yourself in a similar position, I can assure you that there’s not much time to learn new things when production goals are set. Unless perhaps you’re nineteen, started programming twenty years ago, aren’t responsible for anyone else’s livelihood, and can work 24 hours a day. (Not complaining about my own circumstances here, though.)
What all this learning and determination taught me, besides the perks of something being null, is that most tutorials and books can teach you programming to some extent, but they don’t teach you to be a developer. In hindsight I have to admit that to become a professional developer, it’s maybe better to get proper training and education, because business demands more skills than logic and tooling. If you’re one of those dreaded full-stack developers like I am, or even ‘just’ a back-end or front-end developer, there’s so much more to learn than code. And good tutors know this.
I was lucky to be determined and be able to compensate for my lack of knowledge and experience in PHP programming with the knowledge and experience that I gained in other jobs over the years. But one isn’t always that lucky.
Let’s set aside all practical concerns for a moment — it’s Christmas, after all. If you could choose — freely choose: what would you change about PHP?
Would you want generics or the pipe operator? Maybe you’d like to see consistent function signatures or get rid of the dollar sign. Type aliases, scalar objects, autoloading for namespaced functions, improved performance, less breaking changes, more breaking changes — the list goes on.
But what if I told you, you had to pick one, and only one. What would it be?
My number one feature isn’t in this list. Even worse: my number one wish for PHP will probably never happen. But like I said at the beginning: let’s set aside all practical concerns. Let’s dream for a moment, not because we believe all dreams come true; but because dreams, in themselves, are valuable and give hope — it is Christmas, after all.
It was a language that — legend says — was written in two weeks. Yet it grew to be the most popular programming language people had ever seen, almost by accident.
Let’s talk about PHP. I’ve been writing it for more than a decade now. I love the PHP community, I love the ecosystem, I love how the language has managed to evolve over the years. At the same time I believe there’s room for PHP to grow — lots of room. And so I dream.
But dreams seldom come true.
And that’s ok. It means my dream is probably unrealistic, but it also means something much more important.
My recent realisation is that PHP is already awesome. People are already building great things with it. Sure, maybe PHP is boring compared to the latest and greatest programming languages, and sure, you might need to use another language if you’re building something for those 0.1% of edge cases that need insane performance. My dream of a superset of PHP might be one of many approaches, but it sure isn’t the only viable path forward.
Even without that dream of mine: PHP is doing great. It’s not because of how it’s designed, and it’s not because of its syntax. It’s not because of its top-notch performance, and it’s not because of its incredible type system. It’s because people like you are building amazing things with it. Whether you’re using PHP 8.2 or not; whether you’re running it serverless or asynchronously or not; whether you’re writing OOP, FP, DDD, ES, CQRS, or whatever term you want to throw at it – you are building awesome things. It turns out a language is rarely the bottleneck, because it’s merely a tool. But “PHP” is so much more than just a language, so much more than just a tool. That’s because of you.
Thank you for being part of what makes PHP great. Happy holidays.
I can’t remember exactly when I discovered PHPAdvent, but it was pretty early in my PHP career, so probably around 2007/2008. At the time I was a fairly junior PHP developer, and the ability to read about the work and lived experiences of more senior developers was pretty life-changing. I felt like I was sitting around a campfire, listening to the stories of the tribe’s elders. Every day I read each new post with great enthusiasm, keen to hear what new things I could learn or discover.
24DaysInDecember is the spiritual successor to PHPAdvent, and I look forward to planning and sharing the stories of the PHPamily every year. This will be the 7th edition of 24 Days in December, and hopefully, by kicking things off earlier than I did last year, we’ll have another bumper year of stories to share.
It’s been such an interesting year in the PHP space, so if you’ve had a thought or idea in your head that you’d like to share with the PHP community, this is your chance.
What should I write about?
In all honesty, your contribution is whatever you want it to be about. Did you learn something recently you’d like to share in a guide or a tutorial? Do you have an opinion about the current state of PHP core development? Have you been working on something cool you’d like to share with the community? Do you want to share something less technical you learned this past year? The content is entirely up to you.
How much time do I have?
There are no hard and fast deadlines, except that we try to post at least one new article every day for the 24 days leading up to the 25th of December. If we have more contributions we keep going, but that’s our goal. Given that the 1st is just over 3 weeks away, if you choose to contribute today you’ll have at least 3 weeks.
How long should it be?
There are no guidelines here. It could be a few short paragraphs or an entire dissertation. We don’t mind.
What format should I send it to you in?
Markdown is preferable, but plain text is also acceptable. You can send it in an email, as a text attachment, via Google Doc, tied to the leg of a carrier pigeon, we honestly don’t mind, as long as we can get it.
We hope to hear from you soon.
If you would like to contribute to this year’s edition, please email us at info AT 24daysindecember DOT net, or contact us via Mastodon at @24DaysInDec or Twitter (while it’s still viable) at @24DaysInDec.
If production breaks, it is not the fault of a single person but a faulty process.
The story, all names, characters, and incidents portrayed in this blog post are fictitious. No identification with actual persons (living or deceased), places, buildings, and products is intended or should be inferred.
We are professionals, nothing of this would ever happen to us.
One of our customers using Shopware called me and said they would like to have a trailing / on every URL. I thought that would be no problem, opened an SSH terminal, and added a few lines to the .htaccess on production. I tested it, and it worked. Two hours later the customer called and told us that it was no longer possible to put anything into the cart.
Shopware uses its API to add items to the cart and the API doesn’t like a trailing /.
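For illustration, a trailing-slash rule along these lines (a sketch, not the exact lines from that day) produces exactly this behaviour, because it also matches the API routes:

```apache
RewriteEngine On
# Append a slash to any URL that doesn't already end in one
# and has no file extension.
# Note: this also rewrites the API routes – which is what broke the cart.
RewriteCond %{REQUEST_URI} !/$
RewriteCond %{REQUEST_URI} !\.[a-zA-Z0-9]+$
RewriteRule ^(.*)$ /$1/ [R=301,L]
```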
How could that have been mitigated?
On our “safety” checklist we have a few things:
We don’t have tests though.
But tests wouldn’t have helped. Why you ask? Because some dumb idiot (me – this is a fictional story!) accessed the production system, ignored all processes and broke the system.
So the simple answer to “How could this have been mitigated” is: Don’t allow access to the production system. (But it is important to state the downside of this rule: You can’t fix anything fast).
Customer calls: “No secure connection to the server is possible”. Ok, obviously something with the SSL certificate is not working. Yesterday I read an article about an expired R3 certificate from Let’s Encrypt, which was breaking one connection or another. And checking the certificate shows: yes, it uses the expired R3 intermediate certificate.
Easy fix: create a new certificate. Connect via SSH to the server (no, that was not the problem! :D) and run certbot for a new certificate. Testing, done.
A little while later the telephone rings again; this time the admin asks why all the websites are down. I told him I had updated the certificate and tested the site afterwards – it was still online.
The certificate contained 12 domains before I updated it. After the update only 1 was in it.
How could that have been mitigated?
The bad thing: if I hadn’t had access to production, I couldn’t have tried to fix it in the first place.
The easiest ways to avoid this:
Know what you are doing (I’m a developer, not an admin).
Use a TLS monitoring service. The one we are using is Oh Dear.
Automate your certificate renewals (ok, this was a special case)
Customer calls, their e-commerce site is down. Investigation starts. Login with SSH is already full of errors – hard disk full.
We had already discussed with the admin that the old Shopware version we use has a cache leak and we need to update(!!), but it was the end of November and the customer preferred to do it next year. Ok. The cache writes about 90GB a week, we have a 160GB disk with about 120GB free, so it is no problem if we clean the cache once each week.
But now it happened: the cache flooded the disk, and we didn’t know why. First thing: clean the cache, and the store is online again. Now, check what happened. We only had 40GB of disk free. Why? Good question. With each deployment we create a new cache, which means we “archive” the old cache and never delete it. So with each deployment our disk space decreased, and we didn’t notice.
So we installed a cronjob which deletes the cache not just once a week but once each night – just to be sure – and not only the current cache but the caches of all releases. And then warms the current cache up.
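Such a cronjob might look roughly like this (the paths and the warm-up command are made up for illustration; the real ones depend on your deployment layout and Shopware version):

```
# Nightly at 03:00: delete the caches of *all* releases,
# then warm the cache of the current release again.
0 3 * * * rm -rf /var/www/shop/releases/*/var/cache/* && /var/www/shop/current/bin/console cache:warm >/dev/null 2>&1
```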
How could that have been mitigated?
Simple answer: hard disk monitoring. You can monitor a lot, but free disk space and used inodes* are a good starting point. Used memory and CPU utilisation can be next – but I’m no expert on this topic.
*inodes are the entries for files in the file system, so each file can be found on the hard disk. It can happen (e.g. if you write TONs of small session or cache files) that you have free disk space but are out of inodes – then you still can’t create new files, because the file system can’t remember their positions anymore. Like a database when you run out of AUTO_INCREMENT ids.
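Both numbers are easy to check by hand with df, which is a quick first step before setting up proper monitoring:

```shell
$ df -h /   # free disk space
$ df -i /   # free inodes
```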
Things you want to have (and are not always easy to sell – I know)
Software, tips, and services we use(d) to mitigate our risk:
A lot has been written about the PHP Community. How it is an integral part of why PHP is not dead. About who is part of the community. About how it helped people become better developers. About how people improved because of the community.
And most of that is about the PHP language. Almost all of us came for the language. But what makes the PHP-Community so special for me is that a lot of the people stayed for the friends.
Friends who go beyond “everything is fine”. Friends you can count on, and friends who actually take care of each other.
Whether that is people enquiring about the health of others just because they were suddenly not that active on Twitter anymore. Or people making plans to more closely engage with someone to help them through a rough time. Up to realizing that there are people talking about their health issues openly in chat-rooms or in blog posts.
And I think it is great that we as a community stand together in good times as well as in bad times.
We for sure have our differences. And we can fight! Over almost nothing!
It starts with Tabs vs Spaces or VSCode vs. PHPStorm, and doesn’t end at deprecating dynamic properties.
We are also capable of toxic behaviour. There are some people who challenge the community in every way, and there are even some we had to say “no” to, in order to become more welcoming and more empathic.
And especially in bad times, it doesn’t matter whether we have a background in WordPress, Laravel, Symfony, Drupal, PHP, or whatever else we started out with. What matters is that we are not afraid to say that not everything is fine. And that there are people that take care of each other beyond mentoring through the next development challenge. People that help each other out. No matter what!
And it gives me a tingling feeling that I can be a part of that.
Thank you all for providing such an overall safe space.
I have spent my whole career writing code. I consider myself a jack-of-all-trades – I know a bit of everything when it comes to development, CI/CD, SEO, UX, project management, and probably many other areas. While this is very useful in some situations, I hit my limits much more quickly when it comes to learning things in depth.
I even started to search for a job that would let me move from writing code all day to something where I could use my broad knowledge. I imagined myself as a project manager with a technical background. Luckily, out of the blue, I got an offer to work as a WordPress Ambassador at Buddy. I’ll be honest – back then I didn’t have a clue what a person like this does, and I was a bit skeptical because I was sure it would require talking with people (and I had learned PHP precisely so I could limit those contacts).
It’s been almost a year since I started working in Developer Relations and you know what? This is a true dream job for people like me. First of all, I don’t have to code all day, but I still get enough chances to do so. I learned that I love talking with other people – this was the biggest surprise for me. I also discovered that I’m a pretty solid webinar host and organizer.
The funniest thing about all this is that it started with an “echo ‘hello world’” many years ago. So if you feel you are getting tired of coding all day but still want to use your technical skills, maybe DevRel is for you too?
Last year, a call for entries to 24 Days in December came out, and I wanted to write something discussing PHP documentation to encourage more contributors, but … the words just weren’t coming to me.
Last year, I had started a new job a couple of months prior, after being unemployed for a year and a half. I was adjusting and acclimating to my first full-time dev job. I was assigned a project that had a deadline of end-of-January, and I had to quickly scale up learning the code I was working with and how to solve the overarching problem. I felt the pressure because I wanted to succeed. I wanted to prove myself, both to my new employer and to myself, that I could do this.
You see, in my previous role, while development was part of my job duties, it wasn’t the focus, and it wasn’t my only duty. I tended to pick up programming-related tasks, push them until I was nearly done, and then… become stuck. And I had very few options to unstick myself. I wasn’t an experienced developer: I had taken a handful of traditional, introductory courses and had some basic development experience, but I’d regularly be baffled why something wasn’t working, struggle to create examples I could share with others online to ask for help, and I didn’t work with any other developers I could ask questions. I regularly felt stuck, and regularly felt incapable. Impostor syndrome was born from the weight of several unfinished programming projects over the years.
I had to meet this deadline to show myself that I could figure out programming problems and write code. And I did.
But the cost was burnout and the inability to work on other things for a while.
A common indicator of burnout is feeling a lack of control over one’s life. Ambition and motivation turn into pressure and stress. “I want to do XYZ” becomes “I have to do XYZ.” Sometimes this stress is necessary to complete a task, but there’s a subconscious promise of reprieve afterwards, a break — that the pressure is temporary, and that I will have time later to recover and care for myself. Except, at the time, I didn’t understand this. I wasn’t able to recognize that I was already burnt out. I had become so accustomed to feeling overwhelmed in everyday life that I thought it was “normal,” or “normal for me.”
To give a short summary: I was bad at keeping my apartment clean. There are underlying reasons that are out of scope for this topic, but I struggled to motivate myself to keep my apartment in a state that provided me comfort. It was common for me to wash dishes one at a time because there were no clean ones in the cupboards. It was common for me to take a shower with no clean, dry towels ready. It was common for me to wait until the trash bags had piled up before I wheeled the trash can out for waste services to pick up. You get the picture. Cleanliness in my living area was an overwhelming struggle, and another piece of my burnout puzzle that had to be resolved before I could begin the journey of caring for myself.
Year after year, I failed to clean my apartment. I forgot what “clean” looked like for me. It was the project I would always procrastinate on, avoid, or push off to next month, next season, next year. As with my mental health, I realized this was something I couldn’t accomplish alone. I needed help. I contacted a cleaning company to do it for me, and it was worth every penny. I hired them to clean once a week thereafter, to build a stable foundation of comfort, of safety, in my mind.
There were objects in my apartment that I incorrectly assumed I wanted because other people had these items in their homes. I came to accept that what worked for others may not work for me, and that’s okay. Having these items in my apartment occupied space, in both my mind and my living area, and increased my stress because they were additional things to clean or account for while cleaning. So, I got rid of them.
With the weekly cleaning, I adjusted to having a clean apartment and the happiness it brought me, and I built an image in my head of what my apartment looked like clean and what I needed to do to maintain it. Then I canceled the cleaning service and started doing it myself. Around this time, I came to learn the true definition of self-care. I knew the word from therapy, but what I had considered self-care actually turned out to be coping behaviors.
Self-care is taking the steps necessary to maintain my happiness, which happened to include a clean apartment. I still struggle with the motivation to clean, but reframing cleaning as self-care was an epiphany. I started to understand what I needed to do to feel happy, and to try to prevent burnout from overtaking me again.
One of the hardest challenges has been acknowledging what I’m able to complete in a single 24-hour day. I make a to-do list of what I want to finish, but sometimes I overestimate my energy level for that day. I’ve learned to accept that I may not complete everything I intended, and to forgive myself when I can’t finish, so it doesn’t weigh further on my mind. Learning the act of kindness towards myself.
I tend to view two to four days as one really long 48–96-hour day with breaks for sleeping in between. I struggle with the flow of time and often think I can accomplish more than I actually can. But working towards following an internal day-to-day schedule is helping. Journaling my hopes, dreams, and aspirations so that I can form a rough plan towards achieving them is helping, too.
I also have days where I don’t feel up to working on anything, and that’s also okay. I spend those days taking care of myself, coping with life, and recharging in the hope that tomorrow is a better day. I remind myself that what I’m feeling in the present is temporary, and that it will pass. Another day will come that I will feel ready to take on the world, or at least the items on my to-do list. 🙂