Work smarter by taking cues from the music-learning process

In 2019, after several years as a cellist and orchestra manager, I started learning to code. While I didn’t have a technical background, I found I had developed a number of coding-adjacent skills through studying music. One practice I’ve adopted from music learning has helped me deliver code with fewer mistakes, better choices, and more efficiency: a thoughtful process.

If you’re not familiar with the process of learning classical music, you might be surprised by how many intricate problems we encounter, many of which we examine before even touching the instrument. I’ve found a similar approach is valuable when working in an enterprise code base full of interdependent systems. With this in mind, I’ve broken down the parallel music learning and software development processes into several stages.

Setting up your space

It’s a common joke among musicians that we can’t practice if our room isn’t clean. We take our space seriously, knowing how much it can distract or unsettle us. In music there’s a “flow” state very similar to that of coding. But that doesn’t just happen by itself. Reaching a focused mindset can take a really special set of circumstances, which are unique to you as a person, and sometimes fluctuate day-to-day. As a first step in the process, it’s helpful to take a moment and assess what environment will help us do our best work.

At music school, I often needed the accountability factor and lack of distractions of the school practice room. Most music schools have a block of tiny rooms, empty except for a chair and a music stand (if you’re lucky), whose only purpose is individual practice. That whole section of the school was full of practicing students. These days, the equivalent is taking my laptop to a cafe, where my mind is far away from the dirty dishes in my kitchen and anything else that could distract me. Or, I might just need to do those dishes, clean my desk, make some coffee, turn on a lofi playlist and the noise-canceling headphones, and enable do-not-disturb mode. Either way, this first step of the process can impact how effective the rest of the process will be.

Understanding the context

A lot of classical music was written under interesting political, romantic, or social circumstances. For example, Shostakovich’s 7th Symphony was written in the Soviet Union as World War II broke out and became a symbol of resistance to “enemies of humanity.” This knowledge impacts how we perform the pieces – in the Shostakovich example, we would bring in intensity and sometimes sarcasm. When learning a piece, we would also consider the composer’s typical style, and the nuances in the instruments of the period, which would limit or inform our technical choices. Some musicians refer to learning a piece of music as “studying” it, which is an apt description due to the research involved in the process.

Programming requires a similar exploration of context, ideally early in the process. You might carefully consider the requirements or expectations of a ticket and ask clarifying questions. You want to understand how your work fits into the bigger picture of feature delivery, and why it is important. In both music and programming, we are asking the same questions: What are the intentions behind the technical direction? What is the big picture? What exactly are we trying to accomplish?

Consulting the experts (and the docs)

Are there existing solutions to this problem? Don’t reinvent the wheel. If there’s already a tool or design pattern or interface for this, we should use it. That might take some discovery.

When I was learning a new piece of music, I would get on YouTube and check out performances by a variety of other cellists. Everyone has their own style and approach, so it was a good way to open my mind to all the possibilities of how to play the piece.

This step of the software development process might include:

  • Looking at existing solutions to similar problems in your code
  • Searching previous PRs and company documentation for the topic
  • Talking to other engineers on your team about how this has been done in the past, and why
  • Googling, Tweeting, etc.

Getting in there

In music, this is probably the most fun step in the process, and the least fun for someone else to hear. I would sit down with the cello and the sheet music, and start playing through it (badly). I would stay open-minded, not committing to any musical choices. The goal would be to discover the biggest challenges, and how the piece physically feels on the instrument in a general sense. I might mark some ideas in the music in pencil.

If I were programming, I would be going through existing code, assessing all the possible dependencies and the biggest challenges. I might write some high-level pseudocode or notes. I could check out existing test data for a similar method or feature and see how it maps to the code. If possible, I would walk through the user flow that is impacted by this code, and step through the code with the debugger. Again, no commitments yet, just open-minded exploration.

Making a plan

OK, now that we have a lot of information to work with, we’re ready to come up with an approach. Musically, this means we decide where the most important moments are, and what we want to “say” as an artist with the piece, what we want the audience to feel or think when they hear it. More practically, how much time do we have to learn this? We might want to time-box certain parts and move on to make sure we can cover everything, or plan which days we work on which parts of the structure. We have already assessed what we think the biggest challenges will be, so we could plan to focus most of our energy there.

In this stage of the programming process, we would come up with some testing ideas and consider edge cases. We could look at our deadlines and determine what our MVP should be, or map out some milestones. We would decide what to tackle first – start with the easy wins, or the biggest challenges. This might be a place where we break the work into tickets, or mentally divide the ticket into sections that can be tested independently. And now that we know our dependencies, we can decide whether to write new code or extend existing code, and plan the structure.

Problem solving

At this point in the process, we have a high-level plan and an end goal. Now how do we get from Point A to Point B? Here are a few of the strategies I use, in both music and coding:

  • Break it down into tiny pieces. In music, this means working on a few measures at a time. When you have part one down, work on part two, then put one and two together. When that’s working, figure out part three, and put parts two and three together, then parts one to three (and so on). I often do the exact same thing when I code!
  • Brainstorm a few ways you could solve the problem, then choose a good option and give it a try. For example, if it’s a big “shift” (a lot of physical distance on the instrument to cover), you could use any combination of fingers as a starting point and ending point. Each option would have unique pros and cons impacting sound, accuracy, and speed. If one way isn’t working well, you have several more options to choose from. Same with coding – there are many ways to solve even simple problems, so it can be helpful to start with several ideas and assess pros and cons.
  • Try working backwards instead of forwards for a different perspective. Sometimes in coding and in music we know where we want to end up, but not where to start – so why not start with what we know?
  • Ask for help or ideas from a mentor or friend. This might be built into the environment, like music lessons or coding mentorship, or you might have to reach out to a colleague.
  • Limit the time you spend on one problem so you don’t have time to get too frustrated or lost in the details. Try setting a timer for 20 minutes. Take a break and practice diffuse thinking. When all else fails, call it a day and come back tomorrow with a rested mind.

Bringing it all together

The problems are solved! Now we can put it back together and start refactoring. On the cello, this is where I would play larger sections through, make sure I am amplifying those important moments that I planned, and strengthen the “character” (AKA vibes) of the piece. In my code, I would be stepping through the flow that I wrote, checking against my company style guide, unit test suite, and static analysis tools, maybe improving variable or function names, and DRY-ing it all out.

Incorporating feedback

This is arguably one of the most important parts of the development AND music learning processes. In music we have several avenues to challenge our assumptions and learn how we can improve. Weekly private lessons are essential to a college performance degree; lessons are essentially an intense session of 1:1 feedback with a professor, often with the expectation that you will immediately adapt to their suggestions and build on previous feedback to refine your performance. Closely related are master classes, feedback sessions in front of an audience; and chamber music coachings, similar but with a small group such as a string quartet.

For the development process we have code review and pair programming (in addition to tests and static analysis, which can also give great feedback). Just like in music lessons, the key is adapting to the feedback, both from this project and previous projects, into our work, to refine our code and build on our knowledge. Receiving feedback can be difficult – no one likes to be wrong, and we tend to get attached to our choices – but I try to treat it just like I would on cello: a critical part of the path to a better outcome.

Performing / Deploying

I don’t know about you, but I never feel like my code or my music is ever truly “done.” I’ll probably look back at my finished product in a year and notice improvements I could make (which is a milestone of growth in both careers). But we have to balance our desire for “perfect” with deadlines and deliverables. We have to decide what is “good enough” – what we can stand behind and feel confident about, knowing that it’s short of “perfection.” I often remind myself that 100% all the time is unrealistic. So, let’s call this the final step in the process, knowing that we will continue to grow after this project is finished.

Of course, we can’t talk about deploying, or performing, without talking about failure. All the processes in the world can’t prevent human mistakes; they happen to all of us. But if there’s anything I’ve learned in music, it’s that The Show Must Go On!

The process I’ve described is pretty granular, and some development work isn’t so complex that it needs to be broken down this far. But when I’m facing a big project, not sure where to start, or feeling overwhelmed, it’s a great reference for me. I hope it is for you too.

Happy holidays! If you need a soundtrack to get you in the spirit, I have a recommendation 😉

Progress is Never Permanent

“Progress is never permanent, will always be threatened, must be redoubled, restated and reimagined if it is to survive.”

Zadie Smith, Feel Free: Essays

As anyone who has worked on a software project of any size or complexity can tell you, things just have a tendency to… decay. The more people work on it, the more technical debt loaded onto it, the slower it gets and the higher the rate of what I refer to as butterfly bugs – you make a small change in one place, and all hell breaks loose elsewhere.

So we add tests – unit, integration, functional, behavioral. And coding standards. And static analysis. And release management. And documentation. And continuous integration. And still the apps keep breaking; the sheer complexity of what we build means it’s almost entirely inescapable.

Then we come to upgrade to a newer version of the framework, language or runtime, perhaps for security patches, long-term support, or just cool new features. And suddenly the way we were doing things is no longer supported, or no longer best practice, or no longer scales. And we have to refactor previous work, without breaking the rest of the application. But it breaks anyway. And we fix it.

This is progress – messy and a lot of work, sometimes moving forward; sometimes just to stand still. Entropy comes for us all, and everything we do.

A couple of months ago, I left my job of almost six years for ethical reasons – the company had been acquired, and our once inclusive and welcoming culture was massively undermined in the name of “efficiency” and the thinly-veiled application of right wing, capitalist ideology. The breaking point for me was the effective disbanding (through defunding) of the Employee Resource Groups – officially sponsored organizations intended to support those colleagues from minority and disadvantaged backgrounds.

Prior to the acquisition – in fact, on June 22nd 2022 – I was perfectly happy, in the best job I’d ever had, with the best colleagues and management of my career so far. I felt valued, and felt that those around me with less privilege, tenure or experience were treated like equals. While I wasn’t unaware of issues with the company, I guess I had slipped into complacency, and felt very comfortable. And then the wrecking ball came, in the form of a right wing CEO and ruthless COO. Now most of my colleagues in engineering and beyond have scattered to the winds.

I handed in my notice at the end of August, with a job to go to; the day after I was informed that that position had fallen through. I was, of course, quite hurt – but ultimately decided now was the time to strike out on my own, and incorporated my own company. One that I’ve pledged to run ethically in all areas, to look after employees (should I grow enough to have any!) and society as a whole, where I can. Two months down the line, it’s definitely a struggle, but I have faith that I can do things – and do things right.

And as I mourn the loss of what once was – as well as a lot of the political upheaval in the wider world – I came across the quote at the top of this post, and it resonated with me.

Just as we need to maintain our code, we need to maintain our relationships and organizations. When we spot people hurting we need to step in. When we spot things fraying or creaking under load, we need to tend to them.

We need to avoid complacency, and step out of our comfort zones. When we spot ways to improve things, either for ourselves or others, we need to not only make those changes, but find ways in which we can keep an eye on them, always questioning whether they’re the right solutions, and asking “What next?”

Like painting the Forth Bridge, our job is never done. Otherwise, our work – and ourselves – will rust away and slip beneath the waves.

Creating music with PHP

As Wikipedia tells us, PHP is a general-purpose scripting language geared toward web development.

And that’s true.

For example, WordPress – the CMS that powers 40% of the web – runs on PHP and JavaScript.

But there is more to it.

PHP is a handy programming language suitable for prototyping ideas quickly.

As I’ve always been interested in playing musical instruments and recording music, I’ve decided to investigate what it takes to create audio files using PHP.

This blog post will show you how to make PHP create some actual music. I’ve decided to write the code from scratch to make the process more fun and educational.

I’ve chosen WAVE audio file format because it’s relatively easy to understand. Usually, it contains uncompressed audio, so we won’t have to deal with compression algorithms.

The code

Prerequisite: please use recent PHP versions to run the script. It has been tested to work on PHP 8.0 and above.

The script can be downloaded from this Gist.

It consists of 5 PHP classes:

  • WaveFile – responsible for creating .wav files and writing the data to disk;
  • Melody – responsible for storing successions of musical tones (a.k.a. notes);
  • Note – responsible for storing the pitch and duration of a note;
  • SineWaveSynth – responsible for generating sounds we should be hearing, e.g., sine waves;
  • Track – this class manages all the other classes and exports melodies to audio tracks.

I will clarify the most obscure parts of the code.


A regular .WAV RIFF audio file consists of a header and sample data.

The header of a .WAV file has two sub-chunks: fmt and data.

If you want to learn more about the header, please check out this article.

$this->write_long( $this->sample_rate * static::BITS_PER_SAMPLE * static::CHANNELS / 8 );

We must consider all the audio channels (if there is more than one channel).

Hence we multiply the value by static::CHANNELS.

Currently WaveFile supports mono audio only. This is because we don’t need to use stereo audio for the purpose of this blog post.

$this->write_short( static::BITS_PER_SAMPLE * static::CHANNELS / 8 );

This sets the bytes per sample value.

WaveFile::write_long(), WaveFile::write_short()

fwrite( $this->file, pack( 'v', $value ), 2 );
fwrite( $this->file, pack( 'V', $value ), 4 );    

The pack() function is used to convert integer and string values to binary data.

This fantastic tutorial explains it well (although it is written for Perl’s implementation of pack()).
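As a quick illustration of what those format codes produce (a standalone sketch, not the article’s exact helpers): pack('v') emits an unsigned 16-bit little-endian integer and pack('V') an unsigned 32-bit one, which is the byte order the RIFF/WAVE header requires.

```php
<?php
// pack('v') => unsigned 16-bit little-endian; pack('V') => unsigned 32-bit little-endian.
// These are the byte orders the RIFF/WAVE header fields expect.
$short = pack( 'v', 16 );    // e.g. a 16-bit header field such as bits-per-sample
$long  = pack( 'V', 44100 ); // e.g. the 32-bit sample rate field

echo bin2hex( $short ), "\n"; // 1000     (0x0010 stored low byte first)
echo bin2hex( $long ), "\n";  // 44ac0000 (0x0000AC44 stored low byte first)
```

Dumping the bytes with bin2hex() like this is a handy way to sanity-check each header field against a hex view of a known-good .wav file.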


$this->buffer .= pack( 's', $value );

Calling fwrite() several thousand times should be avoided, as the per-call overhead adds up.

In my tests, implementing a custom write buffer significantly increased performance, which suggests that PHP’s internal buffering isn’t very effective here.
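A minimal sketch of such a buffer (an assumption about WaveFile’s internals, not the author’s exact code): samples are appended to a string and flushed to the stream in large chunks instead of one fwrite() per sample.

```php
<?php
// Hypothetical buffered writer: accumulates packed samples in a string and
// flushes them to the underlying stream in large chunks.
class BufferedWriter {
    private string $buffer = '';

    public function __construct( private $file, private int $flush_at = 8192 ) {}

    public function write_sample( int $value ): void {
        $this->buffer .= pack( 's', $value ); // signed 16-bit, as in the snippet above
        if ( strlen( $this->buffer ) >= $this->flush_at ) {
            $this->flush_buffer();
        }
    }

    public function flush_buffer(): void {
        if ( $this->buffer !== '' ) {
            fwrite( $this->file, $this->buffer );
            $this->buffer = '';
        }
    }
}

$stream = fopen( 'php://memory', 'w+b' );
$writer = new BufferedWriter( $stream );

for ( $i = 0; $i < 1000; $i++ ) {
    $writer->write_sample( $i % 100 );
}
$writer->flush_buffer(); // write whatever is left

rewind( $stream );
$bytes = stream_get_contents( $stream ); // 1000 samples * 2 bytes each
```

The flush threshold of 8192 bytes is arbitrary; the point is simply that one large fwrite() beats thousands of tiny ones.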


$this->flush_buffer( true );
fclose( $this->file );

This one is interesting.

The header must contain information about the file’s overall size and the sample data’s size.

As this can only be calculated when the file is finalized, we call the WaveFile::update_sizes() method in WaveFile’s destructor.


return sin( 2 * M_PI * $frequency * $current_time );

The frequency is multiplied by 2 * π to convert it to angular frequency, so that a tone of 1 Hz completes exactly one full period each second.
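Putting that formula in context (a minimal sketch, not the exact SineWaveSynth code): dividing the sample index by the sample rate gives the current time in seconds, and feeding that into the sine gives one normalized sample at a time.

```php
<?php
// Generate one second of normalized samples (-1.0 .. 1.0) for a single tone.
// $frequency is in Hz; $current_time is in seconds.
function sine_sample( float $frequency, float $current_time ): float {
    return sin( 2 * M_PI * $frequency * $current_time );
}

$sample_rate = 44100;
$frequency   = 441.0; // chosen so one period spans exactly 100 samples
$samples     = [];

for ( $i = 0; $i < $sample_rate; $i++ ) {
    $samples[] = sine_sample( $frequency, $i / $sample_rate );
}

// $samples[0] is 0.0 (start of the wave); $samples[25] is the first peak,
// a quarter of the way through the 100-sample period.
```

The 441 Hz frequency is picked purely for the round numbers; any audible frequency works the same way.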


class Melody implements \Iterator {

The Melody class implements the \Iterator interface.

This way, we can get individual notes by simply iterating over the $melody object, i.e.:

foreach ( $melody as $note ) {
    // do stuff
}

Isn’t that elegant?
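For reference, here is a minimal sketch of how a class like Melody might satisfy \Iterator (hypothetical — the real class stores Note objects; plain note-name strings stand in for them here):

```php
<?php
// Minimal \Iterator implementation: PHP calls rewind()/valid()/current()/next()
// behind the scenes when the object is used in a foreach loop.
class Melody implements \Iterator {
    private int $position = 0;

    public function __construct( private array $notes ) {}

    public function current(): mixed { return $this->notes[ $this->position ]; }
    public function key(): mixed     { return $this->position; }
    public function next(): void     { $this->position++; }
    public function rewind(): void   { $this->position = 0; }
    public function valid(): bool    { return isset( $this->notes[ $this->position ] ); }
}

$melody = new Melody( [ 'C4', 'E4', 'G4' ] );
$heard  = [];

foreach ( $melody as $note ) { // works because of the \Iterator implementation
    $heard[] = $note;
}
```

The five required methods are all PHP needs to drive the foreach loop, which is what keeps the calling code so clean.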

Also, the Track class constructor depends on the VirtualInstrument interface instead of a particular implementation of that interface, e.g., SineWaveSynth.

This way, Track can easily be used with other virtual instruments or synths which implement the VirtualInstrument interface.

Of course, you are welcome to experiment and implement your own synths/tone generators.


$sample_value = $normalized_sample_value * $max_sample_value * 0.9;

All the sample values should be multiplied by 0.9 (or another float less than 1) to not clip the signal.
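A small sketch of that scaling (hedged — the function name and default are mine, not the script’s): a normalized sample in the range -1.0 to 1.0 is mapped into the signed 16-bit range with 10% headroom.

```php
<?php
// Scale a normalized sample (-1.0 .. 1.0) into the signed 16-bit range,
// keeping 10% headroom so the signal never clips.
function to_sample( float $normalized, int $max_sample_value = 32767 ): int {
    return (int) round( $normalized * $max_sample_value * 0.9 );
}

$peak    = to_sample( 1.0 ); // 29490 — safely below the 32767 ceiling
$silence = to_sample( 0.0 ); // 0
```

Without the 0.9 factor, rounding or summing multiple voices could push values past 32767, which wraps around and produces harsh distortion.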

The finale

It’s time to run the script.

Download the script (if you haven’t already done so), and execute it:

php melody.php

As a result, melody.wav should be generated in the same folder as your script.

So now you can play the file and “enjoy” the melody :).

Share your stories!

This year I was lucky enough to be a speaker at the SymfonyCon 2022 conference in Disneyland Paris. I was honored to be invited to this very special edition and made a brand new talk for this conference: 7 Lessons You Can Learn From Disney Movies. Yes, it was completely themed to the location of the conference.

During that talk, I shared several important lessons, such as the need to set goals, the fact that you are not alone and there’s always someone who can help you, and that you sometimes need to take risks to take the next step. But there’s one thing I mentioned that really seemed to resonate, as I got feedback from several people about it afterward.

The different perspective

The thing that really seemed to resonate was that everyone needs to see things from a different perspective on a regular basis. One of the examples I mentioned was that senior developers need the input of less senior developers, because as a senior developer you can get so stuck in your own perspective that a fresh one really helps. Of course, you have a lot of experience, and you’ve been able to enhance your skills for a long time, but that doesn’t make you right all the time. So, senior developers, take your medior and junior colleagues more seriously and listen to their arguments with an open mind. They’re not always right either, but their input can help you improve your code.

Ways to share your story

Of course, the next question then is: How can I, as a junior or medior developer, share my story? Well, there are a lot of different ways to do this. So let’s list some:

  • Speak up at work
  • Start a blog
  • Visit a user group
  • Speak at a user group

The list is a lot longer, but I want to focus on these four examples.

Speak up at work

I know this can be hard. I know senior developers can sometimes feel overwhelming, or even intimidating, because of all the experiences they’ve had and all the skills they’ve built over the years. But really, your voice matters as well. As I mentioned earlier, seniors sometimes get so used to doing things a certain way that they don’t realize there may be other ways of solving a problem. Your input can help get them unstuck. So next time you’re listening to a senior developer blabbing about how we need to fix this problem in this way because it worked the last 10 times, while you know there is another way to do things, speak up. Tell them about your idea.

Start a blog

Now, this is probably one of the easiest ways of sharing your stories. These days it’s really easy to start a new blog. Or, if you want to keep control of your own data, installing WordPress, Bolt, or any other blogging software on a VPS or shared host is not that hard either. After you’ve set up your blog, it’s a matter of writing. Write about stuff you encounter at work, about things you’ve found in a hobby project, about the stuff that interests you but that you haven’t really been able to work with yet. It doesn’t matter that you’re not an expert on the topic. It is especially valuable when people who are new to something share their experience.

Visit a user group

If there is a user group in your area, schedule a visit to the user group. Just being there, being able to ask questions after the speaker is done, and being able to talk to other visitors, that is already valuable. You’ll learn from it, and you’re able to share your experiences when speaking to other visitors. That might create connections that last a lifetime, or help you advance in your career.

Speak at a user group

If you have a topic that you’re passionate about, that you’ve had some experience with or that you really like and want to share with others, why not speak about it? Yes, this is a bit more work as you have to prepare the talk, but sometimes user groups have lightning talks (where your presentation only has to be 10-15 minutes). Or maybe you can already do a 30-45 minute talk about a subject. As I said when talking about starting a blog, you don’t have to be an expert on the topic you’re speaking about. Share YOUR story with this subject. Because your story will be different from the stories of others, people will learn from you. It might make them think. And when you’ve triggered that, you’ve won the speaking game.

Please, share your story

Your story is valuable. Your story is your story. It is different from the stories of other developers. A different use case, a different solution, a different perspective. Your perspective matters. So, next year, sign up to write for 24 Days In December, and share your story. I’m looking forward to reading it.

Why I built Suphle, an opinionated PHP framework, in 2022

Among all the new PHP projects you’d expect to see in 2022/23, I doubt another PHP framework is one of them. In fact, you probably already have your favorite one serving all your needs.

So I’m not here to tell you why you should use my framework. Rather, this is the story of what inspired me to build it. At the same time, I’ll share some reasons I think it’s worth your time.


Working with many different PHP frameworks, I witnessed first hand the amount of damage and technical debt that can accumulate in the absence of certain early decisions in the application architecture.

I’ve listed these problems below, but the TL;DR is that without experience in a domain, developing with a framework can get complicated and ugly.

Most of them were present in the initial blueprint. A few crept in for convenience over the course of development. Here goes:

  • Internal feature release and archiving features
  • Breaking unchanged parts of the code by modifying dependencies, because there was no clear-cut dependency chain and certainly no integration tests
  • Requiring a full stack developer to work on our UIs (before they were ported to SPAs)
  • Entire pages crashing because of an error affecting an insignificant page segment/data node
  • Waiting for negative customer feedback before knowing something went wrong, then wrangling error logs
  • Sacrificing man hours after giving up on SSR. A front end dev was hired. The back end had to be duplicated into an API with slightly diverging functionality.
  • Chasing and duplicating state and errors between the SPA and back end, for the sole purpose of a SPA-ey feel/fidelity
  • Cluelessness when our callback URLs broke in transit
  • API documentation, testing, breaking clients thanks to indiscriminate updates since there was no versioning
  • Irresponsible practices such as requests without validators, fetching all database columns, improper or no model authorisation, dumping whatever we had nowhere else to put in middlewares, gigantic images ending up at the server, models without factories, stray entities floating about when their owner model gets deleted, not guarding negative integers from sneaking in from user input, I could go on
  • Corrupted data when operations that should have been inserted together were separated by intervening logic and broke partway through
  • Gnarly merge conflicts among just a handful of contributors

Some of them may be age-old problems you don’t consider big deals anymore. Suphle was built to solve them by making significant changes to traditional application architecture, with the hope of helping those who encounter some of these issues, preventing those you’re unaware of, and adding a few cherries on top.

How it’s similar

In a broader sense, if you’re coming from a framework written in another language, some features there parallel what is obtainable in Suphle:

  • NestJS: Modules, @transaction
  • Spring Boot: Circular dependencies, decorators, interface auto-wiring, component/service-specific classes, @transactional
  • Rust: Macros, Result
  • Phoenix: LiveView

It may interest you to know that some of these parallels were only discovered after Suphle was mostly complete; it was not a premeditated attempt to build a chimera of widely acclaimed functionality. That is why their implementation details differ in Suphle. For instance, Suphle’s modules are wired/built differently. The rest of the documentation goes into thorough detail about how that, as well as other implementations you’re used to, were improved upon.

Perhaps the most significant change new Suphle developers will find is in connecting the route patterns to a coordinator. Coordinators evolved from controllers, and I will explore three components briefly. Solutions to most of the problems listed above are already covered in their respective chapters of the documentation. Considering the documentation is yet to be officially published, you may have to clone the repo and host it in your local environment.


If you’re not familiar with software modules in development terms, they are folders containing all code required to sustain a domain in your application. Each module is expected to expose meta files enabling it to depend on, or be depended upon by, other modules. Modules can be created by hand, but it’s far more convenient to use the command,

php suphle modules:create template_source new_module_name --destination_path=some/path

They can be created as you progress through development and the need arises. They are aggregated at one central point from which the application is built. The default central point is the PublishedModules class. There is usually little to no benefit to changing this. A typical app composition would look like so:

namespace AllModules;

use Suphle\Modules\ModuleHandlerIdentifier;

use Suphle\Hydration\Container;

use AllModules\{ModuleOne\Meta\ModuleOneDescriptor, ModuleTwo\Meta\ModuleTwoDescriptor};

class PublishedModules extends ModuleHandlerIdentifier {

	protected function getModules ():array {

		return [
			new ModuleOneDescriptor(new Container),

			new ModuleTwoDescriptor(new Container)
		];
	}
}

When the application is served (whether using traditional index.php or built through the RoadRunner bridge), only modules connected here will partake in the routing ritual.

Modules can depend on each other using the sendExpatriates method. Suppose ModuleTwoDescriptor has a dependency on ModuleOneDescriptor, the association can be defined as follows:

namespace AllModules;

use Suphle\Modules\ModuleHandlerIdentifier;

use Suphle\Hydration\Container;

use AllModules\{ModuleOne\Meta\ModuleOneDescriptor, ModuleTwo\Meta\ModuleTwoDescriptor};

use ModuleInteractions\ModuleOne;

class PublishedModules extends ModuleHandlerIdentifier {

	protected function getModules ():array {

		$moduleOne = new ModuleOneDescriptor(new Container);

		return [
			$moduleOne,

			(new ModuleTwoDescriptor(new Container))->sendExpatriates([

				ModuleOne::class => $moduleOne
			])
		];
	}
}
You may have observed the introduction of the ModuleOne entity. It’s an interface that exists because modules aren’t driven by shells like ModuleOneDescriptor and ModuleTwoDescriptor. They are made of three crucial parts:

  • The descriptor or shell. This is the part connected to the aggregator or app entry point.
  • The module’s interface, typically consumed by any sibling module dependent on this module.
  • Module interface’s implementation. This can vary from application to application for the same module. The objective is for modules to be autonomous and isolated rather than tightly coupled to dependencies.

These parts are explored in greater detail on the Modules chapter of the documentation.
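The three parts above can be sketched in framework-agnostic terms (a hypothetical example — the class names and wiring method here are mine, not Suphle’s real API):

```php
<?php
// Hypothetical sketch of the three module parts: an interface, its
// implementation, and a descriptor/shell that wires the two together.

interface PaymentModule {            // the module's interface, consumed by siblings
    public function charge( int $amountInCents ): string;
}

class DefaultPaymentModule implements PaymentModule { // the implementation
    public function charge( int $amountInCents ): string {
        return "charged {$amountInCents}";
    }
}

class PaymentModuleDescriptor {      // the shell connected to the app entry point
    public function exportsImplementation(): PaymentModule {
        return new DefaultPaymentModule();
    }
}

// A sibling module depends on PaymentModule, never on DefaultPaymentModule,
// so the implementation can be swapped per application.
$descriptor = new PaymentModuleDescriptor();
$result = $descriptor->exportsImplementation()->charge( 500 );
```

The key design point is the middle layer: because consumers only ever see the interface, each application can bind a different implementation without touching dependent modules.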


Suphle routes are defined as class methods instead of wrangling one gigantic script calling static methods on a global Router object. Perhaps the biggest advantage of trie-based route handling is that it fails fast, i.e. it is easier to determine non-matching routes. Another benefit is that it allows us to encapsulate the entrails or possible embellishments to those routes.
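To illustrate the fast-failure property (a framework-agnostic sketch, not Suphle’s actual router): a trie walks the URL one segment at a time, so a non-matching path is rejected at the first unknown segment without scanning every registered route.

```php
<?php
// Framework-agnostic trie sketch: each URL segment is one level of a nested
// array; the '@handler' key marks a terminal (routable) node.
$routeTrie = [
    'first' => [
        'third' => [ '@handler' => 'thirdSegmentHandler' ],
    ],
    'segment' => [ '@handler' => 'plainSegment' ],
];

function match_route( array $trie, string $path ): ?string {
    $node = $trie;

    foreach ( explode( '/', trim( $path, '/' ) ) as $segment ) {
        if ( ! isset( $node[ $segment ] ) ) {
            return null; // fail fast: no other routes need inspecting
        }
        $node = $node[ $segment ];
    }
    return $node['@handler'] ?? null;
}

// match_route($routeTrie, '/first/third')   => 'thirdSegmentHandler'
// match_route($routeTrie, '/nope/anything') => null, rejected at segment one
```

Compare this with a flat route list, where a miss is only confirmed after testing the full path against every registered pattern.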

Basic routing

One notable novelty is that the eventual output type is defined within the method rather than in the controller/coordinator. Putting it all together, the average route collection could start out like this:

use Suphle\Routing\BaseCollection;

use Suphle\Response\Format\Json;

use AllModules\ModuleOne\Coordinators\BaseCoordinator;

class BrowserNoPrefix extends BaseCollection {

	public function _handlingClass ():string {

		return BaseCoordinator::class;
	}

	public function SEGMENT () {

		$this->_get(new Json("plainSegment"));
	}
}
With the definition above, requests to the “/segment” path will invoke BaseCoordinator::plainSegment and render its result as JSON. The status code is 200 unless an exception is thrown. The status code and other headers can be set either here or on the exception, as necessary. The _handlingClass method is a reserved method for binding one coordinator to all pattern methods on this class.

Nested collections

We can link to other collections using the _prefixFor method.

class ActualEntry extends BaseCollection {

	public function FIRST () {

		$this->_prefixFor(ThirdSegmentCollection::class);
	}
}

class ThirdSegmentCollection extends BaseCollection {

	public function _handlingClass ():string {

		return NestedCoordinator::class;
	}

	public function THIRD () {

		$this->_get(new Json("thirdSegmentHandler"));
	}
}

If we configure ActualEntry as this module’s entry route collection, the pattern for “first/third” will kick in. Removing the _prefixFor call would disconnect every collection from that point onward from the application. Sub-collections can control the eventual pattern evaluated using their _prefixCurrent method. When it’s not defined, the method name on the parent collection is simply used.

Sometimes, we may want to conditionally customize the prefix only if this collection is used as a sub in another. We can modify ThirdSegmentCollection like so:

class ThirdSegmentCollection extends BaseCollection {

	public function _prefixCurrent ():string {

		return empty($this->parentPrefix) ? "": "INNER";
	}

	public function _handlingClass ():string {

		return NestedCoordinator::class;
	}

	public function THIRD () {

		$this->_get(new Json("thirdSegmentHandler"));
	}
}

This will compose the same pattern as the previous one. However, when used as a sub-collection, the available pattern becomes “inner/third”. The parentPrefix property grants us access to the parent collection’s method name, if any.

_prefixFor is a reserved method, just like _handlingClass. There are other reserved methods for authorization, authentication, middleware application, etc. As you’d expect, parent behavior propagates, or fans out, to any sub-collections nested beneath it.

Route placeholders

So far, we’ve only seen methods directly mapping to URL segments. Every now and then, dynamic paths have to be represented by placeholders. Since collection methods are legitimate PHP methods, only permitted characters can be used. We differentiate between segment literals and placeholders using casing. For instance, the following definition would allow us to intercept calls to “/segment/id”.

class BrowserNoPrefix extends BaseCollection {

	public function _prefixCurrent ():string {

		return "SEGMENT";
	}

	public function _handlingClass ():string {

		return BaseCoordinator::class;
	}

	public function id () {

		$this->_get(new Json("plainSegment"));
	}
}

For a simplistic scenario like the one above, the method can simply be named SEGMENT_id. But we’d be getting ahead of ourselves. There are other, more advanced sub-sections of method naming, such as hyphenation, slashes, and underscores.
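To illustrate the flattened form, the earlier collection could be collapsed into a single method; a sketch, assuming the same coordinator and handler names as above:

```php
use Suphle\Routing\BaseCollection;

use Suphle\Response\Format\Json;

use AllModules\ModuleOne\Coordinators\BaseCoordinator;

class BrowserNoPrefix extends BaseCollection {

	public function _handlingClass ():string {

		return BaseCoordinator::class;
	}

	// Uppercase literal + lowercase placeholder: matches "/segment/{id}"
	// without needing a _prefixCurrent override.
	public function SEGMENT_id () {

		$this->_get(new Json("plainSegment"));
	}
}
```

This keeps the literal and the placeholder in one place when the collection has no other reason to exist.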

Route collections do other things, ranging from dedicated browser/API-based CRUD routes to canary routing, route versioning, and route mirroring.

Failure-resistant execution

During development, some operational failures are easy to anticipate. These are usually covered by either manual or automated tests. However, there are mysterious, involuntary scenarios where the application fails without the project maintainer’s knowledge. Such an incident has many terrible fallouts, some of which are:

  • The user sees a 500 page and is either turned off or clueless about the next step.
  • The developer either has to sift through bulky logs or, worse, is blissfully unaware of the crash until a user reports it. By that point, the business may have lost a lot of money.
  • The upheaval may have been caused by one insignificant query in a payload where all others succeeded.

A better experience for all parties involved would be sandboxed wrappers within which calls can be made. Should any of them fail, instead of a full-blown crash, the request will carry on as if nothing happened, while the developer is instantly alerted about the emergency.

Suphle provides a base decorator, Suphle\Contracts\Services\Decorators\ServiceErrorCatcher, that, when applied to service classes, will hijack any errors and perform the steps suggested above.

use Exception;

use Suphle\Contracts\Services\Decorators\ServiceErrorCatcher;

use Suphle\Services\{UpdatelessService, Structures\BaseErrorCatcherService};

class DatalessErrorThrower extends UpdatelessService implements ServiceErrorCatcher {

	use BaseErrorCatcherService;

	public function failureState (string $method) {

		if (in_array($method, [ "deliberateError", "deliberateException"]))

			return "Alternate value";
	}

	public function deliberateError ():string {

		trigger_error("Something went wrong"); // any statement that errors out will do here
	}

	public function deliberateException ():string {

		throw new Exception;
	}
}

The caller can check whether the immediate past operation failed using the matchesErrorMethod method. In practice, that would look like this:

$response = compact("service1Result");

$service2Result = $this->throwableService->getValue();

if ($this->throwableService->matchesErrorMethod("getValue"))

	$service2Result = $this->otherSource->alternateValue(); // perform some valid action

$response["service2Result"] = $service2Result;

return $response;

But you’re unlikely to use this base decorator directly, because it’s extended by higher-level ones that take care of concerns such as transactions, various kinds of row locking, change tracking, and enforcing data integrity, among others. These and more are covered in the Service-coordinators chapter.

The future

Where to next from here? I use the following metrics to gauge the progress of both my own and any greenfield project, and suspect you do, too:

  1. Roadmap completion.
  2. Project stability.
  3. Project longevity and support.

The short-term priority is concluding the remaining chapters of the documentation and announcing an official release. This shouldn’t be mistaken for project instability, as all API and behavior is already cast in stone. There are items on the roadmap not checked off yet; most important to me are integrating a parallel test runner and implementing request scopes to circumvent container clean-up when the application runs in a long-running process. These are low-level additives that will definitely accompany the first release.

There are a few others I wish were ready, most notably auto API documentation. Nevertheless, I wouldn’t dare advocate this first version for production use if I weren’t confident it’ll cover the majority of enterprise requirements. The tests for the currently extensive feature set are all passing.

After this phase passes, Suphle will grow more audible to those who hang around dev spaces often — to both rally collaborative engagement and enlighten team leads/potential employers of a framework more suitable to work with. I’m not sure how long this will take, but I’m confident it’ll be worth the wait.

A detail worth mentioning is that Suphle has a Bridge component that allows you to mount projects started in any other PHP framework, as long as an adapter for it exists. So far, only a Laravel adapter has been written. But this ability to leverage years of development and support means that not only will Laravel-style routers work, so will service providers and everything else. These applications will be secluded to a configured folder, where we expect the relevant patterns (standalone functions, facades, etc.) to eventually be left behind.

If you’re reading this, and Suphle looks interesting to you, there are various ways to get involved.

Bugs can be filed on GitHub. I’ve started discussions for those interested in disputing or improving some present features. I would like to assist with questions relating to direct usage on a Gitter channel, but will refrain from setting one up since answers there get lost in time and are difficult to reference, both manually and in search engines. If you plan to post questions on StackOverflow, and have >= 1500 rep, please create a “suphle” tag and let me know, so I can follow it as well as highlight it in publicity documents. You can also link to my profile in your StackOverflow questions, so I can get notified and respond:

Awesome question

// some code

I sincerely hope Suphle is not only of immense benefit to you but that you enjoy using it. If you eventually do, consider leaving a star on the repo, telling your pals about it, and possibly watching for future updates.

Security doesn’t have to be boring

When it comes to building your own apps, setting development priorities, spending budgets, and generally getting stuff done, security is usually considered to be boring and unnecessary. This is especially true when you’re working with a modern PHP framework like Laravel or CodeIgniter. Security is included out of the box so you don’t need to think about it, right?

This is the mentality that results in vulnerabilities being introduced because the developer overlooked something. I’ve seen it many times when doing security audits: authentication missed on an admin route, or signed URLs not actually checking for a valid signature. When it comes to working with modern “secure” frameworks, it’s the little things that get missed because no one takes the time to think about security.
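The signed-URL case is a good example of how small the omission can be. A hypothetical Laravel sketch (route path and handler invented for illustration): the link is generated with a signature, but only the second variant actually verifies it.

```php
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// Vulnerable: the link was generated with URL::signedRoute() elsewhere,
// but nothing here ever checks the signature, so any guessed URL works.
Route::get('/unsubscribe/{user}', function (Request $request, string $user) {
    // ... unsubscribe $user ...
});

// Fixed: Laravel's built-in 'signed' middleware rejects tampered or expired URLs.
Route::get('/unsubscribe/{user}', function (Request $request, string $user) {
    // ... unsubscribe $user ...
})->middleware('signed');
```

Both routes render identical pages to a legitimate visitor, which is exactly why the missing check survives review.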

As a friend of mine once put it: “I find unless a company has had a security scare they don’t consider it a high level risk or priority.” The development time is put into features and shiny things, with no time dedicated to security because “we use a secure framework”.

I’ve been a PHP developer for 20 years, and in that time I’ve seen trends come and go. One that always sticks with me is testing. Testing used to suck. No one wanted to do it. We’d write rubbish tests to make the pretty Jenkins dashboard flash green and move on. Code coverage was about hitting some magic percentage, not actually testing the use cases. Add in the fact that everything was supposed to be fully mocked and tested in isolation, and testing was slow, painful, and everyone hated it.

However, at some point that changed…

Testing stopped being the boring thing everyone avoided doing and felt bad for not focusing on, and has become somewhat interesting and innovative, and maybe even fun? There are modern testing tools that make testing easier, frameworks now include testing helpers, and we’re talking about integration tests on entire routes with full databases and sandbox APIs, rather than painfully breaking everything up. The change is significant and dramatic and if you’d told me about it when I first struggled with testing, I probably wouldn’t have believed you!

So my question to you is: How do we make Security fun?

Testing was made fun with shiny tools, framework features, and simpler methodologies, but can we do the same for Security? I don’t think we can simply automate the process like we did for testing. We now have a number of code quality and static analysis tools available in the PHP world, which are doing their part in raising the security topic, but they aren’t good enough. Plus I suspect they actually give a false sense of security. Static analysis tools look at code conventions like type hints and unused variables, but it’s hard to detect a vulnerability in code where authentication is completely missing on one specific route!
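A hypothetical Laravel example of that blind spot (controller names invented for illustration): both routes below are perfectly typed and lint-clean, yet one of them is a hole no static analysis tool will flag.

```php
use Illuminate\Support\Facades\Route;

// Protected: only authenticated users reach the controller.
Route::middleware('auth')->get('/admin/users', [UserAdminController::class, 'index']);

// Same conventions, same controller namespace, but the 'auth' middleware
// was forgotten. Type checkers and linters see nothing wrong here,
// because "missing" code produces no symbol for them to inspect.
Route::get('/admin/reports', [ReportAdminController::class, 'index']);
```

Spotting the second route requires knowing the *intent*, which is precisely what audits supply and tooling cannot.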

The big shift for testing came not just because of the shiny tools, but because developers talked about how to make testing easier and more enjoyable. Courses were developed and articles written. This is what I believe we need to do with security in the PHP community. That’s why I started my mailing list, Laravel Security in Depth, in September last year, and why I’ve started working on a course, Practical Laravel Security.

My goal is to teach developers about security within the PHP ecosystem, in a way that is fun and engaging. I want to give them practical steps and talk about common issues I find when doing my security audits and penetration tests. Ultimately, I want to teach PHP developers why I find security interesting. To make it something developers talk about, something they prioritise, something that sits alongside testing in the realms of non-feature priorities that developers care about when writing their apps. I want developers to write secure apps and not overlook the simple things.

And most importantly, I want security to stop being boring.

An Ode to PHP

I have been working with PHP for many years, mostly as a side language.

Once upon a time, on an island far, far away, I tried to get a SaaS business going based on a PHP stack, but alas, it suffered from a lack of customers and we eventually bowed out.

It did not diminish my enjoyment of working with PHP, and several years ago, when a new opportunity presented itself, I again chose PHP for the stack because:

  1. I was familiar with it
  2. I was confident of hiring people
  3. It is easy to stand up a server
  4. It is easy to test and deploy

In order to add to the joy of ’24 Days in December’, I decided upon an idea, a little out of the ordinary, that popped into my head: an Ode to PHP.

This was mostly, as I said, because I was thinking of Vogon poetry. I trust that this effort does not rate on the Universe’s Worst Poetry league table.

Ode to PHP

Two thirds of the world
Two in three
Have come to rely

WordPress or Symfony
Or Laravel
These are the tools
That work so well

People all day
working so hard
Creating code so eloquent
It rivals "The Bard"

Code is poetry
Someone did say
If then, foreach
While 200 ok

So little by little
We refine and release
Git clone, add, commit
Git push and rebase

Patterns and algorithms
Classes, reflection
We are taking this code
In a brand new direction

So at the end of the year
We say "Hip, Hip Hooray!"
8.2 is released
8.3 's on the way

Finally, big thanks to all the companies that support the PHP Foundation, all the people who contribute PRs, the release managers, testers, technical writers, and everyone else contributing to the wonderful community that is PHP.

Happy holidays and a Happy New Year.

The PHP 8.2 Release Managers

PHP 8.2 will be released on December 8, and many articles have already been written about the new features in this version.

So let’s talk about the people involved in releasing PHP — the release managers.

Who are the release managers?

I joined the release manager team in May of this year. A lot of my developer acquaintances didn’t know that PHP had such a “position”, or what release managers do.

Perhaps some readers haven’t thought about this side of PHP either, so I want to open that door a bit.

Who are they?

Release managers are chosen for each PHP version.

Since PHP 8.1, three members are chosen: one “veteran” who has already been a release manager for some version in the past, and two rookies.

Previously, the rule was that one person could not be a release manager for two versions at the same time. That rule is no longer in effect; rather, it’s now good practice for one of the rookie release managers from the current release to serve as the veteran for the next one.

Ben Ramsey, a PHP 8.1 “rookie”, is now a PHP 8.2 “veteran”.

What do they do?

So what do release managers do?

First and foremost, release managers triage bugs and pull requests on GitHub and categorize them.

Since PHP gained a team of developers sponsored by The PHP Foundation, this task has become easier: the core developers take on many of the tasks themselves. They have more expertise in this, and we listen to their opinions.

Of course, the release managers also release PHP itself (I think you guessed that 😅).

Before a GA version is released, it is preceded by a 6-month pre-release phase: every two weeks, a test version of PHP is released, from alpha1 to RC6 (and, in the case of PHP 8.2, RC7).

This branch is then supported for another 3 years: 2 years of active support and 1 year of security releases, after which the release manager retires.

What are they for?

Release managers cannot single-handedly affect a feature; all new features must be RFC’d and voted on.

If the core-developers and the community lead development, why do we need release managers?

Their job is to keep track of release dates, keep the PHP branch in good condition, and respond in case of abnormalities.

Sometimes there are controversial situations, and release managers act as a kind of court of arbitration: they have the last word, and great responsibility for the decision made.

How to become an RM?

Soon, the PHP team will begin soliciting applications for PHP 8.3.

All you have to do is put your hat in the ring.

The election is by single transferable vote (STV), and two rookies will be chosen to contribute to PHP for 3.5 years.


Thanks so much to Ben and Pierrick for their help and support.

Don’t be afraid to get voted in and improve our beloved programming language 💙 🐘

Evolving PHP

With 2022, I see PHP’s cost as becoming prohibitive. Here’s why.

PHP continued to evolve in 2022. That’s a good thing. PHP also scored an “own goal” near the end of 2022. This latter concern is not at all obvious. Here’s my view of the situation.

Double-digit bugs

Do you remember the Year 2000 problem? With 2022 PHP, we have some similarities–and differences.

One of the most difficult problems was moving from two-digit years (such as 87 for 1987) to four-digit years (1987). The problem looks trivial, right? But even the largest and fastest computers in the world ran on one-eighth of a megabyte of RAM for the operating system, I/O buffers, everything. It was not trivial to find that extra two bytes of storage! Making more space “here” meant something else lost space “there.” We took months and years with cascading re-designs.

Our measure of success was that as January 1, 2000, rolled across the planet from one time zone to the next… nothing happened.

The situation was a forced upgrade. We had no choice. When people wrote software according to current best practice in the 1960s, 1970s, and 1980s, none of us expected that the same code–literally, the same executable file–would still be running and producing revenue in the 2000s. The books 1984 and 2001: A Space Odyssey, in our minds, still described as distant a future as Star Trek.

“Decades from now” and “centuries from now” were both, in our minds, the same problem category. In 2015, NASA famously searched for a programmer fluent in 60-year-old languages because the Voyager 1 and Voyager 2 spacecraft just kept working. In 2020, IBM scrambled to find or train more COBOL programmers to help states because various U.S. states continued to rely on mainframes running COBOL to manage their unemployment systems. The problem was that the 2020 pandemic had overloaded many such unemployment systems.

When functionality must continue

We collectively had a large legacy code base to be sure, but that’s not the point of this analogy. For Y2K, we took years out of our lives for what was essentially a bugfix. There was no new functionality to be gained. The whole point was continuing functionality. Because the installed code base (i.e., nearly all software on the planet) was so large, businesses, government, and software vendors all had compelling interest in keeping that revenue-producing software running past January 1, 2000.

This investment was reluctant. Nobody wanted to invest time or salary in non-features. In my own experience, moving a running production codebase from PHP 5 to PHP 7 encountered similar reluctance. “Upgrade” time was a cost rather than a benefit. It carried the “opportunity cost” of time taken away from developing new features or reacting to business needs.

There was a similar situation with U.S. gas station pumps becoming EMV (“chipped” credit card) compliant. The banks refused to pay for upgrading point-of-sale equipment. Instead, the card issuers implemented a liability shift, assigning the problem to the gas pump owner.

As with Y2K software upgrades, the necessity was the continuing functionality. Customers expected to continue using credit cards for gasoline purchases, as had been possible since the 1920s.

PHP language changes

Nikita Popov proposed a discussion nearly three years ago (February 2020).

In recent years, there has been an increasing tension in the PHP community on how to handle backwards-incompatible language changes. The PHP programming language has evolved somewhat haphazardly and exhibits many behaviors that are considered undesirable from a contemporary position.

Fixing these issues benefits development by making behavior more consistent, more predictable and less bugprone. On the other hand, every backwards-incompatible change to the PHP language may require adjustments in hundreds of millions of lines of existing code. This delays the migration to new PHP versions.

The general solution to this problem is to allow different libraries and applications to keep up with language changes at their own pace, while remaining interoperable.

Comments on Popov’s proposal describe a possible 4-7 year window for people migrating to the next major release. That, given my experience with Y2K, I see as tight but feasible.


I then watched, in horror, the discussions of a year ago (November 2021), concerning plans for future PHP releases. Branko Matić, for example, implored:

Give us a break, at least for a year or two. Stop updating and “improving” everything. The developer work is now 50% of time updating and compatibility fixes. So much time is lost for that, globally.

Matić was talking about Juliette Reinders Folmer‘s thread concerning PHP 8.2 proposals.

Deprecations are not the problem

Brent, explaining deprecations, discloses the internal developers’ perspective:

Of course, one could ask: are these breaking changes and fancy features really necessary? Do we really need to change internal return types like IteratorAggregate::getIterator(): Traversable, do we really need to disallow dynamic properties?

In my opinion–and it’s shared by the majority of PHP internal developers–yes. We need to keep improving PHP, it needs to grow up further.
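To ground one of the changes Brent mentions: as of PHP 8.2, writing to an undeclared property is deprecated, and classes that genuinely rely on the old behavior must opt back in with an attribute.

```php
<?php

class LegacyBag {}

$bag = new LegacyBag();

// PHP 8.2 emits: "Deprecated: Creation of dynamic property LegacyBag::$total"
// The write still succeeds, for now.
$bag->total = 42;

// The one-line escape hatch for legacy code:
#[\AllowDynamicProperties]
class GrandfatheredBag {}

$other = new GrandfatheredBag();

$other->total = 42; // no deprecation notice
```

This is exactly the kind of change that costs nothing in a modern codebase with declared properties and a steady tax in a loosely written one.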

The problem is not the deprecations themselves. The problem is the shortened migration timeframe.

With Y2K, vendors and compiler writers were answerable to the installed customer base. Operating systems and database engines required Y2K fixes, as did the banks who were running most of the financial transactions around the planet.

PHP, on the other hand, is free and open source software. The PHP internal developers, an extremely talented group of people and mostly volunteers donating their time, are not generally answerable to PHP’s installed customer base.

Where is the “own goal”? It’s in the difference between that vision of 4-7 years for each migration path, and the reality of 1-2 years at most.

As Matić explains, the problem is not the pace of new features or even of deprecations. It’s the narrow window of time allowed for the forced upgrades.

The fundamental shift

Consider our own tiny little team, a probably-typical PHP shop with 3-4 people doing PHP. We’ve been developing PHP software full time for the past ten years or so.

As you can well imagine, once the code for a certain overnight process, or a certain report, works, there’s no need (or reason) to touch that code unless a business need changes. That code, developed once, continues to run in production. This situation is quite similar to the legacy code bases leading up to Y2K.

Our legacy code looks very PHP 4-ish, or at least very PHP 5.2-ish. That is, it was developed according to the way things were done back then.

Is there a difference? Yes, indeed! You will recall that, with the introduction of PHP 5.0, one of the “very big deals” was Object-Oriented Programming. The PHP Certification Exam even had Design Pattern questions.

I said at the time that, with PHP 5, in my view PHP had now become a “real” programming language. PHP, in my view, became a “mature” language with PHP 7. With PHP 8, PHP became… well… something else. What happened?

Up through PHP 7, we could keep our ten years’ worth of legacy code base and patch it up to continue running. It was not that difficult to change mysql() calls to mysqli() calls, for example. Big fat god object arrays still worked fine. “Loosey-goosey” sloppy coding continued to work. Duck typing worked.
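That mysql-to-mysqli change really was mechanical. A rough sketch (connection details hypothetical, and error handling omitted, as was the style at the time):

```php
<?php

// PHP 5.2 era (the mysql_* functions were removed entirely in PHP 7):
//
//   $result = mysql_query("SELECT name FROM users");
//   $row    = mysql_fetch_assoc($result);

// The patched-up equivalent: mostly the same calls, plus an explicit link.
$link   = mysqli_connect("localhost", "user", "pass", "mydb");
$result = mysqli_query($link, "SELECT name FROM users");
$row    = mysqli_fetch_assoc($result);
```

A find-and-replace plus threading one `$link` variable through was an afternoon's work, which is precisely why PHP 7 migrations felt feasible in a way PHP 8 migrations do not.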

When writing new code for new requirements or functionality, I strongly favor strong typing and automated tests. I can be obnoxious about it. I have a solid supply of “I told you so” and “this is why.”

But the thing is, our loosey-goosey code can’t make the jump to modern PHP. Most would argue that it shouldn’t make the trip; it needs a rewrite.

Wait a minute! Let’s think about this. In November 2022, Brian Jackson wrote Is PHP Dead? No! At Least Not According to PHP Usage Statistics:

According to W3Techs’ data, PHP is used by 78.9% of all websites with a known server-side programming language.

PHP, as you and I know (that’s why we’re here!), built the modern World Wide Web. That’s why half the planet still runs on PHP, whether it’s a dead language or not.

These days, with PHP dying off (allegedly), the question becomes, “what do you call a good PHP programmer?”

And, these days, the answer remains, “employed.”

During the 1970s, 1980s, and 1990s, the banking and business systems of the world ran on COBOL, and for crypto, FORTRAN. Old code continued to run. Vendors made darn sure this was true, so they still made money. For example, IBM’s System/360 was announced in 1964, yet:

Application-level compatibility (with some restrictions) for System/360 software is maintained to the present day with the System z mainframe servers.

IBM supported binary level compatibility for decades. Literally the same compiled modules ran for decades. IBM existed to make money.

But today, we run on free software. There’s no vendor. Those producing the free software–on their own time as volunteers in most cases–tell us, “you need to keep up.”

The result is that the software that built the internet won’t run anymore. The “old” PHP is no more. The “new” PHP not only encourages better-written code, it requires it. The loosey-goosey idioms no longer apply.
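One well-known example of an idiom that no longer applies: PHP 8 changed loose string-to-number comparison, so old code that leaned on it silently changes meaning.

```php
<?php

// PHP 7 and earlier: "foo" was cast to 0, so this was TRUE.
// PHP 8: the int is cast to string when compared against a
// non-numeric string, so this is now FALSE.
var_dump(0 == "foo");

// Numeric strings still compare numerically in both versions:
var_dump(0 == "0");    // true
var_dump("1" == "01"); // true
```

A duck-typed codebase full of `==` checks against request input can change behavior on upgrade without a single syntax error.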

What’s the barrier? Strangely enough, the situation is similar to Y2K. Y2K required re-thinks, re-designs, rewrites. Even with minor compatibility changes, our PHP 5.2-era codebase still uses PHP 5.2-era techniques. The code is loosey-goosey because that’s how it was done back then.

There was a low barrier to entry with PHP. One could put a site up in five minutes. We did just that! Live edits, in production, were easy and quick. So we did!

A computer scientist would have done things differently and more reliably. But it wasn’t computer scientists who built the World Wide Web with PHP. In fact, to the best of my knowledge, for 5-10 more years, no 4-year computer science program even taught PHP as a programming language. We learned PHP from blogs, stack overflow, and eventually boot camps.

That’s why it’s meaningless to try to remain compatible with PHP 5.2 “best practices.” None of us had any idea what “best practices” were!

If we choose to remain compatible–and we chose otherwise, but play along–we need to remain compatible with our real-world legacy code bases. But why? Is this reasonable? Yes… and no. Let’s look at my own employer.

Our team of 3-4 developers, and our predecessors, built that legacy code base over the past 10+ years. It was not written or updated by computer scientists. This situation is, in my experience, absolutely typical.

We’re forced into a rewrite, or something very like a rewrite, while at the same time remaining in production and producing new features to deal with rapid growth. It’s a deadly combination. Certainly we all have technical debt, and we all need to consider rewrites.

Sam Newman, author of Building Microservices, explains:

The need to change our systems to deal with scale isn’t a sign of failure. It is a sign of success.

Running the numbers

Let’s do the math. We have the software developed by 3-4 people over the course of 10+ years. Can we do necessary PHP codebase upgrades over a period of 1-2 years? Yes, we should be able to. Can we do a complete rewrite with modern coding practices, in 1-2 years, of what took ten-plus years to write in the first place? That does not sound so likely, does it?

Meanwhile, what do we do about upcoming business needs, new features, and so on, while 100% of our development time is already engaged in that rewrite? We tried to handle both sets of needs… it didn’t go well.

With a 1-2 year time limit, the numbers show it just can’t be done. Remember, it’s not the deprecations–it’s that PHP 8 is no longer PHP.

PHP 8 has, in my view, mandated that the way we design our PHP software must change for the better. How much time do we have to effect that change in our legacy code bases? On November 28, 2022, Official PHP achieved an “own goal”:

As of today, PHP 7.4, and with that PHP 7 is no longer supported.

The w3techs report, as of December 2, 2022, states:

PHP is used by 77.5% of all the websites whose server-side programming language we know.

Their breakdown by PHP version is:

  • PHP 4, 0.2%
  • PHP 5, 22.8%
  • PHP 7, 70.4%
  • PHP 8, 6.7%

I believe it’s a remarkable achievement that 70% of installations made it to PHP 7. Meanwhile, though, we close out 2022 with the knowledge that 72.4% of the world’s websites (93.4% of those whose server-side language is known) have been abandoned by Official PHP. This sounds rather like when the credit card issuers created a “liability shift,” abandoning their own guarantees, shifting fraud costs onto the merchant.

What’s a reasonable migration path? Martin Fowler describes the Strangler Fig Application approach:

An alternative route is to gradually create a new system around the edges of the old, letting it grow slowly over several years until the old system is strangled.

If we had the luxury of staying on a “long term support” version of PHP 5 or even PHP 7, then we could rewrite one feature at a time, over the course of months and years.

With the close of 2022, those days are done.

Discomfort is where we grow!

For the vast majority of my professional career, I’ve been working with mostly just PHP, and felt pretty confident with it. However, the longer I worked with just PHP, the more afraid I became of branching out into new things. I watched as the backend ecosystem started changing – it seemed like everyone was picking up Node or Go or Ruby – but what if I couldn’t learn those? What if my skills didn’t transfer? What if I wasn’t as good as I am at PHP – or even worse, not good enough to even get anything done?

Last fall, my team was tasked with building a new service, and a coworker suggested it was a great opportunity to create a serverless microservice using Node.js. Over the course of a few months I had to throw myself into learning the AWS ecosystem and functional programming, learn Serverless, and start writing JS – and I realized I’d been holding myself back for no reason.

That project started me down a path of learning TONS of new technologies – and after 20 years of writing software, I finally got over my reluctance to learn how any of the hardware side worked, which was rooted in the same fear of not being good enough. I bought a kit and started learning about microcontrollers and circuits. Powering an LED for the first time, and controlling which LED lights up with a simple toggle switch, was as rewarding as those first “Hello World” scripts. I jumped into Python and C++, built IoT devices for my home that send signals over WiFi and long-range radio, learned how to solder – and then taught my 8-year-old daughter how to!

I asked my daughter to come up with an idea for a project for us to build together, and with her (admittedly vague) product requirements, we started working on her “cube shaped light controlled by drawing on an iPad app”. I created a React app for the frontend, which not only needed to support touch events so it would work on the iPad, but even multi-touch for better interactions. The backend is functional JS running on a lambda, which communicates with a third-party service to push events to the microcontroller – which is running python to control a bunch of individually addressable LEDs. The final step was designing a case for the whole thing, which went through a bunch of iterations where we tried to use acrylic cut on the CNC machine, before I finally caved and bought a 3D printer.

If you had told me a year ago that by this time next year, I’d be comfortable with any ONE of these ideas: writing python, react – or any JavaScript – working in AWS, building physical circuits from scratch, designing 3D models – I’d have laughed and said you had the wrong person.

I wish I hadn’t held myself back for so long. Don’t be afraid to step outside of your comfort zone; the discomfort is where we grow. There is an entire world of communities just like our PHP community, out there waiting for more newbies.