
Serverless’ latest release breaking Babel polyfill?


Here, get some context

Hey everyone, let me tell you about Serverless’ latest release. You must be thinking “Three posts in ten days after three months of absence, what got into you JD?”. Nothing in particular; now that I go exercise in the morning and finish work around 5, I have tons of time to do stuff afterwards. I also keep running into things that feel blog-worthy. Today I experienced what can easily become a nightmare for developers: continuous integration breaking out of nowhere. This morning, as I was making the latest adjustments to a project set to move to production in a few days, the continuous integration broke after I merged my latest pull request. The pull request contained minor changes to a configuration file, but nothing that would be used at any point through CI.

Let me add some context about the stack here. The project relies on the Serverless framework, which allows building serverless solutions. By serverless solutions I mean stuff running on what you would call Function Apps on Azure or Lambdas on AWS. Pretty clever stuff that reduces the pain of setting up deployments and lets you focus on development.
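
For the unfamiliar, a Serverless project is essentially driven by a serverless.yml file describing the provider and the functions. A minimal sketch looks something like this (the service and handler names are made up for illustration, not taken from the actual project):

    service: my-service              # hypothetical service name
    provider:
      name: aws                      # could just as well target azure
      runtime: nodejs8.10
    functions:
      hello:
        handler: handler.hello       # maps to the hello export in handler.js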

Now for the continuous integration part. I use the serverless package command, which builds the package that the serverless deploy command would then push to whichever provider you want. That way, once the unit tests pass, I can validate that the deployment configuration is all set. This is the part of the CI that broke, with some random error about babel-polyfill requiring a single instance.
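
To make it concrete, the build boils down to something like this (a simplified sketch of the pipeline steps, not the exact definition):

    npm install                      # restore the dependencies
    npm test                         # run the unit tests
    npm install -g serverless        # install the framework, this is where pinning matters later
    serverless package               # build the deployment package to validate the configuration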

Columboing around

This is where my biggest head-scratcher in months started. Continuous integration breaks after changing a configuration file that is used at runtime by a serverless function but not at any point through the build steps. I could not reproduce the issue locally, no matter how hard or how often I tried. Believe me, I tried harder than a Gold V in League of Legends. I tried deleting the package-lock.json and node_modules then running serverless package locally, but still nothing. Still stuck. Then I checked the file changes between the last successful CI build and the first failed one again. Still nothing.

I thought that maybe it was just some timing error, like the one that happened to me a while ago on VSTS when there was that CloudFront issue with npm not retrieving packages. So I triggered the CI build manually, but still nothing. At that point I was grasping at straws. I even compared the builds to try and figure out what was different in their execution, but still nothing.

After a couple of hours I eventually went for lunch with my girlfriend and a friend of hers, still with that issue in mind. As they do when they are together, they were speaking Italian, which let me think through everything again, far from the screen and from the temptation to google things I knew would have no answer. Then I got what I thought was my aha moment. What if Serverless itself was the reason the CI broke? Unlikely, yet I had seen way worse in the past. After all, the continuous integration was set to use Serverless’ latest release. The idea kept growing in my mind while munching on that sweet chili chicken from Wasabi. It reached the point where I rushed through the end of lunch and went back to the office to check my theory.

Serverless’ latest release

The first thing I wanted to validate was updating my local setup to use Serverless’ latest release (1.28.0), which matched the last few failed builds. Yes, a few builds; as I was saying, I did try loads of stuff. After the update I ran the serverless package command and it still worked. There goes my aha moment. At that point it crossed my mind that my continuous integration system could have had a weird temporary issue, so I tried my luck one more time. However, this time I explicitly set my CI definition to install the previous version (1.27.3) of Serverless, just in case. I triggered yet another build and went to grab a coffee, as one does.
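
For reference, pinning the framework in the install step is just a matter of passing the version to npm, something like:

    npm install -g serverless@1.27.3   # instead of whatever latest resolves to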

You probably guessed it, I came back to another failed build. It was starting to get on my nerves a bit, so I decided to up the ante. I put my headset on with some gangsta rap to go all out on this issue. I eventually went through the logs, comparing every single command and its verbose output between the last successful build and the last failed one. I even went to check Serverless’ latest release page. I even checked out Serverless’ npm page. As “King’s Dead” from Kendrick started hitting my ears, I got an aha moment about my previous aha moment. Do you know what happened this morning? Serverless 1.28.0 got released. Right after that I noticed something else: the last failed build had still installed serverless 1.28.0 instead of 1.27.3.

[Image: Serverless' latest release]
Yup, exactly

As it turns out, I had changed the title of the task and not the actual command. That’s what happens when you make changes pre-coffee.
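
Translated into pipeline YAML terms, the mistake looked roughly like this (a reconstruction for illustration, not the actual build definition):

    steps:
      - script: npm install -g serverless        # still installs the latest release
        displayName: Install serverless 1.27.3   # only the label got the version, oops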

Roll credits

After correcting it in the continuous build for our development environment, I triggered the build again. It worked! I was way too happy to have figured it out to care that my previous fix attempt had been defeated by a dumb mistake. Afterwards, I updated all the build and release definitions using Serverless to explicitly target the 1.27.3 version. I guess that was a good reminder that when you set up continuous integration/delivery, you should pin the versions of everything you use in your build. You never know when the people behind a tool you rely on will ship a broken release.
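
If you want to go one step further, you can pin the framework as an exact devDependency and run it from the project itself, something along these lines (a suggestion, not what the project did at the time):

    npm install --save-dev --save-exact serverless@1.27.3   # writes "serverless": "1.27.3" to package.json

Then the build calls node_modules/.bin/serverless (or npx serverless), so the CI and every local machine resolve the exact same version.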
