

Serverless’ latest release breaking Babel polyfill?

Reading Time: 4 minutes

Here, get some context

Hey everyone, let me tell you about Serverless' latest release. You must be thinking "Three posts in ten days after three months of absence, what got into you JD?". Nothing in particular; now that I exercise in the morning and finish work around 5, I have tons of time to do stuff afterwards. I also keep running into things that feel blog-worthy. Today, I experienced what can easily become a nightmare for developers: broken continuous integration out of nowhere. This morning, as I was making the latest adjustments to a project set to move to production in a few days, the continuous integration broke after I merged my latest pull request. The pull request contained minor changes to a configuration file, but nothing that would be used at any point through CI.

Let me add some context about the stack here. The project relies on the Serverless framework, which allows building serverless solutions. By serverless solutions I mean stuff running on what you would call Function Apps on Azure or Lambdas on AWS. Pretty clever stuff that reduces the pain of setting up deployment and lets you focus on development.

Now the continuous integration part. I use the serverless package command, which builds the package that the serverless deploy command would push to whichever provider you want. That way, after the unit tests pass, I can validate that the configuration for deployment is all set. This is the part of the CI that broke, with some random error about babel-polyfill requiring a single instance.

Columboing around

This is where my biggest head-scratcher in months started. Continuous integration breaks after changing a configuration file used at runtime by a serverless function but not used at any point through the build steps. I could not reproduce the issue locally no matter how hard or how often I tried. Believe me, I tried harder than a Gold V in League of Legends. I tried deleting the package-lock.json and node_modules then running serverless package locally, but still nothing. Still stuck. Then I checked the file changes between the last successful CI build and the first failed one again. Still nothing.

I thought that maybe it was just some timing error, as happened to me a while ago on VSTS when there was that CloudFront issue with npm not retrieving packages. So I triggered the CI build manually, but still nothing. At that point I was grasping at straws. I even compared the builds to try and figure out what was different in their execution, but still nothing.

After a couple of hours I eventually went for lunch with my girlfriend and a friend of hers, still with that issue in mind. As they do when together, they were speaking Italian, which allowed me to think through everything again, far from the screen and the temptation to google things I knew would have no answer. Then I got what I thought was my aha moment. What if Serverless itself was the reason the CI broke? Unlikely, yet I had seen way worse in the past. After all, the continuous integration was set to use Serverless' latest release. The idea just kept growing in my mind while munching on that sweet chili chicken from Wasabi. It reached the point where I rushed the end of the lunch and went back to the office to check my theory.

Serverless’ latest release

The first thing I wanted to validate was updating my local machine to use Serverless' latest release (1.28.0), which matched the last few failed builds. Yes, few builds; as I was saying, I did try loads of stuff. After the update I ran the serverless package command and it still worked. There goes my aha moment. At that point it crossed my mind that my continuous integration system could have had a weird temporary issue, so I tried my luck one more time. However, this time I explicitly set my CI definition to install the previous version (1.27.3) of Serverless, just in case. I triggered yet another build and went to grab a coffee, as one does.

You probably guessed it: I came back to another failed build. It started to get on my nerves a bit, so I decided to up the ante. I put my headset on with some gangsta rap to go all out on this issue. I eventually went through the logs, comparing every single command and its verbose output between the last successful build and the last failed one. I even went to check Serverless' latest release page. I even went to check out Serverless' npm page. As "King's Dead" by Kendrick started hitting my ears, I got an aha moment from my previous aha moment. Do you know what happened this morning? Serverless 1.28.0 got released. Right after that I noticed something else. The last failed build still installed serverless 1.28.0 instead of 1.27.3.

Serverless' latest release

Yup, exactly

As it turns out, I had changed the title of the task and not the actual command. That's what happens when you make changes pre-coffee.

Roll credits

After correcting it in the continuous build for our development environment, I triggered the build again. It worked! I was way too happy to have figured it out to care that my previous fix attempt had been defeated by a dumb mistake. Afterwards, I updated all the build and release definitions using Serverless to explicitly target the 1.27.3 version. I guess that was a good refresher: when you set up continuous integration/delivery, you should pin the versions of everything you use in your build. You never know when the people behind a tool you rely on will ship a broken release.
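In practice, the fix boils down to something like this (a sketch; the actual CI definition was a VSTS task rather than a shell script):

```sh
# Before: always pulled whatever version was released that morning.
npm install -g serverless

# After: pin the version so a new release cannot break the build.
npm install -g serverless@1.27.3
serverless package   # validate the deployment configuration after unit tests
```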

NDepend’s static code analysis: Noob Review

Reading Time: 8 minutes

Today I am going to do something I have not done before. A couple of months ago I was contacted by NDepend to play around with their software. I did not check, but there is probably a fair amount of software reviews out there. Hence I will try a hopefully different approach: a noob approach. I'll read the promise made by the software under review and just dive into it without any sort of guidance. Let's call it Noob Review. Yep, that's how you create a series that might or might not live longer than a post.

NDepend’s promise

According to the website homepage, NDepend is the only Visual Studio extension able to tell us developers about the technical debt we create. This would then allow undoing the debt before it gets committed. The alleged debt is calculated based on a set of predefined rules written as LINQ queries. We can also add our own queries.
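For the curious, those rules are written in NDepend's CQLinq dialect; a rule looks roughly like this (a sketch from memory, not copied from the product):

```
// Flag methods that are getting too long.
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 30
select m
```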

Enough with introductions, let’s just get noobie!

Getting started

I downloaded the latest version, 2018.1.0, released on Wednesday; you can find the link to the latest version here. Upon download, NDepend presents itself as a ZIP archive containing some executables and a Visual Studio extension installer.

As you can see below, the installer offers to install the NDepend extension for Visual Studio versions all the way back to VS2010.

From there I just installed the extension using the licence key the NDepend team nicely offered me.

Let’s jump right into it

From now on I am going full improv. I will have no idea what I am doing, because that is what most people do when they get a new tool. That approach works when you know how to use a pencil and grab a pen for the first time. It might be a bit more entertaining if I do so with NDepend. Since it is a tool that should allow me to detect technical debt, I will write an OK piece of code and then some less OK code to see what happens.

The project

First things first, I created a console project running on .NET Core. I did not see anything trigger automatically. Being used to ReSharper, I checked the toolbar and saw an NDepend menu that was not there before.

Opening that menu, I saw that I could create an NDepend project, which I did.

Once the project was created, it seemed that I still could not analyze my code. Not before attaching my solution to the NDepend project, which I did.

After attaching the project, I went back to run an analysis on my console app but I kept getting this error:

Turns out the NDepend project did not pick up the Visual Studio project. I then closed Visual Studio and reopened it, yet I had the same error after loading the NDepend project and attempting to run the analysis again. Paying more attention to the error message this time, I noticed the error was about a reference to my solution not being loaded in NDepend. I thought that maybe the issue came from me not creating the NDepend project in my console app's solution folder. Probably a noob error on my end. So I went on to edit the NDepend project properties.

Above, you can see the NDepend project properties after I added the reference to my solution using the "Add Assemblies from VS solution(s)" button. It loaded the binary generated by the solution along with the third-party binaries my solution uses, System.Runtime and System.Console. After that, I ran another analysis and it eventually worked, as you can see below:

First analysis

Now that I had finally set up the static analysis properly, I could dive into what it reveals about a basic "Hello World!" console app. After that first successful analysis run, I could see that the NDepend menu had changed. A whole new world opened to me. As a first reflex, I opened the "Rules" submenu. From there I could see that a rule had been violated.

What rule could Microsoft’s “Hello World!” code possibly have violated? Well, look down.

Microsoft's coding sins

"Class with no descendant should be sealed if possible." It is actually more of a warning. A cool bit I noted is that you even get a detailed description of what caused the warning, along with why you should consider fixing it.

I always learned that we should have as few warnings as possible, so let's clean that up and make our Program class sealed. After making the change, when I re-ran the analysis, I got the same result and broken rules as before, plus a message telling me that my file Program.cs was out of sync. I got a hunch and rebuilt the solution. Then the analysis result views updated.
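For reference, the change itself is a one-word edit to the template's entry point (a sketch; the namespace is whatever your project is called):

```csharp
using System;

namespace NoobReviewConsoleApp // hypothetical project name
{
    // "sealed" tells the compiler (and NDepend) that nothing inherits from this class.
    internal sealed class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}
```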

NDepend all green

Technical debt (feel free to skip this part)

Now that the code is green and clean, it is time to try and build some technical debt. If you are not familiar with that term, I will try to sum it up for you. Technical debt is the implied cost of future rework you take on when choosing a quick and easy solution over one that would be more thorough but take more time. More often than not, choosing the easy way will hit you back. It will hit you hard.

Let’s say you take a complex subject at school. You could put in place a system to cheat to get good grades. It is easy and does not require extensive preparation work. Yet you can get caught and lose everything. Also, the ink can ruin your cheatsheet. Or, you could learn that subject and try to do your best mastering it class after class, exercise after exercise. You will not necessarily feel the effort was worth it from the start but eventually it will pay off. Learning your subject from the start is hard but you get more confidence to build on top of. Building technical debt on purpose is basically cheating on your Geometry class from high school. Don’t cheat on your Geometry class.

Back to the dirty coding

I felt like I did not want to spend months writing the perfect imperfect piece of code, so I just googled "c# bad practices" and opened the first result that came up. From there I copied the method and adjusted it to be called from our Main(). You can copy the code below if you are trying to reproduce the experiment.
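The snippet below is a reconstruction in the same spirit, shaped to match the findings discussed next (a public Calculate() method, magic numbers, duplicated discount logic, and a 0 fallback price); treat names like Class1 as placeholders:

```csharp
public class Class1
{
    public decimal Calculate(decimal amount, int type, int years)
    {
        decimal result = 0;
        // Magic numbers: a loyalty discount capped at 5 years.
        decimal disc = years > 5 ? (decimal)5 / 100 : (decimal)years / 100;
        if (type == 1)
        {
            result = amount;
        }
        else if (type == 2)
        {
            // Discount logic written out by hand...
            result = (amount - (0.1m * amount)) - disc * (amount - (0.1m * amount));
        }
        else if (type == 3)
        {
            result = (0.7m * amount) - disc * (0.7m * amount);
        }
        else if (type == 4)
        {
            // ...and duplicated again for another account type.
            result = (amount - (0.5m * amount)) - disc * (amount - (0.5m * amount));
        }
        // Falls through to 0 when no account type matches.
        return result;
    }
}
```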

Once the code was ready, I rebuilt the solution and ran a new analysis.

The post I copied from points out a few problems itself, some of which it would be unfair to expect NDepend to flag here. However, I will keep the points that I wish had been picked up and were not.

What was found

  • The Calculate() method is public yet accessed by only one method in a console app. I hoped to see more findings about the copied code itself rather than about how I access it from my Main() method.

What was not found

  • The use of the magic numbers 0.1, 0.5, 0.7, 1, 2, 3 and 4.
  • Disrespect of the DRY principle with the same piece of logic written twice for discount calculation.
  • An alleged bug where the calculated price is 0 when none of the if-else branches match (to be fair, it might be valid business logic in some cases, but a warning would be welcome).

It can be considered unfair to point these out, and maybe it is. I will try to spend some time later to see whether it is possible to create custom rules to spot any of these. That will definitely be a fun exercise. Feel free to try the same at home.

Wrapping up

I originally planned on adding a section where I would try to get more warnings and errors, but that would be outside the boundaries of what I want a Noob Review to be. A follow-up covering more complex cases and custom queries would be more fitting as a separate post anyway. Since there are loads of things currently happening in my life, that post might not happen for a while. That being said, let's wrap up with some pros and cons I noted during this quick take.

Pros

  • NDepend provides a clear explanation of what rules are broken and where in your project.
  • You don't need an engineering degree to get started without a manual. I do have one, so that might invalidate my comment; people without engineering degrees, let me know if you found it hard to use without checking any guide.

Cons

  • You have to re-open the NDepend project when restarting Visual Studio.
  • You have to rebuild the project after editing a file to verify that no rule was violated or that a violation was fixed. Building the project does not automatically trigger NDepend's analysis either.
  • Not available for VSCode or Visual Studio for Mac.

Special note on customization

While people love to customize things, I do not trust myself to write the rules engine that determines my code's quality. I am likely to make a mistake in there and not notice it. I may actually move this to the pros after experimenting with it more.

Closing

After this first experiment, I do not think I would use NDepend for my personal projects. The cons I pointed out above outweigh the pros, in my opinion. I do believe that spending more time with NDepend could change my vision of it, and maybe make me realise that it fits my needs more than I think. I am no evangelist nor influencer, and even if I were, or become one by the time you read this, you should not take this post as absolute truth. It is a Noob Review after all; it cannot be fully right nor fair. My piece of advice is to go and have a look for yourself. If this post piqued your interest, you should download NDepend and figure out whether it fits your needs. You get a 14-day trial to play with it. Happy experimentation!

.NET Core CLI Tools: Build a web API in 10 minutes

Reading Time: 7 minutes

This tutorial is an introduction to the .NET Core CLI tools. More precisely, it is about creating a web API using the CLI tools provided for .NET Core. Whether you are a beginner in development or just new to .NET Core, this tutorial is for you. However, you need to be familiar with what APIs and unit tests are to fully enjoy it. Today, we will set up a solution grouping an API project and a test project.

For the next steps, you will need to install .NET Core and Visual Studio Code (referred to as VSCode later for the sake of brevity), both of which are supported on Mac, Unix and Windows. If you want to know how that multi-platform support works, have a look here.

Creating the solution

First things first, we will open a terminal (or PowerShell for Windows users). Once this is done, we can create our solution, which I will name DotNetCoreSampleApi, as follows:
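```sh
# A likely version of the lost snippet: create the solution in a new folder.
dotnet new sln -o DotNetCoreSampleApi
```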

This command will create a new folder DotNetCoreSampleApi containing a solution file with the surprising name DotNetCoreSampleApi.sln. Next, we will enter that folder.
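```sh
cd DotNetCoreSampleApi   # enter the solution folder
```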

Creating and running the sample web API

Now that the solution is here, we can create our API project. Because I am not the most creative mind, I will also name it DotNetCoreSampleApi. Here is the command to create the project:
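```sh
# A likely version of the lost snippet: scaffold a web API project in a subfolder.
dotnet new webapi -o DotNetCoreSampleApi
```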

That command will create a subfolder named DotNetCoreSampleApi within your solution folder DotNetCoreSampleApi. If you followed all the steps, your solution root should contain the file DotNetCoreSampleApi.sln and the web API folder DotNetCoreSampleApi. The web API folder contains a few files, but the one we need now is DotNetCoreSampleApi.csproj. We will add a reference to it in our solution. To do so, run the following command:
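```sh
# A likely version of the lost snippet: register the project in the solution file.
dotnet sln add DotNetCoreSampleApi/DotNetCoreSampleApi.csproj
```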

After getting a confirmation message, we can now start the API by running this command:
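```sh
# A likely version of the lost snippet: run the API project from the solution root.
dotnet run --project DotNetCoreSampleApi
```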

After a few seconds, it should display a message notifying you that the API is now running locally. You may access it at http://localhost:5000/api/values, which is the Values API's default endpoint.

Adding the test project to the solution

You may be aching to see some code by now, but unfortunately, you will have to wait a bit more. Back in the days of .NET Framework, there was no such thing as generating projects from the command line; you had to use cumbersome windows to pick what you needed to create. Now all of this project generation can be done from the command line thanks to the CLI tools. You will like it. And this is merely a suggestion. Back to the terminal. If the API is still running, you may kill it by pressing Ctrl+C in the window you started it in.

We are now able to create a test project and add it to the solution. First, let’s create the test project using dotnet new as follows:
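```sh
# A likely version of the lost snippet: create an MSTest project in its own folder.
dotnet new mstest -o DotNetCoreSampleApi.Tests
```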

That command creates a new unit test project using MSTest in a new folder named DotNetCoreSampleApi.Tests. Note that if you are more of an xUnit person, you can replace mstest in the command with xunit, which will create an xUnit test project. Now, similarly to what we did for our web API project, we will add the test project to the solution:
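```sh
dotnet sln add DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj
```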

Almost instantly you should have a confirmation that the project was added.

Getting acquainted with VSCode

Now, open VSCode and open the folder containing the file DotNetCoreSampleApi.sln. At this point, the folder structure looks like this:
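```
DotNetCoreSampleApi.sln
DotNetCoreSampleApi/
DotNetCoreSampleApi.Tests/
```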

If you have never used VSCode before, or at least not for C# development, you will be prompted to install the C# extension:

Select "Show Recommendations" and apply what VSCode suggests. Then, once you have finished installing the C# extension, you will get a warning about adding missing assets to build and debug the project; select "Yes".

Don't hesitate to go back a few steps, or even to restart this tutorial, if something does not seem to work as expected. Here is how your test folder should look by now:

Time to write our test

And finally, we are getting to the fun code-writing part. The part where we put aside our dear CLI tools. By code writing I mean copy/pasting the code I will show you later. And by fun, I mean code that compiles. There is nothing more frustrating than code that does not compile, especially when you have no idea why. Fortunately, this will not happen here.

Now that you have your code editor ready to use, you can go ahead and delete the UnitTest1.cs file. Once done, create a new file named ValuesControllerTests.cs in your test project. Your VSCode then looks more or less like this:

The file should be empty, but in case it is not, delete its contents to match the screenshot above. As soon as you get your nice and empty file, copy the code below into it:
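```csharp
// A reconstruction of the test class: it calls the default ValuesController.Get()
// and asserts that both template values come back. The failure messages match
// the one referenced later in this post.
using System.Linq;
using DotNetCoreSampleApi.Controllers;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace DotNetCoreSampleApi.Tests
{
    [TestClass]
    public class ValuesControllerTests
    {
        [TestMethod]
        public void Get_ReturnsValue1AndValue2()
        {
            var result = new ValuesController().Get();

            Assert.IsTrue(result.Contains("value1"), "value1 is not returned");
            Assert.IsTrue(result.Contains("value2"), "value2 is not returned");
        }
    }
}
```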

Now you should get some warnings, which is perfectly fine because they should be here. If you hover over them, you will see some reference-related error messages like the ones below:

These appear because we have not referenced the API project from our test project yet. It is time to open your terminal again. However, if you feel like having a bit of an adventure, you can try VSCode's integrated terminal, which opens in your solution folder. To do so, press Ctrl+' while in VSCode, or Ctrl+` if you're using a Mac; either probably works on Unix.

Once the terminal is open, we will reference our API project from the test one with this command:
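```sh
# A likely version of the lost snippet: reference the API project from the test project.
dotnet add DotNetCoreSampleApi.Tests reference DotNetCoreSampleApi/DotNetCoreSampleApi.csproj
```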


Now that the reference to the API project is here, the referencing warnings concerning it should be gone. However, a new one might appear about the Get call, as below. I am not quite sure why it happens, but it seems to be a bug within VSCode not picking up that this reference comes through the API project. You should not worry about it though, because if you build the solution and/or run the tests, it will work.

Understanding and running our test

Now we get to the crispy part, the one we need before going any further; the part we can use as the basis before delving into more advanced stuff like continuous integration or continuous deployment: running a test that validates our logic. If you have a look at the ValuesController.cs file inside our API project, you will see that the Get() method returns an array of strings. This array contains the values "value1" and "value2". The test class you copied earlier contains a method that verifies that both "value1" and "value2" are returned by this Get().

So, back to the ValuesControllerTests.cs file. You may have noticed some links appearing on top of our test method like this:

You can ignore the "0 references" and "debug test" links for now. Press "run test" to execute our test. It will actually first build our API project, so that the latest version is linked to our test binary. After running the test, you should see something like this:

And unsurprisingly, our test is passing. Now let's see what happens if we remove "value2" from the array returned by ValuesController.Get() and run the test again.
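Something like this (a sketch of the template's action after the edit):

```csharp
[HttpGet]
public IEnumerable<string> Get()
{
    return new string[] { "value1" }; // "value2" removed on purpose
}
```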

Running the test again:

As you can see, this time it failed. In order to have it pass again, you may now undo your changes in ValuesController.cs.

 

A little more of .NET Core CLI tools

It's nice to know that one of your tests failed; however, you know what is better? Knowing which test actually broke, and why. This is the perfect time to bring up the .NET Core CLI tools again. You can run our test suite with this command:
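```sh
# A likely version of the lost snippet: run the tests from the solution root.
dotnet test DotNetCoreSampleApi.Tests
```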

Which will actually provide you with some more details on what broke:

.NET Core CLI tools magic with tests

As you can see, you get the message "value2 is not returned" that we defined in our test file. Here is a little callback for you:
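```csharp
// The assertion that produced the failure message above (from the test sketch):
Assert.IsTrue(result.Contains("value2"), "value2 is not returned");
```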

I won't say that you are now a fully-fledged .NET Core developer, but it's a good start. You just created your (maybe) first API and test projects. Moreover, the test actually validates some of the API controller logic. So you know, congrats on that. However, if for one reason or another something did not go according to plan, feel free to check the source code here.

I hope you enjoyed this new entry in my future-proof series, and I will see you next time. You should look forward to it, as I will cover how to set up continuous integration for such a project. It should be different from that other post from last year using Appveyor.

And remember, if you ever need anything from the CLI tools:

dotnet new everything

Just dotnet new it!

Greek Goddess Gamble: Slowing down the writing

Reading Time: 2 minutes

Hi everyone, it's been exactly a month since my last post and I have a good excuse for it. As it turns out, I was pretty busy between a wedding, a holiday and the beginning of a personal project. Yep, another one! From now on I will refer to it as my Greek goddess gamble, until I reveal what it is all about.

Phase 1 of that gamble started a few weeks ago; hopefully I'll make enough progress by December. Time is key here, so it is more than likely that I will post even less until then, which makes it an even bigger gamble. Not posting for a month slowed down the growth of my view count by 8%. Still, I am lucky enough to see the number of readers slightly increasing week after week, and I hope it will last until December. Hopefully, the break will allow me to fully focus through my weekends and evenings to deliver on that crazy move.

Before you ask: no, I am not gonna retire to a corn field to raise my chickens anytime soon. Anything chicken-related I leave to KFC (not sponsored, but can be :wink wink:). Here I am digressing again, because I don't want to risk revealing too much. Back to the main topic: that Greek goddess gamble does involve a fair amount of coding along with research. I originally wanted to kind of serialize it and post every week about it, or even vlog my progress. But eventually I realised that it will be more meaningful if there is a clear narrative through the posts. It is much easier to tell a story when you know the end.

To conclude, if phase 1 of that gamble goes well, I will start to post on a weekly basis and/or vlog through phase 2. In case of failure, well, I'll just present over a couple of posts what it was about and what went wrong. Stay tuned!

P.S. If you are craving my personal posts, you can check out my recent poetry or my techier stuff. You may even want to keep an eye on Poq's blog within the next few weeks; just don't tell anyone I told you.

No more, Bugs Funny, no more

Reading Time: 2 minutes

Have I ever told you about that annoying Bugs Funny?
Bothered me day and night from his irritating company.
And another day and another night, yet again another one,
Thinking I had nothing better to do, no joy, no life, no plan.

Sneaking in my code when I was all chill and compiling,
His exception traces eyeing at me seem almost smiling.
Even mocking, it doesn’t matter how hard I have studied,
As for next few days he will torture, get my brain crippled.

Burned in its light, blind to its weak spot, feeling hopeless,
I keep browsing StackOverflow with Redbull and stress.
It aches in every bone, my date nights and parties gone,
Bugs look at me, trick me, slap me. Show me mercy? None.

Really unfair you know, not one bug should have all that power,
Emprison, break my mind, haunt from the kitchen to the shower.
Drinking my misery when suddenly I remember, flabbergasted,
I inadvertently turned a comparison to an assignation, damned!

Run my program again as I get closer and closer to the rise of dawn,
I finally got rid of Bugs Funny, indeed now he’s dead and gone.
When I squashed it I wondered why I was so numb, so dumb,
More than ever I was so close to cry, beg, call for my mum.

Rest my head now I will, not ever rest on my success I shall,
Because his brothers are lurking in the shadows, right behind the wall.
Waiting for my vigilance to fall, letting room for them to spawn,
My testing shall betray them and help eradicating them in a yawn.

I will be the watcher on that wall, protector of my software,
None shall corrupt it with uncovered logic, noobs beware!
It will not be easy but a man’s got to do what a man’s got to do,
I shall head to the bar for a few drinks without any further ado.

I got pretty inspired, writing two of these within 24 hours between Monday and Tuesday, when the first one was a month ago. Thanks for reading again; if you missed the last one you can also have a look there. I see that "Poetry time!" is quite popular on here, so I'll definitely write more tech-ish poems in the future. Thanks again for reading guys!

Mister Ozymandie: A song of dice and dire

Reading Time: 2 minutes

Tick, tick, tick look at me it’s Mister Ozymandie,
Once more bringing, no, inflicting my opinion upon thee.
Whatever the effort, the time you put in your source,
My remarks, of your good day will disrupt the course.

No matter how close you were to a merge
It is time for me to compare with yours my verge.
I am the biggest, the best, better than the rest
The victim you will be of my self-esteem quest.

Whether right or wrong my assurance won’t fail
Poker facing you, hoping your knowledge frail.
Always trumping around like there is no tomorrow
Still making up shit when my mastery is shallow.

Even if you manage to see through my gambling
All day, every day, I will keep them dices rolling.
Although you call it perversion, it is my perfection
I know you see me as a pain, worse, a diversion.

It doesn’t matter what you think, it is my ship,
None shall questions my conduct, like dictatorship.
Become one for all, always know that all is for me,
Your personal judgement here has no place to be.

Line after line, block after block, thought after thought,
I shall erase your experience, everything you brought.
Indeed, I will not stop until all aspects of my glorious vision,
Sink deep in your mind, make the past you aversion.

For that I am the star on the hailed Christmas tree,
For that others forced the same behaviour on me.
Even though this might be to your growth toxic,
Above all, my ego, my satisfaction is what I pick.

Even if you’re right and I am turning value to churn
And someday for my crimes one makes me burn.
I just want one thing, that you dance on my symphony
Myself throning in development pantheons for eternity.

Because it’s me Mister Ozymandie. All! Look at me!
On the humanity commit history, my mark will be.

Thanks for reading, I hope you enjoyed reading it as much as I enjoyed writing it. I guess writing poems on this blog is a thing now. If you haven't read the previous one, you can check it out by clicking here.

HttpResponseSimulator: A simple tool born over an afternoon

Reading Time: 2 minutes

What is the HttpResponseSimulator? Apart from being the least original name, it is a tool that lets you simulate the behaviour you want from an endpoint, in order to test an HTTP client and/or wrapper. I built it over an afternoon so that I could write a timeout test for an HTTP client wrapper. I had to get familiar with Node.js and Express again, which I had previously used to create HappyPostman. Despite the slow start, it took me about a couple of hours to implement and deploy.

Like every small project written with a simplistic goal, the first version was not great. If you follow that link you will notice a lot of coupling and no tests whatsoever. The first couple of commits were still good enough to deploy and serve the HttpResponseSimulator's original purpose. However, I wanted to push it further, live up to my whole "future-proof" thing and make it robust. To make it robust I needed it to be testable and to cover as much logic as possible. This is where I started googling to figure out how to write tests and get coverage feedback with Node.js.

Due to the high coupling of my code, my only option was to write HTTP assertion tests: the kind of tests where I hit the endpoints directly and validate the output based on the given input. In order to write these tests, I had two options that would later allow me to refactor that code and clean it up. The easy option was to follow my own tutorial on Postman and remain in known territory.

However, I chose to try something new and stumbled upon supertest, which can be used in tests run with Mocha. It seemed like the best option, since I could write all my other post-decoupling tests using Mocha too. Also, Mocha can be used along with tools like Istanbul to generate coverage metrics that can be uploaded to Coveralls. In this case, my choices were all driven by what I wanted to achieve, which is very important in software development. Eventually, after a few days of test writing and refactoring, I was finally happy with myself; you can see the test coverage result below:

HttpResponseSimulator coverage results
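For a flavour of what those HTTP assertion tests look like, here is a minimal sketch using Mocha and supertest (the /wait/:seconds route and the app module path are assumptions, not the project's actual API):

```javascript
const request = require('supertest');
const app = require('../app'); // hypothetical path to the Express app

describe('HttpResponseSimulator', function () {
  this.timeout(5000); // delayed responses need a longer Mocha timeout

  it('responds with a 200 after the requested delay', function (done) {
    request(app)
      .get('/wait/2')      // hypothetical route: wait 2 seconds, then respond
      .expect(200, done);  // assert on the status code, then finish the test
  });
});
```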

Now that it is robust, I feel it is time to share it with the world. It is time to make it open-source; it may just die out in a few months, or grow and become something bigger. It currently serves a few more purposes than just waiting a few seconds before responding: you can now get your response from any freely available URL or Pastebin ID, among other things. If you have any improvement suggestions, feel free to hit me up through Github. Actually, while you're at it, if you have any coding notions and want to try your hand at open-source development, you can fork the project and open pull requests to improve it. Also, if you have a better name than HttpResponseSimulator, hit me up as well.

 

 

Trying to provide helpful pull request reviews

Reading Time: 3 minutes

How I unblocked a frozen pull request

A few weeks ago, I saw a pull request modifying one of our webjobs, whose codebase is pretty old and had no tests. The pull request had no tests either. The thing is, we had decided to make unit testing mandatory for any pull request a couple of weeks before.

I started reviewing the code when I noticed someone else had already posted a review. A pretty laconic "please add tests". Not a bad nor a mean review, but not a really helpful one. Proof of it is that it had been posted about an hour earlier and the pull request was still blocked. Indeed, we do not want untested logic to enter or remain in our software; yes, that is aligned with our new policy on tests. That being said, the webjob code was tightly coupled and pretty much impossible to test as it was.

This is where I stepped in: I reviewed the code and found a way to make it testable. I then suggested a few minor changes to the existing codebase to that end. Within thirty minutes the submitter had modified the code and was pretty happy to have tests for the logic he improved. Eventually, I went on and approved his pull request, and the first reviewer followed suit.

What went wrong with the first review

Please add tests

In most cases, "please add tests" is enough to do the trick: the code is designed properly and decoupling is applied wherever possible. "Please add tests" is enough if the tests were not written out of laziness or simply got forgotten. However, in this particular case, the reviewer did not take into consideration the context of the change. It was an update to an old project designed at a time when the backend team was a couple of guys trying to launch a company. Delivering the software was prioritised over making it easily maintainable; in order to allow the business to take off, testing and decoupling were left for another day. Taking these factors into consideration, I was able to come up with a few strategic changes that eventually allowed some tests to be added.

You may have noticed the two different approaches and their effects here: on one hand, turning a change of context into a problem; on the other hand, suggesting a solution. The first one left the pull request frozen for an hour, where the latter allowed the pull request to move forward and the code to be merged. As software engineers we need to help others move forward and propose solutions, not problems. Solving problems is central to what we do, whether it is designing a seamless checkout or helping a colleague make progress on a project.

Become an enabler

We have all been that first reviewer at one moment or another, and if you just recognized yourself, here are a few tips for you:

Leave your ego out

If you comment on a pull request because it will make you feel superior to the submitter, by showing how big your knowledge is relative to theirs or how you are the best developer there is, don't. Just don't. Especially if it does not bring any value to what they are trying to accomplish through the pull request. Always leave your ego out of anything if you want to be productive.

Ask questions

Close to the previous one, even though one may happen without the other. Please do not assume someone's coding or design choices are wrong just because they do not match what you would do. Ask questions, and if there is a real issue, try to provide comments that drive the submitter towards a solution.

Follow-up

When you request changes, depending on the system you are using, you may be blocking a pull request and preventing someone from working. Make sure you follow up whenever you can: between two of your own pull request submissions, during a coffee break, or any time you come back to your desk. Time is precious, and when you request changes on a pull request you become responsible for the additional time every developer involved spends on it.

Bring a positive value

Ask yourself about the impact you have on a project or a colleague. Does your comment make your colleague's day better or worse? If it makes it worse, does it at least help solve the problem at hand and bring positive value? Because at the end of the day, all that matters is the value you can create: value to a business, value to people. Making a positive impact on your environment will encourage others to do the same. Eventually it will help you and the people around you thrive and yearn for improvement every day.

Special thanks to Joshua Dooms who did make a positive impact on my vision of how reviews should go.

Time: Fail fast and bounce back

Reading Time: 2 minutes

Time saving tube fail

Nowadays, most tools we use exist to save time. In London, we have tons of options to make travelling on public transport easier: contactless debit card, ticket, Oyster card, you name it. However, having the choice between these options can prove troublesome when in a rush. Indeed, on Monday I used my debit card by accident instead of my Oyster card, making myself pay for a right I already have. Moreover, if I had not used it again while leaving the tube, I would have ended up charged the maximum amount; I think it is £6.60 instead of £2.40 for a journey in zone 1. Luckily, I realised my mistake on the spot, which allowed me to rectify it when leaving at my station.

I made that mistake because I saw the elevator open and jumped in. Yes, I got in the elevator, but instead of losing a couple of minutes I lost money. The amount is as insignificant as the time saved; however, it got me thinking. I started thinking about the times I made design or coding decisions to save time. The classic "let's do something quick" that is basically the coder's "spray and pray".

Spray and pray then spray to slay

In Monday's instance, the "spray and pray" was tapping my wallet on whichever side was more accessible. I knew the odds of mistakenly using my debit card were 50/50, and I knew how to limit the loss in case of failure. When the failure happened, I paid a price I was ready for. Similarly, on a project, you need to reduce the risks of your decisions as much as possible, or at least figure out a way to turn things around if they go south. Failing to recognise the risks of the choices we make will be as punishing as the risks taken allow.

This might be the key here. Maybe it's not about missing a shot, but about the rebound; about what you will do when the ball bounces back. If you know how to bounce back from a mistake, you will feel empowered to do more and to learn from it. Maybe, in the end, being a good developer is not necessarily about making the right choice every time. It can be about evaluating the potential consequences of our choices and ensuring they are worth taking. It can also be about whether we can adapt to the results of our choices.

So next time I take the tube, I will slow down and tap my Oyster instead of my debit card. Coding-wise, I could run into the most MacGuffinest MacGuffin piece of software that might help on a project, and still take time to evaluate its pros and cons so that I can mitigate the risks of using it.

My Musketeers for DotNet Test driven development

Reading Time: 5 minutes

Test: four letters, one meaning, and for some people a struggle. Getting people around you to write tests is easy only when everyone already agrees with you. As often, there are instances where some people show resistance to writing tests. Here is what I hear most from them:

  • "I don't have time to write tests."
  • "I don't need to test this."
  • "I can't write a test for this."

Not writing tests will always lead to hours of tears and blood; tears and blood from debugging something you let slip through, something that broke your super edgy software. I am not saying that writing tests will give you bug-free software, but at least you know exactly how your code behaves and what you can reasonably expect from it. Despite having great code coverage, your code will eventually break, and that's perfectly fine. This is where your tests become useful, as they help ensure you don't break your existing code while refactoring or fixing a bug. Then you can simply add a new test to cover that unexpected scenario.

For example, yesterday a colleague had some weird data mix-up on a development deployment of an API I created a few months ago, which revealed a case I hadn't thought of. That API had 95% coverage, and still a bug showed up, because that is how software works. The bug came from a virtually impossible case, so what I did was replicate it, write a test for it, fix it, get it through review, and release it, all within 30 minutes. That project's coverage is now at 98% (the highest we have; of course I'm gonna show off about it), and yet I know that one day or another, another bug will pop up. When that day comes it may not be fixed as quickly as yesterday's, but it will be just as easy to refactor parts of the code safely.

Yes, it takes some time to write tests, but in the long run it is more than worth it. For a long time I thought that the only reason not to cover one's code was laziness. Not the good laziness that makes you want to save time by writing tests instead of spending hours debugging and manually testing a whole bunch of uncovered features. Still, over time I came to learn that no developer walks to their desk every day to write buggy code on purpose. A lot of factors come into play, such as clients and project managers pressuring you with tight deadlines. Tighter and tighter deadlines, day after day. Then ensues a drop in quality in favour of faster delivery, which in the long run can hurt a business.

In that kind of situation, blaming a developer for not writing tests will not help anyone. However, what can help is providing tools that help that developer move faster. This is where today's post is supposed to help you: help you accelerate your development. Today, I am presenting three tools that help me every day to deliver code faster without sacrificing quality. Although nothing is magic, I hope these will help you in a personal or professional context as they help me every single day.

It doesn't matter whether you have access to continuous integration or not. What will matter is your ability to write decent tests. Even if you write only very simple happy-path tests, as long as you write them properly you will be fine. Here we go!

Moq

Moq is awesome for unit testing. What is unit testing? Well, I don't have a proper definition in mind and there are tons of different versions online. The version that follows I learned mostly through experience, and you are free not to believe the same thing. To me, a unit test is a piece of software written to test a component, regardless of that component's dependencies, to make sure that a defined input produces an expected output. Basically, unit tests allow you to validate your software's behaviour in a way that prevents you, or a potential collaborator, from breaking your software later on.

How does Moq work? The premise is that you can mock any interface, which allows you to define how your software behaves based on what a dependency returns. This is great in an inversion-of-control context. It also extends to virtual and abstract class methods, so you can write tests defining how a class behaves based on what a method could return. Another cool feature of Moq is the ability to verify that the methods of a mocked interface/class were called with a specific input. That allows you to make sure the method under test calls its dependencies' methods with the parameters you expect.
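Here is a minimal sketch of both features, assuming a made-up IGreetingService interface (not from any of my projects):

```csharp
using Moq;

public interface IGreetingService
{
    string Greet(string name);
}

public class Greeter
{
    private readonly IGreetingService _service;
    public Greeter(IGreetingService service) => _service = service;
    public string Welcome(string name) => _service.Greet(name) + "!";
}

public class GreeterTests
{
    public void Welcome_CallsGreetWithTheGivenName()
    {
        var mock = new Mock<IGreetingService>();
        mock.Setup(s => s.Greet("JD")).Returns("Hello JD"); // define the dependency's behaviour

        var result = new Greeter(mock.Object).Welcome("JD"); // "Hello JD!"

        mock.Verify(s => s.Greet("JD"), Times.Once()); // verify the expected call happened
    }
}
```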

For more information on Moq, you can check out their documentation on Github.

AutoFixture

Let's now move on to AutoFixture, which I have used pretty much since it came out. AutoFixture is a library that generates dummy data on the fly in any context. This thing made my test writing so much faster. It also works great with Moq to quickly write test cases where the input data does not really matter. You can use it to generate data of any type, from string to bool to your custom classes. One of my main uses for the library is to create data on the fly, without thinking too much about it, and use that generated data to drive my tests.
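A minimal sketch of what that looks like (the Order class is made up for the example):

```csharp
using AutoFixture;

public class Order
{
    public string Reference { get; set; }
    public int Quantity { get; set; }
}

public class AutoFixtureDemo
{
    public static void Main()
    {
        var fixture = new Fixture();

        var text = fixture.Create<string>(); // a random string, no thinking required
        var order = fixture.Create<Order>(); // every property filled automatically

        System.Console.WriteLine($"{text} / {order.Reference} x{order.Quantity}");
    }
}
```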

I have not reached the limits of what you can do with the tool yet. However, you need to be careful with types that have a recursive relationship, which you often get when working with EntityFramework. For example, if you have a Chicken class with a property of type Egg, and that Egg class has a property of type Chicken, you will end up with an exception due to a kind of infinite-loop situation. You can avoid it by specifying which properties you do not want set when generating your data.
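Something along these lines, reusing the Chicken/Egg example from above (a sketch):

```csharp
using AutoFixture;

public class Chicken
{
    public string Name { get; set; }
    public Egg Egg { get; set; }
}

public class Egg
{
    public Chicken Chicken { get; set; } // Chicken -> Egg -> Chicken: a cycle
}

public class RecursionDemo
{
    public static void Main()
    {
        var fixture = new Fixture();

        // fixture.Create<Chicken>() would throw because of the recursive relationship;
        // Build/Without skips the offending property instead.
        var chicken = fixture.Build<Chicken>()
                             .Without(c => c.Egg)
                             .Create();

        System.Console.WriteLine(chicken.Name);
    }
}
```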

For more details on AutoFixture, check out their Github. To jump straight to code samples, here is their cheat sheet.

Postman

This one is a bit different from the others mentioned previously. You can use Postman to document how your API works. You can use it for monitoring with a paid account, or build monitoring of your own using Newman. I wrote a couple of posts about it over the past months, on getting started and on building a simple CI using Appveyor. What I like about Postman is that it is pretty intuitive and straightforward to use, even for non-technical people. Once you get started you can do some pretty advanced flow-based testing, which is pretty useful in a microservices architecture. In the end, how and where you use Postman is down to you, and I love its flexibility: flexibility that allows you to make it fit your needs and accelerate your development.

Thanks for reading, you can now go and write a bunch of cool software with loads of tests. Or don’t. I’m not your dad and I won’t punish you, but your code will.