a coding nagger's blog

My name is Jean-Dominique Nguele and this is my blog. FLVCTVAT NEC MERGITVR

Category Archives: Productivity


Time: Fail fast and bounce back

Reading Time: 2 minutes

Time saving tube fail

Nowadays, most tools we use exist to save time. In London, we have tons of options to make travelling on public transport easier. Contactless debit card, ticket, Oyster card, you name it. However, having the choice between these options may prove troublesome when in a rush. Indeed, on Monday I used my debit card by accident instead of my Oyster card, paying for a right I already had. Also, if I had not used it again while leaving the tube, I would have ended up getting charged the maximum amount. I think it is £6.60 instead of £2.40 for a journey in zone 1. Luckily, I realised my mistake on the spot, which allowed me to rectify it when leaving at my station.

I made that mistake because I saw the elevator open and jumped in. Yes, I got in the elevator, but instead of losing a couple of minutes I lost money. The amount is as insignificant as the time saved, but it got me thinking. I started thinking about those times where I made design or coding decisions to save time. The classic “let’s do something quick” that is basically the coder’s “spray and pray”.

Spray and pray then spray to slay

In Monday’s instance, the “spray and pray” was to tap my wallet on whichever side was more accessible. I knew the odds of mistakenly using my debit card were 50/50 and I knew how to limit the loss in case of failure. When the failure happened, I paid a price I was ready for. On a project, similarly, you need to reduce the risks of your decisions as much as possible, or at least figure out a way to turn things around if they go south. Failing to recognise the risks of the choices we make will be as punishing as the risks taken allow.

This might be the key here. Maybe it’s not about missing a shot, but about the rebound. About what you will do when the ball bounces back. If you know how to bounce back from your mistakes you will feel empowered to do more and to learn from them. Maybe in the end being a good developer is not necessarily about making the right choice every time. It can be about evaluating the potential consequences of our choices and ensuring that they are worth taking. It can also be about whether we can adapt to the outcome of our choices.

So next time I take the tube, I will slow down to tap my Oyster instead of my debit card. Coding-wise, I could run into the most MacGuffinest MacGuffin piece of software that might help on a project and still take time to evaluate its pros and cons so that I can mitigate the risks of using it.

My Musketeers for DotNet Test driven development

Reading Time: 5 minutes

Test, four letters, one meaning and, for some people, a struggle. Getting people around you to write tests is easy only when everyone already agrees with you. As often, there are instances where some people show resistance to writing tests. Here are the things I hear the most from them:

  • D: I don’t have time to write tests.
  • A: I don’t need to test this.
  • B: I can’t write a test for this.

Not writing tests will always lead to hours of tears and blood. Tears and blood from debugging something you let slip through. Something that broke your super edgy software. I am not saying that writing tests will lead you to bug-free software, but at least you know exactly how your code behaves. There, you know what you can reasonably expect from it. Despite having great code coverage, your code will eventually break and it’s perfectly fine. This is where your tests become useful, as they will help you ensure you don’t break your existing code while refactoring or fixing a bug. Then you can simply add a new test to cover that unexpected scenario.

For example, yesterday a colleague had some weird data mix-up on a development deployment of an API I created a few months ago, which revealed a case I didn’t think of. That API had 95% coverage and still a bug showed up, because that is how software works. The bug was generated from a virtually impossible case, so what I did was replicate it, write a test for it, fix it, get it through review and release it, all within 30 minutes. That project’s coverage is now at 98% (the highest we have now, of course I’m gonna show off about it) and yet I know that one day or another, another bug will pop up. When that day comes, it may not be fixed as quickly as yesterday’s, but it will be just as easy to refactor parts of the project safely.

Yes, it takes some time to write tests, but in the long run it is more than worth it. For a long time I thought that the only reason for someone not to cover their code would be laziness. Not the good laziness that makes you want to save time by writing tests rather than spend hours debugging and testing a whole bunch of uncovered features. Still, over time I came to learn that no developer walks to their desk every day to write buggy code on purpose. A lot of factors come into play, such as clients and project managers pressuring you with tight deadlines. Tighter and tighter deadlines, day after day. Then ensues a drop in quality in favour of faster delivery that in the long run can hurt a business.

In that kind of situation, blaming a developer for not writing tests will not help anyone. However, what can help is providing tools to help that developer move faster. This is where today’s post is supposed to help you. Help you accelerate your development. Today, I am using this post to present three tools that help me every day to deliver code faster without sacrificing quality. Although nothing is magic, I hope these will help you in a personal or professional context as they help me every single day.

It doesn’t matter whether you have access to continuous integration or not. What will matter is your ability to write decent tests. Even if you write only very simple happy path tests, as long as you write those properly you will be fine. Here we go!
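To give an idea of what I mean by a simple happy path test, here is a minimal sketch, assuming xUnit and a made-up Calculator class (both names are just for the example):

```csharp
// A minimal happy path test, assuming xUnit; Calculator is a made-up class for the example.
using Xunit;

public class Calculator
{
    public int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_TwoNumbers_ReturnsTheirSum()
    {
        // Arrange: create the component under test.
        var calculator = new Calculator();

        // Act: call it with a known input.
        var result = calculator.Add(2, 3);

        // Assert: a defined input gives the expected output.
        Assert.Equal(5, result);
    }
}
```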

Moq

Moq is awesome for unit testing. What is unit testing? Well, I don’t have a proper definition in mind and there are tons of different versions online. The version below is the one I learned mostly through experience, and you are free not to believe the same thing. To me, a unit test is a piece of software written to test a component, regardless of the dependencies it has, to make sure that a defined input will provide an expected output. Basically, unit tests allow you to validate your software’s behaviour in a way that prevents you or a potential collaborator from breaking your software later on.

How does Moq work? The premise is that you can mock any interface, which allows you to define how your software behaves based on a dependency’s input. Which is great in an inversion of control context. This also extends to virtual and abstract class methods, so that you can create tests defining how a class behaves based on what a method could return. Another cool feature of Moq is the possibility to verify that the methods of a mocked interface/class got called with a specific input. That allows you to make sure that the method under test is calling its dependencies’ methods with the parameters you expect.
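To make that a bit more concrete, here is a minimal sketch of both the Setup and Verify sides, using a made-up IGreetingRepository consumed by a made-up GreetingService:

```csharp
// A minimal Moq sketch; IGreetingRepository and GreetingService are made up for the example.
using Moq;
using Xunit;

public interface IGreetingRepository
{
    string GetGreeting(string name);
}

public class GreetingService
{
    private readonly IGreetingRepository _repository;

    public GreetingService(IGreetingRepository repository) => _repository = repository;

    public string Greet(string name) => _repository.GetGreeting(name).ToUpperInvariant();
}

public class GreetingServiceTests
{
    [Fact]
    public void Greet_ReturnsUpperCasedGreeting_AndCallsTheRepository()
    {
        // Mock the dependency and define how it behaves for a given input.
        var repositoryMock = new Mock<IGreetingRepository>();
        repositoryMock.Setup(r => r.GetGreeting("Maths")).Returns("Hello Maths!");

        var service = new GreetingService(repositoryMock.Object);

        // The defined input produces the expected output.
        Assert.Equal("HELLO MATHS!", service.Greet("Maths"));

        // Verify the dependency was called with the parameters we expect.
        repositoryMock.Verify(r => r.GetGreeting("Maths"), Times.Once());
    }
}
```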

For more information on Moq, you can check out their documentation on GitHub.

AutoFixture

Let’s now move on to AutoFixture, which I have used pretty much since it came out. AutoFixture is a library that allows generating dummy data on the fly in any context. This thing made my test writing so much faster. It also works great with Moq to quickly write test cases where the input data does not really matter. You can use it to generate data of any type, from string to bool to your custom classes. One of my main uses for that library is to create data on the fly without thinking too much about it and use that generated data to validate my tests.

I have not reached the limitations of what you can do with that tool yet. However, you need to be careful with types that have a recursive relationship, which you often get when you work with EntityFramework. For example, if you have a Chicken class with a property of type Egg, and that Egg class has a property of type Chicken, you will end up with an exception due to some kind of infinite loop situation. You can avoid that situation by defining which properties you don’t want to set when generating your data.
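Written out, that Chicken and Egg situation and the way around it look roughly like this:

```csharp
// A short AutoFixture sketch of the Chicken/Egg situation described above.
using AutoFixture; // older versions expose the same types under the Ploeh.AutoFixture namespace

public class Chicken
{
    public string Name { get; set; }
    public Egg Egg { get; set; }
}

public class Egg
{
    public Chicken Chicken { get; set; }
}

public static class AutoFixtureExample
{
    public static void Run()
    {
        var fixture = new Fixture();

        // Dummy data on the fly, from primitives to custom classes.
        var anyString = fixture.Create<string>();

        // Chicken -> Egg -> Chicken is recursive, so creating one directly throws.
        // Skipping the property that closes the loop avoids the exception.
        var chicken = fixture.Build<Chicken>()
                             .Without(c => c.Egg)
                             .Create();
    }
}
```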

For more details on AutoFixture, check out their GitHub. To jump straight to code samples, here is their cheat sheet.

Postman

This one is a bit different from the others mentioned previously. Indeed, you can use Postman to document how your API works. You can use it for monitoring with a paid account, or build your own monitoring using Newman. I wrote a couple of posts about it over the past months, to get started or to build simple CI using Appveyor. What I like with Postman is that it is pretty intuitive and straightforward to use, even for non-technical people. Once you get started you can do some pretty advanced flow-based testing, which is pretty useful in a microservices architecture. In the end, how and where you use Postman is down to you, and I love its flexibility. That flexibility allows you to make it fit your needs and accelerate your development.

Thanks for reading, you can now go and write a bunch of cool software with loads of tests. Or don’t. I’m not your dad and I won’t punish you, but your code will.


Simple continuous integration with Appveyor and Newman

Reading Time: 13 minutes

Last month, I posted about Postman enabling you to test your APIs with little effort so that you can build future-proof software. Here we are going to cover setting up continuous integration for a simple project by using Newman to run your Postman collections. You may have heard about continuous integration in the past. Most commonly, continuous integration will build software from one’s changes before or after merging them into the main codebase. Even though there are countless tools that allow implementing continuous integration, I will focus on Appveyor CI. In order to keep things simple, I will create a very basic web API project and host it on GitHub.

Create GitHub repository

You can create the repository on GitHub by clicking this link: Create a repository on GitHub. For more details, please follow the documentation they provide on their website.

Broadly, you should see something like this when you create the repository:

Create a repository on GitHub

Once you’re all set, if you have not done it yet, you need to clone your repository. Personally, the command line feels easier, as a simple “git clone” will do the job.

Command-line execution will look like this.

Create Web API project

Project setup

Now that your repository is all set, we can actually create the Web API project. For this step, you will need to install Visual Studio, ideally 2017, which you can download here. Once installed, open it and create a new project by selecting “File”, then “New”, then “Project”.

After the project template selection popup appears, select “ASP.NET Web Application”. As for the project path, select the one where you cloned your repository and press OK.

Now you will have to select what kind of web application you want to create. Select “Empty” and make sure that the “Web API” option is enabled like below. Note that selecting “Add unit tests” is not necessary for this tutorial.

Then press “OK” and wait for the project creation. Once it’s done, your solution explorer should look like this.

Time to add some code. Yeah!

Add your Controller

First, right-click on the “Controllers” folder. Now, select “Add” then “Controller”. Pick “Web API 2 Controller – Empty” and press “Add”.

Next, you get to pick the controller name. Here it will be DivisionController.

Now you should have an empty controller looking like this:
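For reference, the freshly generated controller is roughly the sketch below; the namespace will match your own project name:

```csharp
// Roughly what the generated empty Web API 2 controller contains.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

namespace CalculatingWebApiAppveyor.Controllers
{
    public class DivisionController : ApiController
    {
    }
}
```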

The first project run

From here it’s time to run your project, either by pressing F5 or by opening the menu and selecting “Debug” then “Start Debugging”. After a few seconds, a browser window will open and you will see a 403 error page.

Chill, it’s perfectly normal as no method in our DivisionController is defined and access to your project directory is limited by default. At this point, we can already open Postman and create our first test.

It’s Postman time!

The first test

Now, open Postman and create a new tab. Once the tab is created, copy the URL opened by the Visual Studio debugger in Chrome. In my case, it’s “http://localhost:53825” but yours could be different. Paste that URL in your Postman tab like this:

Next, press “Send” and you shall see the Postman version of the result we observed previously in Chrome.

From here, we can start writing tests that will define our API behavior for the default endpoint that does not exist yet. Here you can notice a couple of things that we will want to change. First, we don’t want that ugly HTML message to be displayed by default but something a little more friendly. I guess a “Hello Maths!” message is friendlier, from a certain point of view. Let’s add a test for that.

If you remember the previous article, you know that you are supposed to go to the tests tab in order to add it. In this case, we will pick the “Response body: Is equal to a string” snippet. You should get some code generated as below:

Next, you will update it to replace “response_body_string” with “Hello Maths!”.

Now that the response test is sorted, let’s add a response code test to validate we should not get that 403 HTTP code. For this, we will use the “Status code: Code is 200” test snippet.

After sending the request again you can see that both tests failed.

Fix the API to make the tests pass

It is now time to write some code to right this wrong. Go back to Visual Studio to modify the DivisionController. We will add an Index method that will return the message we want to see.
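Here is a rough sketch of what that Index method can look like; the empty [Route("")] attribute is an assumption on my side to map the default URL, relying on the attribute routing that the default Web API template enables via config.MapHttpAttributeRoutes():

```csharp
// A sketch of the Index action; the [Route("")] attribute is an assumption to map the root URL.
[HttpGet]
[Route("")]
public HttpResponseMessage Index()
{
    // Build the 200 (OK) response we want to get back.
    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        // Wrap our greeting in a StringContent object as the response body.
        Content = new StringContent("Hello Maths!")
    };

    return response;
}
```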

This code basically creates a new response object with the status code OK (200) that we want to get. In this object, we add a StringContent object that contains our “Hello Maths!” message. Let’s run the Visual Studio solution again by pressing F5.

As you can see, the horrible HTML error page has gone now and we see the “Hello Maths!” greeting. Now, if you run that same request in Postman you will see that now our tests pass.

Now save the request in a new collection that we will call “CalculatingWebApiAppveyor” as below.

You should see in the right tab the newly created collection along with the request we just saved.

Implement the division

If you got this far, you’ve done great already, although our API doesn’t do much yet. It’s time to make it useful. From here, we will add a Divide action that will take a dividend and a divisor as parameters, then return the quotient. You can copy the code below and add it to your controller.
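Here is a sketch of what that Divide action can look like; the route matches the URL we will test in Postman and assumes attribute routing is enabled, as in the Index sketch earlier:

```csharp
// A sketch of the Divide action; the route follows the URL tested below.
[HttpGet]
[Route("divisions/dividends/{dividend}/divisors/{divisor}/_result")]
public IHttpActionResult Divide(int dividend, int divisor)
{
    // Ok(...) puts the quotient straight into the response body with a 200 status,
    // so 10 divided by 2 comes back as 5.
    return Ok(dividend / divisor);
}
```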

You may notice that the code looks simpler than for “Hello Maths!”. Actually, we could have simply returned Ok(“Hello Maths!”). However, this would have returned “Hello Maths!” with the quotes, and our test would not have passed. Now, let’s run the project again and add a test for that division endpoint in Postman.

Test the Division

What we want to do is make sure that our division endpoint actually returns the result of a division. What we will test here is that 10 divided by 2 does give 5. From there, you know that the route to be tested will be “divisions/dividends/10/divisors/2/_result”. Now, create a new tab in Postman and copy the URL from your greetings endpoint. Then, append the route to be tested as below.

Next, we are going to use the “Response body: Is equal to string” snippet to validate that 10 divided by 2 should return 5. Also, we will add a status check just because.

If you followed all the steps correctly, you should see both tests pass and the response is indeed 5.

Now, save that last request as “Validate division works” in the CalculatingWebApiAppveyor collection you created.

Finally, you can run your whole collection and you will see all the tests pass green.

Congratulations! You have a fully functional API, as long as the divisor is not zero, with its own Postman collection. A collection that you can run whenever you like to make sure your API is fine. The one issue though is that you may not be working alone, nor want to run Postman whenever you push a change to GitHub.

There is a way to solve this issue and that’s where Appveyor comes into play. But first, let’s commit and push our changes.

Commit and push your code changes

If you haven’t done it yet, it’s time to commit your changes and push them to your Github repository. First, create a new file named .gitignore. More information about what that file does here.

I personally used the PowerShell New-Item command, but there are countless ways to do that.

Then, open this .gitignore file, which is the default one to use for Visual Studio projects, and copy its contents into the file you created.

Now you can commit and push your changes with a few commands, and then move on to Appveyor. Note that you must run these commands from the directory where your solution and .gitignore are.

Once these commands have executed, you should see your solution with the files created on GitHub.

Get your continuous integration swag on

Create an Appveyor CI account

This is probably the simplest part of this tutorial. Simply go to the Appveyor login page, yes login. From here you can log in with a variety of source control related accounts but pick GitHub.

Once logged in you should land on an empty projects dashboard.

Connect your repository to Appveyor CI

Simply press “New Project” and you will be prompted with a list of repositories you have on your GitHub account.

Select “CalculatingWebApiAppveyor” and press “Add”. After a few seconds, you should see this:

To see how it works, press “New build”. What happens next is that Appveyor will download your source code from GitHub. Then, your source will be compiled, and if there are unit tests in your solution they will be run. But for now, you will see something like this:

Are you surprised? Are you entertained? Because I am. Don’t panic, it’s a benign error caused by the fact that Appveyor does not restore a project’s NuGet packages by default. To get rid of that error, go to the settings tab, then to “Build”.

Scroll down until you see the “Before script” option and enable it by selecting “PS”. Now, a text box should appear for you to input nuget restore like below:

Now, press the “Save” button below and go back to your build dashboard and press “New build” again. If everything goes according to plan you should end up with this:

Congratulations again! You now know how to set up a .NET project on Appveyor.

This is more or less where I would have stopped if I had gone with my original decision of making this tutorial a two-parter. Since it would not make much sense to stop here considering what’s left, we can move on to our Postman collection again.

Setup Newman on Appveyor

Create environments

Now that our project, collection, and continuous integration tools are set up, it is time to put our collection to better use. An automated use. To do so, we will need to update our collection so that it can be run both locally and on Appveyor. In order to achieve that, we will extract the host URLs from our requests and place them in environment files. One we will use locally, the other one on Appveyor.

First, we will create our localhost and Appveyor environments. I will name mine CalculatingWebApiLocalhost and CalculatingWebApiAppveyor. If you don’t remember how to create environments and modify collections to use their variables, I happen to have written a post about it. You need at least the requests’ host to be extracted into an environment variable.

Your localhost environment should contain the URL you have used so far. Your Appveyor one will be “http://localhost”. Once done, you should have two environments that each look like this:

Localhost environment

Appveyor environment

Now your environments are ready, update your collection requests as below.

Greetings request update

Division request update


From here, you can open the collection runner to make sure your collection still works and tests still pass.

Save your collection and environment to your project

It’s time to introduce you to Postman’s exporting feature, because you will now need to move your collection and Appveyor environment to your project. First, let’s export the collection: click on your collection menu button.

After pressing “Export”, you should see this:

Make sure that “Collection v2” is selected then press “Export” again. Now, save the collection in your solution folder.

Next, we will export the Appveyor environment. Go to the “Manage environments” menu, then click on the “Download environment” icon for CalculatingWebApiAppveyor.

Then, save your environment to your solution folder.

The last step, but not the least: commit and push your changes. Here is a reminder:

Now our repository is all set! Let’s get back to Appveyor.

Setup Newman on Appveyor

First, go to the Tests tab:

Then, enter these lines after selecting “PS” on the “After tests script” textbox:

The first line installs Newman on your Appveyor container, prevents dependency warnings and adapts the execution display to Appveyor. The second executes your collection using the environment you created and also adapts the execution display to Appveyor. If you used different filenames for your collection and environment, please update the command to match them. You should have something like this:

Now, go back to the “Latest build” tab and click on “New build”.

After a few moments, you will see that your build fails.

Here you can see that Newman actually tells you what went wrong. All your tests failed, and there was a connection error for each of your collection requests. If your build fails for different reasons, you may want to go a few steps back and try again. But if your failed build looks like the capture above, you’re good to go.

Setup local deployment on Appveyor

Yes, we are very close to finishing setting up our Postman based continuous integration system. Now, we need to tell Appveyor that we want to package our solution and deploy it locally so that we can run our collections against it.

First, we will enable IIS locally. IIS is a service that allows running any kind of .NET web apps or APIs, even though it is not limited to them. To enable IIS, go to the “Environment” settings tab, then click on “Add service” and select “Internet Information Services (IIS)”.

After saving your changes, you will go to the “Build” tab and enable the “Package Web Applications for Web Deploy” option and save again.

That option will generate a zip package that will have the same name as your Appveyor project. What we need to do next is to configure Appveyor to deploy that package on the local IIS. In order to do so, we will go to the “Deployment” tab.

Click on “Add deployment” and select “Local Build Server”. Afterward, we will need to add some settings to tell Appveyor where and how to deploy. To do so, press “Add setting” three times then fill each setting to match these values:

  • CalculatingWebApiAppveyor.deploy_website: true
  • CalculatingWebApiAppveyor.site_name: Default Web Site
  • CalculatingWebApiAppveyor.port: 80

Now, you should see something like this:

Remember the PowerShell script we added in the “Tests” section of the settings? We will need to put it in the “After deployment script” instead. If we don’t do that, the build will always fail since it will try to run our integration tests before locally deploying our application. I will put it here again in case you don’t feel like scrolling up a bit.

If you followed everything your “Deployment” settings tab should look like this:

Don’t forget to save your changes and to update your “Tests” tab. Now, your “Tests” settings tab should look like that again:


After saving it, go back to “Latest build” and press “New Build”. Then, you will see that everything simply works.

Well done!

What’s next?

Now that you know how to set up Newman-powered API tests on Appveyor using GitHub, you can chill and call it a day. However, you can also show off your mastery of CI by adding your project badge to your README file.

Note that Appveyor allows you to deploy only when you push commits to your repository, whether it is a direct push or a pull request being merged. Nevertheless, if you have a private Appveyor account you can enable an option to allow local deployment to run your API tests even on pull requests.

Thanks for reading, I hope you enjoyed reading this as much as I enjoyed writing. Also, I would like to shout out a big thanks to Postman labs for featuring my previous post in their favorites of March, that was a really nice surprise.

Good luck helping to fill this world with more future-proof software every day!

NB: If you don’t feel like creating the Web API project and you scrolled straight to the end of the post to get the sources, help yourself.

Going full necromancian on old projects

Reading Time: 2 minutes

Old projects often end up in what I call development hell. This odd place where some projects with good potential become stale after a release, or die because they were too late for their market. Very often personal projects end up there, even when they are open source. Open source really seems like a tool to help spread and share knowledge worldwide, depending on the kind of project. Before open source was democratized by the likes of GitHub, a huge number of personal projects probably took years to be released, when they were not abandoned.

Today I decided to do the only kind of necromancy one should do: bringing an old project back to life. A project for which I had already written the code for the model and business logic. I had to create a user interface and build a user experience as sleek as possible. However, I stopped it due to my vision of a market I thought was crowded, along with a lack of time. Now I see clearly that was not as true as I thought.

Here we are. Two months without posting here, eleven months without working on a personal development project, and I am back at it. I spent my past weekends alternating between gym, parties and sleep. Luckily, I work in a position where I can keep my brain stimulated. Indeed, when not investigating an issue on one of our live apps or working on our platform features, I am defining development tools and processes to be used at company scale within the next months. Eventually, a lot of cool things will come out of that. I will definitely post a few related tutorials depending on my schedule.

About the blog, I will try to post more regularly than I have, maybe a tutorial. In terms of work, I would like to share with you the video that we recorded last week. It is basically the new company careers video, which I like not just because I am in it. You can definitely check it out below:

If you made it this far into the post, first I would like to thank you for reading and watching the video. Second, do not abandon your old projects if you are not 100% sure they are dead. Check your old source code, even if it is just to mock your old coding style. In the end you could actually have something worth the hassle.

Everybody wants to be right, worth it?

Reading Time: 3 minutes

Be right or “be right”, getting the truth on your side or winning an argument? We find people doing both in many domains, in science as well as in everyday life. When you are a lawyer, winning an argument is the end game. For some time I wanted to be one, which explains why I was like that for a while when I was younger. I listened to people mostly to crush their arguments, because it feels good to be right. Even though the intention was not glorious, I became a pretty good listener. It allowed me to be a bit more aware of my surroundings and pay more attention to people and their opinions, as they can help my reflection going forward.

Over time this dragged me into looking more for the truth rather than for a truth that would suit me. Actually, it brought me closer to the scientific mindset I built during my studies and allowed me to be a little more objective. Constantly trying to be right instead of searching for the truth never served science. However, objectivity allows you to think better, as the focus is on facts and not emotions.

I ran into that TED talk by Julia Galef about wanting to be right even when you are wrong. I found it really interesting, mostly because I used to have that soldier mindset at some point. Even after going through scientific studies and cultivating myself as much as possible, I probably have a bit of it left. Grabbing a theory and trying to prove it true is not a bad thing, as science progresses through research. But it becomes terrible if you keep hanging on to it even when proved wrong. Indeed, you may find reasons to keep your theory breathing, but it will not make it less wrong.

I will let you have a look at the video first, as she talks about it way better than I can, then we will go back to how this can apply to computer science.

Scout mindset. If you skipped the video, you should go back to it to figure that out. If you do not want to check it out for whatever reason, the scout mindset is what sets you to look for the truth rather than to be right. Yearn for facts and proofs rather than feelings and spoofs (I am not looking at the Brexit voters at all). History taught us that being too much in that soldier mindset she describes can be extremely damaging. Many examples exist around the world: Nazis, Ukip, Turkey now? But let us go back to why you are here, computer science.

In IT, the main area where that scout mindset will be of use is basically any aspect of software development. Whether it is about picking a working methodology, building a new feature or simply fixing bugs. The main issue when you build something is that you will naturally be more defensive about it, whether you did well or not. And it is very hard, when developing something on your own, to detect flaws; otherwise you would not have written that code in the first place. As a way of getting around that we have pair programming, which works mostly when the collaborators have opposing views. You can also discuss with tech people among your friends and acquaintances, as they may provide you with a fresher view. Discussing with colleagues during a standup meeting or during a coffee break can help you open your mind a little more.

I feel like I have been confusing in this article, so I will try to sum up a bit. There are a couple of things to retain, actually. First, yearn for facts and clues, as that is what makes research progress in any domain. Second, listen to people. Not just hearing them, but paying attention to the input they can provide, even if you do not agree. Often these are people you would usually ignore because you think you are better than them or whatever. Listening will save you time or open your mind with their knowledge, a knowledge you may not have. Being able to find a solution that makes you right feels good, even if it is not the best one. Being right is like victory, it always feels good but it does not necessarily make you progress.

I hope you enjoyed the article. For more food for thought, I recommend checking out the TED Talks, which cover a lot of interesting topics about everything. I personally focus on Technology, Society and Science subjects, but there is a lot more than that. Who knows, maybe some day you will see me in one of those videos. On those words, have a nice evening, day, weekend or whatever this is to you. And oh yeah, get a free meme.

Being technically right

Slack is fucking amazing!

Reading Time: 2 minutes

I don’t know if it’s the fact that I became TechHub fussball champion a couple of weeks ago or that I conquered the Breakfast Club pancakes challenge yesterday, but I feel like Slack is the best thing ever for all the startups out there.

I had some code compiling when I decided to give Slack a go. People had told me about it and I had not seen anything great in there. TechHub members have access to a common Slack to get all the latest updates about TechHub and what happens in its community, but I never saw it as more than another social network. Creating a Slack account for Lodgeo was the first of several steps that now get me to think it is a must-have for a startup.

Why, you will ask? First of all, for small teams it is completely free. Then, from 10 users on, the cost is not that big considering all you can do with it (unlimited integrations). Yes, Slack allows integrating with tons of tools that are mostly free to use or low-cost for smaller teams. At the moment I have integrated some dev-oriented components such as GitHub or Jira, to have easy-to-access heads-ups for any projects you want to keep an eye on, along with marketing-oriented components like Intercom. For those who do not know those names well, Google is your best ally. You can even create your own Slack integration; I will look into those seemingly infinite possibilities later.


Slack homepage

I cannot convince you with words to use Slack for your company, as I was not convinced when I was told about it, but if you are, that’s great. You should just go and have a look by yourself; even if you don’t have a startup you can just have a look and feel that fire inside you. Ok, it may be the pancakes talking here.