a coding nagger's blog

My name is Jean-Dominique Nguele and this is my blog. FLVCTVAT NEC MERGITVR

Tag Archives: c#


Did you notice a MemoryCache problem with decimal precision?

Reading Time: 3 minutes

A little bit of context first

Hi everyone. A couple of months ago, as I was building the CoinzProfit API, I ran into a weird issue with MemoryCache. To avoid hitting the various APIs CoinzProfit depends on, like Coinbase’s, too often, I decided to implement caching. A cache allows keeping users’ calculated profits and various currency rates around without having to fetch data too often. To save on costs, since the app is free and has no ads, I used .NET Core in-memory caching.

Basically, when I compute profits, investments and so on, I cache the computed result in GBP if my app is set to display amounts in GBP. When querying the data again, if the app’s query currency does not match the cached one, I convert it. Similarly, if I change my app settings to display USD, then the data displays in USD. This allows for successive requests that do not require calling the Coinbase and Binance APIs earlier than needed. All was good and well until I noticed an issue with my computed profits that would change dramatically in a seemingly random fashion.

MemoryCache-Cache

Originally I thought that maybe the conversion rates I retrieved were not accurate enough. This could perfectly explain why the conversion works in one direction but goes random in the other. After a couple of hours playing hide-and-seek (or “cache-cache” in French), MemoryCache revealed itself to be the source of my problem. Please do not act surprised, it was the only suspect and kinda is the focus of this post.

The reason my conversion was all over the place is that my conversion rates were truncated when cached. To validate that assertion, I injected a caching implementation that would always fail to return data, so that I would always get fresh data and conversion rates. Once I confirmed the issue, I resorted to serialisation to preserve decimal precision on the data I needed to cache.
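For illustration, here is a minimal sketch of that kind of workaround; the extension-method names are mine, not from the actual CoinzProfit code:

    // Hypothetical sketch: cache decimals serialised as invariant-culture strings
    // so that no precision is lost in the cache.
    using System.Globalization;
    using Microsoft.Extensions.Caching.Memory;

    public static class DecimalCacheExtensions
    {
        public static void SetDecimal(this IMemoryCache cache, object key, decimal value) =>
            cache.Set(key, value.ToString(CultureInfo.InvariantCulture));

        public static decimal GetDecimal(this IMemoryCache cache, object key) =>
            decimal.Parse((string)cache.Get(key), CultureInfo.InvariantCulture);
    }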

Two months later

It has now been two months since I ran into that issue. I originally planned on writing this post way earlier but I had a lot on my plate. That precision issue may be completely gone by now. This is what I will try and test today. I will just set up a simple .NET Core console application, write some decimal data through MemoryCache and read it back to see if it now preserves precision. Note that I built the API using one of the .NET Core 2.0.* versions and that version 2.1.1 is currently available. Therefore, today I will use the latter for this little experiment.

If you want to try that experiment at home, you can install VSCode and .NET Core if you have not done so yet, then run the following commands.
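The commands were originally embedded as a gist; a reconstruction with the standard dotnet CLI could look like this (the project name is my own placeholder):

    dotnet new console -o MemoryCacheTest
    cd MemoryCacheTest
    dotnet add package Microsoft.Extensions.Caching.Memory
    dotnet restore
    dotnet build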

In order, these commands:

  1. Create a .NET Core console app project
  2. Move into the project folder
  3. Install the Microsoft.Extensions.Caching.Memory package from NuGet
  4. Restore the packages and build the project

Now you can copy the code from the file below into your Program.cs.
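The gist is not embedded in this copy, but a minimal Program.cs for this experiment could look like the sketch below, assuming the goal is simply to write a high-precision decimal into the cache and read it back:

    using System;
    using Microsoft.Extensions.Caching.Memory;

    namespace MemoryCacheTest
    {
        class Program
        {
            static void Main(string[] args)
            {
                var cache = new MemoryCache(new MemoryCacheOptions());

                // A conversion rate with plenty of significant digits
                decimal rate = 0.1234567890123456789m;
                cache.Set("rate", rate);

                decimal cached = cache.Get<decimal>("rate");
                Console.WriteLine($"Original: {rate}");
                Console.WriteLine($"Cached:   {cached}");
                Console.WriteLine(rate == cached ? "Precision preserved" : "Precision lost");
            }
        }
    }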

Are you ready to run this? Run the command dotnet run

classic memorycache

Hmmmmmmm

What do you see right now? Exactly, it does work now. I’m not even mad. It’s kinda amazing that Microsoft fixed that thing in such a short period of time. I will try later to reproduce the bug once I find the exact version of .NET Core I used when I ran into the issue. Expect an update or a part 2 to this post at some point. That is unless I forget about it. If you run into the same issue, do let me know which version of .NET Core you have installed.

NDepend’s static code analysis: Noob Review

Reading Time: 8 minutes

Today I am going to do something I have not done before. A couple of months ago I was contacted by NDepend to play around with their software. I did not check, but there is probably a fair amount of software reviews out there. Hence why I will try a hopefully different approach. A noob approach. I’ll read the promise from the software to review and just dive into it without any sort of guidance. Let’s call it Noob Review. Yep, that’s how you create a series that might or might not live longer than a post.

NDepend’s promise

According to the website homepage, NDepend is the only Visual Studio extension able to tell us developers about the technical debt we create, which would then allow undoing that debt before it gets committed. The alleged debt is calculated from a set of predefined rules written as LINQ queries, and we can add our own queries too.
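For instance, a custom rule in CQLinq (the LINQ-based query language NDepend uses) could look roughly like this; the threshold is an arbitrary example of mine:

    // <Name>Example custom rule: methods too big</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 30
    select new { m, m.NbLinesOfCode }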

Enough with introductions, let’s just get noobie!

Getting started

I downloaded the latest version, 2018.1.0, released on Wednesday; you can find the link to the latest version here. Upon download, NDepend presents itself as a ZIP archive containing some executables and a Visual Studio extension installer.

As you can see below, the installer proposes to install the NDepend extension on Visual Studio versions all the way back to VS2010.

From there I just installed the extension using the licence key the NDepend team nicely offered me.

Let’s jump right into it

From now on, I am going full improv. I will have no idea of what I am doing, because that is what most people do when they get a new tool. That approach works when you know how to use a pencil and grab a pen for the first time. It might be a bit more entertaining if I do so with NDepend. Since it is a tool that should allow me to detect technical debt, I will write an OK piece of code, then some less OK code, and see what happens.

The project

First things first, I created a console project running on .NET Core. I do not see anything trigger automatically. Being used to ReSharper, I check the toolbar and see an NDepend menu that was not there before.

Opening that menu I see that I can create an NDepend project which I do.

Once the project is created, it seems that I still cannot analyze my code. Not before attaching my solution to the NDepend project, which I do.

After attaching the project, I went back to run an analysis on my console app but I kept getting this error:

Turns out the NDepend project did not pick up the Visual Studio project. I then closed Visual Studio and reopened it, yet I had the same error after loading the NDepend project and attempting to run the analysis again. Paying more attention to the error message this time, I noticed the error was about a reference to my solution not being loaded in NDepend. I conjectured that these errors occur because I did not create the NDepend project in the same directory as my solution. Probably a noob error on my end. So I went on to edit the NDepend project properties.

Above, you can see the NDepend project properties after I added the reference to my solution using the “Add Assemblies from VS solution(s)” button. It seems that it loaded the binary generated by the solution, along with the third-party binaries used by my solution, System.Runtime and System.Console. After that, I ran another analysis and it eventually worked, as you can see below:

First analysis

Now that I finally set up the static analysis properly I can dive into what it reveals from a basic “Hello World!” console app. After that first successful analysis run, I could see that the NDepend menu changed. A whole new world opened to me. As a first reflex, I opened the “Rules” submenu. From there I could see that a rule was violated.

What rule could Microsoft’s “Hello World!” code possibly have violated? Well, look down.

Microsoft’s coding sins

“Class with no descendant should be sealed if possible”. It is actually more of a warning. A cool bit I noted is that you even get a more detailed description of what causes the warning, along with why you should consider addressing it.

I always learned that we should have as few warnings as possible, so let’s clean that up and make our Program class sealed. After making the change, I re-ran the analysis and got the same result and broken rules as before. There was also a message telling me that my file Program.cs was out of sync. I got a hunch and rebuilt the solution. Then the analysis result views updated.
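For reference, the fix is a one-word change to the template code:

    // The rule asks for classes without descendants to be sealed.
    sealed class Program
    {
        static void Main(string[] args)
        {
            System.Console.WriteLine("Hello World!");
        }
    }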

NDepend all green

Technical debt (feel free to skip this part)

Now that the code is green and clean, it is time to try and build some technical debt. If you are not familiar with that term, I will try to sum it up for you. Technical debt is the implied cost of rework needed in the future when choosing a quick and easy solution over one that would be more thorough but would take more time. More often than not, choosing the easy way will hit you back. It will hit you hard.

Let’s say you take a complex subject at school. You could put in place a system to cheat to get good grades. It is easy and does not require extensive preparation work. Yet you can get caught and lose everything. Also, the ink can ruin your cheatsheet. Or, you could learn that subject and try to do your best mastering it class after class, exercise after exercise. You will not necessarily feel the effort was worth it from the start but eventually it will pay off. Learning your subject from the start is hard but you get more confidence to build on top of. Building technical debt on purpose is basically cheating on your Geometry class from high school. Don’t cheat on your Geometry class.

Back on the dirty coding

I felt like I did not want to spend months writing the perfect imperfect piece of code, so I just googled “c# bad practices” and opened the first result that came up. From there I just copied the method and adjusted it to be called in our Main(). You can copy the code below if you are trying to reproduce the experiment.
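The exact snippet lives in the gist; for context, a hypothetical reconstruction based on the issues listed further down (magic numbers, duplicated discount logic, a silent 0 fallback) might look like this:

    // Hypothetical reconstruction of the copied "bad practices" method, not the exact original.
    public decimal Calculate(decimal amount, int type, int years)
    {
        decimal result = 0;
        decimal disc = years > 5 ? (decimal)5 / 100 : (decimal)years / 100; // magic numbers
        if (type == 1)
        {
            result = amount;
        }
        else if (type == 2)
        {
            // discount logic, first copy
            result = (amount - (0.1m * amount)) - disc * (amount - (0.1m * amount));
        }
        else if (type == 3)
        {
            result = (0.7m * amount) - disc * (0.7m * amount);
        }
        else if (type == 4)
        {
            // same discount logic written again (DRY violation)
            result = (amount - (0.5m * amount)) - disc * (amount - (0.5m * amount));
        }
        return result; // silently returns 0 when no branch matches
    }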

Once the code was ready, I rebuilt the solution and ran a new analysis.

The post mentioned earlier points out a few things wrong with that code, some of which would be unfair to hold against NDepend here. However, I will keep the points that I wish had been picked up and were not.

What was found

  • The Calculate() method is public yet accessed by only one method in a console app. I had hoped to see more findings about the copied code itself rather than about how I access it from my Main() method.

What was not found

  • The use of the magic numbers 0.1, 0.5, 0.7, 1, 2, 3 and 4.
  • Disrespect of the DRY principle with the same piece of logic written twice for discount calculation.
  • An alleged bug where the calculated price is 0 when none of the if-else is matched (to be fair, it might be a valid business logic in some cases but a warning would be welcome).

It can be considered unfair to point these out and it might be. I will try to spend some time later to see whether it would be possible to create custom rules to spot any of these. That will definitely be a fun exercise. Feel free to try the same at home.

Wrapping up

I originally planned on adding a section where I would try to get more warnings and errors, but that would be outside the boundaries of what I want a Noob Review to be. Covering more complex cases and custom queries would be more fitting for a separate follow-up post anyway. Since there are loads of things currently happening in my life, that post might not happen for a while. That being said, let’s wrap up with some pros and cons I noted during this quick take.

Pros

  • NDepend provides a clear explanation of what rules are broken and where in your project.
  • You don’t need an engineering degree to get started without a manual. I do have one, so that might invalidate my comment; people without engineering degrees, let me know if you found it hard to use without checking any guide.

Cons

  • Have to re-open the NDepend project when restarting Visual Studio.
  • Have to rebuild the project after editing a file to verify that no rule was violated or that a violation was fixed; building the project does not automatically trigger NDepend’s analysis either.
  • Not available on VSCode or VS for Mac.

Special note on customization

While people love to customize things, I do not trust myself to write a rules engine that determines my code’s quality. I am likely to make a mistake in there and not notice it. I may actually move this to the pros after experimenting with it more.

Closing

After that first experiment, I do not think I would use NDepend for my personal projects. The cons I pointed out above outweigh the pros in my opinion. I do believe that spending more time with NDepend could change my vision of it and maybe make me realise that it fits my needs more than I think. I am no evangelist or influencer, and even if I were, or become one by the time you read this, you should not take this post as absolute truth. It is a Noob Review after all; it cannot be right nor fair. My piece of advice is to go and have a look for yourself. If this post piqued your interest, you should download NDepend and figure out whether it fits your needs. You get a 14-day trial to play with it. Happy experimentation!

Continuous delivery for free using Docker, CircleCI and Heroku

Reading Time: 10 minutes

Continuous what?

Continuous delivery. You may recall that in my previous post I announced that today’s entry would revolve around continuous integration. Technically it can count as such, since we will cover continuous integration along with the next step. That next step is continuous delivery. If you are not familiar with these terms and the concepts behind them, I will sum them up briefly.

Basically, continuous integration allows verifying that your codebase still builds and that its tests still pass whenever you push changes. Add a trigger to deploy your code to production upon success and you pretty much have the idea behind continuous delivery.

These practices mainly help to make sure that you don’t break your codebase when pushing changes. This is good when you work alone but a lifesaver when working in a team. You cannot imagine how many hours I wasted, mostly during my studying years, because of code breaking without us realizing it for days. Using source control was already a miracle in itself at a time when there were limited options for continuous integration, especially for students. If you want more details about source control workflows, the GitHub Flow is a great place to start.

Let’s just jump into it!

Back to today’s topic, continuous delivery. Before I start inundating you with scripts and screen captures, you need to be familiar with a few things:

Retrieving the code

Since you read my previous tutorial, you should know more or less what the code does. It is the classic Values API sample returning an array with two values “value1” and “value2”. From there, the easiest step is to fork the repository created from that previous post which you can reach by clicking here.

fork that repo

Fork that repo, yep that one. Just do it!

Once the fork is complete, you will have an exact copy of my repository where you can push changes for the rest of the tutorial. If you have not already, clone your fork to your machine for the next stage.

Getting acquainted with Docker

Docker is going to be key for today’s tutorial. Why? I hear you ask. Because CircleCI does not support C# for continuous integration. Neither does Heroku for deployment, at least not officially but we’ll get back to that later. But do you know what is supported by both that we can use? Docker container images.

Basically, consider a Docker container image as a box in which you put everything your software needs to run properly, from code to settings to system tools and libraries. Containerized software will always run the same way regardless of the environment, completely isolated from its surroundings. The cool thing about this? Well, it works in any environment, whether you run it on Windows, Mac or Linux, as long as your computer supports VT-d virtualization, which should be the case if your device is no more than a couple of years old. You can then make sure your container behaves as you expect locally before deploying. Also note that if you cannot run Docker on your local machine, you can still commit the Docker files and it will work on CircleCI.

First things first, you will need to install Docker Community Edition, which is free and available at this link. The installation is pretty straightforward, so nothing special to mention here. If Docker is not supported on your machine, you will get a message when trying to install it on Windows; the same should happen if you try on Mac. If that is the case, don’t worry, you can still go through the tutorial and won’t be missing that much.

Making our tests ready to work on CircleCI

As mentioned previously, if we try to build our API straight away on CircleCI, it will fail. Not because the code does not work, but because it is not supported. In order to get our tests running, we will run them in a containerized way. We don’t need to create a container image yet, only to get an existing one that supports running them.

The first thing to do is to create a Docker Compose file that will get an image supporting .NET Core 2 applications and run our tests inside of it using the docker-compose command. You need to create a file named docker-compose.unittests.yml at the root of your repository. Once that’s done, copy into it the contents of the gist below:
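The gist itself is not reproduced in this copy; a minimal sketch of such a compose file, assuming the .NET Core 2.0 SDK image and the test script introduced just below, could be:

    version: "3"
    services:
      unit-tests:
        image: microsoft/dotnet:2.0-sdk
        volumes:
          - .:/app
        working_dir: /app
        entrypoint: ./docker-run-unittests.sh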

Now we need to write the script that will allow our continuous integration tool to restore the solution within the container image and then run the tests. Here is the script to copy inside a file named docker-run-unittests.sh, still at the root of your solution:
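Again, a sketch rather than the original gist; the test project path assumes the solution layout from the previous post:

    #!/usr/bin/env bash
    # Stop and fail the build as soon as any command errors (explained below).
    set -eu -o pipefail

    dotnet restore
    dotnet test DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj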

You may notice a line that is unusual to most people: the command set -eu -o pipefail. A short and stupid explanation is that it halts the script and makes the build process fail as soon as an error occurs. If your build does not compile or your tests fail, that command makes the docker-compose command fail in turn, which triggers an error and lets your CI system know the build failed.

Now that we have our tests ready to run within a container, we will run them locally to make sure we’re all set. In order to do so, run the following command in your favourite terminal. This assumes that you are in your solution folder and that you can run Docker commands on your machine.
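Assuming the file and service names above, the command would be something like:

    docker-compose -f docker-compose.unittests.yml run unit-tests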

Running that command will give you an output similar to this:

Docker test run result

We are now able to run tests in any environment supporting Docker. Let’s now set up our continuous integration tool.

Continuous integration

Configuring for CircleCI

Now that we have all the Docker configuration ready to run tests, we can configure our project for continuous integration on CircleCI. The first thing to do here is to create a .circleci folder in your solution folder. Then, you will create a config.yml file inside of it so that its relative path to your solution is .circleci/config.yml. Into that file you will copy these contents:
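The exact gist is not embedded here; a minimal CircleCI 2.0 configuration that runs the compose file could look like this sketch:

    version: 2
    jobs:
      build:
        machine: true
        steps:
          - checkout
          - run:
              name: Run unit tests in Docker
              command: docker-compose -f docker-compose.unittests.yml run unit-tests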

Commit and push your changes, then move onto the next section.

Setting up CircleCI

CircleCI is a platform used for continuous integration and continuous delivery. I picked it for today’s post because it’s free and can be good if you just want to play around. Also, it can be great if you are creating a new business and want to keep the costs low before scaling up.

The first step here is to create an account. You can reach their signup page by clicking here. Once there you should see this screen:

Now you need to press “Sign Up with GitHub” to create your CircleCI account. This will land you on a page where GitHub will ask you if you want to grant CircleCI various permissions. As you will see below it will require your email address(es) and repository access rights.

Press “Authorize circleci” to move on to the next step. You will then see a welcome screen as below.

If you noticed the arrow and the red circle, you know where to click next. If not, press “Add projects”. You will see the forked repository name appear. Next to it, you will notice a “Setup project” link; press it.

No red mark to show where to click this time

Now that you have pressed the right link, you should see the project setup screen. You can leave the operating system as Linux and select “Other” as the language.

Once you’ve done that, a feedback box will appear asking what language you intend to use. I suppose it is to help prioritise what they should add next to their roadmap. Don’t feel obliged to put C#, as it might make the unit testing part of this post obsolete. Not that I would mind much, because then I could update this post to drop the build and test magic you were introduced to previously.

Next, scrolling down, you should see a set of instructions to get the build to run, but we already took care of that.

Now you can press “Start building” which will send you to your first build screen. Your build might be queuing for a few seconds before starting as below:

Your first Docker powered CircleCI build

In our case, there is not much going on apart from the test run, so after a couple of minutes at most you should get your successful build.

Successful build circleci

Build passing CI on the first try? A win in my book

Now that our CI tool is ready to build and validate our software, it’s time to prepare for deployment.

Time to deploy that API

Deployment over 9000 with Heroku

Heroku is a platform that allows developers to deploy, manage and scale web apps. It supports most modern technologies and languages, such as Node.js, Java, Go and many more. However, it does not officially support .NET Core, even though extensions from Github (buildpacks) can provide some sort of support. But today we are not going to do that.

The first thing you need to do now is create an account. You can do so by clicking here. Once your account is created, you will see a screen prompting you to create a new app.

heroku create app screen

Easy

Now you can press “Create New App”, and you will be asked to pick a name and region. For this tutorial, the region does not matter and you can pick any name you like.

heroku create app

From here, press “Create app” to create your app and access its dashboard.

heroku new app dashboard

Now that the app is ready to receive our API deployment, you need to get your Heroku API key so that we can deploy our code to Heroku from CircleCI. In order to do so, you will have to access your Heroku settings. To get there, click on your profile icon (top right of the screen) and you should see this menu pop up.

Heroku profile menu

Next, click “Account settings”. Once on the settings page scroll down until you see this:

heroku api key

Finally, press “Reveal” to display your API key and save it somewhere close, for we will use it soon.

Creating our own docker image to run the API

Here we are, the time when we create our own (maybe your first) Docker image. The first step is to create our Dockerfile in the project folder.

Our Dockerfile is pretty standard: it sets up an environment able to build and run .NET Core apps, then restores our project and publishes it locally, to eventually run it using the port number passed in by Docker.
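The Dockerfile itself is in a gist; a sketch matching that description, with the image tag and paths being my assumptions, could be:

    # Hypothetical sketch of the Dockerfile described above.
    FROM microsoft/dotnet:2.0-sdk
    WORKDIR /app

    # Restore and publish the API
    COPY . .
    RUN dotnet restore
    RUN dotnet publish -c Release -o out

    # Heroku passes the port to listen on through the PORT environment variable
    CMD ASPNETCORE_URLS=http://+:$PORT dotnet out/DotNetCoreSampleApi.dll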

Now that our Dockerfile is ready to go, we will add a .dockerignore file, which is a list of files/folders we want Docker to ignore. In our case, we want to make our build context as small as possible, so we will ignore binaries as you can see below:
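A .dockerignore that ignores the build output can be as small as this:

    # Keep the Docker build context small
    bin/
    obj/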

Once the file is created, if you can run Docker locally, you may run the following commands to make sure your setup is valid:
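The original commands are in a gist; they would be along these lines (the image name and port are my placeholders):

    docker build -t dotnetcoresampleapi .
    docker run -p 5000:5000 -e PORT=5000 dotnetcoresampleapi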

Yet again, if you cannot run Docker locally, you will see the results on CircleCI later.

Updating the CircleCI config to set up our continuous delivery

We are almost there! It is time to put the delivery in continuous delivery. Now that we have our Docker image configuration ready, we can finalize our CircleCI configuration. Before editing the configuration file, we need to add our Heroku credentials to the project’s environment variables. In order to do so, go back to your dashboard. From there, press your build’s settings button, which should look like this:

Then, click “Environment Variables” and add the email address you registered with on Heroku as HEROKU_USERNAME. Afterwards, add your Heroku API key as HEROKU_API_KEY. Finally, add your Heroku app name as HEROKU_APP_NAME.

After adding the variables, we can now update our CircleCI configuration file with the deployment steps.
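The updated file lives in a gist; a rough sketch of the added deployment step, using Heroku’s container registry and the environment variables defined above, might be:

    version: 2
    jobs:
      build:
        machine: true
        steps:
          - checkout
          - run:
              name: Run unit tests in Docker
              command: docker-compose -f docker-compose.unittests.yml run unit-tests
          - run:
              name: Build and push the image to Heroku's container registry
              command: |
                docker build -t registry.heroku.com/$HEROKU_APP_NAME/web DotNetCoreSampleApi
                docker login --username=$HEROKU_USERNAME --password=$HEROKU_API_KEY registry.heroku.com
                docker push registry.heroku.com/$HEROKU_APP_NAME/web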

Basically, what we do in that file is build our Docker image, then authenticate to Heroku to eventually push our image to Heroku’s container registry. Now it is time to commit and push our changes for the last time. If you go back to CircleCI, you should see that your build was successful.

continuous delivery all green

Continuous delivery in action, simply beautiful

Now, if you go to your Heroku app using https://<your-app-name>.herokuapp.com/api/values, you will see the following result.

Continuous delivery ✓✓✓

Congratulations! You are now smarter than you were 30 minutes ago! Not only do you know how to set up continuous delivery using CircleCI and Heroku, but you can also build a Docker container image. If you missed anything, don’t hesitate to check the source code there.

What your solution folder structure should look like now.

Note that for the sake of brevity, I chose to put all the commands in the CircleCI build job. I also did not put any condition on which branch gets deployed, which is a check you should always have to avoid publishing a test build to production. In a continuous delivery setup, pushing code to the dev branch should trigger a deployment to the development environment, pushing code to master should trigger a deployment to production, and so on. You can figure out how to do this using condition-based instructions and the deployment job here.
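As a hedged illustration, CircleCI 2.0 workflows can restrict a deployment job to a branch like so (assuming separate build and deploy jobs):

    # Sketch: only deploy from master.
    workflows:
      version: 2
      build-and-deploy:
        jobs:
          - build
          - deploy:
              requires:
                - build
              filters:
                branches:
                  only: master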

Based on your feedback, I may write a quick guide on setting up CI for multiple environments using this post as a basis. Since I have a few other things in the pipeline for the next few months, it might not happen for a while.

Thanks again for reading; if this was any use to you, don’t hesitate to share and subscribe to get more of these. The next future-proof entry should be about what you can do to keep your continuous delivery from turning into this:

.NET Core CLI Tools: Build a web API in 10 minutes

Reading Time: 7 minutes

This tutorial is an introduction to .NET Core CLI tools. More precisely, it is about creating a web API using the CLI tools provided for .NET Core. Whether you are a beginner in development or just new to .NET Core, this tutorial is for you. However, you need to be familiar with what an API is and with unit tests to fully enjoy this tutorial. Today, we will set up a solution grouping an API project and a test project.

For the next steps, you will need to install .NET Core and Visual Studio Code (referred to as VSCode later for the sake of brevity), both of which are supported on Mac, Unix and Windows. If you want to know how that multi-platform framework works, have a look here.

Creating the solution

First things first, we will open a terminal (or PowerShell for Windows users). Once that is done, we can create our solution, which I will name DotNetCoreSampleApi, as follows:
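The command was embedded as a gist; a reconstruction with the current CLI would be (the -o flag, which creates the folder described next, is my assumption):

    dotnet new sln -n DotNetCoreSampleApi -o DotNetCoreSampleApi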

This command will create a new folder named DotNetCoreSampleApi containing a solution file with the surprising name DotNetCoreSampleApi.sln. Next, we will enter that folder.

Creating and running the sample web API

Now that the solution is here, we can create our API project. Because I am not the most creative mind, I will also name it DotNetCoreSampleApi. Here is the command to create the project:
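Reconstructed from context:

    dotnet new webapi -n DotNetCoreSampleApi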

That command will create a subfolder named DotNetCoreSampleApi inside your solution folder DotNetCoreSampleApi. If you followed all the steps, your solution root should contain a file DotNetCoreSampleApi.sln and the web API folder DotNetCoreSampleApi. The web API folder should contain a few files, but the one we need now is DotNetCoreSampleApi.csproj. We will add a reference to it in our solution. To do so, run the following command:
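The embedded command should be equivalent to:

    dotnet sln add DotNetCoreSampleApi/DotNetCoreSampleApi.csproj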

After getting a confirmation message, we can now start the API by running this command:
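A hedged reconstruction, run from the solution folder (the --project flag is my assumption; running dotnet run from inside the project folder works too):

    dotnet run --project DotNetCoreSampleApi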

After a few seconds, it should display a message notifying you that the API is now running locally. You may access it at http://localhost:5000/api/values which is the Values API default endpoint.

Adding the test project to the solution

You may be aching to see some code by now, but unfortunately you will have to wait a bit more. Back in the days of .NET Framework, there was no such thing as generating projects from the command line; you had to use cumbersome windows to pick what you needed to create. Now all of this project generation can be done from the command line thanks to the CLI tools. You will like it. And this is merely a suggestion. Back to the terminal. If the API is still running, you may kill it by pressing Ctrl+C in the window you opened it in.

We are now able to create a test project and add it to the solution. First, let’s create the test project using dotnet new as follows:
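Reconstructed from context:

    dotnet new mstest -n DotNetCoreSampleApi.Tests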

That command creates a new unit test project using MSTest in a new folder with the name DotNetCoreSampleApi.Tests. Note that if you are more of an xUnit person, you can replace mstest in the command with xunit, which will create an xUnit test project. Now, similarly to what we did for our web API project, we will add our test project to the solution:
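Reconstructed from context:

    dotnet sln add DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj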

Almost instantly you should have a confirmation that the project was added.

Getting acquainted with VSCode

Now, open VSCode and open the folder containing the file DotNetCoreSampleApi.sln. At this point you should have this structure in the folder:

If you have never used VSCode before, or at least not for C# development, you will be prompted to install the C# extension:

Select “Show Recommendations” and apply what VSCode suggests. Then, once you have finished installing the C# extension, you will get a warning about adding missing assets to build and debug the project; select “Yes”.

Don’t hesitate to go back a few steps, or even to restart this tutorial, if something does not seem to work as expected. Here is how your test folder should look by now:

Time to write our test

And finally, we are getting to the fun code-writing part. The part where we put aside our dear CLI tools. By code writing I mean copy/pasting the code I will show you later. And by fun, I mean code that compiles. There is nothing more frustrating than code that does not compile, especially when you have no idea why. Fortunately, this will not happen here.

Now that you have your code editor ready to use, you can go ahead and delete the UnitTest1.cs file. Once done, create a new file named ValuesControllerTests.cs in your test project. Your VSCode window should then look more or less like this:

The file should be empty, but in case it is not, delete its contents to match the screenshot above. As soon as you get your nice and empty file, copy the code below into it:
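The gist is not embedded in this copy; a version consistent with the rest of the post (including the “value2 is not returned” message referenced later) would be:

    using System.Linq;
    using DotNetCoreSampleApi.Controllers;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    namespace DotNetCoreSampleApi.Tests
    {
        [TestClass]
        public class ValuesControllerTests
        {
            [TestMethod]
            public void Get_ReturnsValue1AndValue2()
            {
                // The template controller returns ["value1", "value2"]
                var result = new ValuesController().Get();

                Assert.IsTrue(result.Contains("value1"), "value1 is not returned");
                Assert.IsTrue(result.Contains("value2"), "value2 is not returned");
            }
        }
    }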

Now you should get some warnings, which is perfectly fine because they should be here. If you hover over these, you will see some referencing-related error messages like below:

These appear because we have not referenced the API project from our test project yet. It is time to open your terminal again. However, if you feel like having a bit of an adventure, you can try VSCode’s integrated terminal, which will open in your solution folder. To do so, press Ctrl+' while in VSCode, or Ctrl+` if you’re using a Mac; either should work on Unix.

Once the terminal is open, we will reference our API project from the test one with this command:
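Reconstructed from context (run from the solution folder):

    dotnet add DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj reference DotNetCoreSampleApi/DotNetCoreSampleApi.csproj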

If you don’t see the full command above, you can still copy it using the copy button present when hovering.

Now that the reference to the API project is here, the referencing warnings concerning it should be gone. However, a new one might appear about the Get call, as below:

I am not quite sure why it happens, but it seems to be a bug where VSCode does not notice this reference comes in through the API project. You should not worry about it, though, because if you build the solution or run the tests, it will work.

Understanding and running our test

Now we get into the crispy part, the one we need before getting any further. The part we can use as the basis before delving into more advanced stuff like continuous integration or continuous deployment: running a test that validates our logic. If you have a look at the ValuesController.cs file inside our API project, you will see that the Get() method returns an array of strings. This array contains the values “value1” and “value2”. The test class you copied earlier contains a method that verifies that both “value1” and “value2” are returned by this Get().

So, back to the ValuesControllerTests.cs file. You may have noticed some links appearing on top of our test method like this:

You can ignore the “0 references” and “debug test” links for now. Press “run test” to execute our test. It will actually first build our API project to have the latest version of it before linking it to our test binary. After running the test, you should see something like this:

And unsurprisingly, our test is passing. Now let’s see what happens if we remove “value2” from the array returned by ValuesController.Get() and run the test again.

Running the test again:

As you can see, this time it failed; to have it pass again, you may now undo your changes in ValuesController.cs.

 

A little more of .NET Core CLI tools

It’s nice to know that one of your tests failed; however, you know what is better? Knowing which test actually broke, and why. Therefore, this is the perfect time to bring up the .NET Core CLI tools again. You can run our test with this command:
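Reconstructed from context:

    dotnet test DotNetCoreSampleApi.Tests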

Which will actually provide you with some more details on what broke:

.NET Core CLI tools magic with tests

As you can see you get the message “value2 is not returned” that we defined in our test file. Here is a little callback for you:

I won’t say that you are now a fully-fledged .NET Core developer, but it’s a good start. You just created your (maybe) first API and test projects. Moreover, the test actually validates some of the API controller logic. So you know, congrats on that. However, if for one reason or another something did not go according to plan, feel free to check the source code here.

I hope you enjoyed this new entry in my future-proof series, and I will see you next time. You should look forward to it, as I will cover how to set up continuous integration for such a project. It should be different from that other post from last year using Appveyor.

And remember, if you ever need anything from the CLI tools:

dotnet new everything

Just dotnet new it!

My Musketeers for DotNet Test driven development

Reading Time: 5 minutes

Test: four letters, one meaning, and for some people a struggle. Getting people around you to write tests is easy only when everyone already agrees with you. As often, there are instances where some people show resistance to writing tests. Here are the things I hear most from them:

  • I don’t have time to write tests.

  • I don’t need to test this.

  • I can’t write a test for this.

Not writing tests will always lead to hours of tears and blood: tears and blood from debugging something you let slip through, something that broke your super edgy software. I am not saying that writing tests will lead you to bug-free software, but at least you know exactly how your code behaves and what you can reasonably expect from it. Despite great code coverage, your code will eventually break, and that is perfectly fine. This is where your tests become useful, as they help ensure you don’t break your existing code while refactoring or fixing a bug. Then you can simply add a new test to cover that unexpected scenario.

For example, yesterday a colleague had some weird data mix-up on a development deployment of an API I created a few months ago, which revealed a case I didn’t think of. That API had 95% coverage, and still a bug showed up, because that is how software works. The bug came from a virtually impossible case, so what I did was replicate it, write a test for it, fix it, get it through review and release it, all within 30 minutes. That project’s coverage is now at 98% (the highest we have now, of course I’m gonna show off about it), and yet I know that one day or another, another bug will pop up. When that day comes, it may not be fixed as quickly as yesterday’s, but it will be just as easy to refactor parts of the code safely.

Yes, it takes some time to write tests, but in the long run it is more than worth it. For a long time I thought that the only reason not to cover one’s code would be laziness. Not the good laziness that makes you want to save time by writing tests instead of spending hours debugging and testing a whole bunch of uncovered features. Still, over time I came to learn that no developer walks to their desk every day to write buggy code on purpose. A lot of factors come into play, such as clients and project managers pressuring you with tight deadlines. Tighter and tighter deadlines, day after day. Then ensues a drop in quality in favour of faster delivery, which in the long run can hurt a business.

In that kind of situation, blaming a developer for not writing tests will not help anyone. However, what can help is providing tools that help that developer move faster. This is where today’s post is supposed to help you accelerate your development. Today, I am using this post to present three tools that help me every day to deliver code faster without sacrificing quality. Although nothing is magic, I hope these will help you in a personal or professional context as they help me every single day.

It doesn’t matter whether you have access to continuous integration or not. However, what will matter is your ability to write decent tests. Even if you write only very simple happy path tests, as long as you write those properly you will be fine. Here we go!

Moq

Moq is awesome for unit testing. What is unit testing? Well, I don’t have a proper definition in mind and there are tons of different versions online; the version below I learned mostly through experience, and you are free not to share it. To me, a unit test is a piece of software written to test a component, regardless of its dependencies, to make sure that a defined input will produce an expected output. Basically, unit tests allow you to validate your software’s behaviour in a way that prevents you or a potential collaborator from breaking it later on.

How does Moq work? The premise is that you can mock any interface, which allows you to define how your software behaves based on a dependency’s input. That is great in an inversion-of-control context. This also extends to virtual and abstract class methods, so you can write tests defining how a class behaves based on what a method could return. Another cool feature of Moq is the possibility to verify that the methods of a mocked interface/class got called with a specific input. That allows you to make sure that the method under test calls its dependencies’ methods with the parameters you expect.
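Here is a small sketch of those features; the IRateProvider interface is a hypothetical example of mine, not from a real project:

    using Moq;

    public interface IRateProvider
    {
        decimal GetRate(string currency);
    }

    public static class MoqDemo
    {
        public static void Run()
        {
            var provider = new Mock<IRateProvider>();

            // Define how the dependency behaves for any input
            provider.Setup(p => p.GetRate(It.IsAny<string>())).Returns(1.15m);

            // provider.Object is the mocked implementation you inject into the class under test
            decimal rate = provider.Object.GetRate("USD"); // returns 1.15m

            // Verify the dependency was called with the parameter we expect, exactly once
            provider.Verify(p => p.GetRate("USD"), Times.Once());
        }
    }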

For more information on Moq, you can check out their documentation on Github.

AutoFixture

Let’s now move on to AutoFixture, which I have used pretty much since it came out. AutoFixture is a library that allows generating dummy data on the fly in any context. This thing made my test writing so much faster. It also works great with Moq to quickly write test cases where the input data does not really matter. You can use it to generate data of any type, from string to bool to your custom classes. One of my main uses for that library is to create data on the fly without thinking too much about it and use that generated data to validate my tests.

I have not yet reached the limitations of what you can do with that tool. However, you need to be careful with types that have a recursive relationship, which you often get when you work with EntityFramework. For example, if you have a Chicken class with a property of type Egg, and that Egg class has a property of type Chicken, you will end up with an exception due to some kind of infinite-loop situation. You can avoid that situation by specifying which properties you don’t want to be set when generating your data, as shown below.
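A small sketch of both uses, assuming the namespace from recent AutoFixture versions (the types here are illustrative):

    using AutoFixture;

    public class Chicken
    {
        public string Name { get; set; }
        public Egg Egg { get; set; }
    }

    public class Egg
    {
        public Chicken Parent { get; set; }
    }

    public static class AutoFixtureDemo
    {
        public static void Run()
        {
            var fixture = new Fixture();

            // Dummy data on the fly, no manual setup
            string anyString = fixture.Create<string>();

            // Skip the recursive property to avoid the infinite-loop exception
            var chicken = fixture.Build<Chicken>().Without(c => c.Egg).Create();
        }
    }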

For more details on AutoFixture, check out their Github. To jump straight to code samples, here is their cheat sheet.

Postman

This one is a bit different from the others mentioned previously. Indeed, you can use Postman to document how your API works. You can use it for monitoring with a paid account, or build monitoring of your own using Newman. I wrote a couple of posts about it over the past months, covering getting started and building a simple CI using Appveyor. What I like about Postman is that it is pretty intuitive and straightforward to use, even for non-technical people. Once you get started, you can do some pretty advanced flow-based testing, which is pretty useful in a microservices architecture. In the end, how and where you use Postman is down to you, and I love its flexibility. That flexibility allows you to make it fit your needs and accelerate your development.
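For instance, running a collection from the command line with Newman is as simple as this (the file names are placeholders of mine):

    newman run my-collection.json -e dev-environment.json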

Thanks for reading, you can now go and write a bunch of cool software with loads of tests. Or don’t. I’m not your dad and I won’t punish you, but your code will.

 

C# dynamic interface implementation at runtime

Reading Time: 2 minutes

Some context first

How did I come to write a class allowing dynamic interface implementation in the first place? Ever had to work on a huge company project over the weekend? Because it is the weekend, you pick up what should be easy configuration fixes. You think it will take you only a couple of hours and then you will be off to the gym. I thought that yesterday, and boy did I mislead myself, much misled indeed. Basically, I had to update a couple of big projects to remove fields that are null from the JSON response. All of that while listening to stuff like the Ding Dong Song, Purple Lamborghini and Slipknot’s Psychosocial. On the first project I only had to add a little line to get that working, so the second one should be the same, right? I actually thought I would grab another task before leaving that improvised hackathon.

The thought journey

It was all fun and games until, surprise surprise, the second project used a custom formatter. It was there to do some processing on the response objects and update some values to match our apps’ implementation. Fair enough. But the magical line of configuration that ignores null fields when rendering JSON did not work there. The obvious thing to do was to get rid of that custom formatter and figure out a way to keep that value-setting logic without touching the project classes. I say obvious because there were hundreds of classes there, and I did not feel like changing all of them even just to add an interface and its implementation; I had to set properties that may exist on hundreds of objects. This is how I started googling, going through StackOverflow to try and figure out how to achieve that.

The much lower scale Newton moment

During that thinking process, I realized I could try to do something with dynamic objects instead of adding a value during the JSON formatting process. Interestingly enough, a few minutes later the StackOverflow ex machina did its thing and I found the post “How to extend class with an extra property“. The answer from unsung hero Mario Stopfer shed light on something I did not know was possible. You guessed it: dynamic interface implementation at runtime. It was not quite in the form I needed, but it opened a door of possibilities and gave me a new perspective on the property-setting issue. And I started coding, building, testing, debugging like crazy. After a few hours, I had achieved what I did not even know was possible a few hours before. Dynamic interface implementation was there, working and solving my issue.

Dynamic interface implementation: Epilogue

I had a nice afternoon of coding at the office, with lots of laughs and problem solving, which provided me with an article I really enjoyed writing and a new class for my in-progress .NET utility project, which should appear on Github when mature enough. Since you have been reading all of this, you will get the code in a preview gist along with sample code. The only issue is that it does not work with the new .NET Core (yet?), so I will update it at a later stage when I find the time and a solution. That, or I will add another version. Without any further ado, here is what I called the TypeMixer.