My name is Jean-Dominique Nguele and this is my blog. FLVCTVAT NEC MERGITVR
I originally started writing another post last week, but in the meantime I did a domain name migration. What is a domain name migration, you ask? It is the act of moving content from one domain to another. Not host, domain. If this was about host migration there would be no need for a post, as the changes would be straightforward. People may give you a different or more precise definition, but that is basically what I did. And also what it sounds like. I like to name things so that when they are described or revealed they turn out to be pretty much what you expect. It's like coding. If I have a piece of code with a method boolean validatePassword(string username, string password), when looking at the code you pretty much expect to see some user retrieval and maybe encoded password validation. You definitely do not expect a session to be started or anything funny like that. Enough with this, let's get back to today's topic: domain name migration.
I decided to separate my blog from my main domain iamnguele.com a couple of weeks ago as I picked a new pseudo that you may have noticed by now: Coding Nagger. Why that name, and why move only the blog? I thought I should pick a different and more playful pseudo than IamNguele. IamNguele was more of a statement, some “Hey I'm here” kinda thing. Also, my English was not great and nguele.com was taken. On top of that, iamnguele gets me to the top of the SEO rankings on my name. I also gain rankings for the search term coding nagger, as you can see below.
Now, after five years living in London, I like to think I have somewhat improved my English speaking and writing skills. I also picked a name that is more fitting. The Coding part is pretty self-explanatory, and Nagger because I will try to push you towards what I believe is the right thing whether you like it or not. Just to clarify the context, I am talking about code reviews and solution analysis/design. So here we are: Coding Nagger. I still have not mentioned anything about domain name migration, but at least you have the context.
Changing a domain name in itself is relatively easy. Most hosting providers allow you to point and click your way through it, even if you go for tech-savvy stuff like Amazon Web Services. However, there are a few things that you need to take into consideration before you get started. If your website or blog is part of your personal brand and ranks well, you are probably a pioneer in your field, no matter how niche it is. If that is the case, congrats, you may not need SEO as much as others such as myself.
What is SEO? I hear you ask. SEO stands for Search Engine Optimisation. The rules seem to change pretty constantly and vary based on which search engine you use. Basically, put out quality content, stay focused on the topic you want your article/blog/website to rank for, and voila. The problem with domain name migration is that if you are not careful, it will take a while before your pages rank back up. This happens because search engines consider your “renamed” site to be a new site. Therefore, whatever counter or algorithm search engines use to rank sites will see your website as having a score of 0 out of 100 instead of, let's say, 42, figuratively speaking.
Do your pages have a decent SEO rating? Do you intend to keep things that way? If the answer to either of these questions is no, feel free to skip the rest of the post and rejoice at the now more useful introduction I gave you.
Finally, you need to consider whether your existing website requires an SSL certificate and whether the links you find through your favourite search engine use the https protocol. If your website has secure links, then they are likely to rank higher and be prioritised over their non-secure versions by your browser or search engine.
Once you have figured out your constraints, we (mostly you) can get to work.
This probably should have been the title of this post, really clickbait worthy. The gist of what needs to happen is redirecting your existing website links towards the new domain name. Let’s get down to business.
If you do not have your new domain name yet, the obvious first step is to buy one. Just the domain name, nothing more, nothing less, yet. Once you have the domain name, get it to point to the location where your existing website is hosted. Some hosting providers will allow you to do so with a couple of clicks. If you manage your domain names separately from your hosting, you will need to create the DNS entries for your new domain yourself: a CNAME record pointing at your host's address, or an A record pointing at the IP address where you host your website.
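For illustration, here is what those records could look like in a DNS zone file. The domains, host name, TTL and IP address below are placeholders, not my actual setup:

```
; Hypothetical records for the new domain
www.example.com.  3600  IN  CNAME  myhost.hostingprovider.com.
; or, pointing directly at the host's IP address:
example.com.      3600  IN  A      203.0.113.10
```

Most provider dashboards expose the same fields (name, type, value, TTL) through a form rather than a raw zone file.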
After a few minutes, you can try to access your website using your new domain name. You should be able to navigate it with the right links and the same structure you have for your existing domain.
You may remember that I mentioned something about https links visible through a search engine result page. If your existing website has its pages indexed with the https protocol prefix, you have one last thing to deal with before setting up the redirect. You need to set up an SSL certificate so that browsers allow people to access your new site. Believe me, you do not want to just reassign the existing certificate to the new domain. I tried, and had a micro heart attack when I realised 5 minutes later that the links to my existing blog would not even open, redirect or not. This is because if your website tells a browser it has a secure link but has no certificate, the request will simply be blocked. If it has an invalid certificate, browsers will warn users that your website is unsafe to access or stole another site's certificate. Just buy a new certificate and assign it to your new domain. Keep the existing stuff where it is.
Here we do the easy bit. Your hosting provider should give you access to at least an FTP containing a bunch of folders, one of which contains your website. From there, you need to create a new folder sitting next to your website folder that will contain the .htaccess file where you shall write the permanent redirection configuration. Once that folder is created, you will have to upload a .htaccess file with these contents:
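The gist originally embedded here is no longer visible, so below is a sketch of what such a permanent redirect could look like. The old domain in the condition is an assumption; adjust both domains to your own:

```apache
# Redirect every request permanently (301) to the new domain,
# preserving the requested path.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^blog\.iamnguele\.com$ [NC]
RewriteRule ^(.*)$ https://www.codingnagger.com/$1 [R=301,L]
```

The 301 status code is what tells search engines the move is permanent, so they transfer ranking signals to the new domain instead of treating it as a brand-new site.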
Obviously, you may replace www.codingnagger.com with your domain name, unless you want to send me more traffic. I would be more than fine with that. Also, if you do not have nor require an SSL certificate on your existing and new domains, replace https with http in the redirect target.
Here I chose not to detail the specifics of each step, as there are multiple ways to achieve this depending on your provider. Also, for most hosting providers the process is trivial in itself, and if you manage your domain name yourself you definitely know your way around your provider's dashboard. The one thing you may need is a bit of experience, or guidance: a flow showing in which order you should proceed to avoid losing your SEO ranking. I mean, personally it did not go so badly. My blog went from top hit when googling myself to the bottom of the first page, with a title change on top of the domain change. Just look:
Note that I had a look at some of my most popular posts, the ones that have the best SEO rankings. Some still appear under the blog.iamnguele.com domain. Try to google nsattributedstring color image and see what comes up: maybe not first everywhere, but at least second.
This is why SEO matters. I will update the post in a few days or weeks to see if it still pops up on top after Google's indexer sees it as a codingnagger.com entry. It has been there for a couple of years now and I doubt that changes. Unless I screwed up completely and gave you an excellent reason to disregard this whole post.
Thank you for reading, and hopefully this will help you do a domain name migration without losing your content rankings. Till next time.
Hey everyone, let me tell you about Serverless' latest release. You must be thinking “Three posts in ten days after three months' absence, what got into you JD?”. Nothing in particular; now that I go exercise in the morning and finish work around 5, I have tons of time to do stuff afterwards. Also, I keep running into things that feel blog-worthy. Indeed, today I experienced what can easily become a nightmare for developers: continuous integration breaking out of nowhere. This morning, as I was making the latest adjustments to a project set to move towards production in a few days, the continuous integration broke after merging my latest pull request. The pull request contained minor changes to a configuration file, but nothing that would be used at any point through CI.
Let me add some context about the stack here. The project relies on the Serverless framework, which allows building serverless solutions. By serverless solutions I mean stuff running on what you would call Function apps on Azure or Lambdas on AWS. Pretty clever stuff that reduces the pain of setting up deployment and prioritises focus on development.
Now, the continuous integration part. I use the serverless package command, which builds the package that the serverless deploy command would then deploy to whichever provider you want. That way, after the unit tests pass, I can validate that the configuration for deployment is all set. This is the part of the CI that broke, with some random error about babel-polyfill requiring a single instance.
This is where my biggest head-scratcher in months started. Continuous integration breaks after changing a configuration file used at runtime by a serverless function, but not used at any point through the build steps. I could not reproduce the issue locally no matter how hard or how often I tried. Believe me, I tried harder than a Gold V in League of Legends. I tried deleting the package-lock.json and node_modules then running serverless package locally, but still nothing. Still stuck. Then I checked again the file changes between the last successful CI build and the first failed one. Still nothing.
I thought that maybe it was just some timing error, as happened to me a while ago on VSTS when there was that CloudFront issue with NPM not retrieving packages. So I triggered the CI build manually, but still nothing. At that point I was grasping at straws. I even compared the builds to try and figure out what the difference in execution was, but still nothing.
After a couple of hours I eventually went for lunch with my girlfriend and a friend of hers, still with that issue in mind. As they do when together, they were speaking Italian, which allowed me to think through everything again, far from the screen and the temptation to google things I knew would have no answer. Then I got what I thought was my aha moment. What if Serverless itself was the reason the CI broke? Unlikely, yet I had seen way worse in the past. After all, the continuous integration was set to use Serverless' latest release. It just kept growing on my mind while munching on that sweet chilli chicken from Wasabi. It reached the point where I rushed the end of the lunch then went back to the office to check my theory.
The first thing I wanted to validate was updating my local setup to use Serverless' latest release (1.28.0), which matched the last few failed builds. Yes, a few builds; as I was saying, I did try loads of stuff. After the update I ran the serverless package command and it still worked. There goes my aha moment. At that point it crossed my mind that my continuous integration system could have had a weird temporary issue, so I tried my luck one more time. However, this time I did explicitly set my CI definition to install the previous version (1.27.3) of Serverless, just in case. I triggered yet another build and went to grab a coffee, as one does.
You probably guessed it: I came back to another failed build. It started to get on my nerves a bit, so I decided to up the ante. I put my headset on with some gangsta rap to go all out on this issue. I eventually went through the logs, comparing every single command and verbose output from the last successful build with the last failed one. I even went to check Serverless' latest release page, and its npm page too. As “King's Dead” from Kendrick started hitting my ears, I got an aha moment from my previous aha moment. Do you know what happened this morning? Serverless 1.28.0 got released. Right after that I noticed something else: the last failed build still installed Serverless 1.28.0 instead of 1.27.3.
As it turns out I changed the title of the task and not the actual command. That’s what happens when you do changes pre-coffee.
After correcting it on the continuous build for our development environment, I triggered the build again. It worked! I was way too happy to have figured that out to care that my previous fix attempt had been defeated by a dumb mistake. Afterwards, I updated all the build and release definitions using Serverless to explicitly target the 1.27.3 version. I guess that was a good reminder that when you set up continuous integration/delivery, you should pin the versions of everything you use in your build. You never know when the people behind a tool you rely on will issue a broken release.
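As a sketch of what that pinning could look like, here is a hypothetical CI step. The YAML shape and step names are illustrative, not my actual VSTS definition:

```yaml
# Install a pinned Serverless version instead of whatever "latest" is,
# then validate that the deployment package still builds.
steps:
  - script: npm install -g serverless@1.27.3
    displayName: Install pinned Serverless version
  - script: serverless package
    displayName: Validate deployment package
```

The same idea applies to anything your build installs at runtime: an exact version in the definition means a new upstream release cannot silently change what your pipeline runs.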
Hi everyone. A couple of months ago, as I was building the CoinzProfit API, I ran into a weird issue with MemoryCache. In order to avoid hitting the various APIs CoinzProfit depends on, like Coinbase's, too often, I decided to implement caching. A cache allows keeping user-calculated profits and various currency rates without having to fetch data too often. In order to save on costs, since the app is free and has no ads, I used .NET Core in-memory caching.
Basically, when I compute profits, investments and so on, I cache the computed result in GBP if my app is set to display amounts in GBP. When querying the data again, if the query currency does not match the cached one, I convert it. Similarly, if I change my app settings to display USD, then the data displays in USD. This allows successive requests that do not require calling the Coinbase and Binance APIs earlier than needed. All was good and well until I noticed an issue with my computed profits, which would change dramatically in a seemingly random fashion.
Originally I thought that maybe, the conversion rates I retrieved were not accurate enough. This could perfectly explain why the conversion works in one direction but goes random in the other. After a couple of hours playing hide-and-seek (or “cache-cache” in French), MemoryCache revealed itself to be the source of my problem. Please do not act surprised, it was the only suspect and kinda is the focus of this post.
The reason why my conversion was all over the place is that my conversion rates were truncated when cached. To validate that assertion I injected a caching implementation that would always fail to return data so that I always get fresh data and conversion rates. Once I confirmed the issue I resorted to serialisation to preserve decimal precision on the data I needed to cache.
It has now been two months since I ran into that issue. I originally planned on writing this post way earlier, but I had a lot on my plate. That precision issue may be completely gone by now. This is what I will try and test today. I will just set up a simple .NET Core console application, write some decimal data through MemoryCache and read it back to see if it now preserves precision. Note that I built the API using one of the .NET Core 2.0.* versions and that the 2.1.1 version is currently available. Therefore, today I will use the latter for this little experiment.
If you want to try that experiment at home, you can install VSCode and .NET Core if not done yet then run the following commands.
In order, these commands create a new console project, add the Microsoft.Extensions.Caching.Memory package from NuGet, and restore the project's dependencies.
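The command listing originally embedded here is not visible anymore, so here is a sketch of the commands in question; the project name is a placeholder:

```
dotnet new console -o MemoryCacheExperiment
cd MemoryCacheExperiment
dotnet add package Microsoft.Extensions.Caching.Memory
dotnet restore
```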
Now you can copy the code below into your Program.cs.
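The gist with the code is missing from this page, so below is a minimal sketch of the kind of Program.cs I used. The cache key and the decimal value are arbitrary; the point is only to round-trip a decimal through MemoryCache and compare:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

class Program
{
    static void Main()
    {
        var cache = new MemoryCache(new MemoryCacheOptions());

        // A value using the full precision of the decimal type
        decimal rate = 0.1234567890123456789012345678m;

        cache.Set("conversion-rate", rate);
        var cached = cache.Get<decimal>("conversion-rate");

        Console.WriteLine($"Original: {rate}");
        Console.WriteLine($"Cached:   {cached}");
        Console.WriteLine(rate == cached ? "Precision preserved" : "Precision lost");
    }
}
```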
Are you ready to run this? Run the dotnet run command from the project folder.
What do you see right now? Exactly, it does work now. I'm not even mad. It's kinda amazing that Microsoft fixed that thing in such a short period of time. I will try later to reproduce the bug, whenever I find the exact version of .NET Core I used when I ran into that issue. Expect an update or a part 2 to this post at some point, unless I forget about it. If you run into the same issue, do let me know which version of .NET Core you have installed.
Long time no blog, loads of catching up to do. Surprisingly, there are still a lot of people coming to read here despite the long absence. First of all, thank you for sticking around, or for swinging by if you are new here. My last post was a Noob Review that I published late March of this year, a quick review of a bit of software that its creator did not find to be fair.
For a time I contemplated updating the post, but I quickly realised it would defeat the purpose of what I want Noob Review to be. Noob Review is about discovering something without studying it first; I literally write it as I start using it. It takes me about two hours to finish taking notes and screenshots, then about a couple of days to edit and format the post. I really want it to be a quick take on stuff.
Enough about Noob Review. Let's move onto the next topic.
For most of the past two years, I worked in a company where I was able to hone my skills as a developer. Surrounded by experienced and talented people, I grew into what I believe to be a senior. Yes, technically I was already seen as such, but only seeing what I was able to deliver made me realise to what extent. It was an environment where I had my ideas challenged and challenged others' ideas day in, day out.
Such an environment is great for learning to validate your assertions more carefully and overall improving your thought process to come out on top of these challenges. It became a sort of game where most of my enjoyment lay. Intellectual jousts to build the best solution possible while considering time and money constraints were great. However, my enjoyment shifted over time, towards Tuesday football games, Friday drinks and everything else involving the workplace except for the actual work.
This lasted for a while before I came to the realisation that I had grown bored of work. Not bored of working, more like bored with my job. Yet I still loved building solutions, as you know from my short-lived Hestya experience and that CoinzProfit app I built during that period. My problem was that I was not doing any of these things anymore at work. Any attempt at discussing a potential solution or an improvement to the way we worked was seen as an offence. Our sort of technical council had just become a bunch of people agreeing with each other. On top of that, my backlog was running thin. One could be happy with having little work to do at the pay I had. I was not; I wanted to do meaningful work again.
I looked for a new job for about a month and got lucky enough to have more than one interesting offer to choose from. That is when I handed in my resignation in March and signed a contract to join BJSS, which I believe is the right place to continue my progression. I started two months ago and I have enjoyed it so far. Despite moving from the startup world to the corporate one, it feels as friendly and open as a startup would. There are a couple more rules to adapt to, but nothing too crazy. The biggest change I made is leaving my t-shirts at home, but now I use them for my gym sessions so I don't miss them too much. By the way, I will be taking part in a charity event for the Make-A-Wish Foundation organised by Microsoft. As part of a BJSS-sponsored team, we will take on other companies playing Unreal Tournament GOTY edition in a couple of weeks' time at the Microsoft Reactor. You can find more details about how to register your company and/or make donations to the event here.
I believe that is enough catching up for now. I will try not to take such big breaks in the future and get back to my monthly-ish posting rhythm. Thanks for swinging by; the next post will move back to more technical stuff. I guess that if there is something you need to remember from this post, apart from the charity bit, it is that you need to do you. If you are not happy with where you are or what you are doing, it is likely to negatively affect you and those around you, personally and professionally. Take care of yourself: maybe you need to talk to someone, maybe you need to exercise more, maybe you need to chill and take time to enjoy life. Maybe you need to change jobs. Just listen to yourself and you will be fine. Unless you're wrong. This post is not the Bible or <insert book of wisdom you like to read like Bridget Jones' baby>; just do whatever you feel is right to keep moving forth. You won't live forever, so better not waste time being unhappy.
Till next time!
Today I am going to do something I have not done before. A couple of months ago I was contacted by NDepend to play around with their software. I did not check, but there is probably a fair amount of software reviews out there. Hence why I will try a hopefully different approach: a noob approach. I'll read the promise from the software to review and just dive into it without any sort of guidance. Let's call it Noob Review. Yep, that's how you create a series that might or might not live longer than a post.
According to the website homepage, NDepend is the only Visual Studio extension that is able to tell us developers about newly created technical debt. This would then allow undoing the debt before it gets committed. The alleged debt is calculated based on a set of rules predefined using LINQ queries. We can also add our own queries.
Enough with introductions, let’s just get noobie!
I downloaded the latest version, 2018.1.0, released on Wednesday; you can find the link to the latest version here. Upon download, NDepend presents itself as a ZIP archive containing some executables and a Visual Studio extension installer.
As you can see below, the installer offers to install the NDepend extension for Visual Studio versions all the way back to VS2010.
From there I just installed the extension using the licence key the NDepend team nicely offered me.
From now on I am going full improv. I will have no idea what I am doing, because that is what most people do when they get a new tool. That approach works when you know how to use a pencil and grab a pen for the first time; it might be a bit more entertaining if I do so with NDepend. Since it is a tool that should allow me to detect technical debt, I will write an OK piece of code, then some less OK code, to see what happens.
First things first, I created a console project running on .NET Core. I did not see anything trigger automatically. Being used to ReSharper, I checked the toolbar and saw an NDepend menu that was not there before.
After attaching the project, I went back to run an analysis on my console app but I kept getting this error:
It turns out the NDepend project did not pick up the Visual Studio project. I then closed Visual Studio and reopened it, yet I had the same error after loading the NDepend project and attempting to run the analysis again. Paying more attention to the error message this time, I noticed the error was about a reference to my solution not being loaded in NDepend. I conjectured that these errors occurred because I did not create the NDepend project in the same directory as my solution. Probably a noob error on my end. So I went on to edit the NDepend project properties.
Above, you can see the NDepend project properties after I added the reference to my solution using the “Add Assemblies from VS solution(s)” button. It loaded the binary generated by the solution along with the third-party binaries used by it, such as System.Console. After that, I ran another analysis and it eventually worked, as you can see below:
Now that I finally set up the static analysis properly I can dive into what it reveals from a basic “Hello World!” console app. After that first successful analysis run, I could see that the NDepend menu changed. A whole new world opened to me. As a first reflex, I opened the “Rules” submenu. From there I could see that a rule was violated.
What rule could Microsoft’s “Hello World!” code possibly have violated? Well, look down.
Class with no descendant should be sealed if possible. It is actually more of a warning. A cool bit I noted is that you even get a more detailed description of what caused the warning, along with why you should consider fixing it.
I always learned that we should have as few warnings as possible, so let's clean that up and make our Program class sealed. After making the change and re-running the analysis, I got the same result and broken rules as before, along with a message telling me that my file Program.cs was out of sync. I got a hunch and rebuilt the solution. Then the analysis result views updated.
Now that the code is green and clean, it is time to try and build some technical debt. If you are not familiar with that term, I will try to sum it up for you. Technical debt is the implied cost of rework needed in the future when choosing a quick and easy solution over one that would be more thorough but would take more time. More often than not, choosing the easy way will hit you back. It will hit you hard.
Let’s say you take a complex subject at school. You could put in place a system to cheat to get good grades. It is easy and does not require extensive preparation work. Yet you can get caught and lose everything. Also, the ink can ruin your cheatsheet. Or, you could learn that subject and try to do your best mastering it class after class, exercise after exercise. You will not necessarily feel the effort was worth it from the start but eventually it will pay off. Learning your subject from the start is hard but you get more confidence to build on top of. Building technical debt on purpose is basically cheating on your Geometry class from high school. Don’t cheat on your Geometry class.
I felt like I did not want to spend months writing the perfect imperfect piece of code, so I just googled “c# bad practices” and opened the first result that came up. From there I copied the method and adjusted it to be called in our Main(). You can copy the code below if you are trying to reproduce the experiment.
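The snippet originally embedded here came from that search result and is no longer visible, so here is an illustrative stand-in with similar smells. The Calculate name matches the post, but its parameters and body are my own invention:

```csharp
using System;

public class Program
{
    // Public despite having a single caller, magic numbers,
    // and a branch that can never match: the kind of smells
    // a static analyser could flag.
    public static decimal Calculate(int quantity, decimal unitPrice)
    {
        if (quantity > 100)
        {
            return quantity * unitPrice * 0.9m;
        }
        else if (quantity > 1000)
        {
            // Unreachable: the first branch already caught quantity > 100
            return quantity * unitPrice * 0.8m;
        }

        return quantity * unitPrice;
    }

    static void Main()
    {
        Console.WriteLine(Calculate(120, 10m));
    }
}
```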
Once the code was ready, I rebuilt the solution and ran a new analysis.
In the post mentioned earlier, a few of the problems are pointed out, though some would be unfair to criticise here. Here are the points I wish had been picked up and were not. First, the Calculate() method is public yet accessed by only one method in a console app; I hoped I would see more flagged from the actually copied code, and not just from how I access it in my Main(). Second, there was no warning that only one branch of the if-else can ever be matched (to be fair, it might be valid business logic in some cases, but a warning would be welcome).
It can be considered unfair to point these out and it might be. I will try to spend some time later to see whether it would be possible to create custom rules to spot any of these. That will definitely be a fun exercise. Feel free to try the same at home.
I originally planned on adding a section where I would try to get more warnings and errors but that would be outside the boundaries of what I want a Noob review to be. A follow-up post covering more complex cases and custom queries would be more fitting for a separate post anyway. Since there are loads of things currently happening in my life, that post might not happen for a while. That being said let’s wrap up with some pros and cons I noted during that quick take.
While people love to customise things, I do not trust myself to write a rules engine determining my code's quality. I am likely to make a mistake in there and not notice it. I may actually change this to a pro after experimenting with it more.
After that first experiment, I do not think I would use NDepend for my personal projects. The cons I pointed out above outweigh the pros, in my opinion. I do believe that spending more time with NDepend could change my vision of it and maybe make me realise that it fits my needs more than I think. I am no evangelist nor influencer, and even if I was, or become one by the time you read this, you should not take this post as absolute truth. It is a Noob Review after all; it cannot be entirely right nor fair. My piece of advice is to go and have a look for yourself. If your interest got piqued by this post, you should download NDepend and figure out whether it fits your needs. You get a 14-day trial to play with it. Happy experimentation!
A couple of days ago I submitted my first personal app to the Apple App Store. I know it can be surprising considering the few years I spent doing iOS development professionally. You may know this but I was more into Windows phone from its inception until Microsoft decided to murder it a few months ago. As a result, I fully switched to Apple, from the phone to the watch to the mac. Still kept my Windows laptop though.
Back to today’s topic, App Store rejection. I built some app to keep track of my crypto spendings. After a couple weekends working on it, I decided to publish it and see how people react to it. Unfortunately, the reviewers rejected the app because I did not provide a demo account for it.
Continuous delivery. You may recall that in my previous post I announced that today's entry would revolve around continuous integration. Technically it still counts as such, since we will cover continuous integration on the way to the next step: continuous delivery. If you are not familiar with these terms and the concepts behind them, I will sum them up briefly.
Basically, continuous integration means verifying that your codebase still builds and that its tests pass whenever you push changes. Add a trigger that deploys your code to production upon success, and you pretty much have the idea behind continuous delivery.
These practices mainly help to make sure that you don't break your codebase when pushing changes. This is good when you work alone, but a lifesaver when working in a team. You cannot imagine how many hours I wasted, mostly during my studying years, because of code breaking without us realising for days. Using source control was already a miracle in itself at a time when there were limited options for continuous integration, especially for students. If you want more details about source control workflows, the GitHub Flow is a great place to start.
Back in today’s topic, continuous delivery. Before I start inundating you with scripts and screen captures you need to be familiar with a few things:
Since you read my previous tutorial, you should know more or less what the code does. It is the classic Values API sample returning an array with two values “value1” and “value2”. From there, the easiest step is to fork the repository created from that previous post which you can reach by clicking here.
Once the fork is complete, you will have an exact copy of my repository where you can push changes for the rest of the tutorial. If you have not already, you need to clone your fork to your machine for the next stage.
Docker is going to be key for today’s tutorial. Why? I hear you ask. Because CircleCI does not support C# for continuous integration. Neither does Heroku for deployment, at least not officially but we’ll get back to that later. But do you know what is supported by both that we can use? Docker container images.
Basically, consider a Docker container image as a box where you put everything your software needs to run properly, from code to settings to system tools and libraries. Containerized software will always run the same way regardless of the environment; it is completely isolated from its surroundings. The cool thing about this? It works on any environment, whether you run it on Windows, Mac or Linux, as long as your computer supports VT-d virtualization, which should be the case if your device is no more than a couple of years old. You can then make sure your container behaves as you expect locally before deploying. Also note that if you cannot run Docker on your local machine, you can still commit the Docker files and it will work on CircleCI.
First things first, you will need to install Docker Community Edition, which is free and available at this link. The installation is pretty straightforward so nothing special to mention here. If Docker is not supported on your machine you will get a message when trying to install it on Windows; the same should happen if you try to run it on Mac. If that is the case, don’t worry, you can still go through the tutorial and won’t be missing that much.
As mentioned previously, if we try to build our API straight away on CircleCI it will fail. Not because the code does not work but because the language is not supported. In order to get our tests running, we will run them in a containerized way. We don’t need to create a container image yet, only to use an existing image that supports running them.
The first thing to do is to create a Docker Compose file that will fetch an image supporting .NET Core 2 applications and run our tests inside it. Now you will copy a file definition that will do exactly that upon using the
docker-compose command. You need to create a file named
docker-compose.unittests.yml at the root of your repository. Once it’s done, copy into it the contents of the gist below:
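The original gist is not reproduced here, so as a rough sketch (the image tag and paths are my assumptions based on the commands used later in this post), such a Compose file could look like this:

```yaml
# docker-compose.unittests.yml
# Defines a "unittests" service that mounts the solution into a
# .NET Core 2 SDK image and runs the test script inside it.
version: '3'
services:
  unittests:
    image: microsoft/dotnet:2.0-sdk
    volumes:
      - .:/app
    working_dir: /app
    # The script is created in the next step; make sure it is
    # executable (chmod +x docker-run-unittests.sh).
    entrypoint: ./docker-run-unittests.sh
```

The service name `unittests` matters: it is what the `docker-compose run` command further down refers to.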
Now we need to write the script that will allow our continuous integration tool to restore the solution within the container image, after which the tests will be run. Here is the script to copy inside a file named
docker-run-unittests.sh still at the root of your solution:
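Again the gist itself is missing from this copy; a minimal sketch matching that description (the exact project path is an assumption) would be:

```shell
#!/usr/bin/env bash
# docker-run-unittests.sh
# Halt and fail the build if any command errors, an unset variable
# is used, or any command in a pipeline fails.
set -eu -o pipefail

# Restore NuGet packages for the solution, then run the test project.
dotnet restore
dotnet test DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj
```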
You may notice a line that is unusual to most people. The command
set -eu -o pipefail. A short and simple explanation is that it halts and makes the build process fail as soon as an error occurs. If your build does not compile or your tests fail, that command will allow the
docker-compose command to fail, which will trigger an error and let your CI system know the build failed.
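To see why `pipefail` matters, here is a quick illustration you can run in any Bash shell: by default a pipeline only reports the exit status of its last command, so an earlier failure goes unnoticed.

```shell
# Default behaviour: the pipeline's exit status is that of the LAST
# command, so the failing `false` is hidden by the succeeding `true`.
bash -c 'false | true'
echo "without pipefail: $?"   # prints "without pipefail: 0"

# With pipefail, any failing command makes the whole pipeline fail.
bash -c 'set -o pipefail; false | true'
echo "with pipefail: $?"      # prints "with pipefail: 1"
```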
Now that we have our tests ready to run within a container we will run them locally to make sure we’re all set. In order to do so, you will need to run the following command with your favourite terminal. This assumes that you are in your solution folder and that you can run Docker commands on your machine.
docker-compose -f docker-compose.unittests.yml run --rm unittests
Running that command will give you an output similar to this:
We are now able to run tests on any environment supporting Docker. Let’s now set up our continuous integration tool.
Now that we have all the Docker configuration ready to run tests, we can configure our project to have our continuous integration on CircleCI. The first thing to do here is to create a
.circleci folder in your solution folder. Then, you will create a
config.yml file inside of it so that its relative path to your solution is
.circleci/config.yml. Into that file you will copy these contents:
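The gist is not shown here; a sketch of a CircleCI 2.0 configuration consistent with this setup (the `machine` executor is my assumption, since it ships with Docker and docker-compose preinstalled) might look like:

```yaml
# .circleci/config.yml
version: 2
jobs:
  build:
    machine: true   # VM executor with Docker and docker-compose available
    steps:
      - checkout
      - run:
          name: Run unit tests inside Docker
          command: docker-compose -f docker-compose.unittests.yml run --rm unittests
```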
Commit and push your changes, then move onto the next section.
CircleCI is a platform used for continuous integration and continuous delivery. I picked it for today’s post because it’s free and can be good if you just want to play around. Also, it can be great if you are creating a new business and want to keep the costs low before scaling up.
The first step here is to create an account. You can reach their signup page by clicking here. Once there you should see this screen:
Now you need to press “Sign Up with GitHub” to create your CircleCI account. This will land you on a page where GitHub will ask you if you want to grant CircleCI various permissions. As you will see below it will require your email address(es) and repository access rights.
If you noticed the arrow and the red circle you know where to click next. If not, press “Add projects”. You will see the forked repository name appear. Next to it, you will notice a “Setup project” link; press it.
Now that you have pressed the right link you should see the project setup screen. You can leave the operating system as Linux and select “Other” as the language.
Once you’ve done that a feedback box will appear asking what language you intend to use. I suppose it is to prioritise what they should add next to their roadmap. Don’t feel obliged to put C#, even though it might eventually make the unit-testing part of this post obsolete, which I wouldn’t mind much because then I could update this post to avoid the build-and-test magic you were introduced to previously.
Next, scrolling down you should see a set of instructions to get the build to run but we already took care of that.
In our case, there is not much going on apart from the test run so after up to a couple minutes you should get your successful build.
Now that our CI tool is ready to build and validate our software, it’s time to prepare for deployment.
Heroku is a platform allowing developers to deploy, manage and scale web apps. They support most modern technologies and languages such as Node.js, Java, Go and many more. However, they do not officially support .NET Core, even though buildpacks from GitHub can provide some sort of support. But today we are not going to do that.
The first thing you will need to do now is create an account. You can do so by clicking here. Once your account is created, you will see a screen prompting you to create a new app.
Now you can press “Create New App”, and you will be asked to pick a name and region. For this tutorial, the region does not matter and you can pick any name you like.
Now that the app is ready to receive our API deployment, you need to get your Heroku API key so that we can deploy our code to Heroku from CircleCI. In order to do so, you will have to access your Heroku settings. To get there, click on your profile icon (top right of the screen), you should see this menu pop up.
Next, click “Account settings”. Once on the settings page scroll down until you see this:
Here we are, the time where we create our own (maybe your first) Docker image. The first step is to create our
Dockerfile in the project folder.
Dockerfile is pretty standard here: it pulls an environment able to build and run .NET Core apps, then restores our project, publishes it locally, and eventually runs it using the port number passed in by the environment.
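The gist did not survive this copy; a sketch consistent with that description (image tag, output folder and the `PORT` variable, which Heroku injects at runtime, are assumptions) could be:

```dockerfile
# Dockerfile (placed inside the DotNetCoreSampleApi project folder)
FROM microsoft/dotnet:2.0-sdk

WORKDIR /app
COPY . ./

# Restore dependencies and publish a release build into ./out.
RUN dotnet restore
RUN dotnet publish -c Release -o out

# Run the published app on the port passed in by the environment
# (Heroku provides it through the PORT variable).
CMD ASPNETCORE_URLS=http://+:$PORT dotnet out/DotNetCoreSampleApi.dll
```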
Now that our
Dockerfile is ready to go, we will add a
.dockerignore file that is a list of files/folders we want Docker to ignore. In our case, we want to make our build context as small as possible so we will ignore binaries as you can see below:
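The referenced gist is missing; since the text only mentions ignoring binaries, a minimal version would simply list the build output folders:

```
# .dockerignore — keep build artifacts out of the Docker build context.
bin/
obj/
```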
Once the file is created, if you can run Docker locally, you may run the following command to make sure your setup is valid:
docker build -t aspnetapp DotNetCoreSampleApi
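If the build succeeds, you can also try running the image locally. The `-e PORT=5000` part assumes the Dockerfile reads its listening port from a `PORT` environment variable, the way Heroku provides it at runtime:

```shell
# Run the freshly built image, exposing the API on port 5000.
docker run --rm -e PORT=5000 -p 5000:5000 aspnetapp
# The API should then answer at http://localhost:5000/api/values
```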
Yet again, if you cannot run Docker locally, you will see the results on CircleCI later.
We are almost there! It is time to put the delivery in continuous delivery. Now that we have our Docker image configuration ready, we can finalize our CircleCI configuration. Before editing the configuration file we will need to add our Heroku credentials to the project environment variables. In order to do so, go back to your dashboard. From there, press your build’s settings button; it should look like this:
Then, click “Environment Variables” and add the email address you registered with on Heroku as
HEROKU_USERNAME. Afterwards, add your Heroku API key as
HEROKU_API_KEY. Finally, add a third variable holding your Heroku app name.
After adding the variables, we can now update our CircleCI configuration file with the deployment steps.
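The final configuration gist is missing from this copy; as a sketch, the extra steps appended to the build job could look like the following. `HEROKU_APP_NAME` is a placeholder for whatever you named the app-name variable, and the image name follows Heroku’s documented `registry.heroku.com/<app>/web` convention:

```yaml
      # Deployment steps appended to the build job's step list.
      - run:
          name: Build the Docker image
          command: docker build -t registry.heroku.com/$HEROKU_APP_NAME/web DotNetCoreSampleApi
      - run:
          name: Log in to Heroku's container registry
          command: docker login --username=$HEROKU_USERNAME --password=$HEROKU_API_KEY registry.heroku.com
      - run:
          name: Push the image to Heroku
          command: docker push registry.heroku.com/$HEROKU_APP_NAME/web
```

Note that on newer Heroku stacks a separate `heroku container:release web` step may also be required after the push.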
Basically, what we do in that file is build our Docker image, authenticate to Heroku, and push our image to Heroku’s container registry. Now it is time to commit and push our changes for the last time. If you go back to CircleCI, you should see your build succeed.
Now, if you go to your Heroku app using
https://<your-app-name>.herokuapp.com/api/values, you will see the following result.
Congratulations! You are now smarter than you were 30 minutes ago! Not only do you know how to set up continuous delivery using CircleCI and Heroku, but you can also build a Docker container image. If you missed anything, don’t hesitate to check the source code there.
Note that for the sake of brevity, I chose to put all the commands in the CircleCI build job. Also, I did not put any condition on which branch gets deployed, a check that you should always have to avoid publishing a test build to production. In the case of continuous delivery, pushing code to the dev branch should trigger a deployment to the development environment, pushing code to master should trigger a deployment to production, and so on. You can figure out how to do this using condition-based instructions and the deployment job here.
Based on your feedback I may write a quick guide on setting up CI for multiple environments using this post as a basis. Since I have a few other things in the pipeline for the next few months it might not happen for a while.
Thanks again for reading; if it was any use to you, don’t hesitate to share and subscribe to get more of these. The next future-proof entry should be about what you can do to keep your continuous delivery from turning into this:
This tutorial is an introduction to the .NET Core CLI tools. More precisely, it is about creating a web API using the CLI tools provided for .NET Core. Whether you are a beginner in development or just new to .NET Core, this tutorial is for you. However, you need to be familiar with what APIs and unit tests are to fully enjoy it. Today, we will set up a solution grouping an API project and a test project.
For the next steps, you will need to install .NET Core and Visual Studio Code (referred to as VSCode later for the sake of brevity), both of which are supported on Mac, Unix and Windows. If you want to know how that multi-platform support works, have a look here.
First things first, we will open a terminal (or PowerShell for Windows users) to create our solution, which I will name
DotNetCoreSampleApi as follows:
dotnet new sln -o DotNetCoreSampleApi
This command will create a new folder
DotNetCoreSampleApi and a solution file with the surprising name
DotNetCoreSampleApi.sln. Next, we will enter that folder.
Now that the solution is here, we can create our API project. Because I am not the most creative mind I will also name it
DotNetCoreSampleApi. Here is the command to create the project.
dotnet new webapi -o DotNetCoreSampleApi
That command will create a subfolder named
DotNetCoreSampleApi to your solution
DotNetCoreSampleApi. If you followed all the steps your solution root should contain a file
DotNetCoreSampleApi.sln and the web API folder
DotNetCoreSampleApi. The web API folder should contain a few files but the one we need now is
DotNetCoreSampleApi.csproj. We will add a reference to it in our solution. To do so, run the following command:
dotnet sln add ./DotNetCoreSampleApi/DotNetCoreSampleApi.csproj
After getting a confirmation message we can now start the API by running that command:
dotnet run --project DotNetCoreSampleApi
After a few seconds, it should display a message notifying you that the API is now running locally. You may access it at http://localhost:5000/api/values which is the Values API default endpoint.
You may be aching to see some code by now but unfortunately, you will have to wait a bit longer. Back in the days of the .NET Framework, there was no such thing as generating projects from the command line; you had to use cumbersome windows to pick what you needed to create. Now all of this project generation can be done from the command line thanks to the CLI tools, and you will like it. And this is merely a suggestion. Back to the terminal. If the API is still running you may kill it by pressing
Ctrl+C in the window you opened it in.
We are now able to create a test project and add it to the solution. First, let’s create the test project using
dotnet new as follows:
dotnet new mstest -o DotNetCoreSampleApi.Tests
That command creates a new unit test project using MSTests in a new folder with the name
DotNetCoreSampleApi.Tests. Note that if you are more of an xUnit person you can replace
mstest in the command with
xunit which will create a xUnit test project. Now similarly to what we did for our web API project, we will add our test project to the solution:
dotnet sln add ./DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj
Almost instantly you should have a confirmation that the project was added.
Now, open VSCode and open the folder containing the file
DotNetCoreSampleApi.sln. At this point you should have this structure in the folder:
If you have never used VSCode before, or at least not for C# development, you will be prompted to install the C# extension:
Select “Show Recommendations” and apply what VSCode suggests. Then, once you finished installing the C# extension you will get a warning about adding missing assets to build and debug the project, select “Yes”.
Don’t hesitate to go back a few steps or even to restart this tutorial if something does not seem to work as expected. Here is how your test folder should look by now:
And finally, we are getting to the fun code-writing part, the part where we put aside our dear CLI tools. By code writing I mean copy/pasting the code I will show you later. And by fun, I mean code that compiles. There is nothing more frustrating than code that does not compile, especially when you have no idea why. Fortunately, this will not happen here.
Now that you have your code editor ready to use you can go ahead and delete the
UnitTest1.cs file. Once done, create a new file named ValuesControllerTests.cs in your test project. Your VSCode should then look more or less like this:
Using VSCode the file should be empty, but in case it is not, delete its contents to match the screenshot above. As soon as you get your nice and empty file copy the code below into it:
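The code itself is missing from this copy. Based on the assertion message quoted later in this post (“value2 is not returned”), a sketch of what the MSTest class likely contained would be the following; the exact namespaces and method names are assumptions:

```csharp
// ValuesControllerTests.cs — a sketch; names and namespaces are assumptions.
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using DotNetCoreSampleApi.Controllers;

namespace DotNetCoreSampleApi.Tests
{
    [TestClass]
    public class ValuesControllerTests
    {
        [TestMethod]
        public void Get_ReturnsValue1AndValue2()
        {
            // Call the controller directly and check both expected values.
            var controller = new ValuesController();
            var values = controller.Get().ToList();

            Assert.IsTrue(values.Contains("value1"), "value1 is not returned");
            Assert.IsTrue(values.Contains("value2"), "value2 is not returned");
        }
    }
}
```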
Now you should get some warnings, which is perfectly fine because they should be here. If you hover over these you will see some referencing related error messages like below:
These appear because we did not reference the API project into our test project yet. It is time to open your terminal again. However, if you feel like having a bit of an adventure you can try VSCode’s terminal that will open in your solution folder. In order to do so, you can press
Ctrl+' while in VSCode to open it. Or
Ctrl+` if you’re using a Mac; either probably works on Unix.
Once the terminal is open we will reference our API project in the test one with this command:
dotnet add DotNetCoreSampleApi.Tests/DotNetCoreSampleApi.Tests.csproj reference DotNetCoreSampleApi/DotNetCoreSampleApi.csproj
If you don’t see the full command above, you can still copy it using the copy button present when hovering.
Now that the reference to the API project is here the referencing warnings concerning it should be gone. However, a new one might appear about the
Get call, as below. I am not quite sure why it happens but it seems to be a bug with VSCode not detecting that the reference is there through the API project. However, you should not worry about it, because if you build the solution and/or run the tests it will work.
Now we get to the crispy part, the one we need before getting any further. The part we can use as a basis before delving into more advanced stuff like continuous integration or continuous deployment: running a test that validates our logic. If you had a look at the
ValuesController.cs file inside our API project you will see that the
Get() method is returning an array of strings. This array contains the values “value1” and “value2”. The test class you copied earlier contains a method that verifies that both “value1” and “value2” are returned by this endpoint.
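For reference, the relevant excerpt of the controller as scaffolded by the .NET Core 2 `dotnet new webapi` template looks like this:

```csharp
// Excerpt from ValuesController.cs as generated by `dotnet new webapi`.
[Route("api/[controller]")]
public class ValuesController : Controller
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}
```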
So, back to the
ValuesControllerTests.cs file. You may have noticed some links appearing on top of our test method like this:
You can ignore the “0 references” and “debug test” links for now. Press “run test” to execute our test. It will actually first build our API project, so the latest version is linked into our test binary. After running the test, you should see something like this:
It’s nice to know that one of your tests failed; however, you know what is better? Knowing which test actually broke and why. This is the perfect time to bring up the .NET Core CLI tools again. You can run our test with them using this command:
dotnet test DotNetCoreSampleApi.Tests
Which will actually provide you with some more details on what broke:
As you can see you get the message “value2 is not returned” that we defined in our test file. Here is a little callback for you:
I won’t say that you are now a fully-fledged .NET Core developer, but it’s a good start. You just created your (maybe) first API and test projects, and the test actually validates some of the API controller logic. So you know, congrats on that. However, if for one reason or another something did not go according to plan, feel free to check the source code here.
I hope you enjoyed this new entry in my future-proof series and I will see you next time. You should look forward to it, as I will cover how to set up continuous integration for such a project. It should be different from that other post from last year using Appveyor.
And remember, if you ever need anything from the CLI tools:
You may remember that underwhelming post I made a few months ago. I wrote it while high on entrepreneurship. As you may not feel like reading it, I’ll sum it up. I hinted at a side project that could become something cool, something potentially big. I had read a lot of blog posts from people who had an illumination about an issue they could solve, people building a solution that could change lives for the better. I think my first mistake was thinking I could force that. I write decent code and have designed a few solutions, but it was always driven by someone else’s vision or convictions. All I had to do was find an idea that could make people’s day-to-day life easier.
One day after work, while having dinner, I had a spark while my girlfriend was complaining about her work. At least I thought I did. She works as a nanny during the day and goes to uni to study languages after work. Back to the topic: she complained about how her boss constantly micromanages her when she would prefer to be more in charge, given her extensive experience. Back then I thought how cool it would be to have a website or an app that would let nannies and babysitters review the families they work with, something that could have prevented her from working with someone incompatible. A Glassdoor of sorts for nannies and other childcare workers. I didn’t even bother getting into the legal implications of such a product. I just started designing away.
This is how I spent a good few weeks writing user stories and picking colours that would send the right sensation to potential users, putting post-its all across the living room based on how I wanted people to perceive the system. The main idea was to create something that would give people a basis to access someone else’s home through a relationship built on trust and empathy, where all parties would feel safe. This is how Hestya as an idea was born. I picked the name both from the Greek goddess Hestia and from words matching what I wanted users to feel: Home. Empathy. Safety. Trust. You (I really grasped at straws there). Access.
Once the basics were there I went on to read even more entrepreneurship-related blogs, picking up tips along the way, reading stories of people whose sole purpose in life became creating that one great product. From there, I created a kick-off website to try and see if there was interest. Later on, I set up a Facebook page that I never shared with anyone; I wanted the web app to be ready before I shared anything. I even ordered business cards I could use to exchange contacts with other entrepreneurs at meetups. Technically I did follow the first steps of startup creation: create a prototype, talk to a few people working in that domain, know your target audience, validate your idea with a kick-off site.
Then I kept reading stories of people who managed to turn an idea into a product and make it successful. From my readings, I could follow one of two paths: quit my job to focus on Hestya, or kill off my social life by working on it non-stop. With luck, Hestya could take off within a few months if it went viral, but that still seems unlikely to this day. The most likely outcome was to build it marathon-style over a year or so, which made keeping my job the most reasonable option.
No social life, no gaming, no blogging, nothing. For about three months I spent all my free time on Hestya. Eventually, by mid-December, I had finished building the API and 90% of the web app. All I had to do was write a few hundred more lines of code and I would have my minimum viable product. Yet, something was wrong. I contemplated the mountain of work ahead of me: keeping the same working rhythm, but with stuff I know nothing about, like marketing and brand growth. All of which would have been fine if I had not reached a point of mental and physical exhaustion. I needed a break, badly.
My original Christmas holiday plan was to take my laptop and keep working pretty much 24/7, keeping a few hours here and there for family and friends. The tiredness was such that I decided to take a real break instead and assess why I had started working on Hestya in the first place. Which, now that I think about it, was to build some sort of Glassdoor for childcare workers, with some extra features.
My first step was to figure out what building this tool would achieve. It turns out there are a few laws in the UK that would make it a skewed product, as honest comments from nannies (this applies to any work, actually) about a previous employer could be followed by disciplinary or legal action. As shown by Mrs Plant’s case, comments made online, whether on Facebook or Hestya, could end up in the employer’s sight, and the employer can then take action against them.
At that point, I pictured other scenarios, such as someone trying to change employer by contacting a family, only to end up fired when the current employer finds out thanks to screenshots. Digging further, you can even find childcare workers complaining about their own employers’ food preferences being part of their work constraints, in a way that could be seen as hate speech, at least in the UK. It seemed like an additional risk I did not have a safety net for. What would have been the central focus was already off limits. I like to think that had I taken a few hours to research this before putting in so much effort, I would have saved some time.
But then, knowing how blind I was, I probably would have talked myself into going ahead anyway. I did do some research, but in the end I only picked what would fit the vision I had and did not look at it objectively. Every time I found a flaw I would find another excuse to keep going. As long as it felt fresh and different it was fine. Back at work I felt underwhelmed, kinda stuck in a routine.
Indeed, for a few months my job had consisted of stabilizing and documenting our platform. In my mind, pushing the next release or fixing something was as casual as washing dishes: I need to do it because I committed to it, but it is hardly the most satisfying or stimulating activity. Just the same thing day in and day out. Coming in, writing some code, going home. Not only did I not have anything to be passionate about work-wise, I had also left aside everything I was passionate about.
Yet I repeated that pattern with Hestya. Yes, the first couple of weeks designing a product proved very stimulating, and it would look great as an achievement. However, once the excitement passed, I realised my days had actually become worse. My life was just me balancing a job that had become less interesting with a side project I was no longer passionate about, all while getting very little time to myself and even less for my loved ones. In the end, I decided to stop.
It was not worth trying to force myself into entrepreneurship with a project I am not passionate about. That is not just true for coding; it is true for anything that requires passion, as PewDiePie said in one of his recent videos. If you are passionate about something, even when it gets hard you will keep going until you make it. If what should start as a hobby brings you no enjoyment, just stop.
Since I was a kid I was always branded as the guy for whom everything comes easy. The guy with a huge potential who can do anything he wants. Year after year, I keep setting myself higher and higher goals in terms of personal growth and achievements. Take on more responsibilities, work on something that will make an impact. Sometimes I even think to myself: “Dude where is your Facebook? Where is that potential there was so much noise about?”.
Now that I think about it further, I got into that whole side-project-maybe-turning-startup thing right after failing to get a promotion to tech lead. I remember thinking, before we got the results, that almost all the candidates were more qualified than I was. To be fair, they all had more experience and, I believe, more maturity than I do. Yet I kept thinking about that potential I was not able to express fully in my day-to-day work. So I rushed things and went with an idea I did not fully believe in, an idea I was not able to hold onto when I got tired. My mind was just gone.
Had I fully believed in Hestya I would definitely still be working on it; maybe I would even have released the v1 by now and be sharing it around. This was my first attempt at entrepreneurship, which I see as a complete failure, but it is definitely not the last. At least through that venture I got to deepen my knowledge of React and Node.js, so all is not lost.
What now? Well, I’m just going to keep on learning by working and reading while trying not to get myself another burnout. I have plenty of time ahead of me. I will turn 27 in about a month, yet I will have been coding for ten years, and working for eight. I have plenty of experience, and when the time comes to build something great, I’ll be ready. All I need is to keep picking projects I am passionate about and everything will work out.
I like to think of that experience as a lesson, a lesson I will use to keep moving forward. I’m done setting myself goals based on that “potential” I lived with for years. I’m done setting myself goals based on others’ experiences instead of focusing on passion, on what makes me enjoy writing code. We all have our own paths and rhythms. All we can do is try to get better day after day, ignore the pressure and enjoy the ride. I will have another shot at entrepreneurship, but with something I am actually passionate about.
As you might expect, I will get back to blogging regularly, as it is something I am passionate about. I would also like to thank everyone who comes to read my stuff every now and then. My audience grew tenfold over the past year and it is very exciting. The best part is definitely that about 99.6% of you readers are not part of my sharing circle on Facebook or Twitter. I’m glad you like my content and will spend more time improving it. Thank you.
Hi everyone, it’s been exactly a month since my last post and I have a good excuse for it. As it turns out I was pretty busy between a wedding, a holiday and the beginning of a personal project. Yep, another one! From now on I will refer to it as my Greek goddess gamble until I reveal what it is all about.
Phase 1 of that gamble started a few weeks ago; hopefully I’ll make enough progress by December. Time is key here, so it is more than likely that I will post even less until then, which makes it an even bigger gamble. Not posting for a month slowed the growth of my views by 8%. Still, I am lucky enough to see the number of readers slightly increasing week after week and hope it will last until December. Hopefully, the break will allow me to fully focus through my weekends and evenings to deliver on that crazy move.
Before you ask: no, I am not gonna retire to a cornfield to raise chickens anytime soon. Anything chicken-related I leave to KFC (not sponsored, but can be :wink wink:). Here I am digressing again because I don’t want to risk revealing too much. Back to the main topic: that Greek goddess gamble does involve a fair amount of coding along with research. I originally wanted to serialize it and post every week about it, or even vlog my progress, but eventually I realised it would be more meaningful with a clear narrative through the posts. It is much easier to tell a story when you know the end.
To conclude, if phase 1 of that gamble goes well, I will start posting on a weekly basis and/or vlogging through phase 2. In case of failure, well, I’ll just present over a couple of posts what it was about and what went wrong. Stay tuned!
P.S. If you are craving my personal posts you can check out my recent poetry or my techier stuff. You may even want to keep an eye on Poq’s blog over the next few weeks; just don’t tell anyone I told you.