a coding nagger's blog

My name is Jean-Dominique Nguele and this is my blog. FLVCTVAT NEC MERGITVR

Tag Archives: future


Catching up after 3 months without posts

Reading Time: 4 minutes

Noob Review

Long time no blog, loads of catching up to do. Surprisingly, there are still a lot of people coming to read here despite the long absence. First of all, thank you for sticking around, or for swinging by if you are new here. My last post was a Noob Review that I published in late March of this year: a quick review of a piece of software that its creator did not find fair.

For a time I contemplated updating the post but quickly realised it would defeat the purpose of what I want Noob Review to be. Noob Review is about discovering something without studying it first; I literally write it as I start using it. It takes me about two hours to finish taking notes and screenshots, then a couple of days to edit and format the post. I really want it to be a quick take on stuff.

Enough about Noob Review. Let’s move on to the next topic.

Catching up on that job I left

For most of the past two years I worked in a company where I was able to hone my skills as a developer. Surrounded by experienced and talented people, I grew into what I believe to be a senior. Yes, technically I was already seen as such, but only seeing what I was able to deliver made me realise to what extent. It was an environment where I had my ideas challenged, and challenged others’ ideas, day in, day out.

Such an environment is great for learning to validate your assertions more carefully and, overall, for improving your thought process so you come out on top of these challenges. It became a sort of game where most of my enjoyment lay. Intellectual jousts to build the best solution possible while considering time and money constraints were great. However, my enjoyment shifted over time. It shifted towards Tuesday football games, Friday drinks and everything else involving the workplace except the actual work.

This lasted for a while before I came to the realisation that I had grown bored of work. Not bored of working, more like bored with my job. Yet I still loved building solutions, as you know from my short-lived Hestya experience and the Coinzprofit app I built during that period. My problem was that I was not doing any of these things at work anymore. Any attempt at discussing a potential solution or an improvement to the way we worked was seen as an offence. Our sort of technical council had just become a bunch of people agreeing with each other. On top of that, my backlog was running thin. Most people would be happy having little work to do for the pay I had. I was not; I wanted to do meaningful work again.

That challenge I found

I looked for a new job for about a month and was lucky enough to have more than one interesting offer to choose from. That is when I handed over my resignation in March and signed a contract to join BJSS, which I believe is the right place to continue my progression. I started two months ago and I have enjoyed it so far. Despite moving from the startup world to the corporate one, it feels as friendly and open as a startup would. There are a couple more rules to adapt to, but nothing too crazy. The biggest change I made is leaving my t-shirts at home, but now I use them for my gym sessions so I don’t miss them too much. By the way, I will be taking part in a charity event for the Make-A-Wish foundation organised by Microsoft. As part of a BJSS-sponsored team, we will take on other companies at Unreal Tournament GOTY edition in a couple of weeks’ time at the Microsoft Reactor. You can find more details about how to register your company and/or make donations to the event here.

Wrapping up

I believe that is enough catching up for now. I will try not to take such big breaks in the future and get back to my monthly-ish posting rhythm. Thanks for swinging by; the next post will move back to more technical stuff. I guess that if there is something you need to remember from this post, apart from the charity bit, it is that you need to do you. If you are not happy with where you are or what you are doing, it is likely to negatively affect you and those around you, personally and professionally. Take care of yourself: maybe you need to talk to someone, maybe you need to exercise more, maybe you need to chill and take time to enjoy life. Maybe you need to change jobs. Just listen to yourself and you will be fine. Unless you’re wrong. This post is not the Bible or <insert book of wisdom you like to read like Bridget Jones’ baby>; just do whatever you feel is right to keep moving forward. You won’t live forever, so better not waste time being unhappy.

Till next time!

 

My Musketeers for DotNet Test driven development

Reading Time: 5 minutes

Test: four letters, one meaning, and for some people a struggle. Getting people around you to write tests is easy only when everyone already agrees with you. As often happens, some people show resistance to writing tests. Here is what I hear most often from them:

D: I don’t have time to write tests.

A: I don’t need to test this.

B: I can’t write a test for this.

Not writing tests will always lead to hours of tears and blood. Tears and blood from debugging something you let slip through, something that broke your super edgy software. I am not saying that writing tests will lead you to bug-free software, but at least you know exactly how your code behaves, and you know what you can reasonably expect from it. Despite great code coverage, your code will eventually break, and that’s perfectly fine. This is where your tests become useful: they help you ensure you don’t break your existing code while refactoring or fixing a bug. Then you can simply add a new test to cover that unexpected scenario.

For example, yesterday a colleague hit some weird data mix-up on a development deployment of an API I created a few months ago, which revealed a case I hadn’t thought of. That API had 95% coverage and still a bug showed up, because that is how software works. The bug came from a virtually impossible case, so I replicated it, wrote a test for it, fixed it, got it through review and released it, all within 30 minutes. That project’s coverage is now at 98% (the highest we have, of course I’m going to show off about it) and yet I know that one day or another, another bug will pop up. When that day comes it may not be fixed as quickly as yesterday’s, but it will be just as easy to refactor parts of the code safely.

Yes, it takes some time to write tests, but in the long run it is more than worth it. For a long time I thought that the only reason for someone not to cover their code would be laziness. Not the good laziness that makes you want to save time by writing tests instead of spending hours debugging and manually testing a whole bunch of uncovered features. Still, over time I came to learn that no developer walks to their desk every day to write buggy code on purpose. A lot of factors come into play, such as clients and project managers pressuring you with tight deadlines. Tighter and tighter deadlines, day after day. What ensues is a drop in quality in favour of faster delivery, which in the long run can hurt a business.

In that kind of situation, blaming a developer for not writing tests will not help anyone. What can help, however, is providing tools that help that developer move faster. This is where today’s post comes in: I am presenting three tools that help me deliver code faster every day without sacrificing quality. Nothing is magic, but I hope these will help you in a personal or professional context as they help me every single day.

It doesn’t matter whether you have access to continuous integration or not. What will matter is your ability to write decent tests. Even if you write only very simple happy-path tests, as long as you write them properly you will be fine. Here we go!
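To make “decent” concrete, even a plain happy-path test already pins down behaviour. A minimal sketch using xUnit (the Maths.Divide method here is made up purely for illustration):

```csharp
using Xunit;

public static class Maths
{
    // Hypothetical method under test, for illustration only.
    public static int Divide(int dividend, int divisor) => dividend / divisor;
}

public class MathsTests
{
    [Fact]
    public void Divide_ReturnsQuotient_ForValidInput()
    {
        // Happy path: a defined input yields the expected output.
        Assert.Equal(5, Maths.Divide(10, 2));
    }
}
```

That single test is enough to catch a regression in Divide the moment someone refactors it.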

Moq

Moq is awesome for unit testing. What is unit testing? Well, I don’t have a formal definition in mind and there are tons of different versions online; the one below is what I learned mostly through experience, and you are free to disagree. To me, a unit test is a piece of software written to test a component, regardless of its dependencies, to make sure that a defined input will produce an expected output. Basically, unit tests allow you to validate your software’s behaviour in a way that prevents you, or a potential collaborator, from unknowingly breaking your software later on.

How does Moq work? The premise is that you can mock any interface, which allows you to define how your software behaves based on a dependency’s input; this is great in an inversion-of-control context. It also extends to virtual and abstract class methods, so you can write tests defining how a class behaves based on what a method could return. Another cool feature of Moq is the ability to verify that the methods of a mocked interface/class got called with a specific input. That allows you to make sure the method under test is calling its dependencies’ methods with the parameters you expect.
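A small sketch of both features, the Setup/Returns part and the Verify part (the IPriceProvider and CheckoutService types are invented for illustration; assumes Moq 4.x and xUnit):

```csharp
using Moq;
using Xunit;

// Hypothetical dependency and service, for illustration only.
public interface IPriceProvider
{
    decimal GetPrice(string productId);
}

public class CheckoutService
{
    private readonly IPriceProvider _prices;
    public CheckoutService(IPriceProvider prices) { _prices = prices; }

    public decimal Total(string productId, int quantity)
        => _prices.GetPrice(productId) * quantity;
}

public class CheckoutServiceTests
{
    [Fact]
    public void Total_MultipliesPriceByQuantity()
    {
        // Mock the dependency and define its behaviour for a given input.
        var prices = new Mock<IPriceProvider>();
        prices.Setup(p => p.GetPrice("apple")).Returns(2m);

        var total = new CheckoutService(prices.Object).Total("apple", 3);

        Assert.Equal(6m, total);
        // Verify the dependency was called with the parameters we expect.
        prices.Verify(p => p.GetPrice("apple"), Times.Once);
    }
}
```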

For more information on Moq, you can check out their documentation on GitHub.

AutoFixture

Let’s now move on to AutoFixture, which I have used pretty much since it came out. AutoFixture is a library that generates dummy data on the fly in any context. This thing made my test writing so much faster. It also works great with Moq to quickly write test cases where the input data does not really matter. You can use it to generate data of any type, from string to bool to your custom classes. One of my main uses for the library is to create data on the fly without thinking too much about it and use that generated data to validate my tests.

I have not reached the limits of what you can do with the tool yet. However, you need to be careful with types that have a recursive relationship, which you often get when working with EntityFramework. For example, if you have a Chicken class with a property of type Egg, and that Egg class has a property of type Chicken, you will end up with an exception due to a kind of infinite loop. You can avoid that situation by defining which properties you don’t want to set when generating your data.
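A sketch of the Chicken/Egg case and the usual way around it (assumes AutoFixture 4.x and xUnit; the common idiom is to swap the throwing recursion behaviour for an omitting one):

```csharp
using System.Linq;
using AutoFixture;
using Xunit;

public class Chicken
{
    public string Name { get; set; }
    public Egg Egg { get; set; }
}

public class Egg
{
    public Chicken Chicken { get; set; } // recursive relationship, as with EF navigation properties
}

public class AutoFixtureSketch
{
    [Fact]
    public void CanGenerateRecursiveTypes()
    {
        var fixture = new Fixture();

        // Out of the box, Create<Chicken>() would throw because of the
        // Chicken -> Egg -> Chicken loop. Telling AutoFixture to omit
        // recursive branches instead avoids the infinite loop.
        fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
               .ForEach(b => fixture.Behaviors.Remove(b));
        fixture.Behaviors.Add(new OmitOnRecursionBehavior());

        var chicken = fixture.Create<Chicken>();
        Assert.NotNull(chicken.Name); // dummy data generated on the fly
    }
}
```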

For more details on AutoFixture, check out their GitHub. To jump straight to code samples, here is their cheat sheet.

Postman

This one is a bit different from the others mentioned previously. Indeed, you can use Postman to document how your API works. You can use it for monitoring with a paid account, or build monitoring of your own using Newman. I wrote a couple of posts about it over the past months, on getting started and on building simple CI using Appveyor. What I like about Postman is that it is intuitive and straightforward to use, even for non-technical people. Once you get started you can do some pretty advanced flow-based testing, which is pretty useful in a microservices architecture. In the end, how and where you use Postman is down to you, and I love that flexibility. It allows you to make the tool fit your needs and accelerate your development.

Thanks for reading, you can now go and write a bunch of cool software with loads of tests. Or don’t. I’m not your dad and I won’t punish you, but your code will.

 

Simple continuous integration with Appveyor and Newman

Reading Time: 13 minutes

Last month, I posted about Postman enabling you to test your APIs with little effort so that you can build future-proof software. Here we are going to cover setting up continuous integration for a simple project by using Newman to run your Postman collections. You may have heard about continuous integration in the past: most commonly, it builds software from one’s changes before or after merging them into the main codebase. Even though there are countless tools for implementing continuous integration, I will focus on Appveyor CI. To keep things simple, I will create a very basic web API project and host it on GitHub.

Create GitHub repository

You can create the repository on GitHub by clicking this link: Create a repository on Github. For more details, please follow the documentation they provide on their website.

In broad strokes, you should see something like this when you create the repository:

Create a repository on GitHub

Once you’re all set, if you have not done it yet, you need to clone your repository. Personally, the command line feels easier, as a simple “git clone” will do the job.

Command-line execution will look like this.

Create Web API project

Project setup

Now that your repository is all set, we can actually create the Web API project. For this step, you will need to install Visual Studio, ideally 2017, which you can download here. Once installed, open it and create a new project by selecting “File”, then “New”, then “Project”.

After the project template selection popup appears, select “ASP.NET Web Application”. As for the project path, select the one where you cloned your repository and press OK.

Now you will have to select what kind of web application you want to create. Select “Empty” and make sure the “Web API” option is enabled, like below. Note that selecting “Add unit tests” is not necessary for this tutorial.

Then press “Ok” and wait for the project creation. Once it’s done, your solution explorer should look like this.

Time to add some code. Yeah!

Add your Controller

First, right-click on the “Controllers” folder. Now, select “Add” then “Controller”. Pick “Web API 2 Controller – Empty” and press “Add”.

Next, you get to pick the controller name. Here it will be DivisionController.

Now you should have an empty controller looking like this:

The first project run

From here it’s time to run your project, either by pressing F5 or by opening the menu and selecting “Debug” then “Start Debugging”. After a few seconds, a browser window will open and you will see a 403 error page.

Chill, it’s perfectly normal: no method in our DivisionController is defined yet, and access to your project directory is limited by default. At this point, we can already open Postman and create our first test.

It’s Postman time!

The first test

Now, open Postman and create a new tab. Once the tab is created, copy the URL opened by the Visual Studio debugger in Chrome. In my case, it’s “http://localhost:53825” but yours could be different. Paste that URL in your Postman tab like this:

Next, press “Send” and you shall see the Postman version of the result we observed previously in Chrome.

From here, we can start writing tests that will define our API’s behaviour for the default endpoint, which does not exist yet. You can already notice a couple of things we will want to change. First, we don’t want that ugly HTML message to be displayed by default, but something a little more friendly. I guess a “Hello Maths!” message is friendlier, from a certain point of view. Let’s add a test for that.

If you remember the previous article, you know that you are supposed to go to the Tests tab in order to add it. In this case, we will pick the “Response body: Is equal to a string” snippet. You should get some code generated as below:

Next, you will update it to replace “response_body_string” with “Hello Maths!”.

Now that the response test is sorted, let’s add a response code test to validate we should not get that 403 HTTP code. For this, we will use the “Status code: Code is 200” test snippet.
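Put together, the Tests tab ends up containing roughly the following. Note that inside Postman, `responseBody`, `responseCode` and `tests` are injected at run time; the stand-in values below exist only so the snippet can run outside Postman:

```javascript
// Stand-ins for values Postman injects at run time (illustration only):
var responseBody = "Hello Maths!";
var responseCode = { code: 200 };
var tests = {};

// "Status code: Code is 200" snippet
tests["Status code is 200"] = responseCode.code === 200;

// "Response body: Is equal to a string" snippet, edited for our message
tests["Body is correct"] = responseBody === "Hello Maths!";
```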

After sending the request again you can see that both tests failed.

Fix the API to make the tests pass

It is now time to write some code to right this wrong. Go back to Visual Studio to modify the DivisionController. We will add an Index method that returns the message we want to see.

This code basically creates a new response object with the OK status code (200) that we want to get. In this object, we add a StringContent object containing our “Hello Maths!” message. Let’s run the Visual Studio solution again by pressing “F5”.
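For reference, the Index method described above looks roughly like this (a sketch for ASP.NET Web API 2; the Route attribute assumes attribute routing is enabled in your WebApiConfig):

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class DivisionController : ApiController
{
    // Handles GET on the root URL and returns our friendly greeting.
    [HttpGet]
    [Route("")]
    public HttpResponseMessage Index()
    {
        // Build a 200 (OK) response carrying the raw string content,
        // so no JSON quotes are added around the message.
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("Hello Maths!")
        };
    }
}
```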

As you can see, the horrible HTML error page is gone and we see the “Hello Maths!” greeting. Now, if you run that same request in Postman, you will see that our tests pass.

Now save the request in a new collection that we will call “CalculatingWebApiAppveyor” as below.

You should see in the right tab the newly created collection along with the request we just saved.

Implement the division

If you got this far, you’ve done great already, although our API doesn’t do much yet. It’s time to make it useful. We will add a Divide action that takes a dividend and a divisor as parameters and returns the quotient. You can copy the code below and add it to your controller.

You may notice that the code looks simpler than for “Hello Maths!”. Actually, we could simply have returned Ok(“Hello Maths!”) there too. However, that would have returned “Hello Maths!” with the quotes, and our test would not have passed. Now, let’s run the project again and add a test for the division endpoint in Postman.
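A sketch of what that Divide action can look like, using the route tested in the next section (again assuming attribute routing is enabled):

```csharp
using System.Web.Http;

public class DivisionController : ApiController
{
    // GET divisions/dividends/10/divisors/2/_result -> 5
    [HttpGet]
    [Route("divisions/dividends/{dividend}/divisors/{divisor}/_result")]
    public IHttpActionResult Divide(int dividend, int divisor)
    {
        // Ok(...) serialises the integer quotient, so the body is 5, not "5".
        return Ok(dividend / divisor);
    }
}
```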

Test the Division

What we want to do is make sure that our division endpoint actually returns the result of a division. What we will test here is that 10 divided by 2 gives 5. From there, you know that the route to be tested will be “divisions/dividends/10/divisors/2/_result”. Now, create a new tab in Postman and copy the URL from your greetings endpoint. Then, append the route to be tested as below.

Next, we are going to use the “Response body: Is equal to a string” snippet to validate that 10 divided by 2 returns 5. Also, we will add a status check, just because.

If you followed all the steps correctly, you should see both tests pass and the response is indeed 5.

Now, save that last request as “Validate division works” in the CalculatingWebApiAppveyor collection you created.

Finally, you can run your whole collection and you will see all the tests pass green.

Congratulations! You have a fully functional API, as long as divisors are different from zero, with its own Postman collection. A collection that you can run whenever you like to make sure your API is fine. The one issue, though, is that you may not be working alone, nor want to run Postman every time you push a change to GitHub.

There is a way to solve this issue and that’s where Appveyor comes into play. But first, let’s commit and push our changes.

Commit and push your code changes

If you haven’t done it yet, it’s time to commit your changes and push them to your GitHub repository. First, create a new file named .gitignore. You can find more information about what that file does here.

I personally used the PowerShell New-Item command, but there are countless ways to do that.

Then, open this .gitignore file, which is the default one to use for Visual Studio projects, and copy its contents into the file you created.

Now you can commit and push your changes, and eventually move on to Appveyor, thanks to a few commands. Note that you must run these commands from the directory where your solution and .gitignore are.
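If you need a reminder, the commands in question are the usual add/commit/push sequence (the commit message and branch name below are just examples):

```shell
# Stage everything, including the new .gitignore
git add .

# Record the changes locally
git commit -m "Add calculating web API project"

# Send them to the GitHub remote
git push origin master
```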

Once these commands have executed, you should see your solution with the created files on GitHub.

Get your continuous integration swag on

Create an Appveyor CI account

This is probably the simplest part of this tutorial. Simply go to the Appveyor login page (yes, login). From here you can log in with a variety of source-control-related accounts, but pick GitHub.

Once logged in you should land on an empty projects dashboard.

Connect your repository to Appveyor CI

Simply press “New Project” and you will be prompted with a list of repositories you have on your GitHub account.

Select “CalculatingWebApiAppveyor” and press “Add”. After a few seconds, you should see this:

To see how it works, press “New build”. What happens next is that Appveyor downloads your source code from GitHub. Then your source is compiled and, if there are unit tests in your solution, they are run. But for now, you will see something like this:

Are you surprised? Are you entertained? Because I am. Don’t panic, it’s a benign error caused by the fact that Appveyor does not restore a project’s NuGet packages by default. To get rid of the error, go to the settings tab, then to “Build”.

Scroll down until you see the “Before script” option and enable it by selecting “PS”. Now a text box should appear for you to input nuget restore, like below:

Now, press the “Save” button below and go back to your build dashboard and press “New build” again. If everything goes according to plan you should end up with this:

Congratulations again! You now know how to set up a .NET project on Appveyor.

This is more or less where I would have stopped had I gone with my original decision of making this tutorial a two-parter. Since it would not make much sense to stop here considering what’s left, let’s move on to our Postman collection again.

Set up Newman on Appveyor

Create environments

Now that our project, collection, and continuous integration tools are set up, it is time to put our collection to better use: an automated use. To do so, we will update our collection so that it can run both locally and on Appveyor. To achieve that, we will extract the host URLs from our requests and place them in environment files, one to use locally and the other on Appveyor.

First, we will create our localhost and Appveyor environments. I will name mine CalculatingWebApiLocalhost and CalculatingWebApiAppveyor. If you don’t remember how to create environments and modify collections to use their variables, I happen to have written a post about it. You need at least the requests’ host to be extracted into a variable.

Your localhost environment should contain the URL you have used so far. Your Appveyor one will be “http://localhost”. Once done, you should have two environments, each looking like this:

Localhost environment

Appveyor environment

Now your environments are ready, update your collection requests as below.

Greetings request update

Division request update

 

From here, you can open the collection runner to make sure your collection still works and tests still pass.

Save your collection and environment to your project

It’s time to introduce you to Postman’s export feature, because you will now need to move your collection and Appveyor environment into your project. First, let’s export the collection: click on your collection’s menu button.

After pressing “Export”, you should see this:

Make sure that “Collection v2” is selected then press “Export” again. Now, save the collection in your solution folder.

Next, we will export the Appveyor environment. Go to the “Manage environments” menu, then click on the “Download environment” icon for CalculatingWebApiAppveyor.

Then, save your environment to your solution folder.

The last step, but not the least: commit and push your changes. Here is a reminder:

Now our repository is all set! Let’s get back to Appveyor.

Set up Newman on Appveyor

First, go to the Tests tab:

Then, enter these lines after selecting “PS” on the “After tests script” textbox:

The first line installs Newman on your Appveyor container, prevents the dependency warnings and adapts the execution display to Appveyor. The second executes your collection using the environment you created, also adapting the execution display to Appveyor. If you used different filenames for your collection and environment, please update the commands to match them. You should have something like this:
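For reference, the two-line script looks something like the following. This is a sketch: the filenames are Postman’s default export names, and your newman version may use slightly different flags (here, `--silent` quiets npm’s output and `--disable-unicode` keeps newman’s report readable in Appveyor’s console):

```shell
# Install newman globally; --silent keeps npm's warnings out of the build log
npm install --global --silent newman

# Run the collection against the Appveyor environment
newman run CalculatingWebApiAppveyor.postman_collection.json --environment CalculatingWebApiAppveyor.postman_environment.json --disable-unicode
```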

Now, go back to the “Latest build” tab and click on “New build”.

After a few moments, you will see that your build will fail.

Here you can see that Newman actually tells you what went wrong: all your tests failed, and there was a connection error for each of your collection’s requests. If your build fails for different reasons, you may want to go a few steps back and try again. But if your failed build looks like the capture above, you’re good to go.

Setup local deployment on Appveyor

Yes, we are very close to finishing setting up our Postman based continuous integration system. Now, we need to tell Appveyor that we want to package our solution and deploy it locally so that we can run our collections against it.

First, we will enable IIS locally. IIS is a service that allows running any kind of .NET web app or API, though it is not limited to those. To enable IIS, go to the “Environment” settings tab, then click on “Add service” and select “Internet Information Services (IIS)”.

After saving your changes, you will go to the “Build” tab and enable the “Package Web Applications for Web Deploy” option and save again.

That option generates a zip package with the same name as your Appveyor project. What we need to do next is configure Appveyor to deploy that package on the local IIS. To do so, we go to the “Deployment” tab.

Click on “Add deployment” and select “Local Build Server”. Afterward, we will need to add some settings to tell Appveyor where and how to deploy. To do so, press “Add setting” three times then fill each setting to match these values:

  • CalculatingWebApiAppveyor.deploy_website: true
  • CalculatingWebApiAppveyor.site_name: Default Web Site
  • CalculatingWebApiAppveyor.port: 80

Now, you should see something like this:

Remember the PowerShell script we added in the “Tests” section of the settings? We need to move it to the “After deployment script” instead. If we don’t, the build will always fail, since it will try to run our integration tests before deploying our application locally. I will put the script here again in case you don’t feel like scrolling up a bit.

If you followed everything your “Deployment” settings tab should look like this:

Don’t forget to save your changes and to update your “Tests” tab. Now, your “Tests” settings tab should look like that again:

 

After saving it, go back to “Latest build” and press “New Build”. Then, you will see that everything simply works.

Well done!

What’s next?

Now that you know how to set up Newman-powered API tests on Appveyor using GitHub, you can chill and call it a day. However, you can also show off your mastery of CI by adding your project badge to your README file.

Note that Appveyor only deploys when you push commits to your repository, whether it is a direct push or a pull request being merged. Nevertheless, if you have a private Appveyor account, you can enable an option allowing local deployment to run your API tests even on pull requests.

Thanks for reading, I hope you enjoyed reading this as much as I enjoyed writing it. Also, I would like to give a big shout-out to Postman Labs for featuring my previous post in their favourites of March; that was a really nice surprise.

Good luck helping to make this world fuller of future-proof software every day!

NB: If you don’t feel like creating the Web API project and scrolled straight to the end of the post to get the sources, help yourself.

Postman collections: Making API testing great again!

Reading Time: 8 minutes

Turning shaky code into future-proof software

Over the past years we have moved more and more towards web-oriented architectures, connecting to services in order to provide information. Along with the evolution of testing tools and development methodologies, we can now build crazily robust software. However, sometimes we do not write unit tests because of project constraints. The reasons often range from time pressure on a project to laziness, but I am not here to judge.

Still, when you build a web service, there is a way to ensure it works properly after implementation without doing a huge refactoring. I do not endorse skipping unit tests and consider myself a herald of test-driven development. That being said, I am here to offer a solution for those who wrote code far from 100% test coverage. This solution is to build API tests, which is extremely easy using Postman. At the very least, once your API tests are built you can refactor your code bit by bit so that unit tests can be added at a later stage.

You can see this post as the first of my future-proof series, in which I will introduce you to Postman collections and how to build flexible tests with them.

Requirements

This document has been written assuming that you have a basic knowledge of Javascript, JSON and web requests. If you do not, please feel free to visit these to get up to speed:

Using Postman makes things way easier and more pleasant. Download it if it’s not done yet so you can follow along with some examples later.

Creating a scenario

Create a request

Let’s start with something simple: create a new tab in Postman. Then, in the text field containing the placeholder Enter request URL, type “http://echo.jsontest.com/ping/pong”. When it’s done, press “Send” and you should get something like the next screenshot.

Save your request in a new collection

Now that your request is created, you can save it by pressing “Save”. Postman will ask you if you want to create a new collection; enter the collection details and press “Save”. Sounds repetitive, but it’s proof that they remain consistent in terms of UX.

Congratulations, you just created your first Postman collection! If it is not your first, then you just wasted 5 minutes of your life that you will never get back, and even more by reading this whole sentence. If you did it properly, you should see this in your “Collections” tab:

Before moving into the whole Collection Runner thing we will add a test in the first request and create a second request using the response of the first one.

Adding tests

Now click on the “Tests” tab and you will see on the right that there are some test snippets to help you write tests faster. Let’s select two of them: “Status Code: Code is 200” and “Response body: JSON value check”. You should now see this:

As you may notice, Postman tests are simple Javascript with some utility methods and variables that allow you to write simple yet powerful tests. This enables you to verify every bit of your response: you can add tests around the response time, the response code, the content type you receive, etc.

The status code test does not need to change, as the expected response code is 200 here. You need to replace “Your test name” with “Test ping value is pong”, and jsonData.value === 100 with jsonData.ping === "pong". Now you should get this:
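After those edits, the Tests tab contains roughly the following. Inside Postman, `responseBody`, `responseCode` and `tests` are injected at run time; the stand-ins below exist only so the snippet can run standalone:

```javascript
// Stand-ins for values Postman injects at run time (illustration only):
var responseBody = '{"ping": "pong"}';
var responseCode = { code: 200 };
var tests = {};

// "Status Code: Code is 200" snippet, unchanged
tests["Status code is 200"] = responseCode.code === 200;

// "Response body: JSON value check" snippet after our edits
var jsonData = JSON.parse(responseBody);
tests["Test ping value is pong"] = jsonData.ping === "pong";
```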

Now press “Save”, then “Send” and if you followed everything properly you should see the following:

Now you see you got the same response, and you can see “Tests (2/2)” which means that both tests passed. If you click on the Tests tab you will see the labels of the passed tests “Status code is 200” and “Test ping value is pong”:

Congratulations, you ran your first tests on Postman. If not, you wasted again some time of your life, yes it’s truly gone.

Adding a global variable for later use

Now let’s add another request in our collection, but first we will set a global variable from the Ping pong request. Let’s go back to the “Tests” tab of the Ping pong request. In the snippets list on the right select “Set a global variable”.

From there, you need to replace “variable_key” with “pingValue” and “variable_value” with jsonData.ping. Now, if you press “Send” again, the request is sent and the global variable is saved; click on the eye button to see it.

You can see the variable was set so now let’s move on to create the request we will use it in.

Duplicating a request and using global variables

Duplication is quite easy: all you have to do is go to your collection tab, click on the 3-dot button next to your Ping pong request and select “Duplicate”.

Then you can rename the duplicated request “Ting pong”.

Now click on “Ting pong” to see it in the builder, and update the URL from “http://echo.jsontest.com/ping/pong” to “http://echo.jsontest.com/ping/{{pingValue}}”. Putting pingValue between those brackets allows you to access any global or environment variable. This works as long as you access these values from the request URL, headers or body. To access a global variable from the pre-request script or the tests, you use globals.variable_name. Here we will also update the test to retrieve pingValue; to do so, replace “pong” with globals.pingValue.
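Putting the two scripts side by side, this is roughly what the Tests tabs contain. Inside Postman, `postman`, `globals` and `responseBody` are injected at run time; the stand-ins below exist only so the snippet can run standalone:

```javascript
// Stand-ins for objects Postman injects at run time (illustration only):
var globals = {};
var postman = { setGlobalVariable: function (key, value) { globals[key] = value; } };
var responseBody = '{"ping": "pong"}';

// Tests tab of the "Ping pong" request: save the value for later use
var jsonData = JSON.parse(responseBody);
postman.setGlobalVariable("pingValue", jsonData.ping);

// Tests tab of the "Ting pong" request: compare against the saved value
// (in the request URL, you would write {{pingValue}} instead)
var tests = {};
tests["Test ping value is pong"] = jsonData.ping === globals.pingValue;
```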

Now if you run your request again all tests will pass again.

Now if you clear your globals and run again, the ping value test will fail, since the request will send the literal string “%7B%7BpingValue%7D%7D” (the URL-encoded form of “{{pingValue}}”). This happens because you did not set any global or environment variable this time, so the test compares the response against a variable that does not exist, which makes it fail as you can see below.

However, if you run your Ping pong request again, it will set the pingValue global variable, so when you run the Ting pong request again your test will pass, until you clear the variables.

The collection runner, finally

Now it is almost time to play with the collection runner. But first you need to know that collections run requests in the alphabetical order of their titles, so to make sure they run in the order you prefer I strongly advise you to prefix their names with numbers, as below:

Granted, you did not need this here, since alphabetical order already places Ping pong before Ting pong. The same goes for collection folders, which I will not expand on as they are really straightforward to use. If you want multiple scenarios in your collection that should not rely on each other, you would be wise to group your requests into folders. Not only is it much cleaner, but folders can also be run individually if needed. On a 2-request collection this is not an issue, but the last collection I created has 49 requests with hundreds of tests.

So now let’s have a look at that collection runner: on your collection, press the arrow.

You will then see this, obviously the option you will select is “Run”.

The collection runner will then open after a couple seconds. You can simply press “Start Run”:

After the run you will see your collection run results.

Congratulations now you know how to write test scenarios using Postman.

Creating a new environment

Environments are pretty useful for writing collections faster by using variables at any level, from URLs to test values and so on. Using multiple environments is common for complex systems with multiple deployments and/or gateways.

Now we will create a simple environment and set the ping service URL that we will use in both requests.

First of all, click on the gear icon then “Manage environments” > “Add”. Once you get there you can name your environment and set up the values you need.

Here we will name it “Postman tutorial environment” and add a key pongServiceUrl set to the value “http://echo.jsontest.com/” then press “Add”.


Now that you have created your environment, you need to select it from the dropdown.

Once it’s done you can then update the urls from both your requests to use {{pongServiceUrl}} like this:


Now if you go to the runner again the environment is already set. You can press “Start Run” again.

Now you will see the same results as previously but with updated urls.
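To make the substitution behaviour concrete, here is a simplified sketch (not Postman's actual implementation) of how {{...}} placeholders in a URL are resolved against the active environment:

```javascript
// The active environment, as set up earlier in the tutorial.
var environment = { pongServiceUrl: "http://echo.jsontest.com/" };

function resolve(url, vars) {
  return url.replace(/\{\{(\w+)\}\}/g, function (match, name) {
    // Unknown variables are left as-is, which is why an unset variable
    // ends up URL-encoded as %7B%7B...%7D%7D when the request is sent.
    return vars.hasOwnProperty(name) ? vars[name] : match;
  });
}

console.log(resolve("{{pongServiceUrl}}ping/pong", environment));
// → http://echo.jsontest.com/ping/pong
```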

Congratulations! You now have all the tools you need to write fully flexible test collections. Your creativity is your only limit.

P.S. Today I turned 26. No I did not write this post on my birthday but did corrections and added screenshots today. Happy birthday to me!

Brace yourselves, Amazon Go is here!!!

Reading Time: 3 minutes

This is a revolution, or is it?

Yesterday I was told about a revolutionary shop that Amazon will open in Seattle: Amazon Go. I love seeing innovations pop up in a world where it is harder and harder to be amazed. This piece of news, despite interrupting me in the middle of a non-negligible amount of work, made me happy. As a developer, the less I deal with people, the happier I am. It does not make me asocial; on the contrary, it keeps me sane. Obviously we need a certain balance, but this is not the place for that kind of discussion. The main thing that made me happy, despite that shop opening on another continent, relates to a past event. If you are an attentive reader of this blog, it should remind you of a certain hackathon a couple of months ago.

Granted without salt

I loved being part of it: the few days beforehand discussing the topic, trying to figure out what we should build; the brainstorms listing out what we hate and what we love in retail; all the ideas that came out of it. Even the harder bits where we had to weed out ideas to focus on only one, one we could actually build in twenty-four hours. One idea to impress the judges, one idea to conquer them all. One insight stood above the others: we hate waiting. This is how it occurred to us. Scanning items using your phone, finding items within a store and checking out by walking out of the store. Sounds familiar, does it not? Yes, Amazon Go in a nutshell, some would say.

Amazon GO

Obviously it is ludicrous to even think they took inspiration from us, since presenting this idea was not deemed worthy of a podium. However, today we see it launched by a giant. In spite of not getting any recognition at that hackathon, there is some comfort in seeing the world acknowledge the idea we had as groundbreaking. We knew it back then, we know it now. It took us a night to implement a proof of that concept using an iOS app, beacons and GPS tech. From the ad I know they used much more than that for their test run, but then they are Amazon. They are giants, and it feels damn good to wake up to confirmation that your mind works like the giants’. It feels good to see it out before the ideas that beat us at the hackathon, ideas we may never hear about again.

Amazon Go is the future

When we presented that mobile self-checkout idea we thought it was the future; god knows I shouted it everywhere. As I said at first, I am happy to see Amazon Go out there and hope it will be successful and spread all around the world. I am looking forward to a world without the friction of depending on how fast your cashier or the other customers are. I am looking forward to taking full control of my time when I pop into a shop to buy groceries. For those who have not seen it yet, here is the ad they posted.

Going full necromancer on old projects

Reading Time: 2 minutes

Old projects often end in what I call development hell, that odd place where projects with good potential become stale after a release, or die because they arrived too late to a market. Very often personal projects end up there, even when they are open source. Open source really seems like a tool to help spread and share knowledge worldwide, depending on the kind of project. Before open source was democratized by the likes of GitHub, a huge number of personal projects probably took years to be released, when they were not abandoned altogether.

Today I decided to do the only kind of necromancy one should: bringing an old project back to life. A project for which I had already written the model and business logic. I had to create a user interface and build a user experience as sleek as possible. However, I stopped it because I saw the market as crowded, and because I lacked the time. Now I see clearly that this was not as true as I thought.

Here we are: two months without posting here, eleven months without working on a personal development project, and I am back at it. I spent my past weekends alternating between gym, parties and sleep. Luckily, I work in a position where I can keep my brain stimulated. Indeed, when not investigating an issue on one of our live apps or working on our platform features, I am defining development tools and processes to be used at company scale within the next months. Eventually, a lot of cool things will come out of that, and I will definitely post a few related tutorials, schedule permitting.

About the blog: I will try to post more regularly than I have, maybe with a tutorial. In terms of work, I would like to share the video we recorded last week. It is the new company careers video, which I like and not just because I am in it. You can check it out below:

If you made it this far into the post, first I would like to thank you for reading and watching the video. Second, do not abandon your old projects if you are not 100% sure they are dead. Check your old source code, even if it is only to mock your old coding style. In the end you could actually have something worth the hassle.

Retail week hackathon 2016 aftermath

Reading Time: 3 minutes

Retail week hackathon 2016 result

Last week at this same time I was at home playing League of Legends to break away from the frustration of losing at the Retail Week hackathon 2016. I was frustrated because I was, well I still am, convinced that our idea was good enough to win. Actually, I wanted to write a post immediately after to express the mixed feelings I felt that day. On one hand, I loved the experience and the excitement of suiting up as my school-days nerd self. On the other hand, I hated losing in a way that did not feel fair. I discussed the outcome of the hackathon with people around me, and they felt we should have won.

I still cannot believe that the winning idea was a system to book an hour in-store to discuss an item you saw online with an employee, an employee whose job is to sell you said item, especially in 2016. Nowadays we are only clicks away from user reviews from all around the world; you can even find video reviews, at least on YouTube. I guess it made more sense to the judges, otherwise the Retail Week hackathon 2016 winner’s trophy would be on Poq’s trophy shelf.

When we first got the self-checkout idea, we thought the hardest challenge would be having a working prototype. We were so wrong. We had a working prototype 4 hours before the hackathon ended. From there, we spent the rest of the time testing and fixing bugs to ensure the presentation’s success. The presentation did not go perfectly, but the idea and the product were there. To be fair, I think pretty much all the other teams had a much better presentation for lesser ideas, which could be what cost us the gold. When the judges are involved in retail during a fashion event, I guess presentation is key.

The self-checkout idea

We built a self-checkout app that lets customers in a store find the items they want to purchase, with indoor location using Estimote beacons and geolocation to handle both indoor and outdoor app behaviour. The most interesting part is that you can scan items so they are added to your basket, and when you leave the store you are charged automatically. We even built a mini-backend displaying the last paid basket.

We built a solid proof of concept, even though there are some security flaws that are fixable on the operational side. For the security tags, just add a device connected to the store system against which you scan the QR code generated for your order, allowing you to unlock exactly the number of tags you need to remove. Going further, we could use security tags that emit the value of the item’s barcode, to enforce that nobody unlocks something they did not buy. It was a 24-hour hackathon, and still we thought about corner cases.

We focused on bringing people back to the store. I think we showed creativity and innovation using the latest technology. Maybe we did not manage to sell the idea to the judges, but I know this is the future of retail. Walk into a store, pick what you need and go. No more queuing hassle. Basically shoplifting without the criminal aspect.

Learning and progressing

I may go next year, if we put up a team again, and learn from our mistakes. Technical advancement is not the focus; presentation is. Coding the whole night to get a working prototype is not the focus; sugar-coating is. Still, it will remain a special moment to me because I did have fun. The self-checkout will be in your hands in a few years, and I will do my best for it. I went, I saw, I learned; that is probably what I do best, learning. I have learned things my whole life, both at school and out of it. Even now that I have worked for a few years, I still try to learn as many things as possible. Learning is key to evolution; it is the key to becoming a better version of oneself.

A great way of learning is to take part in open source development, looking at other people’s code and taking on challenges. For a few days now I have been helping other developers on community-based websites such as Stack Overflow and GitHub. I had an account on both for some time but did not do much with them. The good part is that, on one hand, I can learn and sharpen my skills by taking on issues, and at the same time I help others. Well, there is not much downside. On Tuesday I submitted my first (non-professional) pull request and it got approved and merged pretty much instantly. It was not much, but it still feels nice; you can check it here. And yesterday I got my first upvotes on a few posts on Stack Overflow, showing that giving time is sometimes enough.

That’s where I will end today’s post before I start spreading on random stuff, thank you for reading.

Menorca, dawn of my Lodgegoing

Reading Time: 2 minutes

Hi everyone,

So much has happened since my last post: me going to Menorca for holidays just after signing with a new company, France losing the Euro final to Portugal. I was watching the game with my girlfriend from a bar called Su Païssa. We were chilling and drinking, having a good time, and then Eder’s goal happened. I was a bit annoyed by how I felt Les Bleus gave the title away to Portugal, but happy for Cristiano Ronaldo. This guy probably never even dreamt he would get a major gold medal with his national team, and I have the utmost respect for the hard work and dedication he puts into the game.

Hard work and dedication are values that I believe should always be rewarded, or at least recognised. Whether it is football or work, if the values that allow moving forward are promoted, a business can grow in a virtuous circle. The more recognition for one’s quality work, the more quality work will follow. It is not the only thing that matters, but it plays a big part. Those are things I deem essential whenever I sit behind a keyboard or stand in front of a whiteboard.

Dedication is what wakes you up every single morning to go get more and more information about the world around you, so you can adapt to it and think about ways to make it better. Hard work is the application of that dedication, whether your work is personal or commercial. Often, after offering that dedication and hard work to one party for a long time, the time comes to move on to a different challenge, a challenge that may be more rewarding on multiple levels.

After a few weeks of searching, I was trying to find an environment that would allow me to express my skillset to the fullest. An environment where I would be able to take on challenges nobody else tackles. An environment where hard work and dedication would be rewarded. An environment where I would be back on track with my planned learning curve. An environment where a highly talented team would help me progress as an engineer, as a developer and as a person. I can happily announce that I found such an environment, as I signed a contract with Poq last Thursday.

Now I am enjoying a well-deserved holiday in Menorca with my girlfriend before going back to London for a last week at Lodgeo. Menorca, a sweet little island near Majorca in Spain. Menorca, where I wake up to the best of views and do not think about anything else, just clearing my head. Menorca, where I am writing this article. You were free to stop reading after the Poq announcement. As a bonus for the patient readers, here is a selfie with my girlfriend having drinks.

Menorca

Swift: One language to rule them all?

Reading Time: 3 minutes
Swift, a language to rule them all?

Two years ago at WWDC, Apple introduced a new programming language to the world: Swift. Bringing a new programming language to life is always a challenge, because even if it is syntactically the most beautiful, a lack of use cases may make it only a nice-to-have in some developers’ minds.

Apple announced it as the future of iOS and OS X apps, which is quite bold since not all developers may learn it, especially with the expansion of cross-platform app development using the likes of Xamarin or Titanium.

However, making the language open source allows the community to port it to multiple platforms, offering the benefits of Java without needing a JVM to run it. I even found out that you can now do scripting using Swift, which opens a lot of doors, even in terms of DevOps.

What first drew my attention to Swift, however, is the summit that occurred a few months ago in London, where Google, Facebook and Uber discussed the language and the growing interest each party has in it. Add another powerhouse like IBM massively supporting it, and you get what smells like an IETF meeting for a new standard.

Facebook and Uber are both companies trying to use cutting-edge technologies to deliver the best services possible to their users, and both are known to use a variety of platforms and languages. About Google I cannot really say, as I do not have direct insight, but based on what I read in “How Google Tests Software” they seem to be in a similar position.

Then comes the big IF: what if Google were to drop Java for Android development in favour of Swift? It might seem unlikely, considering the massive following and use of Android in the developer community, along with the fact that Java is not only one of the most used languages in the world but also taught in pretty much any computer-science-related school.

However, Google may have $8.8 billion reasons to make the big jump, and those sum up in one word: Oracle. Oracles were always bringers of bad news in Ancient Greece, but here it is not about them. It is about the Oracle that acquired Sun Microsystems in January 2010, the same Oracle that is now suing Google only to cash in on its investment. Some even speculate that Oracle purchased Sun only so it could sue Google over the conditions of use of Java. Even though I have not formed an opinion on the matter, it is quite intriguing that the “Oracle VS Google” case began only 7 months after the acquisition.

With this lawsuit hanging over Google’s Android SDK, Swift is more than a valuable option. It is fully open source, and it could draw the iOS developer community to port their apps to Android without heavy refactoring, and without code converters that are not always on point and may not keep up with the latest updates of a language that is about to publish its third version in two years.

If Google were to slowly replace Java with Swift as the first-class language for Android development, it would not happen for a couple of years. But if it turns out to be true, you can say you read it here first and talk about that oracle guy who could predict future tech trends.

Now imagine a world where you can build Android, iOS and OS X apps with only one language. Add the fact that this language compiles on UNIX systems and now even on Microsoft systems, thanks to initiatives like Silver, for Windows development using the .NET and Windows Phone APIs, or Perfect, for easily building RESTful server applications. The possibilities are limitless: desktop clients, servers, scripts. You can do so much, so if you were looking to learn a new language, you know what to pick next: Swift.

Thank you for reading, here is a Taylor Swift video