A few weeks ago, I wrote a post on structuring a Golang app but wrote no tests whatsoever for it. That was deliberate: I wanted to come back today with a refresher of sorts on Postman. When no other testing option is available or practical, Postman is a great help when it comes to APIs. Today I will build a Postman collection while walking you through the various steps needed when writing this sort of test.
Why do we need automated testing?
The biggest convenience of automated testing is that it repeats steps you would otherwise perform manually, with consistent results. If you run your test suite a thousand times over, you will get the same results as long as the software remains the same. This is a fair baseline for writing effective tests: you wouldn't want tests that randomly fail or succeed. How could you trust such tests? You can't, and that's part of why you need to give proper thought to what your tests should be proving. This work begins by figuring out what tests we need and what they need to prove.
What are we testing here?
Here we are working with the todo list app from my previous post. I may extend that codebase in future blog entries, but first I need a way to validate its behaviour. If you remember, we have a few different endpoints:
- create a task (POST /tasks)
- list pending tasks (GET /tasks/pending)
- complete a task (POST /tasks/{task_id}/complete)
- list completed tasks (GET /tasks/completed)
What tests do we need?
Testing these endpoints individually would make sense if we were dealing with a stateless system. Say you call an endpoint that converts a JSON input into an XML output, or performs any similar computation that neither relies on nor creates state in the system. Those you can test individually, since no prior state can affect them. Our task endpoints are not like that. The original state of a task before it enters the system is its non-existence, so let's start with that.
Task original non-existence
Retrieving pending tasks and retrieving completed tasks return the same result only when the system is in a specific state: if there are no tasks whatsoever, both endpoints should return an empty collection of tasks. This is true only at a given point in time, provided that no interaction with the system has occurred yet. We could use that as our first two tests.
However, that test will fail once the system is in use. How can we transform it so that it works for us all the time? We can create a task with a message specific enough that it is virtually impossible for it to duplicate another, and test for its absence in either collection. Validating that task's absence from both the pending and completed task lists is our first test. Using a tool such as JMeter or Postman, we would make an API call to each retrieval endpoint: GET /tasks/pending and GET /tasks/completed.
For an empty system, we could validate that both calls return empty collections. However, we want these tests to work even when the system already holds tasks, both pending and completed, so we really want to focus on the task we create within the scope of our test.
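To make the idea concrete, here is a minimal sketch in plain JavaScript (the data and field names are illustrative, not taken from the actual app): generate a message that is virtually impossible to duplicate, then check that it appears in neither list.

```javascript
// Build a task message that is virtually impossible to duplicate:
// a fixed prefix plus a timestamp and a random suffix.
function uniqueMessage() {
  return `postman-test-task-${Date.now()}-${Math.random().toString(36).slice(2)}`;
}

// True if any task in the list carries the given message.
function containsTask(tasks, message) {
  return tasks.some((task) => task.message === message);
}

// Simulated responses from GET /tasks/pending and GET /tasks/completed,
// already holding unrelated tasks from earlier use of the system.
const pending = [{ id: 1, message: "buy milk" }];
const completed = [{ id: 2, message: "write blog post" }];

const message = uniqueMessage();
console.log(containsTask(pending, message));   // false: not pending
console.log(containsTask(completed, message)); // false: not completed
```

In the collection, the equivalent check lives in the test scripts of the two retrieval calls.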
Task creation
To the inexperienced eye, it could seem odd that this endpoint cannot be tested on its own. Why not just test that when you post you get your successful response code, right? Because a successful response code doesn't necessarily mean that the system is doing what it is designed for. We could call that task creation endpoint, see a successful response code, but then find that no task was actually created. It is possible that the piece of code responsible for storing data failed silently.
How do we remedy this? By calling the retrieval endpoints once more after creating a task, as these are our guides to the system's state. This gives us three calls. First, we post to the creation endpoint and validate that a valid request gets a successful response code, here 201 CREATED. Next, we call the pending tasks endpoint to validate that our task shows up there. Lastly, we check the completed tasks list to make sure our newly created task is not there.
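Inside Postman, the "Create Task" request would carry a test script along these lines. `pm.test`, `pm.expect`, `pm.response`, and `pm.environment` are Postman's standard scripting API; the small `pm` stub at the top exists only so the sketch can run outside Postman, and the response shape (`id`, `message`) is an assumption about the app's JSON output.

```javascript
// --- minimal stand-in for Postman's pm object, so this sketch runs outside Postman ---
const pm = {
  response: {
    code: 201, // as if the POST /tasks call just succeeded
    json: () => ({ id: 42, message: "postman-test-task" }), // assumed response shape
  },
  environment: {
    _store: {},
    set(key, value) { this._store[key] = value; },
    get(key) { return this._store[key]; },
  },
  test(name, fn) { fn(); console.log(`PASS: ${name}`); },
  expect(actual) {
    return { to: { eql(expected) {
      if (JSON.stringify(actual) !== JSON.stringify(expected)) {
        throw new Error(`expected ${expected}, got ${actual}`);
      }
    } } };
  },
};
// --- end stub; everything below reads like a regular Postman test script ---

// "Create Task": a valid POST /tasks must answer 201 CREATED.
pm.test("task creation returns 201 CREATED", () => {
  pm.expect(pm.response.code).to.eql(201);
});

// Remember the created task's id so the follow-up requests
// ("The Created Task is Pending", "... is not Completed") can look it up.
const created = pm.response.json();
pm.environment.set("taskId", created.id);
```

Sharing the created id through an environment variable is what lets one logical test span several calls in the collection.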
Task completion
Following the same logic as previously, we can deduce that validating task completion also takes three calls: one to the completion endpoint, followed by a call validating that the now completed task is no longer part of the pending tasks, and wrapping up with a call ensuring that our test task is part of the completed tasks endpoint response.
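The completion checks follow the same pattern. Sketched in plain JavaScript with illustrative data, the two follow-up calls boil down to asserting that the completed task's id has moved from one list to the other:

```javascript
// Simulated responses of GET /tasks/pending and GET /tasks/completed
// after calling POST /tasks/42/complete.
const pendingAfter = [{ id: 7, message: "unrelated task" }];
const completedAfter = [{ id: 42, message: "postman-test-task" }];

// id of the task this test created and then completed
const taskId = 42;

// True if any task in the list carries the given id.
const hasTask = (tasks, id) => tasks.some((task) => task.id === id);

console.log(hasTask(pendingAfter, taskId));   // false: no longer pending
console.log(hasTask(completedAfter, taskId)); // true: now completed
```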
Now, testing with Postman
So yes, it's been a very long time since I last brought up the magnificent Postman for API testing, even though I still use it regularly. To represent these ideas, I put together a Postman collection. As you may expect, some tests span several calls, as shown below:
- Task original non-existence
  - There is no pending Task
- Task creation
  - Create Task
  - The Created Task is Pending
  - The Created Task is not Completed
- Task completion
  - Complete Task
  - The Created Task is not Pending anymore
  - The Created Task is now Completed
Update 2020-10-29: You can access the Postman collection and environment files right here on GitHub.
Running the collection, you get this result:
Note that you can run this collection several times and the tests will pass as expected. You will also notice that each call is named in a way that accurately describes what we're testing.
Wrapping up on testing
The first step in testing is always figuring out what you want to test. It is the most important part of the process, and more often than not it makes the implementation of your tests much easier. It might sound obvious, but it would surprise you how often I've seen tests that were clearly written by someone who had no idea what they were testing. So yes, that is the key takeaway here. Beyond that, there isn't much more to add: if you clearly define your testing needs, the implementation follows your plan. Ideally you automate, and once automation is in place, you can just profit.
Thanks for reading, and I hope you will make good use of this post. Don't hesitate to share it with anyone you think might need to give it a read.
Cover by Hasan Albari from Pexels