Integration testing of backend microservices at TextNow
One of the key deliverables for Quality Engineers working with the backend team at TextNow is to add automated integration and end-to-end tests to complement the unit tests written by the backend engineers.
Executing these tests gives the team more confidence in the quality of the services we build. Because the automated tests double as regression tests, the team also gets quick feedback whenever a change breaks existing functionality.
The approach is to select a service unit like monetization, a group of services that handles purchasing of premium features (e.g., locking in a user's number for a year), and test its functionality by simulating the way clients interact with the APIs of the microservices that make up the service unit. For example, we test the microservices that make up monetization by simulating API requests for purchasing a subscription, then verifying that the responses are what we expect and that the service updates the user's capabilities after a purchase the way we expect it to.
We build a local test environment where each of the services and its associated databases is built as a Docker container, and then write automated tests that exercise the APIs of the microservices. These tests verify that the various component services that make up monetization work together as expected. For third-party services our services interact with (Apple App Store, Google Play Store, etc.), we mock the external dependency by creating a service that simulates its responses, and build it as a Docker container (“dockerize” it) so it forms part of the test environment. (See the diagram below for the Monetization test environment.)
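To make the environment layout concrete, a docker-compose file for such a setup might look roughly like the sketch below. The service names, database engine, and directory layout are all assumptions for illustration, not our actual configuration:

```yaml
version: "3.8"
services:
  iap-service:            # service that handles App/Play Store receipt verification
    build: ./iap
    depends_on:
      - monetization-db
      - appstore-mock
      - playstore-mock
  monetization-db:        # backing database for the monetization services
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: test
  appstore-mock:          # dockerized simulator of App Store responses
    build: ./mocks/appstore
  playstore-mock:         # dockerized simulator of Play Store responses
    build: ./mocks/playstore
  tnserver-mock:          # mock of the legacy TN Server (see below)
    build: ./mocks/tnserver
```

The key design point is that every dependency, real or mocked, is just another container, so the whole environment can be brought up on any machine with Docker installed.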
Diagram 1 – overview of test approach
At TextNow we have a legacy backend system called “TN Server” that parts of the Monetization services make calls to. Since the monetization calls to TN Server were few at the time (about four), the decision was made to create a TN Server mock that handled these calls, as “dockerizing” TN Server and all its dependencies was too complex an exercise just to test the Monetization service.
Diagram 2 – Monetization virtual test environment
Most of our microservices are developed in Go, so we chose to write our automated tests in Go as well, which enables better collaboration between the backend developers and test developers. We use Bazel as our build tool, and to run all automated unit and integration tests. We also decided to adopt the BDD (behavior-driven development) approach when writing our integration/e2e tests and chose the godog package. The reasoning behind using BDD was that non-technical members of the team, or those not familiar with the code base, could look at the test scenarios and easily understand what was being tested. Another key objective is to have the integration tests run as part of Continuous Integration (CI).
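For readers unfamiliar with how godog tests fit into a Bazel build, a BUILD file for such a suite might look roughly like this sketch (the target name, the godog dependency label, and the directory layout are assumptions, not our actual build files):

```python
# BUILD.bazel (illustrative)
load("@io_bazel_rules_go//go:def.bzl", "go_test")

go_test(
    name = "monetization_integration_test",
    srcs = glob(["*_test.go"]),
    data = glob(["features/*.feature"]),  # gherkin feature files consumed by godog
    deps = ["@com_github_cucumber_godog//:godog"],
    tags = ["integration"],  # lets CI include or exclude this suite explicitly
)
```

Tagging the target lets the same `bazel test` invocation that runs unit tests either pick up or skip the slower integration suites.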
Phase 1 – Proof of concept (Monetization)
With the approach and toolset decided, the next step was implementation. One key challenge was authentication, since the monetization tests make client requests through our public APIs. This is where the TN Server mock came in handy: it let us simulate valid and invalid session responses when the sessionID in a request was authenticated. Any calls to TN Server during a workflow were handled by the mock. This gave us the flexibility to test edge cases and error conditions that would otherwise have been particularly challenging to reproduce.
For the App Store and Play Store mocks, we generate receipt responses based on the receipt “tokens” in the client API requests. Real receipt tokens are encoded, and our services have no way of knowing their contents until they are verified by the App Store or Play Store. So in tests we replace the receipt token with a customized string, and the mocks generate receipt responses, based on the content of that string, like those the App Store and Play Store send back to the IAP service (the service that handles the response from the App or Play Store). This again gave us the flexibility to test several types of products and edge cases.
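A minimal sketch of this token-to-receipt mapping is below. The token format, field names, and product IDs are assumptions made for illustration; only the status codes follow Apple's documented verifyReceipt conventions:

```go
package main

import (
	"fmt"
	"strings"
)

// receipt loosely mirrors the kind of reply a store verification
// endpoint sends back to the IAP service; field names are illustrative.
type receipt struct {
	ProductID string
	Status    int  // 0 = valid; non-zero follows App Store error conventions
	AutoRenew bool
}

// receiptForToken decodes a customized test token of the (assumed)
// form "<productID>:<state>" and fabricates the matching receipt,
// so one mock can cover many products and edge cases.
func receiptForToken(token string) receipt {
	parts := strings.SplitN(token, ":", 2)
	r := receipt{ProductID: parts[0], Status: 0, AutoRenew: true}
	if len(parts) == 2 {
		switch parts[1] {
		case "expired":
			r.Status = 21006 // App Store code: receipt valid but subscription expired
			r.AutoRenew = false
		case "invalid":
			r.Status = 21003 // App Store code: receipt could not be authenticated
		}
	}
	return r
}

func main() {
	fmt.Printf("%+v\n", receiptForToken("yearly_number_lock:expired"))
}
```

Because the test controls the token string, a single scenario table can drive the mock through valid purchases, expired subscriptions, and unverifiable receipts without any real store interaction.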
There was always close collaboration between the developers and the test engineers when writing the tests. The developers were always on hand to answer questions about the services and show us sample receipt responses from the App Store and Play Store, which helped us model our responses from the mocks.
In phase 1, we did not have the tests “hooked up” to CI, so the integration tests were run locally on a developer's or test engineer's machine using a shell script (see sample below) to build the containers and start the tests.
Startup script for integration tests
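A minimal sketch of such a startup script is shown below; the compose file name, test package path, and build tags are assumptions for illustration, not the actual script:

```shell
#!/usr/bin/env bash
# Build the service and mock containers, run the integration tests,
# then tear the environment down again.
set -euo pipefail

COMPOSE_FILE=${COMPOSE_FILE:-docker-compose.monetization.yml}

# Build images and start the virtual test environment in the background.
docker compose -f "$COMPOSE_FILE" up -d --build

# Run the integration tests; capture the exit code so cleanup always runs.
status=0
go test ./integration/... -tags=integration || status=$?

# Tear down containers and volumes regardless of the test outcome.
docker compose -f "$COMPOSE_FILE" down -v

exit "$status"
```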
Sample feature file using gherkin language for BDD tests
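A representative feature file, in the spirit of what the post describes, might look like the sketch below (the scenario wording, product names, and capability names are illustrative, not our actual suite):

```gherkin
Feature: Subscription purchase
  Verify that purchasing a premium subscription updates the user's capabilities

  Scenario Outline: Purchase a subscription with a store receipt
    Given a user with userID <userID>
    When the user submits a purchase request with receipt token "<receiptToken>"
    Then the response status should be <status>
    And the user's capabilities should include "<capability>"

    Examples:
      | userID  | receiptToken       | status | capability  |
      | user123 | number_lock_yearly | 200    | number_lock |
      | user456 | ad_free_monthly    | 200    | ad_free     |
```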
Each step in the Scenario Outline maps to a corresponding function in the test runner file, and using Examples enables us to test various permutations of a particular test scenario.
To illustrate, the statement Given a user with userID <userID> maps to the following step and function in the test runner file.
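The original step definition is not reproduced here; the sketch below shows the shape such a mapping takes. In godog the binding is registered with ctx.Step on a ScenarioContext, which matches the Gherkin line against a regular expression and passes the captured groups to the function. The step function body and the in-memory user store are assumptions for illustration:

```go
package main

import (
	"fmt"
	"regexp"
)

// users is a stand-in for the real test fixtures seeded by the step.
var users = map[string]bool{}

// aUserWithUserID is the step function that the Gherkin line
// `Given a user with userID <userID>` maps to. In godog it would be
// registered as:
//   ctx.Step(`^a user with userID (\w+)$`, aUserWithUserID)
func aUserWithUserID(userID string) error {
	if userID == "" {
		return fmt.Errorf("empty userID")
	}
	users[userID] = true // seed the fixture for later steps
	return nil
}

func main() {
	// A minimal illustration of the regex matching godog performs:
	pattern := regexp.MustCompile(`^a user with userID (\w+)$`)
	step := "a user with userID user123" // Examples row substituted in
	if m := pattern.FindStringSubmatch(step); m != nil {
		if err := aUserWithUserID(m[1]); err != nil {
			panic(err)
		}
		fmt.Println("step matched, user seeded:", m[1])
	}
}
```

Each captured group in the expression becomes an argument to the step function, which is how the Examples table drives different permutations through the same step code.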
Running tests this way was obviously not ideal, as it relied on developers remembering to run the tests before checking in their code, or on the test engineers remembering to run them regularly. On the plus side, though, we were able to demonstrate that our test approach worked. We continued to add integration tests for the monetization services and, in the process, increased our test coverage.
Phase 2 – Complex service unit (Messaging)
For this phase, a more complex service unit (messaging) was chosen. Because of the complexity of the various component services that make up messaging, the backend developers took on the task of dockerizing all the services and building out the mocks for the third-party services used in the messaging pipeline. This would become our integration testing framework, which would be “hooked up” to CI. See the diagram below.
Diagram 3 – Messaging virtual environment
Once the developers had finished building the components of the integration test framework, enabled the tests to run under our Bazel build tool, and had the tests running in CI with every pull request, it was handed over to the test engineers to write the integration tests and extend the functionality of the mocks where applicable. We started by writing integration tests for outgoing (outbound flow) messages, as all the framework components for that flow had been built. To test incoming (inbound flow) messages, we also had to dockerize the TN Server legacy system, as this was the only way to cover the full end-to-end flow: verifying that an incoming message reaches the intended TN user, and the outbound flow where one TN user sends a message to another. Once this final piece was added, we were able to write automated e2e tests for both outgoing and incoming message workflows.
It has been an interesting journey so far, and I'm really enjoying the process of discovering new and better ways of doing things. I've learnt a lot about how we handle inbound and outbound messages here at TextNow, and how we handle in-app purchases and subscriptions, among other things.
By no means have we been able to write integration tests for every piece of functionality, but we have covered the key priority areas for the most part.
We now have integration/e2e tests running in CI covering a core TextNow service (messaging) as well as other backend services, and our team of dedicated backend test engineers has grown. Working closely with the backend developers has really helped my learning process as they’ve always been available to pair with and resolve challenging issues.
I’m excited for what we’re going to be doing going forward and what the next iteration will be. As you know, we never rest on our laurels and always strive to make things better and more efficient. Challenge accepted!