2021-01-08

30 000+ daily tests for quality and security

Humans are fallible, especially in monotonous tasks. One extremely monotonous task is functionality testing of applications. Human-based functional testing can easily miss bugs and implementation errors, and we do not want that. Testing also requires intricate knowledge of the application itself to verify the results you are seeing.

With G-REX we have therefore taken the next step and moved to automated testing. As a bonus, we can in the future concentrate on exploratory testing instead of tedious regression testing, because the latter is now automated.

What is automated testing?

Automated testing can be divided into several categories:

  • Unit tests are small tests within the code that check that a piece of code performs as expected.
  • Integration testing checks that different parts of the code flow (for example, interfaces between applications) can communicate as expected.
  • Functional testing verifies that functionality works as expected and that the right error codes are thrown for erroneous operations.
  • Performance testing verifies that the code responds within the desired time frame.
  • Load testing ensures that functionality keeps working even when the load on the system is high.
  • Security testing verifies that security standards are followed, and that information is only accessible to those who are privy to that information.
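To make the first categories concrete, here is a minimal sketch of a unit test with a functional-style error check. The `calculate_interest` function and its behavior are invented for illustration; they are not part of G-REX.

```python
def calculate_interest(principal: float, rate: float) -> float:
    """Hypothetical business-logic function, used only for illustration."""
    if principal < 0:
        raise ValueError("principal must be non-negative")
    return principal * rate

# Unit test: verifies one small piece of code in isolation.
def test_returns_expected_amount():
    assert calculate_interest(100.0, 0.5) == 50.0

# Functional-style check: an erroneous call raises the right error.
def test_rejects_negative_principal():
    try:
        calculate_interest(-1.0, 0.5)
    except ValueError:
        return
    raise AssertionError("expected ValueError for negative principal")

test_returns_expected_amount()
test_rejects_negative_principal()
```

In a real code base these tests would typically live in a test framework such as unittest or pytest and run on every build.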
Alex Björkholm

Alex Björkholm leads the backend development of our new system G-REX. Alex wants to ensure pragmatic software design and great user experience by building APIs that change the industry standard forever.

Want to discuss the topic? Connect with Alex on LinkedIn or post your comment below!

Tests can be configured either as end-to-end tests, which mimic the user's actions, or as tests of only a certain part of the logic. Unit testing tests a single part of the data flow, while integration testing tests the connections between different parts of the data flow (for example, between an API and a database).
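As a hedged illustration of that distinction, the sketch below integration-tests a small lookup function against a real database engine (an in-memory SQLite instance standing in for the production database). All names are invented for the example.

```python
import sqlite3

def get_username(conn: sqlite3.Connection, user_id: int) -> str:
    # The code under test: reads from whatever database it is given.
    row = conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    if row is None:
        raise KeyError(user_id)
    return row[0]

# Integration test: exercises the connection between the code and a real
# database engine, not just the function's logic in isolation.
def test_get_username_against_database():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    assert get_username(conn, 1) == "alice"

test_get_username_against_database()
```

A pure unit test of the same function would instead stub out the database entirely and check only the function's own logic.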

How much are we testing?

Since starting with automated testing, the number of automated tests has grown continuously, and we are now running around 30 000 tests per day (each of which includes a performance test). But that is not all. This figure does not include the unit tests that run on each code build or during development. Adding the roughly 2 000 unique unit tests gives a total of about 32 000 automated tests daily.

The number of tests in G-REX will continue to grow: partly because there are still tests to be implemented, and partly because each time we spot a bug in the system, we do not only fix the bug but also make sure a test covers the erroneous scenario. This methodology ensures the quality of the end user experience.
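That bug-fix workflow can be sketched as follows: the fix lands together with a regression test that pins the formerly erroneous scenario, so the same bug cannot silently return. The `paginate` helper and its past off-by-one bug are hypothetical examples, not G-REX code.

```python
def paginate(items, page_size):
    """Split items into fixed-size pages.

    Hypothetical history: an earlier version dropped the trailing partial
    page; the fix now ships together with a regression test for it.
    """
    pages = []
    for start in range(0, len(items), page_size):
        pages.append(items[start:start + page_size])
    return pages

# Regression test covering the formerly erroneous scenario: a trailing
# partial page must not be dropped.
def test_partial_last_page_is_kept():
    assert paginate([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

test_partial_last_page_is_kept()
```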

These quality controls are extremely important since the business logic is diverse; covering all the different use cases with manual testing is almost impossible. With automated testing we can verify that security, performance, and functionality work as expected. The best bonus is that we can move away from the tedious task of manual regression testing, let the automated tests run instead, and concentrate on actually improving the system and doing exploratory testing to find flaws.

How can we manage this then?

Running 30 000+ tests is not something one does manually. We therefore rely heavily on automated pipelines, via continuous integration and continuous deployment, to achieve these test volumes. Each time a developer changes code or creates new functionality and pushes it towards the main code repository, the incoming code is built, tested, and quality-checked before it is merged into the main code repository. The merged code is then automatically built and deployed to a testing environment, where we run scheduled, automated tests. To further enhance the security and quality of the code, the developer also runs the automated tests in her or his local environment, and the code is peer-reviewed and double-reviewed manually.
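The gating described above can be sketched as a sequence of stages where each must pass before the next runs; the stage names are illustrative, not G-REX's actual pipeline configuration.

```python
def run_pipeline(stages):
    # Each stage is a callable returning True on success; the pipeline
    # stops at the first failure, so broken code never reaches the merge
    # or deployment steps.
    for name, stage in stages:
        if not stage():
            return f"pipeline failed at: {name}"
    return "pipeline passed: code merged and deployed"

result = run_pipeline([
    ("build", lambda: True),
    ("unit tests", lambda: True),
    ("quality checks", lambda: True),
])
```

In practice these stages are defined in the CI/CD system's own configuration rather than in application code; the sketch only shows the fail-fast ordering.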

So, what does all this give us?

Well, confidence, mainly. We can rely on the functionality of the G-REX system, knowing that it works as intended. And best of all – no more manual regression testing weeks at the office!