Web-based tool for optimizing software testing

Why we developed our own tool
Over the years, we tried various tools to support manual testing, but none proved satisfactory. They were often too extensive, with so many functions that using them became complicated, or they simply did not work properly. This led us to develop a tool that meets our expectations exactly and offers precisely the functions we need.
Insights into how the tests work
Specifications based on behavior-driven development (BDD) are used for software testing. BDD is an agile software development methodology built on collaboration between developers, testers, and non-technical stakeholders. Because the desired behavior of the software is described clearly and comprehensibly in natural language, it can be understood by all parties involved. We use the simple description language Gherkin, which makes it possible to describe a wide range of scenarios with very few rules and structured wording. The focus is on describing, as simply and informally as possible, the scenarios that capture the functional behavior of a software feature as concrete examples.

Several scenarios form a feature, which is recorded in a feature file. Testers use these feature files to test the software.
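As a hedged illustration, a feature file could look like the following sketch; the feature name and steps are invented here, not taken from a real project:

```gherkin
# login.feature -- a hypothetical feature file containing two scenarios
Feature: User login

  Scenario: Successful login with valid credentials
    Given a registered user with valid credentials
    When the user submits the login form
    Then the dashboard is displayed

  Scenario: Rejected login with a wrong password
    Given a registered user
    When the user submits the login form with a wrong password
    Then an error message is displayed
```

Each `Scenario` is one concrete example of the feature's behavior, written with the small fixed vocabulary (`Given`, `When`, `Then`) that keeps Gherkin readable for non-technical stakeholders.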
Carrying out a test run
Before a test run can take place, testers must be able to access the feature files. They then select the relevant scenarios and put them into a sensible order for testing. Testing can then begin, and the results are documented. This is where the web-based tool comes in: the feature files are all loaded into the tool automatically and are immediately available, listed one below the other in a clear layout.

Figure 2 shows how a test run is carried out. During a test run, the feature files are reviewed individually and a result is recorded for each one. A result can be “passed,” “failed,” or “skipped.” When a test run is completed, the data is saved automatically so that the results remain available afterwards. Automatically loading feature files and automatically saving test runs is intended to simplify and speed up the process.
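The recording step above can be sketched as a small data model. This is a minimal sketch under assumptions, not the tool's actual implementation; the names `TestRun`, `record`, and `save` are invented for illustration:

```python
# Sketch: record one result per feature file and persist the run.
# The three allowed outcomes come directly from the article.
import json
from dataclasses import dataclass, field, asdict

RESULTS = {"passed", "failed", "skipped"}

@dataclass
class TestRun:
    name: str
    results: dict = field(default_factory=dict)  # feature file -> result

    def record(self, feature_file: str, result: str) -> None:
        if result not in RESULTS:
            raise ValueError(f"unknown result: {result}")
        self.results[feature_file] = result

    def save(self, path: str) -> None:
        # In the real tool, saving happens automatically; here it is
        # modeled as an explicit dump to a JSON file.
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

run = TestRun("nightly run")
run.record("login.feature", "passed")
run.record("search.feature", "failed")
run.save("run.json")
```

Restricting the result to a fixed set keeps the stored data consistent, which is what makes automatic saving and later reporting straightforward.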
Other features of the tool
Since not all scenarios in the feature files are tested manually, and it usually makes sense to test scenarios in a specific order, a test plan can be created. The feature files and scenarios relevant for manual testing are selected using filters, and the order in which the scenarios are to be run through during testing can be configured via sorting. Descriptive texts can also be attached to the test plan and to individual scenarios, for example to give testers detailed instructions. This makes it possible to start recurring software tests from the same test plan, so that a ready-made configuration is available for repeated tests.
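The filter-then-sort idea behind test plans can be illustrated as follows. This is a hedged sketch: `Scenario`, `TestPlan`, and the tag names are assumptions for illustration, not the tool's real API:

```python
# Sketch: build a test plan by filtering scenarios via tags,
# then putting them into an explicit execution order.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    tags: set
    description: str = ""   # detailed instructions for the tester

@dataclass
class TestPlan:
    title: str
    description: str = ""   # instructions for the whole plan
    scenarios: list = field(default_factory=list)

    def add_matching(self, available, tag: str) -> None:
        # Filter: keep only scenarios relevant for manual testing.
        self.scenarios += [s for s in available if tag in s.tags]

    def reorder(self, names: list) -> None:
        # Sorting: run the scenarios in the configured order.
        by_name = {s.name: s for s in self.scenarios}
        self.scenarios = [by_name[n] for n in names]

available = [
    Scenario("login", {"manual", "smoke"}),
    Scenario("export", {"automated"}),
    Scenario("search", {"manual"}),
]
plan = TestPlan("Release test", "Run on the staging system")
plan.add_matching(available, "manual")
plan.reorder(["search", "login"])
```

Because the plan is plain data, the same configuration can be stored and reused for recurring test runs, as the article describes.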

The software tests are usually carried out on various devices, and their use can be recorded during the test run for correct documentation. For this purpose, devices can be added, removed, and edited via device management.
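Device management as described here amounts to simple create, edit, and remove operations. A minimal sketch, with all names (`DeviceManager` and its methods) invented for illustration:

```python
# Sketch: maintain the pool of test devices that can be
# recorded against a test run for documentation.
class DeviceManager:
    def __init__(self):
        self.devices = {}  # device name -> attributes

    def add(self, name: str, **attrs) -> None:
        self.devices[name] = attrs

    def edit(self, name: str, **attrs) -> None:
        self.devices[name].update(attrs)

    def remove(self, name: str) -> None:
        del self.devices[name]

mgr = DeviceManager()
mgr.add("Pixel 7", os="Android 14")
mgr.edit("Pixel 7", os="Android 15")  # device was updated
```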

Conclusion
Overall, feedback from the testers shows that the current features are well received and provides valuable input for future development. We intend to keep optimizing the tool, in particular by adding the option to save results at scenario level and direct error reporting via an issue tracker.