E2E Testing
End-to-end (E2E) testing is a software testing methodology that verifies a complete functional and data application flow, spanning several sub-systems working together from start to end.
These systems are sometimes developed with different technologies, by different teams or organizations, and come together to form a single functional business application. Testing each system in isolation is therefore not sufficient. End-to-end testing verifies the application from start to end, with all of its components working together.
In many commercial scenarios, a modern software system is interconnected with multiple sub-systems. These sub-systems can belong to the same organization or to different ones, and their release cycles may differ from that of the current system. As a result, a failure or fault in any sub-system can adversely affect the whole software system and cause it to fail.
The above illustration is a testing pyramid from Kent C. Dodds' blog, which combines the pyramids from Martin Fowler's blog and the Google Testing Blog.
The majority of your tests sit at the bottom of the pyramid. As you move up the pyramid, the number of tests gets smaller, and the tests become slower and more expensive to write, run, and maintain. Each type of testing varies in its purpose, application, and the areas it is supposed to cover. For a comparative analysis of the different testing types, please see the Unit vs Integration vs System vs E2E Testing document.
We will look into the three categories one by one:
The following actions should be performed as part of building user functions:
List the user-initiated functions of the software system and its interconnected sub-systems.
For each function, keep track of the actions performed as well as the input and output data.
Find the relations, if any, between different user functions.
Find out the nature of the different user functions, i.e. whether they are independent or reusable.
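The inventory described above can be sketched as a small catalog structure. The `UserFunction` fields and the example login/checkout functions below are hypothetical, chosen only to illustrate tracking actions, inputs, outputs, reuse, and relations:

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry for a user function; field names are
# illustrative, not taken from any specific tool.
@dataclass
class UserFunction:
    name: str
    actions: list[str]            # ordered actions the user performs
    inputs: list[str]             # input data the function consumes
    outputs: list[str]            # output data the function produces
    reusable: bool = False        # can other functions reuse this one?
    depends_on: list[str] = field(default_factory=list)  # related functions

# Example: a checkout flow built from two interrelated user functions.
login = UserFunction(
    name="login",
    actions=["open login page", "submit credentials"],
    inputs=["username", "password"],
    outputs=["session token"],
    reusable=True,
)
checkout = UserFunction(
    name="checkout",
    actions=["add item to cart", "pay"],
    inputs=["session token", "item id", "payment details"],
    outputs=["order confirmation"],
    depends_on=["login"],
)

catalog = {f.name: f for f in (login, checkout)}
```

Capturing dependencies explicitly (here, `checkout` depends on `login`) makes it easier to see which functions must be exercised together in an end-to-end scenario.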
The following activities should be performed as part of building conditions based on user functions:
For each user function, prepare a set of conditions.
Timing, data conditions, and other factors that affect user functions can be considered as parameters.
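As a minimal sketch of such a condition set, the entries below enumerate timing and data parameters for a hypothetical "login" user function; the IDs, descriptions, and parameter values are all invented for illustration:

```python
# Hypothetical condition set for a "login" user function. Each entry
# records which parameter (data or timing) the condition exercises.
login_conditions = [
    {"id": "C1", "description": "valid credentials", "data": {"password": "correct"}},
    {"id": "C2", "description": "invalid password", "data": {"password": "wrong"}},
    {"id": "C3", "description": "session expires", "timing": {"idle_seconds": 1800}},
]

# Group the conditions by the kind of parameter they exercise.
timing_conditions = [c for c in login_conditions if "timing" in c]
data_conditions = [c for c in login_conditions if "data" in c]
```

Keeping conditions as structured data rather than free text makes it straightforward to turn each one into a test case later.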
The following factors should be considered when building test cases:
For every scenario, create one or more test cases to test each functionality of the user functions. Where possible, automate these test cases through the standard CI/CD build pipeline, tracking each successful and failed build in Azure DevOps (AzDO).
Enlist every single condition as a separate test case.
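One way to keep each condition as a separate, individually reported test case is to generate one test method per condition. The sketch below assumes a trivial in-process `login` function standing in for a real sub-system call, so the pattern stays runnable; the condition names and values are hypothetical:

```python
import unittest

# Hypothetical system under test: a trivial login check standing in
# for a real sub-system call.
def login(username: str, password: str) -> bool:
    return password == "correct"

# One condition per entry; each becomes its own test case below.
CONDITIONS = [
    ("valid_password", "alice", "correct", True),
    ("invalid_password", "alice", "wrong", False),
    ("empty_password", "alice", "", False),
]

class LoginConditionTests(unittest.TestCase):
    pass

def _make_test(username, password, expected):
    def test(self):
        self.assertEqual(login(username, password), expected)
    return test

# Generate a separate test method for each condition, so every
# condition passes or fails (and is reported) on its own.
for name, username, password, expected in CONDITIONS:
    setattr(LoginConditionTests, f"test_{name}",
            _make_test(username, password, expected))

if __name__ == "__main__":
    unittest.main()
```

Because each condition surfaces as its own test in the runner's output, a CI/CD pipeline can report exactly which conditions passed or failed on every build.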
Like any other testing, E2E testing goes through formal planning, test execution, and closure phases.
E2E testing is done with the following steps:
Business and Functional Requirement analysis
Test plan development
Test case development
Production-like environment setup for the testing
Test data setup
Decide exit criteria
Choose the testing methods most applicable to your system. For definitions of the various testing methods, please see the Testing Methods document.
System Testing should be complete for all the participating systems.
All subsystems should be combined to work as a complete application.
A production-like test environment should be ready.
Execute the test cases
Register the test results and decide on pass and failure
Report the Bugs in the bug reporting tool
Re-verify the bug fixes
Test report preparation
Evaluation of exit criteria
Test phase closure
Tracking quality metrics gives insight into the current status of testing. Some common metrics of E2E testing are:
Test case preparation status: Number of test cases ready versus the total number of test cases.
Test progress: Number of test cases executed at a consistent frequency, e.g. weekly, versus a target number of test cases in the same time period.
Defect status: This metric represents the status of the defects found during testing. Defects should be logged into a defect tracking tool (e.g. the AzDO backlog) and resolved as per their severity and priority. Therefore, the percentage of open and closed defects, grouped by severity and priority, should be calculated to track this metric. AzDO dashboard queries can be used to track it.
Test environment availability: This metric tracks the duration for which the test environment is used for end-to-end testing versus its scheduled allocation duration.
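The four metrics above are simple ratios. The counts in this sketch are made-up example numbers, shown only to make the calculations concrete:

```python
# Compute a percentage, guarding against a zero denominator.
def percentage(part: int, whole: int) -> float:
    return round(100.0 * part / whole, 1) if whole else 0.0

# Test case preparation status: ready vs. total planned.
prepared, planned = 45, 60
preparation_status = percentage(prepared, planned)

# Test progress: executed this week vs. the weekly target.
executed_this_week, weekly_target = 18, 25
progress = percentage(executed_this_week, weekly_target)

# Defect status: share of open defects (severity shown for grouping).
defects = [("high", "closed"), ("high", "open"), ("low", "closed")]
open_pct = percentage(sum(1 for _, s in defects if s == "open"), len(defects))

# Test environment availability: hours used vs. hours scheduled.
used_hours, scheduled_hours = 36, 40
availability = percentage(used_hours, scheduled_hours)
```

In practice these numbers would come from the test management and defect tracking tools (e.g. AzDO queries) rather than hard-coded values.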