API Dependent Test Case
Overview
In scriptless automation, dependent test cases allow for sequential execution of API calls where the output of one test is a prerequisite for another. This document describes how to configure and utilize dependent test cases in a scriptless automation environment.
Configuring Dependent Test Cases
Dependent test cases are defined in CSV templates, where one test case is marked as a prerequisite for another. This configuration ensures that tests are executed in a specific order, allowing for the passing of data from one test to the next.
Example
Consider two test cases, POST_Sanity.csv and GET_Sanity.csv, where GET_Sanity.csv depends on POST_Sanity.csv.
POST_Sanity.csv
This file defines a POST request and stores values for subsequent use. The POST request generates an ID and a Name, which are stored for later use.
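The exact columns depend on the CSV template in use; the sketch below is only a hypothetical illustration of such a file, and every column name, endpoint, and value shown here is an assumption rather than the framework's actual header set.

```csv
TEST_CASE_NAME,HTTP_METHOD,ENDPOINT,REQUEST_BODY,STORE_VALUES
POST_Sanity,POST,/api/v1/items,"{""name"": ""Sample Item""}","ID,Name"
```

In this sketch, the STORE_VALUES column indicates which response fields (ID and Name) are captured for reuse by dependent tests.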
GET_Sanity.csv
This file defines a GET request that relies on data from POST_Sanity.csv. The DEPENDANT_TEST_CASE field is set to POST, indicating that POST_Sanity.csv must be executed before this GET request.
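As above, the layout below is only a sketch: the DEPENDANT_TEST_CASE column and its POST value come from this document, while the remaining columns and values are illustrative assumptions.

```csv
TEST_CASE_NAME,HTTP_METHOD,ENDPOINT,DEPENDANT_TEST_CASE,EXPECTED_STATUS
GET_Sanity,GET,/api/v1/items/{ID},POST,200
```

The {ID} placeholder stands for the value stored by POST_Sanity.csv; the actual substitution syntax may differ in the framework.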
Functionality of Dependent Test Cases
Sequential Execution with Independent Reporting
Sequential Execution: Dependent test cases are executed in a specific order. For instance, a POST request may be set as a prerequisite (dependent test case) for a GET request.
Independent Reporting: Despite their sequential execution, each test case is reported separately. This means the results for both the GET and the POST requests will appear independently in the test report, maintaining clarity and distinction between the different test cases.
Execution Flow
During execution, the automation framework first runs POST_Sanity.csv, storing the necessary values (ID and Name). Then it executes GET_Sanity.csv, utilizing the stored ID and Name.
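Conceptually, the flow resembles the Python sketch below. This is not the framework's implementation; the endpoints, response field names, and function names are assumptions used only to illustrate the order of execution, the value hand-off, and the separate report entries.

```python
import requests

BASE_URL = "https://example.com/api/v1"  # assumed base URL for illustration


def run_post_sanity():
    """Dependent (prerequisite) test: create an entry and store ID and Name."""
    response = requests.post(f"{BASE_URL}/items", json={"name": "Sample Item"})
    body = response.json()
    # Response field names are assumed for illustration; capture the values
    # produced by the POST for use in the dependent GET test.
    return {"ID": body["id"], "Name": body["name"]}


def run_get_sanity(stored_values):
    """Main test: retrieve the entry using the stored ID."""
    return requests.get(f"{BASE_URL}/items/{stored_values['ID']}")


def execute_suite():
    report = []

    # 1. POST_Sanity runs first purely as a setup step.
    stored_values = run_post_sanity()
    report.append({"test_case": "POST_Sanity", "executed": True})

    # 2. GET_Sanity runs next, consuming the stored ID (and Name if needed).
    get_response = run_get_sanity(stored_values)
    report.append({"test_case": "GET_Sanity", "status_code": get_response.status_code})

    # Each test case is reported as its own entry, even though execution was chained.
    return report
```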
Execution Results
The output of the execution might look like this:
The POST request is executed, creating a new entry and storing ID and Name.
The GET request is executed, retrieving data based on the ID from the POST request.
Below is the execution output from the IDE:
Skipping Validation in Dependent Test Cases
Validation Skip: When a test case is marked as a dependent (e.g., a POST request being a dependent for a GET request), the validation steps in the dependent test case are skipped, as illustrated in the sketch below. This is because the primary purpose of the dependent test is to set up the prerequisites for the subsequent test.
Exclusion from Report: As a result of skipping the validation, the validation results of the dependent test case are not included in the final report. The focus is on the successful execution of the dependent test as a setup step rather than its individual validation outcomes.
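A minimal sketch of this skip rule is shown below, assuming a simple runner where each test case is a dictionary; the field names and helper logic are assumptions for illustration, not the framework's API.

```python
import requests


def run_test_case(test_case, is_dependent=False, stored_values=None):
    """Execute one test case; skip validation when it runs as a dependent setup step."""
    # Substitute any stored values (e.g. {ID}) into the URL before sending the request.
    url = test_case["url"].format(**(stored_values or {}))
    response = requests.request(test_case["method"], url)

    if is_dependent:
        # A dependent test case only prepares prerequisites for the next test,
        # so its validation is skipped and excluded from the final report.
        return {"test_case": test_case["name"], "executed": True, "validation": "skipped"}

    # A regular test case is fully validated and its result is reported.
    passed = response.status_code == test_case["expected_status"]
    return {
        "test_case": test_case["name"],
        "executed": True,
        "validation": "passed" if passed else "failed",
    }
```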
Conclusion
The use of dependent test cases in scriptless automation serves as a method to ensure the correct sequence of test execution while maintaining the independence of each test case in terms of reporting. This approach provides a clear and organized structure for executing interdependent API tests, where one test's output is required for the execution of another. Additionally, the mechanism of skipping validation in dependent test cases optimizes the testing process, focusing on the necessary setup actions without cluttering the reports with unnecessary validation details. This feature enhances the efficiency and clarity of comprehensive API testing, especially in complex scenarios involving multiple interdependent endpoints.
Sample Report