API Dependent TestCase

Overview

In scriptless automation, dependent test cases allow for sequential execution of API calls where the output of one test is a prerequisite for another. This document describes how to configure and utilize dependent test cases in a scriptless automation environment.

Configuring Dependent Test Cases

Dependent test cases are defined in CSV templates, where one test case is marked as a prerequisite for another. This configuration ensures that tests are executed in a specific order, allowing for the passing of data from one test to the next.

Example

Consider two test cases, POST_Sanity.csv and GET_Sanity.csv, where GET_Sanity.csv depends on POST_Sanity.csv.

POST_Sanity.csv

This file defines a POST request and stores values for subsequent use:

DEPENDANT_TEST_CASE,NONE,
END_POINT,https://api-generator.retool.com/7kbSLy/data,
METHOD,POST,
...
BODY:KEY,Column 1,
BODY:VALUE,PrecisionTestAutomation,
RESPONSE:CODE,201,
RESPONSE:JSON_PATH,NONE,'Column 1',id
RESPONSE:EXPECTED_VALUE,NONE,PrecisionTestAutomation,
RESPONSE:STORE_VALUE,NONE,Name,ID

The POST request returns an id and a 'Column 1' value, which RESPONSE:STORE_VALUE stores as ID and Name for later use.

GET_Sanity.csv

This file defines a GET request that relies on data from POST_Sanity.csv:

DEPENDANT_TEST_CASE,POST,
END_POINT,http://api-generator.retool.com/7kbSLy/data/ApiGlobalVariables:ID,
METHOD,RELAX_GET,
...
RESPONSE:CODE,200,
RESPONSE:JSON_PATH,NONE,id,'Column 1'
RESPONSE:EXPECTED_VALUE,NONE,1,ApiGlobalVariables:Name
RESPONSE:STORE_VALUE,NONE,NONE,NONE

The DEPENDANT_TEST_CASE field is set to POST, indicating that POST_Sanity.csv must be executed before this GET request.
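The ApiGlobalVariables: prefix in the endpoint is how a test references values stored by its dependency. A minimal sketch of that substitution step is shown below; the class and field names are illustrative, not the framework's actual internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VariableResolver {
    // Values captured via RESPONSE:STORE_VALUE in the dependent test case
    static final Map<String, String> apiGlobalVariables = new HashMap<>();

    // Replace every "ApiGlobalVariables:<key>" token with its stored value
    static String resolve(String raw) {
        Matcher m = Pattern.compile("ApiGlobalVariables:(\\w+)").matcher(raw);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = apiGlobalVariables.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // POST_Sanity.csv stored these via RESPONSE:STORE_VALUE,NONE,Name,ID
        apiGlobalVariables.put("ID", "60");
        apiGlobalVariables.put("Name", "PrecisionTestAutomation");

        String endpoint = resolve("http://api-generator.retool.com/7kbSLy/data/ApiGlobalVariables:ID");
        System.out.println(endpoint);
        // prints http://api-generator.retool.com/7kbSLy/data/60
    }
}
```

With ID stored as 60 by the POST, the GET endpoint in GET_Sanity.csv resolves to .../data/60, matching the execution log shown later on this page.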

Functionality of Dependent Test Cases

Sequential Execution with Independent Reporting

  • Sequential Execution: Dependent test cases are executed in a specific order. For instance, a POST request may be set as a prerequisite (dependent test case) for a GET request.

  • Independent Reporting: Despite their sequential execution, each test case is reported separately. This means the results for both the GET and the POST requests will appear independently in the test report, maintaining clarity and distinction between the different test cases.

Execution Flow

During execution, the automation framework first runs POST_Sanity.csv, storing the necessary values (ID and Name). Then, it executes GET_Sanity.csv, utilizing the stored ID and Name.
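The ordering logic can be pictured with the following sketch (hypothetical names, not the framework's real runner): when a CSV declares a DEPENDANT_TEST_CASE other than NONE, the prerequisite CSV is executed first so its stored values exist before the main test runs.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class DependentRunner {
    // Maps a DEPENDANT_TEST_CASE value to the CSV that defines it (illustrative)
    static final Map<String, String> dependencies = Map.of("POST", "POST_Sanity.csv");
    static final List<String> executionOrder = new ArrayList<>();

    static void run(String csv, String dependantTestCase) {
        // Execute the prerequisite first so its stored values are available
        if (!"NONE".equals(dependantTestCase)) {
            executionOrder.add(dependencies.get(dependantTestCase));
        }
        executionOrder.add(csv);
    }

    public static void main(String[] args) {
        run("GET_Sanity.csv", "POST");  // DEPENDANT_TEST_CASE,POST
        System.out.println(executionOrder);  // [POST_Sanity.csv, GET_Sanity.csv]
    }
}
```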

Execution Results

The output of the execution might look like this:

  1. POST request is executed, creating a new entry and storing ID and Name.

  2. GET request is executed, retrieving data based on the ID from the POST request.

Below is the execution output from the IDE console:

----------------------------------INSIDE BEFORE TEST----------------------------------
----------------------------------GET Started----------------------------------
----------------------------------POST Started----------------------------------
Request method:	POST
Request URI:	https://api-generator.retool.com/7kbSLy/data
Proxy:			<none>
Request params:	<none>
Query params:	<none>
Form params:	<none>
Path params:	<none>
Headers:		Accept=*/*
				Content-Type=application/json
Cookies:		<none>
Multiparts:		<none>
Body:
{
    "Column 1": "PrecisionTestAutomation"
}
{
    "Column 1": "PrecisionTestAutomation",
    "id": 60
}
----------------------------------POSTEnded----------------------------------
Request method:	GET
Request URI:	http://api-generator.retool.com/7kbSLy/data/60
Proxy:			<none>
Request params:	<none>
Query params:	<none>
Form params:	<none>
Path params:	<none>
Headers:		Accept=*/*
Cookies:		<none>
Multiparts:		<none>
Body:			<none>
{
    "id": 60,
    "Column 1": "PrecisionTestAutomation"
}
----------------------------------GETEnded----------------------------------
Error while converting image to base64 URL: /Users/amupraba/Desktop/PrecisionTestAutomation/Backend/maven/config/logo.jpg (No such file or directory)
Error while converting image to base64 URL: /Users/amupraba/Desktop/PrecisionTestAutomation/Backend/maven/config/logo.jpg (No such file or directory)
[main] WARN com.slack.api.methods.RequestFormBuilder - The top-level `text` argument is missing in the request payload for a chat.postMessage call - It's a best practice to always provide a `text` argument when posting a message. The `text` is used in places where the content cannot be rendered such as: system push notifications, assistive technology such as screen readers, etc.
[main] WARN com.slack.api.methods.RequestFormBuilder - Additionally, the attachment-level `fallback` argument is missing in the request payload for a chat.postMessage call - To avoid this warning, it is recommended to always provide a top-level `text` argument when posting a message. Alternatively, you can provide an attachment-level `fallback` argument, though this is now considered a legacy field (see https://api.slack.com/reference/messaging/attachments#legacy_fields for more details).

===============================================
Scriptless
Total tests run: 1, Passes: 1, Failures: 0, Skips: 0
===============================================

Skipping Validation in Dependent Test Cases

  • Validation Skip: When a test case runs as a dependency (e.g., a POST request acting as the prerequisite for a GET request), its validation steps are skipped. This is because the primary purpose of the dependent test is to set up the prerequisites for the subsequent test.

  • Exclusion from Report: As a result of skipping the validation, the validation results of the dependent test case are not included in the final report. The focus is on the successful execution of the dependent test as a setup step rather than its individual validation outcomes.
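The two rules above can be sketched as a simple gate (illustrative names only; only the main test's validations reach the report, while the dependency's execution is still recorded):

```java
import java.util.ArrayList;
import java.util.List;

public class ValidationGate {
    static final List<String> report = new ArrayList<>();

    // runAsDependency is true when the case runs as a prerequisite for another test
    static void execute(String testCase, boolean runAsDependency) {
        // The request itself always runs, since the setup is still needed
        report.add(testCase + ": executed");
        if (runAsDependency) {
            return;  // validations skipped and excluded from the report
        }
        report.add(testCase + ": RESPONSE:EXPECTED_VALUE validated");
    }

    public static void main(String[] args) {
        execute("POST_Sanity.csv", true);   // prerequisite: no validation entry
        execute("GET_Sanity.csv", false);   // main test: validated and reported
        report.forEach(System.out::println);
    }
}
```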

Conclusion

The use of dependent test cases in scriptless automation serves as a method to ensure the correct sequence of test execution while maintaining the independence of each test case in terms of reporting. This approach provides a clear and organized structure for executing interdependent API tests, where one test's output is required for the execution of another. Additionally, the mechanism of skipping validation in dependent test cases optimizes the testing process, focusing on the necessary setup actions without cluttering the reports with unnecessary validation details. This feature enhances the efficiency and clarity of comprehensive API testing, especially in complex scenarios involving multiple interdependent endpoints.

Sample Report

AutomationReport.html (18KB)