Automated Testing Glossary
Jul 20, 2023
Common terms in automation testing and what they mean.
API (Application Programming Interface)
A set of rules and protocols that allows different software applications to communicate and interact with each other.
API Testing
Testing the functionality, reliability, and security of APIs to ensure they perform as expected and meet the application's requirements.
Acceptance Testing
Evaluating a software application's compliance with business requirements and determining whether it is ready for deployment and use by end-users.
ATDD (Acceptance Test Driven Development)
An approach where acceptance tests are written before the code is developed, helping to ensure the software meets the desired business outcomes.
Accessibility Testing
Evaluating a software application's usability and inclusivity for users with disabilities to ensure it complies with accessibility standards.
Actual Result
The outcome obtained after executing a test case, compared with the expected result to identify any discrepancies.
Ad Hoc Testing
Informal and unplanned testing carried out without predefined test cases, usually used for exploratory testing or quick checks.
Agile Testing
Testing practices integrated into Agile development methodologies, emphasizing continuous testing, collaboration, and frequent feedback.
Automation Testing
Utilizing automated scripts and tools to execute tests, reducing manual effort and enhancing test efficiency and accuracy.
Big Bang Testing
A testing approach where all components or modules are tested simultaneously without prior integration testing.
Black Box Testing
Testing a software application's functionality without knowing its internal code or structure.
Bug
A defect or flaw in a software application that causes it to produce incorrect or unexpected results.
Canary Testing
A deployment strategy where a new version of the software is tested with a small subset of users before a full release.
Chaos Testing
Testing a system's ability to withstand unpredictable and adverse conditions to ensure its stability and recovery capabilities.
Change Request
A formal proposal for alterations or enhancements to a software application's functionality or features.
Code Coverage
A metric that measures the proportion of code lines or branches exercised by test cases.
Code Review
A systematic peer examination of a software codebase to identify defects, security vulnerabilities, and maintainability issues.
Compatibility Testing
Assessing a software application's ability to function correctly across different environments, devices, and configurations.
Concurrency
The number of tests that can be run simultaneously.
Content Testing
Testing the accuracy, relevance, and integrity of content displayed in a software application.
Continuous Testing
Integrating testing into the continuous integration and continuous delivery (CI/CD) pipeline to ensure rapid and reliable software releases.
Cross Browser Testing
Evaluating a software application's functionality and compatibility across different web browsers.
CSS Selector
A pattern used to identify and select HTML elements on a web page to apply styling or interact with them.
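As a small illustration of what a selector like `button.primary` does, the sketch below matches elements against a simple `tag.class` pattern using only Python's standard library. `SelectorParser` and the sample HTML are invented for this example, not part of any real selector engine.

```python
from html.parser import HTMLParser

# Minimal sketch: collect elements matching a simple "tag.class" selector.
class SelectorParser(HTMLParser):
    def __init__(self, tag, cls):
        super().__init__()
        self.tag, self.cls = tag, cls
        self.matches = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as (name, value) pairs; class may hold several names.
        classes = dict(attrs).get("class", "").split()
        if tag == self.tag and self.cls in classes:
            self.matches.append(dict(attrs))

html = ('<div><button class="primary" id="save">Save</button>'
        '<button class="secondary" id="cancel">Cancel</button></div>')
p = SelectorParser("button", "primary")  # behaves like "button.primary"
p.feed(html)
print(p.matches)  # [{'class': 'primary', 'id': 'save'}]
```

Real tools (browsers, Selenium, Playwright) implement the full CSS grammar; this only shows the matching idea.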
Data-Driven Testing
Executing test cases with multiple sets of input data to validate various scenarios and conditions.
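A minimal data-driven sketch in plain Python: one test routine runs once per input row. The `is_valid_email` function and its cases are illustrative assumptions, not from any library.

```python
# Hypothetical function under test.
def is_valid_email(s):
    return "@" in s and "." in s.split("@")[-1]

# (input, expected) rows drive the same test logic.
cases = [
    ("user@example.com", True),
    ("no-at-sign.com", False),
    ("user@nodot", False),
]
for value, expected in cases:
    assert is_valid_email(value) == expected, value
print("all cases passed")
```

Frameworks such as pytest offer the same idea via parametrized tests.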
Data Flow Testing
Assessing the flow and processing of data within a software application to identify defects and vulnerabilities.
Debugging
The process of identifying and fixing issues and defects in software code.
Defect
An issue or flaw that affects a software application's functionality, performance, or security.
Defect Management
The process of capturing, tracking, and resolving defects discovered during testing.
Deliverable
A product or artifact created during the testing process, such as test plans, test cases, or test reports.
DevOps Testing
Integrating testing into the DevOps culture and practices to ensure seamless collaboration between development and operations teams.
Dynamic Testing
Assessing a software application's behavior and performance during runtime.
End-to-End Testing
Evaluating the entire software application's functionality and performance, including all integrated components.
Error
A human mistake or misunderstanding that leads to a defect in a software application.
Error Logs
Recorded information about errors and exceptions that occur during the execution of a software application.
Emulator
Software or hardware that replicates the behavior of another system to facilitate testing.
Execution
Running a test case or test suite to evaluate a software application's behavior.
Exhaustive Testing
Testing all possible combinations of inputs and scenarios to ensure maximum coverage.
Expected Result
The predefined outcome that a test case should produce, used for comparison with the actual result.
Exploratory Testing
A dynamic and informal testing approach that relies on the tester's intuition and expertise to discover defects.
Extreme Programming (XP)
An Agile development methodology that emphasizes continuous feedback, customer collaboration, and incremental development.
Flaky App / Site
A software application or website that produces inconsistent and unpredictable results during testing.
Flaky Test
A test that yields different results in different test executions despite no changes in the application's code.
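A common source of flakiness is uncontrolled nondeterminism. The sketch below shows the pattern and one fix, controlling the random source so every run behaves the same; the function names are illustrative.

```python
import random

# Flaky pattern: asserting on unseeded randomness passes or fails run to run.
def flaky_roll():
    return random.randint(1, 6) == 6  # outcome varies per execution

# Deterministic fix: control the source of nondeterminism (seed or inject it).
rng = random.Random(42)   # a fixed seed makes the sequence repeatable
value = rng.randint(1, 6)
assert 1 <= value <= 6    # stable assertion, identical on every run
print(value)
```

The same principle applies to clocks, network calls, and test ordering: inject or stub the unstable dependency.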
Front-End Testing
Assessing the functionality and user experience of a software application's front-end components.
Functional Integration Testing
Testing the interaction and compatibility of different functional modules within a software application.
Functional Testing
Evaluating a software application's compliance with specified functional requirements.
Future Proof Testing
Ensuring that a software application's design and code can accommodate future changes and requirements.
Glass Box Testing
A testing approach where testers have access to the internal code and structure of the software application.
Incremental Testing
A testing approach where software components or modules are tested and integrated incrementally.
Inspection
A formal review process where stakeholders analyze and assess a software application's quality and compliance with requirements.
Integration Testing
Assessing the interaction and functionality of integrated components or modules within a software application.
Iteration
A repetitive and incremental development cycle in Agile methodologies.
Interface Testing
Evaluating the interactions and data exchanges between different software components or systems.
Localization Testing
Assessing a software application's functionality and user experience for a specific locale or target market.
Maintainability
The ease with which a software application can be modified or updated to meet new requirements or fix defects.
Maintenance Testing
Testing a software application after modifications or updates to ensure its continued performance and stability.
Manual Testing
Executing test cases manually without the use of automated testing tools.
Negative Testing
Evaluating a software application's behavior under conditions of invalid or unexpected inputs.
Non-Functional Testing
Assessing a software application's performance, security, usability, and other non-functional aspects.
POM (Page Object Model)
A design pattern used in test automation to represent the elements and interactions of web pages.
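A minimal Page Object sketch: locators and interactions for one page live in a single class, so tests never repeat raw selectors. `LoginPage` and `FakeDriver` are invented stand-ins for this example, not Selenium APIs.

```python
class FakeDriver:
    """Stands in for a real WebDriver; records interactions."""
    def __init__(self):
        self.typed, self.clicked = {}, []

    def type(self, selector, text):
        self.typed[selector] = text

    def click(self, selector):
        self.clicked.append(selector)

class LoginPage:
    # Locators live in one place, so tests don't repeat raw selectors.
    USER, PASS, SUBMIT = "#user", "#pass", "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASS, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
print(driver.clicked)  # ['#submit']
```

If a locator changes, only `LoginPage` needs updating, not every test that logs in.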
Performance Benchmark
A metric used to evaluate a software application's performance against predefined criteria.
Performance Testing
Assessing a software application's response time, scalability, and stability under different workloads.
Postcondition
The state or conditions expected to exist after the execution of a test case.
Priority
The level of importance assigned to a defect or requirement.
Quality Metrics
Quantitative measurements used to assess the quality and effectiveness of the testing process.
Quality
The degree to which a software application meets specified requirements and user expectations.
Quality Assurance (QA)
The process of ensuring that a software application meets quality standards and follows best practices.
Quality Gate
A predetermined criterion or threshold that must be met before a software application can progress to the next development phase.
Retesting / Rerunning
Executing previously failed test cases after defects have been fixed to verify their resolution.
Regression Testing
Repeating test cases to ensure that changes or new code do not introduce defects or negatively impact existing functionality.
Release Testing
Evaluating a software application's readiness for deployment and release.
Reliability Testing
Assessing a software application's ability to perform consistently and reliably over time.
Sanity Testing
A quick and basic test to verify whether a software application's critical functions are working as expected.
Scalability Testing
Assessing a software application's ability to handle increased workloads and users.
Screenshot Testing
Capturing and comparing screenshots of a software application to identify visual discrepancies.
SEO Testing
Evaluating a website's search engine optimization to improve its visibility and ranking in search results.
Severity
The impact or criticality of a defect on the software application's functionality or user experience.
Shift Left Testing
Shifting testing activities to occur earlier in the development process to identify and resolve defects sooner.
Smoke Testing
A basic test to ensure that critical functionalities of a software application are functioning before proceeding with detailed testing.
Software Risk Analysis
Identifying and assessing potential risks and their impact on a software application's development and testing.
Software Development Life Cycle
The process and phases involved in developing a software application from concept to deployment.
Software Quality
The overall level of excellence and adherence to requirements in a software application.
Software Quality Management
The process of overseeing and implementing quality measures throughout the software development and testing lifecycle.
Software Testing
The process of evaluating a software application's functionality, performance, and quality to identify defects and ensure it meets requirements.
Software Testing Life Cycle
The phases and activities involved in planning, executing, and managing software testing.
Static Testing
Assessing a software application's code and documentation without executing the program.
Stress Testing
Evaluating a software application's behavior and performance under extreme load conditions.
System Integration Testing
Testing the interaction and compatibility of the entire system, including all integrated components.
Test Approach
The overall strategy and guidelines for conducting software testing.
Test Automation
Using automated tools and scripts to execute tests and compare results.
Test Case
A set of preconditions, inputs, and expected outcomes used to validate specific functionality in a software application.
Test Class
A collection of related test cases or test methods.
Test Comparison
The process of comparing actual test results with expected results to identify discrepancies.
Test Coverage
The extent to which a software application has been tested, measured by the percentage of code or requirements covered.
Test Id (Selector)
A unique identifier used to locate and interact with elements in test automation.
Test Infrastructure
The hardware, software, and resources required to conduct testing activities.
TDD (Test-Driven Development)
Writing test cases before writing code, with the aim of guiding development and ensuring testability.
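A tiny TDD sketch, assuming a hypothetical `slugify` requirement: the test is written first (and initially fails), then the minimal implementation is added to make it pass.

```python
# Step 1: this test existed before slugify did, so it failed at first.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2: the minimal implementation written to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()
print("test passes")
```

The red-green-refactor cycle then repeats: add the next failing test, make it pass, clean up.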
Test Data
The input values and conditions used to execute test cases.
Test Environment
A setup that replicates the production environment for testing purposes.
Test Execution
Running test cases to verify the functionality of a software application.
Software or tool used to perform testing activities, such as test automation or defect tracking.
Test Log
Recording test activities, results, and any issues encountered during testing.
Test Management
Planning, coordinating, and controlling testing activities throughout the project.
Test Observability
The ability to monitor and analyze a software application's behavior during testing.
Test Plan
A document outlining the approach, scope, objectives, and schedule of testing activities.
Test Process
The sequence of activities involved in conducting testing.
Test Process Improvement
Identifying and implementing changes to enhance the testing process and its outcomes.
Test Policy
A formal statement defining the organization's approach to software testing.
Testing Pyramid
A testing strategy that prioritizes a large base of unit tests, followed by fewer integration tests and even fewer end-to-end tests.
Test Report
Documentation summarizing the test results and outcomes of testing activities.
Test Runner
Software used to execute automated test cases and manage test suites.
Test Script
Detailed documentation describing test cases, inputs, and expected outcomes.
Test Strategy
An overall plan for conducting software testing based on project goals and constraints.
Test Suite
A collection of related test cases designed to execute together.
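Sketched with Python's built-in unittest, assuming two illustrative test classes: a suite collects related cases from several sources and runs them as one unit.

```python
import unittest

class CheckoutTests(unittest.TestCase):
    def test_total(self):
        self.assertEqual(sum([2, 3]), 5)

class RefundTests(unittest.TestCase):
    def test_refund_not_negative(self):
        self.assertGreaterEqual(max(0, -5), 0)

# Build one suite out of both test classes and execute it together.
loader = unittest.TestLoader()
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(CheckoutTests))
suite.addTests(loader.loadTestsFromTestCase(RefundTests))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun)  # 2
```

Most test runners build such suites automatically through discovery; explicit suites are useful for curated subsets like smoke or regression packs.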
User Acceptance Testing
Testing a software application from the end user's perspective to ensure it meets their needs and requirements.
UI Testing
Evaluating the functionality, usability, and responsiveness of a software application's user interface.
Unit Test Framework
A set of tools and conventions for writing and executing unit tests.
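The conventions such a framework provides (fixtures, test discovery by name, assertion helpers) can be sketched with Python's built-in unittest; the `StackTests` class is an illustrative example.

```python
import unittest

class StackTests(unittest.TestCase):
    def setUp(self):
        self.stack = []  # fresh fixture created before every test method

    def test_push_then_pop_returns_last_item(self):
        self.stack.append(3)
        self.assertEqual(self.stack.pop(), 3)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            self.stack.pop()

# Run the tests for this class and report how many executed.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(StackTests)
)
print(result.testsRun)  # 2
```

Other ecosystems follow the same shape (JUnit in Java, NUnit in .NET, pytest in Python).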
Unit Testing
Assessing individual units or components of a software application in isolation.
Use Case
A description of interactions between users and a software application to achieve specific goals.
Use Case Testing
Testing a software application's functionality based on various use case scenarios.
Usability Testing
Evaluating a software application's ease of use and user-friendliness.
Visible Text (Selector)
A method of identifying web elements using the visible text they contain.
Validation
Confirming that a software application meets specified requirements and user needs.
Visual Regression Testing
Identifying visual discrepancies and changes between different versions of a software application.
White Box Testing
Assessing a software application's internal code and structure to validate its functionality and logic.
Web Automation Testing
Automating test execution and validation of web applications.
Web Performance Testing
Evaluating the speed, responsiveness, and efficiency of web applications.
Web Testing
Evaluating the functionality, usability, and compatibility of web applications.
XPath (Selector)
A method of identifying web elements using their XML Path location.
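A small XPath sketch using the limited XPath subset supported by Python's standard xml.etree module; the document and ids are invented for this example.

```python
import xml.etree.ElementTree as ET

# A tiny document standing in for a page's markup.
doc = ET.fromstring(
    "<form>"
    "<button id='save'>Save</button>"
    "<button id='cancel'>Cancel</button>"
    "</form>"
)

# Locate the button whose id attribute is 'save', by path rather than position.
save = doc.find(".//button[@id='save']")
print(save.text)  # Save
```

Browser automation tools accept richer XPath expressions (axes, functions, text matching) than this stdlib subset.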