QA Testing Glossary

Key Takeaways
- Comprehensive collection of essential QA and testing terminology
- Clear definitions and explanations for both beginners and professionals
- Organized into 30 numbered entries for easy reference
- Includes modern testing concepts and methodologies
In the world of software quality assurance (QA) and testing, understanding key terminology is essential for effective communication and collaboration. Whether you're a beginner or an experienced QA professional, having a clear grasp of common QA/testing terms will help you navigate discussions, planning, and execution with confidence.
In this blog post, we'll define and explain some of the most frequently used terms in QA and software testing. This glossary will serve as a handy reference for testers, developers, and stakeholders alike.
1. Quality Assurance (QA)
Definition: Quality Assurance refers to the systematic process of ensuring that a product meets specified requirements and standards before it reaches the end user. It focuses on preventing defects through process improvements rather than just finding them.
Key Point: QA is about building quality into the development process, not just testing for bugs after the fact.
2. Software Testing
Definition: Software testing is the process of evaluating a software application to ensure it behaves as expected, meets user requirements, and is free of defects.
Key Point: Testing can be manual or automated and includes functional, non-functional, and regression testing.
3. Test Case
Definition: A test case is a set of conditions or variables under which a tester determines whether a system satisfies requirements or works correctly.
Example: A test case for login functionality might include steps like entering valid credentials and verifying redirection to the dashboard.
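To make this concrete, here is a minimal Python sketch of how that login test case might be captured in a structured form; the field names and IDs are illustrative, not a standard schema.

```python
# A minimal sketch of the login test case above in a structured form.
# The field names and IDs are illustrative, not a standard schema.
login_test_case = {
    "id": "TC-001",
    "title": "Valid login redirects to dashboard",
    "preconditions": ["A registered user account exists"],
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the 'Log in' button",
    ],
    "expected_result": "The user is redirected to the dashboard",
}

print(f"{login_test_case['id']}: {login_test_case['title']}")
for number, step in enumerate(login_test_case["steps"], start=1):
    print(f"  Step {number}: {step}")
print(f"  Expected: {login_test_case['expected_result']}")
```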
4. Test Plan
Definition: A test plan is a document that outlines the scope, objectives, resources, schedule, and approach for testing activities.
Key Components:
- Scope of testing
- Test objectives
- Test deliverables
- Risks and mitigation strategies
5. Test Scenario
Definition: A test scenario is a high-level description of what needs to be tested. It represents a specific situation or use case that the tester will validate.
Example: "Verify that users can reset their password using the 'Forgot Password' feature."
6. Test Script
Definition: A test script is a set of instructions written in a programming language to automate the execution of test cases.
Key Point: Test scripts are commonly written for automation testing tools like Selenium or Cypress.
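Here is a minimal Selenium (Python) sketch of what such a script might look like for the login example; the URL, element locators, and credentials are hypothetical placeholders.

```python
# A minimal Selenium sketch of an automated login script (Python bindings).
# The URL, element locators, and credentials are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium Manager resolves the browser driver
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa.user@example.com")
    driver.find_element(By.ID, "password").send_keys("correct-password")
    driver.find_element(By.ID, "login-button").click()

    # Fail loudly if the expected redirect did not happen.
    assert driver.current_url.endswith("/dashboard"), "Login did not reach the dashboard"
finally:
    driver.quit()
```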
7. Defect/Bug
Definition: A defect (or bug) is any deviation from the expected behavior of the software. It occurs when the actual result does not match the expected result.
Severity Levels:
- Critical: Blocks core functionality.
- High: Affects major features but doesn't block usage.
- Medium: Moderate impact on functionality or usability; a workaround is usually available.
- Low: Cosmetic issues like typos or UI inconsistencies.
8. Regression Testing
Definition: Regression testing ensures that recent code changes—such as bug fixes, enhancements, or new features—haven't introduced defects into previously working parts of the application.
Key Point: It's often automated to save time and improve efficiency.
9. Smoke Testing
Definition: Smoke testing is a quick, shallow check to verify that the most critical functionalities of an application work after a new build.
Analogy: Think of it as a quick health check on each new build to confirm the application isn't fundamentally broken before deeper testing begins.
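As an illustration, one common approach is to tag critical-path checks with a pytest marker and run only those against each new build; the URL below is a hypothetical placeholder, and the "smoke" marker would be registered in your pytest configuration.

```python
# A minimal smoke check tagged with a pytest marker. BASE_URL is a
# hypothetical application URL. Register the marker in pytest.ini
# ([pytest] markers = smoke: critical-path checks) and run the smoke
# suite with: pytest -m smoke
import pytest
import requests

BASE_URL = "https://example.com"

@pytest.mark.smoke
def test_homepage_responds():
    # The most basic check: the application's entry point answers at all.
    assert requests.get(BASE_URL, timeout=5).status_code == 200
```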
10. Sanity Testing
Definition: Sanity testing is a focused check performed after minor changes to confirm that specific functionalities work as expected.
Difference from Smoke Testing: Smoke testing is broader, while sanity testing is more targeted.
11. Exploratory Testing
Definition: Exploratory testing is an unscripted approach where testers explore the application dynamically to uncover unexpected issues.
Key Point: It relies on the tester's creativity and intuition rather than predefined test cases.
12. Acceptance Testing
Definition: Acceptance testing is performed to determine whether the software meets business requirements and is ready for release.
Types:
- User Acceptance Testing (UAT): Conducted by end-users.
- Alpha Testing: Performed internally before release.
- Beta Testing: Conducted by a limited group of external users.
13. Black Box Testing
Definition: Black box testing evaluates the functionality of an application without knowing its internal code structure.
Key Point: The tester focuses only on inputs and outputs, not how the application processes them.
14. White Box Testing
Definition: White box testing examines the internal structure, logic, and code of an application.
Key Point: It requires programming knowledge and is often performed by developers.
15. Gray Box Testing
Definition: Gray box testing combines elements of both black box and white box testing. The tester has partial knowledge of the internal workings of the application.
Use Case: Often used in integration testing to validate interactions between components.
16. Unit Testing
Definition: Unit testing verifies individual units or components of the code in isolation to ensure they function correctly.
Key Point: Developers typically perform unit tests during the coding phase.
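A minimal pytest sketch of a unit test; calculate_discount() is a made-up function standing in for the unit under test.

```python
# A minimal pytest unit test for a small, isolated function.
# calculate_discount() is a made-up example, not from any real codebase.
import pytest

def calculate_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_is_applied():
    assert calculate_discount(200.0, 25) == 150.0

def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        calculate_discount(200.0, 150)
```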
17. Integration Testing
Definition: Integration testing ensures that different modules or components of an application work together as expected.
Approaches:
- Top-down
- Bottom-up
- Big Bang
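To illustrate the idea, here is a minimal sketch in which two made-up components, an in-memory repository and a registration service, are exercised together rather than in isolation.

```python
# A minimal integration-test sketch: two invented components, an in-memory
# UserRepository and a RegistrationService, tested together.
class UserRepository:
    def __init__(self):
        self._users = {}

    def save(self, email: str) -> None:
        self._users[email] = {"email": email}

    def exists(self, email: str) -> bool:
        return email in self._users

class RegistrationService:
    def __init__(self, repository: UserRepository):
        self._repository = repository

    def register(self, email: str) -> bool:
        if self._repository.exists(email):
            return False
        self._repository.save(email)
        return True

def test_registration_persists_user_and_blocks_duplicates():
    repository = UserRepository()
    service = RegistrationService(repository)

    assert service.register("qa.user@example.com") is True
    # The second attempt must see the state written by the first.
    assert service.register("qa.user@example.com") is False
```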
18. System Testing
Definition: System testing evaluates the complete and integrated software system to ensure it meets specified requirements.
Key Point: It's performed after integration testing and before acceptance testing.
19. Performance Testing
Definition: Performance testing evaluates how an application behaves under various conditions, such as expected load, peak stress, and sustained usage, to assess its speed, stability, and scalability.
Types:
- Load Testing
- Stress Testing
- Spike Testing
- Endurance Testing
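Dedicated tools such as JMeter, Gatling, or Locust are the usual choice here, but the following crude sketch illustrates the basic idea of firing concurrent requests and measuring response times; the URL is a hypothetical placeholder.

```python
# A crude load-testing sketch: fire concurrent requests and report timings.
# Real performance testing normally uses dedicated tools; this only
# illustrates the concept. BASE_URL is a hypothetical endpoint.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "https://example.com"
CONCURRENT_USERS = 20

def timed_request(_):
    start = time.perf_counter()
    requests.get(BASE_URL, timeout=10)
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))

print(f"average response time: {sum(durations) / len(durations):.3f}s")
print(f"slowest response time: {max(durations):.3f}s")
```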
20. API Testing
Definition: API testing validates the functionality, reliability, and security of APIs (Application Programming Interfaces).
Tools: Postman, SoapUI, REST Assured.
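As a simple illustration, here is a minimal API test sketch using Python's requests library instead of the tools above; the endpoint and response shape are hypothetical.

```python
# A minimal API test sketch using Python's requests library.
# The endpoint and response fields are hypothetical placeholders.
import requests

def test_get_user_returns_expected_fields():
    response = requests.get("https://api.example.com/users/42", timeout=5)

    # Functional checks: status code and response contract.
    assert response.status_code == 200
    body = response.json()
    assert body["id"] == 42
    assert "email" in body
```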
21. Cross-Browser Testing
Definition: Cross-browser testing ensures that an application works consistently across different browsers (e.g., Chrome, Firefox, Safari).
Key Point: It's crucial for web applications to provide a uniform user experience.
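One way to approach this in an automated suite is to run the same check once per browser via a parametrized fixture, as in this sketch; it assumes Chrome and Firefox are installed locally, and the page under test is simply example.com.

```python
# A minimal cross-browser sketch: the same check runs once per browser
# through a parametrized pytest fixture. Assumes Chrome and Firefox are
# installed; the page and title check are only illustrative.
import pytest
from selenium import webdriver

@pytest.fixture(params=["chrome", "firefox"])
def browser(request):
    driver = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield driver
    driver.quit()

def test_homepage_title_is_consistent(browser):
    browser.get("https://example.com")
    assert "Example Domain" in browser.title
```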
22. Traceability Matrix
Definition: A traceability matrix maps requirements to test cases to ensure complete coverage.
Purpose: Helps track whether all requirements have been tested.
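Conceptually, it can be as simple as a mapping from requirement IDs to the test cases that cover them; the IDs below are invented for illustration.

```python
# A toy traceability matrix: requirement IDs mapped to the test cases that
# cover them. All IDs are invented for illustration.
traceability_matrix = {
    "REQ-001 Login":          ["TC-001", "TC-002"],
    "REQ-002 Password reset": ["TC-010"],
    "REQ-003 Audit logging":  [],  # no coverage yet
}

uncovered = [req for req, cases in traceability_matrix.items() if not cases]
print("Requirements without test coverage:", uncovered or "none")
```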
23. Automation Testing
Definition: Automation testing uses scripts and tools to execute test cases automatically, reducing manual effort and increasing efficiency.
Tools: Selenium, Cypress, Appium, TestComplete.
24. Continuous Integration (CI)
Definition: Continuous Integration is the practice of merging code changes into a shared repository frequently, with automated builds and tests validating each change as part of the development pipeline.
Benefits: Reduces integration issues and accelerates feedback loops.
25. Shift-Left Testing
Definition: Shift-left testing involves starting testing earlier in the development lifecycle to identify issues sooner.
Advantage: Reduces costs and improves quality by catching defects early.
26. Test Environment
Definition: A test environment is a setup that mimics the production environment for executing tests.
Components: Hardware, software, network configurations, and test data.
27. Boundary Value Analysis
Definition: Boundary value analysis tests input values at the edges of acceptable ranges to catch potential errors.
Example: If a field accepts values between 1 and 100, test at and just outside each boundary: 0, 1, 100, and 101 (2 and 99 can be added to cover the values just inside the boundaries).
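Expressed as a parametrized pytest test, with is_valid_quantity() as a made-up stand-in for the field's validation logic, that might look like this sketch.

```python
# Boundary value analysis for the 1-100 example as a parametrized test.
# is_valid_quantity() is a made-up stand-in for the field under test.
import pytest

def is_valid_quantity(value: int) -> bool:
    return 1 <= value <= 100

@pytest.mark.parametrize(
    ("value", "expected"),
    [(0, False), (1, True), (2, True), (99, True), (100, True), (101, False)],
)
def test_quantity_boundaries(value, expected):
    assert is_valid_quantity(value) is expected
```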
28. Equivalence Partitioning
Definition: Equivalence partitioning divides inputs into groups that are expected to behave similarly, reducing the number of test cases.
Example: For a field accepting ages 18–60, test with one value from each group: below 18, 18–60, and above 60.
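A matching sketch, with is_eligible_age() as a made-up stand-in for the validation logic, takes one representative value from each partition.

```python
# Equivalence partitioning for the 18-60 age example: one representative
# value per partition instead of every possible age. is_eligible_age() is
# a made-up stand-in for the logic under test.
import pytest

def is_eligible_age(age: int) -> bool:
    return 18 <= age <= 60

@pytest.mark.parametrize(
    ("age", "expected"),
    [
        (10, False),  # partition: below 18 (invalid)
        (35, True),   # partition: 18-60 (valid)
        (75, False),  # partition: above 60 (invalid)
    ],
)
def test_age_partitions(age, expected):
    assert is_eligible_age(age) is expected
```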
29. Test Automation Framework
Definition: A test automation framework is a set of guidelines, tools, and libraries used to design and execute automated tests efficiently.
Examples: Data-driven, keyword-driven, and hybrid frameworks.
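As a toy illustration of the keyword-driven style, each test can be written as a table of (keyword, argument) steps that a small dispatcher executes; everything below is invented for illustration.

```python
# A toy keyword-driven framework sketch: each test is a table of
# (keyword, argument) steps dispatched to small reusable functions.
def open_page(url: str) -> None:
    print(f"opening {url}")

def enter_text(field_and_value: str) -> None:
    field, value = field_and_value.split("=", 1)
    print(f"typing '{value}' into {field}")

def click(element: str) -> None:
    print(f"clicking {element}")

KEYWORDS = {"open_page": open_page, "enter_text": enter_text, "click": click}

login_steps = [
    ("open_page", "https://example.com/login"),
    ("enter_text", "username=qa.user@example.com"),
    ("enter_text", "password=correct-password"),
    ("click", "login-button"),
]

for keyword, argument in login_steps:
    KEYWORDS[keyword](argument)
```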
30. Risk-Based Testing
Definition: Risk-based testing prioritizes testing efforts based on the likelihood and impact of risks.
Goal: Maximize test coverage while minimizing resource usage.
Conclusion
Understanding these common QA/testing terms is crucial for anyone involved in software development and testing. By familiarizing yourself with this vocabulary, you'll be better equipped to communicate effectively, plan testing activities, and contribute to delivering high-quality software.
Are there any other terms you'd like us to define? Let us know in the comments below!

Nikunj Mistri
Founder, QA Blogs