If you don’t work in QA, it can be difficult to keep track of all the lingo. (Even if you do work in QA, it’s not always easy!) So we put together a handy primer on QA testing vocabulary. After all, everyone has to start somewhere, and a good team will never make you feel embarrassed to ask questions.
While this list may not help you win your next Scrabble tournament, it should help give you a basic understanding of common QA testing vocabulary.
Types of QA Testing Vocabulary
Manual Testing means testing the app or site by hand. For example, opening a browser and manually navigating to different sections of a website, looking for user experience issues or bugs. (For more, see What is Manual Testing?)
Automated Testing means using a programming language (such as Java) to write scripts that will navigate a website or app. These scripts can generate reports for issues such as broken links, missing text, etc. (For more on the differences between manual and automated, see Manual vs. Automated Testing.)
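To make this concrete, here is a minimal sketch of the kind of automated check described above, written in Python rather than Java. It scans a page’s HTML for broken links and missing link text; the sample HTML snippet is purely illustrative (a real script would fetch a live page first).

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collects every <a> tag on a page so missing links or link text can be reported."""
    def __init__(self):
        super().__init__()
        self.links = []            # (href, visible text) pairs found so far
        self._current_href = None
        self._current_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.links.append((self._current_href, "".join(self._current_text).strip()))
            self._current_href = None

def audit(html):
    """Return links that are broken (empty href) or have no visible text."""
    parser = LinkAuditor()
    parser.feed(html)
    return [(href, text) for href, text in parser.links if not href or not text]

# Illustrative page fragment: one good link, one empty href, one empty label
page = '<a href="/about">About</a> <a href="">Broken</a> <a href="/contact"></a>'
print(audit(page))  # reports the two problem links
```

A real automated suite would drive a browser (for example with Selenium) and run checks like this across every page.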
API Testing means checking the quality and accuracy of an API (Application Programming Interface). An API receives requests and returns responses, typically to and from remote servers.
For example, say that you type a query and click “Search” on Google.com. The site sends your query to the search API, which returns the search results.
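An API test usually boils down to checking the response’s status code and the shape of its body. Here is a hedged sketch in Python: the response body is simulated, and the required fields (`query`, `results`) are an assumed contract, not any real search API’s.

```python
import json

def check_search_response(raw_body, status_code):
    """A minimal API check: verify the status code and required JSON fields."""
    problems = []
    if status_code != 200:
        problems.append(f"unexpected status: {status_code}")
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return problems + ["body is not valid JSON"]
    for field in ("query", "results"):  # fields our hypothetical API contract requires
        if field not in body:
            problems.append(f"missing field: {field}")
    return problems

# Simulated response, standing in for a real HTTP call
fake_body = json.dumps({"query": "qa testing", "results": ["..."]})
print(check_search_response(fake_body, 200))  # an empty list means the check passed
```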
Performance Testing involves checking the response time for an application or website in typical usage scenarios.
For example, let’s say that you know that your website gets 50,000 hits on a normal day. With performance testing, you could use a program to see how many seconds it would take to load in that scenario.
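In practice, dedicated tools (JMeter, Locust, etc.) handle this, but the core idea is just timing an operation repeatedly. This sketch times a simulated page load with Python’s `time.perf_counter`; the `simulated_page_load` function is a placeholder for a real HTTP request.

```python
import time

def simulated_page_load():
    """Stand-in for a real page request; a real test would time an HTTP call."""
    time.sleep(0.01)

def measure(action, runs=5):
    """Time an action several times and report the slowest run, in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        action()
        timings.append(time.perf_counter() - start)
    return max(timings)

worst = measure(simulated_page_load)
print(f"worst load time: {worst:.3f}s")
```

Reporting the worst run (rather than the average) matters, because users notice the slowest page loads most.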
Load Testing is very similar to performance testing, but with even more of a focus on finding the exact point at which an app or site would crash or go down.
For example, say that you’re about to launch your new mobile app. You have no idea how many people will be using it. How do you know if your servers will stand up to the demand? To be safe, you can do load testing to identify the maximum number of users it could support.
Load testing goes hand in hand with performance testing, and the differences are minor.
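The idea of “finding the breaking point” can be sketched with concurrent requests. In this toy Python example the server is faked: requests succeed only while the number of simultaneous users stays within a hypothetical `CAPACITY`, so the maximum supported load can be found by increasing the user count.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

CAPACITY = 50  # hypothetical concurrent-user limit of the server under test

def simulate(users):
    """Run `users` simultaneous fake requests and count how many fail."""
    barrier = threading.Barrier(users)  # forces all requests to be in flight at once
    def fake_request():
        barrier.wait()                  # every "user" is now active simultaneously
        return users <= CAPACITY        # real code would issue an HTTP request here
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: fake_request(), range(users)))
    return results.count(False)

print(simulate(40))  # 0 failures: under capacity
print(simulate(60))  # failures: the hypothetical server is overloaded
```

Real load-testing tools (JMeter, Locust, Gatling) do essentially this against live servers, ramping the user count up until errors or slowdowns appear.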
Learn more about the different types of software testing.
Methods of QA Testing
User Acceptance Testing means having real users beta test your app or site and provide feedback. Also known as “UAT,” the term is sometimes used loosely to refer to regular manual testing. Learn more about the different types of user acceptance testing.
Accessibility Testing means checking that the app or site is user-friendly for people with various disabilities. For example, verifying that videos have closed captioning for people with hearing disabilities, or that images have descriptions for people with visual disabilities. Accessibility testing can cover many other disabilities including motor impairments, learning disabilities, and more.
Unit Testing means creating automated scripts to test individual parts of the app or website code. Although it’s a form of testing, unit testing is usually done by developers. The goal of unit tests is to ensure that each area of the code is working properly.
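Here is what a unit test looks like in practice, using Python’s built-in `unittest` module. The `slugify` function is a made-up piece of “app code” for illustration; each test checks one behavior of that single unit.

```python
import unittest

def slugify(title):
    """App code under test: turn a page title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("QA Testing Vocabulary"), "qa-testing-vocabulary")

    def test_collapses_extra_spaces(self):
        self.assertEqual(slugify("  Hello   World "), "hello-world")

# Run the tests and report whether they all passed
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Because each test targets one small unit, a failure points directly at the broken piece of code, which is why developers run these on every change.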
Ad-hoc Testing means testing an app or site without following any specific test cases. Instead, QA will poke around the application at will, identifying any issues that they spot throughout the process.
Exploratory Testing means testing with existing experience and knowledge of the mobile app or website. This insight gives the QA tester the ability to have a focused engagement without following formal test cases.
Basic QA Testing
Smoke testing is one of the quickest/most basic forms of testing. It involves doing a simple test of major features, often right before a release. The purpose is to see if anything “catches on fire,” so to speak. (If you really want to get your metaphor on, you could also use “Where there’s smoke, there’s fire.” Or, if you’re a Billy Joel fan and/or developer, “We didn’t start the fire.”)
Ideally the app will also have gone through more rigorous testing. But smoke testing is used as a backup to be extra cautious when there’s not enough time for the ideal level of testing.
For example, say that you’re about to launch a new version of an app. It already passed QA, but you want some quick last-minute testing done before you publish to the App Store, just to be safe. If you ask QA to smoke test the login feature, they might check that logging in with valid credentials works. They would also likely verify that attempting to log in with an incorrect password brings up an error message. However, smoke testing would not be nearly as thorough as other types of testing. Regression testing, for example, would likely also cover case sensitivity, switching between different users, etc.
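The login smoke test above can be sketched as two quick checks. The `login` function here is a hypothetical stand-in for the real backend, with a made-up valid account for illustration.

```python
def login(username, password):
    """Hypothetical login backend; one made-up valid account for illustration."""
    valid_accounts = {"demo_user": "correct-horse"}
    if valid_accounts.get(username) == password:
        return {"ok": True}
    return {"ok": False, "error": "Invalid username or password"}

def smoke_test_login():
    """Two quick pre-release checks: valid login works, bad password errors."""
    return {
        "valid login succeeds": login("demo_user", "correct-horse")["ok"],
        "bad password shows error":
            login("demo_user", "wrong")["error"] == "Invalid username or password",
    }

print(smoke_test_login())  # both checks should report True
```

Notice what is *not* checked: case sensitivity, user switching, session expiry. That thoroughness is the job of regression testing, not a smoke test.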
An app or site is vulnerable to far more bugs if smoke testing is the only type of testing done. But it’s a great process to use as an extra precaution.
Smoke testing is one of the most common phrases in the QA testing vocabulary. Learn more in our article: What is Smoke Testing?
In-Depth QA Testing
Regression testing is much more thorough than smoke testing. A regression is a bug with an existing feature, caused by code updates from a new feature or different bug fix. Regression testing involves checking every possible aspect of the pre-existing app features after a new feature or bug fix is deployed. This is to make sure that the code updates didn’t break any other area of the software.
For example, say that a developer adds a new profile field for “Birthday.” Regression testing the profile section would mean verifying that all of the other fields were still editable, saving changes still worked, numbers were still not allowed in the first name field, etc. Sometimes even a small code change can cause a ton of regressions. As a result, this type of testing is extremely important whenever there’s an update — big or small.
As with smoke testing, regression testing is common in any QA testing vocabulary. Learn more about how to do regression testing.
Cross-browser/cross-device means that testing is being done, or a bug is occurring, on multiple internet browsers (such as Safari, Chrome, Firefox, Internet Explorer, etc.) or multiple devices (Android phones, iPhones, tablets, desktops, etc.). Learn more about the importance of cross browser testing.
Cross-browser/device testing is important, as many bugs will be on one browser or device but not another.
Planning
Test Cases are requirements with steps for testing whether a given part of the app or site is working properly. If this sounds vague or confusing, don’t worry — we have a full explanation (with examples!) in our post, What Are QA Test Cases?
Test Suite is a set of test cases. For example, you might have test cases for the registration section, the homepage, video playback, etc. A test suite is the collection of all of these test cases, often maintained in a spreadsheet or test management tool.
Sprint is a set amount of time in an Agile QA process. A sprint includes a given number of tasks that the team expects to finish in the timeframe (usually one to two weeks).
Before a Sprint starts, the team gets together for Sprint Planning. During this session, product manager(s), developers, and QA testers will decide which bug fixes or features can be realistically included in the Sprint. To learn more about the prioritization process, see How to Prioritize Bug Fixes.
Process
Agile is a software development process that involves regular releases. It also entails updating requirements on the fly. When working with an Agile process, it’s common for new releases/updates to go out every few weeks. To learn more, see our article on the Agile QA Process.
Acceptance criteria are a set of conditions that must be met in order for a feature to be considered ready to release. With an Agile process, the exact conditions can change on the fly. After all, Agile teams pivot based on new information or ideas. However, in order to consider the feature done, the final set of acceptance criteria must be met.
For example, here’s acceptance criteria for a messaging feature:
- Premium users must be able to message any user on their friends list
- All users must be able to block any user
- Admin users must be able to delete a message
- All users must have “inbox” and “sent” sections
Learn more in our full guide to acceptance criteria.
Specs (short for specifications) are documentation or resources that describe how an app or site should look or behave. For example, a tester might ask for “design specs” in order to make sure that images and layout match expectations.
Requirements are essentially the same as “specs” — documentation that details all information about a feature. This allows developers to build and QA to test the right details.
Behavior-Driven Development uses a language called Gherkin to document requirements. These requirements then become the basis for automated tests. Gherkin uses a “Given, When, Then” format that helps less technical team members understand the feature.
For example:
Given a user wants to post to Facebook
When they type the message and click “publish”
Then their friends can view the post
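In a BDD tool such as behave or pytest-bdd, each Gherkin step is wired to a step function. This hand-rolled Python sketch shows the idea for the scenario above; the `Feed` class is a made-up model of a social feed, not any real API.

```python
class Feed:
    """Hypothetical social feed used to illustrate the scenario's steps."""
    def __init__(self):
        self.posts = []

    def publish(self, user, message):
        self.posts.append((user, message))

    def visible_to_friends(self, user):
        return [msg for author, msg in self.posts if author == user]

# Given a user wants to post to Facebook
feed = Feed()

# When they type the message and click "publish"
feed.publish("alice", "Hello, friends!")

# Then their friends can view the post
assert feed.visible_to_friends("alice") == ["Hello, friends!"]
print("scenario passed")
```

A BDD framework automates exactly this mapping, so the plain-English scenario and the executable test never drift apart.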
Users (or “end users”) are the people who use your app or website. For example, your customers or clients.
Releases
MVP stands for Minimum Viable Product. For a version of an app or website to be “MVP,” it needs to meet the criteria that the team has decided are the bare minimum required for launch.
For example, a business owner might decide that a GPS section of an app is an “MVP feature,” meaning it has to be included even for a soft launch. They also might decide that a video feature is “Post-MVP,” meaning it can be added after the initial launch.
Release candidate is a version that is ready to release to the public, assuming no major bugs are found during testing.
For example, say that you want the next version of your iOS app to feature new content. You also want it to include a bug fix in the “favorites” section. Developers will send a new build to QA as a “release candidate” once they’ve finished updating the content and fixing the bug. If QA finds any significant bugs, the build is no longer a release candidate. On the other hand, if QA doesn’t find any notable bugs, it’s ready to release.
Code complete means that the developers have finished implementing the bug fix or new feature. This means it’s either ready for QA, or will be soon once the code is deployed. “Code complete” doesn’t mean that the new version won’t have any bugs. In fact, it probably will! It’s QA’s job to verify the validity and quality once the developer’s first pass is done.
Quality
Bugs are problems with an app or website. Sometimes they’re obvious issues, such as a crash or unexpected error message. Other times, they’re considered problematic because they don’t match the company’s expectations.
For example, a website taking two full minutes to load would be a pretty straightforward bug. But if a company wants the background color to be blue, and it appears as green, this would be a bug too (even if it doesn’t look bad to users).
Bug Reports are formal ways of documenting problems with an app or site. Bug reports are usually filed as ‘tickets’ in a project management system such as Jira. (Jira is an online tool for tracking software development progress. To learn more, see Jira QA Workflow and Best Practices.)
To learn more about bug reports and see examples, check out our post on Best Practices for Reporting Bugs.
Showstopper is a bug that is absolutely critical. If QA finds any showstoppers in a new version of a test build, it shouldn’t be released to the public. Showstoppers are considered top priority for developers to fix — especially if they’re found in a live version.
For example, if a mobile app consistently crashes whenever users sign up, that would be considered a showstopper bug.
Blocker is essentially the same thing as a showstopper (see above). A blocker bug prevents a new release.
Edge case bugs only happen in rare situations. This could mean only on an old operating system or device, or only occurring 1 in 200 times. Prioritization for edge cases is usually low. In many cases, edge cases will stay in the backlog permanently. Learn more in our articles on Edge Cases in Software Testing.
For example, say that 99.9% of your users are on iOS version 10 and above. An edge case could be a formatting bug on iOS 9, which would only affect 0.1% of users.
Defects are issues within an app or site that don’t meet the acceptance criteria (see above). For example, maybe a background is the wrong shade of blue. This wouldn’t necessarily seem like a “bug” to real users. But because it doesn’t match the company requirements for the design, it would be a “defect.”
Hot Fix is a critical bug fix that needs to go live before the next scheduled release date.
User Experience refers to the quality of the experience and interactions that a user has with an app or site. An application can have bad user experience without being explicitly “buggy.”
For example, say that you have a registration section in an iOS app. Maybe each field works correctly, and users are able to successfully save and register. But if a user has to move to a new screen every time they finish a field (as opposed to having multiple fields on one page), this would be bad user experience.
“User experience” is a hot topic in anyone’s QA testing vocabulary. To learn more, see What is User Experience?
Feature is a service or functionality in an app or site. For example, being able to ‘like’ tweets is a feature on Twitter.
You made it to the end of the QA testing vocabulary list — congrats!
While these definitions are by no means exhaustive, you’re now an honorary QA tester. Looking for more detailed explanations on all things QA? Check out our QA blog.
Ready for us to put these definitions into action testing your app or site? Check out our full list of QA testing services.