15 Common QA Job Interview Questions

This post is co-authored with Olha Hladka, a QA Engineer at QA Madness.

Interviewing for a QA job position is always a two-step process. First, an HR specialist checks whether your profile matches the described quality assurance job requirements and discusses the practical side of the role. After that, a tech specialist conducts a technical interview, where a candidate has to prove in practice that they meet the QA position requirements.

While QA roles, skills, and responsibilities are described in a Software Testing Engineer’s job requirements, you won’t find there a list of questions that would help you prepare for a tech interview. So how should you prepare for it? Is there anything specific to review?

As a rule, Junior specialists can expect general questions, like “What’s the difference between QA and QC?” Senior QA Engineers and QA Leads should expect similar queries, but with some twists and turns. Most likely, an interviewer will ask for specific examples based on previous experience. The core questions, however, remain quite standard and relate to the basics of testing.

We decided to revisit some theory and help out those who are preparing for a job interview. So, beginners, get ready to learn what to expect during the interview. More seasoned professionals can refresh their memory of the theory. And if you are about to interview a person for a QA position, feel free to use some of the questions below.

What Is Quality Assurance?

Quality assurance, or QA, is a set of activities covering all technological stages of software development, release, and operation. QA is an ongoing process that lasts through all the phases of the software development life cycle (SDLC). The purpose of QA is to ensure the required level of quality for the product.

What Is Software Testing?

Software testing is a part of the quality assurance process. It encompasses all the testing activities that take place during the software development life cycle. These activities can be related to planning, preparation, or evaluation of a software product, reporting, etc. Software testing aims to:

  1. determine whether a product meets the requirements stated in documentation;
  2. detect the defects that interfere with business logic or deteriorate user experience;
  3. and prove that this product is suitable for the stated purposes.

What Levels of Testing Do You Know?

Traditionally, QA specialists tend to distinguish between four levels of software testing:

  1. Unit testing.
  2. Integration testing.
  3. System testing.
  4. Acceptance testing.

Unit testing focuses on the smallest components of software code. It looks into the smallest functioning parts of an application that can work and, therefore, be tested for defects separately. These tiniest software parts are program modules, objects, classes, functions, etc.
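
To make this more concrete, here is a minimal sketch of a unit test in Python with pytest; the apply_discount function and its rules are made up for illustration:

    import pytest

    def apply_discount(price: float, percent: float) -> float:
        """Hypothetical unit under test: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_reduces_price():
        # One small unit is checked in isolation from the rest of the system.
        assert apply_discount(100.0, 20) == 80.0

    def test_apply_discount_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)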

Integration testing checks the interaction between several components of the system after they have been checked individually.
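
As a hedged illustration in Python, an integration test exercises two units together after each has passed its own unit tests; the repository and service classes below are hypothetical:

    class InMemoryUserRepository:
        """Hypothetical lower-level component: stores users in memory."""

        def __init__(self):
            self._users = {}

        def save(self, user_id, name):
            self._users[user_id] = name

        def get(self, user_id):
            return self._users.get(user_id)

    class UserService:
        """Hypothetical higher-level component that uses the repository."""

        def __init__(self, repository):
            self.repository = repository

        def register(self, user_id, name):
            self.repository.save(user_id, name)
            return self.repository.get(user_id)

    def test_service_and_repository_work_together():
        # The two components are tested as a pair, not in isolation.
        service = UserService(InMemoryUserRepository())
        assert service.register(1, "Olha") == "Olha"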

System testing is an inspection of the whole software system based on functional and nonfunctional requirements. It takes place after different subsystems are connected into one. At this stage, we can find the following defects:

  • incorrect use of system resources;
  • unexpected combinations of user-level data;
  • incompatibility with the environment;
  • unexpected response to uncommon use cases;
  • missing or incorrect functionality;
  • lack of usability, etc.

As for acceptance testing, we prefer to distinguish it as a separate type of testing rather than one of the levels. In practice, a team runs acceptance tests to check if a product meets the requirements. It is not a one-time check but an ongoing process. In other words, acceptance tests are present at each of the levels mentioned above, in one form or another.

What Types of Testing Do You Know?

We can distinguish two categories of tests – functional and nonfunctional.

Types of functional testing:

  • Functional testing per se.
  • User interface testing.
  • Security and access control testing.
  • Interoperability testing.

Types of nonfunctional testing:

  • Performance testing (load, stress, stability & reliability, volume testing).
  • Installation testing.
  • Usability testing.
  • Failover and recovery testing.
  • Configuration testing.

What Is Black Box Testing?

Black box testing is a method of testing functional software behavior from a user’s point of view. Simply put, a QA specialist is not familiar with the internal structure of the tested object. Black box testing involves systematically selecting features to inspect and writing tests to cover those features. Technical requirements and specifications become the basis for the behavioral test strategy.
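
A small sketch of the black box mindset, assuming pytest: the test cases below are derived from a documented rule, not from reading the code (the password rule and function are invented for illustration):

    # Documented rule: a valid password has at least 8 characters
    # and contains at least one digit.
    import pytest

    def validate_password(password: str) -> bool:
        """Hypothetical implementation; in black box testing the tester
        would not look at this code, only at the documented rule."""
        return len(password) >= 8 and any(ch.isdigit() for ch in password)

    @pytest.mark.parametrize("password, expected", [
        ("Abcdefg1", True),    # meets both documented rules
        ("Abc1", False),       # shorter than 8 characters
        ("Abcdefgh", False),   # no digit
    ])
    def test_password_rules_from_spec(password, expected):
        assert validate_password(password) is expected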

What Is a Bug?

A bug is a mismatch between the expected and the actual result of a testing iteration. It can be an incorrect output for an agreed input, performance that differs from the agreed specifications, etc.

What Are Bug Severity and Bug Priority?

Bug severity is the level of potential impact a particular defect can have on the design or operation of a certain component or system.

Bug priority is the urgency assigned to the bug. In other words, it determines how soon the bug should be fixed. For example, a typo in the company name on the home page is usually of low severity but high priority.

What Is Test Design?

Test design is a stage of the software testing process, during which QA engineers write test scenarios based on previously defined quality criteria and testing goals.

What Test Design Techniques Do You Know?

There are three main test design techniques that help to reduce the number of required test cases: equivalence partitioning, boundary value analysis, and pairwise testing.
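
As a hedged example of the first two techniques, assuming pytest: for an invented rule that accepts ages 18–65, equivalence partitions and boundary values translate into a compact set of test data:

    import pytest

    def is_valid_age(age: int) -> bool:
        """Hypothetical rule under test: ages 18-65 inclusive are accepted."""
        return 18 <= age <= 65

    @pytest.mark.parametrize("age, expected", [
        (10, False),  # equivalence partition: below the valid range
        (40, True),   # equivalence partition: inside the valid range
        (90, False),  # equivalence partition: above the valid range
        (17, False),  # boundary value just below the minimum
        (18, True),   # lower boundary value
        (65, True),   # upper boundary value
        (66, False),  # boundary value just above the maximum
    ])
    def test_age_rule(age, expected):
        assert is_valid_age(age) is expected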

What Are an Emulator and a Simulator?

A software emulator is a fully functional analog of an original device or a particular version of it. An emulator mimics both software and hardware behavior, modeling the core capabilities and restrictions of the functionality.

A software simulator is a model of an original device with its logic implemented partially or completely. A simulator, however, doesn’t mimic hardware features. Therefore, a simulator can reproduce software behavior and interface closely, but not the resources of a native system.

What Are Approaches to Integration Testing?

There are three main approaches to integration testing – the bottom-up, the top-down, and the big bang approach.

Bottom-Up Approach

Using this approach, you put all the low-level modules, procedures, and functions together and start testing the product. Then, you assemble the next level of modules for integration testing, and so on. It is reasonable to use the bottom-up approach if all or almost all of the units are ready.

Top-Down Approach

First, we test all the high-level modules. Then, we gradually add low-level modules, one by one. Instead of the lower-level modules that aren’t ready yet, QA engineers use stubs with similar functionality to simulate the missing features. When the actual components are ready, they replace the stubs.
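
A minimal sketch of the top-down idea in Python, assuming pytest and unittest.mock; the checkout service and payment gateway are hypothetical:

    from unittest.mock import Mock

    class CheckoutService:
        """High-level module under test; depends on a lower-level payment gateway."""

        def __init__(self, payment_gateway):
            self.payment_gateway = payment_gateway

        def checkout(self, amount: float) -> str:
            return "paid" if self.payment_gateway.charge(amount) else "failed"

    def test_checkout_with_stubbed_gateway():
        # The real gateway module is not ready yet, so a stub stands in for it.
        gateway_stub = Mock()
        gateway_stub.charge.return_value = True
        service = CheckoutService(gateway_stub)
        assert service.checkout(49.99) == "paid"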

Big Bang Approach

In this case, we assemble all or most of the modules to get a close-to-complete system. Then, we run integration testing. This approach is time-saving – that’s the main benefit. However, if you don’t record test cases or their results correctly, the integration process can become complicated and create obstacles during testing.

What Are the Requirements in Testing?

Requirements are the specifications of the developed software’s functionality described in the product documentation. In other words, they describe what the team needs to create and implement.

What Are the Criteria for Well-Written Requirements?

There are certain requirements for requirements 🙂 In this case, “well-written” means meeting the following criteria:

  • correct;
  • unambiguous;
  • complete;
  • consistent;
  • verifiable;
  • traceable;
  • and understandable.

What Is a Requirements Traceability Matrix?

A requirements traceability matrix is a two-dimensional table that maps a product’s functional requirements to the prepared test cases. For example, table columns list requirements, and table rows list test scenarios. At the intersection, a QA engineer places a mark that helps to figure out whether a test case covering a particular requirement exists.
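
As a hedged sketch, the same idea can be kept as a simple mapping from requirement IDs to covering test cases (all identifiers below are made up):

    # A minimal traceability "matrix": requirement ID -> covering test case IDs.
    traceability = {
        "REQ-001": ["TC-101", "TC-102"],
        "REQ-002": ["TC-103"],
        "REQ-003": [],  # no covering test case yet -- the gap the matrix exposes
    }

    uncovered = [req for req, cases in traceability.items() if not cases]
    print("Requirements without test coverage:", uncovered)  # ['REQ-003']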

What Is Sanity Testing?

Sanity testing is a highly targeted type of testing used to prove that a particular function works as stated in the specifications. It allows you to determine whether a certain part of an application is ‘healthy’ enough after changes so the team can proceed with further testing activities. Sanity testing is a subset of regression testing, and it is mostly performed manually.
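
As an illustrative sketch with pytest, a team could mark its sanity checks and run only that subset after a change; the marker name and function below are assumptions, not a prescribed setup:

    import pytest

    def add_to_cart(cart: list, item: str) -> list:
        """Illustrative function representing the area that was just changed."""
        return cart + [item]

    @pytest.mark.sanity  # custom marker; register it in pytest.ini to avoid warnings
    def test_add_to_cart_still_works():
        assert add_to_cart([], "book") == ["book"]

    # After a change to this area, run only the sanity subset: pytest -m sanity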

Bonus Question: Why Testing?

Though this one is not technical, you should always be ready to answer it. Talk about the advantages you’ve experienced while working as a QA engineer, about what you aim to achieve, and why this career path attracted you in the first place. While QA job requirements describe what a QA company expects from you, telling an interviewer about your own expectations and involvement reveals as much as your CV.

Hopefully, you’ve found some helpful information in this article. Good luck with your interview! 😉
