End-to-End Testing for the AI-Powered Meeting Platform


Industry

Office Software
Online Meetings

Country

Austria

Type of Service

Manual Testing

Cooperation Type

Part-time

Project Type

Web Testing
SaaS Testing

Overview

Apollo.ai is an intelligent all-in-one board & decision cloud that captures, manages, and shares knowledge before, during, and after meetings. The platform is designed to transform decision-making, making it simpler and more impactful. Apollo.ai aims to modernize boardrooms and leadership practices, leaving outdated processes in the past. The core strength of the platform lies in the integration of AI technology into its online meeting functionality.

Challenge

The client requested manual testing for the web application, with an emphasis on end-to-end testing. SaaS platforms must be able to handle complex data, process multiple access levels correctly, and support real-time collaboration. These requirements, combined with the fact that Apollo.ai is a new platform developed from scratch, defined the central challenges for quality assurance:

  • To set up a proper testing process from scratch. The team had no QA engineer, and testing had been handled by the project manager.
  • To provide sufficient test coverage, accounting for all the important features without overloading the sprint with an excessive checklist.
  • To test the platform’s functionality for different user roles and access levels on different devices and browsers.
  • To check AI features that are being developed to supplement and extend the platform’s basic functionality.
  • To ensure the stability of every new version of the app and, whenever possible, prevent defects and gaps in logic at the planning stage.

Solution

After joining the project, the QA engineer began by getting familiar with the platform in order to prepare test documentation, ensure sufficient coverage, and provide informative reports. Since the client requested end-to-end testing, the focus was placed on functionality and the user interface across various devices and browsers.

The task was to set up the testing process from scratch. It entailed the following phases and activities:

  1. Conducting exploratory testing. The QA engineer learned how the product works, reporting defects in parallel. Since the previous checks hadn’t involved QA experts, some bugs had remained undetected.
  2. Establishing an effective communication framework. Efficient reporting and feedback mechanisms were set up between the QA engineer, the developers, and the product manager.
  3. Preparing the checklist. It covers functional, UI, compatibility, regression, and basic usability testing and is updated as new features are added.
  4. Preparing bug reports. The QA engineer writes detailed bug reports to help the developers easily locate defects and their root causes.
  5. Running change-related testing. Retesting is mandatory after each set of fixes to verify that the defects are no longer reproducible. The QA engineer also runs smoke and regression testing to confirm that the rest of the functionality hasn’t been affected by code changes.

Results

  • The QA engineer joins Apollo.ai’s team for a week during each sprint, running extended testing after each code iteration to assess the software quality across all levels and usage scenarios.
  • The platform is tested on Windows, macOS, and iOS, as well as on different browsers and browser versions for each OS. When needed, the QA engineer also runs tests in BrowserStack.
  • All blockers and critical/major defects are caught before a new software version is released to production.
  • The QA engineer pays close attention to the details of the functionality and interface and suggests improvements that make the platform more convenient to use. Most of these suggestions have already been approved and implemented.
