The What

Android is officially the most popular operating system for mobile devices – smartphones and tablets – in the world (more than 70% market share as of 2021), so it's not hard to conclude that testing apps for the Android platform is very important if we are developing apps for a variety of platforms.

In terms of techniques and strategies, testing an Android app does not differ from what any mobile app requires; however, there are certain technical particularities in its approach.

To begin testing an Android app we must first have a reasonably clear notion of key aspects of the Android architecture, as well as the typical failures that Android apps are prone to experience.

After we assess the aspects (functionality, performance, accessibility, compatibility, etc.) and the scope (unit, integration, system, E2E, etc.) that we want our tests to cover, we must strategize our testing in terms of physical or emulated devices, balancing testing fidelity against testing speed. Higher-fidelity tests run on physical or emulated devices, so they take more time and resources; the opposite is true for lower-fidelity tests (local tests), which run on our local machines. A local test runs directly on the workstation rather than on an Android device or emulator, so it uses the local Java Virtual Machine (JVM). Local tests enable us to assess our app's logic more quickly, but bear in mind that the inability to interact with the Android framework limits the types of tests we can execute.
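The kind of pure app logic a fast local test covers can be sketched as follows. The example is in Go to match the code samples later in this post; in an Android project this would be a JUnit test running on the local JVM, and IsValidUsername is a hypothetical helper, not part of any real app.

```go
// A "local" test exercises pure logic with no device, emulator, or
// Android framework involved, which is what makes it fast.
package main

import "fmt"

// IsValidUsername holds the kind of framework-free logic that local
// tests can verify quickly: 3-20 characters, alphanumerics and '_'.
func IsValidUsername(name string) bool {
	if len(name) < 3 || len(name) > 20 {
		return false
	}
	for _, r := range name {
		isLetter := (r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z')
		isDigit := r >= '0' && r <= '9'
		if !isLetter && !isDigit && r != '_' {
			return false
		}
	}
	return true
}

func main() {
	// Fast feedback: no framework, no device, just logic.
	fmt.Println(IsValidUsername("droid_42")) // true
	fmt.Println(IsValidUsername("ab"))       // too short: false
}
```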

It’s important to focus on testable architectures. A testable architecture is a way of structuring our code so that it helps us isolate its different parts (decoupling), resulting in advantages like better readability, maintainability, scalability, and reusability. When an architecture doesn’t follow a testable structure (e.g.: classes that can’t be unit-tested), we are forced into bigger and slower tests, such as large integration or UI tests, which cover fewer possible scenarios and often turn out to be flaky tests – tests that do not pass 100% of the time.

So, when we decouple our app’s architecture, we can start testing its different layers – Presentation, Domain and Data – separately.
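This layer separation can be sketched with an interface standing between the domain and data layers. Again in Go for consistency with the later code samples; the names (UserRepository, GetGreeting, mapRepo) are illustrative, not from any real project.

```go
// Decoupling via an interface: the domain layer depends on an
// abstraction, so it can be tested without the real data layer.
package main

import "fmt"

// UserRepository is the data-layer contract the domain layer sees.
type UserRepository interface {
	UserName(id int) (string, error)
}

// GetGreeting is a domain-layer use case: it knows nothing about
// databases or the network, only about the UserRepository contract.
func GetGreeting(repo UserRepository, id int) string {
	name, err := repo.UserName(id)
	if err != nil {
		return "Hello, guest!"
	}
	return "Hello, " + name + "!"
}

// mapRepo is a trivial in-memory stand-in for the data layer.
type mapRepo map[int]string

func (m mapRepo) UserName(id int) (string, error) {
	if n, ok := m[id]; ok {
		return n, nil
	}
	return "", fmt.Errorf("user %d not found", id)
}

func main() {
	repo := mapRepo{1: "Ada"}
	fmt.Println(GetGreeting(repo, 1)) // Hello, Ada!
	fmt.Println(GetGreeting(repo, 2)) // Hello, guest!
}
```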

Basic Unit Tests

Some of the best practices for testing Android apps under this decoupling paradigm would be a set of unit tests which comprise:

  • Tests for the ViewModels or Presentation layer.
  • Tests for the Data layer, focusing on platform independence so that mocks or fakes can stand in for the database modules and remote data sources.
  • Tests for the Domain layer.
  • Additionally, tests for utility classes, to check things like correct string manipulation and math operations.

Some UI test examples:

  • Screen tests, for critical user interactions. Actions like clicking buttons, typing in text fields, checking boxes, etc.
  • Flow or navigation tests, which cover the most usual paths and simulate user movements as in a normal navigation flow. They’re good candidates for smoke tests.
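The utility-class tests mentioned above are the easiest to sketch. The example is in Go for consistency with the later code samples; Truncate and Initials are hypothetical helpers of the string-manipulation kind such a class might contain.

```go
// Utility-style functions and the quick unit checks they deserve.
package main

import (
	"fmt"
	"strings"
)

// Truncate shortens s to at most max runes, appending "…" when it cuts.
func Truncate(s string, max int) string {
	r := []rune(s)
	if len(r) <= max {
		return s
	}
	return string(r[:max]) + "…"
}

// Initials builds uppercase initials from a whitespace-separated name.
func Initials(name string) string {
	var b strings.Builder
	for _, part := range strings.Fields(name) {
		b.WriteString(strings.ToUpper(string([]rune(part)[0])))
	}
	return b.String()
}

func main() {
	fmt.Println(Truncate("Hello, Android!", 5)) // Hello…
	fmt.Println(Initials("ada lovelace"))       // AL
}
```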

We will have normal and edge cases. Examples of edge cases include:

  • Math operations using negative numbers, zero and boundary conditions.
  • All possible network connection errors.
  • Corrupted data, like malformed JSON.
  • Simulating full storage when saving to a file.
  • Object recreated in the middle of a process (such as an activity when the device is rotated).
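Two of these edge cases – corrupted JSON and boundary arithmetic – can be exercised like this. Sketched in Go to match the later code samples; ParseScore and SafeDivide are illustrative helpers, not real APIs.

```go
// Edge-case checks: malformed JSON and division boundary conditions.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

type Score struct {
	Points int `json:"points"`
}

// ParseScore must fail cleanly on corrupted input, never crash.
func ParseScore(raw []byte) (Score, error) {
	var s Score
	if err := json.Unmarshal(raw, &s); err != nil {
		return Score{}, fmt.Errorf("corrupted data: %w", err)
	}
	return s, nil
}

// SafeDivide guards the zero edge case explicitly.
func SafeDivide(a, b int) (int, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	_, err := ParseScore([]byte(`{"points": `)) // malformed JSON
	fmt.Println(err != nil)                     // true

	_, err = SafeDivide(10, 0) // boundary condition
	fmt.Println(err != nil)    // true

	q, _ := SafeDivide(-9, 3) // negative numbers
	fmt.Println(q)            // -3
}
```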

Test Doubles

Isolation (decoupling) might not always be possible when testing our apps, due to dependencies like data repositories, external databases, etc. This is where test doubles come into play. Test doubles play the role of real components, but they exist only in our tests, presenting exactly the behavior and data our tests need. This makes tests faster and simpler.

Within the types of test doubles we can find:

  • Fake. A lightweight working implementation: good for tests, bad for production. E.g.: in-memory databases.
  • Mock. A custom, parameterized test double that behaves as designed and verifies that it receives specific, defined data (interactions).
  • Stub. Like a mock, it returns predefined data, but it doesn’t verify specific interactions.
  • Dummy. A parameter ‘wildcard’ that is passed around but never actually used. E.g.: empty function callbacks.
  • Spy. Wraps the real object, recording how it is used.
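The stub and spy flavors can be sketched side by side. In Go for consistency with the later code samples; the Notifier interface and NotifyAll function are illustrative, not part of any real library.

```go
// A stub answers with canned behavior; a spy additionally records
// how it was called, so the test can inspect the interactions.
package main

import (
	"errors"
	"fmt"
)

type Notifier interface {
	Notify(userID int, msg string) error
}

// NotifyAll is the code under test: it notifies every user and
// returns how many notifications succeeded.
func NotifyAll(n Notifier, users []int, msg string) int {
	sent := 0
	for _, u := range users {
		if n.Notify(u, msg) == nil {
			sent++
		}
	}
	return sent
}

// stubNotifier: canned behavior, no interaction checking.
type stubNotifier struct{ err error }

func (s stubNotifier) Notify(int, string) error { return s.err }

// spyNotifier: records every interaction for later inspection.
type spyNotifier struct{ calls []int }

func (s *spyNotifier) Notify(userID int, _ string) error {
	s.calls = append(s.calls, userID)
	return nil
}

func main() {
	spy := &spyNotifier{}
	fmt.Println(NotifyAll(spy, []int{1, 2, 3}, "hi"), spy.calls) // 3 [1 2 3]

	offline := stubNotifier{err: errors.New("offline")}
	fmt.Println(NotifyAll(offline, []int{1, 2}, "hi")) // 0
}
```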

We will then need to configure our project’s dependencies in order to use the APIs provided by our testing framework.
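In a typical Android project this configuration lives in the module’s build.gradle file. A minimal sketch (the artifact versions are examples only; use the latest stable releases):

```groovy
// Typical testing dependencies in an Android module's build.gradle.
dependencies {
    // Local (JVM) unit tests
    testImplementation 'junit:junit:4.13.2'
    testImplementation 'org.mockito:mockito-core:4.5.1'

    // Instrumented tests on a device or emulator
    androidTestImplementation 'androidx.test.ext:junit:1.1.3'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.4.0'
}
```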

Running Tests from the Command Line


For E2E tests, we can use Autify from the command line: the autify-client package provides a unified command line interface to Autify as well as a Go client package.

CLI Installation

Install with Homebrew:

brew tap koukikitamura/autify-cli
brew install autify-cli

Download a binary

Alternatively, download the TAR archive from the GitHub release page and install it:

curl -LSfs | \
  sh -s -- \
    --git koukikitamura/autify-client \
    --target autify-cli_linux_x86_64 \
    --to /usr/local/bin

CLI Configuration

Before using autify-cli, you need to configure your credentials. You can use an environment variable:

export AUTIFY_PERSONAL_ACCESS_TOKEN=<access token>

CLI Basic Commands

An autify-cli command has the following structure:

$ atf <command> [options]

To run a test plan and wait for it to finish, the command would be:

$ atf run --project-id=999 --plan-id=999

{"id":999,"status":"passed","duration":26251,"started_at":"2021-03-28T11:03:31.288Z","finished_at":"2021-03-28T11:03:57.54Z","created_at":"2021-03-28T11:03:04.716Z","updated_at":"2021-03-28T11:04:00.738Z","test_plan":{"id":999,"name":"main flow","created_at":"2021-03-26T08:25:12.987Z","updated_at":"2021-03-26T08:33:45.462Z"}}

To fetch a scenario, the command would be:

$ atf scenario --project-id=999 --scenario-id=999


To fetch a test plan’s execution result, the command would be:

$ atf result --project-id=999 --result-id=999

{"id":999,"status":"waiting","duration":26621,"started_at":"2021-03-26T10:09:12.915Z","finished_at":"2021-03-26T10:09:39.537Z","created_at":"2021-03-26T10:08:54.769Z","updated_at":"2021-03-26T10:09:44.542Z","test_plan":{"id":999,"name":"main flow","created_at":"2021-03-26T08:25:12.987Z","updated_at":"2021-03-26T08:33:45.462Z"}}


You can find each ID in the path of the corresponding Autify dashboard page:

  • The dashboard home is /projects/[project-id]; the path parameter is the project-id.
  • A test plan’s detail page is /projects/[project-id]/test_plans/[plan-id]; the path parameter is the plan-id.
  • A scenario’s detail page is /projects/[project-id]/scenarios/[scenario-id]; the path parameter is the scenario-id.
  • A result’s detail page is /projects/[project-id]/results/[result-id]; the path parameter is the result-id.

Go package

The following code runs a test plan and polls its status until the run finishes.

package main

import (
    "encoding/json"
    "fmt"
    "os"
    "time"

    // Adjust this import path to match the autify-client module layout.
    "github.com/koukikitamura/autify-client/pkg/client"
)

const (
    ExitCodeOk    int = 0
    ExitCodeError int = 1
)

func main() {
    var projectId = 999
    var planId = 999

    autify := client.NewAutfiy(client.GetAccessToken())

    runResult, err := autify.RunTestPlan(planId)
    if err != nil {
        fmt.Println("Error: Failed to run the test plan")
        os.Exit(ExitCodeError)
    }

    // Poll the result every second until the test plan leaves the
    // waiting/queuing/running states, or give up after five minutes.
    ticker := time.NewTicker(time.Duration(1) * time.Second)
    defer ticker.Stop()

    timeout := time.After(time.Duration(5) * time.Minute)

    var testResult *client.TestPlanResult

    for {
        select {
        case <-ticker.C:
            testResult, err = autify.FetchResult(projectId, runResult.Attributes.Id)
            if err != nil {
                fmt.Println("Error: Failed to fetch the result")
                os.Exit(ExitCodeError)
            }

            if testResult.Status != client.TestPlanStatuWaiting &&
                testResult.Status != client.TestPlanStatusQueuing &&
                testResult.Status != client.TestPlanStatusRunning {
                jsonStr, err := json.Marshal(*testResult)
                if err != nil {
                    fmt.Println("Error: Failed to marshal the test result")
                    os.Exit(ExitCodeError)
                }
                fmt.Println(string(jsonStr))
                os.Exit(ExitCodeOk)
            }

        case <-timeout:
            fmt.Println("Error: Timeout")
            os.Exit(ExitCodeError)
        }
    }
}

Go Codeless

Technological advances like AI have caused testing tools to evolve dramatically. With the implementation of Machine Learning algorithms, tools can now learn about the components and elements within a piece of software and their respective changes, adapting to them and making the appropriate decisions as far as test design and maintenance are concerned.

Most codeless testing tools offer a wide range of very useful features. What is certain, considering the state of the art, is that a codeless tool should combine a specific set of features that make it ideal for getting the job done.

A quick search online will show you plenty of options in this new world of no-code platforms. But when it comes to pricing, most of them aren’t exactly what we would call transparent. Besides, real, comprehensive customer tech support is what differentiates a good tool or service from an average one. That’s something to keep in mind.

At Autify we hold such things in high regard, because client success is both the cause and the effect of our success.

We invite you to check our clients’ success stories on our website. A couple of highlights:

  • Autify is positioned to become the leader in Automation Testing Tools.
  • We raised $10M in Series A funding in Oct 2021, and are growing super fast.

As said before, transparent pricing is key to our business philosophy.

At Autify we have different pricing options available in our plans:

  • Small (Free Trial). Offers 400+ test runs per month, 30 days of monthly test runs, on 1 workspace.
  • Advance. Offers 1,000+ test runs per month, 90+ days of monthly test runs, on 1+ workspace.
  • Enterprise. Offers a custom number of test runs per month, custom days of monthly test runs, and 2+ workspaces.

All plans invariably offer an unlimited number of apps under test and an unlimited number of users.

We sincerely encourage you to request our Free Trial and a Demo of both our Web and Mobile products.