October 10, 2017

Bottle Rocket QA: How Many Devices Is Too Many?

Never Too Many Devices

Mobile software testing involves many variables, such as operating system versions, types of connectivity, networks, devices, and environments. Accounting for so many variables poses an interesting dilemma: should testing be performed on physical devices, simulators, emulators, or cloud-based devices? Is there unique value in sticking to physical devices? At Bottle Rocket, we use a combination of these with a strong emphasis on physical devices, and we do mean emphasis: we recently surpassed 400 test devices in our library.

Device Alternatives?

Users expect apps, including those that sync between devices with shared accounts, to work flawlessly on their devices. So a comprehensive library of devices that mimics the user experience is essential in testing. However, the number of device and operating system combinations in users’ hands is enormous. For this reason, many companies opt to rely mostly on simulators, emulators, and/or cloud-based devices for their software testing.

These device alternatives can be very helpful in the beginning of the application development lifecycle, but they are not ideal as the sole testing resource. Many important pieces of mobile interaction are lost when physical devices are left out.

Real-world users are equipped with real devices in different environments, meaning the effects of the following items may vary:

  • Battery life
  • Network conditions
  • Memory limitations
  • Location services
  • Bluetooth
  • NFC
  • Device orientations
  • Accelerometer
  • Operating systems
  • Peripherals

Recreating the user experience as closely as possible is what Bottle Rocket strives for. In our efforts to do this, we recently surpassed 400 test devices in our QA arsenal! This is a milestone in itself, since one of the biggest challenges in application testing is the cost of obtaining and maintaining such a large volume of physical devices.

Bottle Rocket Setup

Our device lab contains Griffin cabinets with the capacity to hold and charge up to eighty devices at a time. Rocketeers pick out the devices they need, check them out library-style with our device checkout software, and begin their testing. All Rocketeers can access the checkout software remotely, so devices can be easily located and accounted for.

This system allows testers to plan ahead and dedicate certain devices to their project for as long as they are needed. Once a project starts, the lead tester maps out which devices will be needed during the testing process. They are checked out from the device cabinets and used throughout the project, only being returned when all testing is completed. When finished, checking the devices back in is as easy as pushing a button.
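To give a feel for what that checkout step tracks, here is a simplified, hypothetical sketch in Java; the class and field names are ours for illustration only and are not taken from our actual checkout software.

    import java.time.LocalDateTime;

    // Hypothetical, simplified model of a device checkout record.
    // Names are illustrative; the real checkout software differs.
    public class DeviceCheckout {
        private final String deviceId;      // e.g. asset tag or serial number
        private final String osVersion;     // e.g. "Android 7.1" or "iOS 11.0"
        private String checkedOutBy;        // Rocketeer currently holding the device
        private String project;             // project the device is dedicated to
        private LocalDateTime checkedOutAt; // null while the device sits in the cabinet

        public DeviceCheckout(String deviceId, String osVersion) {
            this.deviceId = deviceId;
            this.osVersion = osVersion;
        }

        // Check the device out to a tester for a specific project.
        public void checkOut(String tester, String project) {
            this.checkedOutBy = tester;
            this.project = project;
            this.checkedOutAt = LocalDateTime.now();
        }

        // Return the device to the cabinet ("as easy as pushing a button").
        public void checkIn() {
            this.checkedOutBy = null;
            this.project = null;
            this.checkedOutAt = null;
        }

        public boolean isAvailable() {
            return checkedOutBy == null;
        }
    }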

Maintainability

Relying solely on physical devices for testing requires a great deal of time and money. Maintaining such a library involves frequent upgrades in order to remain relevant and effective. If our collection doesn’t contain the latest and greatest mobile devices, a big chunk of tech-savvy users and their experiences will go unaccounted for. We need to stay ahead when it comes to testing on the latest hardware. We pay close attention to mobile announcements such as Google I/O and WWDC every year to gather intel on what’s coming, then plan ahead by preordering new devices or scoping out the latest operating system and its release date. So why all the focus on what’s coming? It’s important to know adoption trends, new hardware features such as changing aspect ratios, and whether any new peripherals are announced.

New operating systems are especially impactful, as they can touch any part of the device and its installed applications. New features and tweaks to existing operating systems can affect corresponding areas of our applications, so we look into the patch notes of every release and test our applications accordingly. A recent example is the release of iOS 11: for every new beta released, we update our device operating systems and run tests on all of our applications. It’s a quick sanity check to verify that all features still behave correctly after the update. Developers have taken advantage of these betas as well to research and experiment with the new ARKit.

It is beneficial to have varying device types: high-end and low-end, new and old. Keeping older operating systems on devices is part of this approach. Android users in particular are not pushed to upgrade their operating systems as aggressively as users on other platforms. Android 6 is still the most widely used version according to Android’s developer portal, despite Android 7 having been out for more than a year. Covering Android 4, 5, 6, and 7 is necessary to ensure quality for all users.

In summary, using physical devices ensures the most accurate testing and analysis, since that is the only way to duplicate the diversity of users’ environments, network settings, operating system versions, and hardware. Keeping up with current and future devices and operating system versions will also ensure quality results. Bottle Rocket will continue to build its device library; we’re beyond 400 now and have no plans of stopping!

Interested in testing your app in our device library? Let us know at [email protected]

June 15, 2017

Inside the QA Automation Lab

Bottle Rocket’s Quality Assurance team is on an exciting automation journey. We’ve written the code, trained our manual testers, and are now focusing efforts on the place where it will all be put together. We’ve put a lot of hard work into our automation process and we wanted an automation lab that was just as awesome.

Our lab is an open space that is accessible to all. Shelves containing a collection of devices covering every major type, size, and operating system line the walls. Each device has its own stand, and the shelves are backlit to maximize visibility. USB hubs with up to two amps per port are positioned to the side, allowing devices to charge and sync at the same time. The hubs connect to an iMac acting as the server, and we anticipate being able to run tests on 32 devices at once per server.

Many of our large, multi-release projects now have automation scripts and can be continuously monitored. Any time a new build is created, a test plan can be triggered to download the build and run a smoke test on selected devices. This can really ease the pain of running the same tests on each build over and over, not to mention those server testing sessions in the wee hours of the morning.
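Conceptually, the trigger boils down to noticing a new build, pulling it down, and running the same smoke suite on each selected device. The sketch below is a simplified illustration in Java; BuildServer, SmokeSuite, and the method names are hypothetical stand-ins for our actual tooling.

    import java.util.List;

    // Simplified illustration of a build-triggered smoke run.
    // BuildServer and SmokeSuite are hypothetical stand-ins, not our real tooling.
    public class SmokeTrigger {

        interface BuildServer {
            String latestBuildNumber();
            String downloadBuild(String buildNumber); // returns a local path to the build
        }

        interface SmokeSuite {
            void run(String buildPath);   // install the build and run the smoke tests
            void publishReport();         // screenshots + pass/fail sent to the project's QA
        }

        private String lastSeenBuild = "";

        // Called on a schedule; kicks off the smoke suites only when a new build appears.
        public void poll(BuildServer server, List<SmokeSuite> suitesPerDevice) {
            String latest = server.latestBuildNumber();
            if (latest.equals(lastSeenBuild)) {
                return; // nothing new to test
            }
            String buildPath = server.downloadBuild(latest);
            for (SmokeSuite suite : suitesPerDevice) {
                suite.run(buildPath);
                suite.publishReport();
            }
            lastSeenBuild = latest;
        }
    }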

Reports are generated and the QA testers on the project are notified. Testers get reports from these runs complete with screenshots of each test outcome. The reports are divided into sections, such as pages of the app, and the results are clear-cut with color-coded pass/fail icons. Each individual test passes or fails based on criteria detailed in the test cases. If a tester can’t see exactly what happened from the screenshots alone, they can swing by the automation lab to run the test again and follow along in person.

This has been a journey because it wasn’t always easy; there were plenty of issues to work through along the way. Physical devices were a big one. They’re definitely the most beneficial for testing because they are the closest we can get to real-world use of an app. The fact is, there are lots of issues that can be missed on a simulator, like anything related to performance, battery level, or a hardware component. The more variety we can get across a higher number of devices, the better. But working with this volume of devices has its hardships. Some of them required enhancements to our setup; high-performance devices (namely tablets) need lots of power just to maintain a charge, hence the beefy USB hubs mentioned earlier. The less powerful hubs we used initially simply could not keep up. Different devices can also have different settings menu navigation, meaning our test cases need to be general enough to apply to different hierarchies (see the sketch below). Some devices even disconnect after a period of time.
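One way we keep test cases general across differing settings hierarchies is to locate elements by stable attributes, such as accessibility ids or visible text, rather than by fixed positions in the view tree. The snippet below is a minimal sketch using the Appium Java client; the locator values are hypothetical.

    import io.appium.java_client.MobileBy;
    import io.appium.java_client.MobileElement;
    import io.appium.java_client.android.AndroidDriver;

    // Sketch: prefer locators that survive layout differences between devices.
    // A fixed XPath into the view hierarchy breaks as soon as a manufacturer
    // rearranges its settings screens; matching on text or accessibility ids does not.
    public class GeneralizedLocators {

        public MobileElement findByAccessibilityId(AndroidDriver<MobileElement> driver,
                                                   String accessibilityId) {
            return driver.findElement(MobileBy.AccessibilityId(accessibilityId));
        }

        public MobileElement findByVisibleText(AndroidDriver<MobileElement> driver,
                                               String label) {
            return driver.findElement(
                    MobileBy.AndroidUIAutomator("new UiSelector().text(\"" + label + "\")"));
        }
    }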

Our automation engineers solved this one with a program they created called Vadr. Vadr is the interface that allows testers to access the lab and devices remotely. It shows all devices and their connection status (so we know when those difficult devices have disconnected) and lets us choose which test or test plan to run on each device. This makes it easy for any QA tester to take advantage of our automation tools.
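To illustrate the kind of per-device bookkeeping a tool like Vadr handles, here is a hypothetical sketch; none of these names come from Vadr’s actual code.

    // Hypothetical sketch of the information a tool like Vadr tracks per device.
    // Class and field names are illustrative, not Vadr's actual implementation.
    public class LabDevice {
        enum Status { CONNECTED, DISCONNECTED }

        private final String udid;       // identifies the physical device on the hub
        private final String name;       // e.g. "Pixel XL (7.1)"
        private Status status = Status.DISCONNECTED;
        private String assignedTestPlan; // test or test plan queued for this device

        public LabDevice(String udid, String name) {
            this.udid = udid;
            this.name = name;
        }

        public void updateStatus(Status status) { this.status = status; }

        public void assignTestPlan(String testPlan) { this.assignedTestPlan = testPlan; }

        // A run can only start on a device that is connected and has a plan assigned.
        public boolean isReadyToRun() {
            return status == Status.CONNECTED && assignedTestPlan != null;
        }
    }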

Our goals for an automation lab were accessibility, visibility, and efficiency. We ended up with that and more. As the physical space was created and evolved, our process and understanding of automation testing grew and solidified with it.

October 4, 2016

Quality Assurance Automation

In today’s mobile world, quality assurance automation has become a necessity for successful testing. It allows us to stay hyper-focused on the quality of our client work by checking easily overlooked items without tying up team resources. At Bottle Rocket, we’ve defined a formal process for it that allows our QA team to concentrate on more complex testing.

It wasn’t easy developing an automation process from scratch. Deciding to adopt automation brought up a lot of questions: How would automation engineers be allocated within the company and across our projects? How much work could one automation engineer handle? What technology would we need to adopt to accommodate them?

Starting from scratch also meant there were no design patterns or coding standards in place, no testing framework, and no concrete plan for integrating automation into our existing processes and projects. So we started by selecting our tools of the trade: Appium and Postman.


Step one: deciding on the tools

Appium is an industry-standard, open source quality assurance automation tool based on Selenium’s WebDriver. It works by sending commands such as swipe, tap, and long press exactly as a user would perform them. It also offers the flexibility to write tests in multiple programming languages and has a large community surrounding the framework.
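As a small illustration of what that looks like with the Appium Java client (6.x-era API), here is a minimal session sketch; the server URL, device name, app path, and element ids are placeholder values rather than anything from a real project.

    import io.appium.java_client.MobileElement;
    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;

    import java.net.URL;

    // Minimal Appium sketch: start a session on a connected Android device
    // and interact with the app the same way a user would. All values are placeholders.
    public class AppiumSessionSketch {
        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", "Pixel");             // any connected device
            caps.setCapability("app", "/path/to/app-debug.apk");   // build under test

            AndroidDriver<MobileElement> driver =
                    new AndroidDriver<>(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // Type into a field and tap a button, just like a user.
                driver.findElementById("com.example.app:id/username").sendKeys("qa_tester");
                driver.findElementById("com.example.app:id/login_button").click();
            } finally {
                driver.quit();
            }
        }
    }

Swipes and long presses go through the same session via Appium’s TouchAction API, which is what lets one script drive a physical device end to end.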

For server testing, we selected Postman. This tool sends and receives HTTP messages and can easily test various user flows through API calls. Some benefits of Postman include ease of use, great user experience, and flexibility.


Step two: getting the ball rolling

Working out the new processes and workflows of automation involved a lot of trial and error. Some significant hurdles, for example, were managing provisioning profiles and installing iOS apps from the command line. In the end, Rocketeers worked out how to create the right build and how to get it onto the device.

After developing a framework from scratch to drive automated test plans, our automation engineers designed test cases that are general enough to be used on any project. Although reusable test cases take longer to build, they save time in the long run because testers don’t have to write or edit tests for every new project.
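As a sketch of what a project-agnostic test case can look like, the example below hard-codes nothing project-specific; the check itself and its parameter names are ours for illustration.

    import io.appium.java_client.MobileElement;
    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.By;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    // Illustrative reusable test case: nothing project-specific is hard-coded.
    // The id of the app's first screen and the timeout are supplied per project.
    public class LaunchTest {

        public boolean launchAndVerify(AndroidDriver<MobileElement> driver,
                                       String firstScreenElementId,
                                       long timeoutSeconds) {
            WebDriverWait wait = new WebDriverWait(driver, timeoutSeconds);
            return wait.until(ExpectedConditions
                    .presenceOfElementLocated(By.id(firstScreenElementId)))
                    .isDisplayed();
        }
    }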

Encapsulation, or the explicit bundling of certain parts of the code, was also a major focus of our initial automation process. It lets us change the target operating system or device for a test run without touching the test cases themselves. Keeping these two pieces of code separate makes the overall framework much more configurable.
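A minimal sketch of that separation, assuming Appium’s DesiredCapabilities; the capability values shown are placeholders.

    import org.openqa.selenium.remote.DesiredCapabilities;

    // Device/OS configuration is encapsulated here, separate from any test case.
    // Swapping the target device or OS only touches this class, never the tests.
    public class DeviceConfig {
        private final String platformName;    // "Android" or "iOS"
        private final String platformVersion; // e.g. "7.1"
        private final String deviceName;      // e.g. "Pixel" (placeholder)
        private final String udid;            // which physical device to drive

        public DeviceConfig(String platformName, String platformVersion,
                            String deviceName, String udid) {
            this.platformName = platformName;
            this.platformVersion = platformVersion;
            this.deviceName = deviceName;
            this.udid = udid;
        }

        public DesiredCapabilities toCapabilities(String appPath) {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", platformName);
            caps.setCapability("platformVersion", platformVersion);
            caps.setCapability("deviceName", deviceName);
            caps.setCapability("udid", udid);
            caps.setCapability("app", appPath);
            return caps;
        }
    }

Test cases only ever receive a driver built from a DeviceConfig, so retargeting a run to a different device or OS version is a configuration change rather than a test change.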


Step three: integrating automation

The first phase of integrating automation with our current QA process involved training our manual testers. We started with the basics: programming in Java. We hosted an interactive coding class covering everything from objects to arrays over roughly an eight-week time frame.

After the training, Rocketeers moved on to automation itself with a focus on Appium. We were given some sample code to examine and edit, and after a few weeks, our team was writing scripts and manipulating apps on physical devices.

Since adopting automation, test coverage for both test cases and devices has improved, and testing volume has increased. In the future, we will continue to grow our automation lab with a myriad of test devices to choose from as well as expand our automation team and refine our framework.


Automation streamlines the quality assurance process and frees up our testers to focus on tasks that matter most to our clients. Make sure to contact our team of Rocketeers if you’re interested in hearing more about our automation services.
