August 2, 2018

Bottle Rocket’s 3rd Annual QA Bootcamp Recap

Another year, another successful Bottle Rocket Quality Assurance Bootcamp!

Every year, the QA team puts on an informative and interactive event for testers outside of the company. Interested participants come from all over DFW to learn about a variety of topics covering how we approach testing for mobile and connected devices. This gives people outside of the mobile space a chance to get more exposure to the mobile world and to how Bottle Rocket’s QA team works, and this particular bootcamp was our most successful yet.

In 2017, the QA Bootcamp was a half-day affair with about 20 participants.

In 2018, a full day was needed to cover all of the topics, and the audience grew to 25.

But that’s not the only thing that changed: this year, the bootcamp had its first-ever sponsor. It was great to be supported by Beacon Hill Staffing Group. As a neighbor in our building, they had seen the bootcamps of previous years and wanted to be a part of this year’s fun. Thank you so much for your support!

What Happened at our QA Bootcamp?

Our speakers went over the following areas of interest:

  • Bottle Rocket’s general test approach as it pertains to mobile and connected devices
  • A detailed overview of mobile automation
  • Our preferred testing tools
  • Different mobile platforms
  • Advanced testing types such as performance, security, and load testing

The topics were similar to last year’s agenda, but the added time allowed the speakers to go into more depth. A full-day event gave them an opportunity to take their presentations to the next level. Instead of giving an overview of each idea, they were able to go into more technical detail, which gave the attendees a clearer picture of what it would be like to work with these tools and processes. One of the attendees said the experience was “very enriching” and that it “enabled [them] to gain insights and understanding of the concepts of mobile”.

But it wasn’t all lectures. There was also an introduction by our Director of Quality Assurance, awesome guest speakers from our User Experience and Engineering departments, a tour of our quirky, space-obsessed office, a fantastic Q&A, and a delicious lunch provided by Beacon Hill Staffing Group.

Why Do We Host Bootcamps?

This type of event benefits not only the participants but also our QA team and department as a whole. As the saying goes, the best way to learn something is to teach it. We believe in giving our team members the opportunity to share what they’ve learned whenever possible, because it helps the people they share it with and strengthens their own proficiency in the subject. The QA Bootcamp is a great way to do this.

One of our new presenters this year, Jyoti Pothukuchi, commented that “sessions like this help us share our expertise and experience with the testing community” as well as “helping us improve based on the feedback”. Another presenter, AJ Mejorado, called out that the audience “had a series of really great questions during the Q&A panel” and that this gives us “an updated perspective on how many individuals in the area are really interested in not only QA but also in increasing their knowledge and skillset”.

In the end, connecting with people who share our passion for quality is what keeps us working hard on this event every year. The increase in attendees this year really boosted the amount of feedback and questions we received, so thank you so much if you were able to come and participate!

If this year’s growth is any indication, you won’t want to miss next year’s bootcamp. Look out for the LinkedIn post for more details.

October 18, 2017

Bottle Rocket Hosts Second Annual QA Bootcamp

Each year, Bottle Rocket hosts a Quality Assurance (QA) Bootcamp, a half-day event covering mobile and connected device application testing. The bootcamp is a chance for testers in other industries to get exposure to the mobile application testing space.

Our agenda for the day included various mobile-specific topics:

  • QA Automation
  • Augmented reality, virtual reality, and mixed reality
  • A tour of our office (including the new automation lab and AR/VR lab)
  • Tools
  • Know your platform
  • TV Everywhere

Last year, all speakers were from Bottle Rocket’s QA team, but this year we decided that having input from Rocketeers of other disciplines would add to the experience. So, we invited department leads from engineering and experience design (XD) to give their perspectives. Development leads focused on upcoming iOS and Android developments and on how the engineering and QA teams collaborate. Our head of experience design reiterated the value of having QA’s input in XD and involvement throughout all phases of a project. Those discussions gave participants a glimpse of working in a collaborative, multi-discipline environment and an idea of what to look for in future announcements.

Having other departments involved also, in a way, replicates our working model in the office: all disciplines are integrated. Other companies are known for separating QA from the rest of the development team, but here at Bottle Rocket, we believe that all disciplines should work together in close proximity to create a quality product. The speakers helped show that our culture can truly increase productivity, and especially that there is value in including QA from the beginning of a project, throughout its development, and upon release. Our working model and their obvious respect for our team seemed to leave a positive impression on the attendees.

The idea was to open up a broad discussion that isn’t necessarily QA-centric; for example, AR and VR impact many fields and are on the rise. Sharing our current work on these exciting platforms was a fun way to get participants engaged. Mobile automation is always a popular topic, as it’s becoming integral to technology as a whole. We hope participants left inspired and ready to explore the mobile automation world within their own companies.

One of the highlights of the bootcamp was the quality of the participants attending the event. Their questions allowed us to dive deeper into the material and gave us a better understanding of where interest lies for future bootcamps. Having the right group of participants makes all the difference in the impact of an event like this, and the talent and inquisitive minds in the room made it a success.

Part of our own success story is the way we work (which contributes to our level of productivity), and what better way to give our participants a glimpse of that than a tour of our office space? Stops on the tour included the VR lab, the QA device and automation labs, and examples of our lo-fi to hi-fi whiteboard approach throughout the work areas.

We could have spent years planning and perfecting a bootcamp agenda and presentations, but without an interested and eager audience, it wouldn’t have amounted to much in the end. We owe the success of this year’s event to the questions and passion of the attendees. The positive and insightful feedback is what keeps us going and strengthens our desire to share our knowledge with the community. It makes all the work worth it when we receive feedback like:

“Well Organized”
“Great work environment and great team”
“Very informative & helpful”

Seeing what participants were interested in hearing more about has inspired new focuses for the next bootcamp — there will be many more events like this coming from the Bottle Rocket QA team!

Interested in working with our QA team for your brand experience? Let us know at [email protected]

October 10, 2017

Bottle Rocket QA: How Many Devices Is Too Many?

Never Too Many Devices

Mobile software testing involves many variables, such as operating system versions, types of connectivity, networks, devices, and environments. Accounting for so many variables poses an interesting dilemma: Should testing be performed on physical devices, simulators, emulators, or cloud-based devices? Is there unique value in sticking to physical devices? At Bottle Rocket, we use a combination of these with an emphasis on physical devices—and we do mean emphasis. We recently surpassed 400 test devices in our library.

Device Alternatives?

Users expect apps, including those with the ability to sync between devices with shared accounts, to work flawlessly on their devices. So, a comprehensive library of devices to mimic the user experience is essential in testing. However, the number of combinations of devices and operating systems that users have is huge. For this reason, many companies opt to rely mostly on simulators, emulators, and/or cloud-based devices for their software testing.

These device alternatives can be very helpful in the beginning of the application development lifecycle, but they are not ideal as the sole testing resource. Many important pieces of mobile interaction are lost when physical devices are left out.

Real-world users are equipped with real devices in different environments, meaning the effects of the following items may vary:

  • Battery life
  • Network conditions
  • Memory limitations
  • Location services
  • Bluetooth
  • NFC
  • Device orientations
  • Accelerometer
  • Operating systems
  • Peripherals

Recreating the user experience as closely as possible is what Bottle Rocket strives for. In our efforts to do this, we recently surpassed four hundred test devices in our QA arsenal! This is a milestone in itself, since one of the biggest challenges in application testing is the cost of obtaining and maintaining such a large volume of physical devices.

Bottle Rocket Setup

Our device lab contains Griffin cabinets with the capacity to hold and charge up to eighty devices at a time. Rocketeers pick out the devices they need, check them out library-style with our device checkout software, and begin their testing. All Rocketeers can access the checkout software remotely, so devices can be easily located and accounted for.

This system allows testers to plan ahead and dedicate certain devices to their project for as long as they are needed. Once a project starts, the lead tester maps out which devices will be needed during the testing process. They are checked out from the device cabinets, used throughout the project, and returned only when all testing is complete. When finished, checking the devices back in is as easy as pushing a button.
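
Since the checkout software is an internal Bottle Rocket tool, its details aren’t public; purely as an illustration, a minimal sketch of the kind of record such a library-style system might track could look like this:

```swift
import Foundation

// Hypothetical sketch only: the real checkout software is an internal tool,
// so these types illustrate the library-style workflow rather than its actual API.
struct TestDevice {
    let id: UUID
    let model: String        // e.g. "iPhone 8"
    let osVersion: String    // e.g. "iOS 11.0"
}

struct DeviceCheckout {
    let device: TestDevice
    let tester: String
    let project: String
    let checkedOutAt: Date
    var returnedAt: Date?    // nil while the device is still out on a project

    var isCheckedOut: Bool { return returnedAt == nil }
}
```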

Maintainability

Relying solely on physical devices for testing requires a great deal of time and money. Maintaining such a library involves frequent upgrades in order to remain relevant and effective. If our library doesn’t contain the latest and greatest mobile devices, a big chunk of tech-savvy users and their experiences will be unaccounted for. We need to stay ahead when it comes to testing on the latest hardware. We pay close attention to annual events such as Google I/O and WWDC to gather intel on what’s coming, and we plan ahead by preordering new devices and scoping out the latest operating system and its release date. So why all the focus on what’s coming? It’s important to know about adoption trends, new hardware features such as changing aspect ratios, and any newly announced peripherals.

New operating systems are especially impactful as they can touch any part of the device and installed applications. New features and tweaks to existing operating systems can affect corresponding areas of our applications, so we must look into the patch notes of any release and test our applications accordingly. A recent example of this is the release of iOS 11 – for every new beta released, we update our device operating systems and run tests on all of our applications. It’s a quick sanity check to verify all features cooperate with the updates. Developers have taken advantage of these betas as well in order to research and experiment with the new ARKit.

It is beneficial to have varying device types - high end and low end, new and old. Keeping old operating systems on devices is part of this method. Android users in particular are not pushed to upgrade their operating systems as aggressively as users on other platforms. Android OS 6 is still the most widely used according to Android’s developer portal, despite OS 7 having been out for more than a year. Covering OS 4, 5, 6, and 7 is necessary to ensure quality for all users.

In summary, using physical devices ensures the most accurate testing and analysis, since that is the only way to duplicate the diversity of users’ environments, network settings, operating system versions, and hardware. Keeping up with current and future devices and operating system versions will also ensure quality results. Bottle Rocket will continue to build its device library – we’re beyond 400 now and have no plans of stopping!

Interested in testing your app in our device library? Let us know at [email protected]

July 10, 2017

Engineering Jedi: iOS Breakpoint Secret Sauce for Better Debugging

As most iOS developers are aware, breakpoints are an essential tool for debugging apps. In Xcode, setting a breakpoint is as easy as clicking on the line number where you want to pause app execution. When execution is paused, you can view the state of any variables in scope, issue commands to the debugger, and even step through your code line by line to watch the execution flow unfold before your eyes.

[GIF: setting a breakpoint in Xcode]

To make your debugging experience with breakpoints even better, we’ll cover the five breakpoints that every iOS developer should enable. If you’re a seasoned iOS developer, you may already know about some of these. If you’re new to iOS development, don’t worry; we’ll start with a quick overview of how to set up and use breakpoints.

The Breakpoint Navigator

Managing breakpoints in Xcode is super simple. Enter the Breakpoint Navigator, which lives in the Navigator Panel on the left side of Xcode. Here, you will see all the breakpoints currently set in your project. Click on the name of the method to jump right to that line of code or click on the blue breakpoint icon to enable/disable the breakpoint. Removing a breakpoint is as easy as dragging it outside of the panel.

[Image: the Breakpoint Navigator in Xcode]

Symbolic Breakpoints

While it’s very useful to set breakpoints on specific lines of code, it can also be helpful to have some breakpoints function globally. Symbolic breakpoints trigger on certain conditions, such as any time a particular method is called on a class. To add one, click the "+" icon at the lower left of the Breakpoint Navigator and select “Symbolic Breakpoint”.

[Image: selecting Symbolic Breakpoint in the Breakpoint Navigator]
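
As a quick, hypothetical illustration (the class name below is made up): a symbolic breakpoint whose Symbol is "-[UIViewController viewDidLoad]" pauses the debugger whenever UIKit’s implementation of that method runs, with no line-level breakpoint set anywhere in your code:

```swift
import UIKit

// Hypothetical example: with a symbolic breakpoint on "-[UIViewController viewDidLoad]",
// the debugger pauses whenever UIKit's viewDidLoad implementation executes,
// including when this subclass calls through to super.
final class WelcomeViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad() // the symbolic breakpoint fires here
        view.backgroundColor = .white
    }
}
```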

The Top 5 Xcode Breakpoints

Here we are, the secret sauce! Use the "+" button at the bottom left of the Breakpoint Navigator to add the following breakpoints:

All Objective-C Exceptions

This breakpoint catches exceptions thrown by Objective-C code. Due to the current transition to Swift, this breakpoint isn’t as useful as it once was, but it can still be handy if your app contains any Objective-C code or uses third-party libraries that are written in Objective-C.

Turn it on:

  1. Add an "Exception Breakpoint"
  2. Change the "Exception" value from "All" to "Objective-C"
  3. Click "Add Action" to add a "Debugger Command" action
  4. Type in "po $arg1" for the command. This will automatically print the relevant error to the console when you encounter an exception.

[Image: configuring the All Objective-C Exceptions breakpoint]
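
To see it in action, here’s a contrived example (the array and index are made up) that raises an NSRangeException from Objective-C code; with this breakpoint enabled, Xcode pauses at the throw and the "po $arg1" action prints the exception details:

```swift
import Foundation

// Contrived example: asking an NSArray for an out-of-bounds index raises an
// Objective-C NSRangeException. With the All Objective-C Exceptions breakpoint
// enabled, execution pauses at the throw instead of crashing straight away,
// and the "po $arg1" action prints the exception to the console.
let missions: NSArray = ["Mercury", "Gemini", "Apollo"]
let missing = missions.object(at: 10) // index 10 is beyond bounds [0 .. 2]
print(missing)
```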

-[UIApplication main]

This puts a breakpoint at the entry point to your application’s launch. We’ll be using the "Debugger Command" action again to have our debugger import UIKit, which will make the debugger much more aware of properties and methods on things like UIView. This will make it easier to interact with and print properties on these types of classes when you’re debugging.

Turn it on:

  1. Add a "Symbolic Breakpoint"
  2. Type in "-[UIApplication main]" for the Symbol
  3. Add a "Debugger Command" action
  4. Enter "expr @import UIKit" for the command
  5. Check "Automatically continue after evaluating actions" — this will ensure that your app doesn’t immediately pause execution every time you build and run

[Image: the -[UIApplication main] symbolic breakpoint with its Debugger Command action]

UIViewAlertForUnsatisfiableConstraints

This breakpoint helps you catch undesirable constraint configurations. If you’ve ever seen the “unable to simultaneously satisfy constraints” error message in the console, that message itself suggests setting a breakpoint here. Usually, these situations don’t cause obvious visual errors, but they should be fixed since we don’t know how they will be handled in future versions of iOS.

[Image: setting up the UIViewAlertForUnsatisfiableConstraints breakpoint]

Turn it on:

  1. Add a "Symbolic Breakpoint"
  2. Type in "UIViewAlertForUnsatisfiableConstraints" for the Symbol

NOTE: In Xcode 9, there is a new "Constraint Error" breakpoint that can be used instead of manually creating this symbolic breakpoint.

[Image: using the Constraint Error breakpoint in Xcode 9 instead of UIViewAlertForUnsatisfiableConstraints]
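
For a sense of what trips this breakpoint, here’s a small, hypothetical view controller whose constraints can’t all be satisfied: the box is pinned to both horizontal edges and also given a fixed width, so Auto Layout has to break one of them at layout time.

```swift
import UIKit

// Hypothetical example of an unsatisfiable constraint set: the box is pinned to
// both leading and trailing edges *and* given a fixed width, so Auto Layout must
// break one constraint, logs "Unable to simultaneously satisfy constraints",
// and the UIViewAlertForUnsatisfiableConstraints breakpoint pauses execution.
final class ConflictingConstraintsViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let box = UIView()
        box.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(box)

        NSLayoutConstraint.activate([
            box.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 20),
            box.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -20),
            box.widthAnchor.constraint(equalToConstant: 50), // conflicts with the two edge pins
            box.topAnchor.constraint(equalTo: view.topAnchor, constant: 100),
            box.heightAnchor.constraint(equalToConstant: 50)
        ])
    }
}
```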

-[UIView(UIConstraintBasedLayout) _viewHierarchyUnpreparedForConstraint:]

This is another breakpoint to help you identify Auto Layout constraint problems. Hitting it is typically much less common than hitting the "UIViewAlertForUnsatisfiableConstraints" breakpoint above, but it’s still worth turning on.

Turn it on:

  1. Add a "Symbolic Breakpoint"
  2. Enter "-[UIView(UIConstraintBasedLayout) _viewHierarchyUnpreparedForConstraint:]" for the Symbol

[Image: the -[UIView(UIConstraintBasedLayout) _viewHierarchyUnpreparedForConstraint:] symbolic breakpoint]

UICollectionViewFlowLayoutBreakForInvalidSizes

This last breakpoint helps catch layout errors in UICollectionView’s flow layout. This is especially useful if you ever work with self-sizing collection view cells or create your own flow layout subclass.

Turn it on:

  1. Add a "Symbolic Breakpoint"
  2. Enter "UICollectionViewFlowLayoutBreakForInvalidSizes" for the Symbol

[Image: the UICollectionViewFlowLayoutBreakForInvalidSizes symbolic breakpoint]
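
As a hypothetical illustration of the kind of mistake this catches: a vertically scrolling flow layout whose delegate returns an item wider than the collection view itself logs the familiar “item width must be less than the width of the UICollectionView” warning, and this symbol is where the debugger breaks.

```swift
import UIKit

// Hypothetical example: the delegate returns an item wider than the collection
// view for a vertically scrolling flow layout. UIKit logs the "item width must
// be less than the width of the UICollectionView" warning, and the
// UICollectionViewFlowLayoutBreakForInvalidSizes breakpoint pauses right there.
final class InvalidSizesViewController: UICollectionViewController, UICollectionViewDelegateFlowLayout {
    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView?.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "Cell")
    }

    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        // Wider than the collection view's own bounds, so the flow layout flags it as invalid.
        return CGSize(width: collectionView.bounds.width + 100, height: 44)
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 numberOfItemsInSection section: Int) -> Int {
        return 1
    }

    override func collectionView(_ collectionView: UICollectionView,
                                 cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        return collectionView.dequeueReusableCell(withReuseIdentifier: "Cell", for: indexPath)
    }
}
```

To try it, present the controller with InvalidSizesViewController(collectionViewLayout: UICollectionViewFlowLayout()) and watch the breakpoint fire during the first layout pass.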

One More Thing - User Breakpoints

As you can imagine, it can be quite tedious and time-consuming to create all five of these breakpoints for every project you work on. This is where the power of promoting your project breakpoints to "User Breakpoints" comes into play. User breakpoints will automatically be present in any Xcode project you open! To convert a breakpoint to a user breakpoint, right-click on the breakpoint and choose "Move Breakpoint To" -> "User."

When everything is said and done, you should now have five breakpoints listed under "User" in the Breakpoint Navigator. These breakpoints will be set up for any Xcode project you open from here on out. Also, notice that any project-specific breakpoints will be listed above these "User" breakpoints, under your project’s name.

[Image: the five user breakpoints listed in the Breakpoint Navigator]

June 15, 2017

Inside the QA Automation Lab

Bottle Rocket’s Quality Assurance team is on an exciting automation journey. We’ve written the code, trained our manual testers, and are now focusing efforts on the place where it will all be put together. We’ve put a lot of hard work into our automation process and we wanted an automation lab that was just as awesome.

Our lab is an open space that is accessible to all. Shelves containing a collection of devices covering every major type, size, and operating system line the walls. Each device has its own stand, and the shelves are backlit to maximize visibility. USB hubs with up to two amps per port are positioned to the side, allowing devices to charge and sync at the same time. Each hub is connected to an iMac acting as the server, and we anticipate being able to run tests on 32 devices at once per server.

Many of our large, multi-release projects now have automation scripts and can be continuously monitored. Any time a new build is created, a test plan can be triggered to download the build and run a smoke test on selected devices. This can really ease the pain of running the same tests on each build over and over, not to mention those server testing sessions in the wee hours of the morning.
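
The post doesn’t name the framework behind these scripts, so purely as an illustration, here’s a minimal XCUITest-style smoke test sketch with made-up element names:

```swift
import XCTest

// Illustrative sketch only: Bottle Rocket's framework isn't specified here, and
// these element identifiers are hypothetical. The idea is the same, though: a
// short, repeatable check that every new build launches and its critical screens
// appear, run automatically across the selected devices.
final class SmokeTests: XCTestCase {
    func testAppLaunchesToHomeScreen() {
        let app = XCUIApplication()
        app.launch()

        // Fail fast if the app never reaches its first screen.
        XCTAssertTrue(app.navigationBars["Home"].waitForExistence(timeout: 10))
        XCTAssertTrue(app.tabBars.buttons["Settings"].exists)
    }
}
```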

Reports are generated and the project’s QA team is notified. Testers get reports from these runs, complete with screenshots of each test outcome. The reports are divided into sections, such as pages of the app, and the results are clear-cut with color-coded pass/fail icons. Each individual test can pass or fail based on criteria detailed in the test cases. If a tester can’t tell exactly what happened from the screenshots alone, they can swing by the automation lab to run the test again and follow along in person.

This has been a journey because it wasn’t always easy; there were plenty of issues to work through along the way. Physical devices were a big one. They’re definitely the most beneficial for testing because they are the closest we can get to real-world use of an app. There are lots of issues that can be missed on a simulator, such as anything related to performance, battery level, or hardware components. The more variety we can get across a larger number of devices, the better. But working with this volume of devices has its hardships. Some of these issues required enhancements to our setup: high-performance devices (namely tablets) require lots of power to even maintain a charge, hence the beefy USB hubs mentioned earlier. The less powerful hubs we used initially just could not keep up. Different devices can also have different settings menu navigation, meaning our test cases need to be general enough to apply to different hierarchies. Some devices even disconnect after a period of time.

Our automation engineers solved this one with a program they created called Vadr. Vadr is the interface that allows testers to access the lab and devices remotely. It shows all devices, their connection status (so we know when those difficult devices have disconnected), and allows us to choose which test/test plan to run on each device. This will make it easy for any QA tester to take advantage of our automation tools.

Our goals for an automation lab were accessibility, visibility, and efficiency. We ended up with that and more. As the physical space was created and evolved, our process and understanding of automation testing grew and solidified with it.
