May 9, 2019

Two To-Do’s and a Word of Caution for Brands from Google I/O 2019

If a tree falls in a forest and no one is around to hear it – does it make a sound? If a feature is added and nobody can find it – does it exist?


May 12, 2018

4 Things Businesses Need to Know from Google I/O 2018

It’s May, and you know what that means! Ok, maybe you don’t – it’s time for Google I/O! Every year, thousands of developers flood Mountain View, California to learn about the latest innovations and announcements from the software giant. The event kicked off today with its opening keynote, and while many of us watched from our office in Dallas, a few lucky Rocketeers were onsite to experience the action firsthand. Over the course of the next several days, we will share insider details, perspectives, and helpful recaps of what we uncover…starting with today’s keynote.

Make Good Things Together

Setting the theme in the first minute of the presentation, Google kicked off its annual developer conference with the phrase “make good things together.” This was present in nearly every segment of the presentation, whether it was about Google facing challenges to better the world or giving everyone the tools to do it themselves.

Here are the four announcements we believe will have the greatest impact on business in the next year: Google Assistant, App Actions, App Slices, and ML Kit. Plenty of other topics piqued our interest, of course, but these are the ones that will matter most to businesses in the coming year. Keep in mind that several of these are sneak peeks, and some will gain many more features and capabilities in the coming months – so be sure to check back for more information as it is released.

1. Google Assistant

Since its debut two years ago, Google Assistant has grown well beyond its original capabilities. At I/O today, three new features were announced that could truly give Assistant the edge over Alexa and Siri (and Cortana, I guess): Continued Conversation, Multiple Actions, and improved interactions with Google Assistant Apps (the really big one).

Continued Conversation lets Assistant keep answering follow-up questions without the user prefacing each one with an “Okay Google”. Once the conversation at hand is finished, a simple “thank you” ends the interaction and kills the mic. This also means Assistant must understand what is conversational filler and what is an actual request.

Multiple Actions sounds simple but is extremely complex. Simply put, this allows the user to say things such as “what time is it and how long will it take me to get to work?” and get answers to both questions without having to ask them individually to Assistant.
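Under the hood, Multiple Actions means segmenting a single utterance into separate requests and answering each one. A toy Kotlin sketch of the segmentation idea (purely illustrative – Google's real natural-language pipeline is far more sophisticated than splitting on a keyword):

```kotlin
// Naive compound-query splitter: treats " and " as a boundary between
// two independent requests. Real NLU has to decide when "and" joins
// two requests versus when it is part of a single request.
fun splitQuery(query: String): List<String> =
    query.split(" and ")
        .map { it.trim().trimEnd('?') }
        .filter { it.isNotEmpty() }
```

With this, "what time is it and how long will it take me to get to work?" becomes two requests that can be answered independently.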

Google Assistant showing Starbucks menu items in Android P

Google Assistant Apps have some new capabilities as well. To get ready for Smart Displays, Google gave Assistant a visual overhaul: information is now displayed full screen and can include additional visuals such as videos and images. eCommerce applications benefit greatly because the transition into an Assistant App is much easier and more natural for the user. Previously, a user had to explicitly ask to be connected to a Google Assistant App; now a simple request such as “order my usual from Starbucks” takes the user directly into the Starbucks Assistant App. As seen above, the user can then quickly select additional menu items to add to their order via the new visual interface. From first request to completed order, this interaction will likely involve fewer steps than opening the Starbucks app directly (assuming it isn’t already on the user’s home screen).

2. App Actions

Suggestions already appear below the Google search bar while typing. Soon, suggested actions will begin to appear as well. This might not sound like much, but imagine someone searching for a particular product, like laundry detergent; the Walmart app could surface an App Action to “add to my grocery order” for pickup later.

App Actions example from Google I/O 2018 keynote

As shown above, Google provided an example of searching for Infinity War. When the user searched for it, they were prompted with options to buy tickets or watch the trailer. This is a great example of a contextual interface, but it doesn’t just happen like magic – apps need to be optimized to support this type of interaction.
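Registering for this kind of surfacing is expected to be declarative. Here is a hypothetical sketch of an App Actions registration file – the format was only previewed at I/O 2018, so the element names and built-in intent name below are assumptions, not the final spec:

```xml
<!-- Hypothetical actions.xml sketch; verify against the released spec. -->
<actions>
    <!-- Map a built-in "buy movie ticket" style intent to a deep link. -->
    <action intentName="actions.intent.BUY_MOVIE_TICKET">
        <fulfillment urlTemplate="myapp://tickets{?movieName}">
            <parameter-mapping
                intentParameter="movie.name"
                urlParameter="movieName" />
        </fulfillment>
    </action>
</actions>
```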

Headphones and smartphone showcasing App Actions

In this example, Google has placed App Actions in the launcher. The suggestions are based on everyday behavior: here, it suggests the presenter call Fiona, as he usually does at this time of day, or resume the song he last listened to, since his headphones are connected.

3. App Slices

Similar to App Actions, App Slices also appear in search. But there is a difference. Instead of simply suggesting an action, App Slices use the functionality of an app to display information in search. It can present a clip of a video, allow the user to check in to a hotel, or even show photos from the user’s last vacation.

App Slices showcased at Google I/O 2018

In the example shown here, simply searching “Lyft” brings up the suggested routes from the Lyft app and displays the cost of each trip as well. We’ll learn more about which App Slices become available soon, so be sure to check back to learn more about the potential benefits of this innovation.
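For developers, a Slice is served by a SliceProvider that builds UI rows on demand. A minimal Kotlin sketch using the androidx Slice builders – the API shapes here come from the early Jetpack previews and may change, and the route and fare are made-up values:

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Serves a simple one-row Slice that search surfaces can render inline.
class RideSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice {
        val list = ListBuilder(context!!, sliceUri, ListBuilder.INFINITY)
        list.addRow(
            ListBuilder.RowBuilder(list)
                .setTitle("Ride to Work")      // hypothetical saved route
                .setSubtitle("$12 estimated")  // hypothetical fare
        )
        return list.build()
    }
}
```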

4. ML Kit

Part of Firebase, ML Kit (Machine Learning Kit) now offers a range of machine learning APIs for businesses to leverage. Instead of having to build custom ML models for anything and everything, optimize them for mobile usage, and then train them with hundreds (preferably thousands) of samples, Google now provides “templates” for some common business needs.

ML Kit and templates shown at I/O 2018

Leveraging TensorFlow Lite and available on both Android and iOS, ML Kit will make it easier to integrate image labeling, text recognition, face detection, barcode scanning, and more. It can even be used to build custom chatbots.
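Whichever detector you use, results come back as candidate labels with confidence scores, and your app decides which ones to act on. A minimal, dependency-free Kotlin sketch of that step – the Label class below is a stand-in for ML Kit's actual result type, and the 0.7 threshold is an arbitrary example, not an ML Kit default:

```kotlin
// Stand-in for an ML Kit labeling result: label text plus confidence.
data class Label(val text: String, val confidence: Float)

// Keep only labels the app can trust, ordered highest-confidence first.
fun usableLabels(labels: List<Label>, minConfidence: Float = 0.7f): List<String> =
    labels.filter { it.confidence >= minConfidence }
        .sortedByDescending { it.confidence }
        .map { it.text }
```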

But That’s Not All

There were plenty of other announcements in the keynote and even more on their way as the week goes on. For instance, right after the keynote, we found out that Starbucks sees nearly as many orders come through its PWA as through its mobile app. We learned that Google Assistant can now make phone calls to schedule appointments – without the customer service representative realizing it’s a computer. Google announced a new Shush mode that silences notifications when a phone is placed face down on a table, and a lot more.

Even among the four topics covered in this recap, there is more information to come as the week goes on. We’ll dive into each as we get more information back from our Rocketeers in California, so be sure to check back in a couple of days.

May 10, 2018

Google I/O 2018 Rocketeer Recap – Day 2: A Day for Developers

Google shocked the world with its appointment-booking conversational AI, Google Duplex, when the conference began. They covered new UI elements in Android P, unveiled their new TPU 3.0 servers that power their Machine Learning and Artificial Intelligence, and announced that Google Assistant will help teach children to say "please" and free up time on your calendar. Believe it or not, all of these announcements were made in the first two hours of the conference. So, what could Google possibly have in store for the next two days of I/O 2018? We’re glad you asked.

While a lot of these might not be as flashy as an AI that makes phone calls, there’s no shortage of new and/or updated tools for developers to leverage in the coming year. Here’s what caught the attention of our Rocketeers during the second day of Google’s annual developer conference.

Android Jetpack

Developers can’t just write code and expect it to run perfectly; it needs to be tested – the more often, the better. Back in the day (like last week), developers had to decide whether to run tests on their development machine or on a device. Now, they can simply choose which environment to test against, and Jetpack will take care of the rest. Jetpack’s test tooling will even simulate the conditions an Android device “in the wild” would face to make tests more accurate.


Angular

For many companies, Angular is the basis for the majority of their web applications. Because it remains the reigning king of web application frameworks, new Angular features translate directly into better applications. Here are a few features that Senior Technical Architect Jonathan Campos found most exciting:

Schematics – Customize the generated code for an application; improves development speed.

Angular Universal – A way to render out Angular applications at first request by a user; improves first draw speed and user experience.

Angular Elements – Allows rendering of Angular components without needing to include the entire Angular framework on a webpage.

Ivy Renderer – This remarkable change in rendering can both reduce bundle sizes and improve the initial load time of an application by removing unused code and only compiling the necessary code that changed between releases.

Cloud IoT Core

The Internet of Things (IoT) continues to grow, but it’s not getting any easier to manage – until now. Google’s new IoT management tool, Cloud IoT Core, will make it much easier to manage, connect, and grow IoT ecosystems that seamlessly connect to Google’s Cloud Platform (GCP). It’s not just the development that’s streamlined; analytics are also more manageable than ever.


Lighthouse

An underrated feature hidden in Chrome, Lighthouse helps web developers pinpoint areas for optimization to increase the performance of websites. As of I/O 2018, Google is expanding its feature set. The addition that will help companies most in monitoring their sites is the new Lighthouse API, which lets businesses integrate diagnostics right into their Continuous Integration and Continuous Delivery pipelines.


Google Photos

Google Photos is great on its own, but it doesn’t play well with others. As users take photos, Google Photos is great for backing up and indexing those images. However, finding an image in another app, like a photo editor, can result in minutes of searching through device folders. Yesterday, Google introduced a developer API for Google Photos. This will allow user-permitted apps to search through images directly or by using categories like “documents” or “selfies” as a filter. Director of Android Engineering, Luke Wallace, had this to say about the new API: “Imagine picking a profile photo by just seeing your last 10 selfies in Google Photos, it would be so much quicker than it is today! The API allows for basic filtering of photos, adding photos to Google Photos, creating albums, and even enhancing the albums with more information around the photos like descriptions and map views.”

Support Library

Unless an app has a singular purpose, it’s going to need a menu – among other things. Instead of starting from scratch each time, Google has made it easier than ever to customize Material Design UI components from the support library. Now, instead of reinventing the wheel every time you want a custom interface, you can start with something that resembles a wheel and modify it where you see fit. The best part? It’s not just for Android apps: the component libraries can be used for Android, iOS, web, Flutter, and React.
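Concretely, using one of these components means dropping it into a layout and theming it, rather than hand-building the widget. A layout sketch using the Material button component – the fully qualified class name follows the Material Components alpha announced at I/O and may differ in the final release:

```xml
<!-- Start from the library's "wheel" and modify it, rather than
     building a button from scratch. -->
<com.google.android.material.button.MaterialButton
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Order now" />
```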


WorkManager

A lot happens behind the scenes both when an app is actively used and when it’s running in the background. Ever put your phone in your pocket and found it unusually warm? It’s probably an unoptimized app ravaging your CPU. That’s why Google made WorkManager. With this nifty tool, developers have more visibility into and control over background work – which will ultimately help them build more battery-friendly apps.
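In practice, WorkManager lets you declare a unit of background work plus the conditions under which it may run. A Kotlin sketch – class names follow the androidx WorkManager API (the alpha shown at I/O differed slightly), and SyncWorker is a made-up example:

```kotlin
import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Example worker: the actual sync logic would go in doWork().
class SyncWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // ...upload pending data here...
        return Result.success()
    }
}

fun scheduleSync(context: Context) {
    // Only run when charging and online, so background work
    // never surprises the user's battery.
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.CONNECTED)
        .setRequiresCharging(true)
        .build()

    WorkManager.getInstance(context)
        .enqueue(
            OneTimeWorkRequestBuilder<SyncWorker>()
                .setConstraints(constraints)
                .build()
        )
}
```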

Closing out Day 2

While this may seem like a lot, this is just the tip of the iceberg. It seems no service, tool, or platform was left untouched this year. What’s even more astonishing is that Google has more releases on the way. Some of the updates we’re learning about are just now being released to the public and some aren’t even out yet. So, be sure to check in every now and then as we explore even more of these new features and services from Google.

Until then, if you’d like to hear more about the updates coming to Android and how Google’s services can improve both your iOS and Android applications, contact us today.

May 9, 2018

Google I/O 2018 Rocketeer Recap – Day 1

During the annual Google I/O event, so much more is released than just what the keynote includes. Sure, most of the big news comes out in the first two hours, but more and more details and announcements come out as Google holds session after session over the course of the three-day event.

Google’s goal this year seems to be quality-of-life updates – both for developers and for the people who will ultimately use the products and services created by developers. Since Google is leaning heavily on Machine Learning (ML) to accomplish this goal, talk of ML and AI permeated the entire conference. The second most important theme of the day was simplification (and ML is helping with that too).

Improvements for Developers

Once developers had a chance to learn more about the technologies originally discussed during the keynote, a trend emerged – Firebase was everywhere. Firebase had previously been used for a few things in app development, like crash reporting and user authentication through other web services such as Facebook and Twitter, but this year Google has made it a tool that every developer should be using. As we mentioned in our Google I/O 2018 keynote recap, Google added several new ML models within Firebase to make applications not only functional, but smart. This isn’t where the Machine Learning stops though. Google also added several new features to help with communication, overall application health, and data management. Senior Technical Architect, Jonathan Campos, believes the services in Firebase are so powerful that this could likely be the way most companies will implement Machine Learning in their applications for the foreseeable future.

Another Google-backed technology seeing a revival is the Progressive Web App, or PWA. Introduced a few years ago, PWAs are essentially lightweight versions of apps that live on the web. This year PWAs were front and center, featuring native integration with Chrome OS along with a host of new Lighthouse tools that give developers more actionable guidance.

Google also stepped up and added a host of new features and development best practices for Android. A key resource to aid in this is Android Jetpack – a set of libraries, tools, and architectural guidance to make it easier to build great Android applications.

Improvements for Users

Digital well-being is not a new idea, but Google is using it to drive some of its initiatives. The Dashboard in Android P will help users understand where their time is going and even encourage them to stop using apps that eat up too much of it. For those who want to disconnect more, it provides a little external force that can drive behavior change. But it’s not just a person’s time and energy that Android P will help with – it will also extend the time their phone has energy. Alongside monitoring app usage to help users disconnect, Google will use ML to decide when to close or block apps running in the background, ultimately extending the battery life of Android devices.

Continuing the theme of simplicity, Android P features a few new UI adjustments as well. For instance, many interactions will now occur at the bottom of the screen – where it is easiest to reach them with one hand. Other adjustments include gestures in place of button presses and contextual icons, such as the "back" button appearing only when it can be used, or the rotation icon appearing only when the phone is turned 90 degrees.

Other products besides Android P received updates users will enjoy as well. For instance, Android TV got an overhauled setup process that cuts setup time by about a third, and thanks to ML, Android TV will also predict which settings users are looking for. Google also updated ARCore to version 1.2 and announced Cloud Anchors. As one of our Android Engineers, Chris Koeberle, put it, “Cloud Anchors were the missing element to make it easy to create immersive multiperson augmented reality experiences. Being able to create an AR app that allows people to not just experience but collectively modify a virtual world – on Android and iOS – is going to open up possibilities we can't even imagine yet.”

Closing out Day 1

“As a developer, I don’t focus on what something does, but what it enables me to do. Today Google enabled me to do a lot – specifically around Google Assistant, application development with Firebase and Machine Learning, web application quality with PWAs, and improvements to the Android ecosystem.”

Jonathan Campos, Senior Technical Architect

With this many announcements made on day one, it’s hard to imagine what Google has in store for the rest of the week. But, that’s why we have developers on the scene to tell us what they discover, as they discover it. Be sure to check in tomorrow to hear what they stumble upon next.

In the meantime, contact us for more information about the changes coming in Android P. As one of the select Google Certified Agencies, we are privy to detailed information, beta releases, and direct access to in-house Google developers. With so many changes on the way, it’s more important than ever to build out your digital roadmap, and we can help.

August 3, 2017

Engineering Jedi: Preparing for Android O

Android O, the upcoming Android release revealed at this year’s Google I/O, features many updates that developers are excited about. But you shouldn’t wait until its release to begin thinking about how O will affect your current or future apps. Let’s look at what you can prepare for right now, before Android O’s release. Before you dive in, we should explain that we’re not reviewing what’s new for users here – this post is a rundown for Android developers. So let’s talk code!

We’ve divided the subject into two main sections. The first section deals with apps that target any API level and reveals changes that your app will see even without targeting Android O. The second section deals with new APIs and behavior changes when targeting Android O. Both of these sections assume that your app is running on an O device.

All links point to the Android developer preview site and are subject to change once O is officially released into the wild—we’ll update these as they change so you don’t miss out on anything.

Apps Targeting All API Levels

Important Changes

Consider the following behavior changes that will take place even if your app doesn’t target O just yet.

Other Observations

  • On O devices, Notification badges (aka notification dots) are displayed by default (on supported launchers)
  • In Android O, Developer Options > Show Layout Bounds now displays an “X” icon over the element that currently has focus

The full list of changes can be found here.


You’ll basically want to test your app on a device or emulator running the O Preview, focusing on system behavior changes and going through all app flows. Read more on this on the Android developer site.

Apps Targeting O (API 26)

Changes requiring action

Consider the following behavior changes that you will need to handle (if they affect your app).

Background execution limits

Check out your strategy around background execution considering the following changes:

  • Apps cannot use their manifests to register for implicit broadcasts. You must register for them at runtime
  • Apps that are running in the background now have limits on how freely they can access background services
  • Wakelocks are removed when the app is backgrounded with no active components (components being an activity, service, broadcast receiver, or content provider)

By default, these restrictions are enabled when targeting O, but the user on an O device can still enable them from Device Settings even if you don’t target O.
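The first restriction means moving implicit-broadcast registration out of the manifest and into code. A Kotlin sketch – CONNECTIVITY_ACTION is just one example of an implicit broadcast:

```kotlin
import android.app.Activity
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter
import android.net.ConnectivityManager

class MainActivity : Activity() {

    private val connectivityReceiver = object : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            // React to the connectivity change here.
        }
    }

    override fun onStart() {
        super.onStart()
        // On O, implicit broadcasts must be registered at runtime,
        // not in the manifest.
        registerReceiver(
            connectivityReceiver,
            IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION)
        )
    }

    override fun onStop() {
        unregisterReceiver(connectivityReceiver)
        super.onStop()
    }
}
```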

Other Changes

  • Notification channels are mandatory. You are required to add at least one notification channel for your app. If you don’t, no notifications will be shown and the system will log an error. To see a Toast when this happens on your O test devices, turn on Settings > Developer options > Show notification channel warnings
  • Set a maximum aspect ratio. The app’s maximum screen aspect ratio no longer defaults to 1.86 (slightly taller than 16:9) when you set your target SDK to 26. Make sure to set a maximum aspect ratio meta-data entry in your manifest and test your app on the Samsung Galaxy S8 or LG G6. Read more in this Android Developers blog post
  • Test any code paths in your app that execute Collections.sort() or List.sort(), due to internal changes in O
  • The net.dns1, net.dns2, net.dns3, and net.dns4 system properties are no longer available in O
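The notification-channel requirement from the list above takes only a few lines at startup. A Kotlin sketch – the "orders" id and "Order updates" name are example values, not required names:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// Creating a channel that already exists is a no-op, so this is safe
// to call on every launch before posting any notification.
fun ensureOrderChannel(context: Context) {
    val channel = NotificationChannel(
        "orders",                               // id referenced when posting
        "Order updates",                        // user-visible name in Settings
        NotificationManager.IMPORTANCE_DEFAULT
    )
    context.getSystemService(NotificationManager::class.java)
        .createNotificationChannel(channel)
}
```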

Optional (but Recommended) Changes

The following is a list of things you are free to implement (or not) in your app. Doing these things is not required but will result in a better experience on O devices.

  • Provide hints on input views (username, password, address, etc.) and mark them as important for autofill to help the Autofill Framework better understand your app
  • Consider supporting Picture-In-Picture mode if your app focuses on a media playback experience. Note that android:resizableActivity isn’t required to be set if you only want to support PIP and not other multi-window modes
  • Add adaptive launcher icons. Check out the AdaptiveIconPlayground app mentioned here and on GitHub to test how your adaptive icons respond to various masks and demonstrate possible animations using your adaptive icon
  • Consider adjusting your app’s UI to let the user pin app shortcuts and widgets rather than relying on the user manually adding them from their launcher
  • If you really want to make your Chromebook users happy, consider adding keyboard navigation clusters to your views and viewgroups
  • Consider using Google’s Safe Browsing API on all your WebViews by adding a meta-data entry to your manifest
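An adaptive launcher icon, for example, is declared as a two-layer XML resource that launchers can mask and animate as they see fit. A minimal sketch – the drawable names below are conventions, not requirements:

```xml
<!-- res/mipmap-anydpi-v26/ic_launcher.xml -->
<adaptive-icon xmlns:android="http://schemas.android.com/apk/res/android">
    <background android:drawable="@color/ic_launcher_background" />
    <foreground android:drawable="@drawable/ic_launcher_foreground" />
</adaptive-icon>
```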

Other Observations

The full list of Android O behavior changes can be found here.


You’ll basically want to test your app on a device or emulator running the O Preview: target O, address the system behavior changes, implement O-specific features, and go through all app flows. Learn more about migrating apps to Android O on the Android developer page.

Want even more insight on Android updates and their impact on brand experiences? As one of only 25 global Android Certified Agencies, we’re always thinking ahead for our clients. Ask your questions at [email protected]

© 2020 Bottle Rocket. All Rights Reserved.