May 9, 2019

Two To-Do’s and a Word of Caution for Brands from Google I/O 2019

If a tree falls in a forest and no one is around to hear it – does it make a sound? If a feature is added and nobody can find it – does it exist?


May 12, 2018

4 Things Businesses Need to Know from Google I/O 2018

It’s May, and you know what that means! Ok, maybe you don’t – it’s time for Google I/O! Every year, thousands of developers flood Mountain View, California to learn about the latest innovations and announcements from the software giant. The event kicked off today with its opening keynote presentation, and while many of us watched from our office in Dallas, a few lucky Rocketeers were onsite to experience the action firsthand. Over the next several days, we will share insider details, perspectives, and helpful recaps of what we uncover…starting with today’s keynote.

Make Good Things Together

Setting the theme in the first minute of the presentation, Google kicked off its annual developer conference with the phrase “make good things together.” That theme ran through nearly every segment of the presentation, whether Google was taking on challenges to better the world or giving everyone the tools to do it themselves.

Here are the four announcements that caught our attention and that we believe will have the greatest impact on business in the coming year: Google Assistant, App Actions, App Slices, and ML Kit. Plenty of other topics piqued our interest, of course, but these four are the ones most likely to affect businesses. Keep in mind that several of these are sneak peeks and some will gain many more features and capabilities in the coming months – so be sure to check back for more information as it is released.

1. Google Assistant

Since its debut two years ago, Google Assistant has grown into much more than it was at launch. At I/O today, three new features were announced for Assistant that could truly give it the edge over Alexa and Siri (and Cortana, I guess). Those features are Continued Conversation, Multiple Actions, and improved interactions with Google Assistant Apps (the really big one).

Continued Conversation allows Assistant to keep providing answers without the user having to preface each question with an “Okay Google”. Once the conversation at hand is complete, a simple “thank you” ends the interaction and kills the mic. This also means Assistant can tell the difference between conversational speech and an actual request.

Multiple Actions sounds simple but is extremely complex under the hood. Simply put, it lets the user say things such as “what time is it and how long will it take me to get to work?” and get answers to both questions without having to ask them individually.

Google Assistant showing Starbucks menu items in Android P

Google Assistant Apps have some new capabilities as well. To get ready for Smart Displays, Google gave Assistant a visual overhaul: information is now displayed full screen and can include additional visuals such as videos and images. eCommerce applications benefit greatly from this, because the transition into an Assistant App is now much easier and more natural for the user. Previously, a user had to explicitly ask to be connected to a Google Assistant App; now a simple request such as “order my usual from Starbucks” takes the user directly into the Starbucks Assistant App. As seen above, the user can then quickly select additional menu items to add to their order via the new visual interface. From first request to completed order, this interaction will likely involve fewer steps for the user than going directly into the Starbucks app (assuming it isn’t already on the user’s home screen).

2. App Actions

Suggestions already appear below the Google search bar while you type. Soon, suggested actions will begin to appear as well. This might not sound like much, but imagine someone searching for a particular product, like laundry detergent – the Walmart app could surface an App Action to “add to my grocery order” for pickup later.

App Actions example from Google I/O 2018 keynote

As shown above, Google provided an example of searching for Infinity War. When the user searched for it, they were prompted with options to buy tickets or watch the trailer. This is a great example of a contextual interface, but it doesn’t happen by magic – apps need to be optimized to allow for this type of interaction.

Headphones and smartphone showcasing App Actions

In this example, Google has placed App Actions in the launch menu. The suggestions are based on your everyday behavior. In this instance, it suggests the presenter call Fiona, as he usually does at this time of day, or resume the song he last listened to, since his headphones are connected.
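Under the hood, an App Action ultimately deep-links the user into a specific screen of your app. Google hadn’t published the final registration format at the time of the keynote, so here is only a rough sketch of the receiving end, assuming a hypothetical myapp://order?item=… link declared in the app’s manifest:

```kotlin
import android.app.Activity
import android.os.Bundle

// Hypothetical Activity fulfilling an App Action deep link such as
// myapp://order?item=detergent. The URI scheme and the "item" query
// parameter are illustrative assumptions, not Google's published spec.
class OrderActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // When Google surfaces the suggested action and the user taps it,
        // the app launches with the deep-link URI attached to the intent.
        val item = intent?.data?.getQueryParameter("item")

        if (item != null) {
            // Jump straight to the relevant screen, not the home screen.
            showReorderScreen(item)
        } else {
            showDefaultScreen()
        }
    }

    private fun showReorderScreen(item: String) { /* app-specific UI */ }
    private fun showDefaultScreen() { /* app-specific UI */ }
}
```

The key takeaway: the more of your app’s functionality you expose through well-defined deep links, the more surfaces Google has to suggest it from.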

3. App Slices

Similar to App Actions, App Slices also appear in search, but with a difference: instead of simply suggesting an action, an App Slice uses the functionality of an app to display information directly in search. It can present a clip of a video, let the user check in to a hotel, or even show photos from the user’s last vacation.

App Slices showcased at Google I/O 2018

In the example shown here, simply searching “Lyft” brings up suggested routes from the Lyft app and displays the cost of each trip as well. More details about which App Slices will be available are coming soon, so be sure to check back to learn more about the potential benefits of this innovation.
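For developers, the plumbing behind this is the Slices API in Android Jetpack. A minimal sketch of a SliceProvider using the androidx Slice builders might look like the following – the route title and price are hard-coded placeholders of ours, and a production Slice would also wire up a tap action:

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder

// Minimal sketch: exposes a single-row slice that search surfaces can
// render inline. Title and subtitle are placeholder values.
class RideSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val context = context ?: return null
        // A real provider would inspect sliceUri and query live app data.
        return ListBuilder(context, sliceUri, ListBuilder.INFINITY)
            .addRow(
                ListBuilder.RowBuilder()
                    .setTitle("Ride to work")
                    .setSubtitle("Estimated fare and pickup time go here")
            )
            .build()
    }
}
```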

4. ML Kit

Part of Firebase, ML Kit (Machine Learning Kit) now offers a range of machine learning APIs for businesses to leverage. Instead of having to build custom ML algorithms for anything and everything, optimize them for mobile usage, and then train them with hundreds (or preferably thousands) of samples, developers can now use Google-provided “templates” for some common business needs.

ML Kit and templates shown at I/O 2018

Leveraging TensorFlow Lite and available on both Android and iOS, ML Kit makes it easier to integrate image labeling, text recognition, face detection, barcode scanning, and more. It can even be used to build custom chatbots.
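To give a sense of how little code this takes, here is a sketch of on-device text recognition using the Firebase ML Kit APIs as they shipped around I/O 2018 – the bitmap source and log tag are our own illustrative choices:

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Minimal sketch of on-device text recognition with ML Kit for Firebase.
// In a real app, `bitmap` would come from the camera or a file.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { result ->
            // result.textBlocks holds detected blocks with bounding boxes.
            Log.d("MLKit", "Found text: ${result.text}")
        }
        .addOnFailureListener { e ->
            Log.e("MLKit", "Text recognition failed", e)
        }
}
```

Because the model runs on-device, this works offline and nothing leaves the phone; Firebase also offers cloud-backed variants with higher accuracy.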

But That’s Not All

There were plenty of other announcements in the keynote, and even more are on their way as the week goes on. For instance, right after the keynote, we found out that Starbucks now sees nearly as many orders come through its PWA as through its mobile app. We learned that Google Assistant can now make phone calls to schedule appointments – without the customer service representative realizing it’s a computer. Google announced a new Shush mode that silences notifications when a phone is placed face down on a table, and a lot more.

Even among the four topics covered in this recap, there is more detail to come as the week goes on. We’ll dive deeper into each as our Rocketeers in California report back, so be sure to check back in a couple of days.

May 9, 2018

Google I/O 2018 Rocketeer Recap – Day 1

During the annual Google I/O event, so much more is released than just what the keynote includes. Sure, most of the big news comes out in the first two hours, but additional details and announcements emerge as Google holds session after session over the course of the three-day event.

Google’s goal this year seems to be quality-of-life updates – both for developers and for the people who will ultimately use the products and services created by developers. Since Google is leaning heavily on Machine Learning (ML) to accomplish this goal, talk of ML and AI permeated the entire conference. The second most important theme of the day was simplification (and ML is helping with that too).

Improvements for Developers

Once developers had a chance to learn more about the technologies originally discussed during the keynote, a trend emerged – Firebase was everywhere. Firebase had previously been used for a few pieces of app development, like crash reporting and user authentication through web services such as Facebook and Twitter, but this year Google made it a tool that every developer should be using. As we mentioned in our Google I/O 2018 keynote recap, Google added several new ML models to Firebase to make applications not only functional, but smart. The Machine Learning doesn’t stop there, though: Google also added several new features to help with communication, overall application health, and data management. Senior Technical Architect Jonathan Campos believes the services in Firebase are so powerful that this is likely how most companies will implement Machine Learning in their applications for the foreseeable future.

Another Google-backed technology enjoying a revival is the Progressive Web App, or PWA. Introduced a few years ago, PWAs are essentially lightweight versions of apps that live on the web. This year PWAs were front and center, featuring native integration with Chrome OS along with a host of new Lighthouse tools that give developers more actionable guidance.

Google also stepped up and added a host of new features and development best practices for Android. A key resource to aid in this is Android Jetpack – a set of libraries, tools, and architectural guidance to make it easier to build great Android applications.
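As a taste of what Jetpack’s architecture components look like in practice, here is a minimal ViewModel sketch – the counter screen is a hypothetical example of ours, not something shown at the conference:

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

// Minimal Jetpack ViewModel sketch: the count survives screen rotation
// because the ViewModel outlives the Activity that displays it.
class CounterViewModel : ViewModel() {

    private val _count = MutableLiveData(0)
    val count: LiveData<Int> get() = _count

    fun increment() {
        _count.value = (_count.value ?: 0) + 1
    }
}
```

An Activity obtains the instance with a ViewModelProvider and observes `count`; none of the usual save/restore boilerplate is needed, which is exactly the kind of simplification Jetpack is aiming for.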

Improvements for Users

Digital well-being is not a new idea, but Google is using it to drive some of its initiatives. The Dashboard in Android P will help users understand where their time is going, and even encourage them to stop using apps that eat up too much of it. For those who want to disconnect more, that little bit of external force could make behavior change a lot easier. But it’s not just a person’s time and energy that Android P is going to help with – it’s also going to help extend the time their phone has energy. While monitoring which apps are used and when, Google will use ML to decide when to close or block apps running in the background, ultimately extending the battery life of Android devices.

Carrying on the theme of simplicity, Android P features a few new UI adjustments as well. For instance, many interactions now occur at the bottom of the screen, where they are easiest to reach with one hand. Other adjustments replace button presses with gestures and make icons contextual: the “back” button appears only when it can be used, and the rotation icon appears only when the phone is turned 90 degrees.

There are other products aside from Android P that received updates users will enjoy. For instance, Android TV got an overhauled setup process that cuts setup time by about a third, and thanks to ML, Android TV will also predict which settings users are looking for. Google also updated ARCore to version 1.2 and announced Cloud Anchors. As one of our Android Engineers, Chris Koeberle, put it, “Cloud Anchors were the missing element to make it easy to create immersive multiperson augmented reality experiences. Being able to create an AR app that allows people to not just experience but collectively modify a virtual world - on Android and iOS - is going to open up possibilities we can't even imagine yet.”

Closing out Day 1

“As a developer, I don’t focus on what something does, but what it enables me to do. Today Google enabled me to do a lot – specifically around Google Assistant, application development with Firebase and Machine Learning, web application quality with PWAs, and improvements to the Android ecosystem.”

Jonathan Campos, Senior Technical Architect

With this many announcements made on day one, it’s hard to imagine what Google has in store for the rest of the week. But, that’s why we have developers on the scene to tell us what they discover, as they discover it. Be sure to check in tomorrow to hear what they stumble upon next.

In the meantime, contact us for more information about the changes coming in Android P. As one of the select Google Certified Agencies, we are privy to detailed information, beta releases, and direct access to in-house Google developers – access that agencies without this certification don’t have. With so many changes on their way, it’s more important than ever to build out your digital roadmap, and we can help.

February 15, 2018

What You Need To Know About AI

Lately, when clients come to me as a consultant, Artificial Intelligence (AI) usually comes up in our conversation. And when I’m asked, “How do I use it?” that tends to actually mean “What is it?” Let’s reach a basic understanding of AI so that when you’re ready to explore what AI can offer, your discussion can be as productive as possible.

What Is Artificial Intelligence?

As the name implies, AI is the intelligent behavior of machines. Most companies could use AI to interpret complex data. Here’s how it works (in a very simplified way): you pose a question about a set of data to an AI model, and it returns an answer. To accomplish this task, the model needs to understand the data it’s interpreting. So, for AI to deliver accurate, useful information, the model needs to be trained on the kind of data it will be given. We’ll get into that soon, but first let’s talk about the data itself.

Learning From Data

For Artificial Intelligence to work, it needs to learn from specific kinds of data. With organized, not random, data, an AI platform can learn what it needs to. Let’s say you want to train an AI model to identify dogs in images. Organized data would consist of animal images, including dogs, to help the AI discern what is and what is not a dog. Random data (in this case, images of tables, lawnmowers, anything not reasonably close to our concept of a dog) doesn’t help the AI learn what separates dogs from everything else. Know what’s in your data and you should be able to avoid that randomness.

At this point, I should clarify that AI isn’t actually telling you definitively what something is – it’s telling you what something probably is. This is expressed as a prediction percentage: you’re not teaching an AI model to know an animal is a dog but rather training it to tell you how confident it is that it recognized a dog. If you’ve been feeding your AI model only images of dogs and cats and then introduce an image of a squirrel, your AI will be less certain of what it sees. But once you teach the model what a squirrel looks like, it can discern one with much more certainty.
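To make those percentages concrete, here is a tiny hypothetical sketch (the labels and the 75% threshold are ours, not from any particular product) of how an application might read a classifier’s output:

```kotlin
// Hypothetical classifier output: label -> confidence (0.0 to 1.0).
// The model never says "this is a dog"; it reports how confident it is.
fun describePrediction(scores: Map<String, Float>, threshold: Float = 0.75f): String {
    val (label, confidence) = scores.maxByOrNull { it.value } ?: return "no prediction"
    val percent = (confidence * 100).toInt()
    return if (confidence >= threshold) {
        "probably a $label ($percent% confident)"
    } else {
        "not sure – best guess is $label at $percent%"
    }
}

fun main() {
    // A squirrel image scored by a model trained only on dogs and cats:
    println(describePrediction(mapOf("dog" to 0.41f, "cat" to 0.38f)))
    // A clear dog photo:
    println(describePrediction(mapOf("dog" to 0.93f, "cat" to 0.05f)))
}
```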

How To Train Your AI

Machine learning, the ability of computers to learn without being explicitly programmed, can take place a couple of different ways. One is supervised learning, where an AI model infers a function from labeled training data. Those images of animals I mentioned earlier would be labeled “dog,” “cat,” “hippopotamus,” etc. to help the AI learn. The other method is unsupervised learning, where the machine draws inferences from data sets of input data without labeled responses – you provide a bunch of pictures of animals with no descriptions and let the AI figure out what is and what is not a dog on its own. With either approach, remember to provide organized data for your AI to learn from.
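In code terms, the only structural difference is whether each training example carries a label. A schematic sketch (the file names and the loadPixels stub are hypothetical):

```kotlin
// Schematic only: the types show the shape of the data, not a real
// training pipeline.

// Supervised learning: every example is paired with a known label.
data class LabeledImage(val pixels: ByteArray, val label: String)

val supervisedData = listOf(
    LabeledImage(loadPixels("dog1.jpg"), "dog"),
    LabeledImage(loadPixels("cat1.jpg"), "cat"),
    LabeledImage(loadPixels("hippo1.jpg"), "hippopotamus"),
)

// Unsupervised learning: raw examples with no labels; the model must
// discover the clusters (dog-like vs. not) on its own.
val unsupervisedData: List<ByteArray> = listOf(
    loadPixels("animal1.jpg"),
    loadPixels("animal2.jpg"),
)

// Hypothetical stub; a real app would decode the image file.
fun loadPixels(path: String): ByteArray = ByteArray(0)
```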

Going With Google

Google provides a lot of data sets and pre-trained AI models for purchase. But, they’re all about object recognition and will only do a good job of recognizing things that existed when the model was trained. So, a Google AI model may have learned what a bike and pogo stick are at some point, but a newer invention, like a Segway, could confuse it.

Beyond Cats And Dogs

I’ve led or been on teams that trained models for tasks other than image recognition – classifying audio samples, for example, or, during one innovation session, recognizing which restaurant was delivering food by identifying food service workers and what they were carrying. The latter is an excellent example of what can happen when you train very specific models: in this application, the AI model learned from uniforms, not beverage cups, cars or even people. The data set in this case included the shapes of what delivery drivers could be carrying and the clothes they wore.

The accuracy of your AI model is really about the bucket of data you give it. You need a lot of pictures, and they need to be of a similar quality. The more similar they are, the less training you likely need.

What Could Go Wrong?

Okay, our hypothetical AI is up and running. What is at stake if it’s wrong? Within your own organization, determine the impact of a false positive or a false negative. AI that determines what is and isn’t email spam can afford to let some spam through but creates real problems if it marks something important as spam. Now imagine the consequences of an AI reading X-rays and returning a false negative cancer diagnosis.

Now, if you feel more comfortable with how AI works, you can begin the challenging task of figuring out where it fits in your organization.

For more information about Artificial Intelligence and Google Assistant, download our Google Assistant POV.

Originally published on Forbes on Jan 8, 2018.

September 19, 2017

5 Big Ideas from our Product Owner’s Guide to the Universe at MWCA

We had a stellar program for “Mobile Product Owner’s Guide to the Universe” at Mobile World Congress Americas. Our speakers shared a ton of ideas during the all-day event; these are five of the most interesting:

1. Exponential Acceleration — and Convergence — of Lots of Tech (AI, AR, VR, MR, Computer Vision, Machine Learning, Mobile and More)

“We used to say disruption is the new normal,” said Tom Edwards, Chief Digital Officer at Epsilon, in his keynote address. “But now, I see this more as exponential acceleration. It’s more about consolidation and bundling of existing technologies.” With the rise of interconnected systems, marketers will need to keep up with customer expectations for seamless, intuitive, lightning-fast, “magical” experiences with technologies. Tom’s video dives deeper into the themes of his keynote.

2. Leaving Room for Innovation (and Making Sure Your Definition of Innovation is Helping Not Hurting You)

Organizations can’t stop everything to innovate – but they can’t afford to fall behind either. It’s important, said panelists Todd Stricker with Marriott, Scott Cuppari with Coca-Cola Freestyle, and Dorothy Jensen from Southwest Airlines, to leave bandwidth on your teams to experiment, ideate, and stay ahead of the game – even if many of the ideas never make it into production. They also advocated for carefully considering how your team defines innovation.

Todd Stricker with Marriott, Scott Cuppari with Coca-Cola Freestyle and Dorothy Jensen from Southwest Airlines at MWCA17

“We frequently define innovation as unlocking value we weren’t unlocking before,” said Stricker. “Re-defining innovation in those terms helps people think in the problem spaces we’re really attacking to unlock customer value. That helps break the paradigm that innovation has to be a massive new crazy thing. It can be at a micro-level, and super meaningful when you’re dogging customer problems and making things better for them.”

3. It’s Time to Revisit A Few Technologies You Might Have Written Off

Technologies you might have tried a few years ago have matured – write them off at your own risk. For example, AI and natural language processing have driven vast improvements in chatbots and voice assistants, as Vera Tzoneva, Global Product Partnerships for Google Assistant, demonstrated.

VR technology is also better and more immersive than it’s ever been, said Andy Mathis, Mobile Partnerships and Business Development Lead at Oculus. For brands that want to connect with customers through indelible, immersive experiences, VR is an avenue waiting to be explored. Red Bull’s VR hub lets you go cliff diving, fly a plane in the Red Bull air race and more, connecting with its adrenaline-fueled branding. TOMS’ in-store VR experience (see below) makes you an eyewitness and participant, bringing its “buy a pair, give a pair” brand promise to life for customers.

4. Creating a Continuous Stream of Crowd-Sourced Customer Feedback to Help Drive Your Product Roadmap

Getting more – and more balanced – customer feedback helps product and marketing teams act more quickly on better data about what customers want and need, said Rob Pace, CEO of HundredX, and helps bake a listening culture into your organization. That’s critical for ensuring your products and features align with what real customers really want – not just what your team thinks they want.

Rob Pace, CEO of HundredX at MWCA

5. Data-Fueled Context is Increasingly Critical for Personalized Marketing

“The internet of things is too focused on the things,” said Dimitri Maex, President of Sentiance. “It’s on its way to becoming the internet of you — and I believe that will happen through AI and data.”

Maex shared how — using movement, location and time data from mobile phones — it’s possible to learn an enormous amount about a user’s context (Are they walking, driving, boating? Are they near home, work or school? Where are they likely to be going next?) and customize their experience for 1:1 interactions fast and at scale.

The exclamation point at the end gives me cavities, but a period is too boring… Thank you to our speakers, everyone in attendance, and our super smart, super helpful sponsors who helped make it all happen!

Also a big thank you to Urban Airship for this amazing recap!
